--- draft: false title: "Qdrant and OVHcloud Bring Vector Search to All Enterprises" short_description: "Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy." description: "Collaborating to support startups and enterprises in Europe with a strong focus on data control and privacy." preview_image: /blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud.png date: 2024-04-10T00:05:00Z author: Qdrant featured: false weight: 1004 tags: - Qdrant - Vector Database --- With the official release of [Qdrant Hybrid Cloud](/hybrid-cloud/), businesses running their data infrastructure on [OVHcloud](https://ovhcloud.com/) are now able to deploy a fully managed vector database in their existing OVHcloud environment. We are excited about this partnership, which has been established through the [OVHcloud Open Trusted Cloud](https://opentrustedcloud.ovhcloud.com/en/) program, as it is based on our shared understanding of the importance of trust, control, and data privacy in the context of the emerging landscape of enterprise-grade AI applications. As part of this collaboration, we are also providing a detailed use case tutorial on building a recommendation system that demonstrates the benefits of running Qdrant Hybrid Cloud on OVHcloud. Deploying Qdrant Hybrid Cloud on OVHcloud's infrastructure represents a significant leap for European businesses invested in AI-driven projects, as this collaboration underscores the commitment to meeting the rigorous requirements for data privacy and control of European startups and enterprises building AI solutions. As businesses are progressing on their AI journey, they require dedicated solutions that allow them to make their data accessible for machine learning and AI projects, without having it leave the company's security perimeter. Prioritizing data sovereignty, a crucial aspect in today's digital landscape, will help startups and enterprises accelerate their AI agendas and build even more differentiating AI-enabled applications. The ability of running Qdrant Hybrid Cloud on OVHcloud not only underscores the commitment to innovative, secure AI solutions but also ensures that companies can navigate the complexities of AI and machine learning workloads with the flexibility and security required. > *“The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely.“* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud #### Qdrant & OVHcloud: High Performance Vector Search With Full Data Control Through the seamless integration between Qdrant Hybrid Cloud and OVHcloud, developers and businesses are able to deploy the fully managed vector database within their existing OVHcloud setups in minutes, enabling faster, more accurate AI-driven insights. - **Simple setup:** With the seamless “one-click” installation, developers are able to deploy Qdrant’s fully managed vector database to their existing OVHcloud environment. - **Trust and data sovereignty**: Deploying Qdrant Hybrid Cloud on OVHcloud enables developers with vector search that prioritizes data sovereignty, a crucial aspect in today's AI landscape where data privacy and control are essential. 
True to its “Sovereign by design” DNA, OVHcloud guarantees that all the data stored are immune to extraterritorial laws and comply with the highest security standards. - **Open standards and open ecosystem**: OVHcloud’s commitment to open standards and an open ecosystem not only facilitates the easy integration of Qdrant Hybrid Cloud with OVHcloud’s AI services and GPU-powered instances but also ensures compatibility with a wide range of external services and applications, enabling seamless data workflows across the modern AI stack. - **Cost-efficient vector search:** By leveraging Qdrant's quantization for efficient data handling and pairing it with OVHcloud's eco-friendly, water-cooled infrastructure, known for its superior price/performance ratio, this collaboration provides a strong foundation for cost-efficient vector search. #### Build a Recommendation System with Qdrant Hybrid Cloud and OVHcloud ![hybrid-cloud-ovhcloud-tutorial](/blog/hybrid-cloud-ovhcloud/hybrid-cloud-ovhcloud-tutorial.png) To show how Qdrant Hybrid Cloud deployed on OVHcloud allows developers to leverage the benefits of an AI use case that runs entirely within their existing infrastructure, we put together a comprehensive use case tutorial. This tutorial guides you through creating a recommendation system using collaborative filtering and sparse vectors with Qdrant Hybrid Cloud on OVHcloud. It employs the Movielens dataset for practical application, providing insights into building efficient, scalable recommendation engines suitable for developers and data scientists looking to leverage advanced vector search technologies within a secure, GDPR-compliant European cloud infrastructure. [Try the Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/) #### Get Started Today and Leverage the Benefits of Qdrant Hybrid Cloud Setting up Qdrant Hybrid Cloud on OVHcloud is straightforward and quick, thanks to the intuitive integration with Kubernetes. Here's how: - **Hybrid Cloud Activation**: Log into your Qdrant account and enable 'Hybrid Cloud'. - **Cluster Integration**: Add your OVHcloud Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud settings. - **Effortless Deployment**: Use the Qdrant Management Console for easy deployment and management of Qdrant clusters on OVHcloud. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-ovhcloud.md
--- draft: false title: Full-text filter and index are already available! slug: qdrant-introduces-full-text-filters-and-indexes short_description: Qdrant v0.10 introduced full-text filters description: Qdrant v0.10 introduced full-text filters and indexes to enable more search capabilities for those working with textual data. preview_image: /blog/from_cms/andrey.vasnetsov_black_hole_sucking_up_the_word_tag_cloud_f349586d-3e51-43c5-9e5e-92abf9a9e871.png date: 2022-11-16T09:53:05.860Z author: Kacper Łukawski featured: false tags: - Information Retrieval - Database - Open Source - Vector Search Database --- Qdrant is designed as an efficient vector database, allowing for a quick search of the nearest neighbours. But you may find yourself in need of applying some extra filtering on top of the semantic search. Up to version 0.10, Qdrant offered support for keyword filters only. Since 0.10, it is possible to apply full-text constraints as well. There is a new type of filter that you can use for that, and it can be combined with every other filter type. ## Using full-text filters without the payload index Full-text filters without an index created on a field will return only those entries which contain all the terms included in the query. That is effectively a substring match on each individual term, but **not a substring match on the whole query**. ![](/blog/from_cms/1_ek61_uvtyn89duqtmqqztq.webp "An example of how to search for “long_sleeves” in a “detail_desc” payload field.") ## Full-text search behaviour on an indexed payload field There are more options if you create a full-text index on the field you will filter by. ![](/blog/from_cms/1_pohx4eznqpgoxak6ppzypq.webp "Creating a full-text index on a payload field.") First and foremost, you can choose the tokenizer. It defines how Qdrant should split the text into tokens. There are three options available: * **word** — spaces, punctuation marks and special characters define the token boundaries * **whitespace** — token boundaries defined by whitespace characters * **prefix** — token boundaries are the same as for the “word” tokenizer, but in addition to that, there are prefixes created for every single token. As a result, “Qdrant” will be indexed as “Q”, “Qd”, “Qdr”, “Qdra”, “Qdran”, and “Qdrant”. There are also some additional parameters you can provide, such as: * **min_token_len** — minimal length of a token * **max_token_len** — maximal length of a token * **lowercase** — if set to *true*, then the index will be case-insensitive, as Qdrant will convert all the texts to lowercase ## Using text filters in practice ![](/blog/from_cms/1_pbtd2tzqtjqqlbi61r8czg.webp "An example of using a full-text filter in a search query.") The main difference between using full-text filters on an indexed vs a non-indexed field is the performance of such a query.
In a simple benchmark, performed on the [H&M dataset](https://www.kaggle.com/competitions/h-and-m-personalized-fashion-recommendations) (with over 105k examples), the average query time looks as follows (n=1000): ![](/blog/from_cms/screenshot_31.png) It is evident that creating a full-text index on a field that we will often filter by may lead to substantial performance gains without much effort.
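To make this concrete, here is a sketch of creating a full-text index and filtering with it using the current `qdrant-client` Python API, which may differ slightly from the 0.10-era syntax shown in the screenshots; the collection name, vector size and query vector are illustrative placeholders:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Build a full-text index on the "detail_desc" payload field
client.create_payload_index(
    collection_name="hm_products",  # hypothetical collection name
    field_name="detail_desc",
    field_schema=models.TextIndexParams(
        type=models.TextIndexType.TEXT,
        tokenizer=models.TokenizerType.WORD,
        min_token_len=2,
        max_token_len=20,
        lowercase=True,
    ),
)

# Combine semantic search with a full-text constraint:
# only points whose "detail_desc" contains all the query terms are returned
hits = client.search(
    collection_name="hm_products",
    query_vector=[0.0] * 512,  # placeholder query embedding
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="detail_desc",
                match=models.MatchText(text="long sleeves"),
            )
        ]
    ),
    limit=10,
)
```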
blog/full-text-filter-and-index-are-already-available.md
--- draft: false preview_image: /blog/from_cms/docarray.png sitemapExclude: true title: "Qdrant and Jina integration: storage backend support for DocArray" slug: qdrant-and-jina-integration short_description: "One more way to use Qdrant: Jina's DocArray now supports Qdrant as a storage backend." description: We are happy to announce that Jina.AI integrates the Qdrant engine as a storage backend for their DocArray solution. date: 2022-03-15T15:00:00+03:00 author: Alyona Kavyerina featured: false author_link: https://medium.com/@alyona.kavyerina tags: - jina integration - docarray categories: - News --- We are happy to announce that [Jina.AI](https://jina.ai/) integrates the Qdrant engine as a storage backend for their [DocArray](https://docarray.jina.ai/) solution. Now you can experience the convenience of a Pythonic API and the performance of Rust in a single workflow. The DocArray library defines a structure for unstructured data and simplifies processing collections of documents, including audio, video, text, and other data types. The Qdrant engine powers scalable vector search and storage for those documents. Read more about the integration in the [documentation](/documentation/install/#docarray).
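As a rough illustration, here is a minimal sketch of using DocArray with Qdrant as the storage backend, based on the DocArray v0.x API available at the time; the collection name, dimensionality and other config values are illustrative and may differ between versions:

```python
import numpy as np
from docarray import Document, DocumentArray

# DocumentArray backed by a running Qdrant instance
da = DocumentArray(
    storage="qdrant",
    config={
        "collection_name": "docs",  # hypothetical collection name
        "host": "localhost",
        "port": 6333,
        "n_dim": 128,               # dimensionality of the stored embeddings
    },
)

# Add documents with embeddings; they are persisted in Qdrant
da.extend(Document(embedding=np.random.random(128)) for _ in range(10))

# Nearest-neighbour search is delegated to Qdrant
matches = da.find(np.random.random(128), limit=3)
print(matches)
```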
blog/qdrant_and_jina_integration.md
--- title: "Qdrant Attains SOC 2 Type II Audit Report" draft: false slug: qdrant-soc2-type2-audit short_description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, and Confidentiality. description: We're proud to announce achieving SOC 2 Type II compliance for Security, Availability, and Confidentiality. preview_image: /blog/soc2-type2-report/soc2-preview.jpeg # social_preview_image: /blog/soc2-type2-report/soc2-preview.jpeg date: 2024-05-23T20:26:20-03:00 author: Sabrina Aquino featured: false tags: - soc2 - audit - security - confidentiality - data privacy - soc2 type 2 --- At Qdrant, we are happy to announce the successful completion of our SOC 2 Type II audit. This achievement underscores our unwavering commitment to upholding the highest standards of security, availability, and confidentiality for our services and our customers’ data. ## SOC 2 Type II: What Is It? A SOC 2 Type II report is an examination of an organization's controls in reference to the American Institute of Certified Public Accountants [(AICPA) Trust Services criteria](https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022). It evaluates not only our written policies but also their practical implementation, ensuring alignment between our stated objectives and operational practices. Unlike Type I, which is a snapshot in time, Type II verifies over several months that the company has lived up to those controls. The report represents a thorough audit of our security procedures throughout the examination period: January 1, 2024 to April 7, 2024. ## Key Audit Findings The audit confirmed, with no exceptions noted, the effectiveness of our systems and controls for the following Trust Services Criteria: * Security * Confidentiality * Availability These attestations are available today and automatically apply to your existing workloads. The full SOC 2 Type II report is available to customers and stakeholders upon request through the [Trust Center](https://app.drata.com/trust/9cbbb75b-0c38-11ee-865f-029d78a187d9). ## Future Compliance Going forward, Qdrant will maintain SOC 2 Type II compliance by conducting continuous, annual audits to ensure our security practices remain aligned with industry standards and evolving risks. Recognizing the critical importance of data security and the trust our clients place in us, achieving SOC 2 Type II compliance underscores our ongoing commitment to prioritizing data protection with the utmost integrity and reliability. ## About Qdrant Qdrant is a vector database designed to handle large-scale, high-dimensional data efficiently. It allows for fast and accurate similarity searches in complex datasets. Qdrant strives to achieve seamless and scalable vector search capabilities for various applications. For more information about Qdrant and our security practices, please visit our [website](http://qdrant.tech) or [reach out to our team directly](https://qdrant.tech/contact-us/).
blog/soc2-type2-report.md
--- draft: false title: Binary Quantization - Andrey Vasnetsov | Vector Space Talks slug: binary-quantization short_description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary quantization and its applications in vector indexing. description: Andrey Vasnetsov, CTO of Qdrant, discusses the concept of binary quantization and its benefits in vector indexing, including the challenges and potential future developments of this technique. preview_image: /blog/from_cms/andrey-vasnetsov-cropped.png date: 2024-01-09T10:30:10.952Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Binary Quantization - Qdrant --- > *"Everything changed when we actually tried binary quantization with OpenAI model.”*\ > -- Andrey Vasnetsov Ever wonder why we need quantization for vector indexes? Andrey Vasnetsov explains the complexities and challenges of searching through proximity graphs. Binary quantization reduces storage size and boosts speed by 30x, but not all models are compatible. Andrey worked as a Machine Learning Engineer most of his career. He prefers practical over theoretical, working demo over arXiv paper. He is currently working as the CTO at Qdrant a Vector Similarity Search Engine, which can be used for semantic search, similarity matching of text, images or even videos, and also recommendations. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7dPOm3x4rDBwSFkGZuwaMq?si=Ip77WCa_RCCYebeHX6DTMQ), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/4aUq5VnR_VI).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/4aUq5VnR_VI?si=CdT2OL-eQLEFjswr" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Binary-Quantization---Andrey-Vasnetsov--Vector-Space-Talk-001-e2bsa3m/a-aajrqfd" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## Top Takeaways: Discover how oversampling optimizes precision in real-time, enhancing the accuracy without altering stored data structures in our very first episode of the Vector Space Talks by Qdrant, with none other than the CTO of Qdrant, Andrey Vasnetsov. In this episode, Andrey shares invaluable insights into the world of binary quantization and its profound impact on Vector Space technology. 5 Keys to Learning from the Episode: 1. The necessity of quantization and the complex challenges it helps to overcome. 2. The transformative effects of binary quantization on processing speed and storage size reduction. 3. A detailed exploration of oversampling and its real-time precision control in query search. 4. Understanding the simplicity and effectiveness of binary quantization, especially when compared to more intricate quantization methods. 5. The ongoing research and potential impact of binary quantization on future models. > Fun Fact: Binary quantization can deliver processing speeds over 30 times faster than traditional quantization methods, which is a revolutionary advancement in Vector Space technology. 
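For readers who want a feel for the core idea before diving into the conversation, here is a rough, editorial sketch in plain NumPy (not Qdrant's internal SIMD implementation) of sign-based binarization and Hamming distance via XOR and popcount:

```python
import numpy as np

def binarize(v: np.ndarray) -> np.ndarray:
    # 1 where the element is positive, 0 otherwise
    return (v > 0).astype(np.uint8)

def hamming_distance(a_bits: np.ndarray, b_bits: np.ndarray) -> int:
    # XOR the packed bits, then count the set bits (popcount)
    return int(np.unpackbits(np.packbits(a_bits) ^ np.packbits(b_bits)).sum())

rng = np.random.default_rng(0)
a, b = rng.normal(size=1536), rng.normal(size=1536)

print("dot product:     ", float(a @ b))
print("hamming distance:", hamming_distance(binarize(a), binarize(b)))
```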
> ## Show Notes: 00:00 Overview of HNSW vector index.\ 03:57 Efficient storage needed for large vector sizes.\ 07:49 Oversampling controls precision in real-time search.\ 12:21 Comparison of vectors using dot production.\ 15:20 Experimenting with models, OpenAI has compatibility.\ 18:29 Qdrant architecture doesn't support removing original vectors. ## More Quotes from Andrey: *"Inside Qdrant we use HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors.”*\ -- Andrey Vasnetsov *"The main idea is that we convert the float point elements of the vector into binary representation. So, it's either zero or one, depending if the original element is positive or negative.”*\ -- Andrey Vasnetsov *"We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI.”*\ -- Andrey Vasnetsov ## Transcript: Demetrios: Okay, welcome everyone. This is the first and inaugural vector space talks, and who better to kick it off than the CTO of Qdrant himself? Andrey V. Happy to introduce you and hear all about this binary quantization that you're going to be talking about. I've got some questions for you, and I know there are some questions that came through in the chat. And the funny thing about this is that we recorded it live on Discord yesterday. But the thing about Discord is you cannot trust the recordings on there. And so we only got the audio and we wanted to make this more visual for those of you that are watching on YouTube. Hence here we are recording it again. Demetrios: And so I'll lead us through some questions for you, Andrey. And I have one thing that I ask everyone who is listening to this, and that is if you want to give a talk and you want to showcase either how you're using Qdrant, how you've built a rag, how you have different features or challenges that you've overcome with your AI, landscape or ecosystem or stack that you've set up, please reach out to myself and I will get you on here and we can showcase what you've done and you can give a talk for the vector space talk. So without further ado, let's jump into this, Andrey, we're talking about binary quantization, but let's maybe start a step back. Why do we need any quantization at all? Why not just use original vectors? Andrey Vasnetsov: Yep. Hello, everyone. Hello Demetrios. And it's a good question, and I think in order to answer it, I need to first give a short overview of what is vector index, how it works and what challenges it possess. So, inside Qdrant we use so called HNSW vector Index, which is essentially a proximity graph. You can imagine it as a number of vertices where each vertex is representing one vector and links between those vertices representing nearest neighbors. So in order to search through this graph, what you actually need to do is do a greedy deep depth first search, and you can tune the precision of your search with the beam size of the greedy search process. But this structure of the index actually has its own challenges and first of all, its index building complexity. Andrey Vasnetsov: Inserting one vector into the index is as complicated as searching for one vector in the graph. And the graph structure overall have also its own limitations. It requires a lot of random reads where you can go in any direction. It's not easy to predict which path the graph will take. 
The search process will take in advance. So unlike traditional indexes in traditional databases, like binary trees, like inverted indexes, where we can pretty much serialize everything. In HNSW it's always random reads and it's actually always sequential reads, because you need to go from one vertex to another in a sequential manner. And this actually creates a very strict requirement for underlying storage of vectors. Andrey Vasnetsov: It had to have a very low latency and it have to support this randomly spatter. So basically we can only do it efficiently if we store all the vectors either in very fast solid state disks or if we use actual RAM to store everything. And RAM is not cheap these days, especially considering that the size of vectors increases with each new version of the model. And for example, OpenAI model is already more than 1000 dimensions. So you can imagine one vector is already 6 data, no matter how long your text is, and it's just becoming more and more expensive with the advancements of new models and so on. So in order to actually fight this, in order to compensate for the growth of data requirement, what we propose to do, and what we already did with different other quantization techniques is we actually compress vectors into quantized vector storage, which is usually much more compact for the in memory representation. For example, on one of the previous releases we have scalar quantization and product quantization, which can compress up to 64 times the size of the vector. And we only keep in fast storage these compressed vectors. Andrey Vasnetsov: We retrieve them and get a list of candidates which will later rescore using the original vectors. And the benefit here is this reordering or rescoring process actually doesn't require any kind of sequential or random access to data, because we already know all the IDs we need to rescore, and we can efficiently read it from the disk using asynchronous I O, for example, and even leverage the advantage of very cheap network mounted disks. And that's the main benefit of quantization. Demetrios: I have a few questions off the back of this one, being just a quick thing, and I'm wondering if we can double benefit by using this binary quantization, but also if we're using smaller models that aren't the GBTs, will that help? Andrey Vasnetsov: Right. So not all models are as big as OpenAI, but what we see, the trend in this area, the trend of development of different models, indicates that they will become bigger and bigger over time. Just because we want to store more information inside vectors, we want to have larger context, we want to have more detailed information, more detailed separation and so on. This trend is obvious if like five years ago the usual size of the vector was 100 dimensions now the usual size is 700 dimensions, so it's basically. Demetrios: Preparing for the future while also optimizing for today. Andrey Vasnetsov: Right? Demetrios: Yeah. Okay, so you mentioned on here oversampling. Can you go into that a little bit more and explain to me what that is? Andrey Vasnetsov: Yeah, so oversampling is a special technique we use to control precision of the search in real time, in query time. And the thing is, we can internally retrieve from quantized storage a bit more vectors than we actually need. And when we do rescoring with original vectors, we assign more precise score. And therefore from this overselection, we can pick only those vectors which are actually good for the user. 
And that's how we can basically control accuracy without rebuilding index, without changing any kind of parameters inside the stored data structures. But we can do it real time in just one parameter change of the search query itself. Demetrios: I see, okay, so basically this is the quantization. And now let's dive into the binary quantization and how it works. Andrey Vasnetsov: Right, so binary quantization is actually very simple. The main idea that we convert the float point elements of the vector into binary representation. So it's either zero or one, depending if the original element is positive or negative. And by doing this we can approximate dot production or cosine similarity, whatever metric you use to compare vectors with just hemming distance, and hemming distance is turned to be very simple to compute. It uses only two most optimized CPU instructions ever. It's Pixor and Popcount. Instead of complicated float point subprocessor, you only need those tool. It works with any register you have, and it's very fast. Andrey Vasnetsov: It uses very few CPU cycles to actually produce a result. That's why binary quantization is over 30 times faster than regular product. And it actually solves the problem of complicated index building, because this computation of dot products is the main source of computational requirements for HNSW. Demetrios: So if I'm understanding this correctly, it's basically taking all of these numbers that are on the left, which can be, yes, decimal numbers. Andrey Vasnetsov: On the left you can see original vector and it converts it in binary representation. And of course it does lose a lot of precision in the process. But because first we have very large vector and second, we have oversampling feature, we can compensate for this loss of accuracy and still have benefit in both speed and the size of the storage. Demetrios: So if I'm understanding this correctly, it's basically saying binary quantization on its own probably isn't the best thing that you would want to do. But since you have these other features that will help counterbalance the loss in accuracy. You get the speed from the binary quantization and you get the accuracy from these other features. Andrey Vasnetsov: Right. So the speed boost is so overwhelming that it doesn't really matter how much over sampling is going to be, we will still benefit from that. Demetrios: Yeah. And how much faster is it? You said that, what, over 30 times faster? Andrey Vasnetsov: Over 30 times and some benchmarks is about 40 times faster. Demetrios: Wow. Yeah, that's huge. And so then on the bottom here you have dot product versus hammering distance. And then there's. Yeah, hamming. Sorry, I'm inventing words over here on your slide. Can you explain what's going on there? Andrey Vasnetsov: Right, so dot production is the metric we usually use in comparing a pair of vectors. It's basically the same as cosine similarity, but this normalization on top. So internally, both cosine and dot production actually doing only dot production, that's usual metric we use. And in order to do this operation, we first need to multiply each pair of elements to the same element of the other vector and then add all these multiplications in one number. It's going to be our score instead of this in binary quantization, in binary vector, we do XOR operation and then count number of ones. So basically, Hemming distance is an approximation of dot production in this binary space. Demetrios: Excellent. Okay, so then it looks simple enough, right? 
Why are you implementing it now after much more complicated product quantization? Andrey Vasnetsov: It's actually a great question. And the answer to this is binary questization looked too simple to be true, too good to be true. And we thought like this, we tried different things with open source models that didn't work really well. But everything changed when we actually tried binary quantization with OpenAI model. And it turned out that OpenAI model has very good compatibility with this type of quantization. Unfortunately, not every model have as good compatibility as OpenAI. And to be honest, it's not yet absolutely clear for us what makes models compatible and whatnot. We do know that it correlates with number of dimensions, but it is not the only factor. Andrey Vasnetsov: So there is some secret source which exists and we should find it, which should enable models to be compatible with binary quantization. And I think it's actually a future of this space because the benefits of this hemming distance benefits of binary quantization is so great that it makes sense to incorporate these tricks on the learning process of the model to make them more compatible. Demetrios: Well, you mentioned that OpenAI's model is one that obviously works well with binary quantization, but there are models that don't work well with it, which models have not been very good. Andrey Vasnetsov: So right now we are in the process of experimenting with different models. We tried most popular open source models, and unfortunately they are not as good compatible with binary quantization as OpenAI. We also tried different closed source models, for example Cohere AI, which is on the same level of compatibility with binary quantization as OpenAI, but they actually have much larger dimensionality. So instead of 1500 they have 4000. And it's not yet clear if only dimensionality makes this model compatible. Or there is something else in training process, but there are open source models which are getting close to OpenAI 1000 dimensions, but they are not nearly as good as Openi in terms of this compression compatibility. Demetrios: So let that be something that hopefully the community can help us figure out. Why is it that this works incredibly well with these closed source models, but not with the open source models? Maybe there is something that we're missing there. Andrey Vasnetsov: Not all closed source models are compatible as well, so some of them work similar as open source, but a few works well. Demetrios: Interesting. Okay, so is there a plan to implement other quantization methods, like four bit quantization or even compressing two floats into one bit? Andrey Vasnetsov: Right, so our choice of quantization is mostly defined by available CPU instructions we can apply to perform those computations. In case of binary quantization, it's straightforward and very simple. That's why we like binary quantization so much. In case of, for example, four bit quantization, it is not as clear which operation we should use. It's not yet clear. Would it be efficient to convert into four bits and then apply multiplication of four bits? So this would require additional investigation, and I cannot say that we have immediate plans to do so because still the binary quincellation field is not yet explored on 100% and we think it's a lot more potential with this than currently unlocked. 
Demetrios: Yeah, there's some low hanging fruits still on the binary quantization field, so tackle those first and then move your way over to four bit and all that fun stuff. Last question that I've got for you is can we remove original vectors and only keep quantized ones in order to save disk space? Andrey Vasnetsov: Right? So unfortunately Qdrant architecture is not designed and not expecting this type of behavior for several reasons. First of all, removing of the original vectors will compromise some features like oversampling, like segment building. And actually removing of those original vectors will only be compatible with some types of quantization for example, it won't be compatible with scalar quantization because in this case we won't be able to rebuild index to do maintenance of the system. And in order to maintain, how would you say, consistency of the API, consistency of the engine, we decided to enforce always enforced storing of the original vectors. But the good news is that you can always keep original vectors on just disk storage. It's very cheap. Usually it's ten times or even more times cheaper than RAM, and it already gives you great advantage in terms of price. That's answer excellent. Demetrios: Well man, I think that's about it from this end, and it feels like it's a perfect spot to end it. As I mentioned before, if anyone wants to come and present at our vector space talks, we're going to be doing these, hopefully biweekly, maybe weekly, if we can find enough people. And so this is an open invitation for you, and if you come present, I promise I will send you some swag. That is my promise to you. And if you're listening after the fact and you have any questions, come into discord on the Qdrant. Discord. And ask myself or Andrey any of the questions that you may have as you're listening to this talk about binary quantization. We will catch you all later. Demetrios: See ya, have a great day. Take care.
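For those who want to try the ideas from the talk, here is a hedged sketch of enabling binary quantization with oversampling and rescoring via the Qdrant Python client. The collection name, vector size, and query vector are illustrative placeholders, and the parameter names follow the current public API, which post-dates the talk:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Keep original float vectors on disk, binary-quantized copies in RAM
client.create_collection(
    collection_name="openai_embeddings",  # hypothetical collection name
    vectors_config=models.VectorParams(
        size=1536,
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

# Oversample candidates from the binary index, then rescore with the original vectors
hits = client.search(
    collection_name="openai_embeddings",
    query_vector=[0.0] * 1536,  # placeholder query embedding
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            rescore=True,
            oversampling=2.0,
        )
    ),
)
```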
blog/binary-quantization-andrey-vasnetsov-vector-space-talk-001.md
--- draft: true preview_image: /blog/from_cms/new-cmp-demo.gif sitemapExclude: true title: "Introducing Quaterion: a framework for fine-tuning similarity learning models" slug: quaterion short_description: Please meet Quaterion—a framework for training and fine-tuning similarity learning models. description: We're happy to share the result of the work we've been doing over the last months - Quaterion. It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and more cost-efficient. date: 2022-06-28T12:48:36.622Z author: Andrey Vasnetsov featured: true author_link: https://www.linkedin.com/in/andrey-vasnetsov-75268897/ tags: - Corporate news - Release - Quaterion - PyTorch categories: - News - Release - Quaterion --- We're happy to share the result of the work we've been doing over the last months - [Quaterion](https://quaterion.qdrant.tech/). It is a framework for fine-tuning similarity learning models that streamlines the training process to make it significantly faster and more cost-efficient. To develop Quaterion, we utilized PyTorch Lightning, leveraging its high-performing approach to constructing training loops for ML models. ![quaterion](/blog/from_cms/new-cmp-demo.gif) This framework empowers vector search [solutions](/solutions/), such as semantic search, anomaly detection, and others, with an advanced caching mechanism, specially designed head layers for pre-trained models, and high flexibility in customization for large-scale training pipelines. Here you can read why similarity learning is preferable to the traditional machine learning approach and how Quaterion can help: <https://quaterion.qdrant.tech/getting_started/why_quaterion.html#why-quaterion> A quick start with Quaterion: <https://quaterion.qdrant.tech/getting_started/quick_start.html> Try it and give us a star on GitHub :) <https://github.com/qdrant/quaterion>
blog/introducing-the-quaterion-a-framework-for-fine-tuning-similarity-learning-models.md
--- draft: true title: "OCI and Qdrant Hybrid Cloud for Maximum Data Sovereignty" short_description: "Qdrant Hybrid Cloud is now available for OCI customers as a managed vector search engine for data-sensitive AI apps." description: "Qdrant Hybrid Cloud is now available for OCI customers as a managed vector search engine for data-sensitive AI apps." preview_image: /blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure.png date: 2024-04-11T00:03:00Z author: Qdrant featured: false weight: 1005 tags: - Qdrant - Vector Database --- Qdrant and [Oracle Cloud Infrastructure (OCI) Cloud Engineering](https://www.oracle.com/cloud/) are thrilled to announce the ability to deploy [Qdrant Hybrid Cloud](/hybrid-cloud/) as a managed service on OCI. This marks the next step in the collaboration between Qdrant and Oracle Cloud Infrastructure, which will enable enterprises to realize the benefits of artificial intelligence powered through scalable vector search. In 2023, OCI added Qdrant to its [Oracle Cloud Infrastructure solution portfolio](https://blogs.oracle.com/cloud-infrastructure/post/vecto-database-qdrant-support-oci-kubernetes). Qdrant Hybrid Cloud is the managed service of the Qdrant vector search engine that can be deployed and run in any existing OCI environment, allowing enterprises to run fully managed vector search workloads in their existing infrastructure. This is a milestone for leveraging a managed vector search engine for data-sensitive AI applications. In the past years, enterprises have been actively engaged in exploring AI applications to enhance their products and services or unlock internal company knowledge to drive the productivity of teams. These applications range from generative AI use cases, for example, powered by retrieval augmented generation (RAG), recommendation systems, or advanced enterprise search through semantic, similarity, or neural search. As these vector search applications continue to evolve and grow with respect to dimensionality and complexity, it will be increasingly relevant to have a scalable, manageable vector search engine, also called out by Gartner’s 2024 Impact Radar. In addition to scalability, enterprises also require flexibility in deployment options to be able to maximize the use of these new AI tools within their existing environment, ensuring interoperability and full control over their data. > *"We are excited to partner with Qdrant to bring their powerful vector search capabilities to Oracle Cloud Infrastructure. By offering Qdrant Hybrid Cloud as a managed service on OCI, we are empowering enterprises to harness the full potential of AI-driven applications while maintaining complete control over their data. This collaboration represents a significant step forward in making scalable vector search accessible and manageable for businesses across various industries, enabling them to drive innovation, enhance productivity, and unlock valuable insights from their data."* Dr. Sanjay Basu, Senior Director of Cloud Engineering, AI/GPU Infrastructure at Oracle. #### How Qdrant and OCI Support Enterprises in Unlocking Value Through AI Deploying Qdrant Hybrid Cloud on OCI facilitates vector search in production environments without altering existing setups, ideal for enterprises and developers leveraging OCI's services. 
Key benefits include: - **Seamless Deployment:** Qdrant Hybrid Cloud's Kubernetes-native architecture allows you to simply connect your OCI cluster as a Hybrid Cloud Environment and deploy Qdrant with a one-step installation, ensuring a smooth and scalable setup. - **Seamless Integration with OCI Services:** The integration facilitates efficient resource utilization and enhances security provisions by leveraging OCI's comprehensive suite of services. - **Simplified Cluster Management**: Qdrant's central cluster management allows you to scale your cluster on OCI (vertically and horizontally) and supports seamless zero-downtime upgrades and disaster recovery. - **Control and Data Privacy**: Deploying Qdrant on OCI ensures complete data isolation, while providing the benefits of fully managed cluster management. #### Qdrant on OCI in Action: Building a RAG System for AI-Enabled Support ![hybrid-cloud-oracle-cloud-infrastructure-tutorial](/blog/hybrid-cloud-oracle-cloud-infrastructure/hybrid-cloud-oracle-cloud-infrastructure-tutorial.png) We created a comprehensive tutorial to show how to leverage the benefits of Qdrant Hybrid Cloud on OCI and build AI applications with a focus on data sovereignty. This use case is focused on building a RAG system for FAQs, leveraging the strengths of Qdrant Hybrid Cloud for vector search, Oracle Cloud Infrastructure (OCI) as a managed Kubernetes provider, Cohere models for embeddings, and LangChain as a framework. [Try the Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) Deploying Qdrant Hybrid Cloud on Oracle Cloud Infrastructure only takes a few minutes due to the seamless Kubernetes-native integration. You can get started by following these three steps: 1. **Hybrid Cloud Activation**: Start by signing in to your [Qdrant Cloud account](https://qdrant.to/cloud) and activate **Hybrid Cloud**. 2. **Cluster Integration**: In the Hybrid Cloud section, add your OCI Kubernetes clusters as a Hybrid Cloud Environment. 3. **Effortless Deployment**: Use the Qdrant Management Console to seamlessly create and manage your Qdrant clusters on OCI. You can find a detailed description in our documentation focused on deploying Qdrant on OCI. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-oracle-cloud-infrastructure.md
--- title: "Intel’s New CPU Powers Faster Vector Search" draft: false slug: qdrant-cpu-intel-benchmark short_description: "New generation silicon is a game-changer for AI/ML applications." description: "Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space. " preview_image: /blog/qdrant-cpu-intel-benchmark/social_preview.jpg social_preview_image: /blog/qdrant-cpu-intel-benchmark/social_preview.jpg date: 2024-05-10T00:00:00-08:00 author: David Myriel, Kumar Shivendu featured: true tags: - vector search - intel benchmark - next gen cpu - vector database --- #### New generation silicon is a game-changer for AI/ML applications ![qdrant cpu intel benchmark report](/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark.png) > *Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space.* Vector search is surging in popularity with institutional customers, and Intel is ready to support the emerging industry. Their latest generation CPU performed exceptionally with Qdrant, a leading vector database used for enterprise AI applications. Intel just released the latest Xeon processor (**codename: Emerald Rapids**) for data centers, a market which is expected to grow to $45 billion. Emerald Rapids offers higher-performance computing and significant energy efficiency over previous generations. Compared to the 4th generation Sapphire Rapids, Emerald boosts AI inference performance by up to 42% and makes vector search 38% faster. ## The CPU of choice for vector database operations The latest generation CPU performed exceptionally in tests carried out by Qdrant’s R&D division. Intel’s CPU was stress-tested for query speed, database latency and vector upload time against massive-scale datasets. Results showed that machines with 32 cores were 1.38x faster at running queries than their previous generation counterparts. In this range, Qdrant’s latency also dropped 2.79x when compared to Sapphire. Qdrant strongly recommends the use of Intel’s next-gen chips in the 8-64 core range. In addition to being a practical number of cores for most machines in the cloud, this compute capacity will yield the best results with mass-market use cases. The CPU affects vector search by influencing the speed and efficiency of mathematical computations. As of recently, companies have started using GPUs to carry large workloads in AI model training and inference. However, for vector search purposes, studies show that CPU architecture is a great fit because it can handle concurrent requests with great ease. > *“Vector search is optimized for CPUs. Intel’s new CPU brings even more performance improvement and makes vector operations blazing fast for AI applications. Customers should consider deploying more CPUs instead of GPU compute power to achieve best performance results and reduce costs simultaneously.”* > > - André Zayarni, Qdrant CEO ## **Why does vector search matter?** ![qdrant cpu intel benchmark report](/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark-future.png) Vector search engines empower AI to look deeper into stored data and retrieve strong relevant responses. Qdrant’s vector database is key to modern information retrieval and machine learning systems. Those looking to run massive-scale Retrieval Augmented Generation (RAG) solutions need to leverage such semantic search engines in order to generate the best results with their AI products. 
Qdrant is purpose-built to enable developers to store and search for high-dimensional vectors efficiently. It easily integrates with a host of AI/ML tools: Large Language Models (LLM), frameworks such as LangChain, LlamaIndex or Haystack, and service providers like Cohere, OpenAI, and Ollama. ## Supporting enterprise-scale AI/ML The market is preparing for a host of artificial intelligence and machine learning cases, pushing compute to the forefront of the innovation race. The main strength of a vector database like Qdrant is that it can consistently support the user way past the prototyping and launch phases. Qdrant’s product is already being used by large enterprises with billions of data points. Such users can go from testing to production almost instantly. Those looking to host large applications might only need up to 18GB RAM to support 1 million OpenAI Vectors. This makes Qdrant the best option for maximizing resource usage and data connection. Intel’s latest development is crucial to the future of vector databases. Vector search operations are very CPU-intensive. Therefore, Qdrant relies on the innovations made by chip makers like Intel to offer large-scale support. > *“Vector databases are a mainstay in today’s AI/ML toolchain, powering the latest generation of RAG and other Gen AI Applications. In teaming with Qdrant, Intel is helping enterprises deliver cutting-edge Gen-AI solutions and maximize their ROI by leveraging Qdrant’s high-performant and cost-efficient vector similarity search capabilities running on latest Intel Architecture based infrastructure across deployment models.”* > > - Arijit Bandyopadhyay, CTO - Enterprise Analytics & AI, Head of Strategy – Cloud and Enterprise, CSV Group, Intel Corporation ## Advancing vector search and the role of next-gen CPUs Looking ahead, the vector database market is on the cusp of significant growth, particularly for the enterprise market. Developments in CPU technologies, such as those from Intel, are expected to enhance vector search operations by 1) improving processing speeds and 2) boosting retrieval efficiency and quality. This will allow enterprise users to easily manage large and more complex datasets and introduce AI on a global scale. As large companies continue to integrate sophisticated AI and machine learning tools, the reliance on robust vector databases is going to increase. This evolution in the market underscores the importance of continuous hardware innovation in meeting the expanding demands of data-intensive applications, with Intel's contributions playing a notable role in shaping the future of enterprise-scale AI/ML solutions. ## Next steps Qdrant is open source and offers a complete SaaS solution, hosted on AWS, GCP, and Azure. Getting started is easy, either spin up a [container image](https://hub.docker.com/r/qdrant/qdrant) or start a [free Cloud instance](https://cloud.qdrant.io/login). The documentation covers [adding the data](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [creating your indices](/documentation/tutorials/optimize/). We would love to hear about what you are building and please connect with our engineering team on [Github](https://github.com/qdrant/qdrant), [Discord](https://discord.com/invite/tdtYvXjC4h), or [LinkedIn](https://www.linkedin.com/company/qdrant).
blog/qdrant-cpu-intel-benchmark.md
--- title: "Response to CVE-2024-3829: Arbitrary file upload vulnerability" draft: false slug: cve-2024-3829-response short_description: Qdrant keeps your systems secure description: Upgrade your deployments to at least v1.9.0. Cloud deployments not materially affected. preview_image: /blog/cve-2024-3829-response/cve-2024-3829-response-social-preview.png # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-06-10T17:00:00Z author: Mac Chaffee featured: false tags: - cve - security weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ### Summary A security vulnerability has been discovered in Qdrant affecting all versions prior to v1.9, described in [CVE-2024-3829](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-3829). The vulnerability allows an attacker to upload arbitrary files to the filesystem, which can be used to gain remote code execution. This is a different but similar vulnerability to CVE-2024-2221, announced in April 2024. The vulnerability does not materially affect Qdrant cloud deployments, as that filesystem is read-only and authentication is enabled by default. At worst, the vulnerability could be used by an authenticated user to crash a cluster, which is already possible, such as by uploading more vectors than can fit in RAM. Qdrant has addressed the vulnerability in v1.9.0 and above with code that restricts file uploads to a folder dedicated to that purpose. ### Action Check the current version of your Qdrant deployment. Upgrade if your deployment is not at least v1.9.0. To confirm the version of your Qdrant deployment in the cloud or on your local or cloud system, run an API GET call, as described in the [Qdrant Quickstart guide](https://qdrant.tech/documentation/cloud/quickstart-cloud/#step-2-test-cluster-access). If your Qdrant deployment is local, you do not need an API key. Your next step depends on how you installed Qdrant. For details, read the [Qdrant Installation](https://qdrant.tech/documentation/guides/installation/) guide. #### If you use the Qdrant container or binary Upgrade your deployment. Run the commands in the applicable section of the [Qdrant Installation](https://qdrant.tech/documentation/guides/installation/) guide. The default commands automatically pull the latest version of Qdrant. #### If you use the Qdrant helm chart If you’ve set up Qdrant on kubernetes using a helm chart, follow the README in the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main?tab=readme-ov-file#upgrading) repository. Make sure applicable configuration files point to version v1.9.0 or above. #### If you use the Qdrant cloud No action is required. This vulnerability does not materially affect you. However, we suggest that you upgrade your cloud deployment to the latest version.
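For reference, here is a minimal sketch of checking the running version from Python; the host and API key are placeholders, and local unauthenticated deployments can omit the header:

```python
import requests

# The root endpoint reports the running Qdrant version
response = requests.get(
    "https://YOUR-QDRANT-HOST:6333/",     # replace with your deployment URL
    headers={"api-key": "YOUR-API-KEY"},  # omit for unauthenticated local deployments
    timeout=10,
)
print(response.json())  # expect a "version" field of at least "1.9.0"
```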
blog/cve-2024-3829-response.md
--- draft: false title: "FastEmbed: Fast & Lightweight Embedding Generation - Nirant Kasliwal | Vector Space Talks" slug: fast-embed-models short_description: Nirant Kasliwal, AI Engineer at Qdrant, discusses the power and potential of embedding models. description: Nirant Kasliwal discusses the efficiency and optimization techniques of FastEmbed, a Python library designed for speedy, lightweight embedding generation in machine learning applications. preview_image: /blog/from_cms/nirant-kasliwal-cropped.png date: 2024-01-09T11:38:59.693Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Quantized Emdedding Models - FastEmbed --- > *"When things are actually similar or how we define similarity. They are close to each other and if they are not, they're far from each other. This is what a model or embedding model tries to do.”*\ >-- Nirant Kasliwal Heard about FastEmbed? It's a game-changer. Nirant shares tricks on how to improve your embedding models. You might want to give it a shot! Nirant Kasliwal, the creator and maintainer of FastEmbed, has made notable contributions to the Finetuning Cookbook at OpenAI Cookbook. His contributions extend to the field of Natural Language Processing (NLP), with over 5,000 copies of the NLP book sold. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4QWCyu28SlURZfS2qCeGKf?si=GDHxoOSQQ_W_UVz4IzzC_A), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/e67jLAx_F2A).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/e67jLAx_F2A?si=533LvUwRKIt_qWWu" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/FastEmbed-Fast--Lightweight-Embedding-Generation---Nirant-Kasliwal--Vector-Space-Talks-004-e2c8s3b/a-aal40k6" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Nirant Kasliwal, AI Engineer at Qdrant joins us on Vector Space Talks to dive into FastEmbed, a lightning-quick method for generating embeddings. In this episode, Nirant shares insights, tips, and innovative ways to enhance embedding generation. 5 Keys to Learning from the Episode: 1. Nirant introduces some hacker tricks for improving embedding models - you won't want to miss these! 2. Learn how quantized embedding models can enhance CPU performance. 3. Get an insight into future plans for GPU-friendly quantized models. 4. Understand how to select default models in Qdrant based on MTEB benchmark, and how to calibrate them for domain-specific tasks. 5. Find out how Fast Embed, a Python library created by Nirant, can solve common challenges in embedding creation and enhance the speed and efficiency of your workloads. > Fun Fact: The largest header or adapter used in production is only about 400-500 KBs -- proof that bigger doesn't always mean better! > ## Show Notes: 00:00 Nirant discusses FastEmbed at Vector Space Talks.\ 05:00 Tokens are expensive and slow in open air.\ 08:40 FastEmbed is fast and lightweight.\ 09:49 Supporting multimodal embedding is our plan.\ 15:21 No findings. Enhancing model downloads and performance.\ 16:59 Embed creation on your own compute, not cloud. 
Control and simplicity are prioritized.\ 21:06 Qdrant is fast for embedding similarity search.\ 24:07 Engineer's mindset: make informed guesses, set budgets.\ 26:11 Optimize embeddings with questions and linear layers.\ 29:55 Fast, cheap inference using mixed precision embeddings. ## More Quotes from Nirant: *"There is the academic way of looking at and then there is the engineer way of looking at it, and then there is the hacker way of looking at it. And I will give you all these three answers in that order.”*\ -- Nirant Kasliwal *"The engineer's mindset now tells you that the best way to build something is to make an informed guess about what workload or challenges you're going to foresee. Right. Like a civil engineer builds a bridge around how many cars they expect, they're obviously not going to build a bridge to carry a shipload, for instance, or a plane load, which are very different.”*\ -- Nirant Kasliwal *"I think the more correct way to look at it is that we use the CPU better.”*\ -- Nirant Kasliwal ## Transcript: Demetrios: Welcome back, everyone, to another vector space talks. Today we've got my man Nirant coming to us talking about FastEmbed. For those, if this is your first time at our vector space talks, we like to showcase some of the cool stuff that the community in Qdrant is doing, the Qdrant community is doing. And we also like to show off some of the cool stuff that Qdrant itself is coming out with. And this is one of those times that we are showing off what Qdrant itself came out with with FastEmbed. And we've got my man Nirant around here somewhere. I am going to bring him on stage and I will welcome him by saying Nirant a little bit about his bio, we could say. So, Naran, what's going on, dude? Let me introduce you real fast before we get cracking. Demetrios: And you are a man that wears many hats. You're currently working on the Devrel team at Qdrant, right? I like that shirt that you got there. And you have worked with ML models and embeddings since 2017. That is wild. You are also the creator and maintainer of fast embed. So you're the perfect guy to talk to about this very topic that we are doing today. Now, if anyone has questions, feel free to throw them into the chat and I will ask Nirant as he's going through it. I will also take this moment to encourage anyone who is watching to come and join us in discord, if you are not already there for the Qdrant discord. Demetrios: And secondly, I will encourage you if you have something that you've been doing with Qdrant or in the vector database space, or in the AI application space and you want to show it off, we would love to have you talk at the vector space talks. So without further ado, Nirant, my man, I'm going to kick it over to you and I am going to start it off with what are the challenges with embedding creation today? Nirant Kasliwal: I think embedding creation has it's not a standalone problem, as you might first think like that's a first thought that it's a standalone problem. It's actually two problems. One is a classic compute that how do you take any media? So you can make embeddings from practically any form of media, text, images, video. In theory, you could make it from bunch of things. So I recently saw somebody use soup as a metaphor. So you can make soup from almost anything. So you can make embeddings from almost anything. Now, what do we want to do though? Embedding are ultimately a form of compression. 
Nirant Kasliwal: So now we want to make sure that the compression captures something of interest to us. In this case, we want to make sure that embeddings capture some form of meaning of, let's say, text or images. And when we do that, what does that capture mean? We want that when things are actually similar or whatever is our definition of similarity. They are close to each other and if they are not, they're far from each other. This is what a model or embedding model tries to do basically in this piece. The model itself is quite often trained and built in a way which retains its ability to learn new things. And you can separate similar embeddings faster and all of those. But when we actually use this in production, we don't need all of those capabilities, we don't need the train time capabilities. Nirant Kasliwal: And that means that all the extra compute and features and everything that you have stored for training time are wasted in production. So that's almost like saying that every time I have to speak to you I start over with hello, I'm Nirant and I'm a human being. It's extremely infuriating but we do this all the time with embedding and that is what fast embed primarily tries to fix. We say embeddings from the lens of production and we say that how can we make a Python library which is built for speed, efficiency and accuracy? Those are the core ethos in that sense. And I think people really find this relatable as a problem area. So you can see this on our GitHub issues. For instance, somebody says that oh yeah, we actually does what it says and yes, that's a good thing. So for 8 million tokens we took about 3 hours on a MacBook Pro M one while some other Olama embedding took over two days. Nirant Kasliwal: You can expect what 8 million tokens would cost on open air and how slow it would be given that they frequently rate limit you. So for context, we made a 1 million embedding set which was a little more than it was a lot more than 1 million tokens and that took us several hundred of us. It was not expensive, but it was very slow. So as a batch process, if you want to embed a large data set, it's very slow. I think the more colorful version of this somebody wrote on LinkedIn, Prithvira wrote on LinkedIn that your embeddings will go and I love that idea that we have optimized speed so that it just goes fast. That's the idea. So what do we I mean let's put names to these things, right? So one is we want it to be fast and light. And I'll explain what do we mean by light? We want recall to be fast, right? I mean, that's what we started with that what are embedding we want to be make sure that similar things are similar. Nirant Kasliwal: That's what we call recall. We often confuse this with accuracy but in retrieval sense we'll call it recall. We want to make sure it's still easy to use, right? Like there is no reason for this to get complicated. And we are fast, I mean we are very fast. And part of that is let's say we use BGE small En, the English model only. And let's say this is all in tokens per second and the token is model specific. So for instance, the way BGE would count a token might be different from how OpenAI might count a token because the tokenizers are slightly different and they have been trained on slightly different corporates. So that's the idea. Nirant Kasliwal: I would love you to try this so that I can actually brag about you trying it. Demetrios: What was the fine print on that slide? Benchmarks are my second most liked way to brag. 
What's your first most liked way to brag? Nirant Kasliwal: The best way is that when somebody tells me that they're using it. Demetrios: There we go. So I guess that's an easy way to get people to try and use it. Nirant Kasliwal: Yeah, I would love it if you try it. Tell us how it went for you, where it's working, where it's broken, all of that. I love it if you report issue then say I will even appreciate it if you yell at me because that means you're not ignoring me. Demetrios: That's it. There we go. Bug reports are good to throw off your mojo. Keep it rolling. Nirant Kasliwal: So we said fast and light. So what does light mean? So you will see a lot of these Embedding servers have really large image sizes. When I say image, I mean typically or docker image that can typically go to a few GPS. For instance, in case of sentence transformers, which somebody's checked out with Transformers the package and PyTorch, you get a docker image of roughly five GB. The Ram consumption is not that high by the way. Right. The size is quite large and of that the model is just 400 MB. So your dependencies are very large. Nirant Kasliwal: And every time you do this on, let's say an AWS Lambda, or let's say if you want to do horizontal scaling, your cold start times can go in several minutes. That is very slow and very inefficient if you are working in a workload which is very spiky. And if you were to think about it, people have more queries than, let's say your corpus quite often. So for instance, let's say you are in customer support for an ecommerce food delivery app. Bulk of your order volume will be around lunch and dinner timing. So that's a very spiky load. Similarly, ecommerce companies, which are even in fashion quite often see that people check in on their orders every evening and for instance when they leave from office or when they get home. And that's another spike. Nirant Kasliwal: So whenever you have a spiky load, you want to be able to scale horizontally and you want to be able to do it fast. And that speed comes from being able to be light. And that is why Fast Embed is very light. So you will see here that we call out that Fast Embed is just half a GB versus five GB. So on the extreme cases, this could be a ten x difference in your docker, image sizes and even Ram consumptions recall how good or bad are these embeddings? Right? So we said we are making them fast but do we sacrifice how much performance do we trade off for that? So we did a cosine similarity test with our default embeddings which was VG small en initially and now 1.5 and they're pretty robust. We don't sacrifice a lot of performance. Everyone with me? I need some audio to you. Demetrios: I'm totally with you. There is a question that came through the chat if this is the moment to ask it. Nirant Kasliwal: Yes, please go for it. Demetrios: All right it's from a little bit back like a few slides ago. So I'm just warning you. Are there any plans to support audio or image sources in fast embed? Nirant Kasliwal: If there is a request for that we do have a plan to support multimodal embedding. We would love to do that. If there's specific model within those, let's say you want Clip or Seglip or a specific audio model, please mention that either on that discord or our GitHub so that we can plan accordingly. So yeah, that's the idea. We need specific suggestions so that we keep adding it. 
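(As a quick reference, the models FastEmbed already ships can be listed programmatically. Recent releases expose a small helper for this; treat the exact class name and dictionary fields below as assumptions, since they may differ between versions.)

```python
from fastembed import TextEmbedding  # pip install fastembed

# Each entry describes one supported model: its name, embedding dimension,
# approximate download size, and a short description.
for model_info in TextEmbedding.list_supported_models():
    print(model_info["model"], model_info["dim"])
```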
We don't want to have too many models because then that creates confusion for our end users and that is why we take opinated stance and that is actually a good segue. Why do we prioritize that? We want this package to be easy to use so we're always going to try and make the best default choice for you. So this is a very Linux way of saying that we do one thing and we try to do that one thing really well. Nirant Kasliwal: And here, let's say for instance, if you were to look at Qdrant client it's just passing everything as you would. So docs is a list of strings, metadata is a list of dictionaries and IDs again is a list of IDs valid IDs as per the Qdrant Client spec. And the search is also very straightforward. The entire search query is basically just two params. You could even see a very familiar integration which is let's say langchain. I think most people here would have looked at this in some shape or form earlier. This is also very familiar and very straightforward. And under the hood what are we doing is just this one line. Nirant Kasliwal: We have a dot embed which is a generator and we call a list on that so that we actually get a list of embeddings. You will notice that we have a passage and query keys here which means that our retrieval model which we have used as default here, takes these into account that if there is a passage and a query they need to be mapped together and a question and answer context is captured in the model training itself. The other caveat is that we pass on the token limits or context windows from the embedding model creators themselves. So in the case of this model, which is BGE base, that is 512 BGE tokens. Demetrios: One thing on this, we had Neil's from Cohere on last week and he was talking about Cohere's embed version three, I think, or V three, he was calling it. How does this play with that? Does it is it supported or no? Nirant Kasliwal: As of now, we only support models which are open source so that we can serve those models directly. Embed V three is cloud only at the moment, so that is why it is not supported yet. But that said, we are not opposed to it. In case there's a requirement for that, we are happy to support that so that people can use it seamlessly with Qdrant and fast embed does the heavy lifting of passing it to Qdrant, structuring the schema and all of those for you. So that's perfectly fair. As I ask, if we have folks who would love to try coherent embed V three, we'd use that. Also, I think Nils called out that coherent embed V three is compatible with binary quantization. And I think that's the only embedding which officially supports that. Nirant Kasliwal: Okay, we are binary quantization aware and they've been trained for it. Like compression awareness is, I think, what it was called. So Qdrant supports that. So please of that might be worth it because it saves about 30 x in memory costs. So that's quite powerful. Demetrios: Excellent. Nirant Kasliwal: All right, so behind the scenes, I think this is my favorite part of this. It's also very short. We do literally two things. Why are we fast? We use ONNX runtime as of now, our configurations are such that it runs on CPU and we are still very fast. And that's because of all the multiple processing and ONNX runtime itself at some point in the future. We also want to support GPUs. We had some configuration issues on different Nvidia configurations. As the GPU changes, the OnX runtime does not seamlessly change the GPU. 
Nirant Kasliwal: So that is why we do not allow that as a provider. But you can pass that. It's not prohibited, it's just not a default. We want to make sure your default is always available and will be available in the happy path, always. And we quantize the models for you. So when we quantize, what it means is we do a bunch of tricks supported by a huge shout out to hugging faces optimum. So we do a bunch of optimizations in the quantization, which is we compress some activations, for instance, gelu. We also do some graph optimizations and we don't really do a lot of dropping the bits, which is let's say 32 to 16 or 64 to 32 kind of quantization only where required. Nirant Kasliwal: Most of these gains come from the graph optimizations themselves. So there are different modes which optimum itself calls out. And if there are folks interested in that, happy to share docs and details around that. Yeah, that's about it. Those are the two things which we do from which we get bulk of these speed gains. And I think this goes back to the question which you opened with. Yes, we do want to support multimodal. We are looking at how we can do an on and export of Clip, which is as robust as Clip. Nirant Kasliwal: So far we have not found anything. I've spent some time looking at this, the quality of life upgrades. So far, most of our model downloads have been through Google cloud storage hosted by Qdrant. We want to support hugging Face hub so that we can launch new models much, much faster. So we will do that soon. And the next thing is, as I called out, we always want to take performance as a first class citizen. So we are looking at how we can allow you to change or adapt frozen Embeddings, let's say open a Embedding or any other model to your specific domain. So maybe a separate toolkit within Fast Embed which is optional and not a part of the default path, because this is not something which you will use all the time. Nirant Kasliwal: We want to make sure that your training and experience parts are separate. So we will do that. Yeah, that's it. Fast and sweet. Demetrios: Amazing. Like FastEmbed. Nirant Kasliwal: Yes. Demetrios: There was somebody that talked about how you need to be good at your puns and that might be the best thing, best brag worthy stuff you've got. There's also a question coming through that I want to ask you. Is it true that when we use Qdrant client add Fast Embedding is included? We don't have to do it? Nirant Kasliwal: What do you mean by do it? As in you don't have to specify a Fast Embed model? Demetrios: Yeah, I think it's more just like you don't have to add it on to Qdrant in any way or this is completely separated. Nirant Kasliwal: So this is client side. You own all your data and even when you compress it and send us all the Embedding creation happens on your own compute. This Embedding creation does not happen on Cauldron cloud, it happens on your own compute. It's consistent with the idea that you should have as much control as possible. This is also why, as of now at least, Fast Embed is not a dedicated server. We do not want you to be running two different docker images for Qdrant and Fast Embed. Or let's say two different ports for Qdrant and Discord within the sorry, Qdrant and Fast Embed in the same docker image or server. So, yeah, that is more chaos than we would like. Demetrios: Yeah, and I think if I understood it, I understood that question a little bit differently, where it's just like this comes with Qdrant out of the box. 
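(For readers following along, the "out of the box" flow under discussion looks roughly like this. It assumes the optional FastEmbed extra is installed via `pip install "qdrant-client[fastembed]"`; the collection name and documents are placeholders, and the default embedding model is downloaded and run locally, so only vectors and payloads ever reach Qdrant.)

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # assumed local instance

# add() pulls the default FastEmbed model on first use and embeds client-side.
client.add(
    collection_name="demo_collection",  # placeholder name
    documents=[
        "Qdrant ships a LangChain integration.",
        "FastEmbed runs quantized ONNX models on CPU.",
    ],
    metadata=[{"source": "notes"}, {"source": "notes"}],
    ids=[1, 2],
)

# query() embeds the query text with the same model before searching.
hits = client.query(
    collection_name="demo_collection",
    query_text="How does FastEmbed stay fast on CPU?",
    limit=2,
)
for hit in hits:
    print(hit.score, hit.document)
```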
Nirant Kasliwal: Yes, I think that's a good way to look at it. We set all the defaults for you, we select good practices for you and that should work in a vast majority of cases based on the MTEB benchmark, but we cannot guarantee that it will work for every scenario. Let's say our default model is picked for English and it's mostly tested on open domain open web data. So, for instance, if you're doing something domain specific, like medical or legal, it might not work that well. So that is where you might want to still make your own Embeddings. So that's the edge case here. Demetrios: What are some of the other knobs that you might want to be turning when you're looking at using this. Nirant Kasliwal: With Qdrant or without Qdrant? Demetrios: With Qdrant. Nirant Kasliwal: So one thing which I mean, one is definitely try the different models which we support. We support a reasonable range of models, including a few multilingual ones. Second is while we take care of this when you do use with Qdrants. So, for instance, let's say this is how you would have to manually specify, let's say, passage or query. When you do this, let's say add and query. What we do, we add the passage and query keys while creating the Embeddings for you. So this is taken care of. So whatever is your best practices for the Embedding model, make sure you use it when you're using it with Qdrant or just in isolation as well. Nirant Kasliwal: So that is one knob. The second is, I think it's very commonly recommended, we would recommend that you start with some evaluation, like have maybe let's even just five sentences to begin with and see if they're actually close to each other. And as a very important shout out in Embedding retrieval, when we use Embedding for retrieval or vector similarity search, it's the relative ordering which matters. So, for instance, we cannot say that zero nine is always good. It could also mean that the best match is, let's say, 0.6 in your domain. So there is no absolute cut off for threshold in terms of match. So sometimes people assume that we should set a minimum threshold so that we get no noise. So I would suggest that you calibrate that for your queries and domain. Nirant Kasliwal: And you don't need a lot of queries. Even if you just, let's say, start with five to ten questions, which you handwrite based on your understanding of the domain, you will do a lot better than just picking a threshold at random. Demetrios: This is good to know. Okay, thanks for that. So there's a question coming through in the chat from Shreya asking how is the latency in comparison to elasticsearch? Nirant Kasliwal: Elasticsearch? I believe that's a Qdrant benchmark question and I'm not sure how is elastics HNSW index, because I think that will be the fair comparison. I also believe elastics HNSW index puts some limitations on how many vectors they can store with the payload. So it's not an apples to apples comparison. It's almost like comparing, let's say, a single page with the entire book, because that's typically the ratio from what I remember I also might be a few months outdated on this, but I think the intent behind that question is, is Qdrant fast enough for what Qdrant does? It is definitely fast is, which is embedding similarity search. So for that, it's exceptionally fast. It's written in Rust and Twitter for all C. Similar tweets uses this at really large scale. They run a Qdrant instance. 
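(A minimal sketch of that calibration idea, assuming FastEmbed is installed: the model choice, questions, and passages are placeholders, and the point is to inspect the relative ordering of cosine scores for a handful of handwritten queries rather than to pick an absolute threshold. The passage/query handling that the Qdrant integration applies automatically is skipped here for brevity.)

```python
import numpy as np
from fastembed import TextEmbedding  # pip install fastembed

# Handwritten, domain-specific questions and the passages you expect them to match.
queries = [
    "How do I reset my password?",
    "What is the refund policy for annual plans?",
]
passages = [
    "To reset your password, open Settings and choose 'Forgot password'.",
    "Annual plans can be refunded within 30 days of purchase.",
    "Our office is closed on public holidays.",
]

model = TextEmbedding(model_name="BAAI/bge-small-en-v1.5")
q_vecs = np.array(list(model.embed(queries)))
p_vecs = np.array(list(model.embed(passages)))

# Cosine similarity; normalise explicitly in case the vectors are not unit length.
q_vecs /= np.linalg.norm(q_vecs, axis=1, keepdims=True)
p_vecs /= np.linalg.norm(p_vecs, axis=1, keepdims=True)
scores = q_vecs @ p_vecs.T

# Compare the relative ordering per query instead of a fixed cutoff.
for query, row in zip(queries, scores):
    ranked = row.argsort()[::-1]
    print(query, "->", [(passages[i][:35], round(float(row[i]), 3)) for i in ranked])
```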
Nirant Kasliwal: So I think if a Twitter scale company, which probably does about anywhere between two and 5 million tweets a day, if they can embed and use Qdrant to serve that similarity search, I think most people should be okay with that latency and throughput requirements. Demetrios: It's also in the name. I mean, you called it Fast Embed for a reason, right? Nirant Kasliwal: Yes. Demetrios: So there's another question that I've got coming through and it's around the model selection and embedding size. And given the variety of models and the embedding sizes available, how do you determine the most suitable models and embedding sizes? You kind of got into this on how yeah, one thing that you can do to turn the knobs are choosing a different model. But how do you go about choosing which model is better? There. Nirant Kasliwal: There is the academic way of looking at and then there is the engineer way of looking at it, and then there is the hacker way of looking at it. And I will give you all these three answers in that order. So the academic and the gold standard way of doing this would probably look something like this. You will go at a known benchmark, which might be, let's say, something like Kilt K-I-L-T or multilingual text embedding benchmark, also known as MTEB or Beer, which is beir one of these three benchmarks. And you will look at their retrieval section and see which one of those marks very close to whatever is your domain or your problem area, basically. So, for instance, let's say you're working in Pharmacology, the ODS that a customer support retrieval task is relevant to. You are near zero unless you are specifically in, I don't know, a Pharmacology subscription app. So that is where you would start. Nirant Kasliwal: This will typically take anywhere between two to 20 hours, depending on how familiar you are with these data sets already. But it's not going to take you, let's say, a month to do this. So just to put a rough order of magnitude, once you have that, you try to take whatever is the best model on that subdomain data set and you see how does it work within your domain and you launch from there. At that point, you switch into the engineer's mindset. The engineer's mindset now tells you that the best way to build something is to make an informed guess about what workload or challenges you're going to foresee. Right. Like a civil engineer builds a bridge around how many cars they expect, they're obviously not going to build a bridge to carry a ship load, for instance, or a plane load, which are very different. So you start with that and you say, okay, this is the number of requests which I expect, this is what my budget is, and your budget will quite often be, let's say, in terms of latency budgets, compute and memory budgets. Nirant Kasliwal: So for instance, one of the reasons I mentioned binary quantization and product quantization is with something like binary quantization you can get 98% recall, but with 30 to 40 x memory savings because it discards all the extraneous bits and just keeps the zero or one bit of the embedding itself. And Qdrant has already measured it for you. So we know that it works for OpenAI and Cohere embeddings for sure. So you might want to use that to just massively scale while keeping your budgets as an engineer. Now, in order to do this, you need to have some sense of three numbers, right? What are your latency requirements, your cost requirements, and your performance requirement. 
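(Before the hacker answer that follows, a quick aside on the binary quantization lever Nirant mentions: in Qdrant it is a collection-level setting, and accuracy can be recovered at query time by rescoring against the original vectors. The collection name, vector size, and query vector below are placeholders.)

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # assumed local instance

client.create_collection(
    collection_name="articles",  # placeholder name
    vectors_config=models.VectorParams(
        size=1536,  # e.g. OpenAI-sized embeddings
        distance=models.Distance.COSINE,
    ),
    # Keep 1-bit vectors in RAM for cheap, fast candidate generation.
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)

# Oversample candidates from the binary index, then rescore with full vectors.
hits = client.search(
    collection_name="articles",
    query_vector=[0.0] * 1536,  # placeholder query embedding
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(rescore=True, oversampling=2.0),
    ),
)
```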
Now, for the performance, which is where engineers are most unfamiliar with, I will give the hacker answer, which is this. Demetrios: Is what I was waiting for. Man, so excited for this one, exactly this. Please tell us the hacker answer. Nirant Kasliwal: The hacker answer is this there are two tricks which I will share. One is write ten questions, figure out the best answer, and see which model gets as many of those ten, right? The second is most embedding models which are larger or equivalent to 768 embeddings, can be optimized and improved by adding a small linear head over it. So for instance, I can take the Open AI embedding, which is 1536 embedding, take my text, pass it through that, and for my own domain, adapt the Open A embedding by adding two or three layers of linear functions, basically, right? Y is equals to MX plus C or Ax plus B y is equals to C, something like that. So it's very simple, you can do it on NumPy, you don't need Torch for it because it's very small. The header or adapter size will typically be in this range of few KBS to be maybe a megabyte, maybe. I think the largest I have used in production is about 400 500 KBS. That's about it. And that will improve your recall several, several times. Nirant Kasliwal: So that's one, that's two tricks. And a third bonus hacker trick is if you're using an LLM, sometimes what you can do is take a question and rewrite it with a prompt and make embeddings from both, and pull candidates from both. And then with Qdrant Async, you can fire both these queries async so that you're not blocked, and then use the answer of both the original question which the user gave and the one which you rewrote using the LLM and see select the results which are there in both, or figure some other combination method. Also, so most Kagglers would be familiar with the idea of ensembling. This is the way to do query inference time ensembling, that's awesome. Demetrios: Okay, dude, I'm not going to lie, that was a lot more than I was expecting for that answer. Nirant Kasliwal: Got into the weeds of retrieval there. Sorry. Demetrios: I like it though. I appreciate it. So what about when it comes to the know, we had Andre V, the CTO of Qdrant on here a few weeks ago. He was talking about binary quantization. But then when it comes to quantizing embedding models, in the docs you mentioned like quantized embedding models for fast CPU generation. Can you explain a little bit more about what quantized embedding models are and how they enhance the CPU performance? Nirant Kasliwal: So it's a shorthand to say that they optimize CPU performance. I think the more correct way to look at it is that we use the CPU better. But let's talk about optimization or quantization, which we do here, right? So most of what we do is from optimum and the way optimum call set up is they call these levels. So you can basically go from let's say level zero, which is there are no optimizations to let's say 99 where there's a bunch of extra optimizations happening. And these are different flags which you can switch. And here are some examples which I remember. So for instance, there is a norm layer which you can fuse with the previous operation. Then there are different attention layers which you can fuse with the previous one because you're not going to update them anymore, right? So what we do in training is we update them. Nirant Kasliwal: You know that you're not going to update them because you're using them for inference. 
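(The Optimum machinery being described looks roughly like this; the model name and configuration values are illustrative assumptions rather than FastEmbed's actual build script, and the walkthrough of what gets fused and discarded continues below.)

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig

model_id = "BAAI/bge-small-en-v1.5"  # any ONNX-exportable encoder works

# Export the PyTorch checkpoint to ONNX.
model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True)

# Level 99 applies the aggressive graph optimizations: fused layer norms,
# fused attention blocks, GELU approximation, and similar inference-only tricks.
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(
    save_dir="bge-small-en-onnx-optimized",
    optimization_config=OptimizationConfig(optimization_level=99),
)
```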
So let's say when somebody asks a question, you want that to be converted into an embedding as fast as possible and as cheaply as possible. So you can discard all these extra information which you are most likely to not going to use. So there's a bunch of those things and obviously you can use mixed precision, which most people have heard of with projects, let's say like lounge CPP that you can use FP 16 mixed precision or a bunch of these things. Let's say if you are doing GPU only. So some of these things like FP 16 work better on GPU. The CPU part of that claim comes from how ONNX the runtime which we use allows you to optimize whatever CPU instruction set you are using. So as an example with intel you can say, okay, I'm going to use the Vino instruction set or the optimization. Nirant Kasliwal: So when we do quantize it, we do quantization right now with CPUs in mind. So what we would want to do at some point in the future is give you a GPU friendly quantized model and we can do a device check and say, okay, we can see that a GPU is available and download the GPU friendly model first for you. Awesome. Does that answer the. Question. Demetrios: I mean, for me, yeah, but we'll see what the chat says. Nirant Kasliwal: Yes, let's do that. Demetrios: What everybody says there. Dude, this has been great. I really appreciate you coming and walking through everything we need to know, not only about fast embed, but I think about embeddings in general. All right, I will see you later. Thank you so much, Naran. Thank you, everyone, for coming out. If you want to present, please let us know. Hit us up, because we would love to have you at our vector space talks.
--- draft: false title: How to meow on the long tail with Cheshire Cat AI? - Piero and Nicola | Vector Space Talks slug: meow-with-cheshire-cat short_description: Piero Savastano and Nicola Procopio discusses the ins and outs of Cheshire Cat AI. description: Cheshire Cat AI's Piero Savastano and Nicola Procopio discusses the framework's vector space complexities, community growth, and future cloud-based expansions. preview_image: /blog/from_cms/piero-and-nicola-bp-cropped.png date: 2024-04-09T03:05:00.000Z author: Demetrios Brinkmann featured: false tags: - LLM - Qdrant - Cheshire Cat AI - Vector Search - Vector database --- > *"We love Qdrant! It is our default DB. We support it in three different forms, file based, container based, and cloud based as well.”*\ — Piero Savastano > Piero Savastano is the Founder and Maintainer of the open-source project, Cheshire Cat AI. He started in Deep Learning pure research. He wrote his first neural network from scratch at the age of 19. After a period as a researcher at La Sapienza and CNR, he provides international consulting, training, and mentoring services in the field of machine and deep learning. He spreads Artificial Intelligence awareness on YouTube and TikTok. > *"Another feature is the quantization because with this Qdrant feature we improve the accuracy at the performance. We use the scalar quantitation because we are model agnostic and other quantitation like the binary quantitation.”*\ — Nicola Procopio > Nicola Procopio has more than 10 years of experience in data science and has worked in different sectors and markets from Telco to Healthcare. At the moment he works in the Media market, specifically on semantic search, vector spaces, and LLM applications. He has worked in the R&D area on data science projects and he has been and is currently a contributor to some open-source projects like Cheshire Cat. He is the author of popular science articles about data science on specialized blogs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2d58Xui99QaUyXclIE1uuH?si=68c5f1ae6073472f), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/K40DIG9ZzAU?feature=shared).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/K40DIG9ZzAU?si=rK0EVXmvNJ5OSZa4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/How-to-meow-on-the-long-tail-with-Cheshire-Cat-AI----Piero-and-Nicola--Vector-Space-Talks-018-e2h7k59/a-ab31teu" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Did you know that companies across Italy, Germany, and the USA are already harnessing the power of Cheshire Cat for a variety of nifty purposes? It's not just a pretty face; it's evolved from a simple tutorial to an influential framework! It’s time to learn how to meow! Piero in this episode of Vector Space Talks discusses the community and open-source nature that contributes to the framework's success and expansion while Nicola reveals the Cheshire Cat’s use of Qdrant and quantization to enhance search accuracy and performance in a hybrid mode. Here are the highlights from this episode: 1. 
**The Art of Embedding:** Discover how Cheshire Cat uses collections with an embedder, fine-tuning them through scalar quantization and other methods to enhance accuracy and performance. 2. **Vectors in Harmony:** Get the lowdown on storing quantized vectors in a hybrid mode – it's all about saving memory without compromising on speed. 3. **Memory Matters:** Scoop on managing different types of memory within Qdrant, the go-to vector DB for Cheshire Cat. 4. **Community Chronicles:** Talking about the growing community that's shaping the evolution of Cheshire Cat - from enthusiasts to core contributors! 5. **Looking Ahead:** They've got grand plans brewing for a cloud version of Cheshire Cat. Imagine a marketplace buzzing with user-generated plugins. This is the future they're painting! > Fun Fact: The Cheshire Cat community on Discord plays a crucial role in the development and user support of the framework, described humorously by Piero as "a mess" due to its large and active nature. > ## Show notes: 00:00 Powerful open source framework.\ 06:11 Tutorials, code customization, conversational forms, community challenges.\ 09:09 Exploring Qdrant's memory features.\ 13:02 Qdrant experiments with document quantization.\ 17:52 Explore details, export, and memories.\ 20:42 Addressing challenges in ensuring Cheshire Cat's reliability.\ 23:36 Leveraging cool features presents significant challenges.\ 27:06 Plugin-based approach distinguishes the CAT framework.\ 29:28 Wrap up ## More Quotes from Piero and Nicola: *"We have a little partnership going on with Qdrant because the native DB in this framework is Qdrant.”*\ — Piero Savastano *"We explore the feature, the Qdrant aliases feature, and we call this topic the drunken cut effect because if we have several embedders, for example two model, two embedders with the same dimension, we can put in the collection in the episodic or declarative collection factors from two different embeddings with the same dimension. But the points are different for the same sentences and for the cat is like for the human, when he mixes drinks he has a big headache and don't understand what it retrieved.”*\ — Nicola Procopio *"It's a classic language model assistant chat we have for each message you have explainability, you can upload documents. This is all handled automatically and we start with new stuff. You have a memory page where you can search through the memories of your cat, delete, explore collections, collection from Qdrant.”*\ — Piero Savastano *"Because I'm a researcher, a data scientist, I like to play with strange features like binary quantization, but we need to maintain the focus on the user needs, on the user behavior.”*\ — Nicola Procopio ## Transcript: Demetrios: What is up, good people of the Internet? We are here for another one of these vector space talks and I've got to say it's a special day. We've got the folks from Cheshire Cat coming at you full on today and I want to get it started right away because I know they got a lot to talk about. And today we get a two for one discount. It's going to be nothing like you have experienced before. Or maybe those are big words. I'm setting them up huge. We've got Piero coming at us live. Where you at, Piero? Piero, founder. Demetrios: There he is, founder at Cheshire Cat. And you are joined today by Nicola, one of the core contributors. It's great to have you both very excited. So you guys are going to be talking to us all about what you poetically put how to meow on the long tail with Cheshire Cat. 
And so I know you've got some slides prepared. I know you've got all that fun stuff working right now and I'm going to let you hop right into it so we don't waste any time. You ready? Who wants to share their screen first? Is it you, Nicola, or go? Piero Savastano: I'll go. Thanks. Demetrios: Here we go. Man, you should be seeing it right now. Piero Savastano: Yes. Demetrios: Boom. Piero Savastano: Let's go. Thank you, Demetrios. We're happy to be hosted at the vector space talk. Let's talk about the Cheshire Cat AI. This is an open source framework. We have a little partnership going on with Qdrant because the native DB in this framework is Qdrant. It's a python framework. And before starting to get into the details, I'm going to show you a little video. Piero Savastano: This is the website. So you see, it's a classic language model assistant chat we have for each message you have explainability, you can upload documents. This is all handled automatically and we start with new stuff. You have a memory page where you can search through the memories of your cat, delete, explore collections, collection from Qdrant. We have a plugin system and you can publish any plugin. You can sell your plugin. There is a big ecosystem already and we also give explanation on memories. We have adapters for the most common language models. Piero Savastano: Dark team, you can do a lot of stuff with the framework. This is how it presents itself. We have a blog with tutorials, but going back to our numbers, it is open source, GPL licensed. We have some good numbers. We are mostly active in Italy and in a good part of Europe, East Europe, and also a little bit of our communities in the United States. There are a lot of contributors already and our docker image has been downloaded quite a few times, so it's really easy to start up and running because you just docker run and you're good to go. We have also a discord server with thousands of members. If you want to join us, it's going to be fun. Piero Savastano: We like meme, we like to build culture around code, so it is not just the code, these are the main components of the cat. You have a chat as usual. The rabbit hole is our module dedicated to document ingestion. You can extend all of these parts. We have an agent manager. Meddetter is the module to manage plugins. We have a vectordb which is Qdrant natively, by the way. We use both the file based Qdrant, the container version, and also we support the cloud version. Piero Savastano: So if you are using Qdrant, we support the whole stack. Right now with the framework we have an embedder and a large language model coming to the embedder and language models. You can use any language model or embedded you want, closed source API, open Ollama, self hosted anything. These are the main features. So the first feature of the cat is that he's ready to fight. It is already dogsized. It's model agnostic. One command in the terminal and you can meow. Piero Savastano: The other aspect is that there is not only a retrieval augmented generation system, but there is also an action agent. This is all customizable. You can plug in any script you want as an agent, or you can customize the ready default presence default agent. And one of our specialty is that we do retrieve augmented generation, not only on documents as everybody's doing, but we do also augmented generation over conversations. I can hear your keyboard. We do augmented generation over conversations and over procedures. 
So also our tools and form conversational forms are embedded into the DB. We have a big plugin system. Piero Savastano: It's really easy to use and with different primitives. We have hooks which are events, WordPress style events. We have tools, function calling, and also we just build up a spec for conversational forms. So you can use your assistant to order a pizza, for example, multitool conversation and order a pizza, book a flight. You can do operative stuff. I already told you, and I repeat a little, not just a runner, but it's a full fledged framework. So we built this not to use language model, but to build applications on top of language models. There is a big documentation where all the events are described. Piero Savastano: You find tutorials and with a few lines of code you can change the prompt. You can use long chain inspired tools, and also, and this is the big part we just built, you can use conversational forms. We launched directly on GitHub and in our discord a pizza challenge, where we challenged our community members to build up prototypes to support a multi turn conversational pizza order. And the result of this challenge is this spec where you define a pedantic model in Python and then you subclass the pizza form, the cut form from the framework, and you can give examples on utterances that triggers the form, stops the forms, and you can customize the submit function and any other function related to the form. So with a simple subclass you can handle pragmatic, operational, multi turn conversations. And I truly believe we are among the first in the world to build such a spec. We have a lot of plugins. Many are built from the community itself. Piero Savastano: Many people is already hosting private plugins. There is a little marketplace independent about plugins. All of these plugins are open source. There are many ways to customize the cat. The big advantage here is no vendor lock in. So since the framework is open and the plugin system can be open, you do not need to pass censorship from big tech giants. This is one of the best key points of moving the framework along the open source values for the future. We plan to add the multimodality. Piero Savastano: At the moment we are text only, but there are plugins to generate images. But we want to have images and sounds natively into the framework. We already accomplished the conversational forms. In a later talk we can speak in more detail about this because it's really cool and we want to integrate a knowledge graph into the framework so we can play with both symbolic vector representations and symbolic network ones like linked data, for example wikidata. This stuff is going to be really interesting within. Yes, we love the Qdrant. It is our default DB. We support it in three different forms, file based, container based, and cloud based also. Piero Savastano: But from now on I want to give word to Nicola, which is way more expert on this vector search topic and he wrote most of the part related to the DB. So thank you guys. Nicola to you. Nicola Procopio: Thanks Piero. Thanks Demetrios. I'm so proud to be hosted here because I'm a vector space talks fan. Okay, Qdrant is the vector DB of the cat and now I will try to explore the feature that we use on Cheshire Cat. The first slide, explain the cut's memory. Because Qdrant is our memory. We have a long term memory in three parts. 
The episodic memory when we store and manage the conversation, the chart, the declarative memory when we store and manage documents and the procedural memory when we store and manage the tools how to manage three memories with several embedder because the user can choose his fabric embedder and change it. Nicola Procopio: We explore the feature, the Qdrant aliases feature, and we call this topic the drunken cut effect because if we have several embedders, for example two model, two embedders with the same dimension, we can put in the collection in the episodic or declarative collection factors from two different embeddings with the same dimension. But the points are different for the same sentences and for the cat is like for the human, when he mixes drinks he has a big headache and don't understand what it retrieved. To us the flow now is this. We create the collection with the name and we use the aliases to. Piero Savastano: Label. Nicola Procopio: This collection with the name of the embedder used. When the user changed the embedder, we check if the embedder has the same dimension. If has the same dimension, we check also the aliases. If the aliases is the same we don't change nothing. Otherwise we create another collection and this is the drunken cut effect. The first feature that we use in the cat. Another feature is the quantization because with this Qdrant feature we improve the accuracy at the performance. We use the scalar quantitation because we are model agnostic and other quantitation like the binary quantitation. Nicola Procopio: If you read on the Qdrant documents are experimented on not to all embedder but also for OpenAI and Coer. If I remember well with this discover quantitation and the scour quantization is used in the storage step. The vector are quantized and stored in a hybrid mode, the original vector on disk, the quantized vector in RAM and with this procedure we procedure we can use less memory. In case of Qdrant scalar quantization, the flat 32 elements is converted to int eight on a single number on a single element needs 75% less memory. In case of big embeddings like I don't know Gina embeddings or mistral embeddings with more than 1000 elements. This is big improvements. The second part is the retriever step. We use a quantizement query at the quantized vector to calculate causing similarity and we have the top n results like a simple semantic search pipeline. Nicola Procopio: But if we want a top end results in quantize mod, the quantity mod has less quality on the information and we use the oversampling. The oversampling is a simple multiplication. If we want top n with n ten with oversampling with a score like one five, we have 15 results, quantities results. When we have these 15 quantities results, we retrieve also the same 15 unquanted vectors. And on these unquanted vectors we rescale busset on the query and filter the best ten. This is an improvement because the retrieve step is so fast. Yes, because using these tip and tricks, the Cheshire capped vectors achieve up. Piero Savastano: Four. Nicola Procopio: Times lower memory footprint and two time performance increase. We are so fast using this Qdrant feature. And last but not least, we go in deep on the memory. This is the visualization that Piero showed before. This is the vector space in 2D we use Disney is very similar to the Qdrant cloud visualization. For the embeddings we have the search bar, how many vectors we want to retrieve. We can choose the memory and other filters. 
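(A rough sketch of the Qdrant features Nicola describes here, before the tour of the memory page continues. The collection name, vector size, and alias naming scheme are illustrative assumptions rather than Cheshire Cat's actual code.)

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # assumed local instance

# Scalar int8 quantization: quantized vectors stay in RAM while the
# full-precision originals live on disk.
client.create_collection(
    collection_name="declarative_memory",  # placeholder name
    vectors_config=models.VectorParams(
        size=384, distance=models.Distance.COSINE, on_disk=True
    ),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,
        )
    ),
)

# The "drunken cat" fix: alias the collection with the embedder it was built with,
# so switching embedders leads to a new collection instead of a mixed vector space.
client.update_collection_aliases(
    change_aliases_operations=[
        models.CreateAliasOperation(
            create_alias=models.CreateAlias(
                collection_name="declarative_memory",
                alias_name="declarative_bge-small-en-v1.5",  # embedder name as alias
            )
        )
    ]
)

# Retrieval with oversampling: fetch 10 * 1.5 quantized candidates, then rescore
# them against the original vectors and keep the best 10.
hits = client.search(
    collection_name="declarative_memory",
    query_vector=[0.0] * 384,  # placeholder query embedding
    limit=10,
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(rescore=True, oversampling=1.5),
    ),
)
```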
We can filter on the memory and we can wipe a memory or all memory and clean all our space. Nicola Procopio: We can go in deep using the details. We can pass on the dot and we have a bubble or use the detail, the detail and we have a list of first n results near our query for every memory. Last but not least, we can export and share our memory in two modes. The first is exporting the JSON using the export button from the UI. Or if you are very curious, you can navigate the folder in the project and share the long term memory folder with all the memories. Or the experimental feature is wake up the door mouse. This feature is simple, the download of Qdrant snapshots. This is experimental because the snapshot is very easy to download and we will work on faster methods to use it. Nicola Procopio: But now it works and sometimes us, some user use this feature for me is all and thank you. Demetrios: All right, excellent. So that is perfect timing. And I know there have been a few questions coming through in the chat, one from me. I think you already answered, Piero. But when we can have some pistachio gelato made from good old Cheshire cat. Piero Savastano: So the plan is make the cat order gelato from service from an API that can already be done. So we meet somewhere or at our house and gelato is going to come through the cat. The cat is able to take, each of us can do a different order, but to make the gelato itself, we're going to wait for more open source robotics to come to our way. And then we go also there. Demetrios: Then we do that, we can get the full program. How cool is that? Well, let's see, I'll give it another minute, let anyone from the chat ask any questions. This was really cool and I appreciate you all breaking down. Not only the space and what you're doing, but the different ways that you're using Qdrant and the challenges and the architecture behind it. I would love to know while people are typing in their questions, especially for you, Nicola, what have been some of the challenges that you've faced when you're dealing with just trying to get Cheshire Cat to be more reliable and be more able to execute with confidence? Nicola Procopio: The challenges are in particular to mix a lot of Qdrant feature with the user needs. Because I'm a researcher, a data scientist, I like to play with strange features like binary quantization, but we need to maintain the focus on the user needs, on the user behavior. And sometimes we cut some feature on the Cheshire cat because it's not important now for for the user and we can introduce some bug, or rather misunderstanding for the user. Demetrios: Can you hear me? Yeah. All right, good. Now I'm seeing a question come through in the chat that is asking if you are thinking about cloud version of the cat. Like a SaaS, it's going to come. It's in the works. Piero Savastano: It's in the works. Not only you can self host the cat freely, some people install it on a raspberry, so it's really lightweight. We plan to have an osted version and also a bigger plugin ecosystem with a little marketplace. Also user will be able to upload and maybe sell their plugins. So we want to build an know our vision is a WordPress style ecosystem. Demetrios: Very cool. Oh, that is awesome. 
So basically what I'm hearing from Nicola asking about some of the challenges are like, hey, there's some really cool features that we've got in Qdrant, but it's almost like you have to keep your eye on the prize and make sure that you're building for what people need and want instead of just using cool features because you can use cool features. And then Piero, you're saying, hey, we really want to enable people to be able to build more cool things and use all these cool different features and whatever flavors or tools they want to use. But we want to be that ecosystem creator so that anyone can bring and create an app on top of the ecosystem and then enable them to get paid also. So it's not just Cheshire cat getting paid, it's also the contributors that are creating cool stuff. Piero Savastano: Yeah. Community is the first protagonist without community. I'm going to tell you, the cat started as a tutorial. When chat GPT came out, I decided to do a little rug tutorial and I chose Qdrant as vector. I took OpenAI as a language model, and I built a little tutorial, and then from being a tutorial to show how to build an agent on GitHub, it completely went out of hand. So the whole framework is organically grown? Demetrios: Yeah, that's the best. That is really cool. Simone is asking if there's companies that are already using Cheshire cat, and if you can mention a few. Piero Savastano: Yeah, okay. In Italy, there are at least 1015 companies distributed along education, customer care, typical chatbot usage. Also, one of them in particular is trying to build for public administration, which is really hard to do on the international level. We are seeing something in Germany, like web agencies starting to use the cat a little on the USA. Mostly they are trying to build agents using the cat and Ollama as a runner. And a company in particular presented in a conference in Vegas a pitch about a 3d avatar. Inside the avatar, there is the cat as a linguistic device. Demetrios: Oh, nice. Piero Savastano: To be honest, we have a little problem tracking companies because we still have no telemetry. We decided to be no telemetry for the moment. So I hope companies will contribute and make themselves happen. If that does not, we're going to track a little more. But companies using the cat are at least in the 50, 60, 70. Demetrios: Yeah, nice. So if anybody out there is using the cat, and you have not talked to Piero yet, let him know so that he can have a good idea of what you're doing and how you're doing it. There's also another question coming through about the market analysis. Are there some competitors? Piero Savastano: There are many competitors. When you go down to what distinguishes the cat from many other frameworks that are coming out, we decided since the beginning to go for a plugin based operational agent. And at the moment, most frameworks are retrieval augmented generation frameworks. We have both retrieval augmented generation. We have tooling, we have forms. The tools and the forms are also embedded. So the cat can have 20,000 tools, because we also embed the tools and we make a recall over the function calling. So we scaled up both documents, conversation and tools, conversational forms, and I've not seen anybody doing that till now. Piero Savastano: So if you want to build an application, a pragmatic, operational application, to buy products, order pizza, do stuff, have a company assistant. The cat is really good at the moment. Demetrios: Excellent. 
Nicola Procopio: And the cat has a very big community on discord works. Piero Savastano: Our discord is a mess. Demetrios: You got the best memes around. If that doesn't make people join the discord, I don't know what will. Piero Savastano: Please, Nicola. Sorry for interrupting. Demetrios: No. Nicola Procopio: Okay. The community is a plus for Cheshire Cat because we have a lot of developer user on Discord, and for an open source project, the community is fundamentally 100%. Demetrios: Well fellas, this has been awesome. I really appreciate you coming on the vector space talks and sharing about the cat for anybody that is interested. Hopefully they go, they check it out, they join your community, they share some memes and they get involved, maybe even contribute back and create some tools. That would be awesome. So Piero and Nicola, I really appreciate your time. We'll see you all later. Piero Savastano: Thank you. Nicola Procopio: Thank you. Demetrios: And for anybody out there that wants to come on to the vector space talks and give us a bit of an update on how you're using Qdrant, we'd love to hear it. Just reach out and we'll schedule you in. Until next time. See y'all. Bye.
--- draft: false title: Production-scale RAG for Real-Time News Distillation - Robert Caulk | Vector Space Talks slug: real-time-news-distillation-rag short_description: Robert Caulk tackles the challenges and innovations in open source AI and news article modeling. description: Robert Caulk, founder of Emergent Methods, discusses the complexities of context engineering, the power of Newscatcher API for broader news access, and the sophisticated use of tools like Qdrant for improved recommendation systems, all while emphasizing the importance of efficiency and modularity in technology stacks for real-time data management. preview_image: /blog/from_cms/robert-caulk-bp-cropped.png date: 2024-03-25T08:49:22.422Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - LLM --- > *"We've got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody's just trying to figure out what's going on. And some of these base layer tools like Qdrant really enable products and enable companies and they enable us.”*\ -- Robert Caulk > Robert, Founder of Emergent Methods is a scientist by trade, dedicating his career to a variety of open-source projects that range from large-scale artificial intelligence to discrete element modeling. He is currently working with a team at Emergent Methods to adaptively model over 1 million news articles per day, with a goal of reducing media bias and improving news awareness. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/7lQnfv0v2xRtFksGAP6TUW?si=Vv3B9AbjQHuHyKIrVtWL3Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/0ORi9QJlud0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/0ORi9QJlud0?si=rpSOnS2kxTFXiVBq" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Production-scale-RAG-for-Real-Time-News-Distillation---Robert-Caulk--Vector-Space-Talks-015-e2g6464/a-ab0c1sq" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** How do Robert Caulk and Emergent Methods contribute to the open-source community, particularly in AI systems and news article modeling? In this episode, we'll be learning stuff about open-source projects that are reshaping how we interact with AI systems and news article modeling. Robert takes us on an exploration into the evolving landscape of news distribution and the tech making it more efficient and balanced. Here are some takeaways from this episode: 1. **Context Matters**: Discover the importance of context engineering in news and how it ensures a diversified and consumable information flow. 2. **Introducing Newscatcher API**: Get the lowdown on how this tool taps into 50,000 news sources for more thorough and up-to-date reporting. 3. **The Magic of Embedding**: Learn about article summarization and semantic search, and how they're crucial for discovering content that truly resonates. 4. **Qdrant & Cloud**: Explore how Qdrant's cloud offering and its single responsibility principle support a robust, modular approach to managing news data. 5. 
**Startup Superpowers**: Find out why startups have an edge in implementing new tech solutions and how incumbents are tied down by legacy products. > Fun Fact: Did you know that startups' lack of established practices is actually a superpower in the face of new tech paradigms? Legacy products can't keep up! > ## Show notes: 00:00 Intro to Robert and Emergent Methods.\ 05:22 Crucial dedication to scaling context engineering.\ 07:07 Optimizing embedding for semantic similarity in search.\ 13:07 New search technology boosts efficiency and speed.\ 14:17 Reliable cloud provider with privacy and scalability.\ 17:46 Efficient data movement and resource management.\ 22:39 GoLang for services, Rust for security.\ 27:34 Logistics organized; Newscatcher provides up-to-date news.\ 30:27 Tested Weaviate and another in Rust.\ 32:01 Filter updates by starring and user preferences. ## More Quotes from Robert: *"Web search is powerful, but it's slow and ultimately inaccurate. What we're building is real time indexing and we couldn't do that without Qdrant*”\ -- Robert Caulk *"You need to start thinking about persistence and search and making sure those services are robust. That's where Qdrant comes into play. And we found that the all in one solutions kind of sacrifice performance for convenience, or sacrifice accuracy for convenience, but it really wasn't for us. We'd rather just orchestrate it ourselves and let Qdrant do what Qdrant does, instead of kind of just hope that an all in one solution is handling it for us and that allows for modularity performance.”*\ -- Robert Caulk *"Anyone riding the Qdrant wave is just reaping benefits. It seems monthly, like two months ago, sparse vector support got added. There's just constantly new massive features that enable products.”*\ -- Robert Caulk ## Transcript: Demetrios: Robert, it's great to have you here for the vector space talks. I don't know if you're familiar with some of this fun stuff that we do here, but we get to talk with all kinds of experts like yourself on what they're doing when it comes to the vector space and how you've overcome challenges, how you're working through things, because this is a very new field and it is not the most intuitive, as you will tell us more in this upcoming talk. I really am excited because you've been a scientist by trade. Now, you're currently founder at Emergent Methods and you've dedicated your career to a variety of open source projects that range from the large scale AI systems to the discrete element modeling. Now at emergent methods, you are adaptively modeling over 1 million news articles per day. That sounds like a whole lot of news articles. And you've been talking and working through production grade RAG, which is basically everyone's favorite topic these days. So I know you got to talk for us, man. Demetrios: I'm going to hand it over to you. I'll bring up your screen right now, and when someone wants to answer or ask a question, feel free to throw it in the chat and I'll jump out at Robert and stop him if needed. Robert Caulk: Sure. Demetrios: Great to have you here, man. I'm excited for this one. Robert Caulk: Thanks for having me, Demetrios. Yeah, it's a great opportunity. I love talking about vector spaces, parameter spaces. So to talk on the show is great. We've got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody's just trying to figure out what's going on. 
And some of these base layer tools like Qdrant really enable products and enable companies and they enable us. So let me start. Robert Caulk: Yeah, like you said, I'm Robert and I'm a founder of emergent methods. Our background, like you said, we are really committed to free and open source software. We started with a lot of narrow AI. Freak AI was one of our original projects, which is AI ML for algo trading very narrow AI, but we came together and built flowdapt. It's a really nice cluster orchestration software, and I'll talk a little bit about that during this presentation. But some of our background goes into, like you said, large scale deep learning for supercomputers. Really cool, interesting stuff. We have some cloud experience. Robert Caulk: We really like configuration, so let's dive into it. Why do we actually need to engineer context in the news? There's a lot of reasons why news is important and why it needs to be distributed in a way that's balanced and diversified, but also consumable. Right, let's look at Chat GPT on the left. This is Chat GPT plus it's kind of hanging out searching for Gaza news on Bing, trying to find the top three articles live. Web search is powerful, but it's slow and ultimately inaccurate. What we're building is real time indexing and we couldn't do that without Qdrant, and there's a lot of reasons which I'll be perfectly happy to dive into, but eventually Chat GPT will pull something together here. There it is. And the first thing it reports is 25 day old article with 25 day old nudes. Robert Caulk: Old news. So it's just inaccurate. So it's borderline dangerous, what's happening here. Right, so this is a very delicate topic. Engineering context in news properly, which takes a lot of energy, a lot of time and dedication and focus, and not every company really has this sort of resource. So we're talking about enforcing journalistic standards, right? OpenAI and Chat GPt, they just don't have the time and energy to build a dedicated prompt for this sort of thing. It's fine, they're doing great stuff, they're helping you code. But someone needs to step in and really do enforce some journalistic standards here. Robert Caulk: And that includes enforcing diversity, languages, regions and sources. If I'm going to read about Gaza, what's happening over there, you can bet I want to know what Egypt is saying and what France is saying and what Algeria is saying. So let's do this right. That's kind of what we're suggesting, and the only way to do that is to parse a lot of articles. That's how you avoid outdated, stale reporting. And that's a real danger, which is kind of what we saw on that first slide. Everyone here knows hallucination is a problem and it's something you got to minimize, especially when you're talking about the news. It's just a really high cost if you get it wrong. Robert Caulk: And so you need people dedicated to this. And if you're going to dedicate a ton of resources and ton of people, you might as well scale that properly. So that's kind of where this comes into. We call this context engineering news context engineering, to be precise, before llama two, which also is enabling products left and right. As we all know, the traditional pipeline was chunk it up, take 512 tokens, put it through a translator, put it through distill art, do some sentence extraction, and maybe text classification, if you're lucky, get some sentiment out of it and it works. It gets you something. 
But after we're talking about reading full articles, getting real rich, context, flexible output, translating, summarizing, really deciding that custom extraction on the fly as your product evolves, that's something that the traditional pipeline really just doesn't support. Right. Robert Caulk: We're talking being able to on the fly say, you know what, actually we want to ask this very particular question of all articles and get this very particular field out. And it's really just a prompt modification. This all is based on having some very high quality, base level, diversified news. And so we'll talk a little bit more. But newscatchers is one of the sources that we're using, which opens up 50,000 different sources. So check them out. That's newscatcherapi.com. They even give free access to researchers if you're doing research in this. Robert Caulk: So I don't want to dive too much into the direct rag stuff. We can go deep, but I'm happy to talk about some examples of how to optimize this and how we've optimized it. Here on the right, you can see the diagram where we're trying to follow along the process of summarizing and embedding. And I'll talk a bit more about that in a moment. It's here to support after we've summarized those articles and we're ready to embed that. Embedding is really important to get that right because like the name of the show suggests you have to have a clean cluster vector space if you're going to be doing any sort of really rich semantic similarity searches. And if you're going to be able to dive deep into extracting important facts out of all 1 million articles a day, you're going to need to do this right. So having a user query which is not equivalent to the embedded page where this is the data, the enriched data that the embedding that we really want to be able to do search on. Robert Caulk: And then how do we connect the dots here? Of course, there are many ways to go about it. One way which is interesting and fun to talk about is ide. So that's basically a hypothetical document embedding. And what you do is you use the LLM directly to generate a fake article. And that's what we're showing here on the right. So let's say if the user says, what's going on in New York City government, well, you could say, hey, write me just a hypothetical summary based, it could completely fake and use that to create a fake embedding page and use that for the search. Right. So then you're getting a lot closer to where you want to go. Robert Caulk: There's some limitations to this, to it's, there's a computational cost also, it's not updated. It's based on whatever. It's basically diving into what it knows about the New York City government and just creating keywords for you. So there's definitely optimizations here as well. When you talk about ambiguity, well, what if the user follows up and says, well, why did they change the rules? Of course, that's where you can start prompt engineering a little bit more and saying, okay, given this historic conversation and the current question, give me some explicit question without ambiguity, and then do the high, if that's something you want to do. The real goal here is to stay in a single parameter space, a single vector space. Stay as close as possible when you're doing your search as when you do your embedding. So we're talking here about production scale of stuff. Robert Caulk: So I really am happy to geek out about the stack, the open source stack that we're relying on, which includes Qdrant here. But let's start with VLLM. 
I don't know if you guys have heard of it. This is a really great new project, and their focus on continuous batching and page detention. And if I'm being completely honest with you, it's really above my pay grade in the technicals and how they're actually implementing all of that inside the GPU memory. But what we do is we outsource that to that project and we really like what they're doing, and we've seen really good results. It's increasing throughput. So when you're talking about trying to parse through a million articles, you're going to need a lot of throughput. Robert Caulk: The other is text embedding inference. This is a great server. A lot of vector databases will say, okay, we'll do all the embedding for you and we'll do all everything. But when you move to production scale, I'll talk a bit about this later. You need to be using micro service architecture, so it's not super smart to have your database bogged down with doing sorting out the embeddings and sorting out other things. So honestly, I'm a real big fan of single responsibility principle, and that's what Tei does for you. And it also does dynamic batching, which is great in this world where everything is heterogeneous lengths of what's coming in and what's going out. So it's great. Robert Caulk: It really simplifies the process and allows you to isolate resources. But now the star of the show Qdrant, it's really come into its own. Anyone riding the Qdrant wave is just reaping benefits. It seems monthly, like two months ago, sparse vector support got added. There's just constantly new massive features that enable products. Right. So for us, we're doing so much up Cert, we really need to minimize client connections and networking overhead. So you got that batch up cert. Robert Caulk: The filters are huge. We're talking about real time filtering. We can't be searching on news articles from a month ago, two months ago, if the user is asking for a question that's related to the last 24 hours. So having that timestamp filtering and having it be efficient, which is what it is in Qdrant, is huge. Keyword filtering really opens up a massive realm of product opportunities for us. And then the sparse vectors, we hopped on this train immediately and are just seeing benefits. I don't want to say replacement of elasticsearch, but elasticsearch is using sparse vectors as well. So you can add splade into elasticsearch, and splade is great. Robert Caulk: It's a really great alternative to BM 25. It's based on that architecture, and that really opens up a lot of opportunities for filtering out keywords that are kind of useless to the search when the user uses the and a, and then there, these words that are less important splays a bit of a hybrid into semantics, but sparse retrieval. So it's really interesting. And then the idea of hybrid search with semantic and a sparse vector also opens up the ability to do ranking, and you got a higher quality product at the end, which is really the goal, right, especially in production. Point number four here, I would say, is probably one of the most important to us, because we're dealing in a world where latency is king, and being able to deploy Qdrant inside of the same cluster as all the other services. So we're just talking through the switch. That's huge. We're never getting bogged down by network. Robert Caulk: We're never worried about a cloud provider potentially getting overloaded or noisy neighbor problems, stuff like that, completely removed. And then you got high privacy, right. 
All the data is completely isolated from the external world. So this point number four, I'd say, is one of the biggest value adds for us. But then distributing deployment is huge because high availability is important, and deep storage, which when you're in the business of news archival, and that's one of our main missions here, is archiving the news forever. That's an ever growing database, and so you need a database that's going to be able to grow with you as your data grows. So what's the TLDR to this context? Engineering? Well, service orchestration is really just based on service orchestration in a very heterogeneous and parallel event driven environment. On the right side, we've got the user requests coming in. Robert Caulk: They're hitting all the same services, which every five minutes or every two minutes, whatever you've scheduled the scrape workflow on, also hitting the same services, this requires some orchestration. So that's kind of where I want to move into discussing the real production, scaling, orchestration of the system and how we're doing that. Provide some diagrams to show exactly why we're using the tools we're using here. This is an overview of our Kubernetes cluster with the services that we're using. So it's a bit of a repaint of the previous diagram, but a better overview about showing kind of how these things are connected and why they're connected. I'll go through one by one on these services to just give a little deeper dive into each one. But the goal here is for us, in our opinion, microservice orchestration is key. Sticking to single responsibility principle. Robert Caulk: Open source projects like Qdrant, like Tei, like VLLM and Kubernetes, it's huge. Kubernetes is opening up doors for security and for latency. And of course, if you're going to be getting involved in this game, you got to find the strong DevOps. There's no escaping that. So let's step through kind of piece by piece and talk about flow Dapp. So that's our project. That's our open source project. We've spent about two years building this for our needs, and we're really excited because we did a public open sourcing maybe last week or the week before. Robert Caulk: So finally, after all of our testing and rewrites and refactors, we're open. We're open for business. And it's running asknews app right now, and we're really excited for where it's going to go and how it's going to help other people orchestrate their clusters. Our goal and our priorities were highly paralyzed compute and we were running tests using all sorts of different executors, comparing them. So when you use Flowdapt, you can choose ray or dask. And that's key. Especially with vanilla Python, zero code changes, you don't need to know how ray or dask works. In the back end, flowdapt is vanilla Python. Robert Caulk: That was a key goal for us to ensure that we're optimizing how data is moving around the cluster. Automatic resource management this goes back to Ray and dask. They're helping manage the resources of the cluster, allocating a GPU to a task, or allocating multiple tasks to one GPU. These can come in very, very handy when you're dealing with very heterogeneous workloads like the ones that we discussed in those previous slides. For us, the biggest priority was ensuring rapid prototyping and debugging locally. When you're dealing with clusters of 1015 servers, 40 or 5100 with ray, honestly, ray just scales as far as you want. 
So when you're dealing with that big of a cluster, it's really imperative that what you see on your laptop is also what you are going to see once you deploy. And being able to debug anything you see in the cluster is big for us, we really found the need for easy cluster wide data sharing methods between tasks. Robert Caulk: So essentially what we've done is made it very easy to get and put values. And so this makes it extremely easy to move data and share data between tasks and make it highly available and stay in cluster memory or persist it to disk, so that when you do the inevitable version update or debug, you're reloading from a persisted state in the real time. News business scheduling is huge. Scheduling, making sure that various workflows are scheduled at different points and different periods or frequencies rather, and that they're being scheduled correctly, and that their triggers are triggering exactly what you need when you need it. Huge for real time. And then one of our biggest selling points, if you will, for this project is Kubernetes style. Everything. Our goal is everything's Kubernetes style, so that if you're coming from Kubernetes, everything's familiar, everything's resource oriented. Robert Caulk: We even have our own flowectl, which would be the Kubectl style command schemas. A lot of what we've done is ensuring deployment cycle efficiency here. So the goal is that flowdapt can schedule everything and manage all these services for you, create workflows. But why these services? For this particular use case, I'll kind of skip through quickly. I know I'm kind of running out of time here, but of course you're going to need some proprietary remote models. That's just how it works. You're going to of course share that load with on premise llms to reduce cost and to have some reasoning engine on premise. But there's obviously advantages and disadvantages to these. Robert Caulk: I'm not going to go through them. I'm happy to make these slides available, and you're welcome to kind of parse through the details. Yeah, for sure. You need to start thinking about persistence and search and making sure those services are robust. That's where Qdrant comes into play. And we found that the all in one solutions kind of sacrifice performance for convenience, or sacrifice accuracy for convenience, but it really wasn't for us. We'd rather just orchestrate it ourselves and let Qdrant do what Qdrant does, instead of kind of just hope that an all in one solution is handling it for us and that allows for modularity performance. And we'll dump Qdrant if we want to. Robert Caulk: Probably we won't. Or we'll dump if we need to, or we'll swap out for whatever replaces vllm. Trying to keep things modular so that future engineers are able to adapt with the tech that's just blowing up and exploding right now. Right. The last thing to talk about here in a production scale environment is really minimizing the latency. I touched on this with Kubernetes ensuring that these services are sitting on the same network, and that is huge. But that talks about decommunication latency. But when you start talking about getting hit with a ton of traffic, production scale, tons of people asking a question all simultaneously, and you needing to go hit a variety of services, well, this is where you really need to isolate that to an asynchronous environment. Robert Caulk: And of course, if you could write this all in Golang, that's probably going to be your best bet for us. 
We have some services written in Golang, but predominantly, especially the endpoints that the ML engineers need to work with. We're using fast API on pydantic and honestly, it's powerful. Pydantic V 2.0 now runs on Rust, and as anyone in the Qdrant community knows, Rust is really valuable when you're dealing with highly parallelized environments that require high security and protections for immutability and atomicity. Forgive me for the pronunciation, that kind of sums up the production scale talk, and I'm happy to answer questions. I love diving into this sort of stuff. I do have some just general thoughts on why startups are so much more well positioned right now than some of these incumbents, and I'll just do kind of a quick run through, less than a minute just to kind of get it out there. We can talk about it, see if we agree or disagree. Robert Caulk: But you touched on it, Demetrios, in the introduction, which was the best practices have not been established. That's it. That is why startups have such a big advantage. And the reason they're not established is because, well, the new paradigm of technology is just underexplored. We don't really know what the limits are and how to properly handle these things. And that's huge. Meanwhile, some of these incumbents, they're dealing with all sorts of limitations and resistance to change and stuff, and then just market expectations for incumbents maintaining these kind of legacy products and trying to keep them hobbling along on this old tech. In my opinion, startups, you got your reasoning engine building everything around a reasoning engine, using that reasoning engine for every aspect of your system to really open up the adaptivity of your product. Robert Caulk: And okay, I won't put elasticsearch in the incumbent world. I'll keep elasticsearch in the middle. I understand it still has a lot of value, but some of these vendor lock ins, not a huge fan of. But anyway, that's it. That's kind of all I have to say. But I'm happy to take questions or chat a bit. Demetrios: Dude, I've got so much to ask you and thank you for breaking down that stack. That is like the exact type of talk that I love to see because you open the kimono full on. And I was just playing around with asknews app. And so I think it's probably worth me sharing my screen just to show everybody what exactly that is and how that looks at the moment. So you should be able to see it now. Right? And super cool props to you for what you've built. Because I went, and intuitively I was able to say like, oh, cool, I can change, I can see positive news, and I can go by the region that I'm looking at. I want to make sure that I'm checking out all the stuff in Europe or all the stuff in America categories. Demetrios: I can look at sports, blah blah blah, like as if you were flipping the old newspaper and you could go to the sports section or the finance section, and then you cite the sources and you see like, oh, what's the trend in the coverage here? What kind of coverage are we getting? Where are we at in the coverage cycle? Probably something like that. And then, wait, although I was on the happy news, I thought murder, she wrote. So anyway, what we do is we. Robert Caulk: Actually sort it from we take the poll and we actually just sort most positive to the least positive. But you're right, we were talking the other day, we're like, let's just only show the positive. But yeah, that's a good point. Demetrios: There you go. Robert Caulk: Murder, she wrote. 
Demetrios: But the one thing that I was actually literally just yesterday talking to someone about was how you update things inside of your vector database. So I can imagine that news, as you mentioned, news cycles move very fast and the news that happened 2 hours ago is very different. The understanding of what happened in a very big news event is very different 2 hours ago than it is right now. So how do you make sure that you're always pulling the most current and up to date information? Robert Caulk: This is another logistical point that we think needs to get sorted properly and there's a few layers to it. So for us, as we're parsing that data coming in from Newscatcher, so newscatcher is doing a good job of always feeding the latest buckets to us. Sometimes one will be kind of arrive, but generally speaking, it's always the latest news. So we're taking five minute buckets, and then with those buckets, we're going through and doing all of our enrichment on that, adding it to Qdrant. And that is the point where we use that timestamp filtering, which is such an important point. So in the metadata of Qdrant, we're using the range filter, which is where we call that the timestamp filter, but it's really range filter, and that helps. So when we're going back to update things, we're sorting and ensuring that we're filtering out only what we haven't seen. Demetrios: Okay, that makes complete sense. And basically you could generalize this to something like what I was talking to with people yesterday about, which was, hey, I've got an HR policy that gets updated every other month or every quarter, and I want to make sure that if my HR chatbot is telling people what their vacation policy is, it's pulling from the most recent HR policy. So how do I make sure and do that? And how do I make sure that my vector database isn't like a landmine where it's pulling any information, but we don't necessarily have that control to be able to pull the correct information? And this comes down to that retrieval evaluation, which is such a hot topic, too. Robert Caulk: That's true. No, I think that's a key piece of the puzzle. Now, in that particular example, maybe you actually want to go in and start cleansing a bit, your database, just to make sure if it's really something you're never going to need again. You got to get rid of it. This is a piece I didn't add to the presentation, but it's tangential. You got to keep multiple databases and you got to making sure to isolate resources and cleaning out a database, especially in real time. So ensuring that your database is representative of what you want to be searching on. And you can do this with collections too, if you want. Robert Caulk: But we find there's sometimes a good opportunity to isolate resources in that sense, 100%. Demetrios: So, another question that I had for you was, I noticed Mongo was in the stack. Why did you not just use the Mongo vector option? Is it because of what you were mentioning, where it's like, yeah, you have these all-in-one options, but you sacrifice that performance for the convenience? Robert Caulk: We didn't test that, to be honest, I can't say. All I know is we tested weaviate, we tested one other, and I just really like. Although I was going to say I like that it's written in rust, although I believe Mongo is also written in rust, if I'm not mistaken. But for us, the document DB is more of a representation of state and what's happening, especially for our configurations and workflows. 
Meanwhile, we really like keeping and relying on Qdrant and all the features. Qdrant is updating, so, yeah, I'd say single responsibility principle is key to that. But I saw some chat in Qdrant discord about this, which I think the only way to use vector is actually to use their cloud offering, if I'm not mistaken. Do you know about this? Demetrios: Yeah, I think so, too. Robert Caulk: This would also be a piece that we couldn't do. Demetrios: Yeah. Where it's like it's open source, but not open source, so that makes sense. Yeah. This has been excellent, man. So I encourage anyone who is out there listening, check out again this is asknews app, and stay up to date with the most relevant news in your area and what you like. And I signed in, so I'm guessing that when I sign in, it's going to tweak my settings. Am I going to be able. Robert Caulk: Good question. Demetrios: Catch this next time. Robert Caulk: Well, at the moment, if you star a story, a narrative that you find interesting, then you can filter on the star and whatever the latest updates are, you'll get it for that particular story. Okay. It brings up another point about Qdrant, which is at the moment we're not doing it yet, but we have plans to use the recommendation system for letting a user kind of create their profile by just saying what they like, what they don't like, and then using the recommender to start recommending stories that they may or may not like. And that's us outsourcing the Qdrant almost entirely. Right. It's just us building around it. So that's nice. Demetrios: Yeah. That makes life a lot easier, especially knowing recommender systems. Yeah, that's excellent. Robert Caulk: Thanks. I appreciate that. For sure. And I'll try to make the slides available. I don't know if I can send them to the two Qdrant or something. They could post them in the discord maybe, for sure. Demetrios: And we can post them in the link in the description of this talk. So this has been excellent. Rob, I really appreciate you coming on here and chatting with me about this, and thanks for breaking down everything that you're doing. I also love the VllM project. It's blowing up. It's cool to see so much usage and all the good stuff that you're doing with it. And yeah, man, for anybody that wants to follow along on your journey, we'll drop a link to your LinkedIn so that they can connect with you and. Robert Caulk: Cool. Demetrios: Thank you. Robert Caulk: Thanks for having me. Demetrios, talk to you later. Demetrios: Catch you later, man. Take care.
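The timestamp-range filtering pattern Robert describes above can be sketched with Qdrant's Python client as follows. This is a minimal illustration only: the collection name, the `published_at` payload field (Unix seconds), and the query vector are assumptions for the example, not details of the Ask News pipeline.

```python
import time
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Restrict semantic search to articles enriched in the last 24 hours
cutoff = time.time() - 24 * 60 * 60

hits = client.query_points(
    collection_name="news_articles",    # assumed collection name
    query=[0.01, 0.45, 0.67],           # embedding of the user question (placeholder)
    query_filter=models.Filter(
        must=[
            models.FieldCondition(
                key="published_at",     # assumed payload field, Unix seconds
                range=models.Range(gte=cutoff),
            )
        ]
    ),
    limit=20,
)
```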
blog/production-scale-rag-for-real-time-news-distillation-robert-caulk-vector-space-talks.md
--- draft: false title: "Elevate Your Data With Airbyte and Qdrant Hybrid Cloud" short_description: "Leverage Airbyte and Qdrant Hybrid Cloud for best-in-class data performance." description: "Leverage Airbyte and Qdrant Hybrid Cloud for best-in-class data performance." preview_image: /blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte.png date: 2024-04-10T00:00:00Z author: Qdrant featured: false weight: 1013 tags: - Qdrant - Vector Database --- In their mission to support large-scale AI innovation, [Airbyte](https://airbyte.com/) and Qdrant are collaborating on the launch of Qdrant’s new offering - [Qdrant Hybrid Cloud](/hybrid-cloud/). This collaboration allows users to leverage the synergistic capabilities of both Airbyte and Qdrant within a private infrastructure. Qdrant’s new offering represents the first managed vector database that can be deployed in any environment. Businesses optimizing their data infrastructure with Airbyte are now able to host a vector database either on premise, or on a public cloud of their choice - while still reaping the benefits of a managed database product. This is a major step forward in offering enterprise customers incredible synergy for maximizing the potential of their AI data. Qdrant's new Kubernetes-native design, coupled with Airbyte’s powerful data ingestion pipelines meet the needs of developers who are both prototyping and building production-level apps. Airbyte simplifies the process of data integration by providing a platform that connects to various sources and destinations effortlessly. Moreover, Qdrant Hybrid Cloud leverages advanced indexing and search capabilities to empower users to explore and analyze their data efficiently. In a major benefit to Generative AI, businesses can leverage Airbyte's data replication capabilities to ensure that their data in Qdrant Hybrid Cloud is always up to date. This empowers all users of Retrieval Augmented Generation (RAG) applications with effective analysis and decision-making potential, all based on the latest information. Furthermore, by combining Airbyte's platform and Qdrant's hybrid cloud infrastructure, users can optimize their data operations while keeping costs under control via flexible pricing models tailored to individual usage requirements. > *“The new Qdrant Hybrid Cloud is an exciting addition that offers peace of mind and flexibility, aligning perfectly with the needs of Airbyte Enterprise users who value the same balance. Being open-source at our core, both Qdrant and Airbyte prioritize giving users the flexibility to build and test locally—a significant advantage for data engineers and AI practitioners. We're enthusiastic about the Hybrid Cloud launch, as it mirrors our vision of enabling users to confidently transition from local development and local deployments to a managed solution, with both cloud and hybrid cloud deployment options.”* AJ Steers, Staff Engineer for AI, Airbyte #### Optimizing Your GenAI Data Stack With Airbyte and Qdrant Hybrid Cloud By integrating Airbyte with Qdrant Hybrid Cloud, you can achieve seamless data ingestion from diverse sources into Qdrant's powerful indexing system. This integration enables you to derive valuable insights from your data. Here are some key advantages: **Effortless Data Integration:** Airbyte's intuitive interface lets you set up data pipelines that extract, transform, and load (ETL) data from various sources into Qdrant. 
Additionally, Qdrant Hybrid Cloud’s Kubernetes-native architecture means that the destination vector database can now be deployed in a few clicks to any environment. With such flexibility, you can supply even the most advanced RAG applications with optimal data pipelines. **Scalability and Performance:** With Airbyte and Qdrant Hybrid Cloud, you can scale your data infrastructure according to your needs. Whether you're dealing with terabytes or petabytes of data, this combination ensures optimal performance and scalability. This is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. **Powerful Indexing and Search:** Qdrant Hybrid Cloud’s architecture combines the scalability of cloud infrastructure with the performance of on-premises indexing. Qdrant's advanced algorithms enable lightning-fast search and retrieval of data, even across large datasets. **Open-Source Compatibility:** Airbyte and Qdrant pride themselves on maintaining a reliable and mature integration that brings peace of mind to those prototyping and deploying large-scale AI solutions. Extensive open-source documentation and code samples help users of all skill levels in leveraging highly advanced features of data ingestion and vector search. #### Build a Modern GenAI Application With Qdrant Hybrid Cloud and Airbyte ![hybrid-cloud-airbyte-tutorial](/blog/hybrid-cloud-airbyte/hybrid-cloud-airbyte-tutorial.png) We put together an end-to-end tutorial to show you how to build a GenAI application with Qdrant Hybrid Cloud and Airbyte’s advanced data pipelines. #### Tutorial: Build a RAG System to Answer Customer Support Queries Learn how to set up a private AI service that addresses customer support issues with high accuracy and effectiveness. By leveraging Airbyte’s data pipelines with Qdrant Hybrid Cloud, you will create a customer support system that is always synchronized with up-to-date knowledge. [Try the Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-airbyte.md
---
draft: false
title: Qdrant Summer of Code 24
slug: qdrant-summer-of-code-24
short_description: Introducing Qdrant Summer of Code 2024 program.
description: "Introducing Qdrant Summer of Code 2024 program. GSoC alternative."
preview_image: /blog/Qdrant-summer-of-code.png
date: 2024-02-21T00:39:53.751Z
author: Andre Zayarni
featured: false
tags:
  - Open Source
  - Vector Database
  - Summer of Code
  - GSoC24
---

Google Summer of Code (#GSoC) is celebrating its 20th anniversary this year with the 2024 program. Over the past 20 years, 19K new contributors were introduced to #opensource through the program, under the guidance of thousands of mentors from over 800 open-source organizations in various fields.

Qdrant participated successfully in the program last year. Both projects, the UI Dashboard with unstructured data visualization and the advanced Geo Filtering, were completed on time and are now part of the engine. One of the two young contributors joined the team and continues working on the project.

We are thrilled to announce that Qdrant was 𝐍𝐎𝐓 𝐚𝐜𝐜𝐞𝐩𝐭𝐞𝐝 into the GSoC 2024 program for unknown reasons, but instead, we are introducing our own 𝐐𝐝𝐫𝐚𝐧𝐭 𝐒𝐮𝐦𝐦𝐞𝐫 𝐨𝐟 𝐂𝐨𝐝𝐞 program with a stipend for contributors! To avoid reinventing the wheel, we follow all the timelines and rules of the official Google program.

## Our project ideas

We have prepared some excellent project ideas. Take a look and choose whether you want to contribute to a Rust or a Python-based project.

➡ *WASM-based dimension reduction visualization* 📊

Implement a dimension reduction algorithm in Rust, compile it to WASM, and integrate the WASM code with the Qdrant Web UI.

➡ *Efficient BM25 and Okapi BM25 using the BERT tokenizer* 🥇

BM25 and Okapi BM25 are popular ranking algorithms. Qdrant's FastEmbed supports dense embedding models. We need a fast, efficient, and massively parallel Rust implementation with Python bindings for these.

➡ *ONNX cross-encoders in Python* ⚔️

Export cross-encoder ranking models to run on the ONNX runtime and integrate them with Qdrant's FastEmbed to support efficient re-ranking.

➡ *Ranking fusion algorithms implemented in Rust* 🧪

Develop Rust implementations of various ranking fusion algorithms, including but not limited to Reciprocal Rank Fusion (RRF). For a complete list, see: https://github.com/AmenRa/ranx. Create Python bindings for the implemented Rust modules.

➡ *Set up Jepsen to test Qdrant's distributed guarantees* 💣

Design and write Jepsen tests based on implementations for other databases, and create a report or blog post with the findings.

See all the details on our Notion page: https://www.notion.so/qdrant/GSoC-2024-ideas-1dfcc01070094d87bce104623c4c1110

The contributor application period begins on March 18th. We will accept applications via email.

Let's contribute and celebrate together! In open-source, we trust! 🦀🤘🚀
blog/gsoc24-summer-of-code.md
---
title: "Navigating challenges and innovations in search technologies"
draft: false
slug: navigating-challenges-innovations
short_description: Podcast on search and LLM with Datatalk.club
description: Podcast on search and LLM with Datatalk.club
preview_image: /blog/navigating-challenges-innovations/preview/preview.png
date: 2024-01-12T15:39:53.751Z
author: Atita Arora
featured: false
tags:
  - podcast
  - search
  - blog
  - retrieval-augmented generation
  - large language models
---

## Navigating challenges and innovations in search technologies

We participated in a [podcast](#podcast-discussion-recap) on search technologies, with a focus on retrieval-augmented generation (RAG) in language models.

RAG is a cutting-edge approach in natural language processing (NLP). It combines information retrieval with language generation models. We describe how it can enhance what AI can do to understand, retrieve, and generate human-like text.

### More about RAG

Think of RAG as a system that finds relevant knowledge from a vast database. It takes your query, finds the best available information, and then provides an answer.

RAG is the next step in NLP. It goes beyond the limits of traditional generation models by integrating retrieval mechanisms. With RAG, NLP can access external knowledge sources, databases, and documents. This ensures more accurate, contextually relevant, and informative output.

With RAG, we can set up more precise language generation as well as better context understanding. RAG helps us incorporate real-world knowledge into AI-generated text. This can improve overall performance in tasks such as:

- Answering questions
- Creating summaries
- Setting up conversations

### The importance of evaluation for RAG and LLM

Evaluation is crucial for any application leveraging LLMs. It promotes confidence in the quality of the application. It also supports implementation of feedback and improvement loops.

### Unique challenges of evaluating RAG and LLM-based applications

*Retrieval* is the key to retrieval-augmented generation, as it affects the quality of the generated response. Potential problems include:

- Setting up a defined or expected set of documents, which can be a significant challenge.
- Measuring *subjectiveness*, which relates to how well the data fits or applies to a given domain or use case.

### Podcast Discussion Recap

In the podcast, we addressed the following:

- **Model evaluation (LLM)** - Understanding the model at the domain level for the given use case, including support for the required context length and terminology/concept understanding.
- **Ingestion pipeline evaluation** - Evaluating factors related to data ingestion and processing, such as chunking strategies, chunk size, chunk overlap, and more.
- **Retrieval evaluation** - Understanding metrics such as average precision, [Discounted cumulative gain](https://en.wikipedia.org/wiki/Discounted_cumulative_gain) (DCG), and normalized DCG (a short DCG/NDCG sketch appears later in this post).
- **Generation evaluation (E2E)** - Establishing guardrails, evaluating prompts, and evaluating the number of chunks needed to set up the context for generation.

### The recording

Thanks to the [DataTalks.Club](https://datatalks.club) for organizing [this podcast](https://www.youtube.com/watch?v=_fbe1QyJ1PY).

### Event Alert

If you're interested in a similar discussion, watch for the recording from the [following event](https://www.eventbrite.co.uk/e/the-evolution-of-genai-exploring-practical-applications-tickets-778359172237?aff=oddtdtcreator), organized by [DeepRec.ai](https://deeprec.ai).
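To make the retrieval metrics mentioned above concrete, here is a small, self-contained Python sketch of DCG and NDCG. The relevance grades are invented for illustration; they are not from the podcast.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of graded relevances."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal else 0.0

# Retrieved chunks judged on a 0-3 relevance scale, in ranked order
print(round(ndcg([3, 2, 0, 1]), 3))  # 0.985
```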
### Further reading

- [Qdrant Blog](/blog/)
blog/datatalk-club-podcast-plug.md
--- title: "Qdrant 1.11 - The Vector Stronghold: Optimizing Data Structures for Scale and Efficiency" draft: false short_description: "On-Disk Payload Index. UUID Payload Support. Tenant Defragmentation." description: "Enhanced payload flexibility with on-disk indexing, UUID support, and tenant-based defragmentation." preview_image: /blog/qdrant-1.11.x/social_preview.png social_preview_image: /blog/qdrant-1.11.x/social_preview.png date: 2024-08-12T00:00:00-08:00 author: David Myriel featured: true tags: - vector search - on-disk payload index - tenant defragmentation - group-by search - random sampling --- [Qdrant 1.11.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.11.0) This release largely focuses on features that improve memory usage and optimize segments. However, there are a few cool minor features, so let's look at the whole list: Optimized Data Structures:</br> **Defragmentation:** Storage for multitenant workloads is more optimized and scales better.</br> **On-Disk Payload Index:** Store less frequently used data on disk, rather than in RAM.</br> **UUID for Payload Index:** Additional data types for payload can result in big memory savings. Improved Query API:</br> **GroupBy Endpoint:** Use this query method to group results by a certain payload field.</br> **Random Sampling:** Select a subset of data points from a larger dataset randomly.</br> **Hybrid Search Fusion:** We are adding the Distribution-Based Score Fusion (DBSF) method.</br> New Web UI Tools:</br> **Search Quality Tool:** Test the precision of your semantic search requests in real-time.</br> **Graph Exploration Tool:** Visualize vector search in context-based exploratory scenarios.</br> ### Quick Recap: Multitenant Workloads Before we dive into the specifics of our optimizations, let's first go over Multitenancy. This is one of our most significant features, [best used for scaling and data isolation](https://qdrant.tech/articles/multitenancy/). If you’re using Qdrant to manage data for multiple users, regions, or workspaces (tenants), we suggest setting up a [multitenant environment](/documentation/guides/multiple-partitions/). This approach keeps all tenant data in a single global collection, with points separated and isolated by their payload. To avoid slow and unnecessary indexing, it’s better to create an index for each relevant payload rather than indexing the entire collection globally. Since some data is indexed more frequently, you can focus on building indexes for specific regions, workspaces, or users. *For more details on scaling best practices, read [How to Implement Multitenancy and Custom Sharding](https://qdrant.tech/articles/multitenancy/).* ### Defragmentation of Tenant Storage With version 1.11, Qdrant changes how vectors from the same tenant are stored on disk, placing them **closer together** for faster bulk reading and reduced scaling costs. This approach optimizes storage and retrieval operations for different tenants, leading to more efficient system performance and resource utilization. **Figure 1:** Re-ordering by payload can significantly speed up access to hot and cold data. ![defragmentation](/blog/qdrant-1.11.x/defragmentation.png) **Example:** When creating an index, you may set `is_tenant=true`. This configuration will optimize the storage based on your collection’s usage patterns. 
```http PUT /collections/{collection_name}/index { "field_name": "group_id", "field_schema": { "type": "keyword", "is_tenant": true } } ``` ```python client.create_payload_index( collection_name="{collection_name}", field_name="group_id", field_schema=models.KeywordIndexParams( type="keyword", is_tenant=True, ), ) ``` ```typescript client.createPayloadIndex("{collection_name}", { field_name: "group_id", field_schema: { type: "keyword", is_tenant: true, }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url("http://localhost:6334").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( "{collection_name}", "group_id", FieldType::Keyword, ).field_index_params( KeywordIndexParamsBuilder::default() .is_tenant(true) ) ).await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "group_id", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setIsTenant(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync( collectionName: "{collection_name}", fieldName: "group_id", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { IsTenant = true } } ); ``` As a result, the storage structure will be organized in a way to co-locate vectors of the same tenant together at the next optimization. *To learn more about defragmentation, read the [Multitenancy documentation](/documentation/guides/multiple-partitions/).* ### On-Disk Support for the Payload Index When managing billions of records across millions of tenants, keeping all data in RAM is inefficient. That is especially true when only a small subset is frequently accessed. As of 1.11, you can offload "cold" data to disk and cache the “hot” data in RAM. *This feature can help you manage a high number of different payload indexes, which is beneficial if you are working with large varied datasets.* **Figure 2:** By moving the data from Workspace 2 to disk, the system can free up valuable memory resources for Workspaces 1, 3 and 4, which are accessed more frequently. ![on-disk-payload](/blog/qdrant-1.11.x/on-disk-payload.png) **Example:** As you create an index for Workspace 2, set the `on_disk` parameter. 
```http PUT /collections/{collection_name}/index { "field_name": "group_id", "field_schema": { "type": "keyword", "is_tenant": true, "on_disk": true } } ``` ```python client.create_payload_index( collection_name="{collection_name}", field_name="group_id", field_schema=models.KeywordIndexParams( type="keyword", is_tenant=True, on_disk=True, ), ) ``` ```typescript client.createPayloadIndex("{collection_name}", { field_name: "group_id", field_schema: { type: "keyword", is_tenant: true, on_disk: true }, }); ``` ```rust use qdrant_client::qdrant::{ CreateFieldIndexCollectionBuilder, KeywordIndexParamsBuilder, FieldType }; use qdrant_client::{Qdrant, QdrantError}; let client = Qdrant::from_url("http://localhost:6334").build()?; client.create_field_index( CreateFieldIndexCollectionBuilder::new( "{collection_name}", "group_id", FieldType::Keyword, ) .field_index_params( KeywordIndexParamsBuilder::default() .is_tenant(true) .on_disk(true), ), ); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.PayloadIndexParams; import io.qdrant.client.grpc.Collections.PayloadSchemaType; import io.qdrant.client.grpc.Collections.KeywordIndexParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createPayloadIndexAsync( "{collection_name}", "group_id", PayloadSchemaType.Keyword, PayloadIndexParams.newBuilder() .setKeywordIndexParams( KeywordIndexParams.newBuilder() .setIsTenant(true) .setOnDisk(true) .build()) .build(), null, null, null) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreatePayloadIndexAsync( collectionName: "{collection_name}", fieldName: "group_id", schemaType: PayloadSchemaType.Keyword, indexParams: new PayloadIndexParams { KeywordIndexParams = new KeywordIndexParams { IsTenant = true, OnDisk = true } } ); ``` By moving the index to disk, Qdrant can handle larger datasets that exceed the capacity of RAM, making the system more scalable and capable of storing more data without being constrained by memory limitations. *To learn more about this, read the [Indexing documentation](/documentation/concepts/indexing/).* ### UUID Datatype for the Payload Index Many Qdrant users rely on UUIDs in their payloads, but storing these as strings comes with a substantial memory overhead—approximately 36 bytes per UUID. In reality, UUIDs only require 16 bytes of storage when stored as raw bytes. To address this inefficiency, we’ve developed a new index type tailored specifically for UUIDs that stores them internally as bytes, **reducing memory usage by up to 2.25x.** **Example:** When adding two separate points, indicate their UUID in the payload. In this example, both data points belong to the same user (with the same UUID). ```http PUT /collections/{collection_name}/points { "points": [ { "id": 1, "vector": [0.05, 0.61, 0.76, 0.74], "payload": {"id": 550e8400-e29b-41d4-a716-446655440000} }, { "id": 2, "vector": [0.19, 0.81, 0.75, 0.11], "payload": {"id": 550e8400-e29b-41d4-a716-446655440000} }, ] } ``` > For organizations that have numerous users and UUIDs, this simple fix can significantly reduce the cluster size and improve efficiency. 
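The new index type is enabled when you create the payload index for the UUID field. A minimal Python sketch, assuming a 1.11+ client that exposes the new `uuid` schema type:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Index the "id" payload field with the dedicated UUID type so values are
# stored as 16 raw bytes instead of ~36-byte strings.
client.create_payload_index(
    collection_name="{collection_name}",
    field_name="id",
    field_schema=models.PayloadSchemaType.UUID,
)
```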
*To learn more about this, read the [Payload documentation](/documentation/concepts/payload/).* ### Query API: Groups Endpoint When searching over data, you can group results by specific payload field, which is useful when you have multiple data points for the same item and you want to avoid redundant entries in the results. **Example:** If a large document is divided into several chunks, and you need to search or make recommendations on a per-document basis, you can group the results by the `document_id`. ```http POST /collections/{collection_name}/points/query/groups { "query": [0.01, 0.45, 0.67], group_by="document_id", # Path of the field to group by limit=4, # Max amount of groups group_size=2, # Max amount of points per group } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.query_points_groups( collection_name="{collection_name}", query=[0.01, 0.45, 0.67], group_by="document_id", limit=4, group_size=2, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.queryGroups("{collection_name}", { query: [0.01, 0.45, 0.67], group_by: "document_id", limit: 4, group_size: 2, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Query, QueryPointsBuilder}; let client = Qdrant::from_url("http://localhost:6334").build()?; client.query_groups( QueryPointGroupsBuilder::new("{collection_name}", "document_id") .query(Query::from(vec![0.01, 0.45, 0.67])) .limit(4u64) .group_size(2u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.QueryPointGroups; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .queryGroupsAsync( QueryPointGroups.newBuilder() .setCollectionName("{collection_name}") .setGroupBy("document_id") .setQuery(nearest(0.01f, 0.45f, 0.67f)) .setLimit(4) .setGroupSize(2) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.QueryGroupsAsync( collectionName: "{collection_name}", groupBy: "document_id", query: new float[] { 0.01f, 0.45f, 0.67f }, limit: 4, groupSize: 2 ); ``` This endpoint will retrieve the best N points for each document, assuming that the payload of the points contains the document ID. Sometimes, the best N points cannot be fulfilled due to lack of points or a big distance with respect to the query. In every case, the `group_size` is a best-effort parameter, similar to the limit parameter. *For more information on grouping capabilities refer to our [Hybrid Queries documentation](/documentation/concepts/hybrid-queries/).* ### Query API: Random Sampling Our [Food Discovery Demo](https://food-discovery.qdrant.tech) always shows a random sample of foods from the larger dataset. Now you can do the same and set the randomization from a basic Query API endpoint. When calling the Query API, you will be able to select a subset of data points from a larger dataset randomly. *This technique is often used to reduce the computational load, improve query response times, or provide a representative sample of the data for various analytical purposes.* **Example:** When querying the collection, you can configure it to retrieve a random sample of data. 
```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") # Random sampling (as of 1.11.0) sampled = client.query_points( collection_name="{collection_name}", query=models.SampleQuery(sample=models.Sample.Random) ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); let sampled = client.query("{collection_name}", { query: { sample: "random" }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Query, QueryPointsBuilder, Sample}; let client = Qdrant::from_url("http://localhost:6334").build()?; let sampled = client .query( QueryPointsBuilder::new("{collection_name}").query(Query::new_sample(Sample::Random)), ) .await?; ``` ```java import static io.qdrant.client.QueryFactory.sample; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Sample; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName("{collection_name}") .setQuery(sample(Sample.Random)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.QueryAsync( collectionName: "{collection_name}", query: Sample.Random ); ``` *To learn more, check out the [Query API documentation](/documentation/concepts/hybrid-queries/).* ### Query API: Distribution-Based Score Fusion In version 1.10, we added Reciprocal Rank Fusion (RRF) as a way of fusing results from Hybrid Queries. Now we are adding Distribution-Based Score Fusion (DBSF). Michelangiolo Mazzeschi talks more about this fusion method in his latest [Medium article](https://medium.com/plain-simple-software/distribution-based-score-fusion-dbsf-a-new-approach-to-vector-search-ranking-f87c37488b18). 
*DBSF normalizes the scores of the points in each query, using the mean +/- the 3rd standard deviation as limits, and then sums the scores of the same point across different queries.* **Example:** To fuse `prefetch` results from sparse and dense queries, set `"fusion": "dbsf"` ```http POST /collections/{collection_name}/points/query { "prefetch": [ { "query": { "indices": [1, 42], // <┐ "values": [0.22, 0.8] // <┴─Sparse vector }, "using": "sparse", "limit": 20 }, { "query": [0.01, 0.45, 0.67, ...], // <-- Dense vector "using": "dense", "limit": 20 } ], "query": { "fusion": “dbsf" }, // <--- Distribution Based Score Fusion "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.query_points( collection_name="{collection_name}", prefetch=[ models.Prefetch( query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]), using="sparse", limit=20, ), models.Prefetch( query=[0.01, 0.45, 0.67, ...], # <-- dense vector using="dense", limit=20, ), ], query=models.FusionQuery(fusion=models.Fusion.DBSF), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.query("{collection_name}", { prefetch: [ { query: { values: [0.22, 0.8], indices: [1, 42], }, using: 'sparse', limit: 20, }, { query: [0.01, 0.45, 0.67], using: 'dense', limit: 20, }, ], query: { fusion: 'dbsf', }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url("http://localhost:6334").build()?; client.query( QueryPointsBuilder::new("{collection_name}") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice())) .using("sparse") .limit(20u64) ) .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using("dense") .limit(20u64) ) .query(Query::new_fusion(Fusion::Dbsf)) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import java.util.List; import static io.qdrant.client.QueryFactory.fusion; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Fusion; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName("{collection_name}") .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42))) .setUsing("sparse") .setLimit(20) .build()) .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.01f, 0.45f, 0.67f))) .setUsing("dense") .setLimit(20) .build()) .setQuery(fusion(Fusion.DBSF)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.QueryAsync( collectionName: "{collection_name}", prefetch: new List < PrefetchQuery > { new() { Query = new(float, uint)[] { (0.22f, 1), (0.8f, 42), }, Using = "sparse", Limit = 20 }, new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Using = "dense", Limit = 20 } }, query: Fusion.Dbsf ); ``` Note that `dbsf` is stateless and calculates the normalization limits only based on the results of each query, not on all the scores that it has seen. 
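For intuition, the normalization DBSF applies can be illustrated with a few lines of Python. This is a toy, client-side restatement of the description above (mean +/- 3 standard deviations as the limits, then summing per point); Qdrant performs the actual fusion server-side when you pass `fusion: dbsf`, and its implementation details may differ.

```python
import statistics
from collections import defaultdict

def dbsf_fuse(result_lists):
    """Toy illustration: normalize each result list with mean +/- 3 standard
    deviations as the limits, then sum the normalized scores per point."""
    fused = defaultdict(float)
    for results in result_lists:                     # e.g. [sparse_hits, dense_hits]
        scores = [score for _, score in results]
        mean, std = statistics.fmean(scores), statistics.pstdev(scores) or 1e-9
        lower, upper = mean - 3 * std, mean + 3 * std
        for point_id, score in results:
            fused[point_id] += (score - lower) / (upper - lower)
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

# Fuse a sparse and a dense result list for the same query
sparse_hits = [("a", 12.0), ("b", 7.5), ("c", 0.3)]
dense_hits = [("a", 0.91), ("d", 0.88), ("b", 0.40)]
print(dbsf_fuse([sparse_hits, dense_hits]))
```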
*To learn more, check out the [Hybrid Queries documentation](/documentation/concepts/hybrid-queries/).* ## Web UI: Search Quality Tool We have updated the Qdrant Web UI with additional testing functionality. Now you can check the quality of your search requests in real time and measure it against exact search. **Try it:** In the Dashboard, go to collection settings and test the **Precision** from the Search Quality menu tab. > The feature will conduct a semantic search for each point and produce a report below. <iframe width="560" height="315" src="https://www.youtube.com/embed/PJHzeVay_nQ?si=u-6lqCVECd-A319M" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> ## Web UI: Graph Exploration Tool Deeper exploration is highly dependent on expanding context. This is something we previously covered in the [Discovery Needs Context](/articles/discovery-search/) article earlier this year. Now, we have developed a UI feature to help you visualize how semantic search can be used for exploratory and recommendation purposes. **Try it:** Using the feature is pretty self-explanatory. Each collection's dataset can be explored from the **Graph** tab. As you see the images change, you can steer your search in the direction of specific characteristics that interest you. > Search results will become more "distilled" and tailored to your preferences. <iframe width="560" height="315" src="https://www.youtube.com/embed/PXH4WPYUP7E?si=nFqLBIcxo-km9i4V" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> ## Next Steps If you’re new to Qdrant, now is the perfect time to start. Check out our [documentation](/documentation/) guides and see why Qdrant is the go-to solution for vector search. We’re very happy to bring you this latest version of Qdrant, and we can’t wait to see what you build with it. As always, your feedback is invaluable—feel free to reach out with any questions or comments on our [community forum](https://qdrant.to/discord).
blog/qdrant-1.11.x.md
--- draft: true title: v0.8.0 update of the Qdrant engine was released slug: qdrant-0-8-0-released short_description: "The new version of our engine - v0.8.0, went live. " description: "The new version of our engine - v0.8.0, went live. " preview_image: /blog/from_cms/v0.8.0.jpg date: 2022-06-09T10:03:29.376Z author: Alyona Kavyerina author_link: https://www.linkedin.com/in/alyona-kavyerina/ categories: - News - Release update tags: - Corporate news - Release sitemapExclude: True --- The new version of our engine, v0.8.0, is now live. Let's go through its new features: * On-disk payload storage allows you to store more data with less RAM usage. * Distributed deployment support is available, and we continue to improve it, so stay tuned for new updates. * Payloads can now be indexed in place, without rebuilding the segment. * Advanced filtering support now includes filtering by similarity score. The release also brings a faster payload index, better error reporting, HNSW speed improvements, and more. Check out the [change log](https://github.com/qdrant/qdrant/releases/tag/v0.8.0) for more details.
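For readers who want to try the on-disk payload storage and payload indexing mentioned above, here is a minimal sketch using today's Python client. The v0.8.0-era API looked different, and the collection and field names below are purely illustrative:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Store payloads on disk instead of keeping them in RAM (illustrative collection)
client.create_collection(
    collection_name="articles",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    on_disk_payload=True,
)

# Add an index on a payload field without rebuilding the segment
client.create_payload_index(
    collection_name="articles",
    field_name="category",
    field_schema=models.PayloadSchemaType.KEYWORD,
)
```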
blog/v0-8-0-update-of-the-qdrant-engine-was-released.md
--- draft: false title: "Developing Advanced RAG Systems with Qdrant Hybrid Cloud and LangChain " short_description: "Empowering engineers and scientists globally to easily and securely develop and scale their GenAI applications." description: "Empowering engineers and scientists globally to easily and securely develop and scale their GenAI applications." preview_image: /blog/hybrid-cloud-langchain/hybrid-cloud-langchain.png date: 2024-04-14T00:04:00Z author: Qdrant featured: false weight: 1007 tags: - Qdrant - Vector Database --- [LangChain](https://www.langchain.com/) and Qdrant are collaborating on the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), which is designed to empower engineers and scientists globally to easily and securely develop and scale their GenAI applications. Harnessing LangChain’s robust framework, users can unlock the full potential of vector search, enabling the creation of stable and effective AI products. Qdrant Hybrid Cloud extends the same powerful functionality of Qdrant onto a Kubernetes-based architecture, enhancing LangChain’s capability to cater to users across any environment. Qdrant Hybrid Cloud provides users with the flexibility to deploy their vector database in a preferred environment. Through container-based scalable deployments, companies can leverage cutting-edge frameworks like LangChain while maintaining compatibility with their existing hosting architecture for data sources, embedded models, and LLMs. This potent combination empowers organizations to develop robust and secure applications capable of text-based search, complex question-answering, recommendations and analysis. Despite LLMs being trained on vast amounts of data, they often lack user-specific or private knowledge. LangChain helps developers build context-aware reasoning applications, addressing this challenge. Qdrant’s vector database sifts through semantically relevant information, enhancing the performance gains derived from LangChain’s data connection features. With LangChain, users gain access to state-of-the-art functionalities for querying, chatting, sorting, and parsing data. Through the seamless integration of Qdrant Hybrid Cloud and LangChain, developers can effortlessly vectorize their data and conduct highly accurate semantic searches—all within their preferred environment. > *“The AI industry is rapidly maturing, and more companies are moving their applications into production. We're really excited at LangChain about supporting enterprises' unique data architectures and tooling needs through integrations and first-party offerings through LangSmith. First-party enterprise integrations like Qdrant's greatly contribute to the LangChain ecosystem with enterprise-ready retrieval features that seamlessly integrate with LangSmith's observability, production monitoring, and automation features, and we're really excited to develop our partnership further.”* -Erick Friis, Founding Engineer at LangChain #### Discover Advanced Integration Options with Qdrant Hybrid Cloud and LangChain Building apps with Qdrant Hybrid Cloud and LangChain comes with several key advantages: **Seamless Deployment:** With Qdrant Hybrid Cloud's Kubernetes-native architecture, deploying Qdrant is as simple as a few clicks, allowing you to choose your preferred environment. Coupled with LangChain's flexibility, users can effortlessly create advanced RAG solutions anywhere with minimal effort. 
**Open-Source Compatibility:** LangChain and Qdrant support a dependable and mature integration, providing peace of mind to those developing and deploying large-scale AI solutions. With comprehensive documentation, code samples, and tutorials, users of all skill levels can harness the advanced features of data ingestion and vector search to their fullest potential. **Advanced RAG Performance:** By infusing LLMs with relevant context, Qdrant offers superior results for RAG use cases. Integrating vector search yields improved retrieval accuracy, faster query speeds, and reduced computational overhead. LangChain streamlines the entire process, offering speed, scalability, and efficiency, particularly beneficial for enterprise-scale deployments dealing with vast datasets. Furthermore, [LangSmith](https://www.langchain.com/langsmith) provides one-line instrumentation for debugging, observability, and ongoing performance testing of LLM applications. #### Start Building With LangChain and Qdrant Hybrid Cloud: Develop a RAG-Based Employee Onboarding System To get you started, we’ve put together a tutorial that shows how to create next-gen AI applications with Qdrant Hybrid Cloud using the LangChain framework and Cohere embeddings. ![hybrid-cloud-langchain-tutorial](/blog/hybrid-cloud-langchain/hybrid-cloud-langchain-tutorial.png) #### Tutorial: Build a RAG System for Employee Onboarding We created a comprehensive tutorial to show how you can build a RAG-based system with Qdrant Hybrid Cloud, LangChain and Cohere’s embeddings. This use case is focused on building a question-answering system for internal corporate employee onboarding. [Try the Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
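As a quick, hedged illustration of the integration described above, the sketch below indexes a couple of documents into Qdrant through LangChain and runs a similarity search. It assumes the `langchain-qdrant` package and an embeddings class of your choice; the `FastEmbedEmbeddings` wrapper, URL, and collection name shown here are examples, not part of the tutorial:

```python
from langchain_core.documents import Document
from langchain_qdrant import QdrantVectorStore
from langchain_community.embeddings import FastEmbedEmbeddings  # any embeddings class works

docs = [
    Document(page_content="Qdrant Hybrid Cloud runs in your own Kubernetes cluster."),
    Document(page_content="LangChain provides building blocks for RAG applications."),
]

# Embed the documents and store them in a Qdrant collection
vector_store = QdrantVectorStore.from_documents(
    docs,
    embedding=FastEmbedEmbeddings(),
    url="http://localhost:6333",        # your Qdrant Hybrid Cloud endpoint
    collection_name="onboarding-docs",  # illustrative name
)

# Retrieve the most relevant documents for a question
hits = vector_store.similarity_search("Where does Qdrant Hybrid Cloud run?", k=2)
for hit in hits:
    print(hit.page_content)
```

In a full RAG pipeline, the retrieved documents would then be passed to an LLM as context; the tutorial linked above walks through that end to end.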
blog/hybrid-cloud-langchain.md
--- draft: false title: Building LLM Powered Applications in Production - Hamza Farooq | Vector Space Talks slug: llm-complex-search-copilot short_description: Hamza Farooq discusses the future of LLMs, complex search, and copilots. description: Hamza Farooq presents the future of large language models, complex search, and copilot, discussing real-world applications and the challenges of implementing these technologies in production. preview_image: /blog/from_cms/hamza-farooq-cropped.png date: 2024-01-09T12:16:22.760Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - LLM - Vector Database --- > *"There are 10 billion search queries a day, estimated half of them go unanswered. Because people don't actually use search as what we used.”*\ > -- Hamza Farooq > How do you think Hamza's background in machine learning and previous experiences at Google and Walmart Labs have influenced his approach to building LLM-powered applications? Hamza Farooq, an accomplished educator and AI enthusiast, is the founder of Traversaal.ai. His journey is marked by a relentless passion for AI exploration, particularly in building Large Language Models. As an adjunct professor at UCLA Anderson, Hamza shapes the future of AI by teaching cutting-edge technology courses. At Traversaal.ai, he empowers businesses with domain-specific AI solutions, focusing on conversational search and recommendation systems to deliver personalized experiences. With a diverse career spanning academia, industry, and entrepreneurship, Hamza brings a wealth of experience from time at Google. His overarching goal is to bridge the gap between AI innovation and real-world applications, introducing transformative solutions to the market. Hamza eagerly anticipates the dynamic challenges and opportunities in the ever-evolving field of AI and machine learning. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1oh31JA2XsqzuZhCUQVNN8?si=viPPgxiZR0agFhz1QlimSA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/0N9ozwgmEQM).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/0N9ozwgmEQM?si=4f_MaEUrberT575w" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Building-LLM-Powered-Applications-in-Production---Hamza-Farooq--Vector-Space-Talks-006-e2cuur5/a-aan8b8j" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## Top Takeaways: UX specialist? Your expertise in designing seamless user experiences for GenAI products is guaranteed to be in high demand. Let's elevate the user interface for next-gen technology! In this episode, Hamza presents the future of large language models and complex search, discussing real-world applications and the challenges of implementing these technologies in production. 5 Keys to Learning from the Episode: 1. **Complex Search** - Discover how LLMs are revolutionizing the way we interact with search engines and enhancing the search experience beyond basic queries. 2. **Conversational Search and Personalization** - Explore the potential of conversational search and personalized recommendations using open-source LLMs, bringing a whole new level of user engagement. 3. 
**Challenges and Solutions** - Uncover the downtime challenges faced by LLM services and learn the strategies deployed to mitigate these issues for seamless operation. 4. **Traversal AI's Unique Approach** - Learn how Traversal AI has created a unified platform with a myriad of applications, simplifying the integration of LLMs and domain-specific search. 5. **The Importance of User Experience (UX)** - Understand the unparalleled significance of UX professionals in shaping the future of Gen AI products, and how they play a pivotal role in enhancing user interactions with LLM-powered applications. > Fun Fact: User experience (UX) designers are anticipated to be crucial in the development of AI-powered products as they bridge the gap between user interaction and the technical aspects of the AI systems. > ## Show Notes: 00:00 Teaching GPU AI with open source products.\ 06:40 Complex search leads to conversational search implementation.\ 07:52 Generating personalized travel itineraries with ease.\ 12:02 Maxwell's talk highlights challenges in search technology.\ 16:01 Balancing preferences and trade-offs in travel.\ 17:45 Beta mode, selective, personalized database.\ 22:15 Applications needed: chatbot, knowledge retrieval, recommendation, job matching, copilot\ 23:59 Challenges for UX in developing gen AI. ## More Quotes from Hamza: *"Ux people are going to be more rare who can work on gen AI products than product managers and tech people, because for tech people, they can follow and understand code and they can watch videos, business people, they're learning GPT prompting and so on and so forth. But the UX people, there's literally no teaching guide except for a Chat GPT interface. So this user experience, they are going to be, their worth is going to be inequal in gold.”*\ -- Hamza Farooq *"Usually they don't come to us and say we need a pine cone or we need a quadrant or we need a local llama, they say, this is the problem you're trying to solve. And we are coming from a problem solving initiative from our company is that we got this. You don't have to hire three ML engineers and two NLP research scientists and three people from here for the cost of two people. We can do an entire end to end implementation. Because what we have is 80% product which is built and we can tune the 20% to what you need.”*\ -- Hamza Farooq *"Imagine you're trying to book a hotel, and you also get an article from New York Times that says, this is why this is a great, or a blogger that you follow and it sort of shows up in your. That is the strength that we have been powering, that you don't need to wait or you don't need to depend anymore on just the company's website itself. You can use the entire Internet to come up with an arsenal.”*\ -- Hamza Farooq ## Transcript: Demetrios: Yes, we are live. So what is going on? Hamza, it's great to have you here for this edition of the Vector Space Talks. Let's first start with this. Everybody that is here with us right now, great to have you. Let us know where you're dialing in from in the chat and feel free over the course of the next 20 - 25 minutes to ask any questions as they. Come up in the chat. I'll be monitoring it and maybe jumping. In in case we need to stop. Hunts at any moment. And if you or anybody you know would like to come and give a presentation on our vector space talks, we are very open to that. Reach out to me either on discord or LinkedIn or your preferred method of communication. Maybe it's carrier Pigeon. 
Whatever it may be, I am here and ready to hear your pitch about. What you want to talk about. It's always cool hearing about how people are building with Qdrant or what they. Are building in this space. So without further ado, let's jump into this with my man Hamza. Great to have you here, dude. Hamza Farooq: Thank you for having me. It's an honor. Demetrios: You say that now. Just wait. You don't know me that well. I guess that's the only thing. So let's just say this. You're doing some incredible stuff. You're the founder of Traversaal.ai. You have been building large language models in the past, and you're also a professor at UCLA. You're doing all kinds of stuff. And that is why I think it. Is my honor to have you here with us today. I know you've got all kinds of fun stuff that you want to get. Into, and it's really about building llm powered applications in production. You have some slides for us, I believe. So I'm going to kick it over. To you, let you start rocking, and in case anything comes up, I'll jump. In and stop you from going too. Far down the road. Hamza Farooq: Awesome. Thank you for that. I really like your joke of the carrier pigeon. Is it a geni carrier pigeon with multiple areas and h 100 attached to it? Demetrios: Exactly. Those are the expensive carrier pigeons. That's the premium version. I am not quite that GPU rich yet. Hamza Farooq: Absolutely. All right. I think that's a great segue. I usually tell people that I'm going to teach you all how to be a GPU poor AI gap person, and my job is to basically teach everyone, or the thesis of my organization is also, how can we build powerful solutions, LLM powered solutions by using open source products and open source llms and architectures so that we can stretch the dollar as much as possible. That's been my thesis and I have always pushed for open source because they've done some great job over there and they are coming in close to pretty much at par of what the industry standard is. But I digress. Let's start with my overall presentation. I'm here to talk about the future of search and copilots and just the overall experience which we are looking with llms. Hamza Farooq: So I know you gave a background about me. I am a founder at Traversaal.ai. Previously I was at Google and Walmart Labs. I have quite a few years of experience in machine learning. In fact, my first job in 2007 was working for SaaS and I was implementing trees for identifying fraud, for fraud detection. And I did not know that was honestly data science, but we were implementing that. I have had the experience of teaching at multiple universities and that sort of experience has really helped me do better at what I do, because when you can teach something, you actually truly understand that. All right, so why are we here? Why are we really here? I have a very strong mean game. Hamza Farooq: So we started almost a year ago, Char GPT came into our lives and almost all of a sudden we started using it. And I think in January, February, March, it was just an explosion of usage. And now we know all the different things that have been going on and we've seen peripheration of a lot of startups that have come in this space. Some of them are wrappers, some of them have done a lot, have a lot more motor. There are many, many different ways that we have been using it. I don't think we even know how many ways we can use charge GBT, but most often it's just been text generation, one form or the other. And that is what the focus has been. 
But if we look deeper, the llms that we know, they also can help us with a very important part, something which is called complex search. Hamza Farooq: And complex search is basically when we converse with a search system to actually give a much longer query of how we would talk to a human being. And that is something that has been missing for the longest time in our interfacing with any kind of search engine. Google has always been at the forefront of giving the best form of search for us all. But imagine if you were to look at any other e commerce websites other than Amazon. Imagine you go to Nike.com, you go to gap, you go to Banana Republic. What you see is that their search is really basic and this is an opportunity for a lot of companies to actually create a great search experience for the users with a multi tier engagement model. So you basically make a request. I would like to buy a Nike blue t shirt specially designed for golf with all these features which I need and at a reasonable price point. Hamza Farooq: It shows you a set of results and then from that you can actually converse more to it and say, hey, can you remove five or six or reduce this by a certain degree? That is the power of what we have at hand with complex search. And complex search is becoming quickly a great segue to why we need to implement conversational search. We would need to implement large language models in our ecosystem so that we can understand the context of what users have been asking. So I'll show you a great example of sort of know complex search that TripAdvisor has been. Last week in one of my classes at Stanford, we had head of AI from Trivia Advisor come in and he took us through an experience of a new way of planning your trips. So I'll share this example. So if you go to the website, you can use AI and you can actually select a city. So let's say I'm going to select London for that matter. Hamza Farooq: And I can say I'm going to go for a few days, I do next and I'm going to go with my partner now at the back end. This is just building up a version of complex search and I want to see attractions, great food, hidden gems. I basically just want to see almost everything. And then when I hit submit, the great thing what it does is that it sort of becomes a starting point for something that would have taken me quite a while to put it together, sort of takes all my information and generates an itinerary. Now see what's different about this. It has actual data about places where I can stay, things I can do literally day by day, and it's there for you free of cost generated within 10 seconds. This is an experience that did not exist before. You would have to build this by yourself and what you would usually do is you would go to chat. Hamza Farooq: GPT if you've started this year, you would say seven day itinerary to London and it would identify a few things over here. However, you see it has able to integrate the ability to book, the ability to actually see those restaurants all in one place. That is something that has not been done before. And this is the truest form of taking complex search and putting that into production and sort of create a great experience for the user so that they can understand what they can select. They can highlight and sort of interact with it. Going to pause here. Is there any question or I can help answer anything? Demetrios: No. Demetrios: Man, this is awesome though. I didn't even realize that this is already live, but it's 100% what a travel agent would be doing. 
And now you've got that at your fingertips. Hamza Farooq: So they have built a user experience which takes 10 seconds to build. Now, was it really happening in the back end? You have this macro task that I want to plan a vacation in Paris, I want to plan a vacation to London. And what web agents or auto agents or whatever you want to call them, they are recursively breaking down tasks into subtasks. And when you reach to an individual atomic subtask, it is able to divide it into actions which can be taken. So there's a task decomposition and a task recognition scene that is going on. And from that, for instance, Stripadvisor is able to build something of individual actions. And then it makes one interface for you where you can see everything ready to go. And that's the part that I have always been very interested in. Hamza Farooq: Whenever we go to Amazon or anything for search, we just do one tier search. We basically say, I want to buy a jeans, I want to buy a shirt, I want to buy. It's an atomic thing. Do you want to get a flight? Do you want to get an accommodation? Imagine if you could do, I would like to go to Tokyo or what kind of gear do I need? What kind of overall grade do I need to go to a glacier? And it can identify all the different subtasks that are involved in it and then eventually show you the action. Well, it's all good that it exists, but the biggest thing is that it's actually difficult to build complex search. Google can get away with it. Amazon can get away with it. But if you imagine how do we make sure that it's available to the larger masses? It's available to just about any company for that matter, if they want to build that experience at this point. Hamza Farooq: This is from a talk that was given by Maxwell a couple of months ago. There are 10 billion search queries a day, estimated half of them go unanswered. Because people don't actually use search as what we used. Because again, also because of GPT coming in and the way we have been conversing with our products, our search is getting more coherent, as we would expect it to be. We would talk to a person and it's great for finding a website for more complex questions or tasks. It often falls too short because a lot of companies, 99.99% companies, I think they are just stuck on elasticsearch because it's cheaper to run it, it's easier, it's out of the box, and a lot of companies do not want to spend the money or they don't have the people to help them build that as a product, as an SDK that is available and they can implement and starts working for them. And the biggest thing is that there are complex search is not just one query, it's multiple queries, sessions or deep, which requires deep engagement with search. And what I mean by deep engagement is imagine when you go to Google right now, you put in a search, you can give feedback on your search, but there's nothing that you can do that it can unless you start a new search all over again. Hamza Farooq: In perplexity, you can ask follow up questions, but it's also a bit of a broken experience because you can't really reduce as you would do with Jarvis in Ironman. So imagine there's a human aspect to it. And let me show you another example of a copilot system, let's say. So this is an example of a copilot which we have been working on. Demetrios: There is a question, there's actually two really good questions that came through, so I'm going to stop you before you get into this. Cool copilot Carlos was asking, what about downtime? 
When it comes to these LLM services. Hamza Farooq: I think the downtime. This is the perfect question. If you have a production level system running on Chat GPT, you're going to learn within five days that you can't run a production system on Chat GPT and you need to host it by yourself. And then you start with hugging face and then you realize hugging face can also go down. So you basically go to bedrock, or you go to an AWS or GCP and host your LLM over there. So essentially it's all fun with demos to show oh my God, it works beautifully. But consistently, if you have an SLA that 99.9% uptime, you need to deploy it in an architecture with redundancies so that it's up and running. And the eventual solution is to have dedicated support to it. Hamza Farooq: It could be through Azure open AI, I think, but I think even Azure openi tends to go down with open ais out of it's a little bit. Demetrios: Better, but it's not 100%, that is for sure. Hamza Farooq: Can I just give you an example? Recently we came across a new thing, the token speed. Also varies with the day and with the time of the day. So the token generation. And another thing that we found out that instruct, GPT. Instruct was great, amazing. But it's leaking the data. Even in a rack solution, it's leaking the data. So you have to go back to then 16k. Hamza Farooq: It's really slow. So to generate an answer can take up to three minutes. Demetrios: Yeah. So it's almost this catch 22. What do you prefer, leak data or slow speeds? There's always trade offs, folks. There's always trade offs. So Mike has another question coming through in the chat. And Carlos, thanks for that awesome question Mike is asking, though I presume you could modify the search itinerary with something like, I prefer italian restaurants when possible. And I was thinking about that when it comes to. So to add on to what Mike is saying, it's almost like every single piece of your travel or your itinerary would be prefaced with, oh, I like my flights at night, or I like to sit in the aisle row, and I don't want to pay over x amount, but I'm cool if we go anytime in December, et cetera, et cetera. Demetrios: And then once you get there, I like to go into hotels that are around this part of this city. I think you get what I'm going at, but the preference list for each of these can just get really detailed. And you can preference all of these different searches with what you were talking about. Hamza Farooq: Absolutely. So I think that's a great point. And I will tell you about a company that we have been closely working with. It's called Tripsby or Tripspy AI, and we actually help build them the ecosystem where you can have personalized recommendations with private discovery. It's pretty much everything that you just said. I prefer at this time, I prefer this. I prefer this. And it sort of takes audio and text, and you can converse it through WhatsApp, you can converse it through different ways. Hamza Farooq: They are still in the beta mode, and they go selectively, but literally, they have built this, they have taken a lot more personalization into play, and because the database is all the same, it's Ahmedius who gives out, if I'm pronouncing correct, they give out the database for hotels or restaurants or availability, and then you can build things on top of it. So they have gone ahead and built something, but with more user expectation. 
Imagine you're trying to book a hotel, and you also get an article from New York Times that says, this is why this is a great, or a blogger that you follow and it sort of shows up in your. That is the strength that we have been powering, that you don't need to wait or you don't need to depend anymore on just the company's website itself. You can use the entire Internet to come up with an arsenal. Demetrios: Yeah. Demetrios: And your ability. I think another example of this would be how I love to watch TikTok videos and some of the stuff that pops up on my TikTok feed is like Amazon finds you need to know about, and it's talking about different cool things you can buy on Amazon. If Amazon knew that I was liking that on TikTok, it would probably show it to me next time I'm on Amazon. Hamza Farooq: Yeah, I mean, that's what cookies are, right? Yeah. It's a conspiracy theory that you're talking about a product and it shows up on. Demetrios: Exactly. Well, so, okay. This website that you're showing is absolutely incredible. Carlos had a follow up question before we jump into the next piece, which is around the quality of these open source models and how you deal with that, because it does seem that OpenAI, the GPT-3 four, is still quite a. Hamza Farooq: Bit ahead these days, and that's the silver bullet you have to buy. So what we suggest is have open llms as a backup. So at a point in time, I know it will be subpar, but something subpar might be a little better than breakdown of your complete system. And that's what we have been employed, we have deployed. What we've done is that when we're building large scale products, we basically tend to put an ecosystem behind or a backup behind, which is like, if the token rate is not what we want, if it's not working, it's taking too long, we automatically switch to a redundant version, which is open source. It does perform. Like, for instance, even right now, perplexity is running a lot of things on open source llms now instead of just GPT wrappers. Demetrios: Yeah. Gives you more control. So I didn't want to derail this too much more. I know we're kind of running low on time, so feel free to jump back into it and talk fast. Demetrios: Yeah. Hamza Farooq: So can you give me a time check? How are we doing? Demetrios: Yeah, we've got about six to eight minutes left. Hamza Farooq: Okay, so I'll cover one important thing of why I built my company, Traversaal.ai. This is a great slide to see what everyone is doing everywhere. Everyone is doing so many different things. They're looking into different products for each different thing. You can pick one thing. Imagine the concern with this is that you actually have to think about every single product that you have to pick up because you have to meticulously go through, oh, for this I need this. For this I need this. For this I need this. Hamza Farooq: All what we have done is that we have created one platform which has everything under one roof. And I'll show you with a very simple example. This is our website. We call ourselves one platform with multiple applications. And in this what we have is we have any kind of data format, pretty much that you have any kind of integrations which you need, for example, any applications. And I'll zoom in a little bit. And if you need domain specific search. So basically, if you're looking for Internet search to come in any kind of llms that are in the market, and vector databases, you see Qdrant right here. Hamza Farooq: And what kind of applications that are needed? 
Do you need a chatbot? You need a knowledge retrieval system, you need recommendation system? You need something which is a job matching tool or a copilot. So if you've built a one stop shop where a lot of times when a customer comes in, usually they don't come to us and say we need a pine cone or we need a Qdrant or we need a local llama, they say, this is the problem you're trying to solve. And we are coming from a problem solving initiative from our company is that we got this. You don't have to hire three ML engineers and two NLP research scientists and three people from here for the cost of two people. We can do an entire end to end implementation. Because what we have is 80% product which is built and we can tune the 20% to what you need. And that is such a powerful thing that once they start trusting us, and the best way to have them trust me is they can come to my class on maven, they can come to my class in Stanford, they come to my class in UCLA, or they can. Demetrios: Listen to this podcast and sort of. Hamza Farooq: It adds credibility to what we have been doing with them. Sorry, stop sharing what we have been doing with them and sort of just goes in that direction that we can do these things pretty fast and we tend to update. I want to just cover one slide. At the end of the day, this is the main slide. Right now. All engineers and product managers think of, oh, llms and Gen AI and this and that. I think one thing we don't talk about is UX experience. I just showed you a UX experience on Tripadvisor. Hamza Farooq: It's so easy to explain, right? Like you're like, oh, I know how to use it and you can already find problems with it, which means that they've done a great job thinking about a user experience. I predict one main thing. Ux people are going to be more rare who can work on gen AI products than product managers and tech people, because for tech people, they can follow and understand code and they can watch videos, business people, they're learning GPT prompting and so on and so forth. But the UX people, there's literally no teaching guide except for a Chat GPT interface. So this user experience, they are going to be, their worth is going to be inequal in gold. Not bitcoin, but gold. It's basically because they will have to build user experiences because we can't imagine right now what it will look like. Demetrios: Yeah, I 100% agree with that, actually. Demetrios: I. Demetrios: Imagine you have seen some of the work from Linus Lee from notion and how notion is trying to add in the clicks. Instead of having to always chat with the LLM, you can just point and click and give it things that you want to do. I noticed with the demo that you shared, it was very much that, like, you're highlighting things that you like to do and you're narrowing that search and you're giving it more context without having to type in. I like italian food and I don't like meatballs or whatever it may be. Hamza Farooq: Yes. Demetrios: So that's incredible. Demetrios: This is perfect, man. Demetrios: And so for anyone that wants to continue the conversation with you, you are on LinkedIn. We will leave a link to your LinkedIn. And you're also teaching on Maven. You're teaching in Stanford, UCLA, all this fun stuff. It's been great having you here. Demetrios: I'm very excited and I hope to have you back because it's amazing seeing what you're building and how you're building it. Hamza Farooq: Awesome. I think, again, it's a pleasure and an honor and thank you for letting. 
Demetrios: Me speak about the UX part a. Hamza Farooq: Lot because when you go to your customers, you realize that you need the UX and all those different things. Demetrios: Oh, yeah, it's so true. It is so true. Well, everyone that is out there watching. Demetrios: Us, thank you for joining and we will see you next time. Next week we'll be back for another. Demetrios: Session of these vector talks and I am pleased to have you again. Demetrios: Reach out to me if you want to join us. Demetrios: You want to give a talk? I'll see you all later. Have a good one. Hamza Farooq: Thank you. Bye.
blog/building-llm-powered-applications-in-production-hamza-farooq-vector-space-talks-006.md
--- title: "Dust and Qdrant: Using AI to Unlock Company Knowledge and Drive Employee Productivity" draft: false slug: dust-and-qdrant #short_description: description: Using AI to Unlock Company Knowledge and Drive Employee Productivity preview_image: /case-studies/dust/preview.png date: 2024-02-06T07:03:26-08:00 author: Manuel Meyer featured: false tags: - Dust - case_study weight: 0 --- One of the major promises of artificial intelligence is its potential to accelerate efficiency and productivity within businesses, empowering employees and teams in their daily tasks. The French company [Dust](https://dust.tt/), co-founded by former Open AI Research Engineer [Stanislas Polu](https://www.linkedin.com/in/spolu/), set out to deliver on this promise by providing businesses and teams with an expansive platform for building customizable and secure AI assistants. ## Challenge "The past year has shown that large language models (LLMs) are very useful but complicated to deploy," Polu says, especially in the context of their application across business functions. This is why he believes that the goal of augmenting human productivity at scale is especially a product unlock and not only a research unlock, with the goal to identify the best way for companies to leverage these models. Therefore, Dust is creating a product that sits between humans and the large language models, with the focus on supporting the work of a team within the company to ultimately enhance employee productivity. A major challenge in leveraging leading LLMs like OpenAI, Anthropic, or Mistral to their fullest for employees and teams lies in effectively addressing a company's wide range of internal use cases. These use cases are typically very general and fluid in nature, requiring the use of very large language models. Due to the general nature of these use cases, it is very difficult to finetune the models - even if financial resources and access to the model weights are available. The main reason is that “the data that’s available in a company is a drop in the bucket compared to the data that is needed to finetune such big models accordingly,” Polu says, “which is why we believe that retrieval augmented generation is the way to go until we get much better at fine tuning”. For successful retrieval augmented generation (RAG) in the context of employee productivity, it is important to get access to the company data and to be able to ingest the data that is considered ‘shared knowledge’ of the company. This data usually sits in various SaaS applications across the organization. ## Solution Dust provides companies with the core platform to execute on their GenAI bet for their teams by deploying LLMs across the organization and providing context aware AI assistants through RAG. Users can manage so-called data sources within Dust and upload files or directly connect to it via APIs to ingest data from tools like Notion, Google Drive, or Slack. Dust then handles the chunking strategy with the embeddings models and performs retrieval augmented generation. ![solution-laptop-screen](/case-studies/dust/laptop-solutions.jpg) For this, Dust required a vector database and evaluated different options including Pinecone and Weaviate, but ultimately decided on Qdrant as the solution of choice. “We particularly liked Qdrant because it is open-source, written in Rust, and it has a well-designed API,” Polu says. 
For example, Dust was looking for high control and visibility in the context of their rapidly scaling demand, which made the fact that Qdrant is open-source a key driver for selecting Qdrant. Also, Dust's existing system which is interfacing with Qdrant, is written in Rust, which allowed Dust to create synergies with regards to library support. When building their solution with Qdrant, Dust took a two step approach: 1. **Get started quickly:** Initially, Dust wanted to get started quickly and opted for [Qdrant Cloud](https://qdrant.to/cloud), Qdrant’s managed solution, to reduce the administrative load on Dust’s end. In addition, they created clusters and deployed them on Google Cloud since Dust wanted to have those run directly in their existing Google Cloud environment. This added a lot of value as it allowed Dust to centralize billing and increase security by having the instance live within the same VPC. “The early setup worked out of the box nicely,” Polu says. 2. **Scale and optimize:** As the load grew, Dust started to take advantage of Qdrant’s features to tune the setup for optimization and scale. They started to look into how they map and cache data, as well as applying some of Qdrant’s [built-in compression features](/documentation/guides/quantization/). In particular, Dust leveraged the control of the [MMAP payload threshold](/documentation/concepts/storage/#configuring-memmap-storage) as well as [Scalar Quantization](/articles/scalar-quantization/), which enabled Dust to manage the balance between storing vectors on disk and keeping quantized vectors in RAM, more effectively. “This allowed us to scale smoothly from there,” Polu says. ## Results Dust has seen success in using Qdrant as their vector database of choice, as Polu acknowledges: “Qdrant’s ability to handle large-scale models and the flexibility it offers in terms of data management has been crucial for us. The observability features, such as historical graphs of RAM, Disk, and CPU, provided by Qdrant are also particularly useful, allowing us to plan our scaling strategy effectively.” ![“We were able to reduce the footprint of vectors in memory, which led to a significant cost reduction as we don’t have to run lots of nodes in parallel. While being memory-bound, we were able to push the same instances further with the help of quantization. While you get pressure on MMAP in this case you maintain very good performance even if the RAM is fully used. With this we were able to reduce our cost by 2x.” - Stanislas Polu, Co-Founder of Dust](/case-studies/dust/Dust-Quote.jpg) Dust was able to scale its application with Qdrant while maintaining low latency across hundreds of thousands of collections with retrieval only taking milliseconds, as well as maintaining high accuracy. Additionally, Polu highlights the efficiency gains Dust was able to unlock with Qdrant: "We were able to reduce the footprint of vectors in memory, which led to a significant cost reduction as we don’t have to run lots of nodes in parallel. While being memory-bound, we were able to push the same instances further with the help of quantization. While you get pressure on MMAP in this case you maintain very good performance even if the RAM is fully used. With this we were able to reduce our cost by 2x." ## Outlook Dust will continue to build out their platform, aiming to be the platform of choice for companies to execute on their internal GenAI strategy, unlocking company knowledge and driving team productivity. 
Over the coming months, Dust will add more connections, such as Intercom, Jira, or Salesforce. Additionally, Dust will expand on its structured data capabilities. To learn more about how Dust uses Qdrant to help employees in their day to day tasks, check out our [Vector Space Talk](https://www.youtube.com/watch?v=toIgkJuysQ4) featuring Stanislas Polu, Co-Founder of Dust.
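For readers who want to experiment with the kind of tuning described in the "Scale and optimize" step above, here is a minimal, hedged sketch of a collection configured with scalar quantization and a memmap threshold using the Python client. The collection name, vector size, and threshold value are illustrative, not Dust's actual settings:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="documents",
    vectors_config=models.VectorParams(
        size=768,                        # illustrative embedding size
        distance=models.Distance.COSINE,
        on_disk=True,                    # keep original vectors on disk
    ),
    # Keep int8-quantized vectors in RAM for fast search
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            quantile=0.99,
            always_ram=True,
        )
    ),
    # Segments larger than this many kilobytes are stored as memory-mapped files
    optimizers_config=models.OptimizersConfigDiff(memmap_threshold=20000),
)
```

This combination is what lets quantized vectors serve most queries from RAM while the full-precision originals stay on disk, which is the balance the quote above refers to.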
blog/case-study-dust.md
--- title: "Are You Vendor Locked?" draft: false slug: are-you-vendor-locked short_description: "Redefining freedom in the age of Generative AI." description: "Redefining freedom in the age of Generative AI. We believe that vendor-dependency comes from hardware, not software. " preview_image: /blog/are-you-vendor-locked/are-you-vendor-locked.png social_preview_image: /blog/are-you-vendor-locked/are-you-vendor-locked.png date: 2024-05-05T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - vendor lock - hybrid cloud --- We all are. > *“There is no use fighting it. Pick a vendor and go all in. Everything else is a mirage.”* The last words of a seasoned IT professional > As long as we are using any product, our solution’s infrastructure will depend on its vendors. Many say that building custom infrastructure will hurt velocity. **Is this true in the age of AI?** It depends on where your company is at. Most startups don’t survive more than five years, so putting too much effort into infrastructure is not the best use of their resources. You first need to survive and demonstrate product viability. **Sometimes you may pick the right vendors and still fail.** ![gpu-costs](/blog/are-you-vendor-locked/gpu-costs.png) We have all started to see the results of the AI hardware bottleneck. Running LLMs is expensive and smaller operations might fold to high costs. How will this affect large enterprises? > If you are an established corporation, being dependent on a specific supplier can make or break a solid business case. For large-scale GenAI solutions, costs are essential to maintenance and dictate the long-term viability of such projects. In the short run, enterprises may afford high costs, but when the prices drop - then it’s time to adjust. > Unfortunately, the long run goal of scalability and flexibility may be countered by vendor lock-in. Shifting operations from one host to another requires expertise and compatibility adjustments. Should businesses become dependent on a single cloud service provider, they open themselves to risks ranging from soaring costs to stifled innovation. **Finding the best vendor is key; but it’s crucial to stay mobile.** ## **Hardware is the New Vendor Lock** > *“We’re so short on GPUs, the less people that use the tool [ChatGPT], the better.”* OpenAI CEO, Sam Altman > When GPU hosting becomes too expensive, large and exciting Gen AI projects lose their luster. If moving clouds becomes too costly or difficulty to implement - you are vendor-locked. This used to be common with software. Now, hardware is the new dependency. *Enterprises have many reasons to stay provider agnostic - but cost is the main one.* [Appenzeller, Bornstein & Casado from Andreessen Horowitz](https://a16z.com/navigating-the-high-cost-of-ai-compute/) point to growing costs of AI compute. It is still a vendor’s market for A100 hourly GPUs, largely due to supply constraints. Furthermore, the price differences between AWS, GCP and Azure are dynamic enough to justify extensive cost-benefit analysis from prospective customers. ![gpu-costs-a16z](/blog/are-you-vendor-locked/gpu-costs-a16z.png) *Source: Andreessen Horowitz* Sure, your competitors can brag about all the features they can access - but are they willing to admit how much their company has lost to convenience and increasing costs? As an enterprise customer, one shouldn’t expect a vendor to stay consistent in this market. ## How Does This Affect Qdrant? As an open source vector database, Qdrant is completely risk-free. 
Furthermore, cost savings is one of the many reasons companies use it to augment the LLM. You won’t need to burn through GPU cash for training or inference. A basic instance with a CPU and RAM can easily manage indexing and retrieval. > *However, we find that many of our customers want to host Qdrant in the same place as the rest of their infrastructure, such as the LLM or other data engineering infra. This can be for practical reasons, due to corporate security policies, or even global political reasons.* One day, they might find this infrastructure too costly. Although vector search will remain cheap, their training, inference and embedding costs will grow. Then, they will want to switch vendors. What could interfere with the switch? Compatibility? Technologies? Lack of expertise? In terms of features, cloud service standardization is difficult due to varying features between cloud providers. This leads to custom solutions and vendor lock-in, hindering migration and cost reduction efforts, [as seen with Snapchat and Twitter](https://www.businessinsider.com/snap-google-cloud-aws-reducing-costs-2023-2). ## **Fear, Uncertainty and Doubt** You spend months setting up the infrastructure, but your competitor goes all in with a cheaper alternative and has a competing product out in one month? Does avoiding the lock-in matter if your company will be out of business while you try to setup a fully agnostic platform? **Problem:** If you're not locked into a vendor, you're locked into managing a much larger team of engineers. The build vs buy tradeoff is real and it comes with its own set of risks and costs. **Acknowledgement:** Any organization that processes vast amounts of data with AI needs custom infrastructure and dedicated resources, no matter the industry. Having to work with expensive services such as A100 GPUs justifies the existence of in-house DevOps crew. Any enterprise that scales up needs to employ vigilant operatives if it wants to manage costs. > There is no need for **Fear, Uncertainty and Doubt**. Vendor lock is not a futile cause - so let’s dispel the sentiment that all vendors are adversaries. You just need to work with a company that is willing to accommodate flexible use of products. > **The Solution is Kubernetes:** Decoupling your infrastructure from a specific cloud host is currently the best way of staying risk-free. Any component of your solution that runs on Kubernetes can integrate seamlessly with other compatible infrastructure. This is how you stay dynamic and move vendors whenever it suits you best. ## **What About Hybrid Cloud?** The key to freedom is to building your applications and infrastructure to run on any cloud. By leveraging containerization and service abstraction using Kubernetes or Docker, software vendors can exercise good faith in helping their customers transition to other cloud providers. We designed the architecture of Qdrant Hybrid Cloud to meet the evolving needs of businesses seeking unparalleled flexibility, control, and privacy. This technology integrates Kubernetes clusters from any setting - cloud, on-premises, or edge - into a unified, enterprise-grade managed service. #### Take a look. It's completely yours. We’ll help you manage it. 
<p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/BF02jULGCfo" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> [Qdrant Hybrid Cloud](/hybrid-cloud/) marks a significant advancement in vector databases, offering the most flexible way to implement vector search. You can test out Qdrant Hybrid Cloud today. Sign up or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and get started in the **Hybrid Cloud** section. Also, to learn more about Qdrant Hybrid Cloud read our [Official Release Blog](/blog/hybrid-cloud/) or our [Qdrant Hybrid Cloud website](/hybrid-cloud/). For additional technical insights, please read our [documentation](/documentation/hybrid-cloud/). #### Try it out! [![hybrid-cloud-cta.png](/blog/are-you-vendor-locked/hybrid-cloud-cta.png)](https://qdrant.to/cloud)
blog/are-you-vendor-locked.md
--- title: "Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database" draft: false slug: case-study-dailymotion # Change this slug to your page slug if needed short_description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database preview_image: /case-studies/dailymotion/preview-dailymotion.png # Change this # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-02-27T13:22:31+01:00 author: Atita Arora featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - dailymotion - case study - recommender system weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ## Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database In today's digital age, the consumption of video content has become ubiquitous, with an overwhelming abundance of options available at our fingertips. However, amidst this vast sea of videos, the challenge lies not in finding content, but in discovering the content that truly resonates with individual preferences and interests and yet is diverse enough to not throw users into their own filter bubble. As viewers, we seek meaningful and relevant videos that enrich our experiences, provoke thought, and spark inspiration. Dailymotion is not just another video application; it's a beacon of curated content in an ocean of options. With a steadfast commitment to providing users with meaningful and ethical viewing experiences, Dailymotion stands as the bastion of videos that truly matter. They aim to boost a dynamic visual dialogue, breaking echo chambers and fostering discovery. ### Scale - **420 million+ videos** - **2k+ new videos / hour** - **13 million+ recommendations / day** - **300+ languages in videos** - **Required response time < 100 ms** ### Challenge - **Improve video recommendations** across all 3 applications of Dailymotion (mobile app, website and embedded video player on all major French and International sites) as it is the main driver of audience engagement and revenue stream of the platform. 
- Traditional [collaborative recommendation model](https://en.wikipedia.org/wiki/Collaborative_filtering) tends to recommend only popular videos, fresh and niche videos suffer due to zero or minimal interaction - Video content based recommendation system required processing all the video embedding at scale and in real time, as soon as they are added to the platform - Exact neighbor search at the scale and keeping them up to date with new video updates in real time at Dailymotion was unreasonable and unrealistic - Precomputed [KNN](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) would be expensive and may not work due to video updates every hour - Platform needs fast recommendations ~ &lt; 100ms - Needed fast ANN search on a vector search engine which could support the scale and performance requirements of the platform ### Background / Journey The quest of Dailymotion to deliver an intelligent video recommendation engine providing a curated selection of videos to its users started with a need to present more relevant videos to the first-time users of the platform (cold start problem) and implement an ideal home feed experience to allow users to watch videos that are expected to be relevant, diverse, explainable, and easily tunable. \ This goal accounted for their efforts focused on[ Optimizing Video Recommender for Dailymotion's Home Feed ](https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd)back in the time. They continued their work in [Optimising the recommender engine with vector databases and opinion mining](https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020) later with emphasis on ranking videos based on features like freshness, real views ratio, watch ratio, and aspect ratio to enhance user engagement and optimise watch time per user on the home feed. Furthermore, the team continued to focus on diversifying user interests by grouping videos based on interest and using stratified sampling to ensure a balanced experience for users. By now it was clear to the Dailymotion team that the future initiatives will involve overcoming obstacles related to data processing, sentiment analysis, and user experience to provide meaningful and diverse recommendations. The main challenge stayed at the candidate generation process, textual embeddings, opinion mining, along with optimising the efficiency and accuracy of these processes and tackling the complexities of large-scale content curation. ### Solution at glance ![solution-at-glance](/case-studies/dailymotion/solution-at-glance.png) The solution involved implementing a content based Recommendation System leveraging Qdrant to power the similar videos, with the following characteristics. **Fields used to represent each video** - Title , Tags , Description , Transcript (generated by [OpenAI whisper](https://openai.com/research/whisper)) **Encoding Model used** - [MUSE - Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa) * Supports - 16 languages ### Why Qdrant? ![quote-from-Samuel](/case-studies/dailymotion/Dailymotion-Quote.jpg) Looking at the complexity, scale and adaptability of the desired solution, the team decided to leverage Qdrant’s vector database to implement a content-based video recommendation that undoubtedly offered several advantages over other methods: **1. 
**1. Efficiency in High-Dimensional Data Handling:** Video content is inherently high-dimensional, comprising audio, visual, textual, and contextual features. Qdrant handles high-dimensional data efficiently, with out-of-the-box support for vectors of up to 65,536 dimensions, making it well suited for representing and processing complex video features with any embedding model of choice. **2. Scalability:** As the volume of video content and user interactions grows, scalability becomes paramount. Qdrant is designed to scale both vertically and horizontally, allowing seamless expansion to accommodate large volumes of data and user interactions without compromising performance. **3. Fast and Accurate Similarity Search:** Efficient video recommendation relies on identifying similarities between videos to make relevant recommendations. Qdrant leverages HNSW indexing and similarity search algorithms to retrieve similar videos based on their feature representations nearly instantly (20 ms for this use case). **4. Flexibility in vector representation with metadata through payloads:** Qdrant stores vectors together with metadata in the form of payloads and supports advanced metadata filtering during similarity search, so custom logic can be incorporated. **5. Reduced Dimensionality and Storage Requirements:** Qdrant offers various quantization and memory-mapping techniques to store and retrieve vectors efficiently, leading to reduced storage requirements and computational overhead compared to alternative methods such as content-based filtering or collaborative filtering. **6. Impressive Benchmarks:** [Qdrant’s benchmarks](/benchmarks/) were one of the key motivations for the Dailymotion team to try the solution, and the team reports that real-world performance has been even better than the benchmarks suggested. **7. Ease of use:** Qdrant’s APIs were much easier to get started with than Google Vertex Matching Engine (Dailymotion’s initial choice), and the support from the Qdrant team has been of huge value. **8. Being able to fetch data by ID:** Qdrant allows retrieving vector points (videos) by their IDs, while the Vertex Matching Engine requires a vector input in order to search for other vectors. This was another important feature for Dailymotion. ### Data Processing Pipeline ![data-processing](/case-studies/dailymotion/data-processing-pipeline.png) The figure shows the streaming architecture of the data processing pipeline: every time a new video is uploaded or updated (Title, Description, Tags, Transcript), an updated embedding is computed and fed directly into Qdrant. ### Results ![before-qdrant-results](/case-studies/dailymotion/before-qdrant.png) There has been a big improvement in recommendation processing time and quality, as the existing system had issues like: 1. Subpar video recommendations due to a long processing time (~5 hours) 2. The collaborative recommender tended to recommend high-signal / popular videos 3. The metadata-based recommender focused only on a very small scope of trusted video sources 4. The recommendations did not take the content of the video into consideration ![after-qdrant-results](/case-studies/dailymotion/after-qdrant.png) The new recommender system, leveraging Qdrant alongside the collaborative recommender, offered several advantages:
1. The processing time for new video content dropped to a few minutes, which enables fresh videos to become part of recommendations. 2. The performant and scalable recommendation scope currently covers 22 million videos and can provide recommendations even for videos with few interactions. 3. The large performance gain on low-signal videos contributed to a more than 3x increase in interactions and CTR (number of clicks) on recommended videos. 4. The new system seamlessly solved the initial cold-start and low-performance problems with fresh content. ### Outlook / Future plans The team is very excited about the results achieved with the recommender system and plans to continue building on it. \ They aim to work on the Perspective feed next and say: >”We've recently integrated this new recommendation system into our mobile app through a feature called Perspective. The aim of this feature is to disrupt the vertical feed algorithm, allowing users to discover new videos. When browsing their feed, users may encounter a video discussing a particular movie. With Perspective, they have the option to explore different viewpoints on the same topic. Qdrant plays a crucial role in this feature by generating candidate videos related to the subject, ensuring users are exposed to diverse perspectives and preventing them from being confined to an echo chamber where they only encounter similar viewpoints.” \ > Gladys Roch - Machine Learning Engineer ![perspective-feed-with-qdrant](/case-studies/dailymotion/perspective-feed-qdrant.jpg) The team is also interested in leveraging advanced features like [Qdrant’s Discovery API](/documentation/concepts/explore/#recommendation-api) to promote exploration of content: using positive and negative vectors in queries to surface not only similar but also deliberately dissimilar content, and making it work with the existing collaborative recommendation model. ### References **2024 -** [https://www.youtube.com/watch?v=1ULpLpWD0Aw](https://www.youtube.com/watch?v=1ULpLpWD0Aw) **2023 -** [https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020](https://medium.com/dailymotion/reinvent-your-recommender-system-using-vector-database-and-opinion-mining-a4fadf97d020) **2022 -** [https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd](https://medium.com/dailymotion/optimizing-video-feed-recommendations-with-diversity-machine-learning-first-steps-4cf9abdbbffd)
blog/case-study-dailymotion.md
--- draft: false title: "Vector Search Complexities: Insights from Projects in Image Search and RAG - Noé Achache | Vector Space Talks" slug: vector-image-search-rag short_description: Noé Achache discusses their projects in image search and RAG and its complexities. description: Noé Achache shares insights on vector search complexities, discussing projects on image matching, document retrieval, and handling sensitive medical data with practical solutions and industry challenges. preview_image: /blog/from_cms/noé-achache-cropped.png date: 2024-01-09T13:51:26.168Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Image Search - Retrieval Augmented Generation --- > *"I really think it's something the technology is ready for and would really help this kind of embedding model jumping onto the text search projects.”*\ -- Noé Achache on the future of image embedding > Exploring the depths of vector search? Want an analysis of its application in image search and document retrieval? Noé got you covered. Noé Achache is a Lead Data Scientist at Sicara, where he worked on a wide range of projects mostly related to computer vision, prediction with structured data, and more recently LLMs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2YgcSFjP7mKE0YpDGmSiq5?si=6BhlAMveSty4Yt7umPeHjA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/1vKoiFAdorE).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/1vKoiFAdorE?si=wupcX2v8vHNnR_QB" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Navigating-the-Complexities-of-Vector-Search-Practical-Insights-from-Diverse-Projects-in-Image-Search-and-RAG---No-Achache--Vector-Space-Talk-008-e2diivl/a-aap4q5d" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Discover the efficacy of Dino V2 in image representation and the complexities of deploying vector databases, while navigating the challenges of fine-tuning and data safety in sensitive fields. In this episode, Noe, shares insights on vector search from image search to retrieval augmented generation, emphasizing practical application in complex projects. 5 key insights you’ll learn: 1. Cutting-edge Image Search: Learn about the advanced model Dino V2 and its efficacy in image representation, surpassing traditional feature transform methods. 2. Data Deduplication Strategies: Gain knowledge on the sophisticated process of deduplicating real estate listings, a vital task in managing extensive data collections. 3. Document Retrieval Techniques: Understand the challenges and solutions in retrieval augmented generation for document searches, including the use of multi-language embedding models. 4. Protection of Sensitive Medical Data: Delve into strategies for handling confidential medical information and the importance of data safety in health-related applications. 5. The Path Forward in Model Development: Hear Noe discuss the pressing need for new types of models to address the evolving needs within the industry. > Fun Fact: The best-performing model Noé mentions for image representation in his image search project is Dino V2, which interestingly didn't require fine-tuning to understand objects and patterns. 
> ## Show Notes: 00:00 Relevant experience in vector DB projects and talks.\ 05:57 Match image features, not resilient to changes.\ 07:06 Compute crop vectors, and train to converge.\ 11:37 Simple training task, improve with hard examples.\ 15:25 Improving text embeddings using hard examples.\ 22:29 Future of image embedding for document search.\ 27:28 Efficient storage and retrieval process feature.\ 29:01 Models handle varied data; sparse vectors now possible.\ 35:59 Use memory, avoid disk for CI integration.\ 37:43 Challenging metadata filtering for vector databases and new models ## More Quotes from Noé: *"So basically what was great is that Dino manages to understand all objects and close patterns without fine tuning. So you can get an off the shelf model and get started very quickly and start bringing value very quickly without having to go through all the fine tuning processes.”*\ -- Noé Achache *"And at the end, the embeddings was not learning any very complex features, so it was not really improving it.”*\ -- Noé Achache *"When using an API model, it's much faster to use it in asynchronous mode like the embedding equation went something like ten times or 100 times faster. So it was definitely, it changed a lot of things.”*\ -- Noé Achache ## Transcript: Demetrios: Noe. Great to have you here everyone. We are back for another vector space talks and today we are joined by my man Noe, who is the lead data scientist at Sicara, and if you do not know, he is working on a wide range of projects, mostly related to computer vision. Vision. And today we are talking about navigating the complexities of vector search. We're going to get some practical insights from diverse projects in image search and everyone's favorite topic these days, retrieval augmented generation, aka rags. So noe, I think you got something for us. You got something planned for us here? Noe Acache: Yeah, I do. I can share them. Demetrios: All right, well, I'm very happy to have you on here, man. I appreciate you doing this. And let's get you sharing your screen so we can start rocking, rolling. Noe Acache: Okay. Can you see my screen? Demetrios: Yeah. Awesome. Noe Acache: Great. Thank you, Demetrius, for the great introduction. I just completed quickly. So as you may have guessed, I'm french. I'm a lead data scientist at Sicara. So Secura is a service company helping its clients in data engineering and data science, so building projects for them. Before being there, I worked at realtics on optical character recognition, and I'm now working mostly on, as you said, computer vision and also Gen AI. So I'm leading the geni side and I've been there for more than three years. Noe Acache: So some relevant experience on vector DB is why I'm here today, because I did four projects, four vector soft projects, and I also wrote an article on how to choose your database in 2023, your vector database. And I did some related talks in other conferences like Pydata, DVC, all the geni meetups of London and Paris. So what are we going to talk about today? First, an overview of the vector search projects. Just to give you an idea of the kind of projects we can do with vector search. Then we will dive into the specificities of the image search project and then into the specificities of the text search project. So here are the four projects. So two in image search, two in text search. The first one is about matching objects in videos to sell them afterwards. Noe Acache: So basically you have a video. We first detect the object. 
So like it can be a lamp, it can be a piece of clothes, anything, we classify it and then we compare it to a large selection of similar objects to retrieve the most similar one to a large collection of sellable objects. The second one is about deduplicating real estate adverts. So when agencies want to sell a property, like sometimes you have several agencies coming to take pictures of the same good. So you have different pictures of the same good. And the idea of this project was to match the different pictures of the same good, the same profile. Demetrios: I've seen that dude. I have been a victim of that. When I did a little house shopping back like five years ago, it would be the same house in many different ones, and sometimes you wouldn't know because it was different photos. So I love that you were thinking about it that way. Sorry to interrupt. Noe Acache: Yeah, so to be fair, it was the idea of my client. So basically I talk about it a bit later with aggregating all the adverts and trying to deduplicate them. And then the last two projects are about drugs retrieval, augmented generation. So the idea to be able to ask questions to your documentation. The first one was for my company's documentation and the second one was for a medical company. So different kind of complexities. So now we know all about this project, let's dive into them. So regarding the image search project, to compute representations of the images, the best performing model from the benchmark, and also from my experience, is currently Dino V two. Noe Acache: So a model developed by meta that you may have seen, which is using visual transformer. And what's amazing about it is that using the attention map, you can actually segment what's important in the picture, although you haven't told it specifically what's important. And as a human, it will learn to focus on the dog, on this picture and do not take into consideration the noisy background. So when I say best performing model, I'm talking about comparing to other architecture like Resnet efficient nets models, an approach I haven't tried, which also seems interesting. If anyone tried it for similar project, please reach out afterwards. I'll be happy to talk about it. Is sift for feature transform something about feature transform. It's basically a more traditional method without learned features through machine learning, as in you don't train the model, but it's more traditional methods. Noe Acache: And you basically detect the different features in an image and then try to find the same features in an image which is supposed to post to be the same. All the blue line trying to match the different features. Of course it's made to match image with exactly the same content, so it wouldn't really work. Probably not work in the first use case, because we are trying to match similar clothes, but which are not exactly the same one. And also it's known to be not very resilient with the changes of angles when it changes too much, et cetera. So it may not be very good as well for the second use case, but again, I haven't tried it, so just leaving it here on the side. Just a quick word about how Dino works in case you're interested. So it's a vision transformer and it's trade in an unsupervised way, as in you don't have any labels provided, so you just take pictures and you first extract small crops and large crops and you augment them. Noe Acache: And then you're going to use the model to compute vectors, representations of each of these crops. 
And since they all represent the same image, they should all be the same. So then you can compute a loss to see how they diverge and to basically train them to become the same. So this is how it works and how it works. And the difference between the second version is just that they use more data sets and the distillation method to have a very performant model, which is also very fast to run regarding the first use case. So, matching objects in videos to sellable items for people who use Google lengths before, it's quite similar, where in Google lens you can take a picture of something and then it will try to find similar objects to buy. So again, you have a video and then you detect one of the objects in the video, put it and compare it to a vector database which contains a lot of objects which are similar for the representation. And then it will output the most similar lamp here. Noe Acache: Now we're going to try to analyze how this project went regarding the positive outcomes and the changes we faced. So basically what was great is that Dino manages to understand all objects and close patterns without fine tuning. So you can get an off the shelf model and get started very quickly and start bringing value very quickly without having to go through all the fine tuning processes. And it also manages to focus on the object without segmentation. What I mean here is that we're going to get a box of the object, and in this box there will be a very noisy background which may disturb the matching process. And since Dino really manages to focus on the object, that's important on the image. It doesn't really matter that we don't segmentate perfectly the image. Regarding the vector database, this project started a while ago, and I think we chose the vector database something like a year and a half ago. Noe Acache: And so it was before all the vector database hype. And at the time, the most famous one was Milvos, the only famous one actually. And we went for an on premise development deployment. And actually our main learning is that the DevOps team really struggled to deploy it, because basically it's made of a lot of pods. And the documentations about how these pods are supposed to interact together is not really perfect. And it was really buggy at this time. So the clients lost a lot of time and money in this deployment. The challenges, other challenges we faced is that we noticed that the matching wasn't very resilient to large distortions. Noe Acache: So for furnitures like lamps, it's fine. But let's say you have a trouser and a person walking. So the trouser won't exactly have the same shape. And since you haven't trained your model to specifically know, it shouldn't focus on the movements. It will encode this movement. And then in the matching, instead of matching trouser, which looks similar, it will just match trouser where in the product picture the person will be working as well, which is not really what we want. And the other challenges we faced is that we tried to fine tune the model, but our first fine tuning wasn't very good because we tried to take an open source model and, and get the labels it had, like on different furnitures, clothes, et cetera, to basically train a model to classify the different classes and then remove the classification layer to just keep the embedding parts. The thing is that the labels were not specific enough. Noe Acache: So the training task was quite simple. 
And at the end, the embeddings was not learning any very complex features, so it was not really improving it. So jumping onto the areas of improvement, knowing all of that, the first thing I would do if I had to do it again will be to use the managed milboss for a better fine tuning, it would be to labyd hard examples, hard pairs. So, for instance, you know that when you have a matching pair where the similarity score is not too high or not too low, you know, it's where the model kind of struggles and you will find some good matching and also some mistakes. So it's where it kind of is interesting to level to then be able to fine tune your model and make it learn more complex things according to your tasks. Another possibility for fine tuning will be some sort of multilabel classification. So for instance, if you consider tab close, you could say, all right, those disclose contain buttons. It have a color, it have stripes. Noe Acache: And for all of these categories, you'll get a score between zero and one. And concatenating all these scores together, you can get an embedding which you can put in a vector database for your vector search. It's kind of hard to scale because you need to do a specific model and labeling for each type of object. And I really wonder how Google lens does because their algorithm work very well. So are they working more like with this kind of functioning or this kind of functioning? So if anyone had any thought on that or any idea, again, I'd be happy to talk about it afterwards. And finally, I feel like we made a lot of advancements in multimodal training, trying to combine text inputs with image. We've made input to build some kind of complex embeddings. And how great would it be to have an image embeding you could guide with text. Noe Acache: So you could just like when creating an embedding of your image, just say, all right, here, I don't care about the movements, I only care about the features on the object, for instance. And then it will learn an embedding according to your task without any fine tuning. I really feel like with the current state of the arts we are able to do this. I mean, we need to do it, but the technology is ready. Demetrios: Can I ask a few questions before you jump into the second use case? Noe Acache: Yes. Demetrios: What other models were you looking at besides the dyno one? Noe Acache: I said here, compared to Resnet, efficient nets and these kind of architectures. Demetrios: Maybe this was too early, or maybe it's not actually valuable. Was that like segment anything? Did that come into the play? Noe Acache: So segment anything? I don't think they redo embeddings. It's really about segmentation. So here I was just showing the segmentation part because it's a cool outcome of the model and it shows that the model works well here we are really here to build a representation of the image we cannot really play with segment anything for the matching, to my knowledge, at least. Demetrios: And then on the next slide where you talked about things you would do differently, or the last slide, I guess the areas of improvement you mentioned label hard examples for fine tuning. And I feel like, yeah, there's one way of doing it, which is you hand picking the different embeddings that you think are going to be hard. And then there's another one where I think there's tools out there now that can kind of show you where there are different embeddings that aren't doing so well or that are more edge cases. Noe Acache: Which tools are you talking about? 
Demetrios: I don't remember the names, but I definitely have seen demos online about how it'll give you a 3d space and you can kind of explore the different embeddings and explore what's going on I. Noe Acache: Know exactly what you're talking about. So tensorboard embeddings is a good tool for that. I could actually demo it afterwards. Demetrios: Yeah, I don't want to get you off track. That's something that came to mind if. Noe Acache: You'Re talking about the same tool. Turns out embedding. So basically you have an embedding of like 1000 dimensions and it just reduces it to free dimensions. And so you can visualize it in a 3d space and you can see how close your embeddings are from each other. Demetrios: Yeah, exactly. Noe Acache: But it's really for visualization purposes, not really for training purposes. Demetrios: Yeah, okay, I see. Noe Acache: Talking about the same thing. Demetrios: Yeah, I think that sounds like what I'm talking about. So good to know on both of these. And you're shooting me straight on it. Mike is asking a question in here, like text embedding, would that allow you to include an image with alternate text? Noe Acache: An image with alternate text? I'm not sure the question. Demetrios: So it sounds like a way to meet regulatory accessibility requirements if you have. I think it was probably around where you were talking about the multimodal and text to guide the embeddings and potentially would having that allow you to include an image with alternate text? Noe Acache: The idea is not to. I feel like the question is about inserting text within the image. It's what I understand. My idea was just if you could create an embedding that could combine a text inputs and the image inputs, and basically it would be trained in such a way that the text would basically be used as a guidance of the image to only encode the parts of the image which are required for your task to not be disturbed by the noisy. Demetrios: Okay. Yeah. All right, Mike, let us know if that answers the question or if you have more. Yes. He's saying, yeah, inserting text with image for people who can't see. Noe Acache: Okay, cool. Demetrios: Yeah, right on. So I'll let you keep cruising and I'll try not to derail it again. But that was great. It was just so pertinent. I wanted to stop you and ask some questions. Noe Acache: Larry, let's just move in. So second use case is about deduplicating real estate adverts. So as I was saying, you have two agencies coming to take different pictures of the same property. And the thing is that they may not put exactly the same price or the same surface or the same location. So you cannot just match them with metadata. So what our client was doing beforehand, and he kind of built a huge if machine, which is like, all right, if the location is not too far and if the surface is not too far. And the price, and it was just like very complex rules. And at the end there were a lot of edge cases. Noe Acache: It was very hard to maintain. So it was like, let's just do a simpler solution just based on images. So it was basically the task to match images of the same properties. Again on the positive outcomes is that the dino really managed to understand the patterns of the properties without any fine tuning. And it was resilient to read different angles of the same room. So like on the pictures I shown, I just showed, the model was quite good at identifying. It was from the same property. Here we used cudrant for this project was a bit more recent. 
Noe Acache: We leveraged a lot the metadata filtering because of course we can still use the metadata even it's not perfect just to say, all right, only search vectors, which are a price which is more or less 10% this price. The surface is more or less 10% the surface, et cetera, et cetera. And indexing of this metadata. Otherwise the search is really slowed down. So we had 15 million vectors and without this indexing, the search could take up to 20, 30 seconds. And with indexing it was like in a split second. So it was a killer feature for us. And we use quantization as well to save costs because the task was not too hard. Noe Acache: Since using the metadata we managed to every time reduce the task down to a search of 1000 vector. So it wasn't too annoying to quantize the vectors. And at the end for 15 million vectors, it was only $275 per month, which with the village version, which is very decent. The challenges we faced was really about bathrooms and empty rooms because all bathrooms kind of look similar. They have very similar features and same for empty rooms since there is kind of nothing in them, just windows. The model would often put high similarity scores between two bathroom of different properties and same for the empty rooms. So again, the method to overcome this thing will be to label harpers. So example were like two images where the model would think they are similar to actually tell the model no, they are not similar to allow it to improve its performance. Noe Acache: And again, same thing on the future of image embedding. I really think it's something the technology is ready for and would really help this kind of embedding model jumping onto the text search projects. So the principle of retribution generation for those of you who are not familiar with it is just you take some documents, you have an embedding model here, an embedding model trained on text and not on images, which will output representations from these documents, put it in a vector database, and then when a user will ask a question over the documentation, it will create an embedding of the request and retrieve the most similar documents. And afterwards we usually pass it to an LLM, which will generate an answer. But here in this talk, we won't focus on the overall product, but really on the vector search part. So the two projects was one, as I told you, a rack for my nutrition company, so endosion with around a few hundred thousand of pages, and the second one was for medical companies, so for the doctors. So it was really about the documentation search rather than the LLM, because you cannot output any mistake. The model we used was OpenAI Ada two. Noe Acache: Why? Mostly because for the first use case it's multilingual and it was off the shelf, very easy to use, so we did not spend a lot of time on this project. So using an API model made it just much faster. Also it was multilingual, approved by the community, et cetera. For the second use case, we're still working on it. So since we use GPT four afterwards, because it's currently the best LLM, it was also easier to use adatu to start with, but we may use a better one afterwards because as I'm saying, it's not the best one if you refer to the MTAB. So the massive text embedding benchmark made by hugging face, which basically gathers a lot of embeddings benchmark such as retrieval for instance, and so classified the different model for these benchmarks. The M tab is not perfect because it's not taking into account cross language capabilities. 
All the benchmarks are just for one language and it's not as well taking into account most of the languages, like it's only considering English, Polish and Chinese. Noe Acache: And also it's probably biased for models trained on close source data sets. So like most of the best performing models are currently closed source APIs and hence closed source data sets, and so we don't know how they've been trained. So they probably trained themselves on these data sets. At least if I were them, it's what I would do. So I assume they did it to gain some points in these data sets. Demetrios: So both of these rags are mainly with documents that are in French? Noe Acache: Yes. So this one is French and English, and this one is French only. Demetrios: Okay. Yeah, that's why the multilingual is super important for these use cases. Noe Acache: Exactly. Again, for this one there are models for French working much better than other two, so we may change it afterwards, but right now the performance we have is decent. Since both projects are very similar, I'll jump into the conclusion for both of them together. So Ada two is good for understanding diverse context, wide range of documentation, medical contents, technical content, et cetera, without any fine tuning. The cross language works quite well, so we can ask questions in English and retrieve documents in French and the other way around. And also, quick note, because I did not do it from the start, is that when using an API model, it's much faster to use it in asynchronous mode like the embedding equation went something like ten times or 100 times faster. So it was definitely, it changed a lot of things. Again, here we use cudrant mostly to leverage the free tier so they have a free version. Noe Acache: So you can pop it in a second, get the free version, and using the feature which allows to put the vectors on disk instead of storing them on ram, which makes it a bit slower, you can easily support few hundred thousand of vectors and with a very decent response time. The challenge we faced is that mostly for the notion, so like mostly in notion, we have a lot of pages which are just a title because they are empty, et cetera. And so when pages have just a title, the content is so small that it will be very similar actually to a question. So often the documents were retrieved were document with very little content, which was a bit frustrating. Chunking appropriately was also tough. Basically, if you want your retrieval process to work well, you have to divide your documents the right way to create the embeddings. So you can use matrix rules, but basically you need to divide your documents in content which semantically makes sense and it's not always trivial. And also for the rag, for the medical company, sometimes we are asking questions about a specific drug and it's just not under our search is just not retrieving the good documents, which is very frustrating because a basic search would. Noe Acache: So to handle these changes, a good option would be to use models handing differently question and documents like Bg or cohere. Basically they use the same model but trained differently on long documents and questions which allow them to map them differently in the space. And my guess is that using such model documents, which are only a title, et cetera, will not be as close as the question as they are right now because they will be considered differently. So I hope it will help this problem. Again, it's just a guess, maybe I'm wrong. 
Heap research so for the keyword problem I was mentioning here, so in the recent release, Cudran just enabled sparse vectors which make actually TFEdev vectors possible. The TFEDEF vectors are vectors which are based on keywords, but basically there is one number per possible word in the data sets, and a lot of zeros, so storing them as a normal vector will make the vector search very expensive. But as a sparse vector it's much better. Noe Acache: And so you can build a debrief search combining the TFDF search for keyword search and the other search for semantic search to get the best of both worlds and overcome this issue. And finally, I'm actually quite surprised that with all the work that is going on, generative AI and rag, nobody has started working on a model to help with chunking. It's like one of the biggest challenge, and I feel like it's quite doable to have a model which will our model, or some kind of algorithm which will understand the structure of your documentation and understand why it semantically makes sense to chunk your documents. Dude, so good. Demetrios: I got questions coming up. Don't go anywhere. Actually, it's not just me. Tom's also got some questions, so I'm going to just blame it on Tom, throw him under the bus. Rag with medical company seems like a dangerous use case. You can work to eliminate hallucinations and other security safety concerns, but you can't make sure that they're completely eliminated, right? You can only kind of make sure they're eliminated. And so how did you go about handling these concerns? Noe Acache: This is a very good question. This is why I mentioned this project is mostly about the document search. Basically what we do is that we use chainlit, which is a very good tool for chatting, and then you can put a react front in front of it to make it very custom. And so when the user asks a question, we provide the LLM answer more like as a second thought, like something the doctor could consider as a fagon thought. But what's the most important is that we directly put the, instead of just citing the sources, we put the HTML of the pages the source is based on, and what bring the most value is really these HTML pages. And so we know the answer may have some problems. The fact is, based on documents, hallucinations are almost eliminated. Like, we don't notice any hallucinations, but of course they can happen. Noe Acache: So it's really the way, it's really a product problem rather than an algorithm problem, an algorithmic problem, yeah. The documents retrieved rather than the LLM answer. Demetrios: Yeah, makes sense. My question around it is a lot of times in the medical space, the data that is being thrown around is super sensitive. Right. And you have a lot of Pii. How do you navigate that? Are you just not touching that? Noe Acache: So basically we work with a provider in front which has public documentation. So it's public documentation. There is no PII. Demetrios: Okay, cool. So it's not like some of it. Noe Acache: Is private, but still there is no PII in the documents. Demetrios: Yeah, because I think that's another really incredibly hard problem is like, oh yeah, we're just sending all this sensitive information over to the IDA model to create embeddings with it. And then we also pass it through Chat GPT before we get it back. And next thing you know, that is the data that was used to train GPT five. And you can say things like create an unlimited poem and get that out of it. So it's super sketchy, right? 
Noe Acache: Yeah, of course, one way to overcome that is to, for instance, for the notion project, it's our private documentation. We use Ada over Azure, which guarantees data safety. So it's quite a good workaround. And when you have to work with different level of security, if you deal with PII, a good way is to play with metadata. Depending on the security level of the person who has the question, you play with the metadata to output only some kind of documents. The database metadata. Demetrios: Excellent. Well, don't let me stop you. I know you had some conclusionary thoughts there. Noe Acache: No, sorry, I was about to conclude anyway. So just to wrap it up, so we got some good models without any fine tuning. With the model, we tried to overcome them, to overcome these limitations we still faced. For MS search, fine tuning is required at the moment. There's no really any other way to overcome it otherwise. While for tech search, fine tuning is not really necessary, it's more like tricks which are required about using eBrid search, using better models, et cetera. So two kind of approaches, Qdrant really made a lot of things easy. For instance, I love the feature where you can use the database as a disk file. Noe Acache: You can even also use it in memory for CI integration and stuff. But since for all my experimentations, et cetera, I won't use it as a disk file because it's much easier to play with. I just like this feature. And then it allows to use the same tool for your experiment and in production. When I was playing with milverse, I had to use different tools for experimentation and for the database in production, which was making the technical stock a bit more complex. Sparse vector for Tfedef, as I was mentioning, which allows to search based on keywords to make your retrieval much better. Manage deployment again, we really struggle with the deployment of the, I mean, the DevOps team really struggled with the deployment of the milverse. And I feel like in most cases, except if you have some security requirements, it will be much cheaper to use the managed deployments rather than paying dev costs. Noe Acache: And also with the free cloud and on these vectors, you can really do a lot of, at least start a lot of projects. And finally, the metadata filtering and indexing. So by the way, we went into a small trap. It's that indexing. It's recommended to index on your metadata before adding your vectors. Otherwise your performance may be impacted. So you may not retrieve the good vectors that you need. So it's interesting thing to take into consideration. Noe Acache: I know that metadata filtering is something quite hard to do for vector database, so I don't really know how it works, but I assume there is a good reason for that. And finally, as I was mentioning before, in my view, new types of models are needed to answer industrial needs. So the model we are talking about, tech guidance to make better image embeddings and automatic chunking, like some kind of algorithm and model which will automatically chunk your documents appropriately. So thank you very much. If you still have questions, I'm happy to answer them. Here are my social media. If you want to reach me out afterwards, twitch out afterwards, and all my writing and talks are gathered here if you're interested. Demetrios: Oh, I like how you did that. There is one question from Tom again, asking about if you did anything to handle images and tables within the documentation when you were doing those rags. 
Noe Acache: No, I did not do anything for the images and for the tables. It depends when they are well structured. I kept them because the model manages to understand them. But for instance, we did a small pock for the medical company when he tried to integrate some external data source, which was a PDF, and we wanted to use it as an HTML to be able to display the HTML otherwise explained to you directly in the answer. So we converted the PDF to HTML and in this conversion, the tables were absolutely unreadable. So even after cleaning. So we did not include them in this case. Demetrios: Great. Well, dude, thank you so much for coming on here. And thank you all for joining us for yet another vector space talk. If you would like to come on to the vector space talk and share what you've been up to and drop some knowledge bombs on the rest of us, we'd love to have you. So please reach out to me. And I think that is it for today. Noe, this was awesome, man. I really appreciate you doing this. Noe Acache: Thank you, Demetrius. Have a nice day. Demetrios: We'll see you all later. Bye.
blog/vector-image-search-rag-vector-space-talk-008.md
--- title: "Qdrant 1.10 - Universal Query, Built-in IDF & ColBERT Support" draft: false short_description: "Single search API. Server-side IDF. Native multivector support." description: "Consolidated search API, built-in IDF, and native multivector support. " preview_image: /blog/qdrant-1.10.x/social_preview.png social_preview_image: /blog/qdrant-1.10.x/social_preview.png date: 2024-07-01T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - ColBERT late interaction - BM25 algorithm - search API - new features --- [Qdrant 1.10.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.10.0) This version introduces some major changes, so let's dive right in: **Universal Query API:** All search APIs, including Hybrid Search, are now in one Query endpoint.</br> **Built-in IDF:** We added the IDF mechanism to Qdrant's core search and indexing processes.</br> **Multivector Support:** Native support for late interaction ColBERT is accessible via Query API. ## One Endpoint for All Queries **Query API** will consolidate all search APIs into a single request. Previously, you had to work outside of the API to combine different search requests. Now these approaches are reduced to parameters of a single request, so you can avoid merging individual results. You can now configure the Query API request with the following parameters: |Parameter|Description| |-|-| |no parameter|Returns points by `id`| |`nearest`|Queries nearest neighbors ([Search](/documentation/concepts/search/))| |`fusion`|Fuses sparse/dense prefetch queries ([Hybrid Search](/documentation/concepts/hybrid-queries/#hybrid-search))| |`discover`|Queries `target` with added `context` ([Discovery](/documentation/concepts/explore/#discovery-api))| |`context` |No target with `context` only ([Context](/documentation/concepts/explore/#context-search))| |`recommend`|Queries against `positive`/`negative` examples. ([Recommendation](/documentation/concepts/explore/#recommendation-api))| |`order_by`|Orders results by [payload field](/documentation/concepts/hybrid-queries/#re-ranking-with-payload-values)| For example, you can configure Query API to run [Discovery search](/documentation/concepts/explore/#discovery-api). Let's see how that looks: ```http POST collections/{collection_name}/points/query { "query": { "discover": { "target": <vector_input>, "context": [ { "positive": <vector_input>, "negative": <vector_input> } ] } } } ``` We will be publishing code samples in [docs](/documentation/concepts/hybrid-queries/) and our new [API specification](http://api.qdrant.tech).</br> *If you need additional support with this new method, our [Discord](https://qdrant.to/discord) on-call engineers can help you.* ### Native Hybrid Search Support Query API now also natively supports **sparse/dense fusion**. Up to this point, you had to combine the results of sparse and dense searches on your own. This is now sorted on the back-end, and you only have to configure them as basic parameters for Query API. 
```http POST /collections/{collection_name}/points/query { "prefetch": [ { "query": { "indices": [1, 42], // <┐ "values": [0.22, 0.8] // <┴─sparse vector }, "using": "sparse", "limit": 20 }, { "query": [0.01, 0.45, 0.67, ...], // <-- dense vector "using": "dense", "limit": 20 } ], "query": { "fusion": "rrf" }, // <--- reciprocal rank fusion "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.query_points( collection_name="{collection_name}", prefetch=[ models.Prefetch( query=models.SparseVector(indices=[1, 42], values=[0.22, 0.8]), using="sparse", limit=20, ), models.Prefetch( query=[0.01, 0.45, 0.67], using="dense", limit=20, ), ], query=models.FusionQuery(fusion=models.Fusion.RRF), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.query("{collection_name}", { prefetch: [ { query: { values: [0.22, 0.8], indices: [1, 42], }, using: 'sparse', limit: 20, }, { query: [0.01, 0.45, 0.67], using: 'dense', limit: 20, }, ], query: { fusion: 'rrf', }, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{Fusion, PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url("http://localhost:6334").build()?; client.query( QueryPointsBuilder::new("{collection_name}") .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest([(1, 0.22), (42, 0.8)].as_slice())) .using("sparse") .limit(20u64) ) .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using("dense") .limit(20u64) ) .query(Query::new_fusion(Fusion::Rrf)) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import java.util.List; import static io.qdrant.client.QueryFactory.fusion; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.Fusion; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client.queryAsync( QueryPoints.newBuilder() .setCollectionName("{collection_name}") .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.22f, 0.8f), List.of(1, 42))) .setUsing("sparse") .setLimit(20) .build()) .addPrefetch(PrefetchQuery.newBuilder() .setQuery(nearest(List.of(0.01f, 0.45f, 0.67f))) .setUsing("dense") .setLimit(20) .build()) .setQuery(fusion(Fusion.RRF)) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.QueryAsync( collectionName: "{collection_name}", prefetch: new List < PrefetchQuery > { new() { Query = new(float, uint)[] { (0.22f, 1), (0.8f, 42), }, Using = "sparse", Limit = 20 }, new() { Query = new float[] { 0.01f, 0.45f, 0.67f }, Using = "dense", Limit = 20 } }, query: Fusion.Rrf ); ``` Query API can now pre-fetch vectors for requests, which means you can run queries sequentially within the same API call. There are a lot of options here, so you will need to define a strategy to merge these requests using new parameters. For example, you can now include **rescoring within Hybrid Search**, which can open the door to strategies like iterative refinement via matryoshka embeddings. 
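As a concrete illustration of prefetch-based rescoring, the sketch below chains a cheap candidate pass over a truncated matryoshka vector with a precise pass over the full-size vector. The collection name and the named vectors `mrl_small` and `full` are hypothetical, and the query values are dummies; treat this as a minimal sketch of the nesting rather than a drop-in recipe.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Assumed setup: each point stores two named vectors,
# "mrl_small" (truncated matryoshka embedding) and "full" (full-size embedding).
truncated_query = [0.01, 0.45, 0.67]          # dummy truncated query embedding
full_query = [0.01, 0.45, 0.67, 0.53, 0.81]   # dummy full-size query embedding

client.query_points(
    collection_name="{collection_name}",
    prefetch=models.Prefetch(
        query=truncated_query,  # cheap, wide candidate generation
        using="mrl_small",
        limit=1000,
    ),
    query=full_query,           # precise rescoring of the 1000 candidates
    using="full",
    limit=10,
)
```

The same nesting works with a sparse/dense fusion prefetch in place of the truncated vector, which is how rescoring can be layered on top of Hybrid Search.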
*To learn more about this, read the [Query API documentation](/documentation/concepts/search/#query-api).* ## Inverse Document Frequency [IDF] IDF is a critical component of the **TF-IDF (Term Frequency-Inverse Document Frequency)** weighting scheme used to evaluate the importance of a word in a document relative to a collection of documents (corpus). There are various ways in which IDF might be calculated, but the most commonly used formula is: $$ \text{IDF}(q_i) = \ln \left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\right) $$ Where:</br> `N` is the total number of documents in the collection. </br> `n(q_i)` is the number of documents containing non-zero values for the given vector element. This variant is also used in BM25, whose support was heavily requested by our users. We decided to move the IDF calculation into the Qdrant engine itself. This type of separation allows streaming updates of the sparse embeddings while keeping the IDF calculation up-to-date. The values of IDF previously had to be calculated using all the documents on the client side. However, now that Qdrant does it out of the box, you won't need to implement it anywhere else and recompute the value if some documents are removed or newly added. You can enable the IDF modifier in the collection configuration: ```http PUT /collections/{collection_name} { "sparse_vectors": { "text": { "modifier": "idf" } } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_collection( collection_name="{collection_name}", sparse_vectors={ "text": models.SparseVectorParams( modifier=models.Modifier.IDF, ), }, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { sparse_vectors: { "text": { modifier: "idf" } } }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{CreateCollectionBuilder, sparse_vectors_config::SparseVectorsConfigBuilder, Modifier, SparseVectorParamsBuilder}; let client = Qdrant::from_url("http://localhost:6334").build()?; let mut config = SparseVectorsConfigBuilder::default(); config.add_named_vector_params( "text", SparseVectorParamsBuilder::default().modifier(Modifier::Idf), ); client .create_collection( CreateCollectionBuilder::new("{collection_name}") .sparse_vectors_config(config), ) .await?; ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Modifier; import io.qdrant.client.grpc.Collections.SparseVectorConfig; import io.qdrant.client.grpc.Collections.SparseVectorParams; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setSparseVectorsConfig( SparseVectorConfig.newBuilder() .putMap("text", SparseVectorParams.newBuilder().setModifier(Modifier.Idf).build())) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", sparseVectorsConfig: ("text", new SparseVectorParams { Modifier = Modifier.Idf, }) ); ``` ### IDF as Part of BM42 This quarter, Qdrant also introduced BM42, a novel algorithm that combines the IDF element of BM25 with transformer-based attention matrices to improve
text retrieval. It utilizes attention matrices from your embedding model to determine the importance of each token in the document based on the attention value it receives. We've prepared the standard `all-MiniLM-L6-v2` Sentence Transformer so [it outputs the attention values](https://huggingface.co/Qdrant/all_miniLM_L6_v2_with_attentions). Still, you can use virtually any model of your choice, as long as you have access to its parameters. This is just another reason to stick with open source technologies over proprietary systems. In practical terms, the BM42 method addresses the tokenization issues and computational costs associated with SPLADE. The model is both efficient and effective across different document types and lengths, offering enhanced search performance by leveraging the strengths of both BM25 and modern transformer techniques. > To learn more about IDF and BM42, read our [dedicated technical article](/articles/bm42/). **You can expect BM42 to excel in scalable RAG-based scenarios where short texts are more common.** Document inference speed is much higher with BM42, which is critical for large-scale applications such as search engines, recommendation systems, and real-time decision-making systems. ## Multivector Support We are adding native support for multivector search that is compatible, e.g., with the late-interaction [ColBERT](https://github.com/stanford-futuredata/ColBERT) model. If you are working with high-dimensional similarity searches, **ColBERT is highly recommended as a reranking step in the Universal Query search.** You will experience better quality vector retrieval since ColBERT’s approach allows for deeper semantic understanding. This model retains contextual information during query-document interaction, leading to better relevance scoring. In terms of efficiency and scalability benefits, documents and queries will be encoded separately, which gives an opportunity for pre-computation and storage of document embeddings for faster retrieval. **Note:** *This feature supports all the original quantization compression methods, just the same as the regular search method.* **Run a query with ColBERT vectors:** Query API can handle exceedingly complex requests. The following example prefetches 1000 entries most similar to the given query using the `mrl_byte` named vector, then reranks them to get the best 100 matches with `full` named vector and eventually reranks them again to extract the top 10 results with the named vector called `colbert`. A single API call can now implement complex reranking schemes. ```http POST /collections/{collection_name}/points/query { "prefetch": { "prefetch": { "query": [1, 23, 45, 67], // <------ small byte vector "using": "mrl_byte", "limit": 1000 }, "query": [0.01, 0.45, 0.67, ...], // <-- full dense vector "using": "full", "limit": 100 }, "query": [ // <─┐ [0.1, 0.2, ...], // < │ [0.2, 0.1, ...], // < ├─ multi-vector [0.8, 0.9, ...] 
// < │ ], // <─┘ "using": "colbert", "limit": 10 } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.query_points( collection_name="{collection_name}", prefetch=models.Prefetch( prefetch=models.Prefetch(query=[1, 23, 45, 67], using="mrl_byte", limit=1000), query=[0.01, 0.45, 0.67], using="full", limit=100, ), query=[ [0.1, 0.2], [0.2, 0.1], [0.8, 0.9], ], using="colbert", limit=10, ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.query("{collection_name}", { prefetch: { prefetch: { query: [1, 23, 45, 67], using: 'mrl_byte', limit: 1000 }, query: [0.01, 0.45, 0.67], using: 'full', limit: 100, }, query: [ [0.1, 0.2], [0.2, 0.1], [0.8, 0.9], ], using: 'colbert', limit: 10, }); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{PrefetchQueryBuilder, Query, QueryPointsBuilder}; let client = Qdrant::from_url("http://localhost:6334").build()?; client.query( QueryPointsBuilder::new("{collection_name}") .add_prefetch(PrefetchQueryBuilder::default() .add_prefetch(PrefetchQueryBuilder::default() .query(Query::new_nearest(vec![1.0, 23.0, 45.0, 67.0])) .using("mrl_byte") .limit(1000u64) ) .query(Query::new_nearest(vec![0.01, 0.45, 0.67])) .using("full") .limit(100u64) ) .query(Query::new_nearest(vec![ vec![0.1, 0.2], vec![0.2, 0.1], vec![0.8, 0.9], ])) .using("colbert") .limit(10u64) ).await?; ``` ```java import static io.qdrant.client.QueryFactory.nearest; import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Points.PrefetchQuery; import io.qdrant.client.grpc.Points.QueryPoints; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .queryAsync( QueryPoints.newBuilder() .setCollectionName("{collection_name}") .addPrefetch( PrefetchQuery.newBuilder() .addPrefetch( PrefetchQuery.newBuilder() .setQuery(nearest(1, 23, 45, 67)) // <------------- small byte vector .setUsing("mrl_byte") .setLimit(1000) .build()) .setQuery(nearest(0.01f, 0.45f, 0.67f)) // <-- dense vector .setUsing("full") .setLimit(100) .build()) .setQuery( nearest( new float[][] { {0.1f, 0.2f}, // <─┐ {0.2f, 0.1f}, // < ├─ multi-vector {0.8f, 0.9f} // < ┘ })) .setUsing("colbert") .setLimit(10) .build()) .get(); ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.QueryAsync( collectionName: "{collection_name}", prefetch: new List <PrefetchQuery> { new() { Prefetch = { new List <PrefetchQuery> { new() { Query = new float[] { 1, 23, 45, 67 }, // <------------- small byte vector Using = "mrl_byte", Limit = 1000 }, } }, Query = new float[] {0.01f, 0.45f, 0.67f}, // <-- dense vector Using = "full", Limit = 100 } }, query: new float[][] { [0.1f, 0.2f], // <─┐ [0.2f, 0.1f], // < ├─ multi-vector [0.8f, 0.9f] // < ┘ }, usingVector: "colbert", limit: 10 ); ``` **Note:** *The multivector feature is not only useful for ColBERT; it can also be used in other ways.*</br> For instance, in e-commerce, you can use multi-vector to store multiple images of the same item. This serves as an alternative to the [group-by](/documentation/concepts/search/#grouping-api) method. 
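Not shown in the examples above is the collection setup that multivector search relies on. The following is a minimal Python sketch, assuming the client's multivector configuration (`models.MultiVectorConfig` with the MaxSim comparator) as described in Qdrant's documentation; the collection name, the 128-dimension size, and the token vectors are placeholders for whatever your ColBERT-style model produces.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# A named vector configured as a multivector: each point stores a list of
# per-token embeddings, compared with the MaxSim (late interaction) operator.
client.create_collection(
    collection_name="{collection_name}",
    vectors_config={
        "colbert": models.VectorParams(
            size=128,  # per-token embedding size, model dependent
            distance=models.Distance.COSINE,
            multivector_config=models.MultiVectorConfig(
                comparator=models.MultiVectorComparator.MAX_SIM
            ),
        ),
    },
)

# Each point then carries a matrix (one row per token) instead of a single vector.
client.upsert(
    collection_name="{collection_name}",
    points=[
        models.PointStruct(
            id=1,
            vector={"colbert": [[0.1] * 128, [0.2] * 128, [0.3] * 128]},  # dummy token vectors
        ),
    ],
)
```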
## Sparse Vectors Compression In version 1.9, we introduced the `uint8` [vector datatype](/documentation/concepts/vectors/#datatypes) for sparse vectors, in order to support pre-quantized embeddings from companies like JinaAI and Cohere. This time, we are introducing a new datatype **for both sparse and dense vectors**, as well as a different way of **storing** these vectors. **Datatype:** Sparse and dense vectors were previously represented in larger `float32` values, but now they can be turned to the `float16`. `float16` vectors have a lower precision compared to `float32`, which means that there is less numerical accuracy in the vector values - but this is negligible for practical use cases. These vectors will use half the memory of regular vectors, which can significantly reduce the footprint of large vector datasets. Operations can be faster due to reduced memory bandwidth requirements and better cache utilization. This can lead to faster vector search operations, especially in memory-bound scenarios. When creating a collection, you need to specify the `datatype` upfront: ```http PUT /collections/{collection_name} { "vectors": { "size": 1024, "distance": "Cosine", "datatype": "float16" } } ``` ```python from qdrant_client import QdrantClient, models client = QdrantClient(url="http://localhost:6333") client.create_collection( "{collection_name}", vectors_config=models.VectorParams( size=1024, distance=models.Distance.COSINE, datatype=models.Datatype.FLOAT16 ), ) ``` ```typescript import { QdrantClient } from "@qdrant/js-client-rest"; const client = new QdrantClient({ host: "localhost", port: 6333 }); client.createCollection("{collection_name}", { vectors: { size: 1024, distance: "Cosine", datatype: "float16" } }); ``` ```java import io.qdrant.client.QdrantClient; import io.qdrant.client.QdrantGrpcClient; import io.qdrant.client.grpc.Collections.CreateCollection; import io.qdrant.client.grpc.Collections.Datatype; import io.qdrant.client.grpc.Collections.Distance; import io.qdrant.client.grpc.Collections.VectorParams; import io.qdrant.client.grpc.Collections.VectorsConfig; QdrantClient client = new QdrantClient(QdrantGrpcClient.newBuilder("localhost", 6334, false).build()); client .createCollectionAsync( CreateCollection.newBuilder() .setCollectionName("{collection_name}") .setVectorsConfig(VectorsConfig.newBuilder() .setParams(VectorParams.newBuilder() .setSize(1024) .setDistance(Distance.Cosine) .setDatatype(Datatype.Float16) .build()) .build()) .build()) .get(); ``` ```rust use qdrant_client::Qdrant; use qdrant_client::qdrant::{CreateCollectionBuilder, Datatype, Distance, VectorParamsBuilder}; let client = Qdrant::from_url("http://localhost:6334").build()?; client .create_collection( CreateCollectionBuilder::new("{collection_name}").vectors_config( VectorParamsBuilder::new(1024, Distance::Cosine).datatype(Datatype::Float16), ), ) .await?; ``` ```csharp using Qdrant.Client; using Qdrant.Client.Grpc; var client = new QdrantClient("localhost", 6334); await client.CreateCollectionAsync( collectionName: "{collection_name}", vectorsConfig: new VectorParams { Size = 1024, Distance = Distance.Cosine, Datatype = Datatype.Float16 } ); ``` **Storage:** On the backend, we implemented bit packing to minimize the bits needed to store data, crucial for handling sparse vectors in applications like machine learning and data compression. For sparse vectors with mostly zeros, this focuses on storing only the indices and values of non-zero elements. 
You will benefit from more compact storage and higher processing efficiency. This also leads to smaller datasets, faster processing, and lower storage costs.

## New Rust Client

Qdrant’s Rust client has been fully reshaped. It is now more accessible and easier to use. We have focused on putting together a minimalistic API interface. All operations and their types now use the builder pattern, providing an easy and extensible interface, preventing breakage with future updates. See the Rust [ColBERT query](#multivector-support) as a great example.

Additionally, Rust supports safe concurrent execution, which is crucial for handling multiple simultaneous requests efficiently.

Documentation got a significant improvement as well. It is much better organized and provides usage examples across the board. Everything links back to our main documentation, making it easier to navigate and find the information you need.

<p align="center"> Visit our <a href="https://docs.rs/qdrant-client/1.10/qdrant_client/">client</a> and <a href="https://docs.rs/qdrant-client/1.10/qdrant_client/struct.Qdrant.html">operations</a> documentation </p>

## S3 Snapshot Storage

Qdrant **Collections**, **Shards** and **Storage** can be backed up with [Snapshots](/documentation/concepts/snapshots/) and saved for recovery in case of data loss, or for data transfer purposes. These snapshots can be quite large, and the resources required to maintain them can result in higher costs. AWS S3 and other S3-compatible implementations like [min.io](https://min.io/) are a great low-cost alternative that can hold snapshots without incurring high costs. They are globally reliable, scalable and resistant to data loss.

You can configure S3 storage settings in the [config.yaml](https://github.com/qdrant/qdrant/blob/master/config/config.yaml), specifically with `snapshots_storage`.

For example, to use AWS S3:

```yaml
storage:
  snapshots_config:
    # Use 's3' to store snapshots on S3
    snapshots_storage: s3

    s3_config:
      # Bucket name
      bucket: your_bucket_here

      # Bucket region (e.g. eu-central-1)
      region: your_bucket_region_here

      # Storage access key
      # Can be specified either here or in the `AWS_ACCESS_KEY_ID` environment variable.
      access_key: your_access_key_here

      # Storage secret key
      # Can be specified either here or in the `AWS_SECRET_ACCESS_KEY` environment variable.
      secret_key: your_secret_key_here
```

*Read more about [S3 snapshot storage](/documentation/concepts/snapshots/#s3) and [configuration](/documentation/guides/configuration/).*

This integration allows for a more convenient distribution of snapshots. Users of **any S3-compatible object storage** can now benefit from other platform services, such as automated workflows and disaster recovery options. S3's encryption and access control ensure secure storage and regulatory compliance. Additionally, S3 supports performance optimization through various storage classes and efficient data transfer methods, enabling quick and effective snapshot retrieval and management.

## Issues API

The Issues API notifies you about potential performance issues and misconfigurations. This powerful new feature allows users (such as database admins) to efficiently manage and track issues directly within the system, ensuring smoother operations and quicker resolutions.

You can find the Issues button in the top right. When you click the bell icon, a sidebar will open to show ongoing issues.
![issues api](/blog/qdrant-1.10.x/issues.png)

## Minor Improvements

- Pre-configure collection parameters: quantization, vector storage, and replication factor - [#4299](https://github.com/qdrant/qdrant/pull/4299)
- Overwrite the global optimizer configuration for individual collections. This lets you separate indexing and searching roles within a single Qdrant cluster - [#4317](https://github.com/qdrant/qdrant/pull/4317)
- Delta encoding and bitpacking compression reduce memory consumption for sparse vectors by up to 75% - [#4253](https://github.com/qdrant/qdrant/pull/4253), [#4350](https://github.com/qdrant/qdrant/pull/4350)
blog/qdrant-1.10.x.md
--- draft: false title: Optimizing Semantic Search by Managing Multiple Vectors slug: storing-multiple-vectors-per-object-in-qdrant short_description: Qdrant's approach to storing multiple vectors per object, unraveling new possibilities in data representation and retrieval. description: Discover the power of vector storage optimization and learn how to efficiently manage multiple vectors per object for enhanced semantic search capabilities. preview_image: /blog/from_cms/andrey.vasnetsov_a_space_station_with_multiple_attached_modules_853a27c7-05c4-45d2-aebc-700a6d1e79d0.png date: 2022-10-05T10:05:43.329Z author: Kacper Łukawski featured: false tags: - Data Science - Neural Networks - Database - Search - Similarity Search --- # How to Optimize Vector Storage by Storing Multiple Vectors Per Object In a real case scenario, a single object might be described in several different ways. If you run an e-commerce business, then your items will typically have a name, longer textual description and also a bunch of photos. While cooking, you may care about the list of ingredients, and description of the taste but also the recipe and the way your meal is going to look. Up till now, if you wanted to enable [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) with multiple vectors per object, Qdrant would require you to create separate collections for each vector type, even though they could share some other attributes in a payload. However, since Qdrant 0.10 you are able to store all those vectors together in the same collection and share a single copy of the payload! Running the new version of Qdrant is as simple as it always was. By running the following command, you are able to set up a single instance that will also expose the HTTP API: ``` docker run -p 6333:6333 qdrant/qdrant:v0.10.1 ``` ## Creating a collection Adding new functionalities typically requires making some changes to the interfaces, so no surprise we had to do it to enable the multiple vectors support. Currently, if you want to create a collection, you need to define the configuration of all the vectors you want to store for each object. Each vector type has its own name and the distance function used to measure how far the points are. ```python from qdrant_client import QdrantClient from qdrant_client.http.models import VectorParams, Distance client = QdrantClient() client.create_collection( collection_name="multiple_vectors", vectors_config={ "title": VectorParams( size=100, distance=Distance.EUCLID, ), "image": VectorParams( size=786, distance=Distance.COSINE, ), } ) ``` In case you want to keep a single vector per collection, you can still do it without putting a name though. ```python client.create_collection( collection_name="single_vector", vectors_config=VectorParams( size=100, distance=Distance.COSINE, ) ) ``` All the search-related operations have slightly changed their interfaces as well, so you can choose which vector to use in a specific request. However, it might be easier to see all the changes by following an end-to-end Qdrant usage on a real-world example. ## Building service with multiple embeddings Quite a common approach to building search engines is to combine semantic textual capabilities with image search as well. For that purpose, we need a dataset containing both images and their textual descriptions. There are several datasets available with [MS_COCO_2017_URL_TEXT](https://huggingface.co/datasets/ChristophSchuhmann/MS_COCO_2017_URL_TEXT) being probably the simplest available. 
And because it’s available on HuggingFace, we can easily use it with their [datasets](https://huggingface.co/docs/datasets/index) library. ```python from datasets import load_dataset dataset = load_dataset("ChristophSchuhmann/MS_COCO_2017_URL_TEXT") ``` Right now, we have a dataset with a structure containing the image URL and its textual description in English. For simplicity, we can convert it to the DataFrame, as this structure might be quite convenient for future processing. ```python import pandas as pd dataset_df = pd.DataFrame(dataset["train"]) ``` The dataset consists of two columns: *TEXT* and *URL*. Thus, each data sample is described by two separate pieces of information and each of them has to be encoded with a different model. ## Processing the data with pretrained models Thanks to [embetter](https://github.com/koaning/embetter), we can reuse some existing pretrained models and use a convenient scikit-learn API, including pipelines. This library also provides some utilities to load the images, but only supports the local filesystem, so we need to create our own class that will download the file, given its URL. ```python from pathlib import Path from urllib.request import urlretrieve from embetter.base import EmbetterBase class DownloadFile(EmbetterBase): def __init__(self, out_dir: Path): self.out_dir = out_dir def transform(self, X, y=None): output_paths = [] for x in X: output_file = self.out_dir / Path(x).name urlretrieve(x, output_file) output_paths.append(str(output_file)) return output_paths ``` Now we’re ready to define the pipelines to process our images and texts using *all-MiniLM-L6-v2* and *vit_base_patch16_224* models respectively. First of all, let’s start with Qdrant configuration. ## Creating Qdrant collection We’re going to put two vectors per object (one for image and another one for text), so we need to create a collection with a configuration allowing us to do so. ```python from qdrant_client import QdrantClient from qdrant_client.http.models import VectorParams, Distance client = QdrantClient(timeout=None) client.create_collection( collection_name="ms-coco-2017", vectors_config={ "text": VectorParams( size=384, distance=Distance.EUCLID, ), "image": VectorParams( size=1000, distance=Distance.COSINE, ), }, ) ``` ## Defining the pipelines And since we have all the puzzles already in place, we can start the processing to convert raw data into the embeddings we need. The pretrained models come in handy. ```python from sklearn.pipeline import make_pipeline from embetter.grab import ColumnGrabber from embetter.vision import ImageLoader, TimmEncoder from embetter.text import SentenceEncoder output_directory = Path("./images") image_pipeline = make_pipeline( ColumnGrabber("URL"), DownloadFile(output_directory), ImageLoader(), TimmEncoder("vit_base_patch16_224"), ) text_pipeline = make_pipeline( ColumnGrabber("TEXT"), SentenceEncoder("all-MiniLM-L6-v2"), ) ``` Thanks to the scikit-learn API, we can simply call each pipeline on the created DataFrame and put created vectors into Qdrant to enable fast vector search. For convenience, we’re going to put the vectors as other columns in our DataFrame. ```python sample_df = dataset_df.sample(n=2000, random_state=643) image_vectors = image_pipeline.transform(sample_df) text_vectors = text_pipeline.transform(sample_df) sample_df["image_vector"] = image_vectors.tolist() sample_df["text_vector"] = text_vectors.tolist() ``` The created vectors might be easily put into Qdrant. 
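As a rough illustration (a minimal sketch only: it assumes the qdrant-client interface from this release and the DataFrame columns created above, and it ignores batching and error handling), the upload could look like this:

```python
from qdrant_client.http.models import PointStruct

points = [
    PointStruct(
        id=point_id,
        # Both named vectors are stored in the same point.
        vector={
            "text": row["text_vector"],
            "image": row["image_vector"],
        },
        # The payload is shared by both vectors.
        payload={"url": row["URL"], "text": row["TEXT"]},
    )
    for point_id, (_, row) in enumerate(sample_df.iterrows())
]

client.upsert(collection_name="ms-coco-2017", points=points)
```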
The full ingestion code is skipped here for the sake of simplicity, but if you are interested in the details, please check out the [Jupyter notebook](https://gist.github.com/kacperlukawski/961aaa7946f55110abfcd37fbe869b8f) going step by step.

## Searching with multiple vectors

If you decided to describe each object with several [neural embeddings](https://qdrant.tech/articles/neural-search-tutorial/), then at each search operation you need to provide the vector name along with the [vector embedding](https://qdrant.tech/articles/what-are-embeddings/), so the engine knows which one to use. The interface of the search operation is pretty straightforward and requires an instance of NamedVector.

```python
from qdrant_client.http.models import NamedVector

text_results = client.search(
    collection_name="ms-coco-2017",
    query_vector=NamedVector(
        name="text",
        vector=row["text_vector"],
    ),
    limit=5,
    with_vectors=False,
    with_payload=True,
)
```

If we, on the other hand, decided to search using the image embedding, then we just provide the vector name we have chosen while creating the collection. So instead of “text”, we would provide “image”, as this is how we configured it at the very beginning.

## The results: image vs text search

Since we have two different vectors describing each object, we can perform the search query using either of them. It shouldn’t be surprising, then, that the results are different depending on the chosen embedding method. The images below present the results returned by Qdrant for the image/text on the left-hand side.

### Image search

If we query the system using the image embedding, then it returns the following results:

![](/blog/from_cms/0_5nqlmjznjkvdrjhj.webp "Image search results")

### Text search

However, if we use the textual description embedding, then the results are slightly different:

![](/blog/from_cms/0_3sdgctswb99xtexl.webp "Text search results")

It is not surprising that the method used for creating the neural encoding plays an important role in the search process and its quality. If your data points might be described using several vectors, then the latest release of Qdrant gives you an opportunity to store them together and reuse the payloads, instead of creating several collections and querying them separately.

### Summary:

- Qdrant 0.10 introduces efficient vector storage optimization, allowing seamless management of multiple vectors per object within a single collection.
- This update streamlines semantic search capabilities by eliminating the need for separate collections for each vector type, enhancing search accuracy and performance.
- With Qdrant's new features, users can easily configure vector parameters, including size and distance functions, for each vector type, optimizing search results and user experience.

If you’d like to check out some other examples, please check out our [full notebook](https://gist.github.com/kacperlukawski/961aaa7946f55110abfcd37fbe869b8f) presenting the search results and the whole pipeline implementation.
blog/storing-multiple-vectors-per-object-in-qdrant.md
---
draft: false
title: "Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud"
short_description: "Empowering the world’s best companies in their AI journey."
description: "Empowering the world’s best companies in their AI journey."
preview_image: /blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha.png
date: 2024-04-11T00:01:00Z
author: Qdrant
featured: false
weight: 1012
tags:
  - Qdrant
  - Vector Database
---

[Aleph Alpha](https://aleph-alpha.com/) and Qdrant are on a joint mission to empower the world’s best companies in their AI journey. The launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) furthers this effort by ensuring complete data sovereignty and hosting security. This latest collaboration is all about giving enterprise customers complete transparency and sovereignty to make use of AI in their own environment. By using a hybrid cloud vector database, those looking to leverage vector search for their AI applications can now ensure their proprietary and customer data is completely secure.

Aleph Alpha’s state-of-the-art technology, offering unmatched quality and safety, caters perfectly to large-scale business applications and complex scenarios utilized by professionals across fields such as science, law, and security globally. Recognizing that these sophisticated use cases often demand comprehensive data processing capabilities beyond what standalone LLMs can provide, the collaboration between Aleph Alpha and Qdrant Hybrid Cloud introduces a robust platform. This platform empowers customers with full data sovereignty, enabling secure management of highly specific and sensitive information within their own infrastructure.

Together with Aleph Alpha, Qdrant Hybrid Cloud offers an ecosystem where individual components seamlessly integrate with one another. Qdrant's new Kubernetes-native design, coupled with Aleph Alpha's powerful technology, meets the needs of developers who are both prototyping and building production-level apps.

#### How Aleph Alpha and Qdrant Blend Data Control, Scalability, and European Standards

Building apps with Qdrant Hybrid Cloud and Aleph Alpha’s models leverages some common value propositions:

**Data Sovereignty:** Qdrant Hybrid Cloud is the first vector database that can be deployed anywhere, with complete database isolation, while still providing fully managed cluster management. Furthermore, as the best option for organizations that prioritize data sovereignty, Aleph Alpha offers foundation models which are aimed at serving regional use cases. Together, both products can be leveraged to keep highly specific data safe and isolated.

**Scalable Vector Search:** Once deployed to a customer’s host of choice, Qdrant Hybrid Cloud provides a fully managed vector database that lets users effortlessly scale the setup through vertical or horizontal scaling. Deployed in highly secure environments, this is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads.

**European Origins & Expertise**: With a strong presence in the European Union ecosystem, Aleph Alpha is ideally positioned to partner with European-based companies like Qdrant, providing local expertise and infrastructure that aligns with European regulatory standards.
#### Build a Data-Sovereign AI System With Qdrant Hybrid Cloud and Aleph Alpha’s Models ![hybrid-cloud-aleph-alpha-tutorial](/blog/hybrid-cloud-aleph-alpha/hybrid-cloud-aleph-alpha-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud and Aleph Alpha’s advanced models. #### Tutorial: Build a Region-Specific Contract Management System Learn how to develop an AI system that reads lengthy contracts and gives complex answers based on stored content. This system is completely hosted inside of Germany for GDPR compliance purposes. The tutorial shows how enterprises with a vast number of stored contract documents can leverage AI in a closed environment that doesn’t leave the hosting region, thus ensuring data sovereignty and security. [Try the Tutorial](/documentation/examples/rag-contract-management-stackit-aleph-alpha/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-aleph-alpha.md
--- title: "Introducing Qdrant Stars: Join Our Ambassador Program!" draft: false slug: qdrant-stars-announcement # Change this slug to your page slug if needed short_description: Qdrant Stars recognizes and supports key contributors to the Qdrant ecosystem through content creation and community leadership. # Change this description: Say hello to the first Qdrant Stars and learn more about our new ambassador program! preview_image: /blog/qdrant-stars-announcement/preview-image.png social_preview_image: /blog/qdrant-stars-announcement/preview-image.png date: 2024-05-19T11:57:37-03:00 author: Sabrina Aquino featured: false tags: - news - vector search - qdrant - ambassador program - community --- We're excited to introduce **Qdrant Stars**, our new ambassador program created to recognize and support Qdrant users making a strong impact in the AI and vector search space. Whether through innovative content, real-world applications tutorials, educational events, or engaging discussions, they are constantly making vector search more accessible and interesting to explore. ### 👋 Say hello to the first Qdrant Stars! Our inaugural Qdrant Stars are a diverse and talented lineup who have shown exceptional dedication to our community. You might recognize some of their names: <div style="display: flex; flex-direction: column;"> <div class="qdrant-stars"> <div style="display: flex; align-items: center;"> <h5>Robert Caulk</h5> <a href="https://www.linkedin.com/in/rcaulk/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Robert LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/robert-caulk-profile.jpeg" alt="Robert Caulk" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Robert is working with a team on <a href="https://asknews.app">AskNews</a> to adaptively enrich, index, and report on over 1 million news articles per day. His team maintains an open-source tool geared toward cluster orchestration <a href="https://flowdapt.ai">Flowdapt</a>, which moves data around highly parallelized production environments. This is why Robert and his team rely on Qdrant for low-latency, scalable, hybrid search across dense and sparse vectors in asynchronous environments.</p> </div> </div> <blockquote> I am interested in brainstorming innovative ways to interact with Qdrant vector databases and building presentations that show the power of coupling Flowdapt with Qdrant for large-scale production GenAI applications. I look forward to networking with Qdrant experts and users so that I can learn from their experience. </blockquote> <div style="display: flex; align-items: center;"> <h5>Joshua Mo</h5> <a href="https://www.linkedin.com/in/joshua-mo-4146aa220/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Josh LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a> </div> <div style="display: flex; align-items: center; margin-bottom: 20px;"> <img src="/blog/qdrant-stars-announcement/Josh-Mo-profile.jpg" alt="Josh" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;"> <div> <p>Josh is a Rust developer and DevRel Engineer at <a href="https://shuttle.rs">Shuttle</a>, assisting with user engagement and being a point of contact for first-line information within the community. 
He's often writing educational content that combines JavaScript with Rust and is a coach at Codebar, which is a charity that runs free programming workshops for minority groups within tech.</p>
</div>
</div>
<blockquote>
I am excited about getting access to Qdrant's new features and contributing to the AI community by demonstrating how those features can be leveraged for production environments.
</blockquote>
<div style="display: flex; align-items: center;">
<h5>Nicholas Khami</h5>
<a href="https://www.linkedin.com/in/nicholas-khami-5a0a7a135/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Nick LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a>
</div>
<div style="display: flex; align-items: center; margin-bottom: 20px;">
<img src="/blog/qdrant-stars-announcement/ai-headshot-Nick-K.jpg" alt="Nick" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;">
<div>
<p>Nick is a founder and product engineer at <a href="https://trieve.ai/">Trieve</a> and has been using Qdrant since late 2022. He has a low-level understanding of the Qdrant API, especially the Rust client, and knows a lot about how to make the most of Qdrant on an application level.</p>
</div>
</div>
<blockquote>
I'm looking forward to helping folks use lesser-known features to enhance and make their projects better!
</blockquote>
<div style="display: flex; align-items: center;">
<h5>Owen Colegrove</h5>
<a href="https://www.linkedin.com/in/owencolegrove/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Owen LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a>
</div>
<div style="display: flex; align-items: center; margin-bottom: 20px;">
<img src="/blog/qdrant-stars-announcement/Prof-Owen-Colegrove.jpeg" alt="Owen Colegrove" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;">
<div>
<p>Owen Colegrove is the Co-Founder of <a href="https://www.sciphi.ai/">SciPhi</a>, making it easy to build, deploy, and scale RAG systems using Qdrant vector search technology. He has a Ph.D. in Physics and was previously a Quantitative Strategist at Citadel and a Researcher at CERN.</p>
</div>
</div>
<blockquote>
I'm excited about working together with Qdrant!
</blockquote>
<div style="display: flex; align-items: center;">
<h5>Kameshwara Pavan Kumar Mantha</h5>
<a href="https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Pavan LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a>
</div>
<div style="display: flex; align-items: center; margin-bottom: 20px;">
<img src="/blog/qdrant-stars-announcement/pic-Kameshwara-Pavan-Kumar-Mantha2.jpeg" alt="Kameshwara Pavan" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;">
<div>
<p>Kameshwara Pavan is an expert with 14 years of extensive experience in full stack development, cloud solutions, and AI. Specializing in Generative AI and LLMs, Pavan has established himself as a leader in these cutting-edge domains. He holds a Master's in Data Science and a Master's in Computer Applications, and is currently pursuing his PhD.</p>
</div>
</div>
<blockquote>
Outside of my professional pursuits, I'm passionate about sharing my knowledge through technical blogging, engaging in technical meetups, and staying active with cycling.
I admire the groundbreaking work Qdrant is doing in the industry, and I'm eager to collaborate and learn from the team that drives such exceptional advancements.
</blockquote>
<div style="display: flex; align-items: center;">
<h5>Niranjan Akella</h5>
<a href="https://www.linkedin.com/in/niranjanakella/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Niranjan LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a>
</div>
<div style="display: flex; align-items: center; margin-bottom: 20px;">
<img src="/blog/qdrant-stars-announcement/nj-Niranjan-Akella.png" alt="Niranjan Akella" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;">
<div>
<p>Niranjan is an AI/ML Engineer at <a href="https://www.genesys.com/">Genesys</a> who specializes in building and deploying AI models such as LLMs, Diffusion Models, and Vision Models at scale. He actively shares his projects through content creation and is passionate about applied research, developing custom real-time applications that serve a greater purpose.</p>
</div>
</div>
<blockquote>
I am a scientist by heart and an AI engineer by profession. I'm always armed to take a leap of faith into the impossible to become the impossible. I'm excited to explore and venture into Qdrant Stars with some support to build a broader community and develop a sense of completeness among like-minded people.
</blockquote>
<div style="display: flex; align-items: center;">
<h5>Bojan Jakimovski</h5>
<a href="https://www.linkedin.com/in/bojan-jakimovski/" target="_blank"><img src="/blog/qdrant-stars-announcement/In-Blue-40.png" alt="Bojan LinkedIn" style="margin-left:10px; width: 20px; height: 20px;"></a>
</div>
<div style="display: flex; align-items: center; margin-bottom: 20px;">
<img src="/blog/qdrant-stars-announcement/Bojan-preview.jpeg" alt="Bojan Jakimovski" style="width: 200px; height: 200px; object-fit: cover; object-position: center; margin-right: 20px; margin-top: 20px;">
<div>
<p>Bojan is an Advanced Machine Learning Engineer at <a href="https://www.loka.com/">Loka</a>, currently pursuing a Master’s Degree focused on applying AI in Healthcare. He is specializing in Dedicated Computer Systems, with a passion for various technology fields.</p>
</div>
</div>
<blockquote>
I'm really excited to show the power of Qdrant as a vector database, especially in fields where accessing the right data in a fast and efficient way is a must, such as Healthcare and Medicine.
</blockquote>
</div>

We are happy to welcome this group of people who are deeply committed to advancing vector search technology. We look forward to supporting their vision, and helping them make a bigger impact on the community. You can find and chat with them at our [Discord Community](https://discord.gg/qdrant).

### Why become a Qdrant Star?

There are many ways you can benefit from the Qdrant Star Program. Here are just a few:

##### Exclusive rewards programs

Celebrate top contributors monthly with special rewards, including exclusive swag and monetary prizes. Quarterly awards for 'Most Innovative Content' and 'Best Tutorial' offer additional prizes.

##### Early access to new features

Be the first to explore and write about our latest features and beta products. Participate in product meetings where your ideas and suggestions can directly influence our roadmap.

##### Conference support

We love seeing our stars on stage!
If you're planning to attend and speak about Qdrant at conferences, we've got you covered. Receive presentation templates, mentorship, and educational materials to help deliver standout conference presentations, with travel expenses covered.

##### Qdrant Certification

End the program as a certified Qdrant ambassador and vector search specialist, with provided training resources and a certification test to showcase your expertise.

### What do Qdrant Stars do?

As a Qdrant Star, you'll share your knowledge with the community through articles, blogs, tutorials, or demos that highlight the power and versatility of vector search technology - in your own creative way. You'll be a friendly face and a trusted expert in the community, sparking discussions on topics you love and keeping our community active and engaged.

Love organizing events? You'll have the chance to host meetups, workshops, and other educational gatherings, with all the promotional and logistical support you need to make them a hit. But if large conferences are your thing, we’ll provide the resources and cover your travel expenses so you can focus on delivering an outstanding presentation.

You'll also have a say in the Qdrant roadmap by giving feedback on new features and participating in product meetings. Qdrant Stars are constantly contributing to the growth and value of the vector search ecosystem.

### How to join the Qdrant Stars Program

Are you interested in becoming a Qdrant Star? We're on the lookout for individuals who are passionate about vector search technology and looking to make an impact in the AI community. If you have a strong understanding of vector search technologies, enjoy creating content and speaking at conferences, and actively engage with our community, don't hesitate to apply. We look forward to potentially welcoming you as our next Qdrant Star.

[Apply here!](https://forms.gle/q4fkwudDsy16xAZk8) Share your journey with vector search technologies and how you plan to contribute further.

#### Nominate a Qdrant Star

Do you know someone who could be our next Qdrant Star? Please submit your nomination through our [nomination form](https://forms.gle/n4zv7JRkvnp28qv17), explaining why they're a great fit. Your recommendation could help us find the next standout ambassador.

#### Learn More

For detailed information about the program's benefits, activities, and perks, refer to the [Qdrant Stars Handbook](https://qdrant.github.io/qdrant-stars-handbook/). To connect with current Stars, ask questions, and stay updated on the latest news and events at Qdrant, [join our Discord community](http://discord.gg/qdrant).
blog/qdrant-stars-announcement copy.md
--- title: "What is Vector Similarity? Understanding its Role in AI Applications." draft: false short_description: "An in-depth exploration of vector similarity and its applications in AI." description: "Discover the significance of vector similarity in AI applications and how our vector database revolutionizes similarity search technology for enhanced performance and accuracy." preview_image: /blog/what-is-vector-similarity/social_preview.png social_preview_image: /blog/what-is-vector-similarity/social_preview.png date: 2024-02-24T00:00:00-08:00 author: Qdrant Team featured: false tags: - vector search - vector similarity - similarity search - embeddings --- # Understanding Vector Similarity: Powering Next-Gen AI Applications A core function of a wide range of AI applications is to first understand the *meaning* behind a user query, and then provide *relevant* answers to the questions that the user is asking. With increasingly advanced interfaces and applications, this query can be in the form of language, or an image, an audio, video, or other forms of *unstructured* data. On an ecommerce platform, a user can, for instance, try to find ‘clothing for a trek’, when they actually want results around ‘waterproof jackets’, or ‘winter socks’. Keyword, or full-text, or even synonym search would fail to provide any response to such a query. Similarly, on a music app, a user might be looking for songs that sound similar to an audio clip they have heard. Or, they might want to look up furniture that has a similar look as the one they saw on a trip. ## How Does Vector Similarity Work? So, how does an algorithm capture the essence of a user’s query, and then unearth results that are relevant? At a high level, here’s how: - Unstructured data is first converted into a numerical representation, known as vectors, using a deep-learning model. The goal here is to capture the ‘semantics’ or the key features of this data. - The vectors are then stored in a vector database, along with references to their original data. - When a user performs a query, the query is first converted into its vector representation using the same model. Then search is performed using a metric, to find other vectors which are closest to the query vector. - The list of results returned corresponds to the vectors that were found to be the closest. At the heart of all such searches lies the concept of *vector similarity*, which gives us the ability to measure how closely related two data points are, how similar or dissimilar they are, or find other related data points. In this document, we will deep-dive into the essence of vector similarity, study how vector similarity search is used in the context of AI, look at some real-world use cases and show you how to leverage the power of vector similarity and vector similarity search for building AI applications. ## **Understanding Vectors, Vector Spaces and Vector Similarity** ML and deep learning models require numerical data as inputs to accomplish their tasks. Therefore, when working with non-numerical data, we first need to convert them into a numerical representation that captures the key features of that data. This is where vectors come in. A vector is a set of numbers that represents data, which can be text, image, or audio, or any multidimensional data. Vectors reside in a high-dimensional space, the vector space, where each dimension captures a specific aspect or feature of the data. 
{{< figure width=80% src=/blog/what-is-vector-similarity/working.png caption="Working" >}} The number of dimensions of a vector can range from tens or hundreds to thousands, and each dimension is stored as the element of an array. Vectors are, therefore, an array of numbers of fixed length, and in their totality, they encode the key features of the data they represent. Vector embeddings are created by AI models, a process known as vectorization. They are then stored in vector stores like Qdrant, which have the capability to rapidly search through vector space, and find similar or dissimilar vectors, cluster them, find related ones, or even the ones which are complete outliers. For example, in the case of text data, “coat” and “jacket” have similar meaning, even though the words are completely different. Vector representations of these two words should be such that they lie close to each other in the vector space. The process of measuring their proximity in vector space is vector similarity. Vector similarity, therefore, is a measure of how closely related two data points are in a vector space. It quantifies how alike or different two data points are based on their respective vector representations. Suppose we have the words "king", "queen" and “apple”. Given a model, words with similar meanings have vectors that are close to each other in the vector space. Vector representations of “king” and “queen” would be, therefore, closer together than "king" and "apple", or “queen” and “apple” due to their semantic relationship. Vector similarity is how you calculate this. An extremely powerful aspect of vectors is that they are not limited to representing just text, image or audio. In fact, vector representations can be created out of any kind of data. You can create vector representations of 3D models, for instance. Or for video clips, or molecular structures, or even [protein sequences](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-3220-8). There are several methodologies through which vectorization is performed. In creating vector representations of text, for example, the process involves analyzing the text for its linguistic elements using a transformer model. These models essentially learn to capture the essence of the text by dissecting its language components. ## **How Is Vector Similarity Calculated?** There are several ways to calculate the similarity (or distance) between two vectors, which we call metrics. The most popular ones are: **Dot Product**: Obtained by multiplying corresponding elements of the vectors and then summing those products. A larger dot product indicates a greater degree of similarity. **Cosine Similarity**: Calculated using the dot product of the two vectors divided by the product of their magnitudes (norms). Cosine similarity of 1 implies that the vectors are perfectly aligned, while a value of 0 indicates no similarity. A value of -1 means they are diametrically opposed (or dissimilar). **Euclidean Distance**: Assuming two vectors act like arrows in vector space, Euclidean distance calculates the length of the straight line connecting the heads of these two arrows. The smaller the Euclidean distance, the greater the similarity. **Manhattan Distance**: Also known as taxicab distance, it is calculated as the total distance between the two vectors in a vector space, if you follow a grid-like path. The smaller the Manhattan distance, the greater the similarity. 
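To make these definitions concrete, here is a small sketch of the four metrics computed with plain NumPy on two toy vectors (for intuition only; in practice, the vector database computes the chosen metric for you):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 2.0, 1.0])

dot_product = np.dot(a, b)                                       # larger value = more similar
cosine = dot_product / (np.linalg.norm(a) * np.linalg.norm(b))   # 1 = aligned, -1 = opposite
euclidean = np.linalg.norm(a - b)                                # smaller value = more similar
manhattan = np.sum(np.abs(a - b))                                # smaller value = more similar

print(dot_product, cosine, euclidean, manhattan)
```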
{{< figure width=80% src=/blog/what-is-vector-similarity/products.png caption="Metrics" >}}

As a rule of thumb, the choice of the best similarity metric depends on how the vectors were encoded. Of the four metrics, Cosine Similarity is the most popular.

## **The Significance of Vector Similarity**

Vector Similarity is vital in powering machine learning applications. By comparing the vector representation of a query to the vectors of all data points, vector similarity search algorithms can retrieve the most relevant vectors. This helps in building powerful similarity search and recommendation systems, and has numerous applications in image and text analysis, in natural language processing, and in other domains that deal with high-dimensional data.

Let’s look at some of the key ways in which vector similarity can be leveraged.

**Image Analysis**

Once images are converted to their vector representations, vector similarity can help create systems to identify, categorize, and compare them. This enables powerful reverse image search and facial recognition systems, and can also be used for object detection and classification.

**Text Analysis**

Vector similarity in text analysis helps in understanding and processing language data. Vectorized text can be used to build semantic search systems, document clustering, or plagiarism detection applications.

**Retrieval Augmented Generation (RAG)**

Vector similarity can help in representing and comparing linguistic features, from single words to entire documents. This can help build retrieval augmented generation (RAG) applications, where the data is retrieved based on user intent. It also enables nuanced language tasks such as sentiment analysis, synonym detection, language translation, and more.

**Recommender Systems**

By comparing user preference vectors with the item vectors in a dataset, vector similarity can help build semantic search and recommendation systems. This can be utilized in a range of domains such as e-commerce or OTT services, where it can help in suggesting relevant products, movies or songs.

Due to its varied applications, vector similarity has become a critical component in AI tooling. However, implementing it at scale, and in production settings, poses some hard problems. Below we will discuss some of them and explore how Qdrant helps solve these challenges.

## **Challenges with Vector Similarity Search**

The biggest challenge in this area comes from what researchers call the "[curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality)." Algorithms like k-d trees may work well for finding exact matches in low dimensions (in 2D or 3D space). However, when you jump to high-dimensional spaces (hundreds or thousands of dimensions, which is common with vector embeddings), these algorithms become impractical. Traditional search methods and OLTP or OLAP databases struggle to handle this curse of dimensionality efficiently.

This means that building production applications that leverage vector similarity involves navigating several challenges. Here are some of the key challenges to watch out for.

### Scalability

Various vector search algorithms were originally developed to handle datasets small enough to be accommodated entirely within the memory of a single computer. However, in real-world production settings, the datasets can encompass billions of high-dimensional vectors. As datasets grow, the storage and computational resources required to maintain and search through the vector space increase dramatically.
For building scalable applications, leveraging vector databases that allow for a distributed architecture and have the capabilities of sharding, partitioning and load balancing is crucial. ### Efficiency As the number of dimensions in vectors increases, algorithms that work in lower dimensions become less effective in measuring true similarity. This makes finding nearest neighbors computationally expensive and inaccurate in high-dimensional space. For efficient query processing, it is important to choose vector search systems which use indexing techniques that help speed up search through high-dimensional vector space, and reduce latency. ### Security For real-world applications, vector databases frequently house privacy-sensitive data. This can encompass Personally Identifiable Information (PII) in customer records, intellectual property (IP) like proprietary documents, or specialized datasets subject to stringent compliance regulations. For data security, the vector search system should offer features that prevent unauthorized access to sensitive information. Also, it should empower organizations to retain data sovereignty, ensuring their data complies with their own regulations and legal requirements, independent of the platform or the cloud provider. These are some of the many challenges that developers face when attempting to leverage vector similarity in production applications. To address these challenges head-on, we have made several design choices at Qdrant which help power vector search use-cases that go beyond simple CRUD applications. ## How Qdrant Solves Vector Similarity Search Challenges Qdrant is a highly performant and scalable vector search system, developed ground up in Rust. Qdrant leverages Rust’s famed memory efficiency and performance. It supports horizontal scaling, sharding, and replicas, and includes security features like role-based authentication. Additionally, Qdrant can be deployed in various environments, including [hybrid cloud setups](/hybrid-cloud/). Here’s how we have taken on some of the key challenges that vector search applications face in production. ### Efficiency Our [choice of Rust](/articles/why-rust/) significantly contributes to the efficiency of Qdrant’s vector similarity search capabilities. Rust’s emphasis on safety and performance, without the need for a garbage collector, helps with better handling of memory and resources. Rust is renowned for its performance and safety features, particularly in concurrent processing, and we leverage it heavily to handle high loads efficiently. Also, a key feature of Qdrant is that we leverage both vector and traditional indexes (payload index). This means that vector index helps speed up vector search, while traditional indexes help filter the results. The vector index in Qdrant employs the Hierarchical Navigable Small World (HNSW) algorithm for Approximate Nearest Neighbor (ANN) searches, which is one of the fastest algorithms according to [benchmarks](https://github.com/erikbern/ann-benchmarks). ### Scalability For massive datasets and demanding workloads, Qdrant supports [distributed deployment](/documentation/guides/distributed_deployment/) from v0.8.0. In this mode, you can set up a Qdrant cluster and distribute data across multiple nodes, enabling you to maintain high performance and availability even under increased workloads. Clusters support sharding and replication, and harness the Raft consensus algorithm to manage node coordination. 
Qdrant also supports vector [quantization](/documentation/guides/quantization/) to reduce memory footprint and speed up vector similarity searches, making it very effective for large-scale applications where efficient resource management is critical. There are three quantization strategies you can choose from - scalar quantization, binary quantization and product quantization - which will help you control the trade-off between storage efficiency, search accuracy and speed.

### Security

Qdrant offers several [security features](/documentation/guides/security/) to help protect data and access to the vector store:

- API Key Authentication: This helps secure API access to Qdrant Cloud with static or read-only API keys.
- JWT-Based Access Control: You can also enable more granular access control through JSON Web Tokens (JWT), and opt for restricted access to specific parts of the stored data while building Role-Based Access Control (RBAC).
- TLS Encryption: Additionally, you can enable TLS Encryption on data transmission to ensure security of data in transit.

To help with data sovereignty, Qdrant can be run in a [Hybrid Cloud](/hybrid-cloud/) setup. Hybrid Cloud allows for seamless deployment and management of the vector database across various environments, and integrates Kubernetes clusters into a unified managed service. You can manage these clusters via Qdrant Cloud’s UI while maintaining control over your infrastructure and resources.

## Optimizing Similarity Search Performance

In order to achieve top performance in vector similarity searches, Qdrant employs a number of other tactics in addition to the features discussed above.

**FastEmbed**: Qdrant supports [FastEmbed](/articles/fastembed/), a lightweight Python library for generating fast and efficient text embeddings. FastEmbed uses quantized transformer models integrated with ONNX Runtime, and is significantly faster than traditional methods of embedding generation.

**Support for Dense and Sparse Vectors**: Qdrant supports both dense and sparse vector representations. While dense vectors are most common, you may encounter situations where the dataset contains a range of specialized domain-specific keywords. [Sparse vectors](/articles/sparse-vectors/) shine in such scenarios. Sparse vectors are vector representations of data where most elements are zero.

**Multitenancy**: Qdrant supports [multitenancy](/documentation/guides/multiple-partitions/) by allowing vectors to be partitioned by payload within a single collection. Using this, you can isolate each user's data and avoid creating separate collections for each user. In order to ensure indexing performance, Qdrant also offers ways to bypass the construction of a global vector index, so that you can index vectors for each user independently.

**IO Optimizations**: If your data doesn’t fit into memory, it may need to be stored on disk. To [optimize disk IO performance](/articles/io_uring/), Qdrant offers an io_uring-based *async uring* storage backend on Linux-based systems. Benchmarks show that it drastically helps reduce operating system overhead from disk IO.

**Data Integrity**: To ensure data integrity, Qdrant handles data changes in two stages. First, changes are recorded in the Write-Ahead Log (WAL). Then, changes are applied to segments, which store both the latest and individual point versions. In case of abnormal shutdowns, data is restored from WAL.
**Integrations**: Qdrant has integrations with most popular frameworks, such as LangChain, LlamaIndex, Haystack, Apache Spark, FiftyOne, and more. Qdrant also has several [trusted partners](/blog/hybrid-cloud-launch-partners/) for Hybrid Cloud deployments, such as Oracle Cloud Infrastructure, Red Hat OpenShift, Vultr, OVHcloud, Scaleway, and DigitalOcean. We regularly run [benchmarks](/benchmarks/) comparing Qdrant against other vector databases like Elasticsearch, Milvus, and Weaviate. Our benchmarks show that Qdrant consistently achieves the highest requests-per-second (RPS) and lowest latencies across various scenarios, regardless of the precision threshold and metric used. ## Real-World Use Cases Vector similarity is increasingly being used in a wide range of [real-world applications](/use-cases/). In e-commerce, it powers recommendation systems by comparing user behavior vectors to product vectors. In social media, it can enhance content recommendations and user connections by analyzing user interaction vectors. In image-oriented applications, vector similarity search enables reverse image search, similar image clustering, and efficient content-based image retrieval. In healthcare, vector similarity helps in genetic research by comparing DNA sequence vectors to identify similarities and variations. The possibilities are endless. A unique example of real-world application of vector similarity is how VISUA uses Qdrant. A leading computer vision platform, VISUA faced two key challenges. First, a rapid and accurate method to identify images and objects within them for reinforcement learning. Second, dealing with the scalability issues of their quality control processes due to the rapid growth in data volume. Their previous quality control, which relied on meta-information and manual reviews, was no longer scalable, which prompted the VISUA team to explore vector databases as a solution. After exploring a number of vector databases, VISUA picked Qdrant as the solution of choice. Vector similarity search helped identify similarities and deduplicate large volumes of images, videos, and frames. This allowed VISUA to uniquely represent data and prioritize frames with anomalies for closer examination, which helped scale their quality assurance and reinforcement learning processes. Read our [case study](/blog/case-study-visua/) to learn more. ## Future Directions and Innovations As real-world deployments of vector similarity search technology grows, there are a number of promising directions where this technology is headed. We are developing more efficient indexing and search algorithms to handle increasing data volumes and high-dimensional data more effectively. Simultaneously, in case of dynamic datasets, we are pushing to enhance our handling of real-time updates and low-latency search capabilities. Qdrant is one of the most secure vector stores out there. However, we are working on bringing more privacy-preserving techniques in vector search implementations to protect sensitive data. We have just about witnessed the tip of the iceberg in terms of what vector similarity can achieve. If you are working on an interesting use-case that uses vector similarity, we would like to hear from you. ### Key Takeaways: - **Vector Similarity in AI:** Vector similarity is a crucial technique in AI, allowing for the accurate matching of queries with relevant data, driving advanced applications like semantic search and recommendation systems. 
- **Versatile Applications of Vector Similarity:** This technology powers a wide range of AI-driven applications, from reverse image search in e-commerce to sentiment analysis in text processing. - **Overcoming Vector Search Challenges:** Implementing vector similarity at scale poses challenges like the curse of dimensionality, but specialized systems like Qdrant provide efficient and scalable solutions. - **Qdrant's Advanced Vector Search:** Qdrant leverages Rust's performance and safety features, along with advanced algorithms, to deliver high-speed and secure vector similarity search, even for large-scale datasets. - **Future Innovations in Vector Similarity:** The field of vector similarity is rapidly evolving, with advancements in indexing, real-time search, and privacy-preserving techniques set to expand its capabilities in AI applications. ## Getting Started with Qdrant Ready to implement vector similarity in your AI applications? Explore Qdrant's vector database to enhance your data retrieval and AI capabilities. For additional resources and documentation, visit: - [Quick Start Guide](/documentation/quick-start/) - [Documentation](/documentation/) We are always available on our [Discord channel](https://qdrant.to/discord) to answer any questions you might have. You can also sign up for our [newsletter](/subscribe/) to stay ahead of the curve.
blog/what-is-vector-similarity.md
---
draft: false
title: Mastering Batch Search for Vector Optimization | Qdrant
slug: batch-vector-search-with-qdrant
short_description: Introducing efficient batch vector search capabilities, streamlining and optimizing large-scale searches for enhanced performance.
description: "Discover how to optimize your vector search capabilities with efficient batch search. Learn optimization strategies for faster, more accurate results."
preview_image: /blog/from_cms/andrey.vasnetsov_career_mining_on_the_moon_with_giant_machines_813bc56a-5767-4397-9243-217bea869820.png
date: 2022-09-26T15:39:53.751Z
author: Kacper Łukawski
featured: false
tags:
- Data Science
- Vector Database
- Machine Learning
- Information Retrieval
---

# How to Optimize Vector Search Using Batch Search in Qdrant 0.10.0

The latest release of Qdrant 0.10.0 has introduced a lot of functionality that simplifies some common tasks. Those new possibilities come with slightly modified interfaces of the client library. One of the recently introduced features is the possibility to query the collection with [multiple vectors](https://qdrant.tech/blog/storing-multiple-vectors-per-object-in-qdrant/) at once — a batch search mechanism.

There are a lot of scenarios in which you may need to perform multiple unrelated tasks at the same time. Previously, you could only send several requests to the Qdrant API on your own. But multiple parallel requests may cause significant network overhead and slow down the process, especially in the case of a poor connection. Now, thanks to the new batch search, you don't need to worry about that. Qdrant will handle multiple search requests in just one API call and will perform those requests in the most optimal way.

## An example of using batch search to optimize vector search

We've used the official Python client to show how the batch search might be integrated with your application. Since there have been some changes in the interfaces of Qdrant 0.10.0, we'll go step by step.

### Step 1: Creating the collection

The first step is to create a collection with a specified configuration — at least the vector size and the distance function used to measure the similarity between vectors.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient("localhost", 6333)

if not client.collection_exists('test_collection'):
    client.create_collection(
        collection_name="test_collection",
        vectors_config=VectorParams(size=4, distance=Distance.EUCLID),
    )
```

### Step 2: Loading the vectors

With the collection created, we can put some vectors into it. We're going to have just a few examples.

```python
vectors = [
    [.1, .0, .0, .0],
    [.0, .1, .0, .0],
    [.0, .0, .1, .0],
    [.0, .0, .0, .1],
    [.1, .0, .1, .0],
    [.0, .1, .0, .1],
    [.1, .1, .0, .0],
    [.0, .0, .1, .1],
    [.1, .1, .1, .1],
]

client.upload_collection(
    collection_name="test_collection",
    vectors=vectors,
)
```

### Step 3: Batch search in a single request

Now we're ready to start looking for similar vectors, as our collection has some entries. Let's say we want to find the distance between a selected vector and the most similar database entry, and at the same time find the two most similar objects for a different vector query. Up until version 0.9, we would need to call the API twice.
Now, we can send both requests together:

```python
from qdrant_client.models import SearchRequest

results = client.search_batch(
    collection_name="test_collection",
    requests=[
        SearchRequest(
            vector=[0., 0., 2., 0.],
            limit=1,
        ),
        SearchRequest(
            vector=[0., 0., 0., 0.01],
            with_vector=True,
            limit=2,
        ),
    ],
)

# Out:
# [
#     [ScoredPoint(id=2, version=0, score=1.9,
#                  payload=None, vector=None)],
#     [ScoredPoint(id=3, version=0, score=0.09,
#                  payload=None, vector=[0.0, 0.0, 0.0, 0.1]),
#      ScoredPoint(id=1, version=0, score=0.10049876,
#                  payload=None, vector=[0.0, 0.1, 0.0, 0.0])],
# ]
```

Each instance of the `SearchRequest` class may provide its own search parameters, including not only the vector query but also additional filters (a sketch of a filtered batch request is included at the end of this post). The response will be a list of individual results for each request. If any of the requests is malformed, an exception will be thrown, so either all of them pass or none of them do.

And that's it! You no longer have to handle the multiple requests on your own. Qdrant will do it under the hood.

## Batch Search Benchmarks

The batch search is fairly easy to integrate into your application, but if you prefer to see some numbers before deciding to switch, then it's worth comparing four different options:

1. Querying the database sequentially.
2. Using many threads/processes with individual requests.
3. Utilizing the batch search of Qdrant in a single request.
4. Combining parallel processing and batch search.

In order to do that, we'll create a richer collection of points, with vectors from the *glove-25-angular* dataset, quite a common choice for ANN comparison. If you're interested in more details of how we benchmarked Qdrant, take a [look at the Gist](https://gist.github.com/kacperlukawski/2d12faa49e06a5080f4c35ebcb89a2a3).

## The results

We launched the benchmark 5 times on 10,000 test vectors and averaged the results. The presented numbers are the mean values of all the attempts:

1. Sequential search: 225.9 seconds
2. Batch search: 208.0 seconds
3. Multiprocessing search (8 processes): 194.2 seconds
4. Multiprocessing batch search (8 processes, batch size 10): 148.9 seconds

The results you achieve on a specific setup may vary depending on the hardware; however, at first glance, it seems that batch searching may save you quite a lot of time. Additional improvements could be achieved in the case of a distributed deployment, as Qdrant won't need to make extensive inter-cluster requests. Moreover, if your requests share the same filtering condition, the query optimizer will be able to reuse it among batch requests.

## Summary

Batch search allows packing different queries into a single API call and retrieving the results in a single response. If you ever struggled with sending several consecutive queries to Qdrant, you can easily switch to the new batch search method and simplify your application code. As shown in the benchmarks, it may almost effortlessly speed up your interactions with Qdrant by over 30%, even before accounting for the saved network overhead and the possible reuse of filters!

Ready to unlock the potential of batch search and optimize your vector search with Qdrant 0.10.0? Contact us today to learn how we can revolutionize your search capabilities!
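As mentioned above, each `SearchRequest` can also carry its own filter. The snippet below is a minimal, hypothetical sketch of what that might look like — it assumes points were uploaded with a `color` payload field, which the toy collection in this post does not have, so treat it as an illustration of the request structure rather than something to run against the example collection as-is.

```python
from qdrant_client.models import (
    FieldCondition,
    Filter,
    MatchValue,
    SearchRequest,
)

# Two batched requests, each with its own limit and payload filter.
# The "color" payload field is hypothetical and only serves to show the syntax.
filtered_results = client.search_batch(
    collection_name="test_collection",
    requests=[
        SearchRequest(
            vector=[0., 0., 2., 0.],
            filter=Filter(
                must=[FieldCondition(key="color", match=MatchValue(value="red"))]
            ),
            limit=1,
        ),
        SearchRequest(
            vector=[0., 0., 0., 0.01],
            filter=Filter(
                must=[FieldCondition(key="color", match=MatchValue(value="blue"))]
            ),
            limit=2,
        ),
    ],
)
```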
blog/batch-vector-search-with-qdrant.md
---
draft: true
title: Qdrant v0.6.0 engine with gRPC interface has been released
short_description: We've released a new engine, version 0.6.0.
description: We've released a new engine, version 0.6.0. The main feature of the release is the gRPC interface.
preview_image: /blog/qdrant-v-0-6-0-engine-with-grpc-released/upload_time.png
date: 2022-03-10T01:36:43+03:00
author: Alyona Kavyerina
author_link: https://medium.com/@alyona.kavyerina
featured: true
categories:
- News
tags:
- gRPC
- release
sitemapExclude: True
---

We've released a new engine, version 0.6.0. The main feature of the release is the gRPC interface — it is much faster than the REST API and ensures higher app performance thanks to:

- re-use of connections;
- a binary protocol;
- separation of schema from data.

This results in 3 times faster data uploading in our benchmarks:

![REST API vs gRPC upload time, sec](/blog/qdrant-v-0-6-0-engine-with-grpc-released/upload_time.png)

Read more about the gRPC interface and whether you should use it at this [link](/documentation/quick_start/#grpc).

The v0.6.0 release also includes several bug fixes. More information is available in the [changelog](https://github.com/qdrant/qdrant/releases/tag/v0.6.0).

The gRPC interface is provided in addition to the REST API, which we keep supporting because it is easier to debug.
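If you are using the official Python client, switching between the two interfaces is mostly a matter of how the client is constructed. The snippet below is a rough sketch based on the current Python client, which postdates this release; the `prefer_grpc` flag and the default gRPC port of 6334 are assumptions to verify against the linked documentation for your version.

```python
from qdrant_client import QdrantClient

# REST-only client (default): talks to Qdrant over HTTP on port 6333
rest_client = QdrantClient(host="localhost", port=6333)

# gRPC-preferring client: supported operations go over gRPC on port 6334,
# which avoids repeated connection setup and uses a binary protocol
grpc_client = QdrantClient(
    host="localhost",
    grpc_port=6334,
    prefer_grpc=True,
)

# The API surface is the same either way, e.g.:
print(grpc_client.get_collections())
```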
blog/qdrant-v-0-6-0-engine-with-grpc-released.md
--- draft: false title: Insight Generation Platform for LifeScience Corporation - Hooman Sedghamiz | Vector Space Talks slug: insight-generation-platform short_description: Hooman Sedghamiz explores the potential of large language models in creating cutting-edge AI applications. description: Hooman Sedghamiz discloses the potential of AI in life sciences, from custom knowledge applications to improving crop yield predictions, while tearing apart the nuances of in-house AI deployment for multi-faceted enterprise efficiency. preview_image: /blog/from_cms/hooman-sedghamiz-bp-cropped.png date: 2024-03-25T08:46:28.227Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Retrieval Augmented Generation - Insight Generation Platform --- > *"There is this really great vector db comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023, there were only a few. What I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline.”*\ -- Hooman Sedghamiz > Hooman Sedghamiz, Sr. Director AI/ML - Insights at Bayer AG is a distinguished figure in AI and ML in the life sciences field. With years of experience, he has led teams and projects that have greatly advanced medical products, including implantable and wearable devices. Notably, he served as the Generative AI product owner and Senior Director at Bayer Pharmaceuticals, where he played a pivotal role in developing a GPT-based central platform for precision medicine. In 2023, he assumed the role of Co-Chair for the EMNLP 2023 GEM industrial track, furthering his contributions to the field. Hooman has also been an AI/ML advisor and scientist at the University of California, San Diego, leveraging his expertise in deep learning to drive biomedical research and innovation. His strengths lie in guiding data science initiatives from inception to commercialization and bridging the gap between medical and healthcare applications through MLOps, LLMOps, and deep learning product management. Engaging with research institutions and collaborating closely with Dr. Nemati at Harvard University and UCSD, Hooman continues to be a dynamic and influential figure in the data science community. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2oj2ne5l9qrURQSV0T1Hft?si=DMJRTAt7QXibWiQ9CEKTJw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/yfzLaH5SFX0).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/yfzLaH5SFX0?si=I8dw5QddKbPzPVOB" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Charting-New-Frontiers-Creating-a-Pioneering-Insight-Generation-Platform-for-a-Major-Life-Science-Corporation---Hooman-Sedghamiz--Vector-Space-Talks-014-e2fqnnc/a-aavffjd" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Why is real-time evaluation critical in maintaining the integrity of chatbot interactions and preventing issues like promoting competitors or making false promises? What strategies do developers employ to minimize cost while maximizing the effectiveness of model evaluations, specifically when dealing with LLMs? 
These might be just some of the many questions people in the industry are asking themselves. We aim to cover most of it in this talk. Check out their conversation as they peek into world of AI chatbot evaluations. Discover the nuances of ensuring your chatbot's quality and continuous improvement across various metrics. Here are the key topics of this episode: 1. **Evaluating Chatbot Effectiveness**: An exploration of systematic approaches to assess chatbot quality across various stages, encompassing retrieval accuracy, response generation, and user satisfaction. 2. **Importance of Real-Time Assessment**: Insights into why continuous and real-time evaluation of chatbots is essential to maintain integrity and ensure they function as designed without promoting undesirable actions. 3. **Indicators of Compromised Systems**: Understand the significance of identifying behaviors that suggest a system may be prone to 'jailbreaking' and the methods available to counter these through API integration. 4. **Cost-Effective Evaluation Models**: Discussion on employing smaller models for evaluation to reduce costs without compromising the depth of analysis, focusing on failure cases and root-cause assessments. 5. **Tailored Evaluation Metrics**: Emphasis on the necessity of customizing evaluation criteria to suit specific use case requirements, including an exploration of the different metrics applicable to diverse scenarios. >Fun Fact: Large language models like Mistral, Llama, and Nexus Raven have improved in their ability to perform function calling with low hallucination and high-quality output. > ## Show notes: 00:00 Introduction to Bayer AG\ 05:15 Drug discovery, trial prediction, medical virtual assistants.\ 10:35 New language models like Llama rival GPT 3.5.\ 12:46 Large language model solving, efficient techniques, open source.\ 16:12 Scaling applications for diverse, individualized models.\ 19:02 Open source offers multilingual embedding.\ 25:06 Stability improved, reliable function calling capabilities emerged.\ 27:19 Platform aims for efficiency, measures impact.\ 31:01 Build knowledge discovery tool, measure value\ 33:10 Wrap up ## More Quotes from Hooman: *"I think there has been concentration around vector stores. So a lot of startups that have appeared around vector store idea, but I think what really is lacking are tools that you have a lot of sources of knowledge, information.*”\ -- Hooman Sedghamiz *"You can now kind of take a look and see that the performance of them is really, really getting close, if not better than GPT 3.5 already at same level and really approaching step by step to GPT 4.”*\ -- Hooman Sedghamiz in advancements in language models *"I think the biggest, I think the untapped potential, it goes back to when you can do scientific discovery and all those sort of applications which are more challenging, not just around the efficiency and all those sort of things.”*\ -- Hooman Sedghamiz ## Transcript: Demetrios: We are here and I couldn't think of a better way to spend my Valentine's Day than with you Hooman this is absolutely incredible. I'm so excited for this talk that you're going to bring and I want to let everyone that is out there listening know what caliber of a speaker we have with us today because you have done a lot of stuff. Folks out there do not let this man's young look fool you. You look like you are not in your fifty's or sixty's. But when it comes to your bio, it looks like you should be in your seventy's. I am very excited. 
You've got a lot of experience running data science projects, ML projects, LLM projects, all that fun stuff. You're working at Bayern Munich, sorry, not Bayern Munich, Bayer AG. And you're the senior director of AI and ML. Demetrios: And I think that there is a ton of other stuff that you've done when it comes to machine learning, artificial intelligence. You've got both like the traditional ML background, I think, and then you've also got this new generative AI background and so you can leverage both. But you also think about things in data engineering way. You understand the whole lifecycle. And so today we get to talk all about some of this fun. I know you've got some slides prepared for us. I'll let you throw those on and I'll let anyone else in the chat. Feel free to ask questions while Hooman is going through the presentation and I'll jump in and stop them when needed. Demetrios: But also we can have a little discussion after a few minutes of slides. So for everyone looking, we're going to be watching this and then we're going to be checking out like really talking about what 2024 AI in the enterprise looks like and what is needed to really take advantage of that. So Hooman, I'm dropping off to you, man, and I'll jump in when needed. Hooman Sedghamiz: Thanks a lot for the introduction. Let me get started. Do you have my screen already? Demetrios: Yeah, we see it. Hooman Sedghamiz: Okay, perfect. All right, so hopefully I can change the slides. Yes, as you said, first, thanks a lot for spending your day with me. I know it's Valentine's Day, at least here in the US people go crazy when it gets Valentine's. But I know probably a lot of you are in love with large language models, semantic search and all those sort of things, so it's great to have you here. Let me just start with the. I have a lot of slides, by the way, but maybe I can start with kind of some introduction about the company I work for, what these guys are doing and what we are doing at a life science company like Bayer, which is involved in really major humanity needs, right? So health and the food chain and like agriculture, we do three major kind of products or divisions in the company, mainly consumer halls, over the counter medication that probably a lot of you have taken, aspirin, all those sort of good stuff. And we have crop science division that works on ensuring that the yield is high for crops and the food chain is performing as it should, and also pharmaceutical side which is around treatment and prevention. Hooman Sedghamiz: So now you can imagine via is really important to us because it has the potential of unlocking a future where good health is a reality and hunger is a memory. So I maybe start about maybe giving you a hint of what are really the numerous use cases that AI or challenges that AI could help out with. In life science industry. You can think of adverse event detection when patients are taking a medication, too much of it. The patients might report adverse events, stomach bleeding and go to social media post about it. A few years back, it was really difficult to process automatically all this sort of natural text in a kind of scalable manner. But nowadays, thanks to large language models, it's possible to automate this and identify if there is a medication or anything that might have negatively an adverse event on a patient population. Similarly, you can now create a lot of marketing content using these large language models for products. 
Hooman Sedghamiz: At the same time, drug discovery is making really big strides when it comes to identifying new compounds. You can essentially describe these compounds using formats like smiles, which could be represented as real text. And these large language models can be trained on them and they can predict the sequences. At the same time, you have this clinical trial outcome prediction, which is huge for pharmaceutical companies. If you could predict what will be the outcome of a trial, it would be a huge time and resource saving for a lot of companies. And of course, a lot of us already see in the market a lot of medical virtual assistants using large language models that can answer medical inquiries and give consultations around them. And there is really, I believe the biggest potential here is around real world data, like most of us nowadays, have some sort of sensor or watch that's measuring our health maybe at a minute by minute level, or it's measuring our heart rate. You go to the hospital, you have all your medical records recorded there, and these large language models have their capacity to process this complex data, and you will be able to drive better insights for individualized insights for patients. Hooman Sedghamiz: And our company is also in crop science, as I mentioned, and crop yield prediction. If you could help farmers improve their crop yield, it means that they can produce better products faster with higher quality. So maybe I could start with maybe a history in 2023, what happened? How companies like ours were looking at large language models and opportunities. They bring, I think in 2023, everyone was excited to bring these efficiency games, right? Everyone wanted to use them for creating content, drafting emails, all these really low hanging fruit use cases. That was around. And one of the earlier really nice architectures that came up that I really like was from a 16 z enterprise that was, I think, back in really, really early 2023. LangChain was new, we had land chain and we had all this. Of course, Qdrant been there for a long time, but it was the first time that you could see vector store products could be integrated into applications. Hooman Sedghamiz: Really at large scale. There are different components. It's quite complex architecture. So on the right side you see how you can host large language models. On the top you see how you can augment them using external data. Of course, we had these plugins, right? So you can connect these large language models with Google search APIs, all those sort of things, and some validation that are in the middle that you could use to validate the responses fast forward. Maybe I can kind of spend, let me check out the time. Maybe I can spend a few minutes about the components of LLM APIs and hosting because that I think has a lot of potential in terms of applications that need to be really scalable. Hooman Sedghamiz: Just to give you some kind of maybe summary about my company, we have around 100,000 people in almost all over the world. Like the languages that people speak are so diverse. So it makes it really difficult to build an application that will serve 200,000 people. And it's kind of efficient. It's not really costly and all those sort of things. So maybe I can spend a few minutes talking about what that means and how kind of larger scale companies might be able to tackle that efficiently. So we have, of course, out of the box solutions, right? 
So you have Chat GPT already for enterprise, you have other copilots and for example from Microsoft and other companies that are offering, but normally they are seat based, right? So you kind of pay a subscription fee, like Spotify, you pay like $20 per month, $30 on average, somewhere between $20 to $60. And for a company, like, I was like, just if you calculate that for 3000 people, that means like 180,000 per month in subscription fees. Hooman Sedghamiz: And we know that most of the users won't use that. We know that it's a usage based application. You just probably go there. Depending on your daily work, you probably use it. Some people don't use it heavily. I kind of did some calculation. If you build it in house using APIs that you can access yourself, and large language models that corporations can deploy internally and locally, that cost saving could be huge, really magnitudes cheaper, maybe 30 to 20 to 30 times cheaper. So looking, comparing 2024 to 2023, a lot of things have changed. Hooman Sedghamiz: Like if you look at the open source large language models that came out really great models from Mistral, now we have models like Llama, two based model, all of these models came out. You can now kind of take a look and see that the performance of them is really, really getting close, if not better than GPT 3.5 already at same level and really approaching step by step to GPT 4. And looking at the price on the right side and speed or throughput, you can see that like for example, Mistral seven eight B could be a really cheap option to deploy. And also the performance of it gets really close to GPT 3.5 for many use cases in the enterprise companies. I think two of the big things this year, end of last year that came out that make this kind of really a reality are really a few large language models. I don't know if I can call them large language models. They are like 7 billion to 13 billion compared to GPT four, GT 3.5. I don't think they are really large. Hooman Sedghamiz: But one was Nexus Raven. We know that applications, if they want to be robust, they really need function calling. We are seeing this paradigm of function calling, which essentially you ask a language model to generate structured output, you give it a function signature, right? You ask it to generate an output, structured output argument for that function. Next was Raven came out last year, that, as you can see here, really is getting really close to GPT four, right? And GPT four being magnitude bigger than this model. This model only being 13 billion parameters really provides really less hallucination, but at the same time really high quality of function calling. So this makes me really excited for the open source and also the companies that want to build their own applications that requires function calling. That was really lacking maybe just five months ago. At the same time, we have really dedicated large language models to programming languages or scripting like SQL, that we are also seeing like SQL coder that's already beating GPT four. Hooman Sedghamiz: So maybe we can now quickly take a look at how model solving will look like for a large company like ours, like companies that have a lot of people across the globe again, in this aspect also, the community has made really big progress, right? So we have text generation inference from hugging face is open source for most purposes, can be used and it's the choice of mine and probably my group prefers this option. 
But we have Olama, which is great, a lot of people are using it. We have llama CPP which really optimizes the large language models for local deployment as well, and edge devices. I was really amazed seeing Raspberry PI running a large language model, right? Using Llama CPP. And you have this text generation inference that offers quantization support, continuous patching, all those sort of things that make these large LLMs more quantized or more compressed and also more suitable for deployment to large group of people. Maybe I can kind of give you kind of a quick summary of how, if you decide to deploy these large language models, what techniques you could use to make them more efficient, cost friendly and more scalable. So we have a lot of great open source projects like we have Lite LLM which essentially creates an open AI kind of signature on top of your large language models that you have deployed. Let's say you want to use Azure to host or to access GPT four gypty 3.5 or OpenAI to access OpenAI API. Hooman Sedghamiz: To access those, you could put them behind Lite LLM. You could have models using hugging face that are deployed internally, you could put lightlm in front of those, and then your applications could just use OpenAI, Python SDK or anything to call them naturally. And then you could simply do load balancing between those. Of course, we have also, as I mentioned, a lot of now serving opportunities for deploying those models that you can accelerate. Semantic caching is another opportunity for saving cost. Like for example, if you have cute rent, you are storing the conversations. You could semantically check if the user has asked similar questions and if that question is very similar to the history, you could just return that response instead of calling the large language model that can create costs. And of course you have line chain that you can summarize conversations, all those sort of things. Hooman Sedghamiz: And we have techniques like prompt compression. So as I mentioned, this really load balancing can offer a lot of opportunities for scaling this large language model. As you know, a lot of offerings from OpenAI APIs or Microsoft Azure, they have rate limits, right? So you can't call those models extensively. So what you could do, you could have them in multiple regions, you can have multiple APIs, local TGI deployed models using hugging face TGI or having Azure endpoints and OpenAI endpoints. And then you could use light LLM to load balance between these models. Once the users get in. Right. User one, you send the user one to one deployment, you send the user two requests to the other deployment. Hooman Sedghamiz: So this way you can really scale your application to large amount of users. And of course, we have these opportunities for applications called Lorex that use Lora. Probably a lot of you have heard of like very efficient way of fine tuning these models with fewer number of parameters that we could leverage to have really individualized models for a lot of applications. And you can see the costs are just not comparable if you wanted to use, right. So at GPT 3.5, even in terms of performance and all those sort of things, because you can use really small hardware GPU to deploy thousands of Lora weights or adapters, and then you will be able to serve a diverse set of models to your users. 
I think one really important part of these kind of applications is the part that you add contextual data, you add augmentation to make them smarter and to make them more up to date. So, for example, in healthcare domain, a lot of Americans already don't have high trust in AI when it comes to decision making in healthcare. So that's why augmentation of data or large language models is really, really important for bringing trust and all those sort of state of the art knowledge to this large language model. Hooman Sedghamiz: For example, if you ask about cancer or rededicated questions that need to build on top of scientific knowledge, it's very important to use those. Augmented or retrieval augmented generation. No, sorry, go next. Jumped on one. But let me see. I think I'm missing a slide, but yeah, I have it here. So going through this kind of, let's say retrieval augmented generation, different parts of it. You have, of course, these vector stores that in 2024, I see explosion of vector stores. Hooman Sedghamiz: Right. So there is this really great vector DB comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023 was only a few. And what I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline. And I think we were talking before this talk together that ETL is not something that is taken seriously. So far. We have a lot of embedding models that are coming out probably on a weekly basis. Hooman Sedghamiz: We have great embedding models that are open source, BgEM. Three is one that is multilingual, 100 plus languages. You could embed text in those languages. We have a lot of vector stores, but we don't have really ETL tools, right? So we have maybe a few airbytes, right? How can you reindex data efficiently? How can you parse scientific articles? Like imagine I have an image here, we have these articles or archive or on a pubmed, all those sort of things that have images and complex structure that our parsers are not able to parse them efficiently and make sense of them so that you can embed them really well. And really doing this Internet level, scientific level retrieval is really difficult. And no one I think is still doing it at scale. I just jumped, I have a love slide, maybe I can jump to my last and then we can pause there and take in some questions. Where I see 2014 and beyond, beyond going for large language models for enterprises, I see assistance, right? I see assistance for personalized assistance, for use cases coming out, right? So these have probably four components. Hooman Sedghamiz: You have even a personalized large language model that can learn from the history of your conversation, not just augmented. Maybe you can fine tune that using Laura and all those techniques. You have the knowledge that probably needs to be customized for your assistant and integrated using vector stores and all those sort of things, technologies that we have out, you know, plugins that bring a lot of plugins, some people call them skills, and also they can cover a lot of APIs that can bring superpowers to the large language model and multi agent setups. Right? We have autogen, a lot of cool stuff that is going on. The agent technology is getting really mature now as we go forward. We have langraph from Langchain that is bringing a lot of more stabilized kind of agent technology. 
And then you can think of that as for companies building all these kind of like App Stores or assistant stores that use cases, store there. And the colleagues can go there, search. Hooman Sedghamiz: I'm looking for this application. That application is customized for them, or even they can have their own assistant which is customized to them, their own large language model, and they could use that to bring value. And then even a nontechnical person could create their own assistant. They could attach the documents they like, they could select the plugins they like, they'd like to be connected to, for example, archive, or they need to be connected to API and how many agents you like. You want to build a marketing campaign, maybe you need an agent that does market research, one manager. And then you build your application which is customized to you. And then based on your feedback, the large language model can learn from your feedback as well. Going forward, maybe I pause here and then we can it was a bit longer than I expected, but yeah, it's all good, man. Demetrios: Yeah, this is cool. Very cool. I appreciate you going through this, and I also appreciate you coming from the past, from 2014 and talking about what we're going to do in 2024. That's great. So one thing that I want to dive into right away is the idea of ETL and why you feel like that is a bit of a blocker and where you think we can improve there. Hooman Sedghamiz: Yeah. So I think there has been concentration around vector stores. Right. So a lot of startups that have appeared around vector store idea, but I think what really is lacking tools that you have a lot of sources of knowledge, information. You have your Gmail, if you use outlook, if you use scientific knowledge, like sources like archive. We really don't have any startup that I hear that. Okay. I have a platform that offers real time retrieval from archive papers. Hooman Sedghamiz: And you want to ask a question, for example, about transformers. It can do retrieval, augmented generation over all archive papers in real time as they get added for you and brings back the answer to you. We don't have that. We don't have these syncing tools. You can of course, with tricks you can maybe build some smart solutions, but I haven't seen many kind of initiatives around that. And at the same time, we have this paywall knowledge. So we have these nature medicine amazing papers which are paywall. We can access them. Hooman Sedghamiz: Right. So we can build rag around them yet, but maybe some startups can start coming up with strategies, work with this kind of publishing companies to build these sort of things. Demetrios: Yeah, it's almost like you're seeing it not as the responsibility of nature or. Hooman Sedghamiz: Maybe they can do it. Demetrios: Yeah, they can potentially, but maybe that's not their bread and butter and so they don't want to. And so how do startups get in there and take some of this paywalled information and incorporate it into their product? And there is another piece that you mentioned on, just like when it comes to using agents, I wonder, have you played around with them a lot? Have you seen their reliability get better? Because I'm pretty sure a lot of us out there have tried to mess around with agents and maybe just like blown a bunch of money on GPT, four API calls. And it's like this thing isn't that stable. What's going on? So do you know something that we don't? Hooman Sedghamiz: I think they have become much, much more stable. 
If you look back in 2023, like June, July, they were really new, like auto GPT. We had all these new projects came out, really didn't work out as you say, they were not stable. But I would say by the end of 2023, we had really stable frameworks, for example, customized solutions around agent function calling. I think when function calling came out, the capability that you could provide signature or dot string of, I don't know, a function and you could get back the response really reliably. I think that changed a lot. And Langchen has this OpenAI function calling agent that works with some measures. I mean, of course I wouldn't say you could automate 100% something, but for a knowledge, kind of. Hooman Sedghamiz: So for example, if you have an agent that has access to data sources, all those sort of things, and you ask it to go out there, see what are the latest clinical trial design trends, it can call these tools, it can reliably now get you answer out of ten times, I would say eight times, it works. Now it has become really stable. And what I'm excited about is the latest multi agent scenarios and we are testing them. They are very promising. Right? So you have autogen from Microsoft platform, which is open source, and also you have landgraph from Langchain, which I think the frameworks are becoming really stable. My prediction is between the next few months is lots of, lots of applications will rely on agents. Demetrios: So you also mentioned how to recognize if a project is winning or losing type thing. And considering there are so many areas that you can plug in AI, especially when you're looking at buyer and all the different places that you can say, oh yeah, we could add some AI to this. How are you setting up metrics so, you know, what is worth it to continue investing into versus what maybe sounded like a better idea, but in practice it wasn't actually that good of an idea. Hooman Sedghamiz: Yeah, depends on the platform that you're building. Right? So where we started back in 2023, the platform was aiming for efficiency, right? So how can you make our colleagues more efficient? They can be faster in their daily work, like really delegate this boring stuff, like if you want to summarize or you want to create a presentation, all those sort of things, and you have measures in place that, for example, you could ask, okay, now you're using this platform for months. Let us know how many hours you're saving during your daily work. And really we could see the shift, right? So we did a questionnaire and I think we could see a lot of shift in terms of saving hours, daily work, all those sort of things that is measurable. And it's like you could then convert it, of course, to the value that brings for the enterprise on the company. And I think the biggest, I think the untapped potential, it goes back to when you can do scientific discovery and all those sort of applications which are more challenging, not just around the efficiency and all those sort of things. And then you need to really, if you're building a product, if it's not the general product. And for example, let's say if you're building a natural language to SQL, let's say you have a database. Hooman Sedghamiz: It was a relational database. You want to build an application that searches cars in the background. The customers go there and ask, I'm looking for a BMW 2013. It uses qudrant in the back, right. It kind of does semantic search, all these cool things and returns the response. 
I think then you need to have really good measures to see how satisfied your customers are when you're integrating a kind of generative application on top of your website that's selling cars. So measuring this in a kind of, like, cyclic manner, people are not going to be happy because you start that there are a lot of things that you didn't count for. You measure all those kind of metrics and then you go forward, you improve your platform. Demetrios: Well, there's also something else that you mentioned, and it brought up this thought in my mind, which is undoubtedly you have these low hanging fruit problems, and it's mainly based on efficiency gains. Right. And so it's helping people extract data from pdfs or what be it, and you're saving time there. You're seeing that you're saving time, and it's a fairly easy setup. Right. But then you have moonshots, I would imagine, like creating a whole new type of aspirin or tylenol or whatever it is, and that is a lot more of an investment of time and energy and infrastructure and everything along those lines. How do you look at both of these and say, we want to make sure that we make headway in both directions. And I'm not sure if you have unlimited resources to be able to just do everything or if you have to recognize what the trade offs are and how you measure those types of metrics. Demetrios: Again, in seeing where do we invest and where do we cut ties with different initiatives. Hooman Sedghamiz: Yeah. So that's a great question. So for product development, like the example that you made, there are really a lot of stages involved. Right. So you start from scientific discovery stage. So I can imagine that you can have multiple products along the way to help out. So if you have a product already out there that you want to generate insights and see. Let's say you have aspirin out there. Hooman Sedghamiz: You want to see if it is also helpful for cardiovascular problems that patients might have. So you could build a sort of knowledge discovery tool that could search for you, give it a name of your product, it will go out there, look into pubmed, all these articles that are being published, brings you back the results. Then you need to have really clear metrics to see if this knowledge discovery platform, after a few months is able to bring value to the customers or the stakeholders that you build the platform for. We have these experts that are really experts in their own field. Takes them really time to go read these articles to make conclusions or answer questions about really complex topic. I think it's really difficult based on the initial feedback we see, it helps, it helps save them time. But really I think it goes back again to the ETL problem that we still don't have your paywall. We can't access a lot of scientific knowledge yet. Hooman Sedghamiz: And these guys get a little bit discouraged at the beginning because they expect that a lot of people, especially non technical, say like you go to Chat GPT, you ask and it brings you the answer, right? But it's not like that. It doesn't work like that. But we can measure it, we can see improvements, they can access knowledge faster, but it's not comprehensive. That's the problem. It's not really deep knowledge. And I think the companies are still really encouraging developing these platforms and they can see that that's a developing field. Right. So it's very hard to give you a short answer, very hard to come up with metrics that gives you success of failure in a short term time period. 
Demetrios: Yeah, I like the creativity that you're talking about there though. That is like along this multistepped, very complex product creation. There are potential side projects that you can do that show and prove value along the way, and they don't necessarily need to be as complex as that bigger project. Hooman Sedghamiz: True. Demetrios: Sweet, man. Well, this has been awesome. I really appreciate you coming on here to the vector space talks for anyone that would like to join us and you have something cool to present. We're always open to suggestions. Just hit me up and we will make sure to send you some shirt or whatever kind of swag is on hand. Remember, all you astronauts out there, don't get lost in vector space. This has been another edition of the Qdrant vector space talks with Hooman, my man, on Valentine's Day. I can't believe you decided to spend it with me. Demetrios: I appreciate it. Hooman Sedghamiz: Thank you. Take care.
blog/insight-generation-platform-for-lifescience-corporation-hooman-sedghamiz-vector-space-talks-014.md
--- draft: false title: "Unlocking AI Potential: Insights from Stanislas Polu" slug: qdrant-x-dust-vector-search short_description: Stanislas shares insights from his experiences at Stripe and founding his own company, Dust, focusing on AI technology's product layer. description: Explore the dynamic discussion with Stanislas Polu on AI, ML, entrepreneurship, and product development. Gain valuable insights into AI's transformative power. preview_image: /blog/from_cms/stan-polu-cropped.png date: 2024-01-26T16:22:37.487Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - OpenAI --- # Qdrant x Dust: How Vector Search Helps Make Work Better with Stanislas Polu > *"We ultimately chose Qdrant due to its open-source nature, strong performance, being written in Rust, comprehensive documentation, and the feeling of control.”*\ -- Stanislas Polu > Stanislas Polu is the Co-Founder and an Engineer at Dust. He had previously sold a company to Stripe and spent 5 years there, seeing them grow from 80 to 3000 people. Then pivoted to research at OpenAI on large language models and mathematical reasoning capabilities. He started Dust 6 months ago to make work work better with LLMs. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/2YgcSFjP7mKE0YpDGmSiq5?si=6BhlAMveSty4Yt7umPeHjA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/1vKoiFAdorE).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/toIgkJuysQ4?si=uzlzQtOiSL5Kcpk5" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Qdrant-x-Dust-How-Vector-Search-Helps-Make-Work-Work-Better---Stan-Polu--Vector-Space-Talk-010-e2ep9u8/a-aasgqb8" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Curious about the interplay of SaaS platforms and AI in improving productivity? Stanislas Polu dives into the intricacies of enterprise data management, the selective use of SaaS tools, and the role of customized AI assistants in streamlining workflows, all while sharing insights from his experiences at Stripe, OpenAI, and his latest venture, Dust. Here are 5 golden nuggets you'll unearth from tuning in: 1. **The SaaS Universe**: Stan will give you the lowdown on why jumping between different SaaS galaxies like Salesforce and Slack is crucial for your business data's gravitational pull. 2. **API Expansions**: Learn how pushing the boundaries of APIs to include global payment methods can alter the orbit of your company's growth. 3. **A Bot for Every Star**: Discover how creating targeted assistants over general ones can skyrocket team productivity across various use cases. 4. **Behind the Tech Telescope**: Stan discusses the decision-making behind opting for Qdrant for their database cosmos, including what triggered their switch. 5. **Integrating AI Stardust**: They're not just talking about Gen AI; they're actively guiding companies on how to leverage it effectively, placing practicality over flashiness. > Fun Fact: Stanislas Polu co-founded a company that was acquired by Stripe, providing him with the opportunity to work with Greg Brockman at Stripe. 
> ## Show notes: 00:00 Interview about an exciting career in AI technology.\ 06:20 Most workflows involve multiple SaaS applications.\ 09:16 Inquiring about history with Stripe and AI.\ 10:32 Stripe works on expanding worldwide payment methods.\ 14:10 Document insertion supports hierarchy for user experience.\ 18:29 Competing, yet friends in the same field.\ 21:45 Workspace solutions, marketplace, templates, and user feedback.\ 25:24 Avoid giving false hope; be accountable.\ 26:06 Model calls, external API calls, structured data.\ 30:19 Complex knobs, but powerful once understood. Excellent support.\ 33:01 Companies hire someone to support teams and find use cases. ## More Quotes from Stan: *"You really want to narrow the data exactly where that information lies. And that's where we're really relying hard on Qdrant as well. So the kind of indexing capabilities on top of the vector search."*\ -- Stanislas Polu *"I think the benchmarking was really about quality of models, answers in the context of ritual augmented generation. So it's not as much as performance, but obviously, performance matters and that's why we love using Qdrant.”*\ -- Stanislas Polu *"The workspace assistant are like the admin vetted the assistant, and it's kind of pushed to everyone by default.”*\ -- Stanislas Polu ## Transcript: Demetrios: All right, so, my man, I think people are going to want to know all about you. This is a conversation that we have had planned for a while. I'm excited to chat about what you have been up to. You've had quite the run around when it comes to doing some really cool stuff. You spent a lot of time at Stripe in the early days and I imagine you were doing, doing lots of fun ML initiatives and then you started researching on llms at OpenAI. And recently you are doing the entrepreneurial thing and following the trend of starting a company and getting really cool stuff out the door with AI. I think we should just start with background on yourself. What did I miss in that quick introduction? Stanislas Polu: Okay, sounds good. Yeah, perfect. Now you didn't miss too much. Maybe the only point is that starting the current company, Dust, with Gabrielle, my co founder, with whom we started a Company together twelve years or maybe 14 years ago. Stanislas Polu: I'm very bad with years that eventually got acquired to stripe. So that's how we joined Stripe, the both of us, pretty early. Stripe was 80 people when we joined, all the way to 2500 people and got to meet with and walk with Greg Brockman there. And that's how I found my way to OpenAI after stripe when I started interested in myself, in research at OpenAI, even if I'm not a trained researcher. Stanislas Polu: I did research on fate, doing research. On larger good models, reasoning capabilities, and in particular larger models mathematical reasoning capabilities. And from there. 18 months ago, kind of decided to leave OpenAI with the motivation. That is pretty simple. It's that basically the hypothesis is that. It was pre chattivity, but basically those large language models, they're already extremely capable and yet they are completely under deployed compared to the potential they have. And so while research remains a very active subject and it's going to be. A tailwind for the whole ecosystem, there's. Stanislas Polu: Probably a lot of to be done at the product layer, and most of the locks between us and deploying that technology in the world is probably sitting. At the product layer as it is sitting at the research layer. 
And so that's kind of the hypothesis behind dust, is we try to explore at the product layer what it means to interface between models and humans, try to make them happier and augment them. With superpowers in their daily jobs. Demetrios: So you say product layer, can you go into what you mean by that a little bit more? Stanislas Polu: Well, basically we have a motto at dust, which is no gpu before PMF. And so the idea is that while it's extremely exciting to train models. It's extremely exciting to fine tune and align models. There is a ton to be done. Above the model, not only to use. Them as best as possible, but also to really find the interaction interfaces that make sense for humans to leverage that technology. And so we basically don't train any models ourselves today. There's many reasons to that. The first one is as an early startup. It's a fascinating subject and fascinating exercise. As an early startup, it's actually a very big investment to go into training. Models because even if the costs are. Not necessarily big in terms of compute. It'S still research and development and pretty. Hard research and development. It's basically research. We understand pretraining pretty well. We don't understand fine tuning that well. We believe it's a better idea to. Stanislas Polu: Really try to explore the product layer. The image I use generally is that training a model is very sexy and it's exciting, but really you're building a small rock that will get submerged by the waves of bigger models coming in the future. And iterating and positioning yourself at the interface between humans and those models at. The product layer is more akin to. Building a surfboard that you will be. Able to use to surf those same waves. Demetrios: I like that because I am a big surfer and I have a lot. Stanislas Polu: Of fun doing it. Demetrios: Now tell me about are you going after verticals? Are you going after different areas in a market, a certain subset of the market? Stanislas Polu: How do you look at that? Yeah. Basically the idea is to look at productivity within the enterprise. So we're first focusing on internal use. By teams, internal teams of that technology. We're not at all going after external use. So backing products that embed AI or having on projects maybe exposed through our users to actual end customers. So we really focused on the internal use case. So the first thing you want to. Do is obviously if you're interested in. Productivity within enterprise, you definitely want to have the enterprise data, right? Because otherwise there's a ton that can be done with Chat GPT as an example. But there is so much more that can be done when you have context. On the data that comes from the company you're in. That's pretty much kind of the use. Case we're focusing on, and we're making. A bet, which is a crazy bet to answer your question, that there's actually value in being quite horizontal for now. So that comes with a lot of risks because an horizontal product is hard. Stanislas Polu: To read and it's hard to figure. Out how to use it. But at the same time, the reality is that when you are somebody working in a team, even if you spend. A lot of time on one particular. Application, let's say Salesforce for sales, or GitHub for engineers, or intercom for customer support, the reality of most of your workflows do involve many SaaS, meaning that you spend a lot of time in Salesforce, but you also spend a lot of time in slack and notion. 
Maybe, or we all spend as engineers a lot of time in GitHub, but we also use notion and slack a ton or Google Drive or whatnot. Jira. Demetrios: Good old Jira. Everybody loves spending time in Jira. Stanislas Polu: Yeah. And so basically, following our users where. They are requires us to have access to those different SaaS, which requires us. To be somewhat horizontal. We had a bunch of signals that. Kind of confirms that position, and yet. We'Re still very conscious that it's a risky position. As an example, when we are benchmarked against other solutions that are purely verticalized, there is many instances where we actually do a better job because we have. Access to all the data that matters within the company. Demetrios: Now, there is something very difficult when you have access to all of the data, and that is the data leakage issue and the data access. Right. How are you trying to conquer that hard problem? Stanislas Polu: Yeah, so we're basically focusing to continue. Answering your questions through that other question. I think we're focusing on tech companies. That are less than 1000 people. And if you think about most recent tech companies, less than 1000 people. There's been a wave of openness within. Stanislas Polu: Companies in terms of data access, meaning that it's becoming rare to see people actually relying on complex ACL for the internal data. You basically generally have silos. You have the exec silo with remuneration and ladders and whatnot. And this one is definitely not the. Kind of data we're touching. And then for the rest, you generally have a lot of data that is. Accessible by every employee within your company. So that's not a perfect answer, but that's really kind of the approach we're taking today. We give a lot of control on. Stanislas Polu: Which data comes into dust, but once. It'S into dust, and that control is pretty granular, meaning that you can select. Specific slack channels, or you can select. Specific notion pages, or you can select specific Google Drive subfolders. But once you decide to put it in dust, every dust user has access to this. And so we're really taking the silo. Vision of the granular ACL story. Obviously, if we were to go higher enterprise, that would become a very big issue, because I think larger are the enterprise, the more they rely on complex ackles. Demetrios: And I have to ask about your history with stripe. Have you been focusing on specific financial pieces to this? First thing that comes to mind is what about all those e commerce companies that are living and breathing with stripe? Feels like they've got all kinds of use cases that they could leverage AI for, whether it is their supply chain or just getting better numbers, or getting answers that they have across all this disparate data. Have you looked at that at all? Is that informing any of your decisions that you're making these days? Stanislas Polu: No, not quite. Not really. At stripe, when we joined, it was. Very early, it was the quintessential curlb onechargers number 42. 42, 42. And that's pretty much what stripe was almost, I'm exaggerating, but not too much. So what I've been focusing at stripe. Was really driven by my and our. Perspective as european funders joining a quite. Us centric company, which is, no, there. Stanislas Polu: Is not credit card all over the world. Yes, there is also payment methods. And so most of my time spent at stripe was spent on trying to expand the API to not a couple us payment methods, but a variety of worldwide payment methods. 
So that requires kind of a change of paradigm from an API design standpoint, and that's where I spent most of my cycles. Demetrios: Okay, the next question that I had is: you talked about how benchmarking with the horizontal solution, surprisingly, has been more effective in certain use cases. I'm guessing that's why you've got a little bit of love for [Qdrant](https://qdrant.tech/) and what we're doing here. Stanislas Polu: Yeah, I think the benchmarking was really about the quality of the models' answers in the context of [retrieval augmented generation](https://qdrant.tech/articles/what-is-rag-in-ai/). So it's not so much about performance, but obviously performance matters, and that's why we love using Qdrant. But I think the main idea of what I mentioned is that it's interesting, because today the retrieval is noisy, because the embedders are not perfect, which is an interesting point. Sorry, I'm double-clicking, but I'll come back. The embedders are really not perfect. Really not perfect. So that's interesting: when Qdrant releases optimizations for [storage of vectors](https://qdrant.tech/documentation/concepts/storage/), they obviously come with warnings that you may have a loss of precision because of the compression, et cetera. And that's funny, because in the whole retrieval augmented generation world, it really doesn't matter. We take all the performance we can get, because the loss of precision coming from compression of those vectors at the vector DB level is completely negligible compared to the overall imperfection of the embedders in terms of their capability to correctly embed text. They're extremely powerful, but they're far from being perfect. So that's an interesting thing: you can really go as far as you want in terms of performance, because your error is dominated completely by the quality of your embeddings. Going back up: I think what's interesting is that the retrieval is noisy, mostly because of the embedders, and the models are not perfect. And so the reality is that more data in a RAG context is not necessarily better data, because the retrievals become noisy, the model kind of gets confused and it starts hallucinating stuff, et cetera. So the right trade-off is that you want access to as much data as possible, but you want to give our users the ability to select very narrowly the data required for a given task. That's kind of what our product does: the ability to create assistants that are specialized to a given task. Most of the specification of an assistant is obviously a prompt, but it's also saying, oh, I'm working on helping sales find interesting next leads, and you really want to narrow the data down to exactly where that information lies. And that's where we're really relying hard on Qdrant as well, on the kind of indexing capabilities on top of the [vector search](https://qdrant.tech/): whenever we insert the documents, we try to insert an array of parents that reproduces the hierarchy of whatever that document is coming from, which lets us create a very nice user experience where, when you create an assistant, you can say, oh, I'm going down two levels within Notion, and I select that page and all of those children will come together.
And that's just one string in our specification, because we then rely on those parents that have been injected into Qdrant, and the Qdrant search really works well with a simple query like "this thing has to be in parents," and you filter by that. Demetrios: It feels like there are two levels to the evaluation that you can be doing with RAG. One is the stuff you're retrieving, evaluating the retrieval, and then the other is the output that you're giving to the end user. How are you attacking both of those evaluation questions? Stanislas Polu: Yeah, so the truth, in whole transparency, is that we don't. We're just too early. Demetrios: Well, I'm glad you're honest with us, Stan. Stanislas Polu: This is great, we should, but the reality is that we have so many other product priorities. I think evaluating the quality of retrievals, evaluating the quality of retrieval augmented generation, comes down to good sense. But good sense is hard to define, because good sense after three years doing research in that domain is probably better sense, better good sense, than good sense with no clue about the domain. But basically, with good sense I think you can get very far, and then you'll be optimizing at the margin. And the reality is that if you get far enough with good sense, and everything seems to work reasonably well, then your priority is not necessarily on pushing 5% more performance, whatever the metric is, but more like: I have a million other product questions to solve. That's the kind of ten-person-company answer to your question. As we grow, we'll probably make benchmarking that better a priority, of course. And in terms of benchmarking that better, it's an extremely interesting question as well, because the embedding benchmarks are what they are, and I think they are not necessarily always a good representation of the use case you'll have in your product. So that's something you want to be cautious of, and it's quite hard to benchmark your own use case. Of the kinds of solutions out there, the one that seems most plausible, short of spending full years on it, is probably to evaluate the retrieval with another model, right? You take five different embedding models, you record a bunch of questions that come from your product, you use your product data and you run those retrievals against those five different embedders, and then you ask GPT-4 to rate the results. That would be something that seems sensible and will probably get you another step forward. It's not perfect, but it's probably strong enough to go quite far. And then the second question is evaluating the end-to-end pipeline, which includes both the retrieval and the generation. And to be honest, again, it's a non-question today, because GPT-4 is just so much above all the other models that there's no point evaluating them. If you accept using GPT-4, just use GPT-4. If you want to use open source models, then the question is more important. But if you are okay with using GPT-4, for many reasons, then there is no question at this stage. Demetrios: So, my next question: because it sounds like you've got a little bit of a French accent, you're somewhere in Europe. Are you in France? Stanislas Polu: Yes, we're based in France and building the team from Paris. Demetrios: So I was wondering if you were going to lean more towards the history of you working at OpenAI, or the fraternity from your French group, and go for your amis at Mistral.
Stanislas Polu: I mean, we are absolute BFFs with Mistral. The fun story is that Guillaume Lample is a friend, because we were working on exactly the same subjects while I was at OpenAI and he was at Meta. So we were basically frenemies: competing against the same metrics and the same goals, but we grew a friendship out of that. Our platform is quite model agnostic, so we support Mistral there. Then we do decide to set the defaults for our users, and we obviously set the default to GPT-4 today. I think it's a question of when. Today there's no question, but when the time comes, and open source versus non open source is not the question, but when those models kind of start catching up with GPT-4, that's going to be an interesting product question, and hopefully Mistral will get there. I think that's definitely their goal, to be within reach of GPT-4 this year. And so that's going to be extremely exciting. Yeah. Demetrios: So then, you mentioned how you have a lot of other product considerations that you're looking at before you even think about evaluation. What are some of the other considerations? Stanislas Polu: Yeah, so as I mentioned a bit, the main hypothesis is that we're going to do company productivity, or team productivity, so we need the company data. That was kind of hypothesis number zero. It's not even a hypothesis, almost an axiom. And then our first product was a conversational assistant, like ChatGPT, that is general and has access to everything, and we realized that didn't work quite well enough on a bunch of use cases; it was kind of good on some use cases, but not great on many others. And so that's where we made that first strong product hypothesis, which is: we want to have many assistants. Not one assistant, but many assistants, targeted to specific tasks. And that's what we've been exploring since the end of the summer, and that hypothesis has been very strongly confirmed with our users. So an example of an issue that we have is, obviously, you want to activate your product, so you want to make sure that people are creating assistants. One thing that is much more important than the quality of RAG is the ability of users to create personal assistants. Before, there were only workspace assistants, and so only the admin or the builder could build them. And now we've, as an example, worked on making it so anybody can create an assistant. The assistant is scoped to themselves, they can publish it afterwards, et cetera. Those are the kinds of product questions that are, to be honest, more important than RAG quality, at least for us. Demetrios: All right, real quick: publish it for a greater user base, or publish it for the internal company to be able to use? Stanislas Polu: Yeah, within the workspace. Demetrios: Okay. It's not like, oh, I could publish this for... Stanislas Polu: We're not going there yet. And there's plenty to do internally in each workspace before going there. Though it's an interesting case, because that's basically another big problem: you have a horizontal platform, you can create assistants, you're not an expert, and you're like, okay, what should I do? That's the kind of blank page issue. And so there, having templates, inspiration, you can seed that within a workspace, but you also want to have solutions for the new workspace that gets created. And maybe a marketplace is a good idea; or having templates, et cetera, are also product questions that are much more important than the RAG performance.
And finally, the users where Dust works really well: one example is Alan in France, they are 600 people, and Dust is running there pretty healthily, and they've created more than 200 assistants. And so another big product question is, when you get traction within a company, people start getting flooded with assistants. So how do they discover them? How do they know which one to use, et cetera? Those are many examples of product questions that are very first order compared to other things. Demetrios: Because out of these 200 assistants, are you seeing a lot of people creating the same assistants? Stanislas Polu: That's a good question. So far it's been kind of driven by somebody internally who was responsible for trying to push gen AI within the company. And so I think there's not that much redundancy, which is interesting, but I think there's a long tail of stuff that is mostly exploration, and from our perspective it's very hard to distinguish the two. Obviously, usage is a very strong signal. But yeah, displaying assistants by usage, pushing the right assistants to the right user: this problem seems completely trivial compared to building an LLM, obviously. But still, when you add the product layer, it requires a ton of work, and as a startup, that's where a lot of our resources go, and I think it's the right thing to do. Demetrios: Yeah, I wonder if, and you probably have thought about this, but if it's almost like you can tag it: this assistant is in beta or alpha, or this is in production, you can trust that this one is stable, that kind of thing. Stanislas Polu: Yeah. So we have the concept of a shared assistant and the concept of a workspace assistant. The workspace assistants are ones where the admin vetted the assistant, and it's kind of pushed to everyone by default. And then the published assistants are in a gallery of assistants that you can visit, and there, the strongest signal is probably the usage metric, right? Demetrios: Yeah. So when you're talking about assistants, just so that I'm clear, it's not autonomous agents, is it? Stanislas Polu: No. Yeah, it's a great question. We are really focusing on the one-step thing, trying to solve very nicely the one-step case: I have one granular task to achieve, and I can get accelerated on that task and maybe save a few minutes, or maybe save a few tens of minutes, on one specific thing. Because the agentic version of that is obviously the future, but the reality is that current models, even GPT-4, are not that great at chaining decisions of tool use in a way that is sustainable beyond the demo effect. So while we are very hopeful for the future, it's not our core focus, because I think there's a lot of risk that it creates more disappointment than anything else. But it's obviously something that we are targeting in the future as models get better. Demetrios: Yeah. And you don't want to burn people by making them think something's possible, and then they go and check up on it, and they leave it in the agent's hands, and next thing they know they're getting fired because they don't actually do the work that they said they were going to do. Stanislas Polu: Yeah. One thing we do today is we have different ways to bring data into the assistant before it creates the generation, and we're expanding that. One of the main use cases is the one based on Qdrant, which is the retrieval one.
We also have kind of a workflow system where you can create an app, an LLM app, where you can make multiple calls to a model, and you can call external APIs and search. And another thing we're digging into is our structured data use case, which this time doesn't use Qdrant. The idea is that semantic search is great, but it's really atrociously bad for quantitative questions. Basically, the typical case is that you have a big CSV somewhere and it gets chunked, and then you do retrieval and you get kind of disordered partial chunks, all of that. And on top of that, the models are really bad at counting stuff. And so you really get bullshit, you know. Demetrios: You know better than anybody. Stanislas Polu: Yeah, exactly. Past life. And so, garbage in, garbage out. Basically, we're looking into being able, whenever the data is structured, to actually store it in a structured way and, as needed, just in time, generate an in-memory SQL database, so that the model can generate a SQL query against that data and get kind of a SQL answer, and as a consequence hopefully be able to answer quantitative questions better. And finally, obviously, the next step also is that as we integrate with those platforms, Notion, Google Drive, Slack, et cetera, there are some actions that we can take there. We're not going to take the actions, but I think it's interesting to have the model prepare an action, meaning: here is the email I prepared, send it or iterate with me on it; or here is the Slack message I prepared; or here is the edit to the Notion doc that I prepared. This is still not agentic, it's closer to taking action, but we definitely want to keep the human in the loop. But obviously that's some stuff that is on our roadmap. And another thing that we don't support, one type of action which would be the first we'll be working on, is obviously code interpretation, which I think is one of the things that all users ask for, because they use it on ChatGPT. And so we'll be looking into that as well. Demetrios: What made you choose Qdrant? Stanislas Polu: So the decision was made, if I remember correctly, something like February or March last year. And so the alternatives I looked into were Pinecone, Weaviate, some ClickHouse, because Chroma was using ClickHouse at the time. But Chroma was 2,000 lines of code at the time as well. And so I was like, oh, Chroma: we're part of AI Grant, and Chroma, as an example, is also part of AI Grant, so I was like, oh well, let's look at Chroma. But, and what I'm describing is last year, they were very early, and so it was definitely not something that seemed to make sense for us. So in the end it was between Pinecone, Weaviate and Qdrant. Weaviate: you look at the docs, you're like, yeah, not possible. And then finally it's Qdrant and Pinecone. And I think we really appreciated, obviously, the open source nature of Qdrant, and from playing with it, the very strong performance, the fact that it's written in Rust, the sanity of the documentation, and basically the feeling that, because it's open source, and we're using the hosted Qdrant Cloud solution, it's not a question of paying or not paying, it's more a question of being able to feel like you have more control. And at the time, I think it was the moment when Pinecone had their massive fuck-up, where they erased a gazillion databases from their users. And so we've been on Qdrant, and I think it's been a two-step process, really.
Stanislas Polu: It's very smooth to start, but Qdrant at this stage also comes with a lot of knobs to turn. And so as you start scaling, you at some point reach a point where you need to start tweaking the knobs, which I think is great, because there are a lot of knobs, so they are hard to understand, but once you understand them, you see the power of them. And the Qdrant team has been excellent there supporting us. So I think we've reached that first level of scale where you have to tweak the knobs, and we've reached the second level of scale where we have to have multiple nodes. But so far it's been extremely smooth, and I think we've been able to do some stuff with Qdrant that is really possible only because of the very good performance of the database. As an example, we're not using your clustered setup. We have n independent nodes, and as we scale, we kind of reshuffle which users go on which nodes as we need, trying to keep our largest users and most paying users on very well identified nodes. We have kind of a garbage node for all the free users, as an example. And migrating even a very big collection from one node: one capability that we built is to say, oh, I have that collection over there, it's pretty big, I'm going to initiate it on another node, I'm going to set up shadow writing on both, and I'm going to migrate the data live. And that has been incredibly easy to do with Qdrant because scrolling is fast, writing is fucking fast, and so even a pretty large collection, you can migrate it in a minute. And so it becomes really within the realm of being able to administrate your cluster with that in mind, which I think would probably not have been possible with the other systems. Demetrios: So it feels like, when you are helping companies build out their assistants, are you going in there and giving them ideas on what they can do? Stanislas Polu: Yeah, we are at a stage where obviously we have to do that, because I think the product basically starts to have strong legs, but it's still very early, and so there's still a lot to do on activation, as an example. So we are in a mode today where we do what doesn't scale, basically, and we do spend some time with companies, obviously, because there's no way around that. But what we've also seen is that the users where it works best, whether on Dust or anything else that relates to having people adopt gen AI within the company, are companies where they actually allocate resources to the problem, meaning that the companies where it works best are the companies where there's somebody whose role is really to go around the company, find use cases, support the teams, et cetera. And in the case of companies using Dust, this is the kind of interface that is perfect for us, because we provide them full support and we help them build whatever they think is valuable for their team. Demetrios: Are you also having to be the bearer of bad news and tell them, like, yeah, I know you saw that demo on Twitter, but that is not actually possible, or not reliably possible? Stanislas Polu: Yeah, that's an interesting question. That's a good question. Not that much, because I think one of the big learnings is that you take any company, even a pretty techy company, a pretty young company, and the reality is that most of the people are not necessarily in the ecosystem, they just want shit done. And so they're really glad to have some shit being done by a computer.
But they don't necessarily say, oh, I want the latest, shiniest thingy that I saw on Twitter. So we've been safe from that so far. Demetrios: Excellent. Well, man, this has been incredible. I really appreciate you coming on here and doing this. Thanks so much. And if anyone wants to check out Dust, I encourage them to. Stanislas Polu: It's dust. Demetrios: It's a bit of an interesting website. What is it? Stanislas Polu: Dust.tt. Demetrios: That's it. That's what I was missing: dust.tt. There you go. So if anybody wants to look into it, I encourage them to. And thanks so much for coming on here. Stanislas Polu: Yeah. And Qdrant is the shit. Demetrios: There we go. Awesome, dude. Well, this has been great. Stanislas Polu: Yeah, thanks, man. Have a good one.
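For readers who want to picture the parent-hierarchy trick Stan describes, here is a minimal, hedged sketch using the Qdrant Python client: each chunk is stored with a `parents` array in its payload, and retrieval for an assistant is scoped by filtering on a single parent ID. The field names, IDs, and vector size are illustrative assumptions, not Dust's actual schema.

```python
# Illustrative sketch of the "array of parents" pattern described above.
# Payload field names, IDs, and vector size are assumptions, not Dust's schema.
from qdrant_client import QdrantClient
from qdrant_client.models import (
    Distance, FieldCondition, Filter, MatchValue, PointStruct, VectorParams,
)

client = QdrantClient(":memory:")  # throwaway local instance for the example
client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Each chunk carries the full chain of ancestors (page -> section -> workspace),
# so selecting a Notion page "two levels up" is just a match on one parent ID.
client.upsert(
    collection_name="documents",
    points=[
        PointStruct(
            id=1,
            vector=[0.1, 0.2, 0.3, 0.4],
            payload={
                "text": "Q3 sales playbook",
                "parents": ["notion-page-123", "notion-section-7", "workspace-acme"],
            },
        ),
        PointStruct(
            id=2,
            vector=[0.4, 0.3, 0.2, 0.1],
            payload={
                "text": "Incident runbook",
                "parents": ["notion-page-999", "workspace-acme"],
            },
        ),
    ],
)

# "This thing has to be in parents": restrict retrieval to one subtree.
hits = client.search(
    collection_name="documents",
    query_vector=[0.1, 0.2, 0.3, 0.4],
    query_filter=Filter(
        must=[FieldCondition(key="parents", match=MatchValue(value="notion-page-123"))]
    ),
    limit=5,
)
print([hit.payload["text"] for hit in hits])
```

Because a match condition on an array payload field is satisfied when any element matches, one filter term is enough to select a whole subtree of documents.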
blog/qdrant-x-dust-how-vector-search-helps-make-work-work-better-stan-polu-vector-space-talk-010.md
--- draft: false title: Powering Bloop semantic code search slug: case-study-bloop short_description: Bloop is a fast code-search engine that combines semantic search, regex search and precise code navigation description: Bloop is a fast code-search engine that combines semantic search, regex search and precise code navigation preview_image: /case-studies/bloop/social_preview.png date: 2023-02-28T09:48:00.000Z author: Qdrant Team featured: false aliases: - /case-studies/bloop/ --- Founded in early 2021, [bloop](https://bloop.ai/) was one of the first companies to tackle semantic search for codebases. A fast, reliable Vector Search Database is a core component of a semantic search engine, and bloop surveyed the field of available solutions and even considered building their own. They found Qdrant to be the top contender and now use it in production. This document is intended as a guide for people who want to introduce semantic search to a novel field and find out if Qdrant is a good solution for their use case. ## About bloop ![](/case-studies/bloop/screenshot.png) [bloop](https://bloop.ai/) is a fast code-search engine that combines semantic search, regex search and precise code navigation into a single lightweight desktop application that can be run locally. It helps developers understand and navigate large codebases, enabling them to discover internal libraries, reuse code and avoid dependency bloat. bloop's chat interface explains complex concepts in simple language so that engineers can spend less time crawling through code to understand what it does, and more time shipping features and fixing bugs. ![](/case-studies/bloop/bloop-logo.png) bloop's mission is to make software engineers autonomous, and semantic code search is the cornerstone of that vision. The project is maintained by a group of Rust and Typescript engineers and ML researchers. It leverages many prominent nascent technologies, such as [Tauri](http://tauri.app), [tantivy](https://docs.rs/tantivy), [Qdrant](https://github.com/qdrant/qdrant) and [Anthropic](https://www.anthropic.com/). ## About Qdrant ![](/case-studies/bloop/qdrant-logo.png) Qdrant is an open-source Vector Search Database written in Rust. It deploys as an API service providing search for the nearest high-dimensional vectors. With Qdrant, embeddings or neural network encoders can be turned into full-fledged applications for matching, searching, recommending, and many more solutions to make the most of unstructured data. It is easy to use, deploy, and scale, while being blazing fast and accurate at the same time. Qdrant was founded in 2021 in Berlin by Andre Zayarni and Andrey Vasnetsov with the mission to power the next generation of AI applications with advanced and high-performant [vector similarity](https://qdrant.tech/articles/vector-similarity-beyond-search/) search technology. Their flagship product is the vector search database, which is available as [open source](https://github.com/qdrant/qdrant) or as a [managed cloud solution](https://cloud.qdrant.io/). ## The Problem Firstly, what is semantic search? It's finding relevant information by comparing meaning, rather than simply measuring the textual overlap between queries and documents. We compare meaning by comparing *embeddings* - these are vector representations of text that are generated by a neural network. Each document's embedding denotes a position in a *latent* space, so to search you embed the query and find its nearest document vectors in that space. 
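To make that concrete, here is a minimal sketch of the embed-and-search loop using the Qdrant Python client in local mode; the embedding model and the toy code snippets are illustrative assumptions, not bloop's actual pipeline.

```python
# Minimal semantic search sketch: embed documents, then find the nearest
# vectors for a query. Model choice and snippets are illustrative only.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings
client = QdrantClient(":memory:")  # throwaway local instance for the example

snippets = [
    "fn charge_card(amount: u64) { stripe::Charge::create(amount); }",
    "def send_invoice(order): paypal.Payment.execute(order)",
    "class UserRepository: ...  # persistence layer for accounts",
]

client.create_collection(
    collection_name="code",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)
client.upsert(
    collection_name="code",
    points=[
        PointStruct(id=i, vector=model.encode(text).tolist(), payload={"text": text})
        for i, text in enumerate(snippets)
    ],
)

# The query shares no keywords with the Stripe/PayPal snippets, yet their
# embeddings should land closest in the latent space.
hits = client.search(
    collection_name="code",
    query_vector=model.encode("What library is used for payment processing?").tolist(),
    limit=2,
)
for hit in hits:
    print(hit.score, hit.payload["text"])
```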
![](/case-studies/bloop/vector-space.png) Why is semantic search so useful for code? As engineers, we often don’t know - or forget - the precise terms needed to find what we’re looking for. Semantic search enables us to find things without knowing the exact terminology. For example, if an engineer wanted to understand “*What library is used for payment processing?*” a semantic code search engine would be able to retrieve results containing “*Stripe*” or “*PayPal*”. A traditional lexical search engine would not. One peculiarity of this problem is that the **usefulness of the solution increases with the size of the code base** – if you only have one code file, you’ll be able to search it quickly, but you’ll easily get lost in thousands, let alone millions of lines of code. Once a codebase reaches a certain size, it is no longer possible for a single engineer to have read every single line, and so navigating large codebases becomes extremely cumbersome. In software engineering, we’re always dealing with complexity. Programming languages, frameworks and tools have been developed that allow us to modularize, abstract and compile code into libraries for reuse. Yet we still hit limits: Abstractions are still leaky, and while there have been great advances in reducing incidental complexity, there is still plenty of intrinsic complexity[^1] in the problems we solve, and with software eating the world, the growth of complexity to tackle has outrun our ability to contain it. Semantic code search helps us navigate these inevitably complex systems. But semantic search shouldn’t come at the cost of speed. Search should still feel instantaneous, even when searching a codebase as large as Rust (which has over 2.8 million lines of code!). Qdrant gives bloop excellent semantic search performance whilst using a reasonable amount of resources, so they can handle concurrent search requests. ## The Upshot [bloop](https://bloop.ai/) are really happy with how Qdrant has slotted into their semantic code search engine: it’s performant and reliable, even for large codebases. And it’s written in Rust(!) with an easy to integrate qdrant-client crate. In short, Qdrant has helped keep bloop’s code search fast, accurate and reliable. #### Footnotes: [^1]: Incidental complexity is the sort of complexity arising from weaknesses in our processes and tools, whereas intrinsic complexity is the sort that we face when trying to describe, let alone solve the problem.
blog/case-study-bloop.md
--- draft: true title: "Qdrant Hybrid Cloud and Cohere Support Enterprise AI" short_description: "Next gen enterprise software will rely on revolutionary technologies by Qdrant Hybrid Cloud and Cohere." description: "Next gen enterprise software will rely on revolutionary technologies by Qdrant Hybrid Cloud and Cohere." preview_image: /blog/hybrid-cloud-cohere/hybrid-cloud-cohere.png date: 2024-04-10T00:01:00Z author: Qdrant featured: false weight: 1011 tags: - Qdrant - Vector Database --- We're excited to share that Qdrant and [Cohere](https://cohere.com/) are partnering on the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) to enable global audiences to build and scale their AI applications quickly and securely. With Cohere's world-class large language models (LLMs), getting the most out of vector search becomes incredibly easy. Qdrant's new Hybrid Cloud offering and its Kubernetes-native design can be coupled with Cohere's powerful models and APIs. This combination allows for simple setup when prototyping and deploying AI solutions. It's no secret that Retrieval Augmented Generation (RAG) has proven to be a powerful method of building conversational AI products, such as chatbots or customer support systems. With Cohere's managed LLM service, scientists and developers can tap into state-of-the-art text generation and understanding capabilities, all accessible via API. Qdrant Hybrid Cloud seamlessly integrates with Cohere's foundation models, enabling convenient data vectorization and highly accurate semantic search. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, global businesses can keep both products deployed in the same hosting architecture. By combining Cohere's foundation models with Qdrant's vector search capabilities, developers can create robust and scalable GenAI applications tailored to meet the demands of modern enterprises. This powerful combination empowers organizations to build strong and secure applications that search, understand meaning and converse in text. #### Take Full Control of Your GenAI Application with Qdrant Hybrid Cloud and Cohere Building apps with Qdrant Hybrid Cloud and Cohere's models comes with several key advantages: **Data Sovereignty:** Should you wish to keep both deployments together, this integration guarantees that your vector database is hosted in proximity to the foundation models and proprietary data, thereby reducing latency, supporting data locality, and safeguarding sensitive information to comply with regulatory requirements, such as GDPR. **Massive Scale Support:** Users can achieve remarkable efficiency and scalability in running complex queries across vast datasets containing billions of text objects and millions of users. This integration enables lightning-fast retrieval of relevant information, making it ideal for enterprise-scale applications where speed and accuracy are paramount. **Cost Efficiency:** By leveraging Qdrant's quantization for efficient data handling and pairing it with Cohere's scalable and affordable pricing structure, the price/performance ratio of this integration is next to none. Companies who are just getting started with both will have a minimal upfront investment and optimal cost management going forward. 
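As a rough sketch of how the two pieces compose (not taken from the tutorial below), embedding documents with a Cohere model and searching them in Qdrant might look like the following; the model name, vector size, collection name, and endpoint are assumptions.

```python
# Illustrative sketch only: embed documents with Cohere, store and search in Qdrant.
# Model name, collection name, endpoint, and vector size (1024 for
# embed-english-v3.0) are assumptions.
import cohere
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

co = cohere.Client("COHERE_API_KEY")
qdrant = QdrantClient(url="https://your-hybrid-cloud-endpoint:6333", api_key="QDRANT_API_KEY")

docs = [
    "Refunds are processed within 5 business days.",
    "You can update your billing address in account settings.",
]

# Vectorize the documents with a Cohere embedding model.
doc_vectors = co.embed(
    texts=docs, model="embed-english-v3.0", input_type="search_document"
).embeddings

qdrant.create_collection(
    collection_name="support_articles",
    vectors_config=VectorParams(size=1024, distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="support_articles",
    points=[
        PointStruct(id=i, vector=vec, payload={"text": docs[i]})
        for i, vec in enumerate(doc_vectors)
    ],
)

# Embed the question with the matching query input type, then search.
query_vector = co.embed(
    texts=["How long do refunds take?"],
    model="embed-english-v3.0",
    input_type="search_query",
).embeddings[0]

hits = qdrant.search(collection_name="support_articles", query_vector=query_vector, limit=1)
print(hits[0].payload["text"])
```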
#### Start Building Your New App With Cohere and Qdrant Hybrid Cloud ![hybrid-cloud-cohere-tutorial](/blog/hybrid-cloud-cohere/hybrid-cloud-cohere-tutorial.png) We put together an end-to-end tutorial to show you how to build a GenAI application with Qdrant Hybrid Cloud and Cohere’s embeddings. #### Tutorial: Build a RAG System to Answer Customer Support Queries Learn how to set up a private AI service that addresses customer support issues with high accuracy and effectiveness. By leveraging Cohere’s models with Qdrant Hybrid Cloud, you will create a fully private customer support system. [Try the Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-cohere.md
--- draft: false title: Introducing Qdrant Cloud on Microsoft Azure slug: qdrant-cloud-on-microsoft-azure short_description: Qdrant Cloud is now available on Microsoft Azure description: "Learn the benefits of Qdrant Cloud on Azure." preview_image: /blog/from_cms/qdrant-azure-2-1.png date: 2024-01-17T08:40:42Z author: Manuel Meyer featured: false tags: - Data Science - Vector Database - Machine Learning - Information Retrieval - Cloud - Azure --- Great news! We've expanded Qdrant's managed vector database offering — [Qdrant Cloud](https://cloud.qdrant.io/) — to be available on Microsoft Azure. You can now effortlessly set up your environment on Azure, which reduces deployment time, so you can hit the ground running. [Get started](https://cloud.qdrant.io/) What this means for you: - **Rapid application development**: Deploy your own cluster through the Qdrant Cloud Console within seconds and scale your resources as needed. - **Billion vector scale**: Seamlessly grow and handle large-scale datasets with billions of vectors. Leverage Qdrant features like horizontal scaling and binary quantization with Microsoft Azure's scalable infrastructure. **"With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale."** -- Jeremy Teichmann (AI Squad Technical Lead & Generative AI Expert), Daly Singh (AI Squad Lead & Product Owner) - Bosch Digital. Get started by [signing up for a Qdrant Cloud account](https://cloud.qdrant.io). And learn more about Qdrant Cloud in our [docs](/documentation/cloud/). <video autoplay="true" loop="true" width="100%" controls><source src="/blog/qdrant-cloud-on-azure/azure-cluster-deployment-short.mp4" type="video/mp4"></video>
blog/qdrant-cloud-on-microsoft-azure.md
--- draft: false title: "Vultr and Qdrant Hybrid Cloud Support Next-Gen AI Projects" short_description: "Providing a flexible platform for high-performance vector search in next-gen AI workloads." description: "Providing a flexible platform for high-performance vector search in next-gen AI workloads." preview_image: /blog/hybrid-cloud-vultr/hybrid-cloud-vultr.png date: 2024-04-10T00:08:00Z author: Qdrant featured: false weight: 1000 tags: - Qdrant - Vector Database --- We're excited to share that Qdrant and [Vultr](https://www.vultr.com/) are partnering to provide seamless scalability and performance for vector search workloads. With Vultr's global footprint and customizable platform, deploying vector search workloads becomes incredibly flexible. Qdrant's new [Qdrant Hybrid Cloud](/hybrid-cloud/) offering and its Kubernetes-native design, coupled with Vultr's straightforward virtual machine provisioning, allow for simple setup when prototyping and building next-gen AI apps. #### Adapting to Diverse AI Development Needs with Customization and Deployment Flexibility In the fast-paced world of AI and ML, businesses are eagerly integrating AI and generative AI to enhance their products with new features like AI assistants, develop new innovative solutions, and streamline internal workflows with AI-driven processes. Given the diverse needs of these applications, it's clear that a one-size-fits-all approach doesn't apply to AI development. This variability in requirements underscores the need for adaptable and customizable development environments. Recognizing this, Qdrant and Vultr have teamed up to offer developers unprecedented flexibility and control. The collaboration enables the deployment of a fully managed vector database on Vultr's adaptable platform, catering to the specific needs of diverse AI projects. This unique setup offers developers the ideal Vultr environment for their vector search workloads. It ensures seamless adaptability and data privacy with all data residing in their environment. For the first time, Qdrant Hybrid Cloud allows for fully managing a vector database on Vultr, promoting rapid development cycles without the hassle of modifying existing setups and ensuring that data remains secure within the organization. Moreover, this partnership empowers developers with centralized management over their vector database clusters via Qdrant's control plane, enabling precise size adjustments based on workload demands. This joint setup marks a significant step in providing the AI and ML field with flexible, secure, and efficient application development tools. > *"Our collaboration with Qdrant empowers developers to unlock the potential of vector search applications, such as RAG, by deploying Qdrant Hybrid Cloud with its high-performance search capabilities directly on Vultr's global, automated cloud infrastructure. This partnership creates a highly scalable and customizable platform, uniquely designed for deploying and managing AI workloads with unparalleled efficiency."* Kevin Cochrane, Vultr CMO. #### The Benefits of Deploying Qdrant Hybrid Cloud on Vultr Together, Qdrant Hybrid Cloud and Vultr offer enhanced AI and ML development with streamlined benefits: - **Simple and Flexible Deployment:** Deploy Qdrant Hybrid Cloud on Vultr in a few minutes with a simple "one-click" installation by adding your Vultr environment as a Hybrid Cloud Environment to Qdrant. 
- **Scalability and Customizability**: Qdrant’s efficient data handling and Vultr’s scalable infrastructure means projects can be adjusted dynamically to workload demands, optimizing costs without compromising performance or capabilities. - **Unified AI Stack Management:** Seamlessly manage the entire lifecycle of AI applications, from vector search with Qdrant Hybrid Cloud to deployment and scaling with the Vultr platform and its AI and ML solutions, all within a single, integrated environment. This setup simplifies workflows, reduces complexity, accelerates development cycles, and simplifies the integration with other elements of the AI stack like model development, finetuning, or inference and training. - **Global Reach, Local Execution**: With Vultr's worldwide infrastructure and Qdrant's fast vector search, deploy AI solutions globally while ensuring low latency and compliance with local data regulations, enhancing user satisfaction. #### Getting Started with Qdrant Hybrid Cloud and Vultr We've compiled an in-depth guide for leveraging Qdrant Hybrid Cloud on Vultr to kick off your journey into building cutting-edge AI solutions. For further insights into the deployment process, refer to our comprehensive documentation. ![hybrid-cloud-vultr-tutorial](/blog/hybrid-cloud-vultr/hybrid-cloud-vultr-tutorial.png) #### Tutorial: Crafting a Personalized AI Assistant with RAG This tutorial outlines creating a personalized AI assistant using Qdrant Hybrid Cloud on Vultr, incorporating advanced vector search to power dynamic, interactive experiences. We will develop a RAG pipeline powered by DSPy and detail how to maintain data privacy within your Vultr environment. [Try the Tutorial](/documentation/tutorials/rag-chatbot-vultr-dspy-ollama/) #### Documentation: Effortless Deployment with Qdrant Our Kubernetes-native framework simplifies the deployment of Qdrant Hybrid Cloud on Vultr, enabling you to get started in just a few straightforward steps. Dive into our documentation to learn more. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-vultr.md
--- title: "Chat with a codebase using Qdrant and N8N" draft: false slug: qdrant-n8n short_description: Integration demo description: Building a RAG-based chatbot using Qdrant and N8N to chat with a codebase on GitHub preview_image: /blog/qdrant-n8n/preview.jpg date: 2024-01-06T04:09:05+05:30 author: Anush Shetty featured: false tags: - integration - n8n - blog --- n8n (pronounced n-eight-n) helps you connect any app with an API. You can then manipulate its data with little or no code. With the Qdrant node on n8n, you can build AI-powered workflows visually. Let's go through the process of building a workflow. We'll build a "chat with a codebase" service. ## Prerequisites - A running Qdrant instance. If you need one, use our [Quick start guide](/documentation/quick-start/) to set it up. - An OpenAI API Key. Retrieve your key from the [OpenAI API page](https://platform.openai.com/account/api-keys) for your account. - A GitHub access token. If you need to generate one, start at the [GitHub Personal access tokens page](https://github.com/settings/tokens/). ## Building the App Our workflow has two components. Refer to the [n8n quick start guide](https://docs.n8n.io/workflows/create/) to get acquainted with workflow semantics. - A workflow to ingest a GitHub repository into Qdrant - A workflow for a chat service with the ingested documents #### Workflow 1: GitHub Repository Ingestion into Qdrant ![GitHub to Qdrant workflow](/blog/qdrant-n8n/load-demo.gif) For this workflow, we'll use the following nodes: - [Qdrant Vector Store - Insert](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#insert-documents): Configure with [Qdrant credentials](https://docs.n8n.io/integrations/builtin/credentials/qdrant/) and a collection name. If the collection doesn't exist, it's automatically created with the appropriate configurations. - [GitHub Document Loader](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.documentgithubloader/): Configure the GitHub access token, repository name, and branch. In this example, we'll use [qdrant/demo-food-discovery@main](https://github.com/qdrant/demo-food-discovery). - [Embeddings OpenAI](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): Configure with OpenAI credentials and the embedding model options. We use the [text-embedding-ada-002](https://platform.openai.com/docs/models/embeddings) model. - [Recursive Character Text Splitter](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/): Configure the [text splitter options](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/#node-parameters). We use the defaults in this example. Connect the workflow to a manual trigger. Click "Test Workflow" to run it. You should be able to see the progress in real-time as the data is fetched from GitHub, transformed into vectors and loaded into Qdrant. 
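If you want to verify the result outside of n8n, a quick sanity check with the Qdrant Python client might look like the sketch below; the collection name and URL are assumptions, so use the values you configured in the Qdrant node.

```python
# Optional sanity check after Workflow 1: confirm the collection exists and
# count the ingested points. "github_docs" is an assumed collection name --
# use the one you configured in the Qdrant Vector Store node.
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

info = client.get_collection(collection_name="github_docs")
print("vectors config:", info.config.params.vectors)

count = client.count(collection_name="github_docs", exact=True)
print("points ingested:", count.count)
```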
#### Workflow 2: Chat Service with Ingested Documents ![Chat workflow](/blog/qdrant-n8n/chat.png) The workflow uses the following nodes: - [Qdrant Vector Store - Retrieve](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#retrieve-documents-for-agentchain): Configure with [Qdrant credentials](https://docs.n8n.io/integrations/builtin/credentials/qdrant/) and the name of the collection the data was loaded into in Workflow 1. - [Retrieval Q&A Chain](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/): Configure with default values. - [Embeddings OpenAI](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): Configure with OpenAI credentials and the embedding model options. We use the [text-embedding-ada-002](https://platform.openai.com/docs/models/embeddings) model. - [OpenAI Chat Model](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/): Configure with OpenAI credentials and the chat model name. We use [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5) for the demo. Once configured, hit the "Chat" button to initiate the chat interface and begin a conversation with your codebase. ![Chat demo](/blog/qdrant-n8n/chat-demo.png) To embed the chat in your applications, consider using the [@n8n/chat](https://www.npmjs.com/package/@n8n/chat) package. Additionally, n8n supports scheduled workflows and can be triggered by events across various applications. ## Further reading - [n8n Documentation](https://docs.n8n.io/) - [n8n Qdrant Node documentation](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/#qdrant-vector-store)
blog/qdrant-n8n.md
--- title: "Qdrant Updated Benchmarks 2024" draft: false slug: qdrant-benchmarks-2024 short_description: Qdrant Updated Benchmarks 2024 description: We've compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis preview_image: /benchmarks/social-preview.png categories: - News date: 2024-01-15T09:29:33-03:00 author: Sabrina Aquino featured: false tags: - qdrant - benchmarks - performance --- It's time for an update to Qdrant's benchmarks! We've compared how Qdrant performs against the other vector search engines to give you a thorough performance analysis. Let's get into what's new and what remains the same in our approach. ### What's Changed? #### All engines have improved Since the last time we ran our benchmarks, we received a bunch of suggestions on how to run other engines more efficiently, and we applied them. This has resulted in significant improvements across all engines. As a result, we have achieved an impressive improvement of nearly four times in certain cases. You can view the previous benchmark results [here](/benchmarks/single-node-speed-benchmark-2022/). #### Introducing a New Dataset To ensure our benchmark aligns with the requirements of serving RAG applications at scale, the current most common use-case of vector databases, we have introduced a new dataset consisting of 1 million OpenAI embeddings. ![rps vs precision benchmark - up and to the right is better](/blog/qdrant-updated-benchmarks-2024/rps-bench.png) #### Separation of Latency vs RPS Cases Different applications have distinct requirements when it comes to performance. To address this, we have made a clear separation between latency and requests-per-second (RPS) cases. For example, a self-driving car's object recognition system aims to process requests as quickly as possible, while a web server focuses on serving multiple clients simultaneously. By simulating both scenarios and allowing configurations for 1 or 100 parallel readers, our benchmark provides a more accurate evaluation of search engine performance. ![mean-time vs precision benchmark - down and to the right is better](/blog/qdrant-updated-benchmarks-2024/latency-bench.png) ### What Hasn't Changed? #### Our Principles of Benchmarking At Qdrant all code stays open-source. We ensure our benchmarks are accessible for everyone, allowing you to run them on your own hardware. Your input matters to us, and contributions and sharing of best practices are welcome! Our benchmarks are strictly limited to open-source solutions, ensuring hardware parity and avoiding biases from external cloud components. We deliberately don't include libraries or algorithm implementations in our comparisons because our focus is squarely on vector databases. Why? Because libraries like FAISS, while useful for experiments, don't fully address the complexities of real-world production environments. 
They lack features like real-time updates, CRUD operations, high availability, scalability, and concurrent access – essentials in production scenarios. A vector search engine is not only its indexing algorithm, but its overall performance in production. We use the same benchmark datasets as the [ann-benchmarks](https://github.com/erikbern/ann-benchmarks/#data-sets) project so you can compare our performance and accuracy against it. ### Detailed Report and Access For an in-depth look at our latest benchmark results, we invite you to read the [detailed report](/benchmarks/). If you're interested in testing the benchmark yourself or want to contribute to its development, head over to our [benchmark repository](https://github.com/qdrant/vector-db-benchmark). We appreciate your support and involvement in improving the performance of vector databases.
blog/qdrant-updated-benchmarks-2024.md
--- draft: false title: "Qdrant Hybrid Cloud and Haystack for Enterprise RAG" short_description: "A winning combination for enterprise-scale RAG consists of a strong framework and a scalable database." description: "A winning combination for enterprise-scale RAG consists of a strong framework and a scalable database." preview_image: /blog/hybrid-cloud-haystack/hybrid-cloud-haystack.png date: 2024-04-10T00:02:00Z author: Qdrant featured: false weight: 1009 tags: - Qdrant - Vector Database --- We're excited to share that Qdrant and [Haystack](https://haystack.deepset.ai/) are continuing to expand their seamless integration to the new [Qdrant Hybrid Cloud](/hybrid-cloud/) offering, allowing developers to deploy a managed vector database in their own environment of choice. Earlier this year, both Qdrant and Haystack started to address their users' growing need for production-ready retrieval-augmented-generation (RAG) deployments. The ability to build and deploy AI apps anywhere now allows for complete data sovereignty and control. This gives large enterprise customers the peace of mind they need before they expand AI functionalities throughout their operations. With a highly customizable framework like Haystack, implementing vector search becomes incredibly simple. Qdrant's new Qdrant Hybrid Cloud offering and its Kubernetes-native design support customers all the way from a simple prototype setup to a production scenario on any hosting platform. Users can attach AI functionalities to their existing in-house software by creating custom integration components. Don't forget, both products are open-source and highly modular! With Haystack and Qdrant Hybrid Cloud, the path to production has never been clearer. The elaborate integration of Qdrant as a Document Store simplifies the deployment of Haystack-based AI applications in any production-grade environment. Coupled with Qdrant's Hybrid Cloud offering, your application can be deployed anyplace, on your own terms. >*"We hope that with Haystack 2.0 and our growing partnerships such as what we have here with Qdrant Hybrid Cloud, engineers are able to build AI systems with full autonomy. Both in how their pipelines are designed, and how their data are managed."* Tuana Çelik, Developer Relations Lead, deepset. #### Simplifying RAG Deployment: Qdrant Hybrid Cloud and Haystack 2.0 Integration Building apps with Qdrant Hybrid Cloud and deepset's framework has become even simpler with Haystack 2.0. Both products are completely optimized for RAG in production scenarios. Here are some key advantages: **Mature Integration:** You can connect your Haystack pipelines to Qdrant in a few lines of code. Qdrant Hybrid Cloud leverages the existing "Document Store" integration for data sources. This common interface makes it easy to access Qdrant as a data source from within your existing setup. **Production Readiness:** With deepset's new product [Hayhooks](https://docs.haystack.deepset.ai/docs/hayhooks), you can generate RESTful APIs from Haystack pipelines. This simplifies the deployment process and makes the service easily accessible by developers using Qdrant Hybrid Cloud to prepare RAG systems for production. **Flexible & Customizable:** The open-source nature of Qdrant and Haystack 2.0 makes it easy to extend the capabilities of both products through customization. When tailoring vector RAG systems to their own needs, users can develop custom components and plug them into both Qdrant Hybrid Cloud and Haystack for maximum modularity. 
[Creating custom components](https://docs.haystack.deepset.ai/docs/custom-components) is a core functionality. #### Learn How to Build a Production-Level RAG Service with Qdrant and Haystack ![hybrid-cloud-haystack-tutorial](/blog/hybrid-cloud-haystack/hybrid-cloud-haystack-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud using deepset’s Haystack framework. #### Tutorial: Private Chatbot for Interactive Learning Learn how to develop a tutor chatbot from online course materials. You will create a Retrieval Augmented Generation (RAG) pipeline with Haystack for enhanced generative AI capabilities and Qdrant Hybrid Cloud for vector search. By deploying every tool on RedHat OpenShift, you will ensure complete privacy and data sovereignty, whereby no course content leaves your cloud. [Try the Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to get started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
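As a rough illustration of what "a few lines of code" can look like, a minimal Haystack 2.0 retrieval pipeline backed by Qdrant might be wired up as below. The import paths, component names, model, and endpoint are assumptions based on the qdrant-haystack integration package, so check the current Haystack documentation before relying on them.

```python
# Minimal retrieval pipeline sketch for Haystack 2.x + Qdrant.
# Import paths, model name, index name, and endpoint are assumptions --
# verify them against the current qdrant-haystack documentation.
from haystack import Pipeline
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack_integrations.components.retrievers.qdrant import QdrantEmbeddingRetriever
from haystack_integrations.document_stores.qdrant import QdrantDocumentStore

document_store = QdrantDocumentStore(
    url="http://localhost:6333",   # point this at your Hybrid Cloud endpoint
    index="knowledge-base",
    embedding_dim=384,
)

pipeline = Pipeline()
pipeline.add_component(
    "embedder",
    SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"),
)
pipeline.add_component("retriever", QdrantEmbeddingRetriever(document_store=document_store))
pipeline.connect("embedder.embedding", "retriever.query_embedding")

result = pipeline.run({"embedder": {"text": "How do I reset my password?"}})
print(result["retriever"]["documents"])
```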
blog/hybrid-cloud-haystack.md
--- draft: false title: Teaching Vector Databases at Scale - Alfredo Deza | Vector Space Talks slug: teaching-vector-db-at-scale short_description: Alfredo Deza tackles AI teaching, the intersection of technology and academia, and the value of consistent learning. description: Alfredo Deza discusses the practicality of machine learning operations, highlighting how personal interest in topics like wine datasets enhances engagement, while reflecting on the synergies between his professional sportsman discipline and the persistent, straightforward approach required for effectively educating on vector databases and large language models. preview_image: /blog/from_cms/alfredo-deza-bp-cropped.png date: 2024-04-09T03:06:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - Vector Space Talks - Coursera --- > *"So usually I get asked, why are you using Qdrant? What's the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There's one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it's easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well."*\ — Alfredo Deza > Alfredo is a software engineer, speaker, author, and former Olympic athlete working in Developer Relations at Microsoft. He has written several books about programming languages and artificial intelligence and has created online courses about the cloud and machine learning. He currently is an Adjunct Professor at Duke University, and as part of his role, works closely with universities around the world like Georgia Tech, Duke University, Carnegie Mellon, and Oxford University where he often gives guest lectures about technology. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/4HFSrTJWxl7IgQj8j6kwXN?si=99H-p0fKQ0WuVEBJI9ugUw), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/3l6F6A_It0Q?feature=shared).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/3l6F6A_It0Q?si=cFZGAh7995iHilcY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Teaching-Vector-Databases-at-Scale---Alfredo-Deza--Vector-Space-Talks-019-e2hhjlo/a-ab3qp7u" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** How does a former athlete such as Alfredo Deza end up in the AI and machine learning industry? That's what we'll find out in this episode of Vector Space Talks. Let's understand how his background as an Olympian offers a unique perspective on consistency and discipline that's a real game-changer in this industry. Here are some things you'll discover from this episode: 1. **The Intersection of Teaching and Tech:** Alfredo discusses how to effectively bridge the gap between technical concepts and student understanding, especially when dealing with complex topics like vector databases. 2. 
**Simplified Learning:** Dive into Alfredo's advocacy for simplicity in teaching methods, mirroring his approach with Qdrant and the potential for a Rust in-memory implementation aimed at enhancing learning experiences. 3. **Beyond the Titanic Dataset:** Discover why Alfredo prefers to teach with a wine dataset he developed himself, underscoring the importance of using engaging subject matter in education. 4. **AI Learning Acceleration:** Alfredo discusses the struggle universities face to keep pace with AI advancements and how online platforms can offer a more up-to-date curriculum. 5. **Consistency is Key:** Alfredo draws parallels between the discipline required in high-level athletics and the ongoing learning journey in AI, zeroing in on his mantra, “There is no secret” to staying consistent. > Fun Fact: Alfredo tells the story of athlete Dick Fosbury's invention of the Fosbury Flop to highlight the significance of teaching simplicity. > ## Show notes: 00:00 Teaching machine learning, Python to graduate students.\ 06:03 Azure AI search service simplifies teaching, Qdrant facilitates learning.\ 10:49 Controversy over high jump style.\ 13:18 Embracing past for inspiration, emphasizing consistency.\ 15:43 Consistent learning and practice lead to success.\ 20:26 Teaching SQL uses SQLite, Rust has limitations.\ 25:21 Online platforms improve and speed up education.\ 29:24 Duke and Coursera offer specialized language courses.\ 31:21 Passion for wines, creating diverse dataset.\ 35:00 Encouragement for vector db discussion, wrap up.\ ## More Quotes from Alfredo: *"Qdrant makes it straightforward. We use it in-memory for my classes and I would love to see something similar setup in Rust to make teaching even easier.”*\ — Alfredo Deza *"Retrieval augmented generation is kind of like having an open book test. So the large language model is the student, and they have an open book so they can see the answers and then repackage that into their own words and provide an answer.”*\ — Alfredo Deza *"With Qdrant, I appreciate that the use of the Python API is so simple. It avoids the complexity that comes from having a back-end system like in Rust where you need an actual instance of the database running.”*\ — Alfredo Deza ## Transcript: Demetrios: What is happening? Everyone, welcome back to another vector space talks. I am Demetrios, and I am joined today by good old Sabrina. Where you at, Sabrina? Hello? Sabrina Aquino: Hello, Demetrios. I'm from Brazil. I'm in Brazil right now. I know that you are traveling currently. Demetrios: Where are you? At Kubecon in Paris. And it has been magnificent. But I could not wait to join the session today because we've got Alfredo coming at us. Alfredo Deza: What's up, dude? Hi. How are you? Demetrios: I'm good, man. It's been a while. I think the last time that we chatted was two years ago, maybe right before your book came out. When did the book come out? Alfredo Deza: Yeah, something like that. I would say a couple of years ago. Yeah. I wrote, co-authored Practical Machine Learning Operations with Noah Gift. And it was published on O'Reilly. Demetrios: Yeah. And that was, I think, two years ago. So you've been doing a lot of stuff since then. Let's be honest, you are maybe one of the most active men on the Internet. I always love seeing what you're doing. You're bringing immense value to everything that you touch. I'm really excited to be able to chat with you for this next 30 minutes. Alfredo Deza: Yeah, of course. Demetrios: Maybe just, we'll start it off. 
We're going to get into it when it comes to what you're doing and really what the space looks like right now. Right. But I would love to hear a little bit of what you've been up to since, for the last two years, because I haven't talked to you. Alfredo Deza: Yeah, that's right. Well, several different things, actually. Right after we chatted last time, I joined Microsoft to work in developer relations. Microsoft has a big group of folks working in developer relations. And basically, for me, it signaled my shift away from regular software engineering. I was primarily doing software engineering and thought that perhaps with the books and some of the courses that I had published, it was time for me to get into more teaching and providing useful content, which is really something very rewarding. And in developer relations, in advocacy in general, it's kind of like a way of teaching. We demonstrate technology, how it works from a technical point of view. Alfredo Deza: So aside from that, started working really closely with several different universities. I work with Georgia Tech, Oxford University, Carnegie Mellon University, and Duke University, where I've been working as an adjunct professor for a couple of years as well. So at Duke, what I do is I teach a couple of classes a year. One is on machine learning. Last year was machine learning operations, and this year it's going to, I think, hopefully I'm not messing anything up. I think we're going to shift a little bit to doing operations with large language models. And in the fall I teach a programming class for graduate students that want to join one of the graduate programs and they want to get a primer on Python. So I teach a little bit of that. Alfredo Deza: And in the meantime, also in partnership with Duke, getting a lot of courses out on Coursera, and from large language models to doing stuff with Azure, to machine learning operations, to rust, I've been doing a lot of rust lately, which I really like. So, yeah, so a lot of different things, but I think the core pillar for me remains being able to teach and spread the knowledge. Demetrios: Love it, man. And I know you've been diving into vector databases. Can you tell us more? Alfredo Deza: Yeah, well, the thing is that when you're trying to teach, and yes, one of the courses that we had out for large language models was applying retrieval augmented generation, which is the basis for vector databases, to see how it works. This is how it works. These are the components that you need. Let's create an application from scratch and see how it works. And for those that don't know, retrieval augmented generation is kind of like having. The other day I saw a description about this, which I really like, which is a way of, it's kind of like having an open book test. So the large language model is the student, and they have an open book so they can see the answers and then repackage that into their own words and provide an answer, which is kind of like what we do with vector databases in the retrieval augmented generation pattern. We've been putting a lot of examples on how to do these, and in the case of Azure, you're enabling certain services. Alfredo Deza: There's the Azure AI search service, which is really good. But sometimes when you're trying to teach specifically, it is useful to have a very straightforward way to do this and applying or creating a retrieval augmented generation pattern, it's kind of tricky, I think. We're not there yet to do it in a nice, straightforward way. 
So there are several different options, Qdrant being one of them. So usually I get asked, why are you using Qdrant? What's the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There's one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it's easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well. If something is very complex, if the list of requirements is very long, you're not going to be very happy, you're going to spend all this time trying to figure, and when you have, similar to what happens with automation, when you have a list of 20 different things that you need to, in order to, say, deploy a website, you're going to get things out of order, you're going to forget one thing, you're going to have a typo, you're going to mess it up, you're going to have to start from scratch, and you're going to get into a situation where you can't get out of it. And Qdrant does provide a very straightforward way to run the database, and that one is the in memory implementation with Python. Alfredo Deza: So you can actually write a little bit of python once you install the libraries and say, I want to instantiate a vector database and I wanted to run it in memory. So for teaching, this is great. It's like, hey, of course it's not for production, but just write these couple of lines and let's get right into it. Let's just start populating these and see how it works. And it works. It's great. You don't need to have all of these, like, wow, let's launch Kubernetes over here and let's have all of these dynamic. No, why? I mean, sure, you want to create a business model and you want to launch to production eventually, and you want to have all that running perfect. Alfredo Deza: But for this setup, like for understanding how it works, for trying baby steps into understanding vector databases, this is perfect. My one requirement, or my one wish list item is to have that in memory thing for rust. That would be pretty sweet, because I think it'll make teaching rust and retrieval augmented generation with rust much easier. I wouldn't have to worry about bringing up containers or external services. So that's the deal with rust. And I'll tell you one last story about why I think specifically making it easy to get started with so that I can teach it, so that others can learn from it, is crucial. I would say almost 50 years ago, maybe a little bit more, my dad went to Italy to have a course on athletics. My dad was involved in sports and he was going through this, I think it was like a six month specialization on athletics. Alfredo Deza: And he was in class and it had been recent that the high jump had transitioned from one style to the other. The previous style, the old style right now is the old style. It's kind of like, it was kind of like over the bar. It was kind of like a weird style. And it had recently transitioned to a thing called the Fosbury flop. This person, his last name is Dick Fosbury, invented the Fosbury flop. He said, no, I'm just going to go straight at it, then do a little curve and then jump over it. And then he did, and then he started winning everything. Alfredo Deza: And everybody's like, what this guy? 
Well, first they thought he was crazy, and they thought that dismissive of what he was trying to do. And there were people that sticklers that wanted to stay with the older style, but then he started beating records and winning medals, and so people were like, well, is this a good thing? Let's try it out. So there was a whole. They were casting doubt. It's like, is this really the thing? Is this really what we should be doing? So one of the questions that my dad had to answer in this specialization he did in Italy was like, which style is better, it's the old style or the new style? And so my dad said, it's the new style. And they asked him, why is the new style better? And he didn't choose the path of answering the, well, because this guy just won the Olympics or he just did a record over here that at the end is meaningless. What he said was, it is the better style because it's easier to teach and it is 100% correct. When you're teaching high jump, it is much easier to teach the Fosbury flop than the other style. Alfredo Deza: It is super hard. So you start seeing this parallel in teaching and learning where, but with this one, you have all of these world records and things are going great. Well, great. But is anybody going to try, are you going to have more people looking into it or are you going to have less? What is it that we're trying to do here? Right. Demetrios: Not going to lie, I did not see how you were going to land the plane on coming from the high jump into the vector database space, but you did it gracefully. That was well done. So, basically, the easier it is to teach, the more people are going to be able to jump on board and the more people are going to be able to get value out of it. Sabrina Aquino: I absolutely love it, by the way. It's a pleasure to meet you, Alfredo. And I was actually about to ask you. I love your background as an olympic athlete. Right. And I was wondering, do you make any connections or how do we interact this background with your current teaching and AI? And do you see any similarities or something coming from that approach into what you've applied? Alfredo Deza: Well, you're bringing a great point. It's taken me a very long time to feel comfortable talking about my professional sports past. I don't want to feel like I'm overwhelming anyone or trying to be like a show off. So I usually try not to mention, although I'm feeling more comfortable mentioning my professional past. But the only situations where I think it's good to talk about it is when I feel like there's a small chance that I might get someone thinking about the possibilities of what they can actually do and what they can try. And things that are seemingly complex might be achievable. So you mentioned similarities, but I think there are a couple of things that happen when you're an athlete in any sport, really, that you're trying to or you're operating at the very highest level and there's several things that happen there. You have to be consistent. Alfredo Deza: And it's something that I teach my kids as well. I have one of my kids, he's like, I did really a lot of exercise today and then for a week he doesn't do anything else. And he's like, now I'm going to do exercise again. And she's going to do 4 hours. And it's like, wait a second, wait a second. It's okay. You want to do it. This is great. Alfredo Deza: But no intensity. You need to be consistent. Oh, dad, you don't let me work out and it's like, no work out. 
Good, I support you, but you have to be consistent and slowly start ramping up and slowly start getting better. And it happens a lot with learning. We are in an era that concepts and things are advancing so fast that things are getting obsolete even faster. So you're always in this motion of trying to learn. So what I would say is the similarities are in the consistency. Alfredo Deza: You have to keep learning, you have to keep applying yourself. But it can't be like, oh, today I'm going to read this whole book from start to end and you're just going to learn everything about, I don't know, rust. It's like, well, no, try applying rust a little bit every day and feel comfortable with it. And at the very end you will do better. Like, you can't go with high intensity because you're going to get burned out, you're going to get overwhelmed and it's not going to work out. You don't go to the Olympics by working out for like a few months. Actually, a very long time ago, a reporter asked me, how many months have you been working out preparing for the Olympics? It's like, what do you mean with how many months? I've been training my whole life for this. What are we talking about? Demetrios: We're not talking in months or years. We're talking in lifetimes, right? Alfredo Deza: So you have to take it easy. You can't do that. And beyond that, consistency. Consistency goes hand in hand with discipline. I came to the US in 2006. I don't live like I was born in Peru and I came to the US with no degree. I didn't go to college. Well, I went to college for a few months and then I dropped out and I didn't have a career, I didn't have experience. Alfredo Deza: I was just recently married. I have never worked in my life because I used to be a professional athlete. And the only thing that I decided to do was to do amazing work, apply myself and try to keep learning and never stop learning. In the back of my mind, it's like, oh, I have a tremendous knowledge gap that I need to fulfill by learning. And actually, I have tremendous respect and I'm incredibly grateful by all of the people that opened doors for me and gave me an opportunity, one of them being Noah Gift, with whom I co-authored a few books and some of the courses. And he actually taught me to write Python. I didn't know how to program. And he said, you know what? I think you should learn to write some python. Alfredo Deza: And I was like, python? Why would I ever need to do that? And I did. He's like, let's just find something to automate. I mean, what a concept. Find something to apply automation. And every week on Fridays, we'll just take a look at it and that's it. And we did that for a while. And then he said, you know what? You should apply for speaking at PyCon. How can I be speaking at a conference when I just started learning? It's like your perspective is different. Alfredo Deza: You just started learning these. You're going to do it in an interesting way. So I think those are concepts that are very important to me. Stay disciplined, stay consistent, and keep at it. The secret is that there's no secret. That's the bottom line. You have to keep consistent. Otherwise you're always making excuses. Alfredo Deza: It's very simple. Demetrios: The secret is there is no secret. That is beautiful. So you did kind of sprinkle this idea of, oh, I wish there was more stuff happening with Qdrant and rust. Can you talk a little bit more to that? Because one piece of Qdrant that people tend to love is that it's built in rust. Right. 
But also, I know that you mentioned before, could we get a little bit of this action so that I don't have to deal with any. What was it you were saying? The containers. Alfredo Deza: Yeah. Right. Now, if you want to have a proof of concept, and I always go for like, what's the easiest, the most straightforward, the less annoying things I need to do, the better. And with Python, the Python API for Qdrant, you can just write a few lines and say, I want to create an instance in memory and then that's it. The database is created for you. This is very similar, or I would say actually almost identical to how you run SQLite. SQLite is the embedded database you can create in memory. And it's actually how I teach SQL as well. Alfredo Deza: When I have to teach SQL, I use SQLite. I think it's perfect. But in rust, like you said, Qdrant's backend is built on rust. There is no in memory implementation. So you are required to have an actual instance of the Qdrant database running. So you have a couple of options, but one of them probably means you'll have to bring up a container with Qdrant running and then you'll have to connect to that instance. So when you're teaching, the development environments are kind of constrained. Either you are in a lab somewhere like Coursera has labs, but those are self-contained. Alfredo Deza: It's kind of tricky to get them running 100%. You can run multiple containers at the same time. So things start becoming more complex. Not only more complex for the learner, but also in this case, like the teacher, me who wants to figure out how to make this all run in a very constrained environment. And that makes it tricky. And I asked the team, by the way, and I was told that maybe at some point they can do some magic and put the in memory implementation on the rust side of things, which I think would be tremendous. Sabrina Aquino: We're going to advocate for that on our side. We're also going to be asking for it. And I think this is really good too. It really makes it easier. Me as a student not long ago, I do see what you mean. It's quite hard to get it all working very fast in the time of a class that you don't have a lot of time and students can get. I don't know, it's quite complex. I do get what you mean. Sabrina Aquino: And you also are working both on the tech industry and on academia, which I think is super interesting. And I always kind of feel like those two are a bit disconnected sometimes. And I was wondering what you think, how important is the collaboration of these two areas considering how fast the AI space is moving right now? And what are your thoughts? Alfredo Deza: Well, I don't like generalizing, but I'm going to generalize right now. I would say most universities are several steps behind, and there's a lot of complexities involved in higher education specifically. Most importantly, these institutions tend to be fairly large, and with fairly large institutions, what do you get? Oh, you get the magical bureaucracy for anything you want to do. Something like, oh, well, you need to talk to that department that needs to authorize something, that needs to go to some other department, and it's like, I'm going to change the curriculum. It's like, no, you can't. What does that mean? I have actually had conversations with faculty in universities where they say, listen, curricula. Yeah, we get that. We need to update it, but we change curricula every five years. Alfredo Deza: And so. See you in a while. It's been three years. We have two more years to go. 
See you in a couple of years. And that's detrimental to students now. I get it. Building curricula, it's very hard. Alfredo Deza: It takes a lot of work for the faculty to put something together. So it is something that, from a faculty perspective, it's like they're not going to get paid more if they update the curriculum. Demetrios: Right. Alfredo Deza: And it's a massive amount of work now that, of course, comes to the detriment of the learner. The student will be underserved because they will have to go through curricula that is fairly dated. Now, there are situations and there are programs where this doesn't happen. At Duke, I've worked with several. They're teaching Llamafile, which was built by Mozilla. And when did Llamafile come out? It was just like a few months ago. And I think it's incredible. And I think those skills that are the ones that students need today in order to not only learn these things, but also be able to apply them when they're looking for a job or trying to professionally even apply them into their day to day, now that's one side of things. Alfredo Deza: But there's the other aspect. In the case of Duke, as well as other universities out there, they're using these online platforms so that they can put courses out there faster. Do you really need to go through a four year program to understand how retrieval augmented generation works? Or how to implement it? I would argue no, but would you be better off, like, taking a course that will take you perhaps a couple of weeks to go through and be fairly proficient? I would say yes, 100%. And you see several institutions putting courses out there that are meaningful, that are useful, that they can cope with the speed at which things are needed. I think it's kind of good. And I think that sometimes we tend to think about knowledge and learning things, kind of like in a bubble, especially here in the US. I think there's this idea that college is this magical place where all of the amazing things happen. And if you don't go to college, things are going to go very bad for you. Alfredo Deza: And I don't think that's true. I think if you like college, if you like university, by all means take advantage of it. You want to experience it. That sounds great. I think there's tons of opportunity to do it outside of the university or the college setting and taking online courses from validated instructors. They have a good profile. Not someone that just dumped something on generative AI and started. Demetrios: Someone like you. Alfredo Deza: Well, if you want to. Yeah, sure, why not? I mean, there's students that really like my teaching style. I think that's great. If you don't like my teaching style. Sometimes I tend to go a little bit slower because I don't want to overwhelm anyone. That's all good. But there is opportunity. And when I mention these things, people are like, oh, really? I'm not advertising for Coursera or anything else, but some of these platforms, if you pay a monthly fee, I think it's between $40 and $60. Alfredo Deza: I think on the expensive side, you can take advantage of all of these courses and as much as you can take them. Sometimes even companies say, hey, you have a paid subscription, go take it all. And I've met people like that. It's like, this is incredible. I'm learning so much. Perfect. I think there's a mix of things. I don't think there's like a binary answer, like, oh, you need to do this, or, no, don't do that, and everything's going to be well again. Demetrios: Yeah. 
Can you talk a little bit more about your course? And if I wanted to go on Coursera, what can I expect from. Alfredo Deza: You know, and again, I don't think as much as I like talking about my courses and the things that I do, I want to emphasize, like, if someone is watching this video or listening into what we're talking about, find something that is interesting to you and find a course that kind of delivers that thing, that sliver of interesting stuff, and then try it out. I think that's the best way. Don't get overwhelmed by. It's like, is this the right vector database that I should be learning? Is this instructor? It's like, no, try it out. What's going to happen? You don't like it when you're watching a bad video series or docuseries on Netflix or any streaming platform? Do you just like, I pay my $10 a month, so I'm going to muster through this whole 20 more episodes of this thing that I don't like. It's meaningless. It doesn't matter. Just move on. Alfredo Deza: So having said that, on Coursera specifically with Duke University, we tend to put courses out there that are going to be used in our programs in the things that I teach. For example, we just released the large language models specialization. A specialization is a grouping of between four and six courses. So in there we have doing large language models with Azure, for example, introduction to generative AI, having a very simple RAG pattern with Qdrant. I also have examples on how to do it with Azure AI search, which I think is pretty cool as well. How to do it locally with Llamafile, which I think is great. You can have all of these large language models running locally, and then you have a little bit of Qdrant sprinkled over there, and then you have a RAG pattern. Now, I tend to teach with things that I really like, and I'll give you a quick example. Alfredo Deza: I think there's three data sets that are one of the top three most used data sets in all of machine learning and data science. Those are the Boston housing market, the diabetes data set in the US, and the other one is the Titanic. And everybody uses those. And I don't really understand why. I mean, perhaps I do understand why. It's because they're easy, they're clean, they're ready to go. Nothing's ever wrong with these, and everybody has used them to boredom. But for the life of me, you wouldn't be able to convince me to use any of those, because these are not topics that I really care about and they don't resonate with me. Alfredo Deza: The Titanic specifically is just horrid. Well, if I was 37 and I'm on first class and I'm male, would I survive? It's like, what are we trying to do here? How is this useful to anyone? So I tend to use things that I like, and I'm really passionate about wine. So I built my own data set, which is a collection of wines from all over the world, they have the ratings, they have the region, they have the type of grape and the notes and the name of the wine. So when I'm teaching them, like, look at this, this is amazing. It's wines from all over the world. So let's do a little bit of things here. So, for RAG, what I was able to do is actually in the courses as well. I do, ah, I really know wines from Argentina, but these wines, it would be amazing if you can find me not a Malbec, but perhaps a Cabernet Franc. Alfredo Deza: That is amazing. From, it goes through Qdrant, goes back to Llamafile using some large language model or even small language model, like the Phi-2 from Microsoft, I think is really good. And he goes, it tells. 
Yeah, sure. I get that you want to have some good wines. Here's some good stuff that I can give you. And so it's great, right? I think it's great. So I think those kinds of things that are interesting to the person that is teaching or presenting, I think that's the key, because whenever you're talking about things that are very boring, that you do not care about, things are not going to go well for you. Alfredo Deza: I mean, if I didn't like teaching, if I didn't like vector databases, you would tell right away. It's like, well, yes, I've been doing stuff with the vector databases. They're good. Yeah, Qdrant, very good. You would tell right away. I can't lie. Very good. Demetrios: You can't fool anybody. Alfredo Deza: No. Demetrios: Well, dude, this is awesome. We will drop a link to the chat. We will drop a link to the course in the chat so that in case anybody does want to go on this wine tasting journey with you, they can. And I'm sure there's all kinds of things that will spark the creativity of the students as they go through it, because when you were talking about that, I was like, oh, it would be really cool to make that same type of thing, but with ski resorts there, you go around the world. And if I want this type of ski resort, I'm going to just ask my chat bot. So I'm excited to see what people create with it. I also really appreciate you coming on here, giving us your time and talking through all this. It's been a pleasure, as always, Alfredo. Demetrios: Thank you so much. Alfredo Deza: Yeah, thank you. Thank you for having me. Always happy to chat with you. I think Qdrant is doing a very solid product. Hopefully, my wish list item of in memory in rust comes to fruition, but I get it. Sometimes there are other priorities. It's all good. Yeah. Alfredo Deza: If anyone wants to connect with me, I'm always active on LinkedIn primarily. Always happy to connect with folks and talk about learning and improving and always being a better person. Demetrios: Excellent. Well, we will sign off, and if anyone else out there wants to come on here and talk to us about vector databases, we're always happy to have you. Feel free to reach out. And remember, don't get lost in vector space, folks. We will see you on the next one. Sabrina Aquino: Good night. Thank you so much.
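For anyone who wants to try the in-memory mode Alfredo praises for teaching, here is a minimal sketch. It assumes only the `qdrant-client` Python package; the collection name and toy vectors are illustrative and are not taken from his course material.

```python
# A minimal sketch of Qdrant's ":memory:" mode, as discussed in the episode.
# Everything runs in-process, so no container or server is needed.
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")

client.create_collection(
    collection_name="teaching-demo",
    vectors_config=models.VectorParams(size=4, distance=models.Distance.COSINE),
)

client.upsert(
    collection_name="teaching-demo",
    points=[
        models.PointStruct(id=1, vector=[0.1, 0.9, 0.1, 0.0], payload={"topic": "rag"}),
        models.PointStruct(id=2, vector=[0.8, 0.1, 0.0, 0.1], payload={"topic": "sql"}),
    ],
)

hits = client.search(
    collection_name="teaching-demo",
    query_vector=[0.1, 0.8, 0.2, 0.0],
    limit=1,
)
print(hits[0].payload)  # -> {'topic': 'rag'}
```

Because it all runs in-process, the same few lines work in a notebook or a constrained lab environment, which is exactly the teaching scenario described above.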
blog/teaching-vector-databases-at-scale-alfredo-deza-vector-space-talks-019-2.md
--- draft: false title: "Qdrant Hybrid Cloud and Scaleway Empower GenAI" short_description: "Supporting innovation in AI with the launch of a revolutionary managed database for startups and enterprises." description: "Supporting innovation in AI with the launch of a revolutionary managed database for startups and enterprises." preview_image: /blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway.png date: 2024-04-10T00:06:00Z author: Qdrant featured: false weight: 1002 tags: - Qdrant - Vector Database --- In a move to empower the next wave of AI innovation, Qdrant and [Scaleway](https://www.scaleway.com/en/) collaborate to introduce [Qdrant Hybrid Cloud](/hybrid-cloud/), a fully managed vector database that can be deployed on existing Scaleway environments. This collaboration is set to democratize access to advanced AI capabilities, enabling developers to easily deploy and scale vector search technologies within Scaleway's robust and developer-friendly cloud infrastructure. By focusing on the unique needs of startups and the developer community, Qdrant and Scaleway are providing access to intuitive and easy to use tools, making cutting-edge AI more accessible than ever before. Building on this vision, the integration between Scaleway and Qdrant Hybrid Cloud leverages the strengths of both Qdrant, with its leading open-source vector database, and Scaleway, known for its innovative and scalable cloud solutions. This integration means startups and developers can now harness the power of vector search - essential for AI applications like recommendation systems, image recognition, and natural language processing - within their existing environment without the complexity of maintaining such advanced setups. *"With our partnership with Qdrant, Scaleway reinforces its status as Europe's leading cloud provider for AI innovation. The integration of Qdrant's fast and accurate vector database enriches our expanding suite of AI solutions. This means you can build smarter, faster AI projects with us, worry-free about performance and security." Frédéric BARDOLLE, Lead PM AI @ Scaleway* #### Developing a Retrieval Augmented Generation (RAG) Application with Qdrant Hybrid Cloud, Scaleway, and LangChain Retrieval Augmented Generation (RAG) enhances Large Language Models (LLMs) by integrating vector search to provide precise, context-rich responses. This combination allows LLMs to access and incorporate specific data in real-time, vastly improving the quality of AI-generated content. RAG applications often rely on sensitive or proprietary internal data, emphasizing the importance of data sovereignty. Running the entire stack within your own environment becomes crucial for maintaining control over this data. Qdrant Hybrid Cloud deployed on Scaleway addresses this need perfectly, offering a secure, scalable platform that respects data sovereignty requirements while leveraging the full potential of RAG for sophisticated AI solutions. ![hybrid-cloud-scaleway-tutorial](/blog/hybrid-cloud-scaleway/hybrid-cloud-scaleway-tutorial.png) We created a tutorial that guides you through setting up and leveraging Qdrant Hybrid Cloud on Scaleway for a RAG application, providing insights into efficiently managing data within a secure, sovereign framework. It highlights practical steps to integrate vector search with LLMs, optimizing the generation of high-quality, relevant AI content, while ensuring data sovereignty is maintained throughout. 
[Try the Tutorial](/documentation/tutorials/rag-chatbot-scaleway/) #### The Benefits of Running Qdrant Hybrid Cloud on Scaleway Choosing Qdrant Hybrid Cloud and Scaleway for AI applications offers several key advantages: - **AI-Focused Resources:** Scaleway aims to be the cloud provider of choice for AI companies, offering the resources and infrastructure to power complex AI and machine learning workloads, helping to advance the development and deployment of AI technologies. This, paired with Qdrant Hybrid Cloud, provides a strong foundational platform for advanced AI applications. - **Scalable Vector Search:** Qdrant Hybrid Cloud provides a fully managed vector database that allows you to effortlessly scale the setup through vertical or horizontal scaling. Deployed on Scaleway, this is a robust setup that is designed to meet the needs of businesses at every stage of growth, from startups to large enterprises, ensuring a full spectrum of solutions for various projects and workloads. - **European Roots and Focus**: With a strong presence in Europe and a commitment to supporting the European tech ecosystem, Scaleway is ideally positioned to partner with European-based companies like Qdrant, providing local expertise and infrastructure that aligns with European regulatory standards. - **Sustainability Commitment**: Scaleway leads with an eco-conscious approach, featuring adiabatic data centers that significantly reduce cooling costs and environmental impact. Scaleway prioritizes extending hardware lifecycle beyond industry norms to lessen its ecological footprint. #### Get Started in a Few Seconds Setting up Qdrant Hybrid Cloud on Scaleway is streamlined and quick, thanks to its Kubernetes-native architecture. Follow these three simple steps to launch: 1. **Activate Hybrid Cloud**: First, log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and select ‘Hybrid Cloud’ to activate. 2. **Integrate Your Clusters**: Navigate to the Hybrid Cloud settings and add your Scaleway Kubernetes clusters as a Hybrid Cloud Environment. 3. **Simplified Management**: Use the Qdrant Management Console for easy creation and oversight of your Qdrant clusters on Scaleway. For more comprehensive guidance, our documentation provides step-by-step instructions for deploying Qdrant on Scaleway. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-scaleway.md
--- draft: false title: '"Vector search and applications" by Andrey Vasnetsov, CTO at Qdrant' preview_image: /blog/from_cms/ramsri-podcast-preview.png slug: vector-search-and-applications-record short_description: Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  description: Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  date: 2023-12-11T12:16:42.004Z author: Alyona Kavyerina featured: false tags: - vector search - webinar - news categories: - vector search - webinar - news --- <!--StartFragment--> Andrey Vasnetsov, Co-founder and CTO at Qdrant has shared about vector search and applications with Learn NLP Academy.  <iframe width="560" height="315" src="https://www.youtube.com/embed/MVUkbMYPYTE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> He covered the following topics: * Qdrant search engine and Quaterion similarity learning framework; * Similarity learning to multimodal settings; * Elastic search embeddings vs vector search engines; * Support for multiple embeddings; * Fundraising and VC discussions; * Vision for vector search evolution; * Finetuning for out of domain. <!--EndFragment-->
blog/vector-search-and-applications-by-andrey-vasnetsov-cto-at-qdrant.md
--- title: "IrisAgent and Qdrant: Redefining Customer Support with AI" draft: false slug: iris-agent-qdrant short_description: Pushing the boundaries of AI in customer support description: Learn how IrisAgent leverages Qdrant for RAG to automate support, and improve resolution times, transforming customer service preview_image: /case-studies/iris/irisagent-qdrant.png date: 2024-03-06T07:45:34-08:00 author: Manuel Meyer featured: false tags: - news - blog - irisagent - customer support weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- Artificial intelligence is evolving customer support, offering unprecedented capabilities for automating interactions, understanding user needs, and enhancing the overall customer experience. [IrisAgent](https://irisagent.com/), founded by former Google product manager [Palak Dalal Bhatia](https://www.linkedin.com/in/palakdalal/), demonstrates the concrete impact of AI on customer support with its AI-powered customer support automation platform. Bhatia describes IrisAgent as “the system of intelligence which sits on top of existing systems of records like support tickets, engineering bugs, sales data, or product data,” with the main objective of leveraging AI and generative AI, to automatically detect the intent and tags behind customer support tickets, reply to a large number of support tickets chats improve the time to resolution and increase the deflection rate of support teams. Ultimately, IrisAgent enables support teams to more with less and be more effective in helping customers. ## The Challenge Throughout her career Bhatia noticed a lot of manual and inefficient processes in support teams paired with information silos between important functions like customer support, product management, engineering teams, and sales teams. These silos typically prevent support teams from accurately solving customers’ pain points, as they are only able to access a fraction of the internal knowledge and don’t get the relevant information and insights that other teams have. IrisAgent is addressing these challenges with AI and GenAI by generating meaningful customer experience insights about what the root cause of specific customer escalations or churn. “The platform allows support teams to gather these cross-functional insights and connect them to a single view of customer problems,” Bhatia says. Additionally, IrisAgent facilitates the automation of mundane and repetitive support processes. In the past, these tasks were difficult to automate effectively due to the limitations of early AI technologies. Support functions often depended on rudimentary solutions like legacy decision trees, which suffered from a lack of scalability and robustness, primarily relying on simplistic keyword matching. However, advancements in AI and GenAI technologies have now enabled more sophisticated and efficient automation of these support processes. ## The Solution “IrisAgent provides a very holistic product profile, as we are the operating system for support teams,” Bhatia says. The platform includes features like omni-channel customer support automation, which integrates with other parts of the business, such as engineering or sales platforms, to really understand customer escalation points. Long before the advent of technologies such as ChatGPT, IrisAgeny had already been refining and advancing their AI and ML stack. 
This has enabled them to develop a comprehensive range of machine learning models, including both proprietary solutions and those built on cloud technologies. Through this advancement, IrisAgent was able to fine-tune on public and private customer data to achieve the level of accuracy that is needed to successfully deflect and resolve customer issues at scale. ![Iris GPT info](/blog/iris-agent-qdrant/iris_gpt.png) Since IrisAgent built out a lot of their AI-related processes in-house with proprietary technology, they wanted to find ways to augment these capabilities with RAG technologies and vector databases. This strategic move was aimed at abstracting much of the technical complexity, thereby simplifying the process for engineers and data scientists on the team to interact with data and develop a variety of solutions built on top of it. ![Quote from CEO of IrisAgent](/blog/iris-agent-qdrant/iris_ceo_quote.png) “We were looking at a lot of vector databases in the market and one of our core requirements was that the solution needed to be open source because we have a strong emphasis on data privacy and security,” Bhatia says. Also, performance played a key role for IrisAgent during their evaluation, as Bhatia mentions: “Despite it being a relatively new project at the time we tested Qdrant, the performance was really good.” Additional evaluation criteria were ease of deployment, future maintainability, and the quality of available documentation. Ultimately, IrisAgent decided to build with Qdrant as their vector database of choice, given these reasons: * **Open Source and Flexibility**: IrisAgent required a solution that was open source, to align with their data security needs and preference for self-hosting. Qdrant's open-source nature allowed IrisAgent to deploy it on their cloud infrastructure seamlessly. * **Performance**: Early on, IrisAgent recognized Qdrant's superior performance, despite its relative newness in the market. This performance aspect was crucial for handling large volumes of data efficiently. * **Ease of Use**: Qdrant's user-friendly SDKs and compatibility with major programming languages like Go and Python made it an ideal choice for IrisAgent's engineering team. Additionally, IrisAgent values Qdrant’s solid documentation, which is easy to follow. * **Maintainability**: IrisAgent prioritized future maintainability in their choice of Qdrant, notably valuing the robustness and efficiency Rust provides, ensuring a scalable and future-ready solution. ## Optimizing IrisAgent's AI Pipeline: The Evaluation and Integration of Qdrant IrisAgent utilizes comprehensive testing and sandbox environments, ensuring no customer data is used during the testing of new features. Initially, they deployed Qdrant in these environments to evaluate its performance, leveraging their own test data and employing Qdrant’s console and SDK features to conduct thorough data exploration and apply various filters. The primary languages used in these processes are Go, for its efficiency, and Python, for its strength in data science tasks. After successful testing, Qdrant's outputs are now integrated into IrisAgent’s AI pipeline, enhancing a suite of proprietary AI models designed for tasks such as detecting hallucinations and similarities, and classifying customer intents. With Qdrant, IrisAgent saw significant performance and quality gains for their RAG use cases. Beyond this, IrisAgent also performs fine-tuning further in the development process. 
Qdrant’s emphasis on open-source technology and support for main programming languages (Go and Python) ensures ease of use and compatibility with IrisAgent’s production environment. IrisAgent is deploying Qdrant on Google Cloud in order to fully leverage Google Cloud's robust infrastructure and innovative offerings. ![Iris agent flow chart](/blog/iris-agent-qdrant/iris_agent_flow_chart.png) ## Future of IrisAgent Looking ahead, IrisAgent is committed to pushing the boundaries of AI in customer support, with ambitious plans to evolve their product further. The cornerstone of this vision is a feature that will allow support teams to leverage historical support data more effectively, by automating the generation of knowledge base content to redefine how FAQs and product documentation are created. This strategic initiative aims not just to reduce manual effort but also to enrich the self-service capabilities of users. As IrisAgent continues to refine its AI algorithms and expand its training datasets, the goal is to significantly elevate the support experience, making it more seamless and intuitive for end-users.
blog/iris-agent-qdrant.md
--- draft: true title: "Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers" slug: pienso-case-study short_description: Case study description: Case study preview_image: /blog/from_cms/title.webp date: 2024-01-05T15:10:57.473Z author: Author featured: false --- <!--StartFragment--> # Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers <!--EndFragment--><!--StartFragment--> The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso’s low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable and cost-efficient high vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces. Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions. Strengthening LLM Performance Qdrant enhances the accuracy of large language models (LLMs) by offering an alternative to relying solely on patterns identified during the training phase. By integrating with Qdrant, Pienso will empower customer LLMs with dynamic long-term storage, which will ultimately enable them to generate concrete and factual responses. Qdrant effectively preserves the extensive context windows managed by advanced LLMs, allowing for a broader analysis of the conversation or document at hand. By leveraging this extended context, LLMs can achieve a more comprehensive understanding and produce contextually relevant outputs. ## [](/case-studies/pienso/#joint-dedication-to-scalability-efficiency-and-reliability)Joint Dedication to Scalability, Efficiency and Reliability > “Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso Pienso chose Qdrant for its exceptional LLM interoperability, recognizing the potential it offers in maximizing the power of large language models and interactive deep learning for large enterprises. Qdrant excels in efficient nearest neighbor search, which is an expensive and computationally demanding task. Our ability to store and search high-dimensional vectors with remarkable performance and precision will offer a significant peace of mind to Pienso’s customers. Through intelligent indexing and partitioning techniques, Qdrant will significantly boost the speed of these searches, accelerating both training and inference processes for users. ### [](/case-studies/pienso/#scalability-preparing-for-sustained-growth-in-data-volumes)Scalability: Preparing for Sustained Growth in Data Volumes Qdrant’s distributed deployment mode plays a vital role in empowering large enterprises dealing with massive data volumes. It ensures that increasing data volumes do not hinder performance but rather enrich the model’s capabilities, making scalability a seamless process. 
Moreover, Qdrant is well-suited for Pienso’s enterprise customers as it operates best on bare metal infrastructure, enabling them to maintain complete control over their data sovereignty and autonomous LLM regimes. This ensures that enterprises can maintain their full span of control while leveraging the scalability and performance benefits of Qdrant’s solution. ### Efficiency: Maximizing the Customer Value Proposition Qdrant’s storage efficiency delivers cost savings on hardware while ensuring a responsive system even with extensive data sets. In an independent benchmark stress test, Pienso discovered that Qdrant could efficiently store 128 million documents, consuming a mere 20.4GB of storage and only 1.25GB of memory. This storage efficiency not only minimizes hardware expenses for Pienso’s customers, but also ensures optimal performance, making Qdrant an ideal solution for managing large-scale data with ease and efficiency. ### Reliability: Fast Performance in a Secure Environment Qdrant’s utilization of Rust, coupled with its memmap storage and write-ahead logging, offers users a powerful combination of high-performance operations, robust data protection, and enhanced data safety measures. Our memmap storage feature offers Pienso fast performance comparable to in-memory storage. In the context of machine learning, where rapid data access and retrieval are crucial for training and inference tasks, this capability proves invaluable. Furthermore, our write-ahead logging (WAL) is critical to ensuring changes are logged before being applied to the database. This approach adds additional layers of data safety, further safeguarding the integrity of the stored information. > “We chose Qdrant because it’s fast to query, has a small memory footprint and allows for instantaneous setup of a new vector collection that is going to be queried. Other solutions we evaluated had long bootstrap times and also long collection initialization times {..} This partnership comes at a great time, because it allows Pienso to use Qdrant to its maximum potential, giving our customers a seamless experience while they explore and get meaningful insights about their data.” - Felipe Balduino Cassar, Senior Software Engineer, Pienso ## What’s Next? Pienso and Qdrant are dedicated to jointly developing the most reliable customer offering for the long term. Our partnership will deliver a combination of no-code/low-code interactive deep learning with efficient vector computation engineered for open source models and libraries. ### To learn more about how we plan on achieving this, join the founders for a [technical fireside chat at 09:30 PST Thursday, 20th July on Discord](https://discord.gg/Vnvg3fHE?event=1128331722270969909). ![](/blog/from_cms/founderschat.png)
blog/pienso-qdrant-future-proofing-generative-ai-for-enterprise-level-customers.md
--- draft: false title: When music just doesn't match our vibe, can AI help? - Filip Makraduli | Vector Space Talks slug: human-language-ai-models short_description: Filip Makraduli discusses using AI to create personalized music recommendations based on user mood and vibe descriptions. description: Filip Makraduli discusses using human language and AI to capture music vibes, encoding text with sentence transformers, generating recommendations through vector spaces, integrating Streamlit and Spotify API, and future improvements for AI-powered music recommendations. preview_image: /blog/from_cms/filip-makraduli-cropped.png date: 2024-01-09T10:44:20.559Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Database - LLM Recommendation System --- > *"Was it possible to somehow maybe find a way to transfer this feeling that we have this vibe and get the help of AI to understand what exactly we need at that moment in terms of songs?”*\ > -- Filip Makraduli > Imagine if the recommendation system could understand spoken instructions or hummed melodies. This would greatly impact the user experience and accuracy of the recommendations. Filip Makraduli, an electrical engineering graduate from Skopje, Macedonia, expanded his academic horizons with a Master's in Biomedical Data Science from Imperial College London. Currently a part of the Digital and Technology team at Marks and Spencer (M&S), he delves into retail data science, contributing to various ML and AI projects. His expertise spans causal ML, XGBoost models, NLP, and generative AI, with a current focus on improving outfit recommendation systems. Filip is not only professionally engaged but also passionate about tech startups, entrepreneurship, and ML research, evident in his interest in Qdrant, a startup he admires. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/6a517GfyUQLuXwFRxvwtp5?si=ywXPY_1RRU-qsMt9qrRS6w), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/WIBtZa7mcCs).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/WIBtZa7mcCs?si=szfeeuIAZ5LEgVI3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/When-music-just-doesnt-match-our-vibe--can-AI-help----Filip-Makraduli--Vector-Space-Talks-003-e2bskcq/a-aajslv4" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Take a look at the song vibe recommender system created by Filip Makraduli. Find out how it works! Filip discusses how AI can assist in finding the perfect songs for any mood. He takes us through his unique approach, using human language and AI models to capture the essence of a song and generate personalized recommendations. Here are 5 key things you'll learn from this video: 1. How AI can help us understand and capture the vibe and feeling of a song 2. The use of language to transfer the experience and feeling of a song 3. The role of data sets and descriptions in building unconventional song recommendation systems 4. The importance of encoding text and using sentence transformers to generate song embeddings 5. 
How vector spaces and cosine similarity search are used to generate song recommendations > Fun Fact: Filip actually created a Spotify playlist in real-time during the video, based on the vibe and mood Demetrios described, showing just how powerful and interactive this AI music recommendation system can be! > ## Show Notes: 01:25 Using AI to capture desired music vibes.\ 06:17 Faster and accurate model.\ 10:07 Sentence embedding model maps song descriptions.\ 14:32 Improving recommendations, user personalization in music.\ 15:49 Qdrant Python client creates user recommendations.\ 21:26 Questions about getting better embeddings for songs.\ 25:04 Contextual information for personalized walking recommendations.\ 26:00 Need predictions, voice input, and music options. ## More Quotes from Filip: *"When you log in with Spotify, you could get recommendations related to your taste on Spotify or on whatever app you listen your music on.”*\ -- Filip Makraduli *"Once the user writes a query and the query mentions, like some kind of a mood, for example, I feel happy and it's a sunny day and so on, you would get the similarity to the song that has this kind of language explanations and language intricacies in its description.”*\ -- Filip Makraduli *"I've explored Qdrant and as I said with Spotify web API there are a lot of things to be done with these specific user-created recommendations.”*\ -- Filip Makraduli ## Transcript: Demetrios: So for those who do not know, you are going to be talking to us about when the music we listen to does not match our vibe. And can we get AI to help us on that? And you're currently working as a data scientist at Marks and Spencer. I know you got some slides to share, right? So I'll let you share your screen. We can kick off the slides and then we'll have a little presentation and I'll be back on to answer some questions. And if Nils is still around at the end, which I don't think he will be able to hang around, but we'll see, we can pull him back on and have a little discussion at the end of the. Filip Makraduli: That's. That's great. All right, cool. I'll share my screen. Demetrios: Right on. Filip Makraduli: Yeah. Demetrios: There we go. Filip Makraduli: Yeah. So I had to use this slide because it was really well done as an introductory slide. Thank you. Yeah. Thank you also for making it so. Yeah, the idea was, and kind of the inspiration with music, we all listen to it. It's part of our lives in many ways. Sometimes it's like the gym. Filip Makraduli: We're ready to go, we're all hyped up, ready to do a workout, and then we click play. But the music and the playlist we get, it's just not what exactly we're looking for at that point. Or if we try to work for a few hours and try to get concentrated and try to code for hours, we can do the same and then we click play, but it's not what we're looking for again. So my inspiration was here. Was it possible to somehow maybe find a way to transfer this feeling that we have this vibe and get the help of AI to understand what exactly we need at that moment in terms of songs. So the obvious first question is how do we even capture a vibe and feel of a song? So initially, one approach that's popular and that works quite well is basically using a data set that has a lot of features. So Spotify has one data set like this and there are many other open source ones which include different features like loudness, key, tempo, different kinds of details related to the acoustics, the melody and so on. And this would work. 
Filip Makraduli: And this is kind of a way that a lot of song recommendation systems are built. However, what I wanted to do was maybe try a different approach in a way. Try to have a more unconventional recommender system, let's say. So what I did here was I tried to concentrate just on language. So my idea was, okay, is it possible to use human language to transfer this experience, this feeling that we have, and just use that and try to maybe encapsulate these features of songs. And instead of having a data set, just have descriptions of songs or sentences that explain different aspects of a song. So, as I said, this is a bit of a less traditional approach, and it's more of kind of testing the waters, but it worked to a decent extent. So what I did was, first I created a data set where I queried a large language model. Filip Makraduli: So I tried with llama and chat GPT, both. And the idea was to ask targeted questions, for example, like, what movie character does this song make you feel like? Or what's the tempo like? So, different questions that would help us understand maybe in what situation we would listen to this song, how will it make us feel like? And so on. And the idea was, as I said, again, to only use song names as queries for this large language model. So not have the full data sets with multiple features, but just song name, and kind of use this pretrained ability of all these LLMs to get this info that I was looking for. So an example of the generated data was this. So this song called Deep Sea Creature. And we have, like, a small description of the song. So it says a heavy, dark, mysterious vibe. Filip Makraduli: It will make you feel like you're descending into the unknown and so on. So a bit of a darker choice here, but that's the general idea. So trying to maybe do a bit of prompt engineering in a way to get the right features of a song, but through human language. So that was the first step. So the next step was how to encode this text. So all of this kind of querying reminds me of sentences. And this led me to sentence transformers and sentence Bird. And the usual issue with kind of doing this sentence similarity in the past was this, what I have highlighted here. Filip Makraduli: So this is actually a quote from a paper that Nils published a few years ago. So, basically, the way that this similarity was done was using cross encoders in the past, and that worked well, but it was really slow and unscalable. So Nils and his colleague created this kind of model, which helped scale this and make this a lot quicker, but also keep a lot of the accuracy. So Bert and Roberta were used, but they were not, as I said, quite scalable or useful for larger applications. So that's how sentence Bert was created. So the idea here was that there would be, like, a Siamese network that would train the model so that there could be, like, two bird models, and then the training would be done using this like zero, one and two tags, where kind of the sentences would be compared, whether there is entailment, neutrality or contradiction. So how similar these sentences are to each other. And by training a model like this and doing mean pooling, in the end, the model performed quite well and was able to kind of encapsulate this language intricacies of sentences. Filip Makraduli: So I decided to use and try out sentence transformers for my use case, and that was the encoding bit. So we have the model, we encode the text, and we have the embedding. 
So now the question is, how do we actually generate the recommendations? How is the similarity performed? So the similarity was done using vector spaces and cosine similarity search here. There were multiple ways of doing this. First, I tried things with a flat index and I tried Qdrant and I tried FIS. So I've worked with both. And with the flat index, it was good. It works well. Filip Makraduli: It's quick for small number of examples, small number of songs, but there is an issue when scaling. So once the vector indices get bigger, there might be a problem. So one popular kind of index architecture is this one here on the left. So hierarchical, navigable, small world graphs. So the idea here is that you wouldn't have to kind of go through all of the examples, but search through the examples in different layers, so that the search for similarities quicker. And this is a really popular approach. And Qdrant have done a really good customizable version of this, which is quite useful, I think, for very larger scales of application. And this graph here illustrates kind of well what the idea is. Filip Makraduli: So there is the sentence in this example. It's like a stripped striped blue shirt made from cotton, and then there is the network or the encoder. So in my case, this sentence is the song description, the neural network is the sentence transformer in my case. And then this embeddings are generated, which are then mapped into this vector space, and then this vector space is queryed and the cosine similarity is found, and the recommendations are generated in this way, so that once the user writes a query and the query mentions, like some kind of a mood, for example, I feel happy and it's a sunny day and so on, you would get the similarity to the song that has this kind of language explanations and language intricacies in its description. And there are a lot of ways of doing this, as Nils mentioned, especially with different embedding models and doing context related search. So this is an interesting area for improvement, even in my use case. And the quick screenshot looks like this. So for example, the mood that the user wrote, it's a bit rainy, but I feel like I need a long walk in London. Filip Makraduli: And these are the top five suggested songs. This is also available on Streamlit. In the end I'll share links of everything and also after that you can click create a Spotify playlist and this playlist will be saved in your Spotify account. As you can see here, it says playlist generated earlier today. So yeah, I tried this, it worked. I will try live demo bit later. Hopefully it works again. But this is in beta currently so you won't be able to try it at home because Spotify needs to approve my app first and go through that process so that then I can do this part fully. Filip Makraduli: And the front end bit, as I mentioned, was done in Streamlit. So why Streamlit? I like the caching bit. So of course this general part where it's really easy and quick to do a lot of data dashboarding and data applications to test out models, that's quite nice. But this caching options that they have help a lot with like loading models from hugging face or if you're loading models from somewhere, or if you're loading different databases. So if you're combining models and data. In my case I had a binary file of the index and also the model. So it was quite useful and quick to do these things and to be able to try things out quickly. 
So this is kind of the step by step outline of everything I've mentioned and the whole project. Filip Makraduli: So the first step is encoding this descriptions into embeddings. Then this vector embeddings are mapped into a vector space. Examples here with how I've used Qdrant for this, which was quite nice. I feel like the developer experience is really good for scalable purposes. It's really useful. So if the number of songs keep increasing it's quite good. And the query and more similar embeddings. The front is done with Streamlit and the Spotify API to save the playlists on the Spotify account. Filip Makraduli: All of these steps can be improved and tweaked in certain ways and I will talk a bit about that too. So a lot more to be done. So now there are 2000 songs, but as I've mentioned, in this vector space, the more songs that are there, the more representative this recommendations would be. So this is something I'm currently exploring and doing, generating, filtering and user specific personalization. So once maybe you log in with Spotify, you could get recommendations related to your taste on Spotify or on whatever app you listen your music on. And referring to the talk that Niels had a lot of potential for better models and embeddings and embedding models. So also the contrastive learning bits or the contents aware querying, that could be useful too. And a vector database because currently I'm using a binary file. Filip Makraduli: But I've explored Qdrant and as I said with Spotify web API there are a lot of things to be done with this specific user created recommendations. So with Qdrant, the Python client is quite good. The getting started helps a lot. So I wrote a bit of code. I think for production use cases it's really great. So for my use case here, as you can see on the right, I just read the text from a column and then I encode with the model. So the sentence transformer is the model that I encode with. And there is this collections that they're so called in Qdrant that are kind of like this vector spaces that you can create and you can also do different things with them, which I think one of the more helpful ones is the payload one and the batch one. Filip Makraduli: So you can batch things in terms of how many vectors will go to the server per single request. And also the payload helps if you want to add extra context. So maybe I want to filter by genres. I can add useful information to the vector embedding. So this is quite a cool feature that I'm planning on using. And another potential way of doing this and kind of combining things is using audio waves too, lyrics and descriptions and combining all of this as embeddings and then going through the similar process. So that's something that I'm looking to do also. And yeah, you also might have noticed that I'm a data scientist at Marks and Spencer and I just wanted to say that there are a lot of interesting ML and data related stuff going on there. Filip Makraduli: So a lot of teams that work on very interesting use cases, like in recommender systems, personalization of offers different stuff about forecasting. There is a lot going on with causal ML and yeah, the digital and tech department is quite well developed and I think it's a fun place to explore if you're interested in retail data science use cases. So yeah, thank you for your attention. I'll try the demo. So this is the QR code with the repo and all the useful links. You can contact me on LinkedIn. 
This is the screenshot of the repo and you have the link in the QR code. The name of the repo is song Vibe. Filip Makraduli: A friend of mine said that that wasn't a great name of a repo. Maybe he was right. But yeah, here we are. I'll just try to do the demo quickly and then we can step back to the. Demetrios: I love dude, I got to say, when you said you can just automatically create the Spotify playlist, that made me. Filip Makraduli: Go like, oh, yes, let's see if it works locally. Do you have any suggestion what mood are you in? Demetrios: I was hoping you would ask me, man. I am in a bit of an esoteric mood and I want female kind of like Gaelic voices, but not Gaelic music, just Gaelic voices and lots of harmonies, heavy harmonies. Filip Makraduli: Also. Demetrios: You didn't realize you're asking a musician. Let's see what we got. Filip Makraduli: Let's see if this works in 2000 songs. Okay, so these are the results. Okay, yeah, you'd have to playlist. Let's see. Demetrios: Yeah, can you make the playlist public and then I'll just go find it right now. Here we go. Filip Makraduli: Let's see. Okay, yeah, open in. Spotify playlist created now. Okay, cool. I can also rename it. What do you want to name the playlist? Demetrios: Esoteric Gaelic Harmonies. That's what I think we got to go with AI. Well, I mean, maybe we could just put maybe in parenthes. Filip Makraduli: Yeah. So I'll share this later with you. Excellent. But yeah, basically that was it. Demetrios: It worked. Ten out of ten for it. Working. That is also very cool. Filip Makraduli: Live demo working. That's good. So now doing the infinite screen, which I have stopped now. Demetrios: Yeah, classic, dude. Well, I've got some questions coming through and the chat has been active too. So I'll ask a few of the questions in the chat for a minute. But before I ask those questions in the chat, one thing that I was thinking about when you were talking about how to, like, the next step is getting better embeddings. And so was there a reason that you just went with the song title and then did you check, you said there was 2000 songs or how many songs? So did you do anything to check the output of the descriptions of these songs? Filip Makraduli: Yeah, so I didn't do like a systematic testing in terms of like, oh, yeah, the output is structured in this way. But yeah, I checked it roughly went through a few songs and they seemed like, I mean, of course you could add more info, but they seemed okay. So I was like, okay, let me try kind of whether this works. And, yeah, the descriptions were nice. Demetrios: Awesome. Yeah. So that kind of goes into one of the questions that mornie's asking. Let me see. Are you going to team this up with other methods, like collaborative filtering, content embeddings and stuff like that. Filip Makraduli: Yeah, I was thinking about this different kind of styles, but I feel like I want to first try different things related to embeddings and language just because I feel like with the other things, with the other ways of doing these recommendations, other companies and other solutions have done a really great job there. So I wanted to try something different to see whether that could work as well or maybe to a similar degree. So that's why I went towards this approach rather than collaborative filtering. Demetrios: Yeah, it kind of felt like you wanted to test the boundaries and see if something like this, which seems a little far fetched, is actually possible. And it seems like I would give it a yes. 
Filip Makraduli: It wasn't that far fetched, actually, once you see it working. Demetrios: Yeah, totally. Another question is coming through is asking, is it possible to merge the current mood so the vibe that you're looking for with your musical preferences? Filip Makraduli: Yeah. So I was thinking of that when we're doing this, the playlist creation that I did for you, there is a way to get your top ten songs or your other playlists and so on from Spotify. So my idea of kind of capturing this added element was through Spotify like that. But of course it could be that you could enter that in your own profile in the app or so on. So one idea would be how would you capture that preferences of the user once you have the user there. So you'd need some data of the preferences of the user. So that's the problem. But of course it is possible. Demetrios: You know what I'd lOve? Like in your example, you put that, I feel like going for a walk or it's raining, but I still feel like going through for a long walk in London. Right. You could probably just get that information from me, like what is the weather around me, where am I located? All that kind of stuff. So I don't have to give you that context. You just add those kind of contextual things, especially weather. And I get the feeling that that would be another unlock too. Unless you're like, you are the exact opposite of a sunny day on a sunny day. And it's like, why does it keep playing this happy music? I told you I was sad. Filip Makraduli: Yeah. You're predicting not just the songs, but the mood also. Demetrios: Yeah, totally. Filip Makraduli: You don't have to type anything, just open the website and you get everything. Demetrios: Exactly. Yeah. Give me a few predictions just right off the bat and then maybe later we can figure it out. The other thing that I was thinking, could be a nice add on. I mean, the infinite feature request, I don't think you realized you were going to get so many feature requests from me, but let it be known that if you come on here and I like your app, you'll probably get some feature requests from me. So I was thinking about how it would be great if I could just talk to it instead of typing it in, right? And I could just explain my mood or explain my feeling and even top that off with a few melodies that are going on in my head, or a few singers or songwriters or songs that I really want, something like this, but not this song, and then also add that kind of thing, do the. Filip Makraduli: Humming sound a bit and you play your melody and then you get. Demetrios: Except I hum out of tune, so I don't think that would work very well. I get a lot of random songs, that's for sure. It would probably be just about as accurate as your recommendation engine is right now. Yeah. Well, this is awesome, man. I really appreciate you coming on here. I'm just going to make sure that there's no other questions that came through the chat. No, looks like we're good. Demetrios: And for everyone out there that is listening, if you want to come on and talk about anything cool that you have built with Qdrant, or how you're using Qdrant, or different ways that you would like Qdrant to be better, or things that you enjoy, whatever it may be, we'd love to have you on here. And I think that is it. We're going to call it a day for the vector space talks, number two. We'll see you all later. Philip, thanks so much for coming on. It's.
blog/when-music-just-doesnt-match-our-vibe-can-ai-help-filip-makraduli-vector-space-talks-003.md
--- draft: false title: "Kern AI & Qdrant: Precision AI Solutions for Finance and Insurance" short_description: "Transforming customer service in finance and insurance with vector search-based retrieval.</p>" description: "Revolutionizing customer service in finance and insurance by leveraging vector search for faster responses and improved operational efficiency." preview_image: /blog/case-study-kern/preview.png social_preview_image: /blog/case-study-kern/preview.png date: 2024-08-28T00:02:00Z author: Qdrant featured: false tags: - Kern - Vector Search - AI-Driven Insights - Johannes Hötter - Data Analysis - Markel Insurance --- ![kern-case-study](/blog/case-study-kern/kern-case-study.png) ## About Kern AI [Kern AI](https://kern.ai/) specializes in data-centric AI. Originally an AI consulting firm, the team led by Co-Founder and CEO Johannes Hötter quickly realized that developers spend 80% of their time reviewing data instead of focusing on model development. This inefficiency significantly reduces the speed of development and adoption of AI. To tackle this challenge, Kern AI developed a low-code platform that enables developers to quickly analyze their datasets and identify outliers using vector search. This innovation led to enhanced data accuracy and streamlined workflows for the rapid deployment of AI applications. With the rise of ChatGPT, Kern AI expanded its platform to support the quick development of accurate and secure Generative AI by integrating large language models (LLMs) like GPT, tailoring solutions specifically for the financial services sector. Kern AI’s solution enhances the reliability of any LLM by modeling and integrating company data in a way LLMs can understand, offering a platform with leading data modeling capabilities. ## The Challenge Kern AI has partnered with leading insurers to efficiently streamline the process of managing complex customer queries within customer service teams, reducing the time and effort required. Customer inquiries are often complex, and support teams spend significant time locating and interpreting relevant sections in insurance contracts. This process leads to delays in responses and can negatively impact customer satisfaction. To tackle this, Kern AI developed an internal AI chatbot for first-level support teams. Their platform helps data science teams improve data foundations to expedite application production. By using embeddings to identify relevant data points and outliers, Kern AI ensures more efficient and accurate data handling. To avoid being restricted to a single embedding model, they experimented with various models, including sentiment embeddings, leading them to discover Qdrant. ![kern-user-interface](/blog/case-study-kern/kern-user-interface.png) *Kern AI Refinery, is an open-source tool to scale, assess and maintain natural language data.* The impact of their solution is evident in the case of [Markel Insurance SE](https://www.markel.com/), which reduced the average response times from five minutes to under 30 seconds per customer query. This change significantly enhanced customer experience and reduced the support team's workload. Johannes Hötter notes, "Our solution has revolutionized how first-level support operates in the insurance industry, drastically improving efficiency and customer satisfaction." ## The Solution Kern AI discovered Qdrant and was impressed by its interactive Discord community, which highlighted the active support and continuous improvements of the platform. 
Qdrant was the first vector database the team used, and after testing other alternatives, they chose Qdrant for several reasons: - **Multi-vector Storage**: This feature was crucial as it allowed the team to store and manage different search indexes. Given that no single embedding fits all use cases, this capability brought essential diversity to their embeddings, enabling more flexible and robust data handling. - **Easy Setup**: Qdrant's straightforward setup process enabled Kern AI to quickly integrate and start utilizing the database without extensive overhead, which was critical for maintaining development momentum. - **Open Source**: The open-source nature of Qdrant aligned with Kern AI's own product development philosophy. This allowed for greater customization and integration into their existing open-source projects. - **Rapid Progress**: Qdrant's swift advancements and frequent updates ensured that Kern AI could rely on continuous improvements and cutting-edge features to keep their solutions competitive. - **Multi-vector Search**: Allowed Kern AI to perform complex queries across different embeddings simultaneously, enhancing the depth and accuracy of their search results. - **Hybrid Search/Filters**: Enabled the combination of traditional keyword searches with vector searches, allowing for more nuanced and precise data retrieval. Kern AI uses Qdrant's open-source, on-premise solution for both their open-source project and their commercial end-to-end framework. This framework, focused on the financial and insurance markets, is similar to LangChain or LlamaIndex but tailored to the industry-specific needs. ![kern-data-retrieval](/blog/case-study-kern/kern-data-retrieval.png) *Configuring data retrieval in Kern AI: Fine-tuning search inputs and metadata for optimized information extraction.* ## The Results Kern AI's primary use case focuses on enhancing customer service with extreme precision. Leveraging Qdrant's advanced vector search capabilities, Kern AI consistently maintains hallucination rates under 1%. This exceptional accuracy allows them to build the most precise RAG (Retrieval-Augmented Generation) chatbot for financial services. Key Achievements: - **<1% Hallucination Rate**: Ensures the highest level of accuracy and reliability in their chatbot solutions for the financial and insurance sector. - **Reduced Customer Service Response Times**: Using Kern AI's solution, Markel Insurance SE reduced response times from five minutes to under 30 seconds, significantly improving customer experience and operational efficiency. By utilizing Qdrant, Kern AI effectively supports various use cases in financial services, such as: - **Claims Management**: Streamlining the claims process by quickly identifying relevant data points. - **Similarity Search**: Enhancing incident handling by finding similar cases to improve decision-making quality. ## Outlook Kern AI plans to expand its use of Qdrant to support both brownfield and greenfield use cases across the financial and insurance industry.
blog/case-study-kern.md
--- title: "Qdrant vs Pinecone: Vector Databases for AI Apps" draft: false short_description: "Highlighting performance, features, and suitability for various use cases." description: "In this detailed Qdrant vs Pinecone comparison, we share the top features to determine the best vector database for your AI applications." preview_image: /blog/comparing-qdrant-vs-pinecone-vector-databases/social_preview.png social_preview_image: /blog/comparing-qdrant-vs-pinecone-vector-databases/social_preview.png aliases: /documentation/overview/qdrant-alternatives/ date: 2024-02-25T00:00:00-08:00 author: Qdrant Team featured: false tags: - vector search - role based access control - byte vectors - binary vectors - quantization - new features --- # Qdrant vs Pinecone: An Analysis of Vector Databases for AI Applications Data forms the foundation upon which AI applications are built. Data can exist in both structured and unstructured formats. Structured data typically has well-defined schemas or inherent relationships. However, unstructured data, such as text, image, audio, or video, must first be converted into numerical representations known as [vector embeddings](https://qdrant.tech/articles/what-are-embeddings/). These embeddings encapsulate the semantic meaning or features of unstructured data and are in the form of high-dimensional vectors. Traditional databases, while effective at handling structured data, fall short when dealing with high-dimensional unstructured data, which are increasingly the focal point of modern AI applications. Key reasons include: - **Indexing Limitations**: Database indexing methods like B-Trees or hash indexes, typically used in relational databases, are inefficient for high-dimensional data and show poor query performance. - **Curse of Dimensionality**: As dimensions increase, data points become sparse, and distance metrics like Euclidean distance lose their effectiveness, leading to poor search query performance. - **Lack of Specialized Algorithms**: Traditional databases do not incorporate advanced algorithms designed to handle high-dimensional data, resulting in slow query processing times. - **Scalability Challenges**: Managing and querying high-dimensional [vectors](https://qdrant.tech/documentation/concepts/vectors/) require optimized data structures, which traditional databases are not built to handle. - **Storage Inefficiency**: Traditional databases are not optimized for efficiently storing large volumes of high-dimensional data, facing significant challenges in managing space complexity and [retrieval efficiency](https://qdrant.tech/documentation/tutorials/retrieval-quality/). Vector databases address these challenges by efficiently storing and querying high-dimensional vectors. They offer features such as high-dimensional vector storage and retrieval, efficient similarity search, sophisticated indexing algorithms, advanced compression techniques, and integration with various machine learning frameworks. Due to their capabilities, vector databases are now a cornerstone of modern AI and are becoming pivotal in building applications that leverage similarity search, recommendation systems, natural language processing, computer vision, image recognition, speech recognition, and more. Over the past few years, several vector database solutions have emerged – the two leading ones being Qdrant and Pinecone, among others. Both are powerful vector database solutions with unique strengths. 
However, they differ greatly in their principles and approach, and the capabilities they offer to developers. In this article, we’ll examine both solutions and discuss the factors you need to consider when choosing amongst the two. Let’s dive in! ## Exploring Qdrant Vector Database: Features and Capabilities Qdrant is a high-performance, open-source vector similarity search engine built with [Rust](https://qdrant.tech/articles/why-rust/), designed to handle the demands of large-scale AI applications with exceptional speed and reliability. Founded in 2021, Qdrant's mission is to "build the most efficient, scalable, and high-performance vector database in the market." This mission is reflected in its architecture and feature set. Qdrant is highly scalable and performant: it can handle billions of vectors efficiently and with [minimal latency](https://qdrant.tech/benchmarks/). Its advanced vector indexing, search, and retrieval capabilities make it ideal for applications that require fast and accurate search results. It supports vertical and horizontal scaling, advanced compression techniques, highly flexible deployment options – including cloud-native, [hybrid cloud](https://qdrant.tech/documentation/hybrid-cloud/), and private cloud solutions – and powerful security features. ### Key Features of Qdrant Vector Database - **Advanced Similarity Search:** Qdrant supports various similarity [search](https://qdrant.tech/documentation/concepts/search/) metrics like dot product, cosine similarity, Euclidean distance, and Manhattan distance. You can store additional information along with vectors, known as [payload](https://qdrant.tech/documentation/concepts/payload/) in Qdrant terminology. A payload is any JSON formatted data. - **Built Using Rust:** Qdrant is built with Rust, and leverages its performance and efficiency. Rust is famed for its [memory safety](https://arxiv.org/abs/2206.05503) without the overhead of a garbage collector, and rivals C and C++ in speed. - **Scaling and Multitenancy**: Qdrant supports both vertical and horizontal scaling and uses the Raft consensus protocol for [distributed deployments](https://qdrant.tech/documentation/guides/distributed_deployment/). Developers can run Qdrant clusters with replicas and shards, and seamlessly scale to handle large datasets. Qdrant also supports [multitenancy](https://qdrant.tech/documentation/guides/multiple-partitions/) where developers can create single collections and partition them using payload. - **Payload Indexing and Filtering:** Just as Qdrant allows attaching any JSON payload to vectors, it also supports payload indexing and [filtering](https://qdrant.tech/documentation/concepts/filtering/) with a wide range of data types and query conditions, including keyword matching, full-text filtering, numerical ranges, nested object filters, and [geo](https://qdrant.tech/documentation/concepts/filtering/#geo)filtering. - **Hybrid Search with Sparse Vectors:** Qdrant supports both dense and [sparse vectors](https://qdrant.tech/articles/sparse-vectors/), thereby enabling hybrid search capabilities. Sparse vectors are numerical representations of data where most of the elements are zero. Developers can combine search results from dense and sparse vectors, where sparse vectors ensure that results containing the specific keywords are returned and dense vectors identify semantically similar results. 
- **Built-In Vector Quantization:** Qdrant offers three different [quantization](https://qdrant.tech/documentation/guides/quantization/) options to developers to optimize resource usage. Scalar quantization balances accuracy, speed, and compression by converting 32-bit floats to 8-bit integers. Binary quantization, the fastest method, significantly reduces memory usage. Product quantization offers the highest compression, and is perfect for memory-constrained scenarios. - **Flexible Deployment Options:** Qdrant offers a range of deployment options. Developers can easily set up Qdrant (or Qdrant cluster) [locally](https://qdrant.tech/documentation/quick-start/#download-and-run) using Docker for free. [Qdrant Cloud](https://qdrant.tech/cloud/), on the other hand, is a scalable, managed solution that provides easy access with flexible pricing. Additionally, Qdrant offers [Hybrid Cloud](https://qdrant.tech/hybrid-cloud/) which integrates Kubernetes clusters from cloud, on-premises, or edge, into an enterprise-grade managed service. - **Security through API Keys, JWT and RBAC:** Qdrant offers developers various ways to [secure](https://qdrant.tech/documentation/guides/security/) their instances. For simple authentication, developers can use API keys (including Read Only API keys). For more granular access control, it offers JSON Web Tokens (JWT) and the ability to build Role-Based Access Control (RBAC). TLS can be enabled to secure connections. Qdrant is also [SOC 2 Type II](https://qdrant.tech/blog/qdrant-soc2-type2-audit/) certified. Additionally, Qdrant integrates seamlessly with popular machine learning frameworks such as [LangChain](https://qdrant.tech/blog/using-qdrant-and-langchain/), LlamaIndex, and Haystack; and Qdrant Hybrid Cloud integrates seamlessly with AWS, DigitalOcean, Google Cloud, Linode, Oracle Cloud, OpenShift, and Azure, among others. By focusing on performance, scalability and efficiency, Qdrant has positioned itself as a leading solution for enterprise-grade vector similarity search, capable of meeting the growing demands of modern AI applications. However, how does it compare with Pinecone? Let’s take a look. ## Exploring Pinecone Vector Database: Key Features and Capabilities An alternative to Qdrant, Pinecone provides a fully managed vector database that abstracts the complexities of infrastructure and scaling. The company’s founding principle, when it started in 2019, was to make Pinecone “accessible to engineering teams of all sizes and levels of AI expertise.” Similarly to Qdrant, Pinecone offers advanced vector search and retrieval capabilities. There are two different ways you can use Pinecone: using its serverless architecture or its pod architecture. Pinecone also supports advanced similarity search metrics such as dot product, Euclidean distance, and cosine similarity. Using its pod architecture, you can leverage horizontal or vertical scaling. Finally, Pinecone offers privacy and security features such as Role-Based Access Control (RBAC) and end-to-end encryption, including encryption in transit and at rest. ### Key Features of Pinecone Vector Database - **Fully Managed Service:** Pinecone offers a fully managed SaaS-only service. It handles the complexities of infrastructure management such as scaling, performance optimization, and maintenance. Pinecone is designed for developers who want to focus on building AI applications without worrying about the underlying database infrastructure. 
- **Serverless and Pod Architecture:** Pinecone offers two different architecture options to run their vector database - the serverless architecture and the pod architecture. Serverless architecture runs as a managed service on the AWS cloud platform, and allows automatic scaling based on workload. Pod architecture, on the other hand, provides pre-configured hardware units (pods) for hosting and executing services, and supports horizontal and vertical scaling. Pods can be run on AWS, GCP, or Azure. - **Advanced Similarity Search:** Pinecone supports three different similarity search metrics – dot product, Euclidean distance, and cosine similarity. It currently does not support Manhattan distance metric. - **Privacy and Security Features:** Pinecone offers Role-Based Access Control (RBAC), end-to-end encryption, and compliance with SOC 2 Type II and GDPR. Pinecone allows for the creation of “organization”, which, in turn, has “projects” and “members” with single sign-on (SSO) and access control. - **Hybrid Search and Sparse Vectors**: Pinecone supports both sparse and dense vectors, and allows hybrid search. This gives developers the ability to combine semantic and keyword search in a single query. - **Metadata Filtering**: Pinecone allows attaching key-value metadata to vectors in an index, which can later be queried. Semantic search using metadata filters retrieve exactly the results that match the filters. Pinecone’s fully managed service makes it a compelling choice for developers who’re looking for a vector database that comes without the headache of infrastructure management. ## Pinecone vs Qdrant: Key Differences and Use Cases Qdrant and Pinecone are both robust vector database solutions, but they differ significantly in their design philosophy, deployment options, and technical capabilities. Qdrant is an open-source vector database that gives control to the developer. It can be run locally, on-prem, in the cloud, or as a managed service, and it even offers a hybrid cloud option for enterprises. This makes Qdrant suitable for a wide range of environments, from development to enterprise settings. It supports multiple programming languages and offers advanced features like customizable distance metrics, payload filtering, and [integration with popular AI frameworks](https://qdrant.tech/documentation/frameworks/). Pinecone, on the other hand, is a fully managed, SaaS-only solution designed to abstract the complexities of infrastructure management. It provides a serverless architecture for automatic scaling and a pod architecture for resource customization. Pinecone focuses on ease of use and high performance, offering built-in security measures, compliance certifications, and a user-friendly API. However, it has some limitations in terms of metadata handling and flexibility compared to Qdrant. 
| Aspect | Qdrant | Pinecone |
| --- | --- | --- |
| Deployment Modes | Local, on-premises, cloud | SaaS-only |
| Supported Languages | Python, JavaScript/TypeScript, Rust, Go, Java | Python, JavaScript/TypeScript, Java, Go |
| Similarity Search Metrics | Dot Product, Cosine Similarity, Euclidean Distance, Manhattan Distance | Dot Product, Cosine Similarity, Euclidean Distance |
| Hybrid Search | Highly customizable Hybrid search by combining Sparse and Dense Vectors, with support for separate indices within the same collection | Supports Hybrid search with a single sparse-dense index |
| Vector Payload | Accepts any JSON object as payload, supports NULL values, geolocation, and multiple vectors per point | Flat metadata structure, does not support NULL values, geolocation, or multiple vectors per point |
| Scalability | Vertical and horizontal scaling, distributed deployment with Raft consensus | Serverless architecture and pod architecture for horizontal and vertical scaling |
| Performance | Efficient indexing, low latency, high throughput, customizable distance metrics | High throughput, low latency, gRPC client for higher upsert speeds |
| Security | Flexible, environment-specific configurations, API key authentication in Qdrant Cloud, JWT and RBAC, SOC 2 Type II certification | Built-in RBAC, end-to-end encryption, SOC 2 Type II certification |

## Choosing the Right Vector Database: Factors to Consider

When choosing between Qdrant and Pinecone, you need to consider some key factors that may impact your project long-term. Below are some primary considerations to help guide your decision:

### 1. Deployment Flexibility

**Qdrant** offers multiple deployment options, including a local Docker node or cluster, Qdrant Cloud, and Hybrid Cloud. This allows you to choose an environment that best suits your project. You can start with a local Docker node for development, then add nodes to your cluster, and later switch to a Hybrid Cloud solution.

**Pinecone**, on the other hand, is a fully managed SaaS solution. To use Pinecone, you connect your development environment to its cloud service. It abstracts the complexities of infrastructure management, making it easier to deploy, but it is also less flexible in terms of deployment options compared to Qdrant.

### 2. Scalability Requirements

**Qdrant** supports both vertical and horizontal scaling and is suitable for deployments of all scales. You can run it as a single Docker node, a large cluster, or a Hybrid cloud, depending on the size of your dataset. Qdrant's architecture allows for distributed deployment with replicas and shards, and scales extremely well to billions of vectors with minimal latency.

**Pinecone** provides a serverless architecture and a pod architecture that automatically scales based on workload. Serverless architecture removes the need for any manual intervention, whereas pod architecture provides a bit more control. Since Pinecone is a managed SaaS-only solution, your application's scalability is tied to both Pinecone's service and the underlying cloud provider in use.

### 3. Performance and Throughput

**Qdrant** excels in providing different performance profiles tailored to specific use cases. It offers efficient vector and payload indexing, low-latency queries, optimizers, and high throughput, along with multiple options for quantization to further optimize performance.
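To make the quantization point concrete, here is a minimal sketch of enabling scalar quantization with the Qdrant Python client when creating a collection. The collection name and vector size below are placeholders rather than recommendations:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(host="localhost", port=6333)

# Store original vectors on disk and keep compressed int8 copies in RAM,
# trading a small amount of accuracy for lower memory use and faster search.
client.create_collection(
    collection_name="documents",  # placeholder collection name
    vectors_config=models.VectorParams(
        size=768,  # placeholder dimensionality
        distance=models.Distance.COSINE,
        on_disk=True,
    ),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,
        )
    ),
)
```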
**Pinecone** recommends increasing the number of replicas to boost the throughput of pod-based indexes. For serverless indexes, Pinecone automatically handles scaling and throughput. To decrease latency, Pinecone suggests using namespaces to partition records within a single index. However, since Pinecone is a managed SaaS-only solution, developer control over performance and throughput is limited.

### 4. Security Considerations

**Qdrant** allows for tailored security configurations specific to your deployment environment. It supports API keys (including read-only API keys), JWT authentication, and TLS encryption for connections. Developers can build Role-Based Access Control (RBAC) according to their application needs in a completely custom manner. Additionally, Qdrant's deployment flexibility allows organizations that need to adhere to stringent data laws to deploy it within their infrastructure, ensuring compliance with data sovereignty regulations.

**Pinecone** provides comprehensive built-in security features in its managed SaaS solution, including Role-Based Access Control (RBAC) and end-to-end encryption. Its compliance with SOC 2 Type II and GDPR-readiness makes it a good choice for applications requiring standardized security measures.

### 5. Pricing

**Qdrant** can be self-hosted locally (single node or a cluster) with a single Docker command. With its SaaS option, it offers a free tier in Qdrant Cloud sufficient for around 1M 768-dimensional vectors, without any limitation on the number of collections it is used for. This allows developers to build multiple demos without limitations. For more pricing information, check [here](https://qdrant.tech/pricing/).

**Pinecone** cannot be self-hosted, and signing up for the SaaS solution is the only option. Pinecone has a free tier that supports approximately 300K 1536-dimensional embeddings. For Pinecone's pricing details, check their pricing page.

### Qdrant vs Pinecone: Complete Summary

The choice between Qdrant and Pinecone hinges on your specific needs:

- **Qdrant** is ideal for organizations that require flexible deployment options, extensive scalability, and customization. It is also suitable for projects needing deep integration with existing security infrastructure and those looking for a cost-effective, self-hosted solution.
- **Pinecone** is suitable for teams seeking a fully managed solution with robust built-in security features and standardized compliance. It is suitable for cloud-native applications and dynamic environments where automatic scaling and low operational overhead are critical.

By carefully considering these factors, you can select the vector database that best aligns with your technical requirements and strategic goals.

## Choosing the Best Vector Database for Your AI Application

Selecting the best vector database for your AI project depends on several factors, including your deployment preferences, scalability needs, performance requirements, and security considerations.

- **Choose Qdrant if**:
  - You require flexible deployment options (local, on-premises, managed SaaS solution, or a Hybrid Cloud).
  - You need extensive customization and control over your vector database.
  - Your project needs to adhere to data security and data sovereignty laws specific to your geography.
  - Your project would benefit from advanced search capabilities, including complex payload filtering and geolocation support.
  - Cost efficiency and the ability to self-host are significant considerations.
- **Choose Pinecone if**:
  - You prefer a fully managed SaaS solution that abstracts the complexities of infrastructure management.
  - You need a serverless architecture that automatically adjusts to varying workloads.
  - Built-in security features and compliance certifications (SOC 2 Type II, GDPR) are sufficient for your application.
  - You want to build your project with minimal operational overhead.

For maximum control, security, and cost-efficiency, choose Qdrant. It offers flexible deployment options, customizability, and advanced search features, and is ideal for building data sovereign AI applications. However, if you prioritize ease of use and automatic scaling with built-in security, Pinecone's fully managed SaaS solution with a serverless architecture is the way to go.

## Next Steps

Qdrant is one of the leading Pinecone alternatives in the market. For developers who seek control of their vector database, Qdrant offers the highest level of customization, flexible deployment options, and advanced security features.

To get started with Qdrant, explore our [documentation](https://qdrant.tech/documentation/), hop on to our [Discord](https://qdrant.to/discord) channel, sign up for [Qdrant cloud](https://cloud.qdrant.io/) (or [Hybrid cloud](https://qdrant.tech/hybrid-cloud/)), or [get in touch](https://qdrant.tech/contact-us/) with us today.

References:

- [Pinecone Documentation](https://docs.pinecone.io/)
- [Qdrant Documentation](https://qdrant.tech/documentation/)
- If you aren't ready yet, [try out Qdrant locally](/documentation/quick-start/) or sign up for [Qdrant Cloud](https://cloud.qdrant.io/).
- For more basic information on Qdrant read our [Overview](/documentation/overview/) section or learn more about Qdrant Cloud's [Free Tier](/documentation/cloud/).
- If ready to migrate, please consult our [Comprehensive Guide](https://github.com/NirantK/qdrant_tools) for further details on migration steps.
blog/comparing-qdrant-vs-pinecone-vector-databases.md
---
draft: true
title: Neural Search Tutorial
slug: neural-search-tutorial
short_description: Neural Search Tutorial
description: Step-by-step guide on how to build a neural search service.
preview_image: /blog/from_cms/1_vghoj7gujfjazpdmm9ebxa.webp
date: 2024-01-05T14:09:57.544Z
author: Andrey Vasnetsov
featured: false
tags: []
---

A step-by-step guide on how to build a neural search service.

![](/blog/from_cms/1_yoyuyv4zrz09skc8r6_lta.webp "How to build a neural search service with BERT + Qdrant + FastAPI")

Information retrieval is one of the core technologies that enabled the modern Internet to exist. These days, search technology is at the heart of a variety of applications, from web-page search to product recommendations. For many years this technology didn't change much, until neural networks came into play.

In this tutorial we are going to find answers to these questions:

* What is the difference between regular and neural search?
* What neural networks could be used for search?
* In what tasks is neural network search useful?
* How to build and deploy your own neural search service step by step?

**What is neural search?**

A regular full-text search, such as Google's, consists of searching for keywords inside a document. For this reason, the algorithm cannot take into account the real meaning of the query and documents. Many documents that might be of interest to the user are not found because they use different wording.

Neural search tries to solve exactly this problem: it attempts to enable searches not by keywords but by meaning. To achieve this, the search works in two steps. In the first step, a specially trained neural network encoder converts the query and the searched objects into a vector representation called *embeddings*. The encoder must be trained so that similar objects, such as texts with the same meaning or similar pictures, get close vector representations.

![](/blog/from_cms/1_vghoj7gujfjazpdmm9ebxa.webp "Neural encoder places cats closer together")

Having this vector representation, it is easy to understand what the second step should be. To find documents similar to the query, you now just need to find the nearest vectors. The most convenient way to determine the distance between two vectors is to calculate the cosine distance. The usual Euclidean distance can also be used, but it is not as effective due to the [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality).

**Which model could be used?**

It is ideal to use a model specially trained to determine the closeness of meanings, for example, models trained on Semantic Textual Similarity (STS) datasets. Current state-of-the-art models can be found on this [leaderboard](https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark?p=roberta-a-robustly-optimized-bert-pretraining).

However, not only specially trained models can be used. If a model is trained on a large enough dataset, its internal features can work as embeddings too. So, for instance, you can take any model pre-trained on ImageNet and cut off the last layer from it. In the penultimate layer of the neural network, as a rule, the highest-level features are formed, which, however, do not correspond to specific classes. The output of this layer can be used as an embedding.

**What tasks is neural search good for?**

Neural search has the greatest advantage in areas where the query cannot be formulated precisely.
Querying a table in a SQL database is not the best place for neural search. On the contrary, if the query itself is fuzzy, or if it cannot be formulated as a set of conditions, neural search can help you. If the search query is a picture, a sound file, or a long text, neural network search is almost the only option.

If you want to build a recommendation system, the neural approach can also be useful. The user's actions can be encoded in vector space in the same way as a picture or text. And having those vectors, it is possible to find semantically similar users and determine the next probable user actions.

**Let's build our own**

With all that said, let's make our own neural network search. As an example, I decided to make a search for startups by their description. In this demo, we will see the cases when text search works better and the cases when neural network search works better.

I will use data from [startups-list.com](https://www.startups-list.com/). Each record contains the name, a paragraph describing the company, the location and a picture. Raw parsed data can be found at [this link](https://storage.googleapis.com/generall-shared-data/startups_demo.json).

**Prepare data for neural search**

To be able to search for our descriptions in vector space, we must get vectors first. We need to encode the descriptions into a vector representation. As the descriptions are textual data, we can use a pre-trained language model.

As mentioned above, for the task of text search there is a whole set of pre-trained models specifically tuned for semantic similarity. One of the easiest libraries for working with pre-trained language models, in my opinion, is [sentence-transformers](https://github.com/UKPLab/sentence-transformers) by UKPLab. It provides a way to conveniently download and use many pre-trained models, mostly based on the transformer architecture. Transformers is not the only architecture suitable for neural search, but for our task it is quite enough.

We will use a model called `distilbert-base-nli-stsb-mean-tokens`. DistilBERT means that the size of this model has been reduced by a special technique compared to the original BERT. This is important for the speed of our service and its demand for resources. The word `stsb` in the name means that the model was trained for the Semantic Textual Similarity task. The complete code for data preparation with detailed comments can be found and run in this [Colab Notebook](https://colab.research.google.com/drive/1kPktoudAP8Tu8n8l-iVMOQhVmHkWV_L9?usp=sharing).

![](/blog/from_cms/1_lotmmhjfexth1ucmtuhl7a.webp)
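In condensed form, the data preparation boils down to a sketch like the one below. The file names and the `description` field used here are assumptions for illustration; the Colab notebook linked above remains the complete, commented version.

```python
import json

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens", device="cpu")

# Read the startup records; each line of the file is assumed to be one JSON object
# with a "description" field containing the text we want to search by.
with open("startups.json") as fd:
    descriptions = [json.loads(line)["description"] for line in fd]

# Encode all descriptions into 768-dimensional vectors and save them for the upload step.
vectors = model.encode(descriptions, show_progress_bar=True)
np.save("startup_vectors.npy", vectors, allow_pickle=False)
```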
**Vector search engine**

Now that we have a vector representation for all our records, we need to store them somewhere. In addition to storing, we may also need to add or delete vectors and save additional information alongside them. And most importantly, we need a way to search for the nearest vectors.

A vector search engine takes care of all these tasks. It provides a convenient API for searching and managing vectors. In our tutorial we will use the [Qdrant](/) vector search engine. It not only supports all necessary operations with vectors, but also allows storing an additional payload along with the vectors and using it to filter the search results. Qdrant has a Python client and also defines an API schema if you need to use it from other languages.

The easiest way to use Qdrant is to run a pre-built image, so make sure you have Docker installed on your system. To start Qdrant, use the instructions on its [homepage](https://github.com/qdrant/qdrant).

Download the image from [DockerHub](https://hub.docker.com/r/generall/qdrant):

`docker pull qdrant/qdrant`

And run the service inside Docker:

```bash
docker run -p 6333:6333 \
    -v $(pwd)/qdrant_storage:/qdrant/storage \
    qdrant/qdrant
```

You should see output like this:

```text
...
[...] Starting 12 workers
[...] Starting "actix-web-service-0.0.0.0:6333" service on 0.0.0.0:6333
```

This means that the service is successfully launched and is listening on port 6333. To make sure, you can open <http://localhost:6333/> in your browser and get the Qdrant version info. All data uploaded to Qdrant is saved into the `./qdrant_storage` directory and will be persisted even if you recreate the container.

**Upload data to Qdrant**

Now that we have the vectors prepared and the search engine running, we can start uploading the data. To interact with Qdrant from Python, I recommend using the out-of-the-box client library.
To install it, use the following command:

`pip install qdrant-client`

At this point, we should have the startup records in the file `startups.json`, the encoded vectors in the file `startup_vectors.npy`, and Qdrant running on the local machine. Let's write a script to upload all startup data and vectors into the search engine.

First, let's create a client object for Qdrant.

```python
# Import client library
from qdrant_client import QdrantClient
from qdrant_client import models

qdrant_client = QdrantClient(host='localhost', port=6333)
```

Qdrant allows you to combine vectors of the same purpose into collections. Many independent vector collections can exist on one service at the same time. Let's create a new collection for our startup vectors.

```python
if not qdrant_client.collection_exists('startups'):
    qdrant_client.create_collection(
        collection_name='startups',
        vectors_config=models.VectorParams(size=768, distance="Cosine")
    )
```

The `size` parameter of the vector configuration is very important. It tells the service the size of the vectors in that collection. All vectors in a collection must have the same size, otherwise it is impossible to calculate the distance between them. `768` is the output dimensionality of the encoder we are using. The `distance` parameter allows specifying the function used to measure the distance between two points.

The Qdrant client library defines a special function that allows you to load datasets into the service. However, since there may be too much data to fit into a single computer's memory, the function takes an iterator over the data as input. Let's create an iterator over the startup data and vectors.

```python
import numpy as np
import json

fd = open('./startups.json')

# payload is now an iterator over startup data
payload = map(json.loads, fd)

# Here we load all vectors into memory; a numpy array works as an iterable by itself.
# Another option would be to use mmap if we don't want to load all data into RAM.
vectors = np.load('./startup_vectors.npy')

# And the final step - data uploading
qdrant_client.upload_collection(
    collection_name='startups',
    vectors=vectors,
    payload=payload,
    ids=None,  # Vector ids will be assigned automatically
    batch_size=256  # How many vectors will be uploaded in a single request?
)
```

Now we have the vectors uploaded to the vector search engine. In the next step, we will learn how to actually search for the closest vectors. The full code for this step can be found [here](https://github.com/qdrant/qdrant_demo/blob/master/qdrant_demo/init_vector_search_index.py).

**Make a search API**

Now that all the preparations are complete, let's start building a neural search class. First, install all the requirements:

`pip install sentence-transformers numpy`

In order to process incoming requests, neural search will need two things: a model to convert the query into a vector and a Qdrant client to perform the search queries.
```
# File: neural_searcher.py

from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer


class NeuralSearcher:
    def __init__(self, collection_name):
        self.collection_name = collection_name
        # Initialize encoder model
        self.model = SentenceTransformer('distilbert-base-nli-stsb-mean-tokens', device='cpu')
        # Initialize Qdrant client
        self.qdrant_client = QdrantClient(host='localhost', port=6333)

    # The search function looks as simple as possible:
    def search(self, text: str):
        # Convert the text query into a vector
        vector = self.model.encode(text).tolist()

        # Use `vector` to search for the closest vectors in the collection
        search_result = self.qdrant_client.search(
            collection_name=self.collection_name,
            query_vector=vector,
            query_filter=None,  # We don't want any filters for now
            limit=5  # Five closest results are enough
        )
        # `search_result` contains the found vector ids with similarity scores,
        # along with the stored payload.
        # In this function we are interested in the payload only.
        payloads = [hit.payload for hit in search_result]
        return payloads
```

With Qdrant it is also feasible to add conditions to the search. For example, if we wanted to search only for startups in a certain city, we could pass a payload filter on the city field via the `query_filter` argument instead of `None` (an example of such a filtered search is sketched in the appendix at the end of this post).

We now have a class for making neural search queries. Let’s wrap it up into a service.

**Deploy as a service**

To build the service we will use the FastAPI framework. It is super easy to use and requires minimal code writing. To install it, use the command `pip install fastapi uvicorn`

Our service will have only one API endpoint: it takes a query string and returns the payloads of the closest startups (a minimal `service.py` sketch is also included in the appendix at the end of this post).

Now, if you run the service with `python service.py` and open [http://localhost:8000/docs](http://localhost:8000/docs), you should see a debug interface for your service.

![](/blog/from_cms/1_f4gzrt6rkyqg8xvjr4bdtq-1-.webp "FastAPI Swagger interface")

Feel free to play around with it, make queries and check out the results. This concludes the tutorial.

**Online Demo**

The described code is the core of this [online demo](https://demo.qdrant.tech/). You can try it to get an intuition for the cases when neural search is useful. The demo contains a switch that selects between neural and full-text search. You can turn neural search on and off to compare the results with a regular full-text search. Try using a startup description to find similar ones.

**Conclusion**

In this tutorial, I have tried to give minimal information about neural search, but enough to start using it. Many potential applications are not mentioned here; this is a space to go further into the subject.

Subscribe to my [telegram channel](https://t.me/neural_network_engineering), where I talk about neural network engineering and publish other examples of neural networks and neural search applications. Subscribe to the [Qdrant user’s group](https://discord.gg/tdtYvXjC4h) if you want to stay updated on the latest Qdrant news and features.
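**Appendix: example code sketches**

Below is a minimal sketch of the filtered search mentioned above. It is illustrative rather than the exact code from the original demo: the payload key `city`, the class name `FilteredNeuralSearcher`, and the method name `search_in_city` are assumptions made for this example.

```
# File: filtered_search.py (illustrative sketch)
from qdrant_client import models

from neural_searcher import NeuralSearcher


class FilteredNeuralSearcher(NeuralSearcher):
    def search_in_city(self, text: str, city: str):
        # Convert the text query into a vector, exactly as in the unfiltered search
        vector = self.model.encode(text).tolist()

        # Only return points whose payload "city" field matches the requested city
        city_filter = models.Filter(
            must=[
                models.FieldCondition(
                    key="city",
                    match=models.MatchValue(value=city),
                )
            ]
        )

        hits = self.qdrant_client.search(
            collection_name=self.collection_name,
            query_vector=vector,
            query_filter=city_filter,  # the only change compared to the unfiltered search
            limit=5,
        )
        return [hit.payload for hit in hits]
```

And here is a minimal sketch of the one-endpoint FastAPI service described in the deployment section. Again, this is an illustrative version and not necessarily identical to the `service.py` used in the online demo:

```
# File: service.py (illustrative sketch)
from fastapi import FastAPI

from neural_searcher import NeuralSearcher

app = FastAPI()

# Create a neural searcher instance for the collection we uploaded earlier
neural_searcher = NeuralSearcher(collection_name='startups')


@app.get("/api/search")
def search_startup(q: str):
    # `q` is the free-text query; the response contains the payloads of the closest startups
    return {"result": neural_searcher.search(text=q)}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8000)
```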
blog/neural-search-tutorial.md
---
draft: true
title: v0.9.0 update of the Qdrant engine went live
slug: qdrant-v090-release
short_description: We've released the new version of Qdrant engine - v.0.9.0.
description: We’ve released the new version of Qdrant engine - v.0.9.0. It features the dynamic cluster scaling capabilities. Now Qdrant is more flexible with cluster deployment, allowing to move
preview_image: /blog/qdrant-v.0.9.0-release-update.png
date: 2022-08-08T14:54:45.476Z
author: Alyona Kavyerina
author_link: https://www.linkedin.com/in/alyona-kavyerina/
featured: true
categories:
  - release-update
  - news
tags:
  - corporate news
  - release
sitemapExclude: true
---
We've released the new version of the Qdrant engine - v0.9.0. It features dynamic cluster scaling capabilities. Now Qdrant is more flexible with cluster deployment, allowing you to move shards between nodes and remove nodes from the cluster.

v0.9.0 also brings various improvements, such as removing temporary snapshot files during a complete snapshot, disabling the default mmap threshold, and more. You can read the detailed release notes at https://github.com/qdrant/qdrant/releases/tag/v0.9.0.

We keep improving Qdrant and working on frequently requested functionality for the next release. Stay tuned!
blog/v0-9-0-update-of-the-qdrant-engine-went-live.md
--- draft: false title: "Qdrant Hybrid Cloud: the First Managed Vector Database You Can Run Anywhere" slug: hybrid-cloud short_description: description: preview_image: /blog/hybrid-cloud/hybrid-cloud.png social_preview_image: /blog/hybrid-cloud/hybrid-cloud.png date: 2024-04-15T00:01:00Z author: Andre Zayarni, CEO & Co-Founder featured: true tags: - Hybrid Cloud --- We are excited to announce the official launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) today, a significant leap forward in the field of vector search and enterprise AI. Rooted in our open-source origin, we are committed to offering our users and customers unparalleled control and sovereignty over their data and vector search workloads. Qdrant Hybrid Cloud stands as **the industry's first managed vector database that can be deployed in any environment** - be it cloud, on-premise, or the edge. <p align="center"><iframe width="560" height="315" src="https://www.youtube.com/embed/gWH2uhWgTvM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> As the AI application landscape evolves, the industry is transitioning from prototyping innovative AI solutions to actively deploying AI applications into production (incl. GenAI, semantic search, or recommendation systems). In this new phase, **privacy**, **data sovereignty**, **deployment flexibility**, and **control** are at the top of developers’ minds. These factors are critical when developing, launching, and scaling new applications, whether they are customer-facing services like AI assistants or internal company solutions for knowledge and information retrieval or process automation. Qdrant Hybrid Cloud offers developers a vector database that can be deployed in any existing environment, ensuring data sovereignty and privacy control through complete database isolation - with the full capabilities of our managed cloud service. - **Unmatched Deployment Flexibility**: With its Kubernetes-native architecture, Qdrant Hybrid Cloud provides the ability to bring your own cloud or compute by deploying Qdrant as a managed service on the infrastructure of choice, such as Oracle Cloud Infrastructure (OCI), Vultr, Red Hat OpenShift, DigitalOcean, OVHcloud, Scaleway, STACKIT, Civo, VMware vSphere, AWS, Google Cloud, or Microsoft Azure. - **Privacy & Data Sovereignty**: Qdrant Hybrid Cloud offers unparalleled data isolation and the flexibility to process vector search workloads in their own environments. - **Scalable & Secure Architecture**: Qdrant Hybrid Cloud's design ensures scalability and adaptability with its Kubernetes-native architecture, separates data and control for enhanced security, and offers a unified management interface for ease of use, enabling businesses to grow and adapt without compromising privacy or control. 
- **Effortless Setup in Seconds**: Setting up Qdrant Hybrid Cloud is incredibly straightforward, thanks to our [simple Kubernetes installation](/documentation/hybrid-cloud/) that connects effortlessly with your chosen infrastructure, enabling secure, scalable deployments right from the get-go Let’s explore these aspects in more detail: #### Maximizing Deployment Flexibility: Enabling Applications to Run Across Any Environment ![hybrid-cloud-environments](/blog/hybrid-cloud/hybrid-cloud-environments.png) Qdrant Hybrid Cloud, powered by our seamless Kubernetes-native architecture, is the first managed vector database engineered for unparalleled deployment flexibility. This means that regardless of where you run your AI applications, you can now enjoy the benefits of a fully managed Qdrant vector database, simplifying operations across any cloud, on-premise, or edge locations. For this launch of Qdrant Hybrid Cloud, we are proud to collaborate with key cloud providers, including [Oracle Cloud Infrastructure (OCI)](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers), [Red Hat OpenShift](/blog/hybrid-cloud-red-hat-openshift/), [Vultr](/blog/hybrid-cloud-vultr/), [DigitalOcean](/blog/hybrid-cloud-digitalocean/), [OVHcloud](/blog/hybrid-cloud-ovhcloud/), [Scaleway](/blog/hybrid-cloud-scaleway/), [Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo), and [STACKIT](/blog/hybrid-cloud-stackit/). These partnerships underscore our commitment to delivering a versatile and robust vector database solution that meets the complex deployment requirements of today's AI applications. In addition to our partnerships with key cloud providers, we are also launching in collaboration with renowned AI development tools and framework leaders, including [LlamaIndex](/blog/hybrid-cloud-llamaindex/), [LangChain](/blog/hybrid-cloud-langchain/), [Airbyte](/blog/hybrid-cloud-airbyte/), [JinaAI](/blog/hybrid-cloud-jinaai/), [Haystack by deepset](/blog/hybrid-cloud-haystack/), and [Aleph Alpha](/blog/hybrid-cloud-aleph-alpha/). These launch partners are instrumental in ensuring our users can seamlessly integrate with essential technologies for their AI applications, enriching our offering and reinforcing our commitment to versatile and comprehensive deployment environments. Together with our launch partners we have created detailed tutorials that show how to build cutting-edge AI applications with Qdrant Hybrid Cloud on the infrastructure of your choice. These tutorials are available in our [launch partner blog](/blog/hybrid-cloud-launch-partners/). Additionally, you can find expansive [documentation](/documentation/hybrid-cloud/) and instructions on how to [deploy Qdrant Hybrid Cloud](/documentation/hybrid-cloud/hybrid-cloud-setup/). #### Powering Vector Search & AI with Unmatched Data Sovereignty Proprietary data, the lifeblood of AI-driven innovation, fuels personalized experiences, accurate recommendations, and timely anomaly detection. This data, unique to each organization, encompasses customer behaviors, internal processes, and market insights - crucial for tailoring AI applications to specific business needs and competitive differentiation. However, leveraging such data effectively while ensuring its **security, privacy, and control** requires diligence. 
The innovative architecture of Qdrant Hybrid Cloud ensures **complete database isolation**, empowering developers with the autonomy to tailor where they process their vector search workloads with total data sovereignty. Rooted deeply in our commitment to open-source principles, this approach aims to foster a new level of trust and reliability by providing the essential tools to navigate the exciting landscape of enterprise AI. #### How We Designed the Qdrant Hybrid Cloud Architecture We designed the architecture of Qdrant Hybrid Cloud to meet the evolving needs of businesses seeking unparalleled flexibility, control, and privacy. - **Kubernetes-Native Design**: By embracing Kubernetes, we've ensured that our architecture is both scalable and adaptable. This choice supports our deployment flexibility principle, allowing Qdrant Hybrid Cloud to integrate seamlessly with any infrastructure that can run Kubernetes. - **Decoupled Data and Control Planes**: Our architecture separates the data plane (where the data is stored and processed) from the control plane (which manages the cluster operations). This separation enhances security, allows for more granular control over the data, and enables the data plane to reside anywhere the user chooses. - **Unified Management Interface**: Despite the underlying complexity and the diversity of deployment environments, we designed a unified, user-friendly interface that simplifies the Qdrant cluster management. This interface supports everything from deployment to scaling and upgrading operations, all accessible from the [Qdrant Cloud portal](https://cloud.qdrant.io/login). - **Extensible and Modular**: Recognizing the rapidly evolving nature of technology and enterprise needs, we built Qdrant Hybrid Cloud to be both extensible and modular. Users can easily integrate new services, data sources, and deployment environments as their requirements grow and change. #### Diagram: Qdrant Hybrid Cloud Architecture ![hybrid-cloud-architecture](/blog/hybrid-cloud/hybrid-cloud-architecture.png) #### Quickstart: Effortless Setup with Our One-Step Installation We’ve made getting started with Qdrant Hybrid Cloud as simple as possible. The Kubernetes “One-Step” installation will allow you to connect with the infrastructure of your choice. This is how you can get started: 1. **Activate Hybrid Cloud**: Simply sign up for or log into your [Qdrant Cloud](https://cloud.qdrant.io/login) account and navigate to the **Hybrid Cloud** section. 2. **Onboard your Kubernetes cluster**: Follow the onboarding wizard and add your Kubernetes cluster as a Hybrid Cloud Environment - be it in the cloud, on-premise, or at the edge. 3. **Deploy Qdrant clusters securely, with confidence:** Now, you can effortlessly create and manage Qdrant clusters in your own environment, directly from the central Qdrant Management Console. This supports horizontal and vertical scaling, zero-downtime upgrades, and disaster recovery seamlessly, allowing you to deploy anywhere with confidence. Explore our [detailed documentation](/documentation/hybrid-cloud/) and [tutorials](/documentation/examples/) to seamlessly deploy Qdrant Hybrid Cloud in your preferred environment, and don't miss our [launch partner blog post](/blog/hybrid-cloud-launch-partners/) for practical insights. Start leveraging the full potential of Qdrant Hybrid Cloud and [create your first Qdrant cluster today](https://cloud.qdrant.io/login), unlocking the flexibility and control essential for your AI and vector search workloads. 
[![hybrid-cloud-get-started](/blog/hybrid-cloud/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login) ## Launch Partners We launched Qdrant Hybrid Cloud with assistance and support of our trusted partners. Learn what they have to say about our latest offering: #### Oracle Cloud Infrastructure: > *"We are excited to partner with Qdrant to bring their powerful vector search capabilities to Oracle Cloud Infrastructure. By offering Qdrant Hybrid Cloud as a managed service on OCI, we are empowering enterprises to harness the full potential of AI-driven applications while maintaining complete control over their data. This collaboration represents a significant step forward in making scalable vector search accessible and manageable for businesses across various industries, enabling them to drive innovation, enhance productivity, and unlock valuable insights from their data."* Dr. Sanjay Basu, Senior Director of Cloud Engineering, AI/GPU Infrastructure at Oracle Read more in [OCI's latest Partner Blog](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers). #### Red Hat: > *“Red Hat is committed to driving transparency, flexibility and choice for organizations to more easily unlock the power of AI. By working with partners like Qdrant to enable streamlined integration experiences on Red Hat OpenShift for AI use cases, organizations can more effectively harness critical data and deliver real business outcomes,”* said Steven Huels, vice president and general manager, AI Business Unit, Red Hat. Read more in our [official Red Hat Partner Blog](/blog/hybrid-cloud-red-hat-openshift/). #### Vultr: > *"Our collaboration with Qdrant empowers developers to unlock the potential of vector search applications, such as RAG, by deploying Qdrant Hybrid Cloud with its high-performance search capabilities directly on Vultr's global, automated cloud infrastructure. This partnership creates a highly scalable and customizable platform, uniquely designed for deploying and managing AI workloads with unparalleled efficiency."* Kevin Cochrane, Vultr CMO. Read more in our [official Vultr Partner Blog](/blog/hybrid-cloud-vultr/). #### OVHcloud: > *“The partnership between OVHcloud and Qdrant Hybrid Cloud highlights, in the European AI landscape, a strong commitment to innovative and secure AI solutions, empowering startups and organisations to navigate AI complexities confidently. By emphasizing data sovereignty and security, we enable businesses to leverage vector databases securely."* Yaniv Fdida, Chief Product and Technology Officer, OVHcloud Read more in our [official OVHcloud Partner Blog](/blog/hybrid-cloud-ovhcloud/). #### DigitalOcean: > *“Qdrant, with its seamless integration and robust performance, equips businesses to develop cutting-edge applications that truly resonate with their users. Through applications such as semantic search, Q&A systems, recommendation engines, image search, and RAG, DigitalOcean customers can leverage their data to the fullest, ensuring privacy and driving innovation.“* - Bikram Gupta, Lead Product Manager, Kubernetes & App Platform, DigitalOcean. Read more in our [official DigitalOcean Partner Blog](/blog/hybrid-cloud-digitalocean/). #### Scaleway: > *"With our partnership with Qdrant, Scaleway reinforces its status as Europe's leading cloud provider for AI innovation. The integration of Qdrant's fast and accurate vector database enriches our expanding suite of AI solutions. 
This means you can build smarter, faster AI projects with us, worry-free about performance and security."* Frédéric Bardolle, Lead PM AI, Scaleway Read more in our [official Scaleway Partner Blog](/blog/hybrid-cloud-scaleway/). #### Airbyte: > *“The new Qdrant Hybrid Cloud is an exciting addition that offers peace of mind and flexibility, aligning perfectly with the needs of Airbyte Enterprise users who value the same balance. Being open-source at our core, both Qdrant and Airbyte prioritize giving users the flexibility to build and test locally—a significant advantage for data engineers and AI practitioners. We're enthusiastic about the Hybrid Cloud launch, as it mirrors our vision of enabling users to confidently transition from local development and local deployments to a managed solution, with both cloud and hybrid cloud deployment options.”* AJ Steers, Staff Engineer for AI, Airbyte Read more in our [official Airbyte Partner Blog](/blog/hybrid-cloud-airbyte/). #### deepset: > *“We hope that with Haystack 2.0 and our growing partnerships such as what we have here with Qdrant Hybrid Cloud, engineers are able to build AI systems with full autonomy. Both in how their pipelines are designed, and how their data are managed.”* Tuana Çelik, Developer Relations Lead, deepset. Read more in our [official Haystack by deepset Partner Blog](/blog/hybrid-cloud-haystack/). #### LlamaIndex: > *“LlamaIndex is thrilled to partner with Qdrant on the launch of Qdrant Hybrid Cloud, which upholds Qdrant's core functionality within a Kubernetes-based architecture. This advancement enhances LlamaIndex's ability to support diverse user environments, facilitating the development and scaling of production-grade, context-augmented LLM applications.”* Jerry Liu, CEO and Co-Founder, LlamaIndex Read more in our [official LlamaIndex Partner Blog](/blog/hybrid-cloud-llamaindex/). #### LangChain: > *“The AI industry is rapidly maturing, and more companies are moving their applications into production. We're really excited at LangChain about supporting enterprises' unique data architectures and tooling needs through integrations and first-party offerings through LangSmith. First-party enterprise integrations like Qdrant's greatly contribute to the LangChain ecosystem with enterprise-ready retrieval features that seamlessly integrate with LangSmith's observability, production monitoring, and automation features, and we're really excited to develop our partnership further.”* -Erick Friis, Founding Engineer at LangChain Read more in our [official LangChain Partner Blog](/blog/hybrid-cloud-langchain/). #### Jina AI: > *“The collaboration of Qdrant Hybrid Cloud with Jina AI’s embeddings gives every user the tools to craft a perfect search framework with unmatched accuracy and scalability. It’s a partnership that truly pays off!”* Nan Wang, CTO, Jina AI Read more in our [official Jina AI Partner Blog](/blog/hybrid-cloud-jinaai/). We have also launched Qdrant Hybrid Cloud with the support of **Aleph Alpha**, **STACKIT** and **Civo**. Learn more about our valued partners: - **Aleph Alpha:** [Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud](/blog/hybrid-cloud-aleph-alpha/) - **STACKIT:** [STACKIT and Qdrant Hybrid Cloud for Best Data Privacy](/blog/hybrid-cloud-stackit/) - **Civo:** [Deploy Qdrant Hybrid Cloud on Civo Kubernetes](/documentation/hybrid-cloud/platform-deployment-options/#civo)
blog/hybrid-cloud.md
--- draft: false title: "STACKIT and Qdrant Hybrid Cloud for Best Data Privacy" short_description: "Empowering German AI development with a data privacy-first platform." description: "Empowering German AI development with a data privacy-first platform." preview_image: /blog/hybrid-cloud-stackit/hybrid-cloud-stackit.png date: 2024-04-10T00:07:00Z author: Qdrant featured: false weight: 1001 tags: - Qdrant - Vector Database --- Qdrant and [STACKIT](https://www.stackit.de/en/) are thrilled to announce that developers are now able to deploy a fully managed vector database to their STACKIT environment with the introduction of [Qdrant Hybrid Cloud](/hybrid-cloud/). This is a great step forward for the German AI ecosystem as it enables developers and businesses to build cutting edge AI applications that run on German data centers with full control over their data. Vector databases are an essential component of the modern AI stack. They enable rapid and accurate retrieval of high-dimensional data, crucial for powering search, recommendation systems, and augmenting machine learning models. In the rising field of GenAI, vector databases power retrieval-augmented-generation (RAG) scenarios as they are able to enhance the output of large language models (LLMs) by injecting relevant contextual information. However, this contextual information is often rooted in confidential internal or customer-related information, which is why enterprises are in pursuit of solutions that allow them to make this data available for their AI applications without compromising data privacy, losing data control, or letting data exit the company's secure environment. Qdrant Hybrid Cloud is the first managed vector database that can be deployed in an existing STACKIT environment. The Kubernetes-native setup allows businesses to operate a fully managed vector database, while maintaining control over their data through complete data isolation. Qdrant Hybrid Cloud's managed service seamlessly integrates into STACKIT's cloud environment, allowing businesses to deploy fully managed vector search workloads, secure in the knowledge that their operations are backed by the stringent data protection standards of Germany's data centers and in full compliance with GDPR. This setup not only ensures that data remains under the businesses control but also paves the way for secure, AI-driven application development. #### Key Features and Benefits of Qdrant on STACKIT: - **Seamless Integration and Deployment**: With Qdrant’s Kubernetes-native design, businesses can effortlessly connect their STACKIT cloud as a Hybrid Cloud Environment, enabling a one-step, scalable Qdrant deployment. - **Enhanced Data Privacy**: Leveraging STACKIT's German data centers ensures that all data processing complies with GDPR and other relevant European data protection standards, providing businesses with unparalleled control over their data. - **Scalable and Managed AI Solutions**: Deploying Qdrant on STACKIT provides a fully managed vector search engine with the ability to scale vertically and horizontally, with robust support for zero-downtime upgrades and disaster recovery, all within STACKIT's secure infrastructure. 
#### Use Case: AI-enabled Contract Management built with Qdrant Hybrid Cloud, STACKIT, and Aleph Alpha ![hybrid-cloud-stackit-tutorial](/blog/hybrid-cloud-stackit/hybrid-cloud-stackit-tutorial.png) To demonstrate the power of Qdrant Hybrid Cloud on STACKIT, we’ve developed a comprehensive tutorial showcasing how to build secure, AI-driven applications focusing on data sovereignty. This tutorial specifically shows how to build a contract management platform that enables users to upload documents (PDF or DOCx), which are then segmented for searchable access. Designed with multitenancy, users can only access their team or organization's documents. It also features custom sharding for location-specific document storage. Beyond search, the application offers rephrasing of document excerpts for clarity to those without context. [Try the Tutorial](/documentation/tutorials/rag-contract-management-stackit-aleph-alpha/) #### Start Using Qdrant with STACKIT Deploying Qdrant Hybrid Cloud on STACKIT is straightforward, thanks to the seamless integration facilitated by Kubernetes. Here are the steps to kickstart your journey: 1. **Qdrant Hybrid Cloud Activation**: Start by activating ‘Hybrid Cloud’ in your [Qdrant Cloud account](https://cloud.qdrant.io/login). 2. **Cluster Integration**: Add your STACKIT Kubernetes clusters as a Hybrid Cloud Environment in the Hybrid Cloud section. 3. **Effortless Deployment**: Use the Qdrant Management Console to effortlessly create and manage your Qdrant clusters on STACKIT. We invite you to explore the detailed documentation on deploying Qdrant on STACKIT, designed to guide you through each step of the process seamlessly. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-stackit.md
--- title: "Response to CVE-2024-2221: Arbitrary file upload vulnerability" draft: false slug: cve-2024-2221-response short_description: Qdrant keeps your systems secure description: Upgrade your deployments to at least v1.9.0. Cloud deployments not materially affected. preview_image: /blog/cve-2024-2221/cve-2024-2221-response-social-preview.png # social_preview_image: /blog/Article-Image.png # Optional image used for link previews # title_preview_image: /blog/Article-Image.png # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-04-05T13:00:00-07:00 author: Mike Jang featured: false tags: - cve - security weight: 0 # Change this weight to change order of posts # For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog --- ### Summary A security vulnerability has been discovered in Qdrant affecting all versions prior to v1.9, described in [CVE-2024-2221](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-2221). The vulnerability allows an attacker to upload arbitrary files to the filesystem, which can be used to gain remote code execution. The vulnerability does not materially affect Qdrant cloud deployments, as that filesystem is read-only and authentication is enabled by default. At worst, the vulnerability could be used by an authenticated user to crash a cluster, which is already possible, such as by uploading more vectors than can fit in RAM. Qdrant has addressed the vulnerability in v1.9.0 and above with code that restricts file uploads to a folder dedicated to that purpose. ### Action Check the current version of your Qdrant deployment. Upgrade if your deployment is not at least v1.9.0. To confirm the version of your Qdrant deployment in the cloud or on your local or cloud system, run an API GET call, as described in the [Qdrant Cloud Setup guide](/documentation/cloud/authentication/#test-cluster-access). If your Qdrant deployment is local, you do not need an API key. Your next step depends on how you installed Qdrant. For details, read the [Qdrant Installation](/documentation/guides/installation/) guide. #### If you use the Qdrant container or binary Upgrade your deployment. Run the commands in the applicable section of the [Qdrant Installation](/documentation/guides/installation/) guide. The default commands automatically pull the latest version of Qdrant. #### If you use the Qdrant helm chart If you’ve set up Qdrant on kubernetes using a helm chart, follow the README in the [qdrant-helm](https://github.com/qdrant/qdrant-helm/tree/main?tab=readme-ov-file#upgrading) repository. Make sure applicable configuration files point to version v1.9.0 or above. #### If you use the Qdrant cloud No action is required. This vulnerability does not materially affect you. However, we suggest that you upgrade your cloud deployment to the latest version. > Note: This article has been updated on 2024-05-10 to encourage users to upgrade to 1.9.0 to ensure protection from both CVE-2024-2221 and CVE-2024-3829.
blog/cve-2024-2221-response.md
--- draft: false title: "Introducing FastLLM: Qdrant’s Revolutionary LLM" short_description: The most powerful LLM known to human...or LLM. description: Lightweight and open-source. Custom made for RAG and completely integrated with Qdrant. preview_image: /blog/fastllm-announcement/fastllm.png date: 2024-04-01T00:00:00Z author: David Myriel featured: false weight: 0 tags: - Qdrant - FastEmbed - LLM - Vector Database --- Today, we're happy to announce that **FastLLM (FLLM)**, our lightweight Language Model tailored specifically for Retrieval Augmented Generation (RAG) use cases, has officially entered Early Access! Developed to seamlessly integrate with Qdrant, **FastLLM** represents a significant leap forward in AI-driven content generation. Up to this point, LLM’s could only handle up to a few million tokens. **As of today, FLLM offers a context window of 1 billion tokens.** However, what sets FastLLM apart is its optimized architecture, making it the ideal choice for RAG applications. With minimal effort, you can combine FastLLM and Qdrant to launch applications that process vast amounts of data. Leveraging the power of Qdrant's scalability features, FastLLM promises to revolutionize how enterprise AI applications generate and retrieve content at massive scale. > *“First we introduced [FastEmbed](https://github.com/qdrant/fastembed). But then we thought - why stop there? Embedding is useful and all, but our users should do everything from within the Qdrant ecosystem. FastLLM is just the natural progression towards a large-scale consolidation of AI tools.” Andre Zayarni, President & CEO, Qdrant* > ## Going Big: Quality & Quantity Very soon, an LLM will come out with a context window so wide, it will completely eliminate any value a measly vector database can add. ***We know this. That’s why we trained our own LLM to obliterate the competition. Also, in case vector databases go under, at least we'll have an LLM left!*** As soon as we entered Series A, we knew it was time to ramp up our training efforts. FLLM was trained on 300,000 NVIDIA H100s connected by 5Tbps Infiniband. It took weeks to fully train the model, but our unified efforts produced the most powerful LLM known to human…..or LLM. We don’t see how any other company can compete with FastLLM. Most of our competitors will soon be burning through graphics cards trying to get to the next best thing. But it is too late. By this time next year, we will have left them in the dust. > ***“Everyone has an LLM, so why shouldn’t we? Let’s face it - the more products and features you offer, the more they will sign up. Sure, this is a major pivot…but life is all about being bold.”*** *David Myriel, Director of Product Education, Qdrant* > ## Extreme Performance Qdrant’s R&D is proud to stand behind the most dramatic benchmark results. Across a range of standard benchmarks, FLLM surpasses every single model in existence. In the [Needle In A Haystack](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) (NIAH) test, FLLM found the embedded text with 100% accuracy, always within blocks containing 1 billion tokens. We actually believe FLLM can handle more than a trillion tokens, but it’s quite possible that it is hiding its true capabilities. FastLLM has a fine-grained mixture-of-experts architecture and a whopping 1 trillion total parameters. As developers and researchers delve into the possibilities unlocked by this new model, they will uncover new applications, refine existing solutions, and perhaps even stumble upon unforeseen breakthroughs. 
As of now, we're not exactly sure what problem FLLM is solving, but hey, it's got a lot of parameters!

> *“Our customers ask us ‘What can I do with an LLM this extreme?’ I don’t know, but it can’t hurt to build another RAG chatbot.”* Kacper Lukawski, Senior Developer Advocate, Qdrant

## Get Started!

Don't miss out on this opportunity to be at the forefront of AI innovation. Join FastLLM's Early Access program now and embark on a journey towards AI-powered excellence!

Stay tuned for more updates and exciting developments as we continue to push the boundaries of what's possible with AI-driven content generation.

Happy Generating! 🚀

[Sign Up for Early Access](https://qdrant.to/cloud)
blog/fastllm-announcement.md
--- draft: false title: "Cutting-Edge GenAI with Jina AI and Qdrant Hybrid Cloud" short_description: "Build your most successful app with Jina AI embeddings and on Qdrant Hybrid Cloud." description: "Build your most successful app with Jina AI embeddings and on Qdrant Hybrid Cloud." preview_image: /blog/hybrid-cloud-jinaai/hybrid-cloud-jinaai.png date: 2024-04-10T00:03:00Z author: Qdrant featured: false weight: 1008 tags: - Qdrant - Vector Database --- We're thrilled to announce the collaboration between Qdrant and [Jina AI](https://jina.ai/) for the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/), empowering users worldwide to rapidly and securely develop and scale their AI applications. By leveraging Jina AI's top-tier large language models (LLMs), engineers and scientists can optimize their vector search efforts. Qdrant's latest Hybrid Cloud solution, designed natively with Kubernetes, seamlessly integrates with Jina AI's robust embedding models and APIs. This synergy streamlines both prototyping and deployment processes for AI solutions. Retrieval Augmented Generation (RAG) is broadly adopted as the go-to Generative AI solution, as it enables powerful and cost-effective chatbots, customer support agents and other forms of semantic search applications. Through Jina AI's managed service, users gain access to cutting-edge text generation and comprehension capabilities, conveniently accessible through an API. Qdrant Hybrid Cloud effortlessly incorporates Jina AI's embedding models, facilitating smooth data vectorization and delivering exceptionally precise semantic search functionality. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, global businesses can keep both products deployed in the same hosting architecture. By combining Jina AI’s models with Qdrant’s vector search capabilities, developers can create robust and scalable applications tailored to meet the demands of modern enterprises. This combination allows organizations to build strong and secure Generative AI solutions. > *“The collaboration of Qdrant Hybrid Cloud with Jina AI’s embeddings gives every user the tools to craft a perfect search framework with unmatched accuracy and scalability. It’s a partnership that truly pays off!”* Nan Wang, CTO, Jina AI #### Benefits of Qdrant’s Vector Search With Jina AI Embeddings in Enterprise RAG Scenarios Building apps with Qdrant Hybrid Cloud and Jina AI’s embeddings comes with several key advantages: **Seamless Deployment:** Jina AI’s best-in-class embedding APIs can be combined with Qdrant Hybrid Cloud’s Kubernetes-native architecture to deploy flexible and platform-agnostic AI solutions in a few minutes to any environment. This combination is purpose built for both prototyping and scalability, so that users can put together advanced RAG solutions anyplace with minimal effort. **Scalable Vector Search:** Once deployed to a customer’s host of choice, Qdrant Hybrid Cloud provides a fully managed vector database that lets users effortlessly scale the setup through vertical or horizontal scaling. Deployed in highly secure environments, this is a robust setup that is designed to meet the needs of large enterprises, ensuring a full spectrum of solutions for various projects and workloads. 
**Cost Efficiency:** By leveraging Jina AI's scalable and affordable pricing structure and pairing it with Qdrant's quantization for efficient data handling, this integration offers great value for its cost. Companies who are just getting started with both will have a minimal upfront investment and optimal cost management going forward. #### Start Building Gen AI Apps With Jina AI and Qdrant Hybrid Cloud ![hybrid-cloud-jinaai-tutorial](/blog/hybrid-cloud-jinaai/hybrid-cloud-jinaai-tutorial.png) To get you started, we created a comprehensive tutorial that shows how to build a modern GenAI application with Qdrant Hybrid Cloud and Jina AI embeddings. #### Tutorial: Hybrid Search for Household Appliance Manuals Learn how to build an app that retrieves information from PDF user manuals to enhance user experience for companies that sell household appliances. The system will leverage Jina AI embeddings and Qdrant Hybrid Cloud for enhanced generative AI capabilities, while the RAG pipeline will be tied together using the LlamaIndex framework. This example demonstrates how complex tables in PDF documentation can be processed as high quality embeddings with no extra configuration. By introducing Hybrid Search from Qdrant, the RAG functionality is highly accurate. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-jinaai.md
--- draft: false title: "New RAG Horizons with Qdrant Hybrid Cloud and LlamaIndex" short_description: "Unlock the most advanced RAG opportunities with Qdrant Hybrid Cloud and LlamaIndex." description: "Unlock the most advanced RAG opportunities with Qdrant Hybrid Cloud and LlamaIndex." preview_image: /blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex.png date: 2024-04-10T00:04:00Z author: Qdrant featured: false weight: 1006 tags: - Qdrant - Vector Database --- We're happy to announce the collaboration between [LlamaIndex](https://www.llamaindex.ai/) and [Qdrant’s new Hybrid Cloud launch](/hybrid-cloud/), aimed at empowering engineers and scientists worldwide to swiftly and securely develop and scale their GenAI applications. By leveraging LlamaIndex's robust framework, users can maximize the potential of vector search and create stable and effective AI products. Qdrant Hybrid Cloud offers the same Qdrant functionality on a Kubernetes-based architecture, which further expands the ability of LlamaIndex to support any user on any environment. With Qdrant Hybrid Cloud, users have the flexibility to deploy their vector database in an environment of their choice. By using container-based scalable deployments, companies can leverage a cutting-edge framework like LlamaIndex, while staying deployed in the same hosting architecture as data sources, embedding models and LLMs. This powerful combination empowers organizations to build strong and secure applications that search, understand meaning and converse in text. While LLMs are trained on a great deal of data, they are not trained on user-specific data, which may be private or highly specific. LlamaIndex meets this challenge by adding context to LLM-based generation methods. In turn, Qdrant’s popular vector database sorts through semantically relevant information, which can further enrich the performance gains from LlamaIndex’s data connection features. With LlamaIndex, users can tap into state-of-the-art functions to query, chat, sort or parse data. Through the integration of Qdrant Hybrid Cloud and LlamaIndex developers can conveniently vectorize their data and perform highly accurate semantic search - all within their own environment. > *“LlamaIndex is thrilled to partner with Qdrant on the launch of Qdrant Hybrid Cloud, which upholds Qdrant's core functionality within a Kubernetes-based architecture. This advancement enhances LlamaIndex's ability to support diverse user environments, facilitating the development and scaling of production-grade, context-augmented LLM applications.”* Jerry Liu, CEO and Co-Founder, LlamaIndex #### Reap the Benefits of Advanced Integration Features With Qdrant and LlamaIndex Building apps with Qdrant Hybrid Cloud and LlamaIndex comes with several key advantages: **Seamless Deployment:** Qdrant Hybrid Cloud’s Kubernetes-native architecture lets you deploy Qdrant in a few clicks, to an environment of your choice. Combined with the flexibility afforded by LlamaIndex, users can put together advanced RAG solutions anyplace at minimal effort. **Open-Source Compatibility:** LlamaIndex and Qdrant pride themselves on maintaining a reliable and mature integration that brings peace of mind to those prototyping and deploying large-scale AI solutions. Extensive documentation, code samples and tutorials support users of all skill levels in leveraging highly advanced features of data ingestion and vector search. 
**Advanced Search Features:** LlamaIndex comes with built-in Qdrant Hybrid Search functionality, which combines search results from sparse and dense vectors. As a highly sought-after use case, hybrid search is easily accessible from within the LlamaIndex ecosystem. Deploying this particular type of vector search on Hybrid Cloud is a matter of a few lines of code. #### Start Building With LlamaIndex and Qdrant Hybrid Cloud: Hybrid Search in Complex PDF Documentation Use Cases To get you started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud using the LlamaIndex framework and the LlamaParse API. ![hybrid-cloud-llamaindex-tutorial](/blog/hybrid-cloud-llamaindex/hybrid-cloud-llamaindex-tutorial.png) #### Tutorial: Hybrid Search for Household Appliance Manuals Use this end-to-end tutorial to create a system that retrieves information from complex user manuals in PDF format to enhance user experience for companies that sell household appliances. You will build a RAG pipeline with LlamaIndex leveraging Qdrant Hybrid Cloud for enhanced generative AI capabilities. The LlamaIndex integration shows how complex tables inside of items’ PDF documents can be processed via hybrid vector search with no additional configuration. [Try the Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) #### Documentation: Deploy Qdrant in a Few Clicks Our simple Kubernetes-native design lets you deploy Qdrant Hybrid Cloud on your hosting platform of choice in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-llamaindex.md
--- draft: false title: Building Search/RAG for an OpenAPI spec - Nick Khami | Vector Space Talks slug: building-search-rag-open-api short_description: Nick Khami, Founder and Engineer of Trieve, dives into the world of search and rag apps powered by Open API specs. description: Nick Khami discuss Trieve's work with Qdrant's Open API spec for creating powerful and simplified search and recommendation systems, touching on real-world applications, technical specifics, and the potential for improved user experiences. preview_image: /blog/from_cms/nick-khami-cropped.png date: 2024-04-11T22:23:00.000Z author: Demetrios Brinkmann featured: false tags: - Vector Search - Retrieval Augmented Generation - OpenAPI - Trieve --- > *"It's very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there's something to highlight here and how awesome it is to work with a group based system if you're using Qdrant.”*\ — Nick Khami > Nick Khami, a seasoned full-stack engineer, has been deeply involved in the development of vector search and RAG applications since the inception of Qdrant v0.11.0 back in October 2022. His expertise and passion for innovation led him to establish Trieve, a company dedicated to facilitating businesses in embracing cutting-edge vector search and RAG technologies. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/1JtL167O2ygirKFVyieQfP?si=R2cN5LQrTR60i-JzEh_m0Q), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/roLpKNTeG5A?si=JkKI7yOFVOVEY4Qv).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/roLpKNTeG5A?si=FViKeSYBT-Xw-gwM" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Building-SearchRAG-for-an-OpenAPI-spec---Nick-Khami--Vector-Space-Talk-022-e2iabfb/a-ab5mb2m" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Nick showcases Trieve and the advancements in the world of search technology, demonstrating with Qdrant how simple it is to construct precise search functionalities with open API specs for colorful sneaker discoveries, all while unpacking the potential of improved search experiences and analytics for diverse applications like apps for legislation. We're going deep into the mechanics of search and recommendation applications. Whether you're a developer or just an enthusiast, this episode is guaranteed in giving you insight into how to create a seamless search experience using the latest advancements in the industry. Here are five key takeaways from this episode: 1. **Understand the Open API Spec**: Discover the magic behind Open API specifications and how they can serve your development needs especially when it comes to rest API routes. 2. **Simplify with Trieve and Qdrant**: Nick walks us through a real-world application using Trieve and Qdrant's group-based system, demonstrating how to effortlessly build search capabilities. 3. **Elevate Search Results**: Learn about the power of grouping and recommendations within Qdrant to fine-tune your search results, using the colorful world of sneakers as an example! 4. 
**Trieve's Infrastructure Made Easy**: Find out how taking advantage of Trieve can make creating datasets, obtaining API keys, and kicking off searches simpler than you ever imagined. 5. **Enhanced Vector Search with Tantivy**: If you're curious about alternative search engines, get the scoop on Tantivy, how it complements Qdrant, and its role within the ecosystem. > Fun Fact: Trieve was established in 2023 and the name is a play on the word "retrieve”. > ## Show notes: 00:00 Vector Space Talks intro to Nick Khami.\ 06:11 Qdrant system simplifies difficult building process.\ 07:09 Using Qdrant to organize and manage content.\ 11:43 Creating a group: search results may not group.\ 14:23 Searching with Qdrant: utilizing system routes.\ 17:00 Trieve wrapped up YC W24 batch.\ 21:45 Revolutionizing company search.\ 23:30 Next update: user tracking, analytics, and cross-encoders.\ 27:39 Quadruple supported sparse vectors.\ 30:09 Final questions and wrap up. ## More Quotes from Nick: *"You can get this RAG, this search and the data upload done in a span of maybe 10-15 minutes, which is really cool and something that we were only really possible to build at Trieve, thanks to what the amazing team at Qdrant has been able to create.”*\ — Nick Khami *"Qdrant also offers recommendations for groups, so like, which is really cool... Not only can you search groups, you can also recommend groups, which is, I think, awesome. But yeah, you can upload all your data, you go to the search UI, you can search it, you can test out how recommendations are working [and] in a lot of cases too, you can fix problems in your search.”*\ — Nick Khami *"Typically when you do recommendations, you take the results that you want to base recommendations off of and you build like an average vector that you then use to search. Qdrant offers a more evolved recommendation pattern now where you can traverse the graph looking at the positive point similarity, then also the negative similarity.”*\ — Nick Khami ## Transcript: Demetrios: What is happening? Everyone? Welcome back to another edition of the Vector Space Talks. I am super excited to be here with you today. As always, we've got a very special guest. We've got Nick, the founder and engineer, founder slash engineer of Trieve. And as you know, we like to start these sessions off with a little recommendations of what you can hopefully be doing to make life better. And so when Sabrina's here, I will kick it over to her and ask her for her latest recommendation of what she's been doing. But she's traveling right now, so I'm just going to give you mine on some things that I've been listening to and I have been enjoying. For those who want some nice music, I would recommend an oldie, but a goodie. Demetrios: It is from the incredible band that is not coming to me right now, but it's called this must be the place from the. Actually, it's from the Talking Heads. Definitely recommend that one as a fun way to get the day started. We will throw a link to that music in the chat, but we're not going to be just talking about good music recommendations. Today we are going to get Nick on the stage to talk all about search and rags. And Nick is in a very interesting position because he's been using vector search from Qdrant since 2022. Let's bring this man on the stage and see what he's got to say. What's up, dude? Nick Khami: Hey. Demetrios: Hey. Nick Khami: Nice to meet you. Demetrios: How you doing? Nick Khami: Doing great. Demetrios: Well, it's great to have you. Nick Khami: Yeah, yeah. 
Nice sunny day. It looks like it's going to be here in San Francisco, which is good. It was raining like all of January, but finally got some good sunny days going, which is awesome. Demetrios: Well, it is awesome that you are waking up early for us and you're doing this. I appreciate it coming all the way from San Francisco and talking to us today all about search and recommender system. Sorry, rag apps. I just have in my mind, whenever I say search, I automatically connect recommender because it is kind of similar, but not in this case. You're going to be talking about search and rag apps and specifically around the Open API spec. I know you've got a talk set up for. For us. Do you want to kick it off? And then I'll be monitoring the chat. Demetrios: So if anybody has any questions, throw it in the chat and I'll pop up on screen again and ask away. Nick Khami: Yeah, yeah, I'd love to. I'll go ahead and get this show on the road. Okay. So I guess the first thing I'll talk about is what exactly an Open API spec is. This is Qdrants open API spec. I feel like it's a good topical example for vector space talk. You can see here, Qdrant offers a bunch of different rest API routes on their API. Each one of these exists within this big JSON file called the Open API specification. Nick Khami: There's a lot of projects that have an Open API specification. Stripe has one, I think sentry has one. It's kind of like a de facto way of documenting your API. Demetrios: Can you make your screen just a little or the font just a little bit bigger? Maybe zoom in? Nick Khami: I think I can, yeah. Demetrios: All right, awesome. So that my eyesight is not there. Oh, that is brilliant. That is awesome. Nick Khami: Okay, we doing good here? All right, awesome. Yeah. Hopefully this is more readable for everyone, but yeah. So this is an open API specification. If you look at it inside of a JSON file, it looks a little bit like this. And if you go to the top, I can show the structure. There's a list or there's an object called paths that contains all the different API paths for the API. And then there's another object called security, which explains the authentication scheme. Nick Khami: And you have a nice info section I'm going to ignore, kind of like these two, they're not all that important. And then you have this list of like tags, which is really cool because this is kind of how things get organized. If we go back, you can see these kind of exist as tags. So these items here will be your tags in the Open API specification. One thing that's kind of like interesting is it would be cool if it was relatively trivial to build search over an OpenAPI specification, because if you don't know what you're looking for, then this search bar does not always work great. For example, if you type in search within groups. Oh, this one actually works pretty good. Wow, this seems like an enhanced Open API specification search bar. Nick Khami: I should have made sure that I checked it before going. So this is quite good. Our search bar for tree in example, does not actually, oh, it does have the same search, but I was really interested in, I guess, explaining how you could enhance this or hook it up to vector search in order to do rag audit. It's what I want to highlight here. Qdrant has a really interesting feature called groups. You can search over a group of points at one time and kind of return results in a group oriented way instead of only searching for a singular route. 
And for an Open API specification, that's very interesting. Because it means that you can search for a tag while looking at each tag's individual paths. Nick Khami: It is like a, it's something that's very difficult to build without a system like Qdrant and kind of like one of the primary, I think, feature offerings of it compared to PG vector or maybe like brute force with face or yousearch or something. And the goal that I kind of had was to figure out which endpoint was going to be most relevant for what I was trying to do. In a lot of cases with particularly Qdrants, Open API spec in this example. To go about doing that, I used a scripting runtime for JavaScript called Bun. I'm a big fan of it. It tends to work quite well. It's very performant and kind of easy to work with. I start off here by loading up the Qdrant Open API spec from JSON and then I import some things that exist inside of tree. Nick Khami: Trieve uses Qdrant under the hood to offer a lot of its features, and that's kind of how I'm going to go about doing this here. So I import some stuff from the tree SDK client package, instantiate a couple of environment variables, set up my configuration for the tree API, and now this is where it gets interesting. I pull the tags from the Qdrant Open API JSON specification, which is this array here, and then I iterate over each tag and I check if I've already created the group. If I have, then I do nothing. But if I have it, then I go ahead and I create a group. For each tag, I'm creating these groups so that way I can insert each path into its relevant groups whenever I create them as individual points. Okay, so I finished creating all of the groups, and now for like the next part, I iterate over the paths, which are the individual API routes. For each path I pull the tags that it has, the summary, the description and the API method. Nick Khami: So post, get put, delete, et cetera, and I then create the point. In Trieve world, we call each point a chunk, kind of using I guess like rag terminology. For each individual path I create the chunk and by including its tags in this group tracking ids request body key, it will automatically get added to its relevant groups. I have some try catches here, but that's really the whole script. It's very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there's something to highlight here and how awesome it is to work with a group based system. If you're using Qdrant. If you can think about an e commerce store, sometimes you have multiple colorways of an item. Nick Khami: You'll have a red version of the sneaker, a white version, a blue version, et cetera. And when someone performs a search, you not only want to find the relevant shoe, you want to find the relevant colorway of that shoe. And groups allow you to do this within Qdrant because you can place each colorway as an individual point. Or again, in tree world, chunk into a given group, and then when someone searches, they're going to get the relevant colorway at the top of the given group. It's really nice, really cool. You can see running this is very simple. If I want to update the entire data set by running this again, I can, and this is just going to go ahead and create all the relevant chunks for every route that Qdrant offers. If you guys who are watching or interested in replicating this experiment, I created an open source GitHub repo. 
If you guys watching are interested in replicating this experiment, I created an open source GitHub repo. Nick Khami: We're going to zoom in here. You can reference it at GitHub.com/devflowinc/OpenAPI/search, and you can follow the instructions in the readme to replicate the whole thing. Okay, but I uploaded all the data. Let's see how this works from a UI perspective. Yeah. Trieve bundles in a really nice UI for searching after you add all of your data. So if I go home here, you can see that I'm using the Qdrant OpenAPI spec dataset. And the organization here is like the email I use, Nick.K@OpenAPI. Nick Khami: One of the nice things about Trieve, beyond just the simplicity of adding data, is we use Qdrant's multi-tenancy feature to offer the ability to have multiple datasets within a given organization. So I have the OpenAPI organization, and you can create additional datasets with different embedding models to test with and experiment when it comes to your search. Okay, but I'm not going to go through all those features today. I kind of want to highlight this OpenAPI search that we just finished building. So I guess to compare and contrast, I'm going to use the exact same query that I used before, also going to zoom in. Okay. Nick Khami: And that one would be like what we just did, right? So, how do I maybe, how do I create a group? This isn't a GenAI RAG search. This is just a generic search. Okay, so for "how do I create a group?" we're going to get all these top-level results. In this case, we're not doing a group-oriented search. We're just returning relevant chunks. Sometimes, or a lot of times, I think that people will want to have a more group-oriented search where the results are grouped by tag. So here I'm going to see that the most relevant endpoint, or the most relevant tag within Qdrant's OpenAPI spec, is in theory collections, and within collections it thinks that these are the top three routes that are relevant: recommend point groups, discover batch points, recommend batch points. None of these are quite what I wanted, which is how do I create a group? But it's okay. For cluster, you can see create shard key, delete. Nick Khami: So for cluster, this is kind of interesting. It thinks cluster is relevant, likely because a cluster is a kind of group and it matches to a large extent on the query. Then we also have points, where it keys in on the shard system and the snapshotting system. When the next version gets released, we'll have rolling snapshots in Qdrant, which is very exciting, if anyone else is excited about that feature. I certainly am. Then it pulls the metrics. For another thing that might be a little bit easier for the search to work on, you can type in "how do I search points via group?" And now it kind of is going to key in on what I would say is a better result. And you can see here we have very nice sub-sentence highlighting on the request. It's bolding the sentences of the response that it thinks are the most relevant, which in this case are the second two paragraphs. Yep, the description and summary of what the request does. Another convenient thing about Trieve is that in our default search UI, you can include links out to your resources. If I click this link, I'm going to immediately get to the correct place within the Qdrant Redoc specification. That's the entire search experience. 
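To make the group-oriented search in this demo concrete, here is a minimal sketch against the hypothetical `openapi_paths` collection from the previous snippet, again using Qdrant's Python client directly rather than the Trieve API the demo actually goes through. Grouping by the `tag` payload field returns the top tags, each with its best-matching routes:

```python
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")

groups = client.search_groups(
    collection_name="openapi_paths",
    query_vector=embed("How do I search points via group?"),  # embed() assumed, as in the previous sketch
    group_by="tag",   # one result group per OpenAPI tag
    limit=3,          # top 3 tags overall
    group_size=3,     # top 3 routes inside each tag
    with_payload=True,
)

for group in groups.groups:
    print(group.id, [hit.payload["path"] for hit in group.hits])
```

This is the same mechanic as the e-commerce colorway example above: each route (or colorway) is its own point, and grouping keeps the best representative of each tag (or product) at the top of its group.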
For the GenAI side of this, I did a lot less optimization, but we can experiment and see how it goes. Nick Khami: I'm going to zoom in again, guys. Okay, so let's say I want to make a new RAG chat, and I'm going to ask here, how would I search over points in a group-oriented way with Qdrant? And it's going to go ahead and do a search query for me on my behalf, again powered by the wonder of Qdrant. And once it does this search query, I'm able to get citations and see what the model thinks. The model does a pretty good job with the first response, and it says that to search for points in a group-oriented way with Qdrant, I can utilize the routes and endpoints provided by the system, and the one that I'm going to want to use first is points search groups. If I click doc one here and I look at the route, this is actually correct. Conveniently, you're able to open the link in the... oh, well, okay, this env is wrong, but conveniently what this is supposed to do, if I paste it and fix the incorrect portion of it, changing chat to search, is that you can load the individual chunk in the search UI and read it here, and then you can update it to include document expansion, change the actual copy of what was indexed, et cetera. Nick Khami: It's a really convenient way to merchandise and enhance your dataset without having to write a lot of code. Yeah, and it'll continue writing its answer. I'm not going to go through the whole thing, but this really encapsulates what I wanted to show. This is incredibly simple to do. You can get this RAG, this search, and the data upload done in a span of maybe 10-15 minutes, which is really cool, and something that we were only really able to build at Trieve thanks to what the amazing team at Qdrant has been able to create. And yeah, guys, hopefully that was cool. Demetrios: Excellent. So I've got some questions. Woo, the infinite spinning field. So I want to know about Trieve and I want to jump into what you all are doing there. And then I want to jump in a little bit about the evolution that you've seen of Qdrant over the years, because you've been using it for a while. But first, can we get a bit of an idea of what you're doing and how you're dedicating yourself to creating what you're creating? Nick Khami: Yeah. At Trieve, we just wrapped up the Y Combinator W24 batch and our fundraise, which is cool. It took us like a year. So Dens and I started Trieve in January of 2023, and we kind of kept building and building and building. In the process, we started out trying to build an app for you to have, like, AI-powered arguments at work. It wasn't the best of ideas. That's kind of why we started using Qdrant originally. In the process of building that, we thought it was really hard to get the amazing next-gen search that products like Qdrant offer, because for a typical team, they have to run a Docker Compose file on the local machine, add the Qdrant service to that Docker Compose, docker compose up -d to stand up Qdrant, set an env, download the Qdrant SDK. All these things get very, very difficult. After you index all of your data, you then have to create a UI to view it, because if you don't do that, it can be very hard to judge performance. Nick Khami: I mean, you can always make these benchmarks, but search and recommendations are kind of a heuristic thing. You can always have a benchmark, but the data is dynamic, it changes, and, in what we were experiencing at the time, we really needed a way to quickly gauge how the system was doing. 
We gave up on our RAG AI argumentation app and pivoted to trying to build infrastructure for other people to benefit from the high-quality search that is offered by SPLADE for sparse, or, like, sparse encoders in general. I mean, Elastic's ELSER model is really cool. There are all the dense embedding vector models, and we wanted to offer a managed suite of infrastructure for building on this kind of stuff. That's kind of what Trieve is. So, like, with Trieve... Nick Khami: It's more of a managed experience. You go to the dashboard, you make an account, you create the dataset, you get an API key and the dataset id, you go to your little script, and mine for the OpenAPI spec is 80 lines, you add all your data, and then boom, bam, bing bop, you can just start searching. We offer recommendations as well. Maybe I should have shown those in my demo; like, you can open an individual path and get recommendations for similar ones. Demetrios: There were recommendations, so I wasn't too far off the mark. See, search and recommendation, they just occupy the same spot in my head. Nick Khami: And Qdrant also offers recommendations for groups, guys, which is really cool. Not only can you search groups, you can also recommend groups, which is, I think, awesome. But yeah, you can upload all your data, you go to the search UI, you can search it, you can test out how recommendations are working in a lot of cases too, and you can fix problems in your search. A good example of this is we built search for Y Combinator companies so they could make it a lot better. Algolia is on an older search algorithm that doesn't offer semantic capabilities. And that means that you go to the Y Combinator company search bar, you type in "which company offers short term rentals" and you don't get Airbnb. Nick Khami: But with Trieve, the magic of it is that, believe it or not, there are a bunch of YC companies that do short-term rentals, and Airbnb does not appear first naturally. So with Trieve, we offer a merchandising UI where you put that query in, you see Airbnb ranks a little bit lower than you want, you can immediately adjust the text that you indexed and even add a re-ranking weight so that it appears higher in results. Do it again and it works. And you can also experiment and play with the RAG. I think RAG is kind of a third-class citizen in our API. Nick Khami: It turns out search and recommendations are a lot more popular with our customers and users. But yeah, to encapsulate it, Trieve is an all-in-one infrastructure suite for teams building search, recommendations and RAG. And we bundle the power of databases like Qdrant and next-gen search ML/AI models with UIs for fine-tuning the ranking of results. Demetrios: Dude, the reason I love this is because you can do so much with, like, well done search. That is so valuable for so many companies, and it's overlooked as, like, a solved problem, I think, for a lot of people, but it's not, and it's not that easy, as you just explained. Nick Khami: Yeah, I mean, we're fired up about it. I mean, even if you guys go to YC.Trieve.AI, that's the Y Combinator company search, and you can A/B test it against the older style of search that Algolia offers or Elasticsearch offers. And to me it's magical. It's an absolute work of human ingenuity and amazingness that you can type in, "which company should I get an airbed at?" 
And it finds Airbnb despite none of the keywords matching up. And I'm afraid right now our brains are trained to go to Google. In the Google search bar you can ask a question, you can type in abstract ideas and concepts, and it works. But anytime we go to an e-commerce search bar... oh, they're so. Demetrios: Bad, they're so bad. Everybody's had that experience too, where I don't even search. Like, I just am like, well, all right, or I'll go to Google and search specifically on Google for that website, you know, and, like, put it in parentheses. Nick Khami: We're just excited about that. The goal of Trieve is to make it a lot easier to power these search experiences with the latest-gen tech and help fix this problem. Especially if AI continues to get better, people are going to become more and more used to things working and not having to hack around faceting and filtering for it to work. And yeah, we're just excited to make that easier for companies to work on and build. Demetrios: So there's one question coming through in the chat asking where we can get actual search metrics. Nick Khami: Yeah, so that's like the next thing that we're planning to add. Basically, right now at Trieve, we don't track your users' queries. The next thing that we're building at Trieve is a system for doing that. You're going to be able to analyze all of the searches that have been used on your dataset within that search merchandising UI, or maybe a new UI, and adjust your rankings and spot-fix things the same way you can now, but with the power of the analytics. The other thing we're going to be offering soon is dynamically tunable cross encoders. Cross encoders are this magic neural net that can zip together full-text and semantic results into a new ranked order. They're underutilized, but they're also hard to adjust over time. We're going to be offering API endpoints for uploading your click-through rates on the search results, and then dynamically, on a batched timer, tuning a cross encoder to adjust ranking. Nick Khami: This should be coming out in the next two to three weeks. But yeah, we're just now getting to the analytics hurdle. We also just got past the speed hurdle, so things are fast now. As you guys hopefully saw in the demo, it's sub-50 milliseconds for most queries. P95 is like 80 milliseconds, which is pretty cool thanks to Qdrant, by the way. Qdrant is huge, I mean, for powering all of that. But yeah, analytics will be coming in the next two or three weeks. Nick Khami: We're excited about it. Demetrios: So there's another question coming through in the chat, and they're asking, I wonder if LLMs can suggest GraphQL queries based on a schema, as it's not so tied to endpoints. Nick Khami: I think they could. In the system that we built for this case, I didn't actually use the response body. If you guys go to devflowinc Open API search on GitHub, you can make your own example where you fix that. In the response body of the OpenAPI JSON spec, you have the structure. If you embed that inside of the chunk as another paragraph tag and then go back to doing RAG, it probably can do that. I see no reason why it wouldn't be able to. 
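As an aside on the dynamically tunable cross encoders Nick mentions above: Trieve's click-through-tuned version isn't something that can be shown here, but as a generic, hedged illustration of what a cross encoder does in a reranking step (the model name and candidate list are placeholders, not Trieve's setup), the pattern looks roughly like this:

```python
from sentence_transformers import CrossEncoder

# First-stage candidates, e.g. payload texts returned by a vector search
candidates = [
    "Airbnb: book short-term rentals and homes around the world",
    "A startup that rents office furniture to other startups",
    "A marketplace for long-term apartment leases",
]
query = "which company offers short term rentals"

# A cross encoder scores each (query, document) pair jointly, instead of comparing precomputed vectors
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in candidates])

reranked = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```

What Trieve describes adding on top is periodically retraining or adjusting that reranker from real click-through data, which is the "dynamically tunable" part.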
Demetrios: I just dropped the link in the chat for anybody that is interested. And now let's talk a little bit for these next couple of minutes about the journey of using Qdrant. You said you've been using it since 2022. Things have evolved a ton with the product over these years. Like, what have you seen, what's been the most value-add that you've had since starting? Nick Khami: I mean, there are so many. Okay, the one that I have highlighted in my head that I wanted to talk about was, I remember in May of 2023, there was a GitHub issue with an Algora bounty for API keys. I remember Dens and I, we'd already been using it for a while and we knew there was no API key thing. There was no API key for it. We were always joking about it. We were like, oh, we're so early. There's not even an API key for our database. You had to have access permissions in your VPC or subnet routing to have it work securely. And I'm not sure it's the highest. Nick Khami: I'll talk about some other things that were higher value-add, but I just remember how cool that was. Yeah, yeah, yeah. Demetrios: State of the nation. When you found out about it and. Nick Khami: It was so hyped. When the API key got added, we were like, wow, this is awesome. It was kind of a simple thing, but for us it was like, oh, whoa, we're so much more comfortable on security now. But dude, Qdrant added so many cool things. A couple of things that I think I'd probably highlight are the group system. That was really awesome when that got added. I mean, I think it's one of my favorite features. Then after that, the sparse vector support in a recent version was huge. Nick Khami: We had a whole crazy subsystem with Tantivy. If anyone watching knows the crate Tantivy, it's a full-text... it's like a Lucene alternative written in Rust. And we built this whole crazy subsystem, and then Qdrant supported the sparse vectors and we were like, oh my God, we should have probably worked with them on the sparse vector thing. We didn't even know you guys wanted to do it, because we spent all this time building it and probably could have helped out with that PR. We felt bad, because that was really nice. When that got added, the performance fixes for that were also really cool. Some of the other things that Qdrant added while we've been using it that were really awesome... oh, the multiple recommendation modes. I forget what they're both called, but it's also insane. For people out there watching, try Qdrant for sure, it's so, so, so good compared to a lot of what you can do with pgvector. Nick Khami: This recommendation feature is awesome. Typically when you do recommendations, you take the results that you want to base recommendations off of and you build an average vector that you then use to search. Qdrant offers a more evolved recommendation pattern now where you can traverse the graph looking at the positive point similarity and also the negative similarity, and if the similarity of the negative points is higher than that of the positive points, it'll ignore that edge in recommendations. And for us, at least with our customers, this improved the quality of recommendations a lot when they used negative samples. And we didn't even find out about that at first. It was in the version release notes and we didn't think about it. And a month or two later we had a customer that was communicating that they wanted higher quality recommendations. Nick Khami: And we were like, okay, are we using all the features available? And we weren't. That was cool. 
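The recommendation behavior Nick describes here, where negative examples can veto otherwise-similar candidates instead of just being averaged into a query vector, matches Qdrant's best_score recommendation strategy. A minimal sketch with the Python client (the collection name and point IDs are placeholders):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

recommendations = client.recommend(
    collection_name="products",   # placeholder collection of e-commerce items
    positive=[1042, 2871],        # point IDs the user engaged with
    negative=[512],               # point IDs the user rejected or returned
    # best_score evaluates every candidate against the positive and negative examples directly,
    # instead of averaging the examples into a single query vector
    strategy=models.RecommendStrategy.BEST_SCORE,
    limit=10,
)

for hit in recommendations:
    print(hit.id, hit.score)
```

The default strategy, average_vector, is the "build an average vector" approach Nick contrasts it with.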
Demetrios: The fact that you understand that now and you were able to communicate it back to me almost better than I communicate it to people is really cool. And it shows that you've been in the weeds on it and you have seen a strong use case for it, because sometimes it's like, okay, this is out there, it needs to be communicated with the best use case so that people can understand it. And it seems like with that e-commerce use case, it really stuck. Nick Khami: This one was actually for a company that does search over American legislation, funny enough. We want more e-commerce customers for Trieve; most of our customers right now are SaaS applications. This particular customer, I don't think they'd mind me shouting them out, is called Bill Track 50. If you guys want to search over US legislation, try them out. They're very, very good. And yeah, they were the team that really used it. But yeah, it's another cool thing, I think, about infrastructure like Qdrant in general, and it's so, so powerful that a lot of times it can be worth getting an implementation partner. Nick Khami: Like, even if you're going to use Qdrant, the team at Qdrant is very helpful and you should consider reaching out to them, because they can probably help anyone who's going to build search and recommendations figure out what is offered and what can help at a high level, not so much at a GitHub-issue code level, but at a high level, thinking about your use case. Again, search is such a heuristic problem and so human in a way that it's always worth talking through your solution with people that are very familiar with search and recommendations in general. Demetrios: Yeah. And they know the best features and the best tool to use that is going to get you that outcome you're looking for. So. All right, Nick, last question for you. It is about Trieve. I have my theory on why you call it that. Is it retrieve? You just took off the Re-? Nick Khami: Yes. Drop the "re". It's cleaner. That's like the Facebook quote, but for Trieve. Demetrios: I was thinking when I first read it, I was like, it must be some French word I'm not privy to. And so it's cool because it's French. You just got to put an accent over one of these e's, or both of them, and then it's even cooler. It's like luxury brand to the max. So I appreciate you coming on here. I appreciate you walking us through this and talking about it, man. This was awesome. Nick Khami: Yeah, thanks for having me on. I appreciate it. Demetrios: All right. For anybody else that is out there and wants to come on the Vector Space Talks, come join us. You know where to find us. As always, later.
blog/building-search-rag-for-an-openapi-spec-nick-khami-vector-space-talks.md
--- draft: false title: "Qdrant is Now Available on Azure Marketplace!" short_description: Discover the power of Qdrant on Azure Marketplace! description: Discover the power of Qdrant on Azure Marketplace! Get started today and streamline your operations with ease. preview_image: /blog/azure-marketplace/azure-marketplace.png date: 2024-03-26T10:30:00Z author: David Myriel featured: false weight: 0 tags: - Qdrant - Azure Marketplace - Enterprise - Vector Database --- We're thrilled to announce that Qdrant is now [officially available on Azure Marketplace](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db), bringing enterprise-level vector search directly to Azure's vast community of users. This integration marks a significant milestone in our journey to make Qdrant more accessible and convenient for businesses worldwide. > *With the landscape of AI being complex for most customers, Qdrant's ease of use provides an easy approach for customers' implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure,* - Tara Walker, Principal Software Engineer at Microsoft. ## Why Azure Marketplace? [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/) is renowned for its robust ecosystem, trusted by millions of users globally. By listing Qdrant on Azure Marketplace, we're not only expanding our reach but also ensuring seamless integration with Azure's suite of tools and services. This collaboration opens up new possibilities for our users, enabling them to leverage the power of Azure alongside the capabilities of Qdrant. > *Enterprises like Bosch can now use the power of Microsoft Azure to host Qdrant, unleashing unparalleled performance and massive-scale vector search. "With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale,* - Jeremy Teichmann (AI Squad Technical Lead & Generative AI Expert), Daly Singh (AI Squad Lead & Product Owner) - Bosch Digital. ## Key Benefits for Users: - **Rapid Application Development:** Deploying a cluster on Microsoft Azure via the Qdrant Cloud console only takes a few seconds and can scale up as needed, giving developers maximal flexibility for their production deployments. - **Billion Vector Scale:** Seamlessly grow and handle large-scale datasets with billions of vectors by leveraging Qdrant's features like vertical and horizontal scaling or binary quantization with Microsoft Azure's scalable infrastructure. - **Unparalleled Performance:** Qdrant is built to handle scaling challenges, high throughput, low latency, and efficient indexing. Written in Rust makes Qdrant fast and reliable even under high load. See benchmarks. - **Versatile Applications:** From recommendation systems to similarity search, Qdrant's integration with Microsoft Azure provides a versatile tool for a diverse set of AI applications. ## Getting Started: Ready to experience the benefits of Qdrant on Azure Marketplace? Getting started is easy: 1. **Visit the Azure Marketplace**: Navigate to [Qdrant's Marketplace listing](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db). 2. **Deploy Qdrant**: Follow the simple deployment instructions to set up your instance. 3. **Start Using Qdrant**: Once deployed, start exploring the [features and capabilities of Qdrant](/documentation/concepts/) on Azure. 4. 
**Read Documentation**: Read Qdrant's [Documentation](/documentation/) and build demo apps using [Tutorials](/documentation/tutorials/). ## Join Us on this Exciting Journey: We're incredibly excited about this collaboration with Azure Marketplace and the opportunities it brings for our users. As we continue to innovate and enhance Qdrant, we invite you to join us on this journey towards greater efficiency, scalability, and success. Ready to elevate your business with Qdrant? **Click the banner and get started today!** [![Get Started on Azure Marketplace](cta.png)](https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db) ### About Qdrant: Qdrant is the leading, high-performance, scalable, open-source vector database and search engine, essential for building the next generation of AI/ML applications. Qdrant is able to handle billions of vectors, supports the matching of semantically complex objects, and is implemented in Rust for performance, memory safety, and scale.
blog/azure-marketplace.md
--- draft: false title: "VirtualBrain: Best RAG to unleash the real power of AI - Guillaume Marquis | Vector Space Talks" slug: virtualbrain-best-rag short_description: Let's explore information retrieval with Guillaume Marquis, CTO & Co-Founder at VirtualBrain. description: Guillaume Marquis, CTO & Co-Founder at VirtualBrain, reveals the mechanics of advanced document retrieval with RAG technology, discussing the challenges of scalability, up-to-date information, and navigating user feedback to enhance the productivity of knowledge workers. preview_image: /blog/from_cms/guillaume-marquis-2-cropped.png date: 2024-03-27T12:41:51.859Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Vector Search - Retrieval Augmented Generation - VirtualBrain --- > *"It's like mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can under parallel request a large amount of requests. So you have really this need and Qdrant was like an obvious choice.”*\ — Guillaume Marquis > Guillaume Marquis, a dedicated Engineer and AI enthusiast, serves as the Chief Technology Officer and Co-Founder of VirtualBrain, an innovative AI company. He is committed to exploring novel approaches to integrating artificial intelligence into everyday life, driven by a passion for advancing the field and its applications. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/20iFzv2sliYRSHRy1QHq6W?si=xZqW2dF5QxWsAN4nhjYGmA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/v85HqNqLQcI?feature=shared).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/v85HqNqLQcI?si=hjUiIhWxsDVO06-H" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/VirtualBrain-Best-RAG-to-unleash-the-real-power-of-AI---Guillaume-Marquis--Vector-Space-Talks-017-e2grbfg/a-ab22dgt" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Who knew that document retrieval could be creative? Guillaume and VirtualBrain help draft sales proposals using past reports. It's fascinating how tech aids deep work beyond basic search tasks. Tackling document retrieval and AI assistance, Guillaume furthermore unpacks the ins and outs of searching through vast data using a scoring system, the virtue of RAG for deep work, and going through the 'illusion of work', enhancing insights for knowledge workers while confronting the challenges of scalability and user feedback on hallucinations. Here are some key insight from this episode you need to look out for: 1. How to navigate the world of data with a precision scoring system for document retrieval. 2. The importance of fresh data and how to avoid the black holes of outdated info. 3. Techniques to boost system scalability and speed — essential in the vastness of data space. 4. AI Assistants tailored for depth rather than breadth, aiding in tasks like crafting stellar commercial proposals. 5. The intriguing role of user perception in AI tool interactions, plus a dash of timing magic. > Fun Fact: VirtualBrain uses Qdrant, for its advantages in speed, scalability, and API capabilities. 
> ## Show notes: 00:00 Hosts and guest recommendations.\ 09:01 Leveraging past knowledge to create new proposals.\ 12:33 Ingesting and parsing documents for context retrieval.\ 14:26 Creating and storing data, performing advanced searches.\ 17:39 Analyzing document date for accurate information retrieval.\ 20:32 Perceived time can calm nerves and entertain.\ 24:23 Tried various vector databases, preferred open source.\ 27:42 LangFuse: open source tool for monitoring tasks.\ 33:10 AI tool designed to stay within boundaries.\ 34:31 Minimizing hallucination in AI through careful analysis. ## More Quotes from Guillaume: *"We only exclusively use open source tools because of security aspects and stuff like that. That's why also we are using Qdrant one of the important point on that. So we have a system, we are using this serverless stuff to ingest document over time.”*\ — Guillaume Marquis *"One of the challenging part was the scalability of the system. We have clients that come with terra octave of data and want to be parsed really fast and so you have the ingestion, but even after the semantic search, even on a large data set can be slow. And today ChatGPT answers really fast. So your users, even if the question is way more complicated to answer than a basic ChatGPT question, they want to have their answer in seconds. So you have also this challenge that you really have to take care.”*\ — Guillaume Marquis *"Our AI is not trained to write you a speech based on Shakespeare and with the style of Martin Luther King. It's not the purpose of the tool. So if you ask something that is out of the box, he will just say like, okay, I don't know how to answer that. And that's an important point. That's a feature by itself to be able to not go outside of the box.”*\ — Guillaume Marquis ## Transcript: Demetrios: So, dude, I'm excited for this talk. Before we get into it, I want to make sure that we have some pre conversation housekeeping items that go out, one of which being, as always, we're doing these vector space talks and everyone is encouraged and invited to join in. Ask your questions, let us know where you're calling in from, let us know what you're up to, what your use case is, and feel free to drop any questions that you may have in the chat. We will be monitoring it like a hawk. Today I am joined by none other than Sabrina. How are you doing, Sabrina? Sabrina Aquino: What's up, Demetrios? I'm doing great. Excited to be here. I just love seeing what amazing stuff people are building with Qdrant and. Yeah, let's get into it. Demetrios: Yeah. So I think I see Sabrina's wearing a special shirt which is don't get lost in vector space shirt. If anybody wants a shirt like that. There we go. Well, we got you covered, dude. You will get one at your front door soon enough. If anybody else wants one, come on here. Present at the next vector space talks. Demetrios: We're excited to have you. And we've got one last thing that I think is fun that we can talk about before we jump into the tech piece of the conversation. And that is I told Sabrina to get ready with some recommendations. Know vector databases, they can be used occasionally for recommendation systems, but nothing's better than getting that hidden gem from your friend. And right now what we're going to try and do is give you a few hidden gems so that the next time the recommendation engine is working for you, it's working in your favor. And Sabrina, I asked you to give me one music that you can recommend, one show and one rando. 
So basically one random thing that you can recommend to us. Sabrina Aquino: So I've picked. I thought about this. Okay, I gave it some thought. The movie would be Catch Me If You Can with Leo DiCaprio and Tom Hanks. Have you guys watched it? Really good movie. The song would be "O Children" by Nick Cave and the Bad Seeds. Also a very good song. And the random recommendation is my favorite scented candle, which is citrus notes, sea salt and cedar. Sabrina Aquino: So there you go. Demetrios: A scented candle as a recommendation. I like it. I think that's cool. I didn't exactly tell you to get ready with that. So I'll go next, then you can have some more time to think. So for anybody that's joining in, we're just giving a few recommendations to help your own recommendation engines at home. And we're going to get into this conversation about RAG in just a moment. But my song is, oh my God, I've been listening to it because I didn't think that they had it on Spotify, but I found it this morning and I was so happy that they did. And it is Bill Evans and Chet Baker. Basically, their whole album, the legendary sessions, is just incredible. But the first song on that album is called Alone Together. And when Chet Baker starts playing his little trumpet, my God, it is like you can feel emotion. You can touch it. That is what I would recommend. Demetrios: Anyone out there? I'll drop a link in the chat if you like it. The film or series: This Fool. If you speak Spanish, it's even better. It is an amazing series. Get that, do it. And as the rando thing, I've been having reishi mushroom powder in my coffee in the mornings. I highly recommend it. All right, last one, let's get into your recommendations and then we'll get into this RAG chat. Guillaume Marquis: So, yeah, I sucked a little bit. So for the song, I think I will give something like, because I'm French, I think you can hear it. So I will choose Get Lucky by Daft Punk, because I am a little bit sad about the end of their collaboration. So, yeah, I cannot forget it. And it's really good music. I miss them. As a movie, maybe something I really enjoy. We have a lot of French movies that are really nice, but something more international maybe, and more mainstream: Django by Tarantino, that is really a good movie, and I really enjoy it. Guillaume Marquis: I watched it several times and it's still a good movie to watch. And a random thing, maybe a city, a city to go visit that I really enjoyed. It's hard to choose, really hard to choose a place in general. Okay, Florence, in Italy. Demetrios: There we go. Guillaume Marquis: Yeah, it's a really cool city to go to. So if you have time, and even Sabrina, if you go to Europe soon, it's really a nice place to go. Demetrios: That is true. Sabrina is going to Europe soon. We're blowing up her spot right now. So hopefully Florence is on the list. I know that most people watching did not tune in to hear the three of us just randomly give recommendations. We are here to talk more about retrieval augmented generation. But hopefully those recommendations help some of you all at home with your recommendation engines. And you're maybe using a little bit of a vector database in your recommendation engine building skills. Demetrios: Let's talk about this, though, man, because I think it would be nice if you can set the scene. What exactly are you working on? I know you've got VirtualBrain. Can you tell us a little bit about that so that we can know how you're doing RAG?
Guillaume Marquis: Because RAG is, I think, the most famous word in the AI sphere at the moment. So, VirtualBrain, what we are building in particular is an AI assistant for knowledge workers. We are not only building this next-gen search bar to search content through documents; it's a tool for enterprises, at enterprise grade, that provides an easy way to interact with your knowledge. So basically, we create a tool that we connect to the whole knowledge of the company. It could be whatever, like the drives, SharePoints, whatever knowledge you have, any kind of documents, and with that you will be able to perform tasks on your knowledge, such as audits, RFPs, due diligence. Everyone that is building RAG, or building a kind of search system through RAG, always gives the same number: as a knowledge worker, you spend 20% of your time searching for information. I have heard this number so many times, and it's true, but it's not enough. Guillaume Marquis: Like the search bar, a lot of companies, like many companies, have been working on how to search stuff for a long time, and it's always a subject. But the real pain, and what we want to handle and what we are handling, is deep work, is real tasks, is how to help these workers, to really help them as an assistant, not only on the search bar but as an assistant on real tasks, real added-value tasks. Demetrios: So inside that, can you give us an example of that? Is it like it pops up when it sees me working in Notion and talking about or creating a PRD, and then it says, oh, this might be useful for your PRD because you were searching about that a week ago or whatever? Guillaume Marquis: For instance. So we are working with companies that have from 100 employees to several thousand employees. For instance, when you have to create a commercial proposal as a salesperson in a company, you have a history with a company, a history in this ecosystem, a history within this environment, and you have to capitalize on all the commercial propositions that you did in the past in your company. You can have thousands of propositions, you can have thousands of documents, you can have reporting from different departments, depending on the industry you are working in, and with that, with the tool, you can ask questions, you can capitalize on these documents, and you can easily create a new proposal by asking questions, by interacting with the tool, to go deeply into this use case and to create something that is really relevant for your new use case. And so it's not only retrieval, or just "find me the last proposition for this client". It's more like, okay, use X past proposals to create a new one. And that's a real challenge that is linked to our subject. Guillaume Marquis: Because it's not only about retrieving one, two or even ten documents, it's about retrieving a hundred, two hundred, a lot of documents, a lot of information, and you have a real something to do with a lot of documents, a lot of context, a lot of information you have to manage. Demetrios: I have the million dollar question that I think is probably coming through everyone's head, which is: you're retrieving so many documents, how are you evaluating your retrieval? Guillaume Marquis: That's definitely the million dollar question. It's a tough task to do, to be honest. To be fair.
Currently what we are doing is that we monitor every task of the process, so we have the output of every task. On each task we use a scoring system to evaluate if it's relevant to the initial question or the initial task of the user, and we have a global scoring system on the whole system. So it's quite ad hoc, it's a little bit empirical, but it works for now, and it really helps us to also improve over time all the tasks and all the processes that are done by the tool. Guillaume Marquis: So it's really important. And for instance, you have this kind of framework that is called the RAG triad. That is a way to evaluate RAG on the accuracy of the context you retrieve, on the link with the initial question, and so on, several parameters. And you can really have a first way to evaluate the quality of answers and the quality of everything on each step. Sabrina Aquino: I love it. Can you go more into the tech that you use for each one of these steps in the architecture? Guillaume Marquis: So the process starts at the moment we ingest documents, because basically it's hard to retrieve good documents, or retrieve documents in a proper way, if you don't parse them well. The dumb RAG, as I call it, is like, okay, you take a document, you divide it into text, and that's it. But you will definitely lose the context, the global context of the document, what the document in general is talking about. And you really need to do it properly and to keep this context. And that's a real challenge, because if you keep some noise, if you don't do that well, everything will be broken at the end. So, technically, how it works: we have a proper system that we developed to ingest documents using open source technologies. We exclusively use open source tools because of security aspects and stuff like that. Guillaume Marquis: That's why we are also using Qdrant, one of the important points on that. So we have a system, we are using this serverless stuff to ingest documents over time. We also have models that create tags on documents. So we use open source SLMs to tag documents, to enrich documents, also to create a new title, to create a summary of documents, to keep the context. When we divide the document, we keep the titles of paragraphs, the context inside paragraphs, and we link every piece of text to the others to keep the context. After that, when we retrieve the documents, so the retrieving part, we have a hybrid search system. We are using Qdrant on the semantic part. Guillaume Marquis: So basically we are creating embeddings, we are storing them in Qdrant, and we are performing similarity search to retrieve documents based on title and summary, filtering on tags, on the semantic context. And we also have some keyword search, but it's more for specific tasks, like when we know that we need a specific document, at some point we are searching for it with a keyword search. So it's a kind of hybrid system that is using a deterministic approach, with filtering on tags, and a probabilistic approach, selecting documents with this hybrid search, and doing a scoring system after that to get the most relevant documents and to select how much content we will take from each document. It's a little bit techy, but it's really cool to create and we have a way to evolve it and to improve it. 
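To make the hybrid retrieval Guillaume describes a bit more concrete, here is a minimal, hedged sketch of the Qdrant side of that pattern using the Python client: a deterministic filter on tags combined with a semantic similarity search, followed by a toy re-scoring step. The collection name, payload fields, the `embed()` helper, and the scoring weights are all placeholders, not VirtualBrain's actual implementation.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

query = "past commercial proposals for logistics clients"

hits = client.search(
    collection_name="company_docs",          # placeholder collection
    query_vector=embed(query),               # embed() assumed: any dense embedding model
    query_filter=models.Filter(              # deterministic part: restrict candidates by tags
        must=[
            models.FieldCondition(key="tags", match=models.MatchAny(any=["proposal", "sales"]))
        ]
    ),
    limit=50,
    with_payload=True,
)

# Toy re-scoring pass on top of vector similarity, e.g. a small bonus for keyword matches in the summary
def rescore(hit, keywords=("proposal", "logistics")):
    summary = (hit.payload.get("summary") or "").lower()
    return hit.score + 0.1 * sum(kw in summary for kw in keywords)

ranked = sorted(hits, key=rescore, reverse=True)[:10]
```

The keyword side Guillaume mentions would typically be a separate full-text query whose results get merged into the same scoring step.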
Demetrios: That's what we like around here, man. We want the techie stuff. That's what I think everybody signed up for. So that's very cool. One question that definitely comes up a lot when it comes to RAG, when you're ingesting documents and then retrieving documents and updating documents: how do you make sure that the documents that you are... let's say, there's probably a hypothetical HR scenario where the company has a certain policy and they say you can have European-style holidays, you get like three months of holidays a year, or even French-style holidays, basically, you just don't work, and whenever you want, you can work or not work. And then all of a sudden a US company comes and takes it over and they say, no, you guys don't get holidays. Demetrios: Even when you do get holidays, you're not working, or you are working, and so you have to update all the HR documents, right? So now, when you have this knowledge worker that is creating something, or when you have anyone that is getting help, like this copilot help, how do you make sure that the information that person is getting is the most up-to-date information possible? Guillaume Marquis: That's a new million dollar question. Demetrios: I'm coming with the hits today. I don't know what you were looking for. Guillaume Marquis: That's a really good question. So basically you have several possibilities on that. First, you have, like, this PowerPoint presentation mess in the knowledge bases, and sometimes you just want to use the most up-to-date documents. So basically we can filter on the created_at and the date of the documents. Sometimes you also want to compare the evolution of a process over time, so that's another use case. Guillaume Marquis: So during the ingestion we are analyzing whether a date is inside the document, because sometimes in documentation you have the date at the end of the document or at the beginning of the document. That's a first way to do it. We have the creation date of the document, but it's not a source of truth, because sometimes you created it after, or you duplicated it and the date is not the same, depending on whether you are working on Windows, Microsoft, stuff like that. It's definitely a mess. And we also compare documents. So when we retrieve the documents and documents are really similar to each other, we keep it in mind and we try to give as much information as possible. Sometimes it's not possible, so it's not 100%, it's not bulletproof, but it's a real question. So it's a partial answer to your question, but those are some of the ways we are filtering and answering on this particular topic today. 
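Freshness filtering of the kind Guillaume describes maps naturally onto a payload filter in Qdrant. A small sketch, assuming documents were ingested with a `created_at` payload field holding a unix timestamp (the field name, collection, and cutoff are placeholders; a plain numeric timestamp keeps the sketch simple):

```python
import time

from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

one_year_ago = int(time.time()) - 365 * 24 * 3600

hits = client.search(
    collection_name="company_docs",
    query_vector=embed("current vacation policy"),   # embed() assumed, as in the previous sketch
    query_filter=models.Filter(
        must=[
            # created_at is a placeholder payload field written at ingestion time
            models.FieldCondition(key="created_at", range=models.Range(gte=one_year_ago))
        ]
    ),
    limit=10,
    with_payload=True,
)
```

Comparing near-duplicate documents and preferring the newer one, as Guillaume mentions, would sit on top of this as an application-level step.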
Sabrina Aquino: Now I wonder, what was the most challenging part of building this RAG? Since there were, like. Guillaume Marquis: There are a lot of parts that are really challenging. Sabrina Aquino: Challenging. Guillaume Marquis: One of the challenging parts was the scalability of the system. We have clients that come with terabytes of data and want them to be parsed really fast, and so you have the ingestion, but even after that, the semantic search, even on a large dataset, can be slow. And today ChatGPT answers really fast. So your users, even if the question is way more complicated to answer than a basic ChatGPT question, want to have their answer in seconds. So you also have this challenge that you really have to take care of. So it's quite challenging, and it's like this industrial supply chain: when you upgrade something, you have to be sure that everything is working well on the other side. And that's a real challenge to handle. Guillaume Marquis: And we are still on it, because we are still evolving and getting more data. And at the end of the day, you have to be sure that everything is working well in terms of the LLM, but also in terms of retrieval, and also find a few ways to give some insight to the user about what is working under the hood, to give them the possibility to wait a few seconds more while starting to give them pieces of the answer. Demetrios: Yeah, it's funny you say that, because I remember talking to somebody that was working at you.com and they were saying how there's, like, the actual time. So they were calling it something like perceived time and real, like, actual time. So you as an end user, if you get asked a question, or maybe there's a trivia quiz while the question is coming up, then it seems like it's not actually taking as long as it is. Even if it takes 5 seconds, it's a little bit cooler. Or, as you were mentioning, I remember reading some paper, I think, on how people are a lot less anxious if they see the words starting to pop up like that, and they see, like, okay, it's not just "I'm waiting and then the whole answer gets spit back out at me." It's like I see the answer forming in real time. And so that can calm people's nerves too. Guillaume Marquis: Yeah, definitely. The human brain is marvelous on that. And you have a lot of stuff like that. One of my favorites is the illusion of work. Do you know it? It's the total opposite. If you have something that seems difficult to do, adding more processing time means the user will imagine that it's really a hard task to do. And so that's really funny. Demetrios: So funny like that. Guillaume Marquis: Yeah. Yes. It's the opposite of what you would think if you create a product, but that's real stuff. And sometimes just showing them that you are performing tough tasks in the background helps them go, oh yes, my question was really a complex question, you have a lot of work to do. It's awkward, like: if you answer too fast, they will not trust the answer. Guillaume Marquis: And it's the opposite if you answer too slowly. You can have this "okay, but it should be dumb because it's really slow, so it's a dumb AI" or stuff like that. So that's really funny. My co-founder actually was a product guy, so really focused on product, and he really loves this kind of stuff. Demetrios: Great thought experiment, that's interesting. Sabrina Aquino: And you mentioned that you chose Qdrant because it's open source, but now I wonder if there's also something to do with your need for something that's fast, that's scalable, and what other factors you took into consideration when choosing the vector DB. Guillaume Marquis: Yes, so I told you that the scalability and the speed are among the most important points and the toughest parts to handle. And yes, definitely, because when you are building a complex RAG, you are not just performing one search; at some points you are splitting the question, doing several at the same time. And so it's, like, mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can handle a large amount of parallel requests. So you really have this need, and Qdrant was, like, an obvious choice. Actually, we did a benchmark, so we really tried several possibilities. Demetrios: Oh, tell me more. Yeah.
Guillaume Marquis: So we tried the classic Postgres pgvector; I think we tried it for like 30 minutes, and we realized really fast that it was really not good for our use case. We tried Weaviate, we tried Milvus, we tried Qdrant, we tried a lot. We prefer to use open source because of security issues. We tried Pinecone initially; we were on Pinecone at the beginning of the company. And so the most important points: we have the speed of the tool, we have the scalability, and we also have, maybe it's a little bit dumb to say that, but we also have the API. I remember using Pinecone and trying just to get all the vectors, and it was not possible somehow, and you have this dumb stuff that is sometimes really strange. And if you have a tool that is 100% made for your use case, with people that are working on it, really dedicated to that, and that are aligned with your vision of the evolution of this space, I think it's the best tool for you to choose. Demetrios: So one thing that I would love to hear about too is, when you're looking at your system and you're looking at just the product in general, what are some of the key metrics that you are constantly monitoring, and how do you know that you're hitting them or you're not? And then, if you're not hitting them, what are some ways that you debug the situation? Guillaume Marquis: By metrics you mean, like, usage metrics? Demetrios: Or, like, I'm more thinking of your whole tech setup and the quality of your RAG. Guillaume Marquis: Basically we are focused on the industry of knowledge workers, and in particular on consultants. So we have some datasets of questions that we know should be answered well; we know the kind of outputs we should have. The metrics we are monitoring on our RAG are mostly the accuracy of the answer, the accuracy of sources, and the number of hallucinations, which is sometimes really hard to manage. Actually, our tool is sourcing everything. When you ask a question or when you perform a task, it gives you all the sources. But sometimes you can have a perfect answer and just one number inside your answer that comes from nowhere, that is totally invented, and that's tough to catch. We are still working on that. Guillaume Marquis: We are not the most advanced on this part. We just implemented a tool, I think you may know it, it's LangFuse. Do you know them? LangFuse? Demetrios: No. Tell me more. Guillaume Marquis: LangFuse is a tool that is made to monitor tasks on your RAG, so you can easily log stuff. It's also an open source tool, you can easily self-host it, and you can monitor every part of your RAG. You can create datasets based on questions and answers that have been asked, or some you created by yourself, and you can easily perform checks of your RAG, just to try it out and to give a final score for it, and to be able to monitor everything and to give a global score based on your dataset for your RAG. So we are currently implementing it. I give their name because the work they did is wonderful, and I really enjoy it. It's one of the most important points, to not be blind. I mean, in general, in terms of business, you have to follow metrics. Guillaume Marquis: Numbers cannot lie. Humans lie, but not numbers. But after that you have to interpret the numbers. So that's also another tough part. But it's important to have good metrics and to be able to know if you are evolving, if you are improving your system and if everything is working. That's basically the different stuff we are doing.
Demetrios: Are you collecting human feedback for the hallucinations part? Guillaume Marquis: We try, but humans are not, like, giving a lot of feedback. Demetrios: It's hard. That's why it's really hard to get the end user to do anything; even just the thumbs up, thumbs down can be difficult. Guillaume Marquis: We tried several things. We have the thumbs up, thumbs down, we tried stars, you ask for real feedback, to write something, hey, please help us. Human feedback is quite poor, so we are not counting on that. Demetrios: I think the hard part about it, at least for me as an end user whenever I've been using these, is the thumbs down, or, I've even seen it go as far as, like, you have more than just one emoji. Maybe you have the thumbs up, you have the thumbs down, you have, like, a mushroom emoji, so it's, like, hallucinated. And you have, like... Guillaume Marquis: What was the... Demetrios: The other one that I saw that I thought was pretty good? I can't remember it right now, but. Guillaume Marquis: I never saw the mushroom. But that's quite fun. Demetrios: Yeah, it's good. It's not just wrong, it's absolutely, like, way off the mark. And what I think is interesting there, when I've been the end user, is that it's a little bit just like, I don't have time to explain the nuances as to why this is not useful. I really would have to sit down and almost write a book, or at least an essay, on, yeah, this is kind of useful, but it's like a two out of five, not a four out of five, and so that's why I gave it the thumbs down. Or there was this part that is good and that part's bad. And so it's just the nuances that you have to go into as the end user when you're trying to evaluate it. I think it's much better, and what I've seen a lot of people do, is to just expect to do that in house. After the fact, you get all the information back, and you look at certain metrics, like, oh, did this person commit the code? Then that's a good signal that it's useful. But then you can also look at, did this person copy-paste it? Et cetera, et cetera. And how can we see, if they didn't copy-paste that, or if they didn't take that next action that we would expect them to take, why not? And let's try and dig into what we can do to make that better. Guillaume Marquis: Yes. We can also evaluate the next questions, like the follow-up questions. That's a great point. We are not currently doing it automatically, but if you see that a user just answers, no, it's not true, or you should rephrase it, or be more concise, or these kinds of follow-up questions, you know that the first answer was not as relevant as it should be. Demetrios: That's such a great point. Or you do some sentiment analysis and it's slowly getting more and more angry. Guillaume Marquis: Yeah, that's true. That's a good point also. Demetrios: Yeah, this one went downhill. So, all right, cool. I think that's it. Sabrina, any last questions from your side? Sabrina Aquino: Yeah, I think I'm just very interested to know, from a user perspective of VirtualBrain, how are traditional models worse, or what kind of errors does VirtualBrain fix in its structure, that users find it better that way? Guillaume Marquis: I think in this particular case, so, we talked about hallucinations, I think it's one of the main issues people have with classic LLMs.
We really think that when you create a one-size-fits-all tool, you have some challenges, because you have to manage different approaches; like, when you are creating Copilot as Microsoft, you have to handle all kinds of use cases. And I really think so: our AI is not trained to write you a speech based on Shakespeare in the style of Martin Luther King. It's not the purpose of the tool. So if you ask something that is out of the box, he will just say, like, okay, I don't know how to answer that. And that's an important point. That's a feature by itself, to be able to not go outside of the box. And so we made this choice of putting the AI inside the box, the box that contains basically all the knowledge of your company, all the retrieved knowledge. Guillaume Marquis: Actually, we do not have a lot of hallucinations. I will not say 0%, but it's close to zero, because we analyze the question, we put the AI in a box, we force the AI to think about the answer before answering, and we also analyze the answer to know if it is relevant. And that's an important point that we are fixing, that we fix for our users: we prefer, yes, to give no answer rather than a bad answer. Sabrina Aquino: Absolutely. And there are people who think, hey, this is RAG, it's not going to hallucinate, and that's not the case at all. It will hallucinate less inside a certain context window that you provide, right? But it still has a possibility. So minimizing that as much as possible is very valuable. Demetrios: So good. Well, I think with that, our time here is coming to an end. I really appreciate this. I encourage everyone to go and have a little look at VirtualBrain. We'll drop a link in the comments in case anyone wants to sign up for free. Guillaume Marquis: So you can try it for free. Demetrios: Even better. Look at that, Christmas came early. Well, let's go have some fun, play around with it. And I can't promise, but I may give you some feedback, I may give you some evaluation metrics if it's hallucinating. Guillaume Marquis: If I see some thumbs up or thumbs down, I will know that it's you. Demetrios: Yeah, cool. Exactly. All right, folks, that's about it for today. We will see you all later. As a reminder, don't get lost in vector space. This has been another Vector Space Talk. And if you want to come on here and chat with us, feel free to reach out. See ya. Guillaume Marquis: Cool. Sabrina Aquino: See you guys. Thank you. Bye.
blog/virtualbrain-best-rag-to-unleash-the-real-power-of-ai-guillaume-marquis-vector-space-talks.md
--- draft: false title: "Pienso & Qdrant: Future Proofing Generative AI for Enterprise-Level Customers" slug: case-study-pienso short_description: Why Pienso chose Qdrant as a cornerstone for building domain-specific foundation models. description: Why Pienso chose Qdrant as a cornerstone for building domain-specific foundation models. preview_image: /case-studies/pienso/social_preview.png date: 2023-02-28T09:48:00.000Z author: Qdrant Team featured: false aliases: - /case-studies/pienso/ --- The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso's low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable and cost-efficient high vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces. Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions. Strengthening LLM Performance Qdrant enhances the accuracy of large language models (LLMs) by offering an alternative to relying solely on patterns identified during the training phase. By integrating with Qdrant, Pienso will empower customer LLMs with dynamic long-term storage, which will ultimately enable them to generate concrete and factual responses. Qdrant effectively preserves the extensive context windows managed by advanced LLMs, allowing for a broader analysis of the conversation or document at hand. By leveraging this extended context, LLMs can achieve a more comprehensive understanding and produce contextually relevant outputs. ## Joint Dedication to Scalability, Efficiency and Reliability > “Every commercial generative AI use case we encounter benefits from faster training and inference, whether mining customer interactions for next best actions or sifting clinical data to speed a therapeutic through trial and patent processes.” - Birago Jones, CEO, Pienso Pienso chose Qdrant for its exceptional LLM interoperability, recognizing the potential it offers in maximizing the power of large language models and interactive deep learning for large enterprises. Qdrant excels in efficient nearest neighbor search, which is an expensive and computationally demanding task. Our ability to store and search high-dimensional vectors with remarkable performance and precision will offer a significant peace of mind to Pienso’s customers. Through intelligent indexing and partitioning techniques, Qdrant will significantly boost the speed of these searches, accelerating both training and inference processes for users. ### Scalability: Preparing for Sustained Growth in Data Volumes Qdrant's distributed deployment mode plays a vital role in empowering large enterprises dealing with massive data volumes. It ensures that increasing data volumes do not hinder performance but rather enrich the model's capabilities, making scalability a seamless process. 
Moreover, Qdrant is well-suited for Pienso’s enterprise customers as it operates best on bare metal infrastructure, enabling them to maintain complete control over their data sovereignty and autonomous LLM regimes. This ensures that enterprises can maintain their full span of control while leveraging the scalability and performance benefits of Qdrant's solution. ### Efficiency: Maximizing the Customer Value Proposition Qdrant's storage efficiency delivers cost savings on hardware while ensuring a responsive system even with extensive data sets. In an independent benchmark stress test, Pienso discovered that Qdrant could efficiently store 128 million documents, consuming a mere 20.4GB of storage and only 1.25GB of memory. This storage efficiency not only minimizes hardware expenses for Pienso’s customers, but also ensures optimal performance, making Qdrant an ideal solution for managing large-scale data with ease and efficiency. ### Reliability: Fast Performance in a Secure Environment Qdrant's utilization of Rust, coupled with its memmap storage and write-ahead logging, offers users a powerful combination of high-performance operations, robust data protection, and enhanced data safety measures. Our memmap storage feature offers Pienso fast performance comparable to in-memory storage. In the context of machine learning, where rapid data access and retrieval are crucial for training and inference tasks, this capability proves invaluable. Furthermore, our write-ahead logging (WAL), is critical to ensuring changes are logged before being applied to the database. This approach adds additional layers of data safety, further safeguarding the integrity of the stored information. > “We chose Qdrant because it's fast to query, has a small memory footprint and allows for instantaneous setup of a new vector collection that is going to be queried. Other solutions we evaluated had long bootstrap times and also long collection initialization times {..} This partnership comes at a great time, because it allows Pienso to use Qdrant to its maximum potential, giving our customers a seamless experience while they explore and get meaningful insights about their data.” - Felipe Balduino Cassar, Senior Software Engineer, Pienso ## What's Next? Pienso and Qdrant are dedicated to jointly develop the most reliable customer offering for the long term. Our partnership will deliver a combination of no-code/low-code interactive deep learning with efficient vector computation engineered for open source models and libraries. **To learn more about how we plan on achieving this, join the founders for a [technical fireside chat at 09:30 PST Thursday, 20th July on Discord](https://discord.gg/Vnvg3fHE?event=1128331722270969909).** ![founders chat](/case-studies/pienso/founderschat.png)
blog/case-study-pienso.md
--- draft: false title: "Red Hat OpenShift and Qdrant Hybrid Cloud Offer Seamless and Scalable AI" short_description: "Qdrant brings managed vector databases to Red Hat OpenShift for large-scale GenAI." description: "Qdrant brings managed vector databases to Red Hat OpenShift for large-scale GenAI." preview_image: /blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift.png date: 2024-04-11T00:04:00Z author: Qdrant featured: false weight: 1003 tags: - Qdrant - Vector Database --- We’re excited about our collaboration with Red Hat to bring the Qdrant vector database to [Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) customers! With the release of [Qdrant Hybrid Cloud](/hybrid-cloud/), developers can now deploy and run the Qdrant vector database directly in their Red Hat OpenShift environment. This collaboration enables developers to scale more seamlessly, operate more consistently across hybrid cloud environments, and maintain complete control over their vector data. This is a big step forward in simplifying AI infrastructure and empowering data-driven projects, like retrieval augmented generation (RAG) use cases, advanced search scenarios, or recommendations systems. In the rapidly evolving field of Artificial Intelligence and Machine Learning, the demand for being able to manage the modern AI stack within the existing infrastructure becomes increasingly relevant for businesses. As enterprises are launching new AI applications and use cases into production, they require the ability to maintain complete control over their data, since these new apps often work with sensitive internal and customer-centric data that needs to remain within the owned premises. This is why enterprises are increasingly looking for maximum deployment flexibility for their AI workloads. >*“Red Hat is committed to driving transparency, flexibility and choice for organizations to more easily unlock the power of AI. By working with partners like Qdrant to enable streamlined integration experiences on Red Hat OpenShift for AI use cases, organizations can more effectively harness critical data and deliver real business outcomes,”* said Steven Huels, Vice President and General Manager, AI Business Unit, Red Hat. #### The Synergy of Qdrant Hybrid Cloud and Red Hat OpenShift Qdrant Hybrid Cloud is the first vector database that can be deployed anywhere, with complete database isolation, while still providing a fully managed cluster management. Running Qdrant Hybrid Cloud on Red Hat OpenShift allows enterprises to deploy and run a fully managed vector database in their own environment, ultimately allowing businesses to run managed vector search on their existing cloud and infrastructure environments, with full data sovereignty. Red Hat OpenShift, the industry’s leading hybrid cloud application platform powered by Kubernetes, helps streamline the deployment of Qdrant Hybrid Cloud within an enterprise's secure premises. Red Hat OpenShift provides features like auto-scaling, load balancing, and advanced security controls that can help you manage and maintain your vector database deployments more effectively. In addition, Red Hat OpenShift supports deployment across multiple environments, including on-premises, public, private and hybrid cloud landscapes. This flexibility, coupled with Qdrant Hybrid Cloud, allows organizations to choose the deployment model that best suits their needs. #### Why Run Qdrant Hybrid Cloud on Red Hat OpenShift? 
- **Scalability**: Red Hat OpenShift's container orchestration effortlessly scales Qdrant Hybrid Cloud components, accommodating fluctuating workload demands with ease. - **Portability**: The consistency across hybrid cloud environments provided by Red Hat OpenShift allows for smoother operation of Qdrant Hybrid Cloud across various infrastructures. - **Automation**: Deployment, scaling, and management tasks are automated, reducing operational overhead and simplifying the management of Qdrant Hybrid Cloud. - **Security**: Red Hat OpenShift provides built-in security features, including container isolation, network policies, and role-based access control (RBAC), enhancing the security posture of Qdrant Hybrid Cloud deployments. - **Flexibility:** Red Hat OpenShift supports a wide range of programming languages, frameworks, and tools, providing flexibility in developing and deploying Qdrant Hybrid Cloud applications. - **Integration:** Red Hat OpenShift can be integrated with various Red Hat and third-party tools, facilitating seamless integration of Qdrant Hybrid Cloud with other enterprise systems and services. #### Get Started with Qdrant Hybrid Cloud on Red Hat OpenShift We're thrilled about our collaboration with Red Hat to help simplify AI infrastructure for developers and enterprises alike. By deploying Qdrant Hybrid Cloud on Red Hat OpenShift, developers can gain the ability to more easily scale and maintain greater operational consistency across hybrid cloud environments. To get started, we created a comprehensive tutorial that shows how to build next-gen AI applications with Qdrant Hybrid Cloud on Red Hat OpenShift. Additionally, you can find more details on the seamless deployment process in our documentation: ![hybrid-cloud-red-hat-openshift-tutorial](/blog/hybrid-cloud-red-hat-openshift/hybrid-cloud-red-hat-openshift-tutorial.png) #### Tutorial: Private Chatbot for Interactive Learning In this tutorial, you will build a chatbot without public internet access. The goal is to keep sensitive data secure and isolated. Your RAG system will be built with Qdrant Hybrid Cloud on Red Hat OpenShift, leveraging Haystack for enhanced generative AI capabilities. This tutorial especially explores how this setup ensures that not a single data point leaves the environment. [Try the Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Documentation: Deploy Qdrant in a Few Clicks > Our simple Kubernetes-native design allows you to deploy Qdrant Hybrid Cloud on your Red Hat OpenShift instance in just a few steps. Learn how in our documentation. [Read Hybrid Cloud Documentation](/documentation/hybrid-cloud/) This collaboration marks an important milestone in the quest for simplified AI infrastructure, offering a robust, scalable, and security-optimized solution for managing vector databases in a hybrid cloud environment. The combination of Qdrant's performance and Red Hat OpenShift's operational excellence opens new avenues for enterprises looking to leverage the power of AI and ML. #### Ready to Get Started? Create a [Qdrant Cloud account](https://cloud.qdrant.io/login) and deploy your first **Qdrant Hybrid Cloud** cluster in a few minutes. You can always learn more in the [official release blog](/blog/hybrid-cloud/).
blog/hybrid-cloud-red-hat-openshift.md
---
draft: true
title: New 0.7.0 update of the Qdrant engine went live
slug: qdrant-0-7-0-released
short_description: Qdrant v0.7.0 engine has been released
description: Qdrant v0.7.0 engine has been released
preview_image: /blog/from_cms/v0.7.0.png
date: 2022-04-13T08:57:07.604Z
author: Alyona Kavyerina
author_link: https://www.linkedin.com/in/alyona-kavyerina/
featured: true
categories:
- News
- Release update
tags:
- Corporate news
- Release
sitemapExclude: True
---

We've released a new version of the Qdrant neural search engine. Let's see what's new in update 0.7.0.

* The 0.7 engine now supports JSON as a payload.
* It brings back a previously missing API: the Alias API is now available in gRPC.
* Filtering conditions have been refactored, and bool, IsEmpty, and ValuesCount filters are available.
* It includes many improvements to geo payload indexing, HNSW performance, and more.

Read the detailed release notes on [GitHub](https://github.com/qdrant/qdrant/releases/tag/v0.7.0).

Stay tuned for new updates.\
If you have any questions or need support, join our [Discord](https://discord.com/invite/tdtYvXjC4h) community.
blog/new-0-7-update-of-the-qdrant-engine-went-live.md
--- draft: false title: The Bitter Lesson of Retrieval in Generative Language Model Workflows - Mikko Lehtimäki | Vector Space Talks slug: bitter-lesson-generative-language-model short_description: Mikko Lehtimäki discusses the challenges and techniques in implementing retrieval augmented generation for Yokot AI description: Mikko Lehtimäki delves into the intricate world of retrieval-augmented generation, discussing how Yokot AI manages vast diverse data inputs and how focusing on re-ranking can massively improve LLM workflows and output quality. preview_image: /blog/from_cms/mikko-lehtimäki-cropped.png date: 2024-01-29T16:31:02.511Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - generative language model - Retrieval Augmented Generation - Softlandia --- > *"If you haven't heard of the bitter lesson, it's actually a theorem. It's based on a blog post by Ricard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans.”*\ -- Mikko Lehtimäki > Dr. Mikko Lehtimäki is a data scientist, researcher and software engineer. He has delivered a range of data-driven solutions, from machine vision for robotics in circular economy to generative AI in journalism. Mikko is a co-founder of Softlandia, an innovative AI solutions provider. There, he leads the development of YOKOTAI, an LLM-based productivity booster that connects to enterprise data. Recently, Mikko has contributed software to Llama-index and Guardrails-AI, two leading open-source initiatives in the LLM space. He completed his PhD in the intersection of computational neuroscience and machine learning, which gives him a unique perspective on the design and implementation of AI systems. With Softlandia, Mikko also hosts chill hybrid-format data science meetups where everyone is welcome to participate. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/5hAnDq7MH9qjjtYVjmsGrD?si=zByq7XXGSjOdLbXZDXTzoA), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/D8lOvz5xp5c).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/D8lOvz5xp5c?si=k9tIcDf31xqjqiv1" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/The-Bitter-Lesson-of-Retrieval-in-Generative-Language-Model-Workflows---Mikko-Lehtimki--Vector-Space-Talk-011-e2evek4/a-aat2k24" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top takeaways:** Aren’t you curious about what the bitter lesson is and how it plays out in generative language model workflows? Check it out as Mikko delves into the intricate world of retrieval-augmented generation, discussing how Yokot AI manages vast diverse data inputs and how focusing on re-ranking can massively improve LLM workflows and output quality. 5 key takeaways you’ll get from this episode: 1. **The Development of Yokot AI:** Mikko detangles the complex web of how Softlandia's in-house stack is changing the game for language model applications. 2. 
**Unpacking Retrieval-Augmented Generation:** Learn the rocket science behind uploading documents and scraping the web for that nugget of insight, all through the prowess of Yokot AI's LLMs. 3. **The "Bitter Lesson" Theory:** Dive into the theorem that's shaking the foundations of AI, suggesting the supremacy of data and computing over human design. 4. **High-Quality Content Generation:** Understand how the system's handling of massive data inputs is propelling content quality to stratospheric heights. 5. **Future Proofing with Re-Ranking:** Discover why improving the re-ranking component might be akin to discovering a new universe within our AI landscapes. > Fun Fact: Yokot AI incorporates a retrieval augmented generation mechanism to facilitate the retrieval of relevant information, which allows users to upload and leverage their own documents or scrape data from the web. > ## Show notes: 00:00 Talk on retrieval for language models and Yokot AI platform.\ 06:24 Data flexibility in various languages leads progress.\ 10:45 User inputs document, system converts to vectors.\ 13:40 Enhance data quality, reduce duplicates, streamline processing.\ 19:20 Reducing complexity by focusing on re-ranker.\ 21:13 Retrieval process enhances efficiency of language model.\ 24:25 Information retrieval methods evolving, leveraging data, computing.\ 28:11 Optimal to run lightning on local hardware. ## More Quotes from Mikko: "*We used to build image analysis on this type of features that we designed manually... Whereas now we can just feed a bunch of images to a transformer, and we'll get beautiful bounding boxes and semantic segmentation outputs without building rules into the system.*”\ -- Mikko Lehtimäki *"We cannot just leave it out and hope that someday soon we will have a language model that doesn't require us fetching the data for it in such a sophisticated manner. The reranker is a component that can leverage data and compute quite efficiently, and it doesn't require that much manual craftmanship either.”*\ -- Mikko Lehtimäki *"We can augment the data we store, for example, by using multiple chunking strategies or generating question answer pairs from the user's documents, and then we'll embed those and look them up when the queries come in.”*\ -- Mikko Lehtimäki in improving data quality in rack stack ## Transcript: Demetrios: What is happening? Everyone, it is great to have you here with us for yet another vector space talks. I have the pleasure of being joined by Mikko today, who is the co founder of Softlandia, and he's also lead data scientist. He's done all kinds of great software engineering and data science in his career, and currently he leads the development of Yokot AI, which I just learned the pronunciation of, and he's going to tell us all about it. But I'll give you the TLDR. It's an LLM based productivity booster that can connect to your data. What's going on, Mikko? How you doing, bro? Mikko Lehtimäki: Hey, thanks. Cool to be here. Yes. Demetrios: So, I have to say, I said it before we hit record or before we started going live, but I got to say it again. The talk title is spot on. Your talk title is the bitter lessons of retrieval in generative language model workflows. Mikko Lehtimäki: Exactly. Demetrios: So I'm guessing you've got a lot of hardship that you've been through, and you're going to hopefully tell us all about it so that we do not have to make the same mistakes as you did. We can be wise and learn from your mistakes before we have to make them ourselves, right? 
All right. That's a great segue into you getting into it, man. I know you got to talk. I know you got some slides to share, so feel free to start throwing those up on the screen. And for everyone that is here joining, feel free to add some questions in the chat. I'll be monitoring it so that in case you have any questions, I can jump in and make sure that Mikko answers them before he moves on to the next slide. All right, Mikko, I see your screen, bro. Demetrios: This is good stuff. Mikko Lehtimäki: Cool. So, shall we get into? Yeah. My name is Mikko. I'm the chief data scientist here at Softlandia. I finished my phd last summer and have been doing the Softlandia for two years now. I'm also a contributor to some open source AI LLM libraries like Llama index and cartrails AI. So if you haven't checked those out ever, please do. Here at Softlandia, we are primarily an AI consultancy that focuses on end to end AI solutions, but we've also developed our in house stack for large language model applications, which I'll be discussing today. Mikko Lehtimäki: So the topic of the talk is a bit provocative. Maybe it's a bitter lesson of retrieval for large language models, and it really stems from our experience in building production ready retrieval augmented generation solutions. I just want to say it's not really a lecture, so I'm going to tell you to do this or do that. I'll just try to walk you through the thought process that we've kind of adapted when we develop rack solutions, and we'll see if that resonates with you or not. So our LLM solution is called Yokot AI. It's really like a platform where enterprises can upload their own documents and get language model based insights from them. The typical example is question answering from your documents, but we're doing a bit more than that. For example, users can generate long form documents, leveraging their own data, and worrying about the token limitations that you typically run in when you ask an LLM to output something. Mikko Lehtimäki: Here you see just a snapshot of the data management view that we have built. So users can bring their own documents or scrape the web, and then access the data with LLMS right away. This is the document generation output. It's longer than you typically see, and each section can be based on different data sources. We've got different generative flows, like we call them, so you can take your documents and change the style using llms. And of course, the typical chat view, which is really like the entry point, to also do these workflows. And you can see the sources that the language model is using when you're asking questions from your data. And this is all made possible with retrieval augmented generation. Mikko Lehtimäki: That happens behind the scenes. So when we ask the LLM to do a task, we're first fetching data from what was uploaded, and then everything goes from there. So we decide which data to pull, how to use it, how to generate the output, and how to present it to the user so that they can keep on conversing with the data or export it to their desired format, whatnot. But the primary challenge with this kind of system is that it is very open ended. So we don't really set restrictions on what kind of data the users can upload or what language the data is in. So, for example, we're based in Finland. Most of our customers are here in the Nordics. They talk, speak Finnish, Swedish. Mikko Lehtimäki: Most of their data is in English, because why not? 
And they can just use whatever language they feel with the system. So we don't want to restrict any of that. The other thing is the chat view as an interface, it really doesn't set much limits. So the users have the freedom to do the task that they choose with the system. So the possibilities are really broad that we have to prepare for. So that's what we are building. Now, if you haven't heard of the bitter lesson, it's actually a theorem. It's based on a blog post by Ricard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans. Mikko Lehtimäki: So for example, I have an illustration here showing how this has manifested in image analysis. So on the left hand side, you see the output from an operation that extracts gradients from images. We used to build image analysis on this type of features that we designed manually. We would run some kind of edge extraction, we would count corners, we would compute the edge distances and design the features by hand in order to work with image data. Whereas now we can just feed a bunch of images to a transformer, and we'll get beautiful bounding boxes and semantic segmentation outputs without building rules into the system. So that's a prime example of the bitter lesson in action. Now, if we take this to the context of rack or retrieval augmented generation, let's have a look first at the simple rack architecture. Why do we do this in the first place? Well, it's because the language models themselves, they don't have up to date data because they've been trained a while ago. Mikko Lehtimäki: You don't really even know when. So we need to give them access to more recent data, and we need a method for doing that. And the other thing is problems like hallucinations. We found that if you just ask the model a question that is in the training data, you won't get always reliable results. But if you can crown the model's answers with data, you will get more factual results. So this is what can be done with the rack as well. And the final thing is that we just cannot give a book, for example, in one go the language model, because even if theoretically it could read the input in one go, the result quality that you get from the language model is going to suffer if you feed it too much data at once. So this is why we have designed retrieval augmented generation architectures. Mikko Lehtimäki: And if we look at this system on the bottom, you see the typical data ingestion. So the user gives a document, we slice it to small chunks, and we compute a numerical representation with vector embeddings and store those in a vector database. Why a vector database? Because it's really efficient to retrieve vectors from it when we get users query. So that is also embedded and it's used to look up relevant sources from the data that was previously uploaded efficiently directly on the database, and then we can fit the resulting text, the language model, to synthesize an answer. And this is how the RHe works in very basic form. Now you can see that if you have only a single document that you work with, it's nice if the problem set that you want to solve is very constrained, but the more data you can bring to your system, the more workflows you can build on that data. 
So if you have, for example, access to a complete book or many books, it's easy to see you can also generate higher quality content from that data. So this architecture really must be such that it can also make use of those larger amounts of data. Mikko Lehtimäki: Anyway, once you implement this for the first time, it really feels like magic. It tends to work quite nicely, but soon you'll notice that it's not suitable for all kinds of tasks. Like you will see sometimes that, for example, the lists. If you retrieve lists, they may be broken. If you ask questions that are document comparisons, you may not get complete results. If you run summarization tasks without thinking about it anymore, then that will most likely lead to super results. So we'll have to extend the architecture quite a bit to take into account all the use cases that we want to enable with bigger amounts of data that the users upload. And this is what it may look like once you've gone through a few design iterations. Mikko Lehtimäki: So let's see, what steps can we add to our rack stack in order to make it deliver better quality results? If we start from the bottom again, we can see that we try to enhance the quality of the data that we upload by adding steps to the data ingestion pipeline. We can augment the data we store, for example, by using multiple chunking strategies or generating question answer pairs from the user's documents, and then we'll embed those and look them up when the queries come in. At the same time, we can reduce the data we upload, so we want to make sure there are no duplicates. We want to clean low quality things like HTML stuff, and we also may want to add some metadata so that certain data, for example references, can be excluded from the search results if they're not needed to run the tasks that we like to do. We've modeled this as a stream processing pipeline, by the way. So we're using Bytewax, which is another really nice open source framework. Just a tiny advertisement we're going to have a workshop with Bytewax about rack on February 16, so keep your eyes open for that. At the center I have added different databases and different retrieval methods. Mikko Lehtimäki: We may, for example, add keyword based retrieval and metadata filters. The nice thing is that you can do all of this with quattron if you like. So that can be like a one stop shop for your document data. But some users may want to experiment with different databases, like graph databases or NoSQL databases and just ordinary SQL databases as well. They can enable different kinds of use cases really. So it's up to your service which one is really useful for you. If we look more to the left, we have a component called query planner and some query routers. And this really determines the response strategy. Mikko Lehtimäki: So when you get the query from the user, for example, you want to take different steps in order to answer it. For example, you may want to decompose the query to small questions that you answer individually, and each individual question may take a different path. So you may want to do a query based on metadata, for example pages five and six from a document. Or you may want to look up based on keywords full each page or chunk with a specific word. And there's really like a massive amount of choices how this can go. Another example is generating hypothetical documents based on the query and embedding those rather than the query itself. That will in some cases lead to higher quality retrieval results. 
But now all this leads into the right side of the query path. Mikko Lehtimäki: So here we have a re ranker. So if we implement all of this, we end up really retrieving a lot of data. We typically will retrieve more than it makes sense to give to the language model in a single call. So we can add a re ranker step here and it will firstly filter out low quality retrieved content and secondly, it will put the higher quality content on the top of the retrieved documents. And now when you pass this reranked content to the language model, it should be able to pay better attention to the details that actually matter given the query. And this should lead to you better managing the amount of data that you have to handle with your final response generator, LLM. And it should also make the response generator a bit faster because you will be feeding slightly less data in one go. The simplest way to build a re ranker is probably just asking a large language model to re rank or summarize the content that you've retrieved before you feed it to the language model. Mikko Lehtimäki: That's one way to do it. So yeah, that's a lot of complexity and honestly, we're not doing all of this right now with Yokot AI, either. We've tried all of it in different scopes, but really it's a lot of logic to maintain. And to me this just like screams the bitter lesson, because we're building so many steps, so much logic, so many rules into the system, when really all of this is done just because the language model can't be trusted, or it can't be with the current architectures trained reliably, or cannot be trained in real time with the current approaches that we have. So there's one thing in this picture, in my opinion, that is more promising than the others for leveraging data and compute, which should dominate the quality of the solution in the long term. And if we focus only on that, or not only, but if we focus heavily on that part of the process, we should be able to eliminate some complexity elsewhere. So if you're watching the recording, you can pause and think what this component may be. But in my opinion, it is the re ranker at the end. Mikko Lehtimäki: And why is that? Well, of course you could argue that the language model itself is one, but with the current architectures that we have, I think we need the retrieval process. We cannot just leave it out and hope that someday soon we will have a language model that doesn't require us fetching the data for it in such a sophisticated manner. The reranker is a component that can leverage data and compute quite efficiently, and it doesn't require that much manual craftmanship either. It's a stakes in samples and outputs samples, and it plays together really well with efficient vector search that we have available now. Like quatrant being a prime example of that. The vector search is an initial filtering step, and then the re ranker is the secondary step that makes sure that we get the highest possible quality data to the final LLM. And the efficiency of the re ranker really comes from the fact that it doesn't have to be a full blown generative language model so often it is a language model, but it doesn't have to have the ability to generate GPT four level content. It just needs to understand, and in some, maybe even a very fixed way, communicate the importance of the inputs that you give it. Mikko Lehtimäki: So typically the inputs are the user's query and the data that was retrieved. 
Like I mentioned earlier, the easiest way to use a read ranker is probably asking a large language model to rerank your chunks or sentences that you retrieved. But there are also models that have been trained specifically for this, the Colbert model being a primary example of that and we also have to remember that the rerankers have been around for a long time. They've been used in traditional search engines for a good while. We just now require a bit higher quality from them because there's no user checking the search results and deciding which of them is relevant. After the fact that the re ranking has already been run, we need to trust that the output of the re ranker is high quality and can be given to the language model. So you can probably get plenty of ideas from the literature as well. But the easiest way is definitely to use LLM behind a simple API. Mikko Lehtimäki: And that's not to say that you should ignore the rest like the query planner is of course a useful component, and the different methods of retrieval are still relevant for different types of user queries. So yeah, that's how I think the bitter lesson is realizing in these rack architectures I've collected here some methods that are recent or interesting in my opinion. But like I said, there's a lot of existing information from information retrieval research that is probably going to be rediscovered in the near future. So if we summarize the bitter lesson which we have or are experiencing firsthand, states that the methods that leverage data and compute will outperform the handcrafted approaches. And if we focus on the re ranking component in the RHE, we'll be able to eliminate some complexity elsewhere in the process. And it's good to keep in mind that we're of course all the time waiting for advances in the large language model technology. But those advances will very likely benefit the re ranker component as well. So keep that in mind when you find new, interesting research. Mikko Lehtimäki: Cool. That's pretty much my argument finally there. I hope somebody finds it interesting. Demetrios: Very cool. It was bitter like a black cup of coffee, or bitter like dark chocolate. I really like these lessons that you've learned, and I appreciate you sharing them with us. I know the re ranking and just the retrieval evaluation aspect is something on a lot of people's minds right now, and I know a few people at Qdrant are actively thinking about that too, and how to make it easier. So it's cool that you've been through it, you've felt the pain, and you also are able to share what has helped you. And so I appreciate that. In case anyone has any questions, now would be the time to ask them. Otherwise we will take it offline and we'll let everyone reach out to you on LinkedIn, and I can share your LinkedIn profile in the chat to make it real easy for people to reach out if they want to, because this was cool, man. Demetrios: This was very cool, and I appreciate it. Mikko Lehtimäki: Thanks. I hope it's useful to someone. Demetrios: Excellent. Well, if that is all, I guess I've got one question for you. Even though we are kind of running up on time, so it'll be like a lightning question. You mentioned how you showed the really descriptive diagram where you have everything on there, and it's kind of like the dream state or the dream outcome you're going for. What is next? What are you going to create out of that diagram that you don't have yet? 
Mikko Lehtimäki: You want the lightning answer would be really good to put this run on a local hardware completely. I know that's not maybe the algorithmic thing or not necessarily in the scope of Yoko AI, but if we could run this on a physical device in that form, that would be super. Demetrios: I like it. I like it. All right. Well, Mikko, thanks for everything and everyone that is out there. All you vector space astronauts. Have a great day. Morning, night, wherever you are at in the world or in space. And we will see you later. Demetrios: Thanks. Mikko Lehtimäki: See you.
blog/the-bitter-lesson-of-retrieval-in-generative-language-model-workflows-mikko-lehtimäki-vector-space-talks.md
--- title: "Qdrant 1.9.0 - Heighten Your Security With Role-Based Access Control Support" draft: false slug: qdrant-1.9.x short_description: "Granular access control. Optimized shard transfers. Support for byte embeddings." description: "New access control options for RBAC, a much faster shard transfer procedure, and direct support for byte embeddings. " preview_image: /blog/qdrant-1.9.x/social_preview.png social_preview_image: /blog/qdrant-1.9.x/social_preview.png date: 2024-04-24T00:00:00-08:00 author: David Myriel featured: false tags: - vector search - role based access control - byte vectors - binary vectors - quantization - new features --- [Qdrant 1.9.0 is out!](https://github.com/qdrant/qdrant/releases/tag/v1.9.0) This version complements the release of our new managed product [Qdrant Hybrid Cloud](/hybrid-cloud/) with key security features valuable to our enterprise customers, and all those looking to productionize large-scale Generative AI. **Data privacy, system stability and resource optimizations** are always on our mind - so let's see what's new: - **Granular access control:** You can further specify access control levels by using JSON Web Tokens. - **Optimized shard transfers:** The synchronization of shards between nodes is now significantly faster! - **Support for byte embeddings:** Reduce the memory footprint of Qdrant with official `uint8` support. ## New access control options via JSON Web Tokens Historically, our API key supported basic read and write operations. However, recognizing the evolving needs of our user base, especially large organizations, we've implemented additional options for finer control over data access within internal environments. Qdrant now supports [granular access control using JSON Web Tokens (JWT)](/documentation/guides/security/#granular-access-control-with-jwt). JWT will let you easily limit a user's access to the specific data they are permitted to view. Specifically, JWT-based authentication leverages tokens with restricted access to designated data segments, laying the foundation for implementing role-based access control (RBAC) on top of it. **You will be able to define permissions for users and restrict access to sensitive endpoints.** **Dashboard users:** For your convenience, we have added a JWT generation tool the Qdrant Web UI under the 🔑 tab. If you're using the default url, you will find it at `http://localhost:6333/dashboard#/jwt`. ![jwt-web-ui](/blog/qdrant-1.9.x/jwt-web-ui.png) We highly recommend this feature to enterprises using [Qdrant Hybrid Cloud](/hybrid-cloud/), as it is tailored to those who need additional control over company data and user access. RBAC empowers administrators to define roles and assign specific privileges to users based on their roles within the organization. In combination with [Hybrid Cloud's data sovereign architecture](/documentation/hybrid-cloud/), this feature reinforces internal security and efficient collaboration by granting access only to relevant resources. > **Documentation:** [Read the access level breakdown](/documentation/guides/security/#table-of-access) to see which actions are allowed or denied. ## Faster shard transfers on node recovery We now offer a streamlined approach to [data synchronization between shards](/documentation/guides/distributed_deployment/#shard-transfer-method) during node upgrades or recovery processes. Traditional methods used to transfer the entire dataset, but our new `wal_delta` method focuses solely on transmitting the difference between two existing shards. 
By leveraging the Write-Ahead Log (WAL) of both shards, this method selectively transmits missed operations to the target shard, ensuring data consistency. In some cases, where transfers can take hours, this update **reduces transfers down to a few minutes.** The advantages of this approach are twofold: 1. **It is faster** since only the differential data is transmitted, avoiding the transfer of redundant information. 2. It upholds robust **ordering guarantees**, crucial for applications reliant on strict sequencing. For more details on how this works, check out the [shard transfer documentation](/documentation/guides/distributed_deployment/#shard-transfer-method). > **Note:** There are limitations to consider. First, this method only works with existing shards. Second, while the WALs typically retain recent operations, their capacity is finite, potentially impeding the transfer process if exceeded. Nevertheless, for scenarios like rapid node restarts or upgrades, where the WAL content remains manageable, WAL delta transfer is an efficient solution. Overall, this is a great optional optimization measure and serves as the **auto-recovery default for shard transfers**. It's safe to use everywhere because it'll automatically fall back to streaming records transfer if no difference can be resolved. By minimizing data redundancy and expediting transfer processes, it alleviates the strain on the cluster during recovery phases, enabling faster node catch-up. ## Native support for uint8 embeddings Our latest version introduces [support for uint8 embeddings within Qdrant collections](/documentation/concepts/collections/#vector-datatypes). This feature supports embeddings provided by companies in a pre-quantized format. Unlike previous iterations where indirect support was available via [quantization methods](/documentation/guides/quantization/), this update empowers users with direct integration capabilities. In the case of `uint8`, elements within the vector are represented as unsigned 8-bit integers, encompassing values ranging from 0 to 255. Using these embeddings gives you a **4x memory saving and about a 30% speed-up in search**, while keeping 99.99% of the response quality. As opposed to the original quantization method, with this feature you can spare disk usage if you directly implement pre-quantized embeddings. The configuration is simple. To create a collection with uint8 embeddings, simply add the following `datatype`: ```bash PUT /collections/{collection_name} { "vectors": { "size": 1024, "distance": "Dot", "datatype": "uint8" } } ``` > **Note:** When using Quantization to optimize vector search, you can use this feature to `rescore` binary vectors against new byte vectors. With double the speedup, you will be able to achieve a better result than if you rescored with float vectors. With each byte vector quantized at the binary level, the result will deliver unparalleled efficiency and savings. To learn more about this optimization method, read our [Quantization docs](/documentation/guides/quantization/). 
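As a quick, hedged sketch of the rescoring note above (shown here with the Python client; the collection name, query vector, and oversampling factor are placeholders), a search request can ask Qdrant to oversample quantized candidates and rescore them against the stored byte vectors:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Oversample quantized candidates, then rescore them against the
# stored (uint8) vectors before returning the final top hits.
hits = client.search(
    collection_name="my_collection",
    query_vector=[0.2, 0.1, 0.9, 0.7],
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            rescore=True,
            oversampling=2.0,
        )
    ),
    limit=10,
)
```

Because the stored originals are single-byte integers, this rescoring pass stays cheap while recovering most of the precision lost to binary quantization.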
## Minor improvements and new features - Greatly improve write performance while creating a snapshot of a large collection - [#3420](https://github.com/qdrant/qdrant/pull/3420), [#3938](https://github.com/qdrant/qdrant/pull/3938) - Report pending optimizations awaiting an update operation in collection info - [#3962](https://github.com/qdrant/qdrant/pull/3962), [#3971](https://github.com/qdrant/qdrant/pull/3971) - Improve `indexed_only` reliability on proxy shards - [#3998](https://github.com/qdrant/qdrant/pull/3998) - Make shard diff transfer fall back to streaming records - [#3798](https://github.com/qdrant/qdrant/pull/3798) - Cancel shard transfers when the shard is deleted - [#3784](https://github.com/qdrant/qdrant/pull/3784) - Improve sparse vectors search performance by another 7% - [#4037](https://github.com/qdrant/qdrant/pull/4037) - Build Qdrant with a single codegen unit to allow better compile-time optimizations - [#3982](https://github.com/qdrant/qdrant/pull/3982) - Remove `vectors_count` from collection info because it is unreliable. **Check if you use this field before upgrading** - [#4052](https://github.com/qdrant/qdrant/pull/4052) - Remove shard transfer method field from abort shard transfer operation - [#3803](https://github.com/qdrant/qdrant/pull/3803)
blog/qdrant-1.9.x.md
--- title: "Community Highlights #1" draft: false slug: community-highlights-1 # Change this slug to your page slug if needed short_description: Celebrating top contributions and achievements in vector search, featuring standout projects, articles, and the Creator of the Month, Pavan Kumar. # Change this description: Celebrating top contributions and achievements in vector search, featuring standout projects, articles, and the Creator of the Month, Pavan Kumar! preview_image: /blog/community-highlights-1/preview-image.png social_preview_image: /blog/community-highlights-1/preview-image.png date: 2024-06-20T11:57:37-03:00 author: Sabrina Aquino featured: false tags: - news - vector search - qdrant - ambassador program - community - artificial intelligence --- Welcome to the very first edition of Community Highlights, where we celebrate the most impactful contributions and achievements of our vector search community! 🎉 ## Content Highlights 🚀 Here are some standout projects and articles from our community this past month. If you're looking to learn more about vector search or build some great projects, we recommend you to check these guides: * **[Implementing Advanced Agentic Vector Search](https://towardsdev.com/implementing-advanced-agentic-vector-search-a-comprehensive-guide-to-crewai-and-qdrant-ca214ca4d039): A Comprehensive Guide to CrewAI and Qdrant by [Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/)** * **Build Your Own RAG Using [Unstructured, Llama3 via Groq, Qdrant & LangChain](https://www.youtube.com/watch?v=m_3q3XnLlTI) by [Sudarshan Koirala](https://www.linkedin.com/in/sudarshan-koirala/)** * **Qdrant filtering and [self-querying retriever](https://www.youtube.com/watch?v=iaXFggqqGD0) retrieval with LangChain by [Daniel Romero](https://www.linkedin.com/in/infoslack/)** * **RAG Evaluation with [Arize Phoenix](https://superlinked.com/vectorhub/articles/retrieval-augmented-generation-eval-qdrant-arize) by [Atita Arora](https://www.linkedin.com/in/atitaarora/)** * **Building a Serverless Application with [AWS Lambda and Qdrant](https://medium.com/@benitomartin/building-a-serverless-application-with-aws-lambda-and-qdrant-for-semantic-search-ddb7646d4c2f) for Semantic Search by [Benito Martin](https://www.linkedin.com/in/benitomzh/)** * **Production ready Secure and [Powerful AI Implementations with Azure Services](https://towardsdev.com/production-ready-secure-and-powerful-ai-implementations-with-azure-services-671b68631212) by [Pavan Kumar](https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/)** * **Building [Agentic RAG with Rust, OpenAI & Qdrant](https://medium.com/@joshmo_dev/building-agentic-rag-with-rust-openai-qdrant-d3a0bb85a267) by [Joshua Mo](https://www.linkedin.com/in/joshua-mo-4146aa220/)** * **Qdrant [Hybrid Search](https://medium.com/@nickprock/qdrant-hybrid-search-under-the-hood-using-haystack-355841225ac6) under the hood using Haystack by [Nicola Procopio](https://www.linkedin.com/in/nicolaprocopio/)** * **[Llama 3 Powered Voice Assistant](https://medium.com/@datadrifters/llama-3-powered-voice-assistant-integrating-local-rag-with-qdrant-whisper-and-langchain-b4d075b00ac5): Integrating Local RAG with Qdrant, Whisper, and LangChain by [Datadrifters](https://medium.com/@datadrifters)** * **[Distributed deployment](https://medium.com/@vardhanam.daga/distributed-deployment-of-qdrant-cluster-with-sharding-replicas-e7923d483ebc) of Qdrant cluster with sharding & replicas by [Vardhanam 
Daga](https://www.linkedin.com/in/vardhanam-daga/overlay/about-this-profile/)** * **Private [Healthcare AI Assistant](https://medium.com/aimpact-all-things-ai/building-private-healthcare-ai-assistant-for-clinics-using-qdrant-hybrid-cloud-jwt-rbac-dspy-and-089a772e08ae) using Qdrant Hybrid Cloud, DSPy, and Groq by [Sachin Khandewal](https://www.linkedin.com/in/sachink1729/)** ## Creator of the Month 🌟 <img src="/blog/community-highlights-1/creator-of-the-month-pavan.png" alt="Picture of Pavan Kumar with over 6 content contributions for the Creator of the Month" style="width: 70%;" /> Congratulations to Pavan Kumar for being awarded **Creator of the Month!** Check out what were Pavan's most valuable contributions to the Qdrant vector search community this past month: * **[Implementing Advanced Agentic Vector Search](https://towardsdev.com/implementing-advanced-agentic-vector-search-a-comprehensive-guide-to-crewai-and-qdrant-ca214ca4d039): A Comprehensive Guide to CrewAI and Qdrant** * **Production ready Secure and [Powerful AI Implementations with Azure Services](https://towardsdev.com/production-ready-secure-and-powerful-ai-implementations-with-azure-services-671b68631212)** * **Building Neural Search Pipelines with Azure and Qdrant: A Step-by-Step Guide [Part-1](https://towardsdev.com/building-neural-search-pipelines-with-azure-and-qdrant-a-step-by-step-guide-part-1-40c191084258) and [Part-2](https://towardsdev.com/building-neural-search-pipelines-with-azure-and-qdrant-a-step-by-step-guide-part-2-fba287b49574)** * **Building a RAG System with [Ollama, Qdrant and Raspberry Pi](https://blog.gopenai.com/harnessing-ai-at-the-edge-building-a-rag-system-with-ollama-qdrant-and-raspberry-pi-45ac3212cf75)** * **Building a [Multi-Document ReAct Agent](https://blog.stackademic.com/building-a-multi-document-react-agent-for-financial-analysis-using-llamaindex-and-qdrant-72a535730ac3) for Financial Analysis using LlamaIndex and Qdrant** Pavan is a seasoned technology expert with 14 years of extensive experience, passionate about sharing his knowledge through technical blogging, engaging in technical meetups, and staying active with cycling! Thank you, Pavan, for your outstanding contributions and commitment to the community! ## Most Active Members 🏆 <img src="/blog/community-highlights-1/most-active-members.png" alt="Picture of the 3 most active members of our vector search community" style="width: 70%;" /> We're excited to recognize our most active community members, who have been a constant support to vector search builders, and sharing their knowledge and making our community more engaging: * 🥇 **1st Place: Robert Caulk** * 🥈 **2nd Place: Nicola Procopio** * 🥉 **3rd Place: Joshua Mo** Thank you all for your dedication and for making the Qdrant vector search community such a dynamic and valuable place! Stay tuned for more highlights and updates in the next edition of Community Highlights! 🚀 **Join us for Office Hours! 🎙️** Don't miss our next [Office Hours hangout on Discord](https://discord.gg/s9YxGeQK?event=1252726857753821236), happening next week on June 27th. This is a great opportunity to introduce yourself to the community, learn more about vector search, and engage with the people behind this awesome content! See you there 👋
blog/community-highlights-1.md
--- title: "QSoC 2024: Announcing Our Interns!" draft: false slug: qsoc24-interns-announcement # Change this slug to your page slug if needed short_description: We are pleased to announce the selection of interns for the inaugural Qdrant Summer of Code (QSoC) program. # Change this description: We are pleased to announce the selection of interns for the inaugural Qdrant Summer of Code (QSoC) program. # Change this preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Change this social_preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Optional image used for link previews title_preview_image: /blog/qsoc24-interns-announcement/qsoc.jpg # Optional image used for blog post title # small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts date: 2024-05-08T16:44:22-03:00 author: Sabrina Aquino # Change this featured: false # if true, this post will be featured on the blog page tags: # Change this, related by tags posts will be shown on the blog page - QSoC - Qdrant Summer of Code - Google Summer of Code - vector search --- We are excited to announce the interns selected for the inaugural Qdrant Summer of Code (QSoC) program! After receiving many impressive applications, we have chosen two talented individuals to work on the following projects: **[Jishan Bhattacharya](https://www.linkedin.com/in/j16n/): WASM-based Dimension Reduction Visualization** Jishan will be implementing a dimension reduction algorithm in Rust, compiling it to WebAssembly (WASM), and integrating it with the Qdrant Web UI. This project aims to provide a more efficient and smoother visualization experience, enabling the handling of more data points and higher dimensions efficiently. **[Celine Hoang](https://www.linkedin.com/in/celine-h-hoang/): ONNX Cross Encoders in Python** Celine Hoang will focus on porting advanced ranking models—specifically Sentence Transformers, ColBERT, and BGE—to the ONNX (Open Neural Network Exchange) format. This project will enhance Qdrant's model support, making it more versatile and efficient in handling complex ranking tasks that are critical for applications such as recommendation engines and search functionalities. We look forward to working with Jishan and Celine over the coming months and are excited to see their contributions to the Qdrant project. Stay tuned for more updates on the QSoC program and the progress of these projects!
blog/qsoc24-interns-announcement.md
--- title: "DSPy vs LangChain: A Comprehensive Framework Comparison" #required short_description: DSPy and LangChain are powerful frameworks for building AI applications leveraging LLMs and vector search technology. description: We dive deep into the capabilities of DSPy and LangChain and discuss scenarios where each of these frameworks shine. #required social_preview_image: /blog/dspy-vs-langchain/dspy-langchain.png # This image will be used in preview_image: /blog/dspy-vs-langchain/dspy-langchain.png author: Qdrant Team # Author of the article. Required. author_link: https://qdrant.tech/ # Link to the author's page. Required. date: 2024-02-23T08:00:00-03:00 # Date of the article. Required. draft: false # If true, the article will not be published keywords: # Keywords for SEO - DSPy - LangChain - AI frameworks - LLMs - vector search - RAG applications - chatbots --- # The Evolving Landscape of AI Frameworks As Large Language Models (LLMs) and vector stores have become steadily more powerful, a new generation of frameworks has appeared which can streamline the development of AI applications by leveraging LLMs and vector search technology. These frameworks simplify the process of building everything from Retrieval Augmented Generation (RAG) applications to complex chatbots with advanced conversational abilities, and even sophisticated reasoning-driven AI applications. The most well-known of these frameworks is possibly [LangChain](https://github.com/langchain-ai/langchain). [Launched in October 2022](https://en.wikipedia.org/wiki/LangChain) as an open-source project by Harrison Chase, the project quickly gained popularity, attracting contributions from hundreds of developers on GitHub. LangChain excels in its broad support for documents, data sources, and APIs. This, along with seamless integration with vector stores like Qdrant and the ability to chain multiple LLMs, has allowed developers to build complex AI applications without reinventing the wheel. However, despite the many capabilities unlocked by frameworks like LangChain, developers still needed expertise in [prompt engineering](https://en.wikipedia.org/wiki/Prompt_engineering) to craft optimal LLM prompts. Additionally, optimizing these prompts and adapting them to build multi-stage reasoning AI remained challenging with the existing frameworks. In fact, as you start building production-grade AI applications, it becomes clear that a single LLM call isn’t enough to unlock the full capabilities of LLMs. Instead, you need to create a workflow where the model interacts with external tools like web browsers, fetches relevant snippets from documents, and compiles the results into a multi-stage reasoning pipeline. This involves building an architecture that combines and reasons on intermediate outputs, with LLM prompts that adapt according to the task at hand, before producing a final output. A manual approach to prompt engineering quickly falls short in such scenarios. In October 2023, researchers working in Stanford NLP released a library, [DSPy](https://github.com/stanfordnlp/dspy), which entirely automates the process of optimizing prompts and weights for large language models (LLMs), eliminating the need for manual prompting or prompt engineering. One of DSPy's key features is its ability to automatically tune LLM prompts, an approach that is especially powerful when your application needs to call the LLM several times within a pipeline. 
So, when building an LLM and vector store-backed AI application, which of these frameworks should you choose? In this article, we dive deep into the capabilities of each and discuss scenarios where each of these frameworks shines. Let’s get started!

## **LangChain: Features, Performance, and Use Cases**

LangChain, as discussed above, is an open-source orchestration framework available in both [Python](https://python.langchain.com/v0.2/docs/introduction/) and [JavaScript](https://js.langchain.com/v0.2/docs/introduction/), designed to simplify the development of AI applications leveraging LLMs. For developers working with one or multiple LLMs, it acts as a universal interface for these AI models. LangChain integrates with various external data sources, supports a wide range of data types and stores, streamlines the handling of vector embeddings and retrieval through similarity search, and simplifies the integration of AI applications with existing software workflows.

At a high level, LangChain abstracts the common steps required to work with language models into modular components, which serve as the building blocks of AI applications. These components can be "chained" together to create complex applications. Thanks to these abstractions, LangChain allows for rapid experimentation and prototyping of AI applications in a short timeframe.

LangChain breaks down the functionality required to build AI applications into three key sections:

- **Model I/O**: Building blocks to interface with the LLM.
- **Retrieval**: Building blocks to streamline the retrieval of data used by the LLM for generation (such as the retrieval step in RAG applications).
- **Composition**: Components to combine external APIs, services, and other LangChain primitives.

These components are pulled together into ‘chains’ that are constructed using [LangChain Expression Language](https://python.langchain.com/v0.1/docs/expression_language/) (LCEL). We’ll first look at the various building blocks, and then see how they can be combined using LCEL.

### **LLM Model I/O**

LangChain offers broad compatibility with various LLMs, and its [LLM](https://python.langchain.com/v0.1/docs/modules/model_io/llms/) class provides a standard interface to these models. Leveraging proprietary models offered by platforms like OpenAI, Mistral, Cohere, or Gemini is straightforward and requires just an API key from the respective platform. For instance, to use OpenAI models, you simply need to do the following:

```python
from langchain_openai import OpenAI

llm = OpenAI(api_key="...")
llm.invoke("Where is Paris?")
```

Open-source models like Meta AI’s Llama variants (such as Llama3-8B) or Mistral AI’s open models (like Mistral-7B) can be easily integrated using their Hugging Face endpoints or local LLM deployment tools like Ollama, vLLM, or LM Studio. You can also use the [CustomLLM](https://python.langchain.com/v0.1/docs/modules/model_io/llms/custom_llm/) class to build custom LLM wrappers. Here’s how simple it is to use LangChain with Llama3-8B, using [Ollama](https://ollama.com/):

```python
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")
llm.invoke("Where is Berlin?")
```

LangChain also offers output parsers to structure the LLM output in a format that the application may need, such as structured data types like JSON, XML, CSV, and others. To understand LangChain’s interface with LLMs in detail, read the documentation [here](https://python.langchain.com/v0.1/docs/modules/model_io/).
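As a quick illustration of those output parsers, here is a minimal sketch. It assumes `langchain-openai` is installed and an OpenAI API key is available; the prompt wording and the choice of `CommaSeparatedListOutputParser` are our own for this example rather than anything prescribed above:

```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import CommaSeparatedListOutputParser

llm = ChatOpenAI(api_key="...")  # placeholder key, as in the snippets above
parser = CommaSeparatedListOutputParser()

# Include the parser's format instructions so the model knows how to shape its reply.
instructions = parser.get_format_instructions()
reply = llm.invoke(f"List five European capitals. {instructions}")

# Convert the raw text reply into a Python list of strings.
capitals = parser.parse(reply.content)
print(capitals)  # e.g. ['Paris', 'Berlin', 'Madrid', 'Rome', 'Lisbon']
```

The JSON, XML, and other list parsers follow the same pattern: the parser supplies format instructions for the prompt and then converts the model’s raw text into the corresponding structured type.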
### **Retrieval** Most enterprise AI applications are built by augmenting the LLM context using data specific to the application’s use case. To accomplish this, the relevant data needs to be first retrieved, typically using vector similarity search, and then passed to the LLM context at the generation step. This architecture, known as [Retrieval Augmented Generation](/articles/what-is-rag-in-ai/) (RAG), can be used to build a wide range of AI applications. While the retrieval process sounds simple, it involves a number of complex steps: loading data from a source, splitting it into chunks, converting it into vectors or vector embeddings, storing it in a vector store, and then retrieving results based on a query before the generation step. LangChain offers a number of building blocks to make this retrieval process simpler. - **Document Loaders**: LangChain offers over 100 different document loaders, including integrations with providers like Unstructured or Airbyte. It also supports loading various types of documents, such as PDFs, HTML, CSV, and code, from a range of locations like S3. - **Splitting**: During the retrieval step, you typically need to retrieve only the relevant section of a document. To do this, you need to split a large document into smaller chunks. LangChain offers various document transformers that make it easy to split, combine, filter, or manipulate documents. - **Text Embeddings**: A key aspect of the retrieval step is converting document chunks into vectors, which are high-dimensional numerical representations that capture the semantic meaning of the text. LangChain offers integrations with over 25 embedding providers and methods, such as [FastEmbed](https://github.com/qdrant/fastembed). - **Vector Store Integration**: LangChain integrates with over 50 vector stores, including specialized ones like [Qdrant](/documentation/frameworks/langchain/), and exposes a standard interface. - **Retrievers**: LangChain offers various retrieval algorithms and allows you to use third-party retrieval algorithms or create custom retrievers. - **Indexing**: LangChain also offers an indexing API that keeps data from any data source in sync with the vector store, helping to reduce complexities around managing unchanged content or avoiding duplicate content. ### **Composition** Finally, LangChain also offers building blocks that help combine external APIs, services, and LangChain primitives. For instance, it provides tools to fetch data from Wikipedia or search using Google Lens. The list of tools it offers is [extremely varied](https://python.langchain.com/v0.1/docs/integrations/tools/). LangChain also offers ways to build agents that use language models to decide on the sequence of actions to take. ### **LCEL** The primary method of building an application in LangChain is through the use of [LCEL](https://python.langchain.com/v0.1/docs/expression_language/), the LangChain Expression Language. It is a declarative syntax designed to simplify the composition of chains within the LangChain framework. It provides a minimalist code layer that enables the rapid development of chains, leveraging advanced features such as streaming, asynchronous execution, and parallel processing. LCEL is particularly useful for building chains that involve multiple language model calls, data transformations, and the integration of outputs from language models into downstream applications. 
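To make the retrieval building blocks and LCEL concrete, below is a small sketch of a RAG-style chain that wires a Qdrant-backed retriever into an LCEL pipeline. Treat it as an illustration under our own assumptions rather than code from the article: it presumes `langchain-community`, `langchain-openai`, `qdrant-client`, and `fastembed` are installed, and the in-memory Qdrant instance, the toy documents, and the placeholder OpenAI key are choices made purely for this example.

```python
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.vectorstores import Qdrant
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Index a few toy documents in a throwaway, in-memory Qdrant collection.
docs = [
    "Qdrant is an open-source vector database written in Rust.",
    "LCEL composes LangChain components with the | operator.",
    "FastEmbed generates lightweight text embeddings.",
]
vectorstore = Qdrant.from_texts(
    docs, FastEmbedEmbeddings(), location=":memory:", collection_name="demo"
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 2})

def format_docs(retrieved):
    # Join the retrieved Document objects into a single context string.
    return "\n".join(doc.page_content for doc in retrieved)

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(api_key="...")  # placeholder key

# LCEL: run retrieval and pass the question through in parallel,
# then pipe the filled prompt into the LLM and parse the reply to a string.
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
print(chain.invoke("What is Qdrant?"))
```

The dictionary at the head of the chain is the LCEL idiom for feeding multiple inputs into a prompt, and the same composition also supports streaming, asynchronous execution, and batching without changes.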
### **Some Use Cases of LangChain** Given the flexibility that LangChain offers, a wide range of applications can be built using the framework. Here are some examples: **RAG Applications**: LangChain provides all the essential building blocks needed to build Retrieval Augmented Generation (RAG) applications. It integrates with vector stores and LLMs, streamlining the entire process of loading, chunking, and retrieving relevant sections of a document in a few lines of code. **Chatbots**: LangChain offers a suite of components that streamline the process of building conversational chatbots. These include chat models, which are specifically designed for message-based interactions and provide a conversational tone suitable for chatbots. **Extracting Structured Outputs**: LangChain assists in extracting structured output from data using various tools and methods. It supports multiple extraction approaches, including tool/function calling mode, JSON mode, and prompting-based extraction. **Agents**: LangChain simplifies the process of building agents by providing building blocks and integration with LLMs, enabling developers to construct complex, multi-step workflows. These agents can interact with external data sources and tools, and generate dynamic and context-aware responses for various applications. If LangChain offers such a wide range of integrations and the primary building blocks needed to build AI applications, *why do we need another framework?* As Omar Khattab, PhD, Stanford and researcher at Stanford NLP, said when introducing DSPy in his [talk](https://www.youtube.com/watch?v=Dt3H2ninoeY) at ‘Scale By the Bay’ in November 2023: “We can build good reliable systems with these new artifacts that are language models (LMs), but importantly, this is conditioned on us *adapting* them as well as *stacking* them well”. ## **DSPy: Features, Performance, and Use Cases** When building AI systems, developers need to break down the task into multiple reasoning steps, adapt language model (LM) prompts for each step until they get the right results, and then ensure that the steps work together to achieve the desired outcome. Complex multihop pipelines, where multiple LLM calls are stacked, are messy. They involve string-based prompting tricks or prompt hacks at each step, and getting the pipeline to work is even trickier. Additionally, the manual prompting approach is highly unscalable, as any change in the underlying language model breaks the prompts and the pipeline. LMs are highly sensitive to prompts and slight changes in wording, context, or phrasing can significantly impact the model's output. Due to this, despite the functionality provided by frameworks like LangChain, developers often have to spend a lot of time engineering prompts to get the right results from LLMs. How do you build a system that’s less brittle and more predictable? Enter DSPy! [DSPy](https://github.com/stanfordnlp/dspy) is built on the paradigm that language models (LMs) should be programmed rather than prompted. The framework is designed for algorithmically optimizing and adapting LM prompts and weights, and focuses on replacing prompting techniques with a programming-centric approach. DSPy treats the LM like a device and abstracts out the underlying complexities of prompting. To achieve this, DSPy introduces three simple building blocks: ### **Signatures** [Signatures](https://dspy-docs.vercel.app/docs/building-blocks/signatures) replace handwritten prompts and are written in natural language. 
They are simply declarations or specs of the behavior that you expect from the language model. Some examples are: - question -> answer - long_document -> summary - context, question -> rationale, response Rather than manually crafting complex prompts or engaging in extensive fine-tuning of LLMs, signatures allow for the automatic generation of optimized prompts. DSPy Signatures can be specified in two ways: 1. Inline Signatures: Simple tasks can be defined in a concise format, like "question -> answer" for question-answering or "document -> summary" for summarization. 2. Class-Based Signatures: More complex tasks might require class-based signatures, which can include additional instructions or descriptions about the inputs and outputs. For example, a class for emotion classification might clearly specify the range of emotions that can be classified. ### **Modules** Modules take signatures as input, and automatically generate high-quality prompts. Inspired heavily from PyTorch, DSPy [modules](https://dspy-docs.vercel.app/docs/building-blocks/modules) eliminate the need for crafting prompts manually. The framework supports advanced modules like [dspy.ChainOfThought](https://dspy-docs.vercel.app/api/modules/ChainOfThought), which adds step-by-step rationalization before producing an output. The output not only provides answers but also rationales. Other modules include [dspy.ProgramOfThought](https://dspy-docs.vercel.app/api/modules/ProgramOfThought), which outputs code whose execution results dictate the response, and [dspy.ReAct](https://dspy-docs.vercel.app/api/modules/ReAct), an agent that uses tools to implement signatures. DSPy also offers modules like [dspy.MultiChainComparison](https://dspy-docs.vercel.app/api/modules/MultiChainComparison), which can compare multiple outputs from dspy.ChainOfThought in order to produce a final prediction. There are also utility modules like [dspy.majority](https://dspy-docs.vercel.app/docs/building-blocks/modules#what-other-dspy-modules-are-there-how-can-i-use-them) for aggregating responses through voting. Modules can be composed into larger programs, and you can compose multiple modules into bigger modules. This allows you to create complex, behavior-rich applications using language models. ### **Optimizers** [Optimizers](https://dspy-docs.vercel.app/docs/building-blocks/optimizers) take a set of modules that have been connected to create a pipeline, compile them into auto-optimized prompts, and maximize an outcome metric. Essentially, optimizers are designed to generate, test, and refine prompts, and ensure that the final prompt is highly optimized for the specific dataset and task at hand. Using optimizers in the DSPy framework significantly simplifies the process of developing and refining LM applications by automating the prompt engineering process. ### **Building AI Applications with DSPy** A typical DSPy program requires the developer to follow the following 8 steps: 1. **Defining the Task**: Identify the specific problem you want to solve, including the input and output formats. 2. **Defining the Pipeline**: Plan the sequence of operations needed to solve the task. Then craft the signatures and the modules. 3. **Testing with Examples**: Run the pipeline with a few examples to understand the initial performance. This helps in identifying immediate issues with the program and areas for improvement. 4. **Defining Your Data**: Prepare and structure your training and validation datasets. 
This is needed by the optimizer for training the model and evaluating its performance accurately.
5. **Defining Your Metric**: Choose metrics that will measure the success of your model. These metrics help the optimizer evaluate how well the model is performing.
6. **Collecting Zero-Shot Evaluations**: Run initial evaluations without prior training to establish a baseline. This helps in understanding the model’s capabilities and limitations out of the box.
7. **Compiling with a DSPy Optimizer**: Given the data and metric, you can now optimize the program. DSPy offers a variety of optimizers designed for different purposes. These optimizers can generate step-by-step examples, craft detailed instructions, and/or update language model prompts and weights as needed.
8. **Iterating**: Continuously refine each aspect of your task, from the pipeline and data to the metrics and evaluations. Iteration helps in gradually improving the model’s performance and adapting to new requirements.

{{< figure src=/blog/dspy-vs-langchain/process.jpg caption="Process" >}}

**Language Model Setup**

Setting up the LM in DSPy is easy.

```python
# pip install dspy
import dspy

llm = dspy.OpenAI(model='gpt-3.5-turbo-1106', max_tokens=300)
dspy.configure(lm=llm)

# Let's test this. First define a module (ChainOfThought) and assign it a signature (return an answer, given a question).
qa = dspy.ChainOfThought('question -> answer')

# Then, run with the default LM configured.
response = qa(question="Where is Paris?")
print(response.answer)
```

You are not restricted to using one LLM in your program; you can use [multiple](https://dspy-docs.vercel.app/docs/building-blocks/language_models#using-multiple-lms-at-once). DSPy can be used with managed models such as OpenAI, Cohere, Anyscale, Together, or PremAI, as well as with local LLM deployments through vLLM, Ollama, or a TGI server. All LLM calls are cached by default.

**Vector Store Integration (Retrieval Model)**

You can easily set up the [Qdrant](/documentation/frameworks/dspy/) vector store to act as the retrieval model. To do so, follow these steps:

```python
# pip install dspy-ai[qdrant]
import dspy
from dspy.retrieve.qdrant_rm import QdrantRM
from qdrant_client import QdrantClient

llm = dspy.OpenAI(model="gpt-3.5-turbo")

qdrant_client = QdrantClient()
qdrant_rm = QdrantRM("collection-name", qdrant_client, k=3)

dspy.settings.configure(lm=llm, rm=qdrant_rm)
```

The above code configures DSPy to use a local Qdrant instance, with the collection named collection-name as the default retrieval model. You can now build a RAG module in the following way:

```python
class RAG(dspy.Module):
    def __init__(self, num_passages=5):
        super().__init__()
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought('context, question -> answer')  # using inline signature

    def forward(self, question):
        context = self.retrieve(question).passages
        prediction = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=prediction.answer)
```

Now you can use the RAG module like any Python module.

**Optimizing the Pipeline**

In this step, DSPy requires you to create a training dataset and a metric function, which can help validate the output of your program. Using this, DSPy tunes the parameters (i.e., the prompts and/or the LM weights) to maximize the accuracy of the RAG pipeline. Using DSPy optimizers involves the following steps: 1. Set up your DSPy program with the desired signatures and modules. 2. 
Create a training and validation dataset, with example input and output that you expect from your DSPy program. 3. Choose an appropriate optimizer such as BootstrapFewShotWithRandomSearch, MIPRO, or BootstrapFinetune. 4. Create a metric function that evaluates the performance of the DSPy program. You can evaluate based on accuracy or quality of responses, or on a metric that’s relevant to your program. 5. Run the optimizer with the DSPy program, metric function, and training inputs. DSPy will compile the program and automatically adjust parameters and improve performance. 6. Use the compiled program to perform the task. Iterate and adapt if required. To learn more about optimizing DSPy programs, read [this](https://dspy-docs.vercel.app/docs/building-blocks/optimizers). DSPy is heavily influenced by PyTorch, and replaces complex prompting with reusable modules for common tasks. Instead of crafting specific prompts, you write code that DSPy automatically translates for the LLM. This, along with built-in optimizers, makes working with LLMs more systematic and efficient. ### **Use Cases of DSPy** As we saw above, DSPy can be used to create fairly complex applications which require stacking multiple LM calls without the need for prompt engineering. Even though the framework is comparatively new - it started gaining popularity since November 2023 when it was first introduced - it has created a promising new direction for LLM-based applications. Here are some of the possible uses of DSPy: **Automating Prompt Engineering**: DSPy automates the process of creating prompts for LLMs, and allows developers to focus on the core logic of their application. This is powerful as manual prompt engineering makes AI applications highly unscalable and brittle. **Building Chatbots**: The modular design of DSPy makes it well-suited for creating chatbots with improved response quality and faster development cycles. DSPy's automatic prompting and optimizers can help ensure chatbots generate consistent and informative responses across different conversation contexts. **Complex Information Retrieval Systems**: DSPy programs can be easily integrated with vector stores, and used to build multi-step information retrieval systems with stacked calls to the LLM. This can be used to build highly sophisticated retrieval systems. For example, DSPy can be used to develop custom search engines that understand complex user queries and retrieve the most relevant information from vector stores. **Improving LLM Pipelines**: One of the best uses of DSPy is to optimize LLM pipelines. DSPy's modular design greatly simplifies the integration of LLMs into existing workflows. Additionally, DSPy's built-in optimizers can help fine-tune LLM pipelines based on desired metrics. **Multi-Hop Question-Answering**: Multi-hop question-answering involves answering complex questions that require reasoning over multiple pieces of information, which are often scattered across different documents or sections of text. With DSPy, users can leverage its automated prompt engineering capabilities to develop prompts that effectively guide the model on how to piece together information from various sources. ## **Comparative Analysis: DSPy vs LangChain** DSPy and LangChain are both powerful frameworks for building AI applications, leveraging large language models (LLMs) and vector search technology. 
Below is a comparative analysis of their key features, performance, and use cases: | Feature | LangChain | DSPy | | --- | --- | --- | | Core Focus | Focus on providing a large number of building blocks to simplify the development of applications that use LLMs in conjunction with user-specified data sources. | Focus on automating and modularizing LLM interactions, eliminating manual prompt engineering and improving systematic reliability. | | Approach | Utilizes modular components and chains that can be linked together using the LangChain Expression Language (LCEL). | Streamlines LLM interaction by prioritizing programming instead of prompting, and automating prompt refinement and weight tuning. | | Complex Pipelines | Facilitates the creation of chains using LCEL, supporting asynchronous execution and integration with various data sources and APIs. | Simplifies multi-stage reasoning pipelines using modules and optimizers, and ensures scalability through less manual intervention. | | Optimization | Relies on user expertise for prompt engineering and chaining of multiple LLM calls. | Includes built-in optimizers that automatically tune prompts and weights, and helps bring efficiency and effectiveness in LLM pipelines. | | Community and Support | Large open-source community with extensive documentation and examples. | Emerging framework with growing community support, and bringing a paradigm-shift in LLM prompting. | ### **LangChain** Strengths: 1. Data Sources and APIs: LangChain supports a wide variety of data sources and APIs, and allows seamless integration with different types of data. This makes it highly versatile for various AI applications​. 2. LangChain provides modular components that can be chained together and allows you to create complex AI workflows. LangChain Expression Language (LCEL) lets you use declarative syntax and makes it easier to build and manage workflows. 3. Since LangChain is an older framework, it has extensive documentation and thousands of examples that developers can take inspiration from. Weaknesses: 1. For projects involving complex, multi-stage reasoning tasks, LangChain requires significant manual prompt engineering. This can be time-consuming and prone to errors​. 2. Scalability Issues: Managing and scaling workflows that require multiple LLM calls can be pretty challenging. 3. Developers need sound understanding of prompt engineering in order to build applications that require multiple calls to the LLM. ### **DSPy** Strengths: 1. DSPy automates the process of prompt generation and optimization, and significantly reduces the need for manual prompt engineering. This makes working with LLMs easier and helps build scalable AI workflows​. 2. The framework includes built-in optimizers like BootstrapFewShot and MIPRO, which automatically refine prompts and adapt them to specific datasets​. 3. DSPy uses general-purpose modules and optimizers to simplify the complexities of prompt engineering. This can help you create complex multi-step reasoning applications easily, without worrying about the intricacies of dealing with LLMs. 4. DSPy supports various LLMs, including the flexibility of using multiple LLMs in the same program. 5. By focusing on programming rather than prompting, DSPy ensures higher reliability and performance for AI applications, particularly those that require complex multi-stage reasoning​​. Weaknesses: 1. As a newer framework, DSPy has a smaller community compared to LangChain. 
This means you will have limited availability of resources, examples, and community support​. 2. Although DSPy offers tutorials and guides, its documentation is less extensive than LangChain’s, which can pose challenges when you start​. 3. When starting with DSPy, you may feel limited to the paradigms and modules it provides. ​ ## **Selecting the Ideal Framework for Your AI Project** When deciding between DSPy and LangChain for your AI project, you should consider the problem statement and choose the framework that best aligns with your project goals. Here are some guidelines: ### **Project Type** **LangChain**: LangChain is ideal for projects that require extensive integration with multiple data sources and APIs, especially projects that benefit from the wide range of document loaders, vector stores, and retrieval algorithms that it supports​. **DSPy**: DSPy is best suited for projects that involve complex multi-stage reasoning pipelines or those that may eventually need stacked LLM calls. DSPy’s systematic approach to prompt engineering and its ability to optimize LLM interactions can help create highly reliable AI applications​. ### **Technical Expertise** **LangChain**: As the complexity of the application grows, LangChain requires a good understanding of prompt engineering and expertise in chaining multiple LLM calls. **DSPy**: Since DSPy is designed to abstract away the complexities of prompt engineering, it makes it easier for developers to focus on high-level logic rather than low-level prompt crafting. ### **Community and Support** **LangChain**: LangChain boasts a large and active community with extensive documentation, examples, and active contributions, and you will find it easier to get going. **DSPy**: Although newer and with a smaller community, DSPy is growing rapidly and offers tutorials and guides for some of the key use cases. DSPy may be more challenging to get started with, but its architecture makes it highly scalable. ### **Use Case Scenarios** **Retrieval Augmented Generation (RAG) Applications** **LangChain**: Excellent for building simple RAG applications due to its robust support for vector stores, document loaders, and retrieval algorithms. **DSPy**: Suitable for RAG applications requiring high reliability and automated prompt optimization, ensuring consistent performance across complex retrieval tasks. **Chatbots and Conversational AI** **LangChain**: Provides a wide range of components for building conversational AI, making it easy to integrate LLMs with external APIs and services​​. **DSPy**: Ideal for developing chatbots that need to handle complex, multi-stage conversations with high reliability and performance. DSPy’s automated optimizations ensure consistent and contextually accurate responses. **Complex Information Retrieval Systems** **LangChain**: Effective for projects that require seamless integration with various data sources and sophisticated retrieval capabilities​​. **DSPy**: Best for systems that involve complex multi-step retrieval processes, where prompt optimization and modular design can significantly enhance performance and reliability. You can also choose to combine and use the best features of both. In fact, LangChain has released an [integration with DSPy](https://python.langchain.com/v0.1/docs/integrations/providers/dspy/) to simplify this process. 
This allows you to use some of the utility functions that LangChain provides, such as text splitter, directory loaders, or integrations with other data sources while using DSPy for the LM interactions. ### Key Takeaways: - **LangChain's Flexibility:** LangChain integrates seamlessly with Qdrant, enabling streamlined vector embedding and retrieval for AI workflows. - **Optimized Retrieval:** Automate and enhance retrieval processes in multi-stage AI reasoning applications. - **Enhanced RAG Applications:** Fast and accurate retrieval of relevant document sections through vector similarity search. - **Support for Complex AI:** LangChain integration facilitates the creation of advanced AI architectures requiring precise information retrieval. - **Streamlined AI Development:** Simplify managing and retrieving large datasets, leading to more efficient AI development cycles in LangChain and DSPy. - **Future AI Workflows:** Qdrant's role in optimizing retrieval will be crucial as AI frameworks like DSPy continue to evolve and scale. ## **Level Up Your AI Projects with Advanced Frameworks** LangChain and DSPy both offer unique capabilities and can help you build powerful AI applications. Qdrant integrates with both LangChain and DSPy, allowing you to leverage its performance, efficiency and security features in either scenario. LangChain is ideal for projects that require extensive integration with various data sources and APIs. On the other hand, DSPy offers a powerful paradigm for building complex multi-stage applications. For pulling together an AI application that doesn’t require much prompt engineering, use LangChain. However, pick DSPy when you need a systematic approach to prompt optimization and modular design, and need robustness and scalability for complex, multi-stage reasoning applications. ## **References** [https://python.langchain.com/v0.1/docs/get_started/introduction](https://python.langchain.com/v0.1/docs/get_started/introduction) [https://dspy-docs.vercel.app/docs/intro](https://dspy-docs.vercel.app/docs/intro)
--- title: "Semantic Cache: Accelerating AI with Lightning-Fast Data Retrieval" draft: false slug: short_description: "Semantic Cache for Best Results and Optimization." description: "Semantic cache is reshaping AI applications by enabling rapid data retrieval. Discover how its implementation benefits your RAG setup." preview_image: /blog/semantic-cache-ai-data-retrieval/social_preview.png social_preview_image: /blog/semantic-cache-ai-data-retrieval/social_preview.png date: 2024-05-07T00:00:00-08:00 author: Daniel Romero, David Myriel featured: false tags: - vector search - vector database - semantic cache - gpt cache - semantic cache llm - AI applications - data retrieval - efficient data storage --- ## What is Semantic Cache? **Semantic cache** is a method of retrieval optimization, where similar queries instantly retrieve the same appropriate response from a knowledge base. Semantic cache differs from traditional caching methods. In computing, **cache** refers to high-speed memory that efficiently stores frequently accessed data. In the context of vector databases, a **semantic cache** improves AI application performance by storing previously retrieved results along with the conditions under which they were computed. This allows the application to reuse those results when the same or similar conditions occur again, rather than finding them from scratch. > The term **"semantic"** implies that the cache takes into account the meaning or semantics of the data or computation being cached, rather than just its syntactic representation. This can lead to more efficient caching strategies that exploit the structure or relationships within the data or computation. ![semantic-cache-question](/blog/semantic-cache-ai-data-retrieval/semantic-cache-question.png) Traditional caches operate on an exact match basis, while semantic caches search for the meaning of the key rather than an exact match. For example, **"What is the capital of Brazil?"** and **"Can you tell me the capital of Brazil?"** are semantically equivalent, but not exact matches. A semantic cache recognizes such semantic equivalence and provides the correct result. In this blog and video, we will walk you through how to use Qdrant to implement a basic semantic cache system. You can also try the [notebook example](https://github.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) for this implementation. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) ## Semantic Cache in RAG: the Key-Value Mechanism Semantic cache is increasingly used in Retrieval-Augmented Generation (RAG) applications. In RAG, when a user asks a question, we embed it and search our vector database, either by using keyword, semantic, or hybrid search methods. The matched context is then passed to a Language Model (LLM) along with the prompt and user question for response generation. Qdrant is recommended for setting up semantic cache as semantically evaluates the response. When semantic cache is implemented, we store common questions and their corresponding answers in a key-value cache. This way, when a user asks a question, we can retrieve the response from the cache if it already exists. **Diagram:** Semantic cache improves RAG by directly retrieving stored answers to the user. **Follow along with the gif** and see how semantic cache stores and retrieves answers. 
![Semantic cache storing and retrieving answers](/blog/semantic-cache-ai-data-retrieval/semantic-cache.gif)

When using a key-value cache, it's important to consider that slight variations in question wording can lead to different hash values. Two questions, like the Brazil examples above, can convey the same query yet differ in wording, so a naive cache lookup may fail because each phrasing hashes to a distinct key. Implementing a more nuanced approach is necessary to accommodate phrasing variations and ensure accurate responses.

To address this challenge, a semantic cache can be employed instead of relying solely on exact matches. This entails storing questions, answers, and their embeddings in a key-value structure. When a user poses a question, a semantic search by Qdrant is conducted across all cached questions to identify the most similar one. If the similarity score surpasses a predefined threshold, the system assumes equivalence between the user's question and the matched one, providing the corresponding answer accordingly.

## Benefits of Semantic Cache for AI Applications

Semantic cache contributes to scalability in AI applications by making it simpler to retrieve common queries from vast datasets. The retrieval process can be computationally intensive, and implementing a cache component can reduce the load. For instance, if hundreds of users repeat the same question, the system can retrieve the precomputed answer from the cache rather than re-executing the entire process. This cache stores questions as keys and their corresponding answers as values, providing an efficient means to handle repeated queries.

> There are **potential cost savings** associated with utilizing semantic cache. Using a semantic cache eliminates the need for repeated searches and generation processes for similar or duplicate questions, thus saving time and LLM API resources, especially when employing costly language model calls like OpenAI's.

## When to Use Semantic Cache?

For applications like question-answering systems where facts are retrieved from documents, caching is beneficial due to the consistent nature of the queries. *However, for text generation tasks requiring varied responses, caching may not be ideal as it returns previous responses, potentially limiting variation.* Thus, the decision to use caching depends on the specific use case.

Using a cache might not be ideal for applications where diverse responses are desired across multiple queries. However, in question-answering systems, caching is advantageous since variations are insignificant. It serves as an effective performance optimization tool for chatbots by storing frequently accessed data. One strategy involves creating ad-hoc patches for chatbot dialogues, where commonly asked questions are pre-mapped to prepared responses in the cache. This allows the chatbot to swiftly retrieve and deliver responses without relying on a Language Model (LLM) for each query.

## Implement Semantic Cache: A Step-by-Step Guide

The first part of this video explains how caching works. In the second part, you can follow along with the code in our [notebook example](https://github.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb).
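Before you open the notebook, here is a condensed sketch of that key-value flow built directly on the Qdrant client. It is our own illustration rather than the notebook's code: it assumes `qdrant-client` and `fastembed` are installed, and the embedding model, the `semantic_cache` collection name, and the 0.9 similarity threshold are assumptions chosen for this example.

```python
import uuid

from fastembed import TextEmbedding
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

embedder = TextEmbedding("BAAI/bge-small-en-v1.5")  # 384-dimensional embeddings
client = QdrantClient(":memory:")                   # throwaway local instance

client.create_collection(
    collection_name="semantic_cache",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

def embed(text: str) -> list[float]:
    # fastembed returns a generator of numpy vectors; take the first one.
    return list(embedder.embed([text]))[0].tolist()

def cache_lookup(question: str, threshold: float = 0.9):
    """Return a cached answer if a semantically similar question exists."""
    hits = client.search(
        collection_name="semantic_cache", query_vector=embed(question), limit=1
    )
    if hits and hits[0].score >= threshold:
        return hits[0].payload["answer"]
    return None

def cache_store(question: str, answer: str) -> None:
    """Store the question embedding as the key and the answer as the payload."""
    client.upsert(
        collection_name="semantic_cache",
        points=[
            PointStruct(
                id=str(uuid.uuid4()),
                vector=embed(question),
                payload={"question": question, "answer": answer},
            )
        ],
    )

# Usage: fall back to the full RAG pipeline only on a cache miss.
cache_store("What is the capital of Brazil?", "Brasília")
print(cache_lookup("Can you tell me the capital of Brazil?"))  # likely a cache hit
```

In a real RAG service you would call `cache_lookup()` first and only run retrieval and generation, followed by `cache_store()`, on a cache miss; the right threshold depends on the embedding model and on how strictly you want to treat paraphrases as equivalent.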
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://githubtocolab.com/infoslack/qdrant-example/blob/main/semantic-cache.ipynb) <p align="center"><iframe src="https://www.youtube.com/embed/H53L_yHs9jE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe></p> ## Embrace the Future of AI Data Retrieval [Qdrant](https://github.com/qdrant/qdrant) offers the most flexible way to implement vector search for your RAG and AI applications. You can test out semantic cache on your free Qdrant Cloud instance today! Simply sign up for or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and follow our [documentation](/documentation/cloud/). You can also deploy Qdrant locally and manage via our UI. To do this, check our [Hybrid Cloud](/blog/hybrid-cloud/)! [![hybrid-cloud-get-started](/blog/hybrid-cloud-launch-partners/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login)
--- draft: false title: How to Superpower Your Semantic Search Using a Vector Database Vector Space Talks slug: semantic-search-vector-database short_description: Nicolas Mauti and his team at Malt discusses how they revolutionize the way freelancers connect with projects. description: Unlock the secrets of supercharging semantic search with Nicolas Mauti's insights on leveraging vector databases. Discover advanced strategies. preview_image: /blog/from_cms/nicolas-mauti-cropped.png date: 2024-01-09T12:27:18.659Z author: Demetrios Brinkmann featured: false tags: - Vector Space Talks - Retriever-Ranker Architecture - Semantic Search --- # How to Superpower Your Semantic Search Using a Vector Database with Nicolas Mauti > *"We found a trade off between performance and precision in Qdrant’s that were better for us than what we can found on Elasticsearch.”*\ > -- Nicolas Mauti > Want precision & performance in freelancer search? Malt's move to the Qdrant database is a masterstroke, offering geospatial filtering & seamless scaling. How did Nicolas Mauti and the team at Malt identify the need to transition to a retriever-ranker architecture for their freelancer matching app? Nicolas Mauti, a computer science graduate from INSA Lyon Engineering School, transitioned from software development to the data domain. Joining Malt in 2021 as a data scientist, he specialized in recommender systems and NLP models within a freelancers-and-companies marketplace. Evolving into an MLOps Engineer, Nicolas adeptly combines data science, development, and ops knowledge to enhance model development tools and processes at Malt. Additionally, he has served as a part-time teacher in a French engineering school since 2020. Notably, in 2023, Nicolas successfully deployed Qdrant at scale within Malt, contributing to the implementation of a new matching system. ***Listen to the episode on [Spotify](https://open.spotify.com/episode/5aTPXqa7GMjekUfD8aAXWG?si=otJ_CpQNScqTK5cYq2zBow), Apple Podcast, Podcast addicts, Castbox. You can also watch this episode on [YouTube](https://youtu.be/OSZSingUYBM).*** <iframe width="560" height="315" src="https://www.youtube.com/embed/OSZSingUYBM?si=1PHIRm5K5Q-HKIiS" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> <iframe src="https://podcasters.spotify.com/pod/show/qdrant-vector-space-talk/embed/episodes/Superpower-Your-Semantic-Search-Using-Vector-Database---Nicolas-Mauti--Vector-Space-Talk-007-e2d9lrs/a-aaoae5a" height="102px" width="400px" frameborder="0" scrolling="no"></iframe> ## **Top Takeaways:** Dive into the intricacies of [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) enhancement with Nicolas Mauti, MLOps Engineer at Malt. Discover how Nicolas and his team at Malt revolutionize the way freelancers connect with projects. In this episode, Nicolas delves into enhancing semantics search at Malt by implementing a retriever-ranker architecture with multilingual transformer-based models, improving freelancer-project matching through a transition to [Qdrant](https://qdrant.tech/) that reduced latency from 10 seconds to 1 second and bolstering the platform's overall performance and scaling capabilities. 5 Keys to Learning from the Episode: 1. 
**Performance Enhancement Tactics**: Understand the technical challenges Malt faced due to increased latency brought about by their expansion to over half a million freelancers and the solutions they enacted. 2. **Advanced Matchmaking Architecture**: Learn about the retriever-ranker model adopted by Malt, which incorporates semantic searching alongside a KNN search for better efficacy in pairing projects with freelancers. 3. **Cutting-Edge Model Training**: Uncover the deployment of a multilingual transformer-based encoder that effectively creates high-fidelity embeddings to streamline the matchmaking process. 4. **Database Selection Process**: Mauti discusses the factors that shaped Malt's choice of database systems, facilitating a balance between high performance and accurate filtering capabilities. 5. **Operational Improvements**: Gain knowledge of the significant strides Malt made post-deployment, including a remarkable reduction in application latency and its positive effects on scalability and matching quality. > Fun Fact: Malt employs a multilingual transformer-based encoder model to generate 384-dimensional embeddings, which improved their semantic search capability. > ## Show Notes: 00:00 Matching app experiencing major performance issues.\ 04:56 Filtering freelancers and adopting retriever-ranker architecture.\ 09:20 Multilingual encoder model for adapting semantic space.\ 10:52 Review, retrain, categorize, and organize freelancers' responses.\ 16:30 Trouble with geospatial filtering databases\ 17:37 Benchmarking performance and precision of search algorithms.\ 21:11 Deployed in Kubernetes. Stored in Git repository, synchronized with Argo CD.\ 27:08 Improved latency quickly, validated architecture, aligned steps.\ 28:46 Invitation to discuss work using specific methods. ## More Quotes from Nicolas: *"And so GitHub's approach is basic idea that your git repository is your source of truth regarding what you must have in your Kubernetes clusters.”*\ -- Nicolas Mauti *"And so we can see that our space seems to be well organized, where the tech freelancer are close to each other and the graphic designer for example, are far from the tech family.”*\ -- Nicolas Mauti *"And also one thing that interested us is that it's multilingual. And as Malt is a European company, we have to have to model a multilingual model.”*\ -- Nicolas Mauti ## Transcript: Demetrios: We're live. We are live in the flesh. Nicholas, it's great to have you here, dude. And welcome to all those vector space explorers out there. We are back with another vector space talks. Today we're going to be talking all about how to superpower your semantics search with my man Nicholas, an ML ops engineer at Malt, in case you do not know what Malt is doing. They are pairing up, they're making a marketplace. They are connecting freelancers and companies. Demetrios: And Nicholas, you're doing a lot of stuff with recommender systems, right? Nicolas Mauti: Yeah, exactly. Demetrios: I love that. Well, as I mentioned, I am in an interesting spot because I'm trying to take in all the vitamin D I can while I'm listening to your talk. Everybody that is out there listening with us, get involved. Let us know where you're calling in from or watching from. And also feel free to drop questions in the chat as we go along. And if need be, I will jump in and stop Nicholas. But I know you got a little presentation for us, man you want to get into. Nicolas Mauti: Thanks for the, thanks for the introduction and hello, everyone. 
And thanks for the invitation to this talk, of course. So let's start. Let's do it. Demetrios: I love it. Superpowers. Nicolas Mauti: Yeah, we will have superpowers at the end of this presentation. So, yeah, hello, everyone. So I think the introduction was already done and perfectly done by Dimitrios. So I'm Nicola and yeah, I'm working as an Mlaps engineer at Malt. And also I'm a part time teacher in a french engineering school where I teach some mlaps course. So let's dig in today's subjects. So in fact, as Dimitrio said, malt is a marketplace and so our goal is to match on one side freelancers. And those freelancers have a lot of attributes, for example, a description, some skills and some awesome skills. Nicolas Mauti: And they also have some preferences and also some attributes that are not specifically semantics. And so it will be a key point of our topics today. And on other sides we have what we call projects that are submitted by companies. And this project also have a lot of attributes, for example, description, also some skills and need to find and also some preferences. And so our goal at the end is to perform a match between these two entities. And so for that we add a matching app in production already. And so in fact, we had a major issue with this application is performance of this application because the application becomes very slow. The p 50 latency was around 10 seconds. Nicolas Mauti: And what you have to keep from this is that if your latency, because became too high, you won't be able to perform certain scenarios. Sometimes you want some synchronous scenario where you fill your project and then you want to have directly your freelancers that match this project. And so if it takes too much time, you won't be able to have that. And so you will have to have some asynchronous scenario with email or stuff like that. And it's not very a good user experience. And also this problem were amplified by the exponential growth of the platform. Absolutely, we are growing. And so to give you some numbers, when I arrived two years ago, we had two time less freelancers. Nicolas Mauti: And today, and today we have around 600,000 freelancers in your base. So it's growing. And so with this grow, we had some, several issue. And something we have to keep in mind about this matching app. And so it's not only semantic app, is that we have two things in these apps that are not semantic. We have what we call art filters. And so art filters are art rules defined by the project team at Malt. And so these rules are hard and we have to respect them. Nicolas Mauti: For example, the question is hard rule at malt we have a local approach, and so we want to provide freelancers that are next to the project. And so for that we have to filter the freelancers and to have art filters for that and to be sure that we respect these rules. And on the other side, as you said, demetrius, we are talking about Rexis system here. And so in a rexy system, you also have to take into account some other parameters, for example, the preferences of the freelancers and also the activity on the platform of the freelancer, for example. And so in our system, we have to keep this in mind and to have this working. And so if we do a big picture of how our system worked, we had an API with some alphilter at the beginning, then ML model that was mainly semantic and then some rescoring function with other parameters. And so we decided to rework this architecture and to adopt a retriever ranker architecture. 
And so in this architecture, you will have your pool of freelancers. Nicolas Mauti: So here is your wall databases, so your 600,000 freelancers. And then you will have a first step that is called the retrieval, where we will constrict a subsets of your freelancers. And then you can apply your wrong kill algorithm. That is basically our current application. And so the first step will be, semantically, it will be fast, and it must be fast because you have to perform a quick selection of your more interesting freelancers and it's built for recall, because at this step you want to be sure that you have all your relevant freelancers selected and you don't want to exclude at this step some relevant freelancer because the ranking won't be able to take back these freelancers. And on the other side, the ranking can contain more features, not only semantics, it less conference in time. And if your retrieval part is always giving you a fixed size of freelancers, your ranking doesn't have to scale because you will always have the same number of freelancers in inputs. And this one is built for precision. Nicolas Mauti: At this point you don't want to keep non relevant freelancers and you have to be able to rank them and you have to be state of the art for this part. So let's focus on the first part. That's what will interesting us today. So for the first part, in fact, we have to build this semantic space where freelancers that are close regarding their skills or their jobs are closed in this space too. And so for that we will build this semantic space. And so then when we receive a project, we will have just to project this project in our space. And after that you will have just to do a search and a KNN search for knee arrest neighbor search. And in practice we are not doing a KNN search because it's too expensive, but inn search for approximate nearest neighbors. Nicolas Mauti: Keep this in mind, it will be interesting in our next slides. And so, to get this semantic space and to get this search, we need two things. The first one is a model, because we need a model to compute some vectors and to project our opportunity and our project and our freelancers in this space. And on another side, you will have to have a tool to operate this semantic step page. So to store the vector and also to perform the search. So for the first part, for the model, I will give you some quick info about how we build it. So for this part, it was more on the data scientist part. So the data scientist started from an e five model. Nicolas Mauti: And so the e five model will give you a common knowledge about the language. And also one thing that interested us is that it's multilingual. And as Malt is an european company, we have to have to model a multilingual model. And on top of that we built our own encoder model based on a transformer architecture. And so this model will be in charge to be adapted to Malchus case and to transform this very generic semantic space into a semantic space that is used for skills and jobs. And this model is also able to take into account the structure of a profile of a freelancer profile because you have a description and job, some skills, some experiences. And so this model is capable to take this into account. And regarding the training, we use some past interaction on the platform to train it. Nicolas Mauti: So when a freelancer receives a project, he can accept it or not. And so we use that to train this model. And so at the end we get some embeddings with 384 dimensions. 
Demetrios: One question from my side, sorry to stop you right now. Do you do any type of reviews or feedback and add that into the model? Nicolas Mauti: Yeah. In fact we continue to have some response about our freelancers. And so we also review them, sometimes manually because sometimes the response are not so good or we don't have exactly what we want or stuff like that, so we can review them. And also we are retraining the model regularly, so this way we can include new feedback from our freelancers. So now we have our model and if we want to see how it looks. So here I draw some ponds and color them by the category of our freelancer. So on the platform the freelancer can have category, for example tech or graphic or soon designer or this kind of category. And so we can see that our space seems to be well organized, where the tech freelancer are close to each other and the graphic designer for example, are far from the tech family. Nicolas Mauti: So it seems to be well organized. And so now we have a good model. So okay, now we have our model, we have to find a way to operate it, so to store this vector and to perform our search. And so for that, Vectordb seems to be the good candidate. But if you follow the news, you can see that vectordb is very trendy and there is plenty of actor on the market. And so it could be hard to find your loved one. And so I will try to give you the criteria we had and why we choose Qdrant at the end. So our first criteria were performances. Nicolas Mauti: So I think I already talked about this ponds, but yeah, we needed performances. The second ones was about inn quality. As I said before, we cannot do a KnN search, brute force search each time. And so we have to find a way to approximate but to be close enough and to be good enough on these points. And so otherwise we won't be leveraged the performance of our model. And the last one, and I didn't talk a lot about this before, is filtering. Filtering is a big problem for us because we have a lot of filters, of art filters, as I said before. And so if we think about my architecture, we can say, okay, so filtering is not a problem. Nicolas Mauti: You can just have a three step process and do filtering, semantic search and then ranking, or do semantic search, filtering and then ranking. But in both cases, you will have some troubles if you do that. The first one is if you want to apply prefiltering. So filtering, semantic search, ranking. If you do that, in fact, you will have, so we'll have this kind of architecture. And if you do that, you will have, in fact, to flag each freelancers before asking the [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and performing a search, you will have to flag each freelancer whether there could be selected or not. And so with that, you will basically create a binary mask on your freelancers pool. And as the number of freelancers you have will grow, your binary namask will also grow. Nicolas Mauti: And so it's not very scalable. And regarding the performance, it will be degraded as your freelancer base grow. And also you will have another problem. A lot of [vector database](https://qdrant.tech/articles/what-is-a-vector-database/) and Qdrants is one of them using hash NSW algorithm to do your inn search. And this kind of algorithm is based on graph. And so if you do that, you will deactivate some nodes in your graph, and so your graph will become disconnected and you won't be able to navigate in your graph. 
And so your quality of your matching will degrade. So it's definitely not a good idea to apply prefiltering. Nicolas Mauti: So, no, if we go to post filtering here, I think the issue is more clear. You will have this kind of architecture. And so, in fact, if you do that, you will have to retrieve a lot of freelancer for your [vector database](https://qdrant.tech/articles/what-is-a-vector-database/). If you apply some very aggressive filtering and you exclude a lot of freelancer with your filtering, you will have to ask for a lot of freelancer in your vector database and so your performances will be impacted. So filtering is a problem. So we cannot do pre filtering or post filtering. So we had to find a database that do filtering and matching and semantic matching and search at the same time. And so Qdrant is one of them, you have other one in the market. Nicolas Mauti: But in our case, we had one filter that caused us a lot of troubles. And this filter is the geospatial filtering and a few of databases under this filtering, and I think Qdrant is one of them that supports it. But there is not a lot of databases that support them. And we absolutely needed that because we have a local approach and we want to be sure that we recommend freelancer next to the project. And so now that I said all of that, we had three candidates that we tested and we benchmarked them. We had elasticsearch PG vector, that is an extension of PostgreSQL and Qdrants. And on this slide you can see Pycon for example, and Pycon was excluded because of the lack of geospatial filtering. And so we benchmark them regarding the qps. Nicolas Mauti: So query per second. So this one is for performance, and you can see that quadron was far from the others, and we also benchmark it regarding the precision, how we computed the precision, for the precision we used a corpus that it's called textmax, and Textmax corpus provide 1 million vectors and 1000 queries. And for each queries you have your grown truth of the closest vectors. They used brute force knn for that. And so we stored this vector in our databases, we run the query and we check how many vectors we found that were in the ground truth. And so they give you a measure of your precision of your inn algorithm. For this metric, you could see that elasticsearch was a little bit better than Qdrants, but in fact we were able to tune a little bit the parameter of the AsHNSW algorithm and indexes. And at the end we found a better trade off, and we found a trade off between performance and precision in Qdrants that were better for us than what we can found on elasticsearch. Nicolas Mauti: So at the end we decided to go with Qdrant. So we have, I think all know we have our model and we have our tool to operate them, to operate our model. So a final part of this presentation will be about the deployment. I will talk about it a little bit because I think it's interesting and it's also part of my job as a development engineer. So regarding the deployment, first we decided to deploy a Qdrant in a cluster configuration. We decided to start with three nodes and so we decided to get our collection. So collection are where all your vector are stored in Qdrant, it's like a table in SQL or an index in elasticsearch. And so we decided to split our collection between three nodes. Nicolas Mauti: So it's what we call shards. So you have a shard of a collection on each node, and then for each shard you have one replica. 
So the replica is basically a copy of a shard that is living on another node than the primary shard. So this way you have a copy on another node. And this way, if we operate under normal conditions, your query will be split across your three nodes, and you will get your response accordingly. But what is interesting is that if we lose one node, for example this one, because we are performing a rolling upgrade or because Kubernetes can always kill pods, we will still be able to operate because we have the replica to get our data. And so this configuration is very robust and we are very happy with it. And regarding the deployment. Nicolas Mauti: So as I said, we deployed it in Kubernetes. So we use the Qdrant Helm chart, the official Helm chart provided by Qdrant. In fact we subcharted it because we needed some additional components in our cluster and some custom configuration. So I didn't talk about this, but a Helm chart is just a bunch of YAML files that describe the Kubernetes objects you will need in your cluster to operate your database, in our case; it's a collection of files and templates to do that. And when you have that, at Malt we are using what is called a GitOps approach. And the GitOps approach is basically the idea that your Git repository is your ground truth regarding what you must have in your Kubernetes clusters. And so we store these files and these Helm charts in Git, and then we have a tool called Argo CD that will pull our Git repository from time to time, and it will check the differences between what we have in Git and what is living in our cluster. And it will then synchronize what we have in Git directly into our cluster, either automatically or manually. Nicolas Mauti: So this is a very good approach to collaborate and to be sure that what we have in Git is what we have in our cluster, and to know what you have in your cluster by just looking at your Git repository. And I think that's pretty much all. I have one last slide, I think, that will interest you. It's about the outcome of the project, because we did that at Malt. We built this architecture with a first phase with Qdrant that does the semantic matching and applies all the filtering we have, and in the second part we kept our old ranking system. And so if we look at the latency of our app, at the P50 latency of our app, so it's the whole app with the two steps, with the filters, the semantic matching and the ranking. As you can see, we started an A/B test in mid-October. Nicolas Mauti: Before that it was around 10 seconds latency, as I said at the beginning of the talk. And so we already saw a huge drop in the application, and we decided to go full on it in December, and we can see another big drop. And so we were around 10 seconds and now we are around 1 second and a half. So we divided the latency by more than five. And so it's very good news for us because, first, it's more scalable, because the retriever is very scalable, and with the cluster deployment of Qdrant, if we need, we can add more nodes and we will be able to scale this phase. And after that we have a fixed number of freelancers that go into the matching part. And so the matching part doesn't have to scale now. 
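For reference, the sharding and replication setup Nicolas describes maps onto Qdrant's collection-creation API. Below is a minimal sketch, assuming a hypothetical `freelancers` collection with 384-dimensional vectors; the names, vector size, and placeholders are illustrative, not Malt's actual configuration.

```bash
# Create a collection split into 3 shards, with each shard stored on
# 2 nodes (one primary + one replica), matching the three-node cluster
# layout described above. <QDRANT_URL> and <QDRANT_API_KEY> are placeholders.
curl -X PUT \
  <QDRANT_URL>/collections/freelancers \
  -H 'Content-Type: application/json' \
  -H 'api-key: <QDRANT_API_KEY>' \
  -d '{
    "vectors": { "size": 384, "distance": "Cosine" },
    "shard_number": 3,
    "replication_factor": 2
  }'
```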
Nicolas Mauti: And the other good news is that now that we are able to scale and we have a fixed size after our first part, we are able to build more complex and better matching models, and we will be able to improve the quality of our matching, because now we are able to scale and to handle more freelancers. Demetrios: That's incredible. Nicolas Mauti: Yeah, sure. It was very good news for us. And so that's all. And so maybe you have plenty of questions and maybe we can go with that. Demetrios: All right, first off, I want to give a shout out in case there are freelancers that are watching this or looking at this: now is a great time to just join Malt, I think. It seems like it's getting better every day. So I know there's questions that will come through and trickle in, but we've already got one from Luis. What's happening, Luis? He's asking what library or service you were using for ANN before considering Qdrant. Nicolas Mauti: So before that we didn't have any library or service; we were not doing any ANN search or [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) in the way we are doing it right now. We just had one model where we passed the freelancers and the project at the same time into the model, and we got a relevancy score at the end. And that's also why it was so slow, because you had to construct each pair and send each pair to your model. And so right now we don't have to do that, and so it's much better. Demetrios: Yeah, that makes sense. One question from my side: I think you said in October you started with the A/B test and then in December you rolled it out. What was that last slide that you had? Nicolas Mauti: Yeah, that's exactly that. Demetrios: Why the hesitation? Why did it take you from October to December to go down? What was the part that you weren't sure about? Because it feels like you saw a huge drop right there, so why did you wait until December? Nicolas Mauti: Yeah, regarding the latency and the drop of the latency, the result was very clear very quickly. I think maybe one week after that, we were convinced that the latency was better. First, our idea was to validate the architecture, but the second reason was to be sure that we didn't degrade the quality of the matching, because we have a two-step process. And the risk is that the two models don't agree with each other. And so if the intersection of your first step and your second step is not good enough, you will just have some empty results at the end, because your first step will select one part of the freelancers and the second step will select another part, and so your intersection is empty. And so our goal was to assess that the two steps were aligned and that we didn't degrade the quality of the matching. And regarding the volume of projects we have, we had to wait for approximately two months. Demetrios: It makes complete sense. Well, man, I really appreciate this. And can you go back to the slide where you show how people can get in touch with you if they want to reach out and talk more? I encourage everyone to do that. And thanks so much, Nicolas. This is great, man. Nicolas Mauti: Thanks. Demetrios: All right, everyone. By the way, in case you want to join us and talk about what you're working on and how you're using Qdrant or what you're doing in the semantic space or [semantic search](https://qdrant.tech/documentation/tutorials/search-beginners/) or vector space, all that fun stuff, hit us up. 
We would love to have you on here. One last question for you, Nicolas. Something came through: what indexing method do you use? Is it good for using OpenAI embeddings? Nicolas Mauti: So in our case, we have our own model to build the embeddings. Demetrios: Yeah, I remember you saying that at the beginning, actually. All right, cool. Well, man, thanks a lot, and we will see everyone next week for another one of these Vector Space Talks. Thank you all for joining and take care. Thanks.
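To make the filtering discussion from the talk concrete: in Qdrant, hard filters (including geospatial ones) are applied inside a single search call rather than as a separate pre- or post-filtering step. The sketch below is illustrative only, assuming a hypothetical `freelancers` collection with `category` and `location` payload fields; the toy 4-dimensional query vector and the coordinates are made up.

```bash
# One request: the payload filters and the ANN vector search are evaluated
# together, so no binary mask or oversized candidate list is needed.
# In practice the query vector must match the collection's dimensionality.
# The geo_radius value is in meters.
curl -X POST \
  <QDRANT_URL>/collections/freelancers/points/search \
  -H 'Content-Type: application/json' \
  -H 'api-key: <QDRANT_API_KEY>' \
  -d '{
    "vector": [0.12, 0.73, 0.05, 0.91],
    "filter": {
      "must": [
        { "key": "category", "match": { "value": "tech" } },
        { "key": "location", "geo_radius": { "center": { "lon": 2.3522, "lat": 48.8566 }, "radius": 50000.0 } }
      ]
    },
    "limit": 100
  }'
```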
blog/superpower-your-semantic-search-using-vector-database-nicolas-mauti-vector-space-talk-007.md
--- draft: false title: "Visua and Qdrant: Vector Search in Computer Vision" slug: short_description: "Using vector search for quality control and anomaly detection in computer vision." description: "How Visua uses Qdrant as a vector search engine for quality control and anomaly detection in their computer vision platform." preview_image: /blog/case-study-visua/image4.png social_preview_image: /blog/case-study-visua/image4.png date: 2024-05-01T00:02:00Z author: Manuel Meyer featured: false tags: - visua - qdrant - computer vision - quality control - anomaly detection --- ![visua/image1.png](/blog/case-study-visua/image1.png) For over a decade, [VISUA](https://visua.com/) has been a leader in precise, high-volume computer vision data analysis, developing a robust platform that caters to a wide range of use cases, from startups to large enterprises. Starting with social media monitoring, where it excels in analyzing vast data volumes to detect company logos, VISUA has built a diverse ecosystem of customers, including names in social media monitoring, like **Brandwatch**, cybersecurity like **Mimecast**, trademark protection like **Ebay** and several sports agencies like **Vision Insights** for sponsorship evaluation. ![visua/image3.png](/blog/case-study-visua/image3.png) ## The Challenge **Quality Control at Scale** The accuracy of object detection within images is critical for VISUA ensuring that their algorithms are detecting objects in images correctly. With growing volumes of data processed for clients, the company was looking for a way to enhance its quality control and anomaly detection mechanisms to be more scalable and auditable. The challenge was twofold. First, VISUA needed a method to rapidly and accurately identify images and the objects within them that were similar, to identify false negatives, or unclear outcomes and use them as inputs for reinforcement learning. Second, the rapid growth in data volume challenged their previous quality control processes, which relied on a sampling method based on meta-information (like analyzing lower-confidence, smaller, or blurry images), which involved more manual reviews and was not as scalable as needed. In response, the team at VISUA explored vector databases as a solution. ## The Solution **Accelerating Anomaly Detection and Elevating Quality Control with Vector Search** In addressing the challenge of scaling and enhancing its quality control processes, VISUA turned to vector databases, with Qdrant emerging as the solution of choice. This technological shift allowed VISUA to leverage vector databases for identifying similarities and deduplicating vast volumes of images, videos, and frames. By doing so, VISUA was able to automatically classify objects with a level of precision that was previously unattainable. The introduction of vectors allowed VISUA to represent data uniquely and mark frames for closer examination by prioritizing the review of anomalies and data points with the highest variance. Consequently, this technology empowered Visia to scale its quality assurance and reinforcement learning processes tenfold. > *“Using Qdrant as a vector database for our quality control allowed us to review 10x more data by exploiting repetitions and deduplicating samples and doing that at scale with having a query engine.”* Alessandro Prest, Co-Founder at VISUA. 
![visua/image2.jpg](/blog/case-study-visua/image2.jpg) ## The Selection Process **Finding the Right Vector Database For Quality Analysis and Anomaly Detection** Choosing the right vector database was a pivotal decision for VISUA, and the team conducted extensive benchmarks. They tested various solutions, including Weaviate, Pinecone, and Qdrant, focusing on the efficient handling of both vector and payload indexes. The objective was to identify a system that excels in managing hybrid queries that blend vector similarities with record attributes, crucial for enhancing their quality control and anomaly detection capabilities. Qdrant distinguished itself through its: - **Hybrid Query Capability:** Qdrant enables the execution of hybrid queries that combine payload fields and vector data, allowing for comprehensive and nuanced searches. This functionality leverages the strengths of both payload attributes and vector similarities for detailed data analysis. Prest noted the importance of Qdrant's hybrid approach, saying, “When talking with the founders of Qdrant, we realized that they put a lot of effort into this hybrid approach, which really resonated with us.” - **Performance Superiority**: Qdrant distinguished itself as the fastest engine for VISUA's specific needs, significantly outpacing alternatives with query speeds up to 40 times faster for certain VISUA use cases. Alessandro Prest highlighted, "Qdrant was the fastest engine by a large margin for our use case," underscoring its significant efficiency and scalability advantages. - **API Documentation**: The clarity, comprehensiveness, and user-friendliness of Qdrant’s API documentation and reference guides further solidified VISUA’s decision. This strategic selection enabled VISUA to achieve a notable increase in operational efficiency and scalability in its quality control processes. ## Implementing Qdrant Upon selecting Qdrant as their vector database solution, VISUA undertook a methodical approach to integration. The process began in a controlled development environment, allowing VISUA to simulate real-world use cases and ensure that Qdrant met their operational requirements. This careful, phased approach ensured a smooth transition when moving Qdrant into their production environment, hosted on AWS clusters. VISUA is leveraging several specific Qdrant features in their production setup: 1. **Support for Multiple Vectors per Record/Point**: This feature allows for a nuanced and multifaceted analysis of data, enabling VISUA to manage and query complex datasets more effectively. 2. **Quantization**: Quantization optimizes storage and accelerates query processing, improving data handling efficiency and lowering memory use, essential for large-scale operations. ## The Results Integrating Qdrant into VISUA's quality control operations has delivered measurable outcomes when it comes to efficiency and scalability: - **40x Faster Query Processing**: Qdrant has drastically reduced the time needed for complex queries, enhancing workflow efficiency. - **10x Scalability Boost:** The efficiency of Qdrant enables VISUA to handle ten times more data in its quality assurance and learning processes, supporting growth without sacrificing quality. - **Increased Data Review Capacity:** The increased capacity to review the data allowed VISUA to enhance the accuracy of its algorithms through reinforcement learning. 
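The two production features called out in the implementation section above, multiple named vectors per point and quantization, are both configured when a collection is created. The following is a rough sketch rather than VISUA's actual setup; the collection name, vector names, and sizes are hypothetical.

```bash
# Hypothetical collection with two named vectors per point and
# int8 scalar quantization to reduce memory usage and speed up search.
curl -X PUT \
  <QDRANT_URL>/collections/frames \
  -H 'Content-Type: application/json' \
  -H 'api-key: <QDRANT_API_KEY>' \
  -d '{
    "vectors": {
      "image": { "size": 512, "distance": "Cosine" },
      "detected_object": { "size": 256, "distance": "Cosine" }
    },
    "quantization_config": {
      "scalar": { "type": "int8", "always_ram": true }
    }
  }'
```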
#### Expanding Qdrant’s Use Beyond Anomaly Detection While the primary application of Qdrant is focused on quality control, VISUA's team is actively exploring additional use cases with Qdrant. VISUA's use of Qdrant has inspired new opportunities, notably in content moderation. "The moment we started to experiment with Qdrant, opened up a lot of ideas within the team for new applications,” said Prest on the potential unlocked by Qdrant. For example, this has led them to actively explore the Qdrant [Discovery API](/documentation/concepts/explore/?q=discovery#discovery-api), with an eye on enhancing content moderation processes. Beyond content moderation, VISUA is set for significant growth by broadening its copyright infringement detection services. As the demand for detecting a wider range of infringements, like unauthorized use of popular characters on merchandise, increases, VISUA plans to expand its technology capabilities. Qdrant will be pivotal in this expansion, enabling VISUA to meet the complex and growing challenges of moderating copyrighted content effectively and ensuring comprehensive protection for brands and creators.
blog/case-study-visua.md
--- draft: false title: "Announcing Qdrant's $28M Series A Funding Round" slug: series-A-funding-round short_description: description: preview_image: /blog/series-A-funding-round/series-A.png social_preview_image: /blog/series-A-funding-round/series-A.png date: 2024-01-23T09:00:00.000Z author: Andre Zayarni, CEO & Co-Founder featured: true tags: - Funding - Series-A - Announcement --- Today, we are excited to announce our $28M Series A funding round, which is led by Spark Capital with participation from our existing investors Unusual Ventures and 42CAP. We have seen incredible user growth and support from our open-source community in the past two years - recently exceeding 5M downloads. This is a testament to our mission to build the most efficient, scalable, high-performance vector database on the market. We are excited to further accelerate this trajectory with our new partner and investor, Spark Capital, and the continued support of Unusual Ventures and 42CAP. This partnership uniquely positions us to empower enterprises with cutting edge vector search technology to build truly differentiating, next-gen AI applications at scale. ## The Emergence and Relevance of Vector Databases A paradigm shift is underway in the field of data management and information retrieval. Today, our world is increasingly dominated by complex, unstructured data like images, audio, video, and text. Traditional ways of retrieving data based on keyword matching are no longer sufficient. Vector databases are designed to handle complex high-dimensional data, unlocking the foundation for pivotal AI applications. They represent a new frontier in data management, in which complexity is not a barrier but an opportunity for innovation. The rise of generative AI in the last few years has shone a spotlight on vector databases, prized for their ability to power retrieval-augmented generation (RAG) applications. What we are seeing now, both within AI and beyond, is only the beginning of the opportunity for vector databases. Within our Qdrant community, we already see a multitude of unique solutions and applications leveraging our technology for multimodal search, anomaly detection, recommendation systems, complex data analysis, and more. ## What sets Qdrant apart? To meet the needs of the next generation of AI applications, Qdrant has always been built with four keys in mind: efficiency, scalability, performance, and flexibility. Our goal is to give our users unmatched speed and reliability, even when they are building massive-scale AI applications requiring the handling of billions of vectors. We did so by building Qdrant on Rust for performance, memory safety, and scale. Additionally, [our custom HNSW search algorithm](/articles/filtrable-hnsw/) and unique [filtering](/documentation/concepts/filtering/) capabilities consistently lead to [highest RPS](/benchmarks/), minimal latency, and high control with accuracy when running large-scale, high-dimensional operations. Beyond performance, we provide our users with the most flexibility in cost savings and deployment options. A combination of cutting-edge efficiency features, like [built-in compression options](/documentation/guides/quantization/), [multitenancy](/documentation/guides/multiple-partitions/) and the ability to [offload data to disk](/documentation/concepts/storage/), dramatically reduce memory consumption. 
Committed to privacy and security, crucial for modern AI applications, Qdrant now also offers on-premise and hybrid SaaS solutions, meeting diverse enterprise needs in a data-sensitive world. This approach, coupled with our open-source foundation, builds trust and reliability with engineers and developers, making Qdrant a game-changer in the vector database domain. ## What's next? We are incredibly excited about our next chapter to power the new generation of enterprise-grade AI applications. The support of our open-source community has led us to this stage and we’re committed to continuing to build the most advanced vector database on the market, but ultimately it’s up to you to decide! We invite you to [test out](https://cloud.qdrant.io/) Qdrant for your AI applications today.
blog/series-A-funding-round.md
--- title: Qdrant Blog subtitle: Check out our latest posts description: A place to learn how to become an expert traveler through vector space. Subscribe and we will update you on features and news. email_placeholder: Enter your email subscribe_button: Subscribe features_title: Features and News search_placeholder: What are you Looking for? aliases: # There is no need to add aliases for future new tags and categories! - /tags - /tags/case-study - /tags/dailymotion - /tags/recommender-system - /tags/binary-quantization - /tags/embeddings - /tags/openai - /tags/gsoc24 - /tags/open-source - /tags/summer-of-code - /tags/vector-database - /tags/artificial-intelligence - /tags/machine-learning - /tags/vector-search - /tags/case_study - /tags/dust - /tags/announcement - /tags/funding - /tags/series-a - /tags/azure - /tags/cloud - /tags/data-science - /tags/information-retrieval - /tags/benchmarks - /tags/performance - /tags/qdrant - /tags/blog - /tags/large-language-models - /tags/podcast - /tags/retrieval-augmented-generation - /tags/search - /tags/vector-search-engine - /tags/vector-image-search - /tags/vector-space-talks - /tags/retriever-ranker-architecture - /tags/semantic-search - /tags/llm - /tags/entity-matching-solution - /tags/real-time-processing - /tags/vector-space-talk - /tags/fastembed - /tags/quantized-emdedding-models - /tags/llm-recommendation-system - /tags/integrations - /tags/unstructured - /tags/integration - /tags/n8n - /tags/news - /tags/webinar - /tags/cohere - /tags/embedding-model - /tags/database - /tags/vector-search-database - /tags/neural-networks - /tags/similarity-search - /tags/embedding - /tags/corporate-news - /tags/nvidia - /tags/docarray - /tags/jina-integration - /categories - /categories/news - /categories/vector-search - /categories/webinar - /categories/vector-space-talk ---
blog/_index.md
--- draft: false preview_image: /blog/from_cms/nils-thumbnail.png title: "From Content Quality to Compression: The Evolution of Embedding Models at Cohere with Nils Reimers" slug: cohere-embedding-v3 short_description: Nils Reimers, Head of Machine Learning at Cohere, shares the details about their latest embedding model. description: Nils Reimers, Head of Machine Learning at Cohere, comes on the recent Vector Space Talks to share details about their latest embedding V3 model. date: 2023-11-19T12:48:36.622Z author: Demetrios Brinkmann featured: false author_link: https://www.linkedin.com/in/dpbrinkm/ tags: - Vector Space Talk - Cohere - Embedding Model categories: - News - Vector Space Talk --- For the second edition of our Vector Space Talks we were joined by none other than Cohere’s Head of Machine Learning, Nils Reimers. ## Key Takeaways Let's dive right into the five key takeaways from Nils' talk: 1. Content Quality Estimation: Nils explained how embeddings have traditionally focused on measuring topic match, but content quality is just as important. He demonstrated how their model can differentiate between informative and non-informative documents. 2. Compression-Aware Training: He shared how they've tackled the challenge of reducing the memory footprint of embeddings, making it more cost-effective to run vector databases on platforms like [Qdrant](https://cloud.qdrant.io/login). 3. Reinforcement Learning from Human Feedback: Nils revealed how they've borrowed a technique from reinforcement learning and applied it to their embedding models. This allows the model to learn preferences based on human feedback, resulting in highly informative responses. 4. Evaluating Embedding Quality: Nils emphasized the importance of evaluating embedding quality in relative terms rather than looking at individual vectors. It's all about understanding the context and how embeddings relate to each other. 5. New Features in the Pipeline: Lastly, Nils gave us a sneak peek at some exciting features they're developing, including input type support for Langchain and improved compression techniques. Now, here's a fun fact from the episode: Did you know that the content quality estimation model *can't* differentiate between true and fake statements? It's a challenging task, and the model relies on the information present in its pretraining data. We loved having Nils as our guest. Check out the full talk below, and if you or anyone you know would like to come on the Vector Space Talks, reach out to us. <iframe width="560" height="315" src="https://www.youtube.com/embed/Abh3YCahyqU?si=OB4FXhTivsLLXzQV" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
blog/cohere-embedding-v3.md
--- title: Loading Unstructured.io Data into Qdrant from the Terminal slug: qdrant-unstructured short_description: Loading Unstructured Data into Qdrant from the Terminal description: Learn how to simplify the process of loading unstructured data into Qdrant using the Qdrant Unstructured destination. preview_image: /blog/qdrant-unstructured/preview.jpg date: 2024-01-09T00:41:38+05:30 author: Anush Shetty tags: - integrations - qdrant - unstructured --- Building powerful applications with Qdrant starts with loading vector representations into the system. Traditionally, this involves scraping or extracting data from sources, performing operations such as cleaning, chunking, and generating embeddings, and finally loading it into Qdrant. While this process can be complex, Unstructured.io includes Qdrant as an ingestion destination. In this blog post, we'll demonstrate how to load data into Qdrant from the channels of a Discord server. You can use a similar process for the [20+ vetted data sources](https://unstructured-io.github.io/unstructured/ingest/source_connectors.html) supported by Unstructured. ### Prerequisites - A running Qdrant instance. Refer to our [Quickstart guide](/documentation/quick-start/) to set up an instance. - A Discord bot token. Generate one [here](https://discord.com/developers/applications) after adding the bot to your server. - Unstructured CLI with the required extras. For more information, see the Discord [Getting Started guide](https://discord.com/developers/docs/getting-started). Install it with the following command: ```bash pip install unstructured[discord,local-inference,qdrant] ``` Once you have the prerequisites in place, let's begin the data ingestion. ### Retrieving Data from Discord To generate structured data from Discord using the Unstructured CLI, run the following command with the [channel IDs](https://www.pythondiscord.com/pages/guides/pydis-guides/contributing/obtaining-discord-ids/): ```bash unstructured-ingest \ discord \ --channels <CHANNEL_IDS> \ --token "<YOUR_BOT_TOKEN>" \ --output-dir "discord-output" ``` This command downloads and structures the data in the `"discord-output"` directory. For a complete list of options supported by this source, run: ```bash unstructured-ingest discord --help ``` ### Ingesting into Qdrant Before loading the data, set up a collection with the information you need for the following REST call. In this example we use a local Huggingface model generating 384-dimensional embeddings. You can create a Qdrant [API key](/documentation/cloud/authentication/#create-api-keys) and set names for your Qdrant [collections](/documentation/concepts/collections/). We set up the collection with the following command: ```bash curl -X PUT \ <QDRANT_URL>/collections/<COLLECTION_NAME> \ -H 'Content-Type: application/json' \ -H 'api-key: <QDRANT_API_KEY>' \ -d '{ "vectors": { "size": 384, "distance": "Cosine" } }' ``` You should receive a response similar to: ```console {"result":true,"status":"ok","time":0.196235768} ``` To ingest the Discord data into Qdrant, run: ```bash unstructured-ingest \ local \ --input-path "discord-output" \ --embedding-provider "langchain-huggingface" \ qdrant \ --collection-name "<COLLECTION_NAME>" \ --api-key "<QDRANT_API_KEY>" \ --location "<QDRANT_URL>" ``` This command loads structured Discord data into Qdrant with sensible defaults. You can configure the data fields for which embeddings are generated in the command options. 
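Once the ingestion completes, you can optionally confirm that the documents landed in the collection by querying its info endpoint; the response includes a `points_count` field. This check is a suggestion rather than part of the Unstructured workflow, and it reuses the same placeholders as above:

```bash
# Inspect the collection; points_count in the result should be non-zero
# after a successful ingestion run.
curl -X GET \
  <QDRANT_URL>/collections/<COLLECTION_NAME> \
  -H 'api-key: <QDRANT_API_KEY>'
```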
Qdrant ingestion also supports partitioning and chunking of your data, configurable directly from the CLI. Learn more about it in the [Unstructured documentation](https://unstructured-io.github.io/unstructured/core.html). To list all the supported options of the Qdrant ingestion destination, run: ```bash unstructured-ingest local qdrant --help ``` Unstructured can also be used programmatically or via the hosted API. Refer to the [Unstructured Reference Manual](https://unstructured-io.github.io/unstructured/introduction.html). For more information about the Qdrant ingest destination, review how Unstructured.io configures their [Qdrant](https://unstructured-io.github.io/unstructured/ingest/destination_connectors/qdrant.html) interface.
blog/qdrant-unstructured.md
--- draft: false title: "Qdrant's Trusted Partners for Hybrid Cloud Deployment" slug: hybrid-cloud-launch-partners short_description: "With the launch of Qdrant Hybrid Cloud we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment." description: "With the launch of Qdrant Hybrid Cloud we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment." preview_image: /blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners.png social_preview_image: /blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners.png date: 2024-04-15T00:02:00Z author: Manuel Meyer featured: false tags: - Hybrid Cloud - launch partners --- With the launch of [Qdrant Hybrid Cloud](/hybrid-cloud/) we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment, be it *in the cloud, on premise, or on the edge*. We are excited to have trusted industry players support the launch of Qdrant Hybrid Cloud, allowing developers to unlock best-in-class advantages for building production-ready AI applications: - **Deploy In Your Own Environment:** Deploy the Qdrant vector database as a managed service on the infrastructure of choice, such as our launch partner solutions [Oracle Cloud Infrastructure (OCI)](https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers), [Red Hat OpenShift](/blog/hybrid-cloud-red-hat-openshift/), [Vultr](/blog/hybrid-cloud-vultr/), [DigitalOcean](/blog/hybrid-cloud-digitalocean/), [OVHcloud](/blog/hybrid-cloud-ovhcloud/), [Scaleway](/blog/hybrid-cloud-scaleway/), [Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo), and [STACKIT](/blog/hybrid-cloud-stackit/). - **Seamlessly Integrate with Every Key Component of the Modern AI Stack:** Our new hybrid cloud offering also allows you to integrate with all of the relevant solutions for building AI applications. These include partner frameworks like [LlamaIndex](/blog/hybrid-cloud-llamaindex/), [LangChain](/blog/hybrid-cloud-langchain/), [Haystack by deepset](/blog/hybrid-cloud-haystack/), and [Airbyte](/blog/hybrid-cloud-airbyte/), as well as large language models (LLMs) like [JinaAI](/blog/hybrid-cloud-jinaai/) and [Aleph Alpha](/blog/hybrid-cloud-aleph-alpha/). - **Ensure Full Data Sovereignty and Privacy Control:** Qdrant Hybrid Cloud offers unparalleled data isolation and the flexibility to process workloads either in the cloud or on-premise, ensuring data privacy and sovereignty requirements - all while being fully managed. #### Try Qdrant Hybrid Cloud on Partner Platforms ![Hybrid Cloud Launch Partners Tutorials](/blog/hybrid-cloud-launch-partners/hybrid-cloud-launch-partners-tutorials.png) Together with our launch partners, we created in-depth tutorials and use cases for production-ready vector search that explain how developers can leverage Qdrant Hybrid Cloud alongside the best-in-class solutions of our launch partners. These tutorials demonstrate that Qdrant Hybrid Cloud is the most flexible foundation to build modern, customer-centric AI applications with endless deployment options and full data sovereignty. Let’s dive right in: **AI Customer Support Chatbot** with Qdrant Hybrid Cloud, Airbyte, Cohere, and AWS > This tutorial shows how to build a private AI customer support system using Cohere's AI models on AWS, Airbyte, and Qdrant Hybrid Cloud for efficient and secure query automation. 
[View Tutorial](/documentation/tutorials/rag-customer-support-cohere-airbyte-aws/) **RAG System for Employee Onboarding** with Qdrant Hybrid Cloud, Oracle Cloud Infrastructure (OCI), Cohere, and LangChain > This tutorial demonstrates how to use Oracle Cloud Infrastructure (OCI) for a secure setup that integrates Cohere's language models with Qdrant Hybrid Cloud, using LangChain to orchestrate natural language search for corporate documents, enhancing resource discovery and onboarding. [View Tutorial](/documentation/tutorials/natural-language-search-oracle-cloud-infrastructure-cohere-langchain/) **Hybrid Search for Product PDF Manuals** with Qdrant Hybrid Cloud, LlamaIndex, and JinaAI > Create a RAG-based chatbot that enhances customer support by parsing product PDF manuals using Qdrant Hybrid Cloud, LlamaIndex, and JinaAI, with DigitalOcean as the cloud host. This tutorial will guide you through the setup and integration process, enabling your system to deliver precise, context-aware responses for household appliance inquiries. [View Tutorial](/documentation/tutorials/hybrid-search-llamaindex-jinaai/) **Region-Specific RAG System for Contract Management** with Qdrant Hybrid Cloud, Aleph Alpha, and STACKIT > Learn how to streamline contract management with a RAG-based system in this tutorial, which utilizes Aleph Alpha’s embeddings and a region-specific cloud setup. Hosted on STACKIT with Qdrant Hybrid Cloud, this solution ensures secure, GDPR-compliant storage and processing of data, ideal for businesses with intensive contractual needs. [View Tutorial](/documentation/tutorials/rag-contract-management-stackit-aleph-alpha/) **Movie Recommendation System** with Qdrant Hybrid Cloud and OVHcloud > Discover how to build a recommendation system with our guide on collaborative filtering, using sparse vectors and the Movielens dataset. [View Tutorial](/documentation/tutorials/recommendation-system-ovhcloud/) **Private RAG Information Extraction Engine** with Qdrant Hybrid Cloud and Vultr using DSPy and Ollama > This tutorial teaches you how to handle and structure private documents with large unstructured data. Learn to use DSPy for information extraction, run your LLM with Ollama on Vultr, and manage data with Qdrant Hybrid Cloud on Vultr, perfect for regulated environments needing data privacy. [View Tutorial](/documentation/tutorials/rag-chatbot-vultr-dspy-ollama/) **RAG System That Chats with Blog Contents** with Qdrant Hybrid Cloud and Scaleway using LangChain. > Build a RAG system that combines blog scanning with the capabilities of semantic search. RAG enhances the generation of answers by retrieving relevant documents to aid the question-answering process. This setup showcases the integration of advanced search and AI language processing to improve information retrieval and generation tasks. [View Tutorial](/documentation/tutorials/rag-chatbot-scaleway/) **Private Chatbot for Interactive Learning** with Qdrant Hybrid Cloud and Red Hat OpenShift using Haystack. > In this tutorial, you will build a chatbot without public internet access. The goal is to keep sensitive data secure and isolated. Your RAG system will be built with Qdrant Hybrid Cloud on Red Hat OpenShift, leveraging Haystack for enhanced generative AI capabilities. This tutorial especially explores how this setup ensures that not a single data point leaves the environment. 
[View Tutorial](/documentation/tutorials/rag-chatbot-red-hat-openshift-haystack/) #### Supporting Documentation Additionally, we built comprehensive documentation tutorials on how to successfully deploy Qdrant Hybrid Cloud on the right infrastructure of choice. For more information, please visit our documentation pages: - [How to Deploy Qdrant Hybrid Cloud on AWS](/documentation/hybrid-cloud/platform-deployment-options/#amazon-web-services-aws) - [How to Deploy Qdrant Hybrid Cloud on GCP](/documentation/hybrid-cloud/platform-deployment-options/#google-cloud-platform-gcp) - [How to Deploy Qdrant Hybrid Cloud on Azure](/documentation/hybrid-cloud/platform-deployment-options/#mircrosoft-azure) - [How to Deploy Qdrant Hybrid Cloud on DigitalOcean](/documentation/hybrid-cloud/platform-deployment-options/#digital-ocean) - [How to Deploy Qdrant on Oracle Cloud](/documentation/hybrid-cloud/platform-deployment-options/#oracle-cloud-infrastructure) - [How to Deploy Qdrant on Vultr](/documentation/hybrid-cloud/platform-deployment-options/#vultr) - [How to Deploy Qdrant on Scaleway](/documentation/hybrid-cloud/platform-deployment-options/#scaleway) - [How to Deploy Qdrant on OVHcloud](/documentation/hybrid-cloud/platform-deployment-options/#ovhcloud) - [How to Deploy Qdrant on STACKIT](/documentation/hybrid-cloud/platform-deployment-options/#stackit) - [How to Deploy Qdrant on Red Hat OpenShift](/documentation/hybrid-cloud/platform-deployment-options/#red-hat-openshift) - [How to Deploy Qdrant on Linode](/documentation/hybrid-cloud/platform-deployment-options/#akamai-linode) - [How to Deploy Qdrant on Civo](/documentation/hybrid-cloud/platform-deployment-options/#civo) #### Get Started Now! [Qdrant Hybrid Cloud](/hybrid-cloud/) marks a significant advancement in vector databases, offering the most flexible way to implement vector search. You can test out Qdrant Hybrid Cloud today! Simply sign up for or log into your [Qdrant Cloud account](https://cloud.qdrant.io/login) and get started in the **Hybrid Cloud** section. Also, to learn more about Qdrant Hybrid Cloud read our [Official Release Blog](/blog/hybrid-cloud/) or our [Qdrant Hybrid Cloud website](/hybrid-cloud/). For additional technical insights, please read our [documentation](/documentation/hybrid-cloud/). [![hybrid-cloud-get-started](/blog/hybrid-cloud-launch-partners/hybrid-cloud-get-started.png)](https://cloud.qdrant.io/login)
blog/hybrid-cloud-launch-partners.md
--- title: Recommendation Systems description: Step into the next generation of recommendation engines powered by Qdrant. Experience a new level of intelligence in application interactions, offering unprecedented accuracy and depth in user personalization. startFree: text: Get Started url: https://cloud.qdrant.io/ learnMore: text: Contact Us url: /contact-us/ image: src: /img/vectors/vector-1.svg alt: Recommendation systems sitemapExclude: true ---
recommendations/recommendations-hero.md
--- title: Recommendations with Qdrant description: Recommendation systems, powered by Qdrant's efficient data retrieval, boost the ability to deliver highly personalized content recommendations across various media, enhancing user engagement and accuracy on a scalable platform. Explore why Qdrant is the optimal solution for your recommendation system projects. features: - id: 0 icon: src: /icons/outline/chart-bar-blue.svg alt: Chart bar title: Efficient Data Handling description: Qdrant excels in managing high-dimensional vectors, enabling streamlined storage and retrieval for complex recommendation systems. - id: 1 icon: src: /icons/outline/search-text-blue.svg alt: Search text title: Advanced Indexing Method description: Leveraging HNSW indexing, Qdrant ensures rapid, accurate searches crucial for effective recommendation engines. - id: 2 icon: src: /icons/outline/headphones-blue.svg alt: Headphones title: Flexible Query Options description: With support for payloads and filters, Qdrant offers personalized recommendation capabilities through detailed metadata handling. sitemapExclude: true ---
recommendations/recommendations-features.md
--- title: Learn how to get started with Qdrant for your recommendation system use case features: - id: 0 image: src: /img/recommendations-use-cases/music-recommendation.svg srcMobile: /img/recommendations-use-cases/music-recommendation-mobile.svg alt: Music recommendation title: Music Recommendation with Qdrant description: Build a song recommendation engine based on music genres and other metadata. link: text: View Tutorial url: /blog/human-language-ai-models/ - id: 1 image: src: /img/recommendations-use-cases/food-discovery.svg srcMobile: /img/recommendations-use-cases/food-discovery-mobile.svg alt: Food discovery title: Food Discovery with Qdrant description: Interactive demo recommends meals based on likes/dislikes and local restaurant options. link: text: View Demo url: https://food-discovery.qdrant.tech/ caseStudy: logo: src: /img/recommendations-use-cases/customer-logo.svg alt: Logo title: Recommendation Engine with Qdrant Vector Database description: Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database. link: text: Read Case Study url: /blog/case-study-dailymotion/ image: src: /img/recommendations-use-cases/case-study.png alt: Preview sitemapExclude: true ---
recommendations/recommendations-use-cases.md
--- title: Qdrant Recommendation API description: The Qdrant Recommendation API enhances recommendation systems with advanced flexibility, supporting both ID and vector-based queries, and search strategies for precise, personalized content suggestions. learnMore: text: Learn More url: /documentation/concepts/explore/ image: src: /img/recommendation-api.svg alt: Recommendation api sitemapExclude: true ---
recommendations/recommendations-api.md
--- title: "Recommendation Engines: Personalization & Data Handling" description: "Leverage personalized content suggestions, powered by efficient data retrieval and advanced indexing methods." build: render: always cascade: - build: list: local publishResources: false render: never ---
recommendations/_index.md
--- title: Subscribe section_title: Subscribe subtitle: Subscribe description: Subscribe image: src: /img/subscribe.svg srcMobile: /img/mobile/subscribe.svg alt: Astronaut form: title: Sign up for Qdrant Updates description: Stay up to date on product news, technical articles, and upcoming educational webinars. label: Email placeholder: info@qdrant.com button: Subscribe footer: rights: "&copy; 2024 Qdrant. All Rights Reserved" termsLink: url: /legal/terms_and_conditions/ text: Terms policyLink: url: /legal/privacy-policy/ text: Privacy Policy impressumLink: url: /legal/impressum/ text: Impressum ---
subscribe/_index.md
--- title: Customer Support and Sales Optimization icon: customer-service sitemapExclude: True --- Current advances in NLP can reduce the routine work of customer service by up to 80 percent. No more answering the same questions over and over again: a chatbot will do that, and people can focus on complex problems. And it's not only about automated answering; it is also possible to control the quality of the department and automatically identify flaws in conversations.
use-cases/customer-support-optimization.md
--- title: Media and Games icon: game-controller sitemapExclude: True --- Personalized recommendations for music, movies, games, and other entertainment content are also a kind of search, except the query is not a text string but user preferences and past experience. And with Qdrant, user preference vectors can be updated in real time; no need to deploy a MapReduce cluster. Read more about "[Metric Learning Recommendation System](https://arxiv.org/abs/1803.00202)"
use-cases/media-and-games.md
--- title: Food Discovery weight: 20 icon: search sitemapExclude: True --- There are multiple ways to discover things, text search is not the only one. In the case of food, people rely more on appearance than description and ingredients. So why not let people choose their next lunch by its appearance, even if they don't know the name of the dish? We made a [demo](https://food-discovery.qdrant.tech/) to showcase this approach.
use-cases/food-search.md
--- title: Law Case Search icon: hammer sitemapExclude: True --- The wording of court decisions can be difficult not only for ordinary people, but sometimes for the lawyers themselves. It is rare to find words that exactly match a similar precedent. That's where AI, which has seen hundreds of thousands of court decisions and can compare them using a vector similarity search engine, can help. Here is some related [research](https://arxiv.org/abs/2004.12307).
use-cases/law-search.md
--- title: Medical Diagnostics icon: x-rays sitemapExclude: True --- The growing volume of data and the increasing interest in the topic of health care is creating products to help doctors with diagnostics. One such product might be a search for similar cases in an ever-expanding database of patient histories. Search not only by symptom description, but also by data from, for example, MRI machines. Vector Search [is applied](https://www.sciencedirect.com/science/article/abs/pii/S0925231217308445) even here.
use-cases/medical-diagnostics.md
--- title: HR & Job Search icon: job-search weight: 10 sitemapExclude: True --- Vector search engine can be used to match candidates and jobs even if there are no matching keywords or explicit skill descriptions. For example, it can automatically map **'frontend engineer'** to **'web developer'**, no need for any predefined categorization. Neural job matching is used at [MoBerries](https://www.moberries.com/) for automatic job recommendations.
use-cases/job-matching.md
--- title: Fashion Search icon: clothing custom_link_name: Article by Zalando custom_link: https://engineering.zalando.com/posts/2018/02/search-deep-neural-network.html custom_link_name2: Our Demo custom_link2: https://qdrant.to/fashion-search-demo sitemapExclude: True --- Empower shoppers to find the items they want by uploading any image or browsing through a gallery instead of searching with keywords. A visual similarity search helps solve this problem. And with the advanced filters that Qdrant provides, you can be sure to have the right size in stock for the jacket the user finds. Large companies like [Zalando](https://engineering.zalando.com/posts/2018/02/search-deep-neural-network.html) are investing in it, but we also made our [demo](https://qdrant.to/fashion-search-demo) using a public dataset.
use-cases/fashion-search.md
--- title: Qdrant Vector Database Use Cases subtitle: Explore the vast applications of the Qdrant vector database. From retrieval augmented generation to anomaly detection, advanced search, and recommendation systems, our solutions unlock new dimensions of data and performance. featureCards: - id: 0 title: Advanced Search content: Elevate your apps with advanced search capabilities. Qdrant excels in processing high-dimensional data, enabling nuanced similarity searches, and understanding semantics in depth. Qdrant also handles multimodal data with fast and accurate search algorithms. link: text: Learn More url: /advanced-search/ - id: 1 title: Recommendation Systems content: Create highly responsive and personalized recommendation systems with tailored suggestions. Qdrant’s Recommendation API offers great flexibility, featuring options such as best score recommendation strategy. This enables new scenarios of using multiple vectors in a single query to impact result relevancy. link: text: Learn More url: /recommendations/ - id: 2 title: Retrieval Augmented Generation (RAG) content: Enhance the quality of AI-generated content. Leverage Qdrant's efficient nearest neighbor search and payload filtering features for retrieval-augmented generation. You can then quickly access relevant vectors and integrate a vast array of data points. link: text: Learn More url: /rag/ - id: 3 title: Data Analysis and Anomaly Detection content: Transform your approach to Data Analysis and Anomaly Detection. Leverage vectors to quickly identify patterns and outliers in complex datasets. This ensures robust and real-time anomaly detection for critical applications. link: text: Learn More url: /data-analysis-anomaly-detection/ ---
use-cases/vectors-use-case.md
--- title: Fintech icon: bank sitemapExclude: True --- Fraud detection is like recommendations in reverse. One way to solve the problem is to look for similar cheating behaviors. But often this is not enough, and manual rules come into play. The Qdrant vector database allows you to combine both approaches, because it provides a way to filter the results using arbitrary conditions. And all this can happen in the time it takes the client to lift their hand from the terminal. Here is a related [research paper](https://arxiv.org/abs/1808.05492).
use-cases/fintech.md
--- title: Advertising icon: ad-campaign sitemapExclude: True --- User interests cannot be described with rules, and that's where neural networks come in. The Qdrant vector database allows sufficient flexibility in neural network recommendations so that each user sees only relevant ads. Advanced filtering mechanisms, such as geo-location, do not compromise on speed and accuracy, which is especially important for online advertising.
use-cases/advertising.md
--- title: Biometric identification icon: face-scan sitemapExclude: True --- Not only totalitarian states use facial recognition. With this technology, you can also improve the user experience and simplify authentication. Make it possible to pay without a credit card and buy in the store without cashiers. And the scalable face recognition technology is based on vector search, which is what Qdrant provides. Some of the many articles on the topic of [Face Recognition](https://arxiv.org/abs/1810.06951v1) and [Speaker Recognition](https://arxiv.org/abs/2003.11982).
use-cases/face-recognition.md
--- title: E-Commerce Search icon: dairy-products weight: 30 sitemapExclude: True --- Increase your online basket size and revenue with AI-powered search. No need for manually assembled synonym lists: neural networks get the context better. With a neural approach, the search results can be not only precise, but also **personalized**. And Qdrant will be the backbone of this search. Read more about [Deep Learning-based Product Recommendations](https://arxiv.org/abs/2104.07572) in the paper by The Home Depot.
use-cases/e-commerce-search.md
--- title: Vector Database Use Cases section_title: Apps and ideas Qdrant make possible type: page description: Discover the diverse applications of Qdrant vector database, from retrieval and augmented generation to anomaly detection, advanced search, and more. build: render: always cascade: - build: list: local publishResources: false render: never aliases: - /solutions/ ---
use-cases/_index.md
--- salesTitle: Qdrant Enterprise Solutions description: Our Managed Cloud, Hybrid Cloud, and Private Cloud solutions offer flexible deployment options for top-tier data privacy. cards: - id: 0 icon: /icons/outline/cloud-managed-blue.svg title: Managed Cloud description: Qdrant Cloud provides optimal flexibility and offers a suite of features focused on efficient and scalable vector search - fully managed. Available on AWS, Google Cloud, and Azure. - id: 1 icon: /icons/outline/cloud-hybrid-violet.svg title: Hybrid Cloud description: Bring your own Kubernetes clusters from any cloud provider, on-premise infrastructure, or edge locations and connect them to the Managed Cloud. - id: 2 icon: /icons/outline/cloud-private-teal.svg title: Private Cloud description: Deploy Qdrant in your own infrastructure. form: title: Connect with us # description: id: contact-sales-form hubspotFormOptions: '{ "region": "eu1", "portalId": "139603372", "formId": "fc7a9f1d-9d41-418d-a9cc-ef9c5fb9b207", "submitButtonClass": "button button_contained", }' logosSectionTitle: Qdrant is trusted by top-tier enterprises ---
contact-sales/_index.md
--- title: Qdrant Hybrid Cloud features: - id: 0 content: Privacy and Data Sovereignty icon: src: /icons/fill/cloud-system-purple.svg alt: Privacy and Data Sovereignty - id: 1 content: Flexible Deployment icon: src: /icons/fill/separate-blue.svg alt: Flexible Deployment - id: 2 content: Minimum Cost icon: src: /icons/fill/money-growth-green.svg alt: Minimum Cost description: Seamlessly deploy and manage the vector database across diverse environments, ensuring performance, security, and cost efficiency for AI-driven applications. startFree: text: Get Started url: https://cloud.qdrant.io/ contactUs: text: Request a demo url: /contact-hybrid-cloud/ image: src: /img/hybrid-cloud-graphic.svg alt: Enterprise-solutions sitemapExclude: true ---
hybrid-cloud/hybrid-cloud-hero.md
--- title: "Learn how Qdrant Hybrid Cloud works:" video: src: / button: Watch Demo icon: src: /icons/outline/play-white.svg alt: Play preview: /img/qdrant-cloud-demo.png youtube: | <iframe src="https://www.youtube.com/embed/gWH2uhWgTvM?si=l9a27GwI9mpTknDa" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe> sitemapExclude: true ---
hybrid-cloud/hybrid-cloud-video.md
--- content: Do you have further questions? We are happy to assist you. contactUs: text: Contact us url: /contact-hybrid-cloud/ sitemapExclude: true ---
hybrid-cloud/get-contacted-with-question.md
--- title: How it Works steps: - id: 0 number: 1 title: Integration description: Qdrant Hybrid Cloud allows you to deploy managed Qdrant clusters on any cloud platform or on-premise infrastructure, ensuring your data stays private by separating the data and control layers. - id: 1 number: 2 title: Management description: A straightforward Kubernetes operator installation allows for hands-off cluster management, including scaling operations, zero-downtime upgrades and disaster recovery. - id: 2 number: 3 title: Privacy and Security description: The architecture guarantees database isolation. The Qdrant Cloud only receives telemetry through an outgoing connection. No access to databases or your Kubernetes API is necessary to maintain the highest level of data security and privacy. image: src: /img/how-it-works-scheme.svg alt: How it works scheme sitemapExclude: true ---
hybrid-cloud/hybrid-cloud-how-it-works.md
--- title: Get started today subtitle: Turn embeddings or neural network encoders into full-fledged applications for matching, searching, recommending, and more. button: text: Get Started url: https://cloud.qdrant.io/ sitemapExclude: true ---
hybrid-cloud/hybrid-cloud-get-started.md
--- title: Qdrant Hybrid Cloud Features cards: - id: 0 icon: src: /icons/outline/server-rack-blue.svg alt: Server rack description: Run clusters in your own infrastructure, incl. your own cloud, infrastructure, or edge - id: 1 icon: src: /icons/outline/cloud-check-blue.svg alt: Cloud check description: All benefits of Qdrant Cloud - id: 2 icon: src: /icons/outline/cloud-connections-blue.svg alt: Cloud connections description: Use the Managed Cloud Central Cluster Management - id: 3 icon: src: /icons/outline/headphones-blue.svg alt: Headphones-blue description: Premium support plan option available link: content: Learn more about Qdrant Hybrid Cloud in our documentation. url: /documentation/hybrid-cloud/ text: See Documentation sitemapExclude: true ---
hybrid-cloud/hybrid-cloud-features.md