to hallucination: they can make up information and give you inaccurate information. And the second problem is that evaluation is actually really difficult. Even if you're fortunate enough to get that user feedback, that thumbs-up or thumbs-down signal from your user, pinpointing the exact reason and getting to an actionable next step to improve your service can be very difficult. And the picture just becomes more complicated when you add in an additional step of search and retrieval. So once again, what we're doing here is taking the user query, embedding the query, retrieving relevant documents, and then enriching our prompt with context from those relevant documents in addition to the user query, which is then sent to the LLM. This introduces an additional possible failure mode, namely that we might retrieve the wrong context. And this is where Arize comes in: we offer LLM observability, which just means we provide the foundational layer that allows you to detect issues when they arise and quickly identify the root cause.

So let's dive in a little bit deeper. I'm going to dive into this particular box and unpack what it means to get bad retrievals, and we're going to keep running with this example of the Arize documentation, because that's the exact data we're going to be working with today: in the workshop portion of this talk we're going to be debugging a chatbot over a knowledge base containing the Arize docs. So let's run with this example for a minute. When you create this indexed knowledge base, what you do is take the Arize documentation, chunk it up, and store it in Pinecone, and then users ask questions of your question-answering service. One question that someone might ask is: what's the price of Arize?
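Before getting into the failure modes, here is roughly what that query path looks like in code. This is a hedged reconstruction, not the workshop's actual code: it uses the openai and pinecone-client interfaces from around the time of this talk (both clients have since changed), and the index name, metadata field, and prompt wording are assumptions.

```python
# A rough sketch of the query path described above: embed the query, retrieve the two most
# similar chunks, enrich the prompt, and generate a response. Index name and metadata field
# are assumptions, not the workshop's exact code.
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_API_KEY"
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_PINECONE_ENV")
index = pinecone.Index("arize-docs")  # assumed index name

def answer(user_query: str) -> str:
    # 1. Embed the user query.
    query_embedding = openai.Embedding.create(
        model="text-embedding-ada-002", input=[user_query]
    )["data"][0]["embedding"]

    # 2. Retrieve the two most similar documentation chunks by cosine similarity.
    results = index.query(vector=query_embedding, top_k=2, include_metadata=True)
    contexts = [match["metadata"]["text"] for match in results["matches"]]

    # 3. Enrich the prompt with the retrieved context.
    prompt = (
        "Answer the user's question using only the documentation below.\n\n"
        + "\n\n".join(contexts)
        + f"\n\nQuestion: {user_query}"
    )

    # 4. Generate the final response.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}]
    )
    return completion["choices"][0]["message"]["content"]
```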
It turns out there are a couple of different ways that a query could go wrong. In our particular case, as you're going to see, we don't actually have any information in our documentation about pricing. So the first issue that could happen is that maybe I don't have any similar documents, any relevant documents, to retrieve from my knowledge base, and it's not actually possible for my chatbot to answer the question because it doesn't have the information it needs. One of the things we're going to show you is how to identify areas of user interest that my knowledge base doesn't cover, so that I can go augment my knowledge base. But there's a second mode of failure: even if you have information about the query, even if you have documentation around pricing, for example, sometimes the retrieval step just fails. Even though you're retrieving the most similar documents by cosine similarity, maybe they're not actually relevant to the question, so you're feeding irrelevant context to the LLM, and if you feed bad context to the LLM you can't expect it to give you a good response. Those are the two particular modes of failure we're going to be investigating today.

All right, very good. So now we're going to hop into the demo portion of the workshop, but before we jump in, let me just explain what we're going to be using. We're going to
be using a tool from Arize called Phoenix. It's relatively new — it was open-sourced about two months ago or so — and you can just pip install it with pip install arize-phoenix. Phoenix is an application that runs in your notebook environment, so today we're going to be running Phoenix on your Colab server; you could also run it on your local laptop or anywhere you can run a Jupyter notebook. What Phoenix does is help you detect issues with your machine learning models and your LLM applications, and it helps you pinpoint the mode of failure and find the root cause of the issue. What kinds of issues are we talking about? There are a lot of different applications you could use Phoenix for, but the use case we're dealing with today is a search and retrieval application built with LangChain and Pinecone over the Arize documentation. Okay, hopefully that's clear so far, and you're going to have the opportunity to follow along. So here's a link — I'm going to drop this in the chat. Let me open up the link: here's the Colab that we're going to be running today, and I'm copying and pasting that into the chat... if I can find it... here we go. Okay, so that's the full link. If you want, you can also just use the short link: https://bit.ly/arize-langchain-pinecone-colab.
Okay. And before we actually dive into the notebook too much, I want to show you the cool part. This is where we're heading: this right here is Phoenix, and I'm actually running it on my laptop right now. What you're seeing in this view is a low-dimensional representation of our embeddings — our query embeddings in blue, and our database entries, the documentation chunk embeddings, in purple. If I circle a cluster here — let me look for a good area of overlap — I'm seeing this area right here with a little bit of overlap between some of the blue and purple points. Let's take a look at what these points actually represent. Again, these blue points are user queries; one example of a user query is "Do we have the ability to resolve a monitor?" That's a question someone might ask of our search and retrieval application — and a little bit of context: monitoring is one of the things Arize offers, so this is a user asking questions about the Arize platform. It looks like all of these questions are about monitors, and here in purple you can see some chunks of documentation, and the documentation is also about monitors. So what does that tell us? It means we have reasonable embeddings that place semantically similar pieces of text nearby in the embedding space. We're going to dive into how you can use this visualization in just a moment, but let's return to the notebook. This notebook is pretty long
and a little complicated. In order to actually run all the cells, what you need to do is come here and click on this script — this is a script that will build you a Pinecone index of the Arize documentation using LangChain. I'm not going to go over how to run this script in this particular workshop; I'm going to assume that if you want to run the notebook, you'll go ahead and run the script at home. To do that, you'll need to go to pinecone.io, create a Pinecone account, create an OpenAI account, get your OpenAI key, and run the script, and then you will wind up with a Pinecone database full of documentation chunks from the Arize documentation. For this workshop today, you'll notice that a lot of these cells have red exclamation points ahead of them; for any cell that has a red exclamation point, go ahead and skip it — those are the cells that require your OpenAI API key and your Pinecone API key. A lot of the cells don't have those red exclamation points, so go ahead and run those, just the ones without the exclamation points, all the way down, and then you will wind up inside of Phoenix at the very end.
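For reference, an index-building script like the one mentioned above does roughly the following. This is a hedged sketch rather than the actual script from the repo: the loader URL, chunk sizes, and index name are illustrative, and it uses the LangChain and pinecone-client interfaces from around the time of this workshop.

```python
# Rough sketch of building a Pinecone index of documentation with LangChain. Not the actual
# workshop script; URLs, chunk sizes, and the index name are illustrative assumptions.
import pinecone
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_PINECONE_ENV")
index_name = "arize-docs"  # assumed index name
if index_name not in pinecone.list_indexes():
    # text-embedding-ada-002 embeddings are 1536-dimensional
    pinecone.create_index(index_name, dimension=1536, metric="cosine")

# Load and chunk the documentation (loader and chunk sizes are illustrative).
docs = WebBaseLoader("https://docs.arize.com/arize/").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed each chunk and upsert it into the Pinecone index.
embeddings = OpenAIEmbeddings(openai_api_key="YOUR_OPENAI_API_KEY")
Pinecone.from_documents(chunks, embeddings, index_name=index_name)
```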
For me, I'm going to do something slightly different, though: I'm going to copy over my Pinecone API key and all that information — let me grab it real quick — just so you can see what this would look like once you've actually built your Pinecone index. First I'm going to grab my Pinecone environment and paste it in here, then grab my Pinecone API key and paste it in here, grab my Pinecone index name — which I think is called arize-docs — and also grab my OpenAI API key. In general I would advise you not to do what I'm doing right now, which is sharing your private API keys; if you do, just remember to revoke them afterwards. All right, let me see: I've got my OpenAI API key, I've got my Pinecone information. At this point I should be able to Run All, so I'm going to run all and let the notebook run — for this particular notebook you don't need a GPU. While it's running — let me just check, looks like it's going — I'm going to dive into the particular chatbot architecture we're dealing with today. This is the chatbot we've built: the purple part of this diagram is the LangChain and Pinecone chatbot, and the blue part is the foundation model provider, in this case OpenAI. I'm going to walk you through the five steps that take the user from asking a question to receiving a response. In step one, the user asks a question — and again, this is a chatbot over the Arize documentation, so it's probably going to be some kind of question related to the Arize platform. The very first thing
we're going to do is use a LangChain component called OpenAIEmbeddings, which makes a request to OpenAI's embeddings API in order to embed the user query and gets back the embedding for that query — this particular chatbot is using the text-embedding-ada-002 model. In the second step — oh, in the third step, rather — we take the embedding we just computed for the query and run a similarity search across Pinecone, which looks for the two most similar pieces of context in our vector database by cosine similarity, and that is what's being returned over here. So context 0 and context 1 are just chunks of our documentation; you can think of each as a paragraph of documentation. In step four we actually generate the response to the user: we take the user query and the retrieved context and format them into a single prompt. The prompt says something to the effect of "here's a question from the user, here's some documentation, answer the question using the documentation," and we send that prompt to the chat completions API — in this case we're using GPT-3.5 Turbo — which generates the final response that is then returned to the user. So hopefully that's clear at a high level — that's what the chatbot architecture looks like. And if you come back to the notebook, you can see some of these components right in this cell here: you can see the embedding model name, the actual OpenAI embeddings, the document search over Pinecone, and this is where we define the LangChain chain that handles the user query — and in this particular case it says it's already run.
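That chain-definition cell does roughly the following. This is a hedged reconstruction based on the description above, not the exact notebook code: the chain class, index name, and parameters are assumptions, using the LangChain interfaces from around the time of the workshop, and it assumes the API keys and pinecone.init call from the earlier sketch have already been set up.

```python
# Hedged reconstruction of the chain-definition cell (not the exact notebook code).
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
docsearch = Pinecone.from_existing_index(index_name="arize-docs", embedding=embeddings)  # assumed index name

chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",  # stuff the retrieved chunks directly into the prompt
    retriever=docsearch.as_retriever(search_kwargs={"k": 2}),  # retrieve the two most similar chunks
)

query_text = "How do I get an Arize API key?"
response = chain.run(query_text)
print(response)
```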
So in this particular case, I'll show you what it looks like when you actually get this whole thing running. We've got a piece of query text, and the query text is a user asking "How do I get an Arize API key?" — a pretty natural thing someone might ask of our documentation. You can see we run the command chain.run(query_text), which runs all of the steps I just went through in the previous diagram in order to get a response for the user, and that's what we see here. The response from the chatbot says, to get an Enterprise API key you need to click the "Get Your API Key" button on the top right of the Explorer page, which will generate your API key, etc. The other things we're recording here are things that are relevant for Phoenix: the first piece of retrieved context, the similarity score for that first piece of context, the second piece of retrieved context, the second similarity score, and the embedding of the query from the user. Hopefully that's pretty clear so far. Next we're going to talk about the data. Again, I don't want to dive too deep into the details of the notebook — the notebook is pretty complex — but the TL;DR is that we're trying to get your data to look like this; that's basically what the whole notebook is about. First, let's go over what our database data looks like. Again, this is the data from
our knowledge base. Our knowledge base really is quite simple: it consists of chunks of documentation. Here's an example — this first one says "to get started quickly, you can use the scripts provided with the distribution, extract the tar file provided by the Arize team," etc. — and then here's the actual embedding vector that is stored inside of Pinecone. That's what the database data looks like. And here's what our query data looks like. Each query is again going to have a piece of text, just the question asked by the user; in this case the user is asking "How do I use the SDK to upload a ranking model?" — once again a question someone might ask of our documentation. We've got the associated embedding vector for that query, we have the retrieved context and its similarity, the second piece of context and its similarity, and the response from the chatbot — the chatbot responded "to use the SDK to upload a ranking model, you will need to first install the SDK," etc. We've got user feedback, which in this case is just thumbs up or thumbs down: a plus one means the user gave the response a thumbs up, a minus one means a thumbs down. And then we're going to get into these last two columns in a moment.
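Once the dataframes have this shape, the notebook hands them to Phoenix along with schemas describing which columns are which. The sketch below is an approximation rather than the notebook's exact code: the dataframe and column names are illustrative, and the Schema and launch_app arguments differ a bit across Phoenix versions, so treat the notebook itself as the source of truth.

```python
# Rough sketch of handing dataframes like these to Phoenix (arize-phoenix). Dataframe and
# column names are illustrative; exact Schema/launch_app arguments vary by Phoenix version.
import phoenix as px

database_schema = px.Schema(
    prompt_column_names=px.EmbeddingColumnNames(
        vector_column_name="text_vector",  # the chunk embedding stored in Pinecone
        raw_data_column_name="text",       # the documentation chunk itself
    ),
)
query_schema = px.Schema(
    prompt_column_names=px.EmbeddingColumnNames(
        vector_column_name="text_vector",  # the query embedding
        raw_data_column_name="text",       # the user's question
    ),
    response_column_names="response",
    tag_column_names=[
        "context_text_0", "context_similarity_0",
        "context_text_1", "context_similarity_1",
        "user_feedback",
    ],
)

database_ds = px.Dataset(dataframe=database_df, schema=database_schema, name="database")
query_ds = px.Dataset(dataframe=query_df, schema=query_schema, name="query")
session = px.launch_app(primary=query_ds, reference=database_ds)
print(session.url)  # the "view the Phoenix app in your browser" link
```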
Okay, awesome. Now that we've gone over what the data looks like — and it looks like my Colab has finished running — I'm going to scroll all the way down to the bottom of the Colab, and I encourage you to do the same. Scrolling all the way down to the bottom of the Colab, you see this link right here: to view the Phoenix app in your browser, visit this link. Again, Phoenix is just an application that's running, in this case, on this very Colab server. So now we get to the fun part, which is actually using Phoenix to debug your application. This just takes a minute. What you're seeing here is the Phoenix home page, and the first thing I'm going to do is click here in the embeddings section on the centered text vector, and that brings me back to that embeddings view I showed you a couple of minutes ago. Right now what's happening is that we're taking those high-dimensional embeddings we just defined in our notebook and reducing the dimension down to three dimensions so that you can see them, and this is what you see: query data points, query embeddings, in blue, and database entries — knowledge base entries, chunks of your documentation — in purple. It's spinning around, which is pretty cool, but let's see what we can actually do with this — what value can you get out of it? I'm going to tell you a couple of stories here; let me go back to the slides for a minute. I'm going to tell you a couple of stories about the kinds of questions you can answer using Phoenix. The first story is: how do I catch bad responses? Let's suppose for a minute that I have some user feedback, a thumbs-up or thumbs-down signal from my users telling me "hey, this is a good response" or "hey, this is a bad response." How can I visualize those responses and understand them in a meaningful way? Let's do that — let's hop back into Phoenix. The first thing I'm going to do is
call your attention to the display section down here in the bottom-left corner. You'll see there's this "color by" dropdown; in the "color by" dropdown, let's go ahead and select "dimension." All a dimension is — think of it as a field of your data, and the field of my data I care about right now is user feedback. Let me pull this up a little bit so we can see it. What you're seeing over here now in the point cloud — let me toggle off the database entries for a minute — are the queries, only the queries right now. The purple queries are the ones where our users gave a thumbs down to the response from the chatbot, the green queries are the points where our users gave a thumbs up, and there are also some gray points in here which just didn't get any feedback from the user. So if I care about where my bad responses are — what kinds of questions my users are asking where the chatbot is producing responses that get thumbs down — what you can do is come up to this metric dropdown up here and, once again, select user feedback, and you'll notice that these clusters over on the left just flipped around. Maybe I should explain these a little bit: once we reduce the dimension of your embeddings, we then cluster in this low-dimensional space in order to surface meaningful groups of queries in
this case. Let's go ahead and sort these clusters by lowest metric value, which means the clusters here are now sorted in ascending order by average user feedback. You can see that this cluster is up at the top, meaning it's the most problematic cluster, because it has an average user feedback of -1 — every point in this cluster got a thumbs down. So let's take a look at this cluster. Here's the cluster right here, and the points in the cluster are here. I did a little bit of foreshadowing earlier, so the theme of this cluster should be pretty familiar: "How much does the Arize platform cost and how do you charge?" "Is there a cost for Arize beyond an annual subscription price?" "What's the difference between Arize's Pro and Enterprise plans?" "How much does Arize charge?" All of the questions in this cluster are around cost, and you can see that the purple ones are the ones where people gave the response a thumbs down. And you can notice something really interesting here: even though I've already told you that we don't have any pricing information in our documentation, you'll notice a few of these examples actually give you — let me see if I can find a good one, how about this — a few of these examples are actually giving you a response. So this question, the question
from the user, is "Is there a cost for Arize beyond an annual subscription price?" The response is "Arize offers different subscription plans to choose from depending on your needs; however, there may be additional costs associated with using certain features," etc. So it's giving what sounds like a pretty confident response. But here's the catch: if you actually look at the retrieved pieces of context, the first piece says "Arize is an open platform that works with your machine learning infrastructure," etc. — no information about cost. The second piece of retrieved context is "Arize supports full role-based access control using organizations and spaces," etc. — also no information around cost. So it turns out this particular response is actually a hallucination: the chatbot is just making stuff up. We don't actually have different subscription plans, and you can imagine how having a chatbot that makes up this kind of information could make customer calls pretty awkward for you. This is really undesirable behavior: when your chatbot doesn't retrieve relevant context, you want it to say "I don't know" or "I'm not sure of the answer," and that's not what's happening in this case. So hopefully at this point I've shown you how we can help you pinpoint the subjects your users are asking about where you're getting a lot of negative user feedback. There is a little bit of a drawback here, which is that to do this you actually need user feedback — you need that thumbs-up/thumbs-down signal from your users — and you're not always going to have that. Or even if you do have that signal, it's not always going to help you pinpoint the exact mode of failure; all you know is that the user didn't like the response,
but you don't know exactly where in the search and retrieval process the bad response came from — what part of the application failed. So the next idea I'm going to introduce is using query density to highlight broad topics that your documentation does not cover, and this is something you can do even when you don't have ground truth. To do that, let's come back to our display settings and color by dataset. Now, once again, I'm looking at my queries in blue and my documentation chunks in purple. If I come up to the metric dropdown, let's select Euclidean distance — oh, and also select "most drift" here. Now I'm looking at the clusters and sorting them based on the purity of the cluster, meaning the proportion of queries in the cluster versus database entries. You can see in this blue bar right here — this blue bar means that this cluster contains all blue data points, all query data points. Why is that a problem? It means we have this cluster of queries out here that is really far away from our knowledge base; the embeddings for these queries are distant from our knowledge base. And if we click on what these questions are asking, it identifies the exact same cluster of questions. Again, we don't have any information about cost in our documentation, and hence
what you see is that all of these questions about the cost of our platform wind up all the way out here, far away from any purple data points — kind of out in left field. This method allows you to detect questions, or areas of user interest, that are not covered by your documentation even if you don't have any feedback signal from the user. Okay, so that was the second story I wanted to tell you: I showed you how you can use Arize to detect areas of user interest that are far away from anything in your knowledge base. But the downside of this approach is that it only identifies really broad topics; it's not going to show you the small gaps. Sometimes you might have a topic in your knowledge base that is covered by your documentation, but the user might be asking a very particular question that isn't answered in that particular piece of documentation. So now let's start thinking about how to get something more fine-grained that will really help pinpoint the cause of failure for individual queries. One first thing you might try in this situation is something like cosine similarity. You might think: hey, during that retrieval step we're using cosine similarity as a heuristic to retrieve documents, so maybe I should expect the documents I retrieved that have a high cosine similarity with the query
embedding to be highly relevant, to be good retrievals, and conversely that documents with a low cosine similarity to the query are not going to be very relevant, not going to be very good retrievals. It's going to turn out that this is a bit of a mixed bag: cosine similarity works in some cases and not in others. Let's dive in, though. Let's take a look here, back down in the display section: I'm going to go back to "dimension," and this time the dimension I care about is context_similarity_0. What this number is is the cosine similarity between each query and the most similar piece of context in the knowledge base — that is, the first piece of context retrieved by this particular search and retrieval application. Here in the legend you can see that the cosine similarity runs in a range from about 0.75 up to about 0.9-ish, and those are the colors you're seeing over here in the point cloud. Let's toggle off our database entries for a moment; I'm going to select "dimension," which deselects all the points, and then I'm just going to select the lowest cosine similarity scores here, so now you should see a bunch of purple data points over here.
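For reference, a context-similarity number like this is just the cosine similarity between the query embedding and the retrieved chunk's embedding — something along these lines (toy three-dimensional vectors for illustration; the real text-embedding-ada-002 embeddings are 1536-dimensional):

```python
# Cosine similarity between a query embedding and a retrieved chunk's embedding.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_embedding = np.array([0.1, 0.3, 0.5])    # toy vectors for illustration only
context_embedding = np.array([0.2, 0.1, 0.4])
print(cosine_similarity(query_embedding, context_embedding))  # ~0.92: "very similar"
```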
Let's go ahead and take a look at some of these points. All right, I'm seeing some purple data points; let's look at this one: "What counts against my plan usage?" You can see that the cosine similarity is quite low, and indeed, if you look at the question and at this retrieved piece of context, they're not actually that relevant to each other. The question is asking about plan usage — what counts — it's kind of a question around pricing, I think, and the context says "model performance metrics measure how well your model performs in production." It's talking about something totally different, and it indeed has a low cosine similarity score, so that's pretty encouraging. Conversely, let's look at what the good retrievals look like, the high-cosine-similarity retrievals. These green data points represent retrieved pieces of context that had a very high similarity with the query. Let's take a look at this one: this question is asking "What drift metrics are supported in Arize?" and you can see the cosine similarity is quite high, 0.88, and this very first piece of context actually answers the question — it's a relevant piece of context: "Arize calculates drift metrics such as population stability index, KL divergence," etc. The important thing to note here is that this piece of context directly answers the question and it has a high cosine similarity. But already you're starting to see a little bit of a gap, because here's the second piece of retrieved context: you can see it has basically the same cosine similarity score, very similar, and if you read this piece of context, it says "drift monitors measure distribution shift, which is the difference between two statistical distributions," etc. It turns out this one doesn't actually answer the question, but it has a lot of the same words in common — it has "drift," it has "metrics" — and those superficial similarities are causing the cosine similarity to be high even though the context is not
directly answering the question. You can really see this if you zoom out here and look at our pricing cluster — again, this is the cluster we've been looking at that has all of these questions about pricing, all the way out in left field. If you look here, all of these retrievals have to be irrelevant retrievals, because we don't have any information about price in our documentation, but you can see that the cosine similarity here actually runs the gamut: you've got some retrievals with really low cosine similarity and others with really high cosine similarity. And if you look at one of these really-high-cosine-similarity ones, what you notice is that the question is "What is the cost of the Arize platform?" and the retrieved piece of context says "Arize is an open platform that works with your machine learning infrastructure." This isn't talking about cost at all — not a relevant retrieval — but it has superficial similarities in common with the query: it's talking about Arize, it's talking about platforms, and those superficial similarities are causing it to have a high cosine similarity even though that retrieved piece of context isn't really relevant to the user's query. So hopefully at this point I have convinced you that similarity is not the same as relevance, and really the thing we care about when we're talking about that retrieval
step, the thing we really care about, is whether or not the context is relevant. So the last idea I want to leave you with is this new thing we're working on, which is the idea of using ranking metrics — in particular, LLM-assisted ranking metrics — to directly measure the effectiveness of the retrieval step, and I'm going to walk you through what that means right here. Looks like I'm running a little bit long on time — Amanda, are we still good? I think I can wrap up in about five-ish minutes. Yeah, you're fine. Sounds good. So let me explain what I mean by LLM-assisted ranking metrics, and I'm going to do that with an example. Here, once again, we've got some queries — again, queries about the Arize documentation. The particular query we're looking at is "What format should the prediction timestamp be?" The prediction timestamp is something you use with Arize, and the user is asking what format to use for the timestamp. Here are the two pieces of context that were actually retrieved by this particular application. The first piece of context says the timestamp indicates when the data will show up in the UI, and — here's the important part — it is sent as an integer representing the Unix timestamp in seconds. So there you go: "the Unix timestamp in seconds" is telling you the format right there. And here is the actual relevant/not-relevant classification. Where did we get this? We get it by asking GPT-4: "Hey, here's a question and here's a piece of context — does that piece of context answer the question, relevant or not relevant?" So we're getting this relevant/not-relevant signal from another LLM.
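In code, getting that relevant/not-relevant signal from GPT-4 looks roughly like this. It's a hedged sketch using the older openai client; the system message paraphrases the one quoted from the notebook a little later and is not claimed to be the exact prompt.

```python
# Rough sketch of asking GPT-4 for a binary relevance label for one (query, context) pair.
import openai

def is_context_relevant(query: str, reference_text: str) -> int:
    system_message = (
        "You will be given a query and a reference text. You must determine whether the "
        "reference text contains an answer to the input query. Your response must be binary "
        "and must not contain any characters other than 0 or 1. 0 means the reference text "
        "does not contain an answer; 1 means it does."
    )
    completion = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": f"Query: {query}\n\nReference text: {reference_text}"},
        ],
    )
    return int(completion["choices"][0]["message"]["content"].strip())
```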
I'll show you the place in the notebook where we actually do that in a second, but let me just show you the second example here to make it more concrete. Again, same query: "What format should the prediction timestamp be?" This piece of context says the prediction timestamp represents when your model's prediction was made and that it's a column you send to Arize — it does not talk about the format. So in this case GPT-4 correctly says, hey, this is an irrelevant piece of context; it does not address the query from the user. And why are we going through the trouble of getting these relevant/not-relevant classifications? Well, the really cool thing is that once you have this information, you can compute ranking metrics on your queries and directly measure the effectiveness of that retrieval step. There are many different ranking metrics you could choose from; the one we're going to talk about today is called precision at k, and in particular precision at 2, because for this particular application we're only retrieving two pieces of context. So what is precision at 2? It's just the number of the two retrieved pieces of context that are actually relevant to your query, divided by the total number of retrieved documents, which is two — in other words, what percentage of the retrieved documents are actually relevant. That's what precision at 2 is.
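Given those per-chunk relevance labels, precision at k is straightforward to compute, for example:

```python
# Precision@k: the fraction of the k retrieved chunks judged relevant. Here k = 2,
# because this application retrieves two pieces of context per query.
from typing import List

def precision_at_k(relevance_labels: List[int], k: int = 2) -> float:
    top_k = relevance_labels[:k]
    return sum(top_k) / len(top_k)

print(precision_at_k([1, 0]))  # one of the two retrieved chunks was relevant -> 0.5
print(precision_at_k([0, 0]))  # neither retrieved chunk was relevant         -> 0.0
```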
Okay, let me just quickly show you where this actually is in the notebook so you can take a look for yourself. We're computing this right here, in this cell. All we're doing is setting a system message for GPT-4 that says: you will be given a query and a reference text; you must determine whether the reference text contains an answer to the input query; your response must be binary and should not contain any other characters aside from zero and one; zero means the reference text does not contain an answer, one means the reference text contains an answer to the query. Then we paste in the query, we paste in the context, and we ask it for a response. That's how we got that relevant/not-relevant signal. I don't claim this is the exact prompt you should be using for this particular task — it's just the one that came to mind — but that's how we're getting that relevant/not-relevant classification. Okay, so now we've talked about what precision at 2 is and how we use an LLM to get this precision number; now let's actually use it. To use it, I'm once again going to come down to "color by dimension" and select the OpenAI precision-at-2 dimension. What you're seeing now over here is our point cloud — let me once again deselect my database entries — and now you're going to see a couple of different colors.
Purple means a precision at 2 of zero, which just means that none of my retrievals were relevant. You also see these blue points, which are 0.5 precision at 2, meaning one of my two retrievals was relevant. And the green points are good: green means both of the retrieved pieces of context were relevant and I have a precision-at-2 score of one. Once again, let's select the metric we care about — in this case OpenAI precision at 2 — and sort our clusters by lowest metric value, so that the clusters are sorted in ascending order by that precision-at-2 value. And boom — once again we have identified that same cluster right off the bat, which is the cluster of questions that have no grounding, no relevant documents, in the Arize documentation. You can see that all of these data points are purple, meaning that for each data point — in this case the query is something like "Do you have a pricing calculator?" — the OpenAI relevance is "irrelevant," meaning GPT-4 is correctly saying, hey, this piece of context is not actually relevant; this is a problematic point you need to look at, because the user is asking a question and getting a response for a data point that doesn't have any supporting context in your knowledge base. Okay, that just about wraps up the presentation. I think we want to reserve a little
bit of time just for questions and answers, but before we do that, I also want to say thank you so much to our partners LangChain and Pinecone — thank you so much to Lance and Kevin, we really appreciate it. If you liked the demo, please go ahead and try out Phoenix; once again, you can just pip install it, it's fully open source, and consider leaving us a star on GitHub. Stay tuned for more integrations, because we've got a lot planned for tighter integrations with LangChain and Pinecone. And if you run into any issues with the notebook or with Phoenix, please go ahead and join our Slack community — you can ask those questions in the Phoenix support channel and I will get back to you really quickly. All right, awesome — thanks so much, Xander.
Yeah, we have time for a few questions, but just to reiterate what I said in the beginning: this session has been recorded, we'll be sharing it with you after the event, and we'll also share it on our social channels and YouTube. So let's dive on in. How do you feel about domain adaptation — fine-tuning small LLMs to increase the accuracy of the vectors? That's an interesting question. Okay, so the question is around fine-tuning — maybe the question is really, when would you use fine-tuning, and in what circumstance? I think the answer is that I'm not quite certain; I think we're still waiting to see best practices around that. Certainly it's not the first thing I would try — fine-tuning is a relatively heavy lift, and there are likely other areas of the system I would probably try to improve first. That's my initial thought, though. I don't know, Lance or Kevin, do you have any thoughts there? I can add something quickly. I think what Xander just showed is a very nice overview of how to think about this. Remember the flow: we start with documents, we do the retrieval step, you get your documents, you pass those to the model, and the model synthesizes an answer. The retrieval step can, and should, be debugged independently, and that's what Xander showed very nicely — there are lots of tools to say, am I retrieving relevant docs from my vector DB?
So that's stage one, and it should be independently evaluated. Stage two is: take those docs, put them in the context window, and use an LLM to generate the answer. That's where you might think about fine-tuning — if, for example, you wanted answers of a very particular type, or you had very domain-specific documentation that required certain nuances in those answers, you might fine-tune for that answer stage. But the document retrieval stage should be independently evaluated, and that's what Xander showed very nicely here. So I think fine-tuning is definitely an interesting tool on the answer-synthesis stage of this retrieval-augmented generation flow. I'm curious too, Lance, because I think this is a really interesting question: there's another way you can try to tweak the output at the response generation stage, which is to tweak the system message, or tweak the prompt, and that's certainly going to be less work than fine-tuning. So in your opinion, do you recommend that people tweak that system message in the response generation step first? Intuitively that seems like where I would start. Yeah, okay, so this is an important point. There are some very nice tweets out there from Travis Fisher — Karpathy has also mentioned this — that there's kind of a hierarchy of things you might try to improve your answer quality. Prompt
engineering is the first thing, and that's easiest — you should certainly start there. In a sense, when you're providing this context you're already giving the model a lot of information, and you can also try few-shot learning, where you give it example pairs. Prompt engineering and few-shot approaches are certainly advised prior to the heavier-weight approach of actually fine-tuning a new model. There are some really nice platforms for fine-tuning — Mosaic in particular has done a lot there — but it's certainly heavier weight than tuning the parameters of your overall chain, which is another thing: you can tune your chunk size, you can tune your chunk overlaps, you can tune your prompt. There's a ton of things you can do before you get into fine-tuning, of course.
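As one concrete example of those knobs, chunk size and chunk overlap are just parameters of the text splitter used when the index is built; adjusting them looks roughly like this (an illustrative sketch with arbitrary values, not code from the workshop):

```python
# Illustrative sketch: tuning chunk size and overlap with a LangChain text splitter.
from langchain.text_splitter import RecursiveCharacterTextSplitter

long_document_text = "Arize is an ML observability platform. " * 200  # stand-in for a real document
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_text(long_document_text)
print(len(chunks), "chunks")  # smaller chunk_size -> more, shorter chunks in the vector DB
```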
Yeah, thanks, Lance — hopefully that answered the question. Great, and if you have any links to those tweets, I think that's something folks are asking for in the chat. Perfect. Now we have a question about hybrid search: do you have any experience improving the efficiency of hybrid search methods when dealing with large numbers of legal documents, for example with the use of search prompt generation? And while this is obviously talking about legal docs, you can apply it to anything with essentially the same formula. Yeah, I'm actually not deeply familiar with hybrid search and I haven't worked in the legal space in particular, so I'm not sure I'm able to answer that one — but go ahead and follow up with me and we can talk about it more. Yeah, I can chime in with a couple of ideas here. We do have several legal customers who are trying to do things like chatbot-style Q&A on contracts and agreements and things like that. With respect to actual hybrid search, there's a traditional way to do it, and that's with the document stores you may have today, such as Elasticsearch, where you have those documents stored, you have lexical search already, and you have the ability to do things like faceting and pagination and all the other fun search-engine techniques you can use with those tools. One thing Pinecone has done over the last several months is introduce our own version of a hybrid search capability. How it works is that not only are you creating your dense embeddings — which is what you're already doing today when you upload your vectors to Pinecone — there's also the notion of a sparse embedding, and we recommend the Hugging Face model SPLADE. We're able to store essentially two vectors: the sparse vector, which is the lexical representation, and of course the traditional dense vector, which is the semantic representation. Then at query time we do the same thing: take the customer's query, generate a sparse version and a dense version, and send both to Pinecone, and what you get out of that is an automatically re-ranked result that blends the lexical representation of your very specific domain vocabulary with the semantic intent of the query that's coming in.
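For reference, a sparse-dense query like the one described looks roughly like this with the pinecone-client interface from around this time. The index name and the tiny vectors are illustrative stand-ins: real dense vectors must match the index's dimension, and the sparse indices/values would come from a model like SPLADE.

```python
# Rough sketch of a Pinecone sparse-dense ("hybrid") query; values are illustrative stand-ins.
import pinecone

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="YOUR_PINECONE_ENV")
index = pinecone.Index("hybrid-legal-docs")  # assumed index name (hybrid indexes use the dotproduct metric)

dense_query = [0.12, 0.08, 0.44]  # dense (semantic) query embedding; must match the index dimension
sparse_query = {"indices": [102, 5031, 77001], "values": [0.6, 1.2, 0.3]}  # sparse (lexical) terms

results = index.query(
    vector=dense_query,
    sparse_vector=sparse_query,
    top_k=3,
    include_metadata=True,
)
for match in results["matches"]:
    print(match["id"], match["score"])  # results are scored across both representations
```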
It works very well — we've written a really nice paper on it as well, one of our data scientists has — and we're very happy to share that information. In fact, someone asked a Q&A question earlier and I dropped the link there, but I can drop it in the chat here as well. Yeah, and we have a ton of really great resources in our Learning Center on our website, so I highly recommend checking it out. One last question — not super deep on embeddings here — but any intuition on what the common causes of poor retrievals are? Common words, clauses, sentences — especially for poor retrievals that have high cosine similarity scores? Yeah, that's a good question. Okay, so what are the common causes of poor retrievals? I think part of it is that in these applications we're using cosine similarity — we're using similarity — and similarity could be capturing a lot of different things. As you're mentioning, it could be capturing things like what words are there; one interesting thing is it could be capturing things like the length of the piece of text, or whether it's a question; it could be capturing things around tone or sentiment. So depending on what model you're using, the embeddings could be capturing all of this kind of information, and we're using similarity as a proxy for relevance when
we're actually trying to retrieve pieces of context. But I think what it really boils down to is that similarity is just a proxy for relevance — it's not relevance itself — so we can't expect that similarity step to always perform really well, because after all, being similar is not the same as being relevant. I guess that's my intuition there. Great. With that, we are at time. I want to say thank you to everybody who participated — really appreciate you being here and sharing your knowledge. Also, for those who are in the Bay Area: on July 13th we are having a Pinecone Summit centering around chatbots and hallucinations, so if you are out there, I hope you can join us. Thank you again — round of applause for our hosts. Thanks, everybody. Bye. Thanks, Xander, you can go ahead and go off.

Okay, we'll give it a second for people to come on in — my dog being the soundtrack in the background. Okay, guys, we're going to jump on in. Welcome to Troubleshooting Search and Retrieval: Enhanced Performance with Powerful AI Solutions. My name is Amanda Wagner, I'm the senior community manager at Pinecone, and we are absolutely thrilled to have you here. For those of you who may not be super familiar with Pinecone: Pinecone is a vector database that makes it easy to build high-performance vector search applications — way more to come on that. But before we dive into today's content, I want to go over a few basic
housekeeping rules. Number one, we ask that you use the chat for chat: go ahead, say hello, let us know where you're coming from, let us know what you're building and why you're here — we want to hear from you. But when you have a question, we ask that you ask all questions in the Q&A portion of the Zoom; this helps us get to everybody's questions when it comes to the Q&A. So again: chat in the chat, questions in the Q&A portion of the Zoom. If you miss something, no worries — this event is being recorded and we will share it with you via email after the event. We also share all of our previous events on YouTube and on our social channels, so if you have not followed and subscribed, I highly suggest you do so now. In that same email we will be sharing a post-event survey; it takes five seconds and helps us know what we're doing right and what we can do more of,
and gives us additional ideas to bring you great content. If you have any questions after the event, go ahead and ask them in the Pinecone forums — you can find those on our website under Community. Now, without further ado, I'm going to hand it over to Xander to tell us a little bit about what we're going to be learning today. Xander, take it away. Xander, you're on mute — it has to happen at least once every webinar. Can everybody hear me? Awesome, thank you, Amanda. I'm going to share out my screen — looks like I am; can people see my screen here? Thumbs up, awesome. So let's go ahead and start the slideshow. Thank you so much for joining us today. My name is Xander, I'm a developer advocate at Arize AI, and today we're going to be talking about how to troubleshoot your search and retrieval applications. With me today I also have Lance Martin from LangChain and Kevin Butler from Pinecone. Lance, do you want to give us a brief intro to yourself? Yeah, sure, great to be here and see all of you. I'm at LangChain, as mentioned; prior to that I spent a lot of time working on applied AI in the self-driving-car industry. Nice. And Kevin, can you tell us about yourself? Sure, hi everyone, my name is Kevin Butler. I'm a customer success engineer at Pinecone, and part of my role is to help customers onboard and enable them from a technical standpoint. I've been in the search space for several years now, both from a lexical and a semantic
search perspective. Right, very good — thanks, Kevin. So let's go ahead and kick off the presentation. I'm going to start out just by giving you an overview of our agenda. We're going to start with Lance, who is going to talk about what search and retrieval is, how it works, and give us an introduction to LangChain. Then Kevin's going to talk about Pinecone, and then we're going to get into the LLM deployment stack and start talking about what can go wrong when you're deploying search and retrieval applications. After we've done all that, we're going to hop into an interactive workshop where we're actually going to detect some issues with a search and retrieval application and identify the root cause of the issue using an open-source project called Phoenix from Arize. Okay, so without any further ado, I'm going to kick it over to Lance, and he's going to tell us more about search and retrieval in the context of LangChain. Thanks a lot. Well, first and foremost, maybe it's good to introduce LangChain, which is an application development framework for language models. One of the most popular use cases for LangChain is the retrieval-augmented generation flow shown here, in which you typically start with some documents — it could be your data, it could be public data (YouTube, Twitter), it could be storage; we have over 80 integrations with various data sources — and this allows you to pull data
into the world of language models and language model applications. Following that, typically we perform splitting; this is due to the need to store smaller-sized chunks in vector DBs, typically because of the LLM context window — we'll talk about that later. In any case, typically you're starting with sources from some large number of possible integrations, splitting them, and storing them — Pinecone, as we'll talk about, is one of the primary vector DBs. And then there's the retrieval step: basically what you're doing here is asking a question to the model — for example, asking a question about your documents — and it's going to retrieve relevant chunks for your question from your storage source, your vector DB, pack those into the prompt, and pass those to the language model. That's really the flow we'll be talking about a bit today, and LangChain, as mentioned, has a lot of integrations across this flow: over 80 on the document-connector side, over 30 integrations, including Pinecone, for storage and retrieval, and then over 40 different integrations for language models you can use to synthesize those final answers. Maybe next slide. Of course, one of the challenges here is a kind of paradox of choice: you have many different options across this stack and many different parameters, which motivates the need for evaluations, which I think Xander's going to talk about a bit later.
We've also had a lot of questions over time about answer quality: you store and embed these text chunks in your vector DB, but how do you actually troubleshoot when you have strange or low-quality answers? How do you go back and debug your embedded texts and how they relate to your answer or your question? I think Xander's going to provide some nice demonstrations there, and I'll hand this over to Kevin to talk about Pinecone, which is one of our primary storage integrations. Thank you, Lance. So Pinecone, as Amanda already kind of defined for us, is a core component of all of this process. Of course, with any GPT-style application — whether it's a chatbot, semantic search, or a variety of other use cases — vector databases allow us to store that data in a very explicit way and actually use it. Next slide. When we talk about vector databases, what we have below here is four different types of custom databases, if you will: we're typically familiar with key-value stores, we understand document stores
such as your Elasticsearch and other traditional search-engine-style document stores, then of course we have graph databases, and then finally the vector database. Pinecone was built purposefully and specifically for vectors from the start, so it is designed to be an efficient and scalable database. Next slide. You're probably going to see an image similar to what we show here on the right-hand side: this is a typical GPT-style, Q&A-style, chatbot-style application where you have your data on the left-hand side, we use an embedding model to create the embeddings from your data and store them into Pinecone, and then, if we start to work from the right-hand side, we see that the application making a query also needs the query to be vectorized and then run against Pinecone. Once we get the results from Pinecone, that becomes part of the context and part of the prompt that will ultimately go to the AI model. Next slide. Okay, so why choose Pinecone? First and foremost, again, it's purposefully built as a vector database first, and as you're making your choices on a vector database there are a lot of options out there — anything from databases you can run on your laptop, to a server, to a large-scale vector database such as Pinecone. We're able to offer large scale at low latencies. Ease of use is one of our favorite parts of the product, in that the developer experience is very good, very
satisfactory, as far as being able to use the simple API as well as our Python client, and we have a couple of other clients and connectors as well. It's easy to operate, it's easy to scale, and at large scale it is low cost — we essentially scale it to the size you need, and the cost-effectiveness is there as well. In the meantime, I would recommend that you visit us at app.pinecone.io to sign up for a free account, no credit card required; that way you can actually use the API and use the database for development, experimentation, and evaluation, and see how you like it. Next slide. However, the story doesn't end here. LangChain is an extremely powerful framework — we use it a lot in-house, and we wrote a huge YouTube series all about LangChain, so I definitely do recommend it; in fact, the link is right there. And then finally, of course, app.pinecone.io — again, try that free account, no credit card — and we are that enterprise-ready solution to serve your production-scale application. But ultimately, the next question we might be asking ourselves — and this is where I'm going to hand it back to Xander — is: what if our solution still isn't producing high-quality results? Awesome. Yeah, so thanks, guys. As Kevin mentioned, the workshop that we're going to be getting into is actually debugging issues with search and retrieval: detecting issues as they happen and pinpointing the exact mode of
failure. We're going to get into all of that — we'll get into the details, and you're going to have the opportunity to run a notebook where you'll do all of that — but before we dive into the details too much, I want to keep it a little bit high-level at first. Lance gave you the what: he gave you how search and retrieval works and what it is. But let's also backtrack for a minute and talk about why I need search and retrieval — why are we going through all this trouble? To illustrate this idea, I'm going to take you to ChatGPT. If you go to ChatGPT and ask "tell me about Arize," the response you're going to get back is: "I'm sorry, as of my knowledge cutoff in September 2021, I don't have any specific information on Arize." It turns out Arize is a little bit too recent to be in the training set for ChatGPT — ChatGPT doesn't know anything about Arize. But if you were to go to the Arize AI documentation homepage, copy and paste our documentation into ChatGPT, and then ask "tell me about Arize," ChatGPT is going to give you a really nice answer: it's going to say Arize AI is a real-time machine learning (ML) observability and explainability platform. Okay, so why do we need search and retrieval? Well, there are a couple of reasons. Search and retrieval, first of all, allows you to provide your own
6243fab641c5-46
First of all, search and retrieval allows you to provide your own context, your own data, and give that information to a large language model to synthesize a response. That can be useful, for example, if you're dealing with documentation that's constantly changing, or with a proprietary dataset that a large language model like GPT-4 was never trained on. The second reason is that search and retrieval allows you to ground your responses in trusted sources of information: you can give the model a piece of documentation that you know is correct, and that increases the chances that the LLM produces good, high-quality output. And, as I touched on already, the third reason is that it's really easy to swap documents in and out of your knowledge base and your index, easier than fine-tuning an LLM, for example. So that hopefully explains why you might want to use search and retrieval.

Now let's start talking about the various kinds of problems that can come up when you're using these kinds of applications.
What we have in this diagram is a really high-level picture of the simplest possible question-answering service you could build with an LLM. In this service, the user over here on the left sends some kind of question to the service. That question is sent to a foundation model provider such as OpenAI or Anthropic, which produces a response that is then sent back to the user, and ideally you would have some kind of thumbs-up / thumbs-down button or some other mechanism to collect user feedback and figure out whether that was a good response from your service.

Even in this very simple picture there are already a couple of problems that can arise. The first problem is that some of these LLMs are prone to hallucination: they can make up information, they can give you inaccurate information. The second problem is that evaluation is actually really difficult. Even if you're fortunate enough to get that thumbs-up or thumbs-down signal from your user, pinpointing the exact reason and getting to an actionable next step to improve your service can be very difficult.

And the picture just becomes more complicated when you add in the additional step of search and retrieval. Once again, what we're doing here is taking the user query, embedding it, retrieving relevant documents, and then enriching our prompt with context from those relevant documents, in addition to the user query itself, before it all gets sent to the LLM.
This introduces an additional possible failure mode: we might retrieve the wrong context. And this is where Arize comes in. We're going to offer you LLM observability, which just means we provide a foundational layer that allows you to detect issues when they arise and quickly identify the root cause.

So let's dive a little deeper. I'm going to zoom in on this particular box and unpack what it means to get bad retrievals, and we're going to keep running with this example of the Arize documentation, because that's exactly the data we're going to be working with today. In the workshop portion of this talk we're going to be debugging a chatbot over a knowledge base containing the Arize docs. So let's run with this example for a minute. When you create this indexed knowledge base, what you do is take the Arize documentation, chunk it up, and store it in Pinecone, and then users ask questions of your question-answering service. One question someone might ask is: what's the price of Arize? And it turns out there are a couple of different ways that a query could go wrong.
In our particular case, as you're going to see, we don't actually have any information in our documentation about pricing. So the first issue that could happen is that maybe I don't have any similar documents, any relevant documents, to retrieve from my knowledge base, and so it isn't actually possible for my chatbot to answer the question, because it doesn't have the information it needs. One of the things we're going to show you is how to identify areas of user interest that your knowledge base doesn't cover, so you can go augment your knowledge base. But there's a second mode of failure, which is that even if you do have information about the query, even if you do have documentation around pricing, for example, sometimes the retrieval step just fails: even though you're retrieving the most similar documents by cosine similarity, they may not actually be relevant to the question, so you end up feeding irrelevant context to the LLM. And again, if you feed bad context to the LLM, you can't expect it to give you a good response. Those are the two particular modes of failure we're going to be investigating today.

All right, very good. Now we're going to hop into the demo portion of the workshop, but before we jump in, let me explain what we're going to be using. We're going to be using a tool from Arize called Phoenix. It's relatively new; it was open sourced about two months ago, and you can just pip install it with pip install arize-phoenix.
Phoenix is an application that runs in your notebook environment. Today we're going to be running Phoenix on your Colab server, but you could also run it on your laptop or anywhere you can run a Jupyter notebook. What Phoenix does is help you detect issues with your machine learning models and your LLM applications, and it helps you pinpoint the mode of failure and find the root cause of the issue. What kinds of issues are we talking about? There are a lot of different applications you could use Phoenix for, but the use case we're dealing with today is a search and retrieval application built with LangChain and Pinecone over the Arize documentation.

Okay, hopefully that's clear so far, and you're going to have the opportunity to follow along. Here's the link; I'm going to drop it in the chat. Let me open it up: here's the Colab we're going to be running today, and I'm copying and pasting that into the chat, if I can find it. Here we go. Okay, so that's the full link.
If you want, you can also just use the bit.ly: https://bit.ly/arize-langchain-pinecone-colab. Okay. And before we dive into the notebook too much, I want to show you the cool part first. This is where we're heading: this right here is Phoenix, and I'm actually running it on my laptop right now. What you're seeing in this view is a low-dimensional representation of our embeddings: our query embeddings in blue, and our database entries, the documentation chunk embeddings, in purple. If I circle a cluster here, let me look for a good area of overlap, I'm seeing this area right here with a little bit of overlap between some of the blue and the purple points. Let's take a look at what these points actually represent. Again, the blue points are user queries; one example of a user query is "do we have the ability to resolve a monitor?" That's a question someone might ask of our search and retrieval application, and for a little context, monitoring is one of the things Arize offers, so this is a user asking questions about the Arize platform. It looks like all of these questions are about monitors, and here in purple you can see some chunks of documentation, and the documentation is also about monitors. So what does that tell us?
It just means that we have reasonable embeddings, embeddings that place semantically similar pieces of text nearby in the embedding space. We're going to dive into how you can use this visualization in just a moment, but let's return to the notebook.

This notebook is pretty long and a little complicated. In order to run all the cells, you need to come here and click on this script, which will actually build you a Pinecone index of the Arize documentation using LangChain. I'm not going to go over how to run the script in this workshop; if you want to run the notebook end to end, go ahead and run the script at home. To do that, you'll need to go to pinecone.io and create a Pinecone account, create an OpenAI account, get your OpenAI key, and run the script, and then you'll wind up with a Pinecone database full of documentation chunks from the Arize docs.

For this workshop today, you'll notice that a lot of these cells have red exclamation points ahead of them. For any cell with a red exclamation point, go ahead and skip it; those are the cells that require your OpenAI API key and your Pinecone API key. But you'll see that a lot of the cells don't have the red exclamation points.
Go ahead and run those cells, the ones without the exclamation points, all the way down, and you'll wind up inside of Phoenix at the very end.

I'm going to do something slightly different, though: I'm going to copy over my Pinecone API key and all of that information, just so you can see what this looks like once you've actually built your Pinecone index. First I'll grab my Pinecone environment and paste it in here, then my Pinecone API key, then my Pinecone index name, which I think is called arize-docs, and I'll also grab my OpenAI API key. In general I would advise you not to do what I'm doing right now, which is sharing all of your private API keys; if you do, just remember to revoke them afterwards.

All right, I've got my OpenAI API key and my Pinecone information here, so at this point I should just be able to run all. I'm going to let the notebook run; for this particular notebook you don't need a GPU. Let me just check that it's going. It looks like it is, so while it's running I'm going to dive into the particular chatbot architecture we're dealing with today.
This is the chatbot we've built. The purple part of the diagram is the LangChain and Pinecone chatbot, and the blue part is the foundation model provider, in this case OpenAI. I'm going to walk you through the five steps that take the user from asking a question to receiving a response.

In step one, the user asks a question, and since this is a chatbot over the Arize documentation, it's probably going to be some kind of question about the Arize platform. In step two, the very first thing we do is use a LangChain component called OpenAIEmbeddings, which makes a request to OpenAI's embeddings API to embed the user query and gets back the embedding for that query; the chatbot data we're dealing with uses the text-embedding-ada-002 model. In step three, we take the embedding we just computed for the query and run a similarity search across Pinecone, which looks for the two most similar pieces of context in our vector database by cosine similarity; that's what's being returned over here, so context 0 and context 1 are just chunks of our documentation, think of each one as a paragraph of documentation. In step four we actually generate the response to the user: we take the user query and the retrieved context and format them into a single prompt that says something to the effect of "here's a question from the user, here's some documentation, answer the question using the documentation," and we send that prompt to the chat completions API, in this case GPT-3.5-turbo, which generates the final response that is then returned to the user. Hopefully that makes the chatbot architecture clear at a high level.

If you come back to the notebook, you can actually see some of these components right in this cell: the embedding model name, the OpenAIEmbeddings object, the document search over Pinecone, and the place where we define the LangChain chain that handles the user query.
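(For readers following along outside the notebook, here is a rough sketch of how those pieces might be wired together with the 2023-era LangChain and Pinecone clients; the notebook's actual cell may differ, the index name is an assumption, and Pinecone is assumed to be initialized as in the earlier sketch.)

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# Step 2: the component that calls OpenAI's embeddings API for the user query
# (text-embedding-ada-002 is the default model).
embeddings = OpenAIEmbeddings()

# Step 3: wrap the existing Pinecone index as a vector store and retrieve the
# two most similar documentation chunks by cosine similarity.
docsearch = Pinecone.from_existing_index(index_name="arize-docs", embedding=embeddings)
retriever = docsearch.as_retriever(search_kwargs={"k": 2})

# Step 4: stuff the retrieved chunks plus the question into a single prompt and
# send it to the chat completions API (gpt-3.5-turbo) to generate the answer.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

response = chain.run("How do I get an Arize API key?")
```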
In this particular case it's already run, so I'll show you what it looks like when you get the whole thing running. We've got a piece of query text, and the query text is a user asking "how do I get an Arize API key?" That's a pretty natural thing someone might ask of our documentation. You can see that we run the command chain.run(query_text), and that runs all of the steps I just went through in the previous diagram in order to get a response for the user, which is what we see here. The response from the chatbot says that to get your API key you need to click the "Get Your API Key" button on the top right of the Explorer page, which will generate your API key, and so on. The other things we're recording here are the things that are relevant for Phoenix: the first piece of retrieved context, the similarity score for that first piece of context, the second piece of retrieved context, the second similarity score, and the embedding of the user's query.
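(Continuing the sketch above, this is roughly how those extra columns, the retrieved contexts, their similarity scores, and the query embedding, might be captured alongside the response; it is illustrative, not the notebook's exact code, and it reuses the embeddings, docsearch, and chain objects defined in the previous sketch.)

```python
query_text = "How do I get an Arize API key?"

query_embedding = embeddings.embed_query(query_text)                       # query vector
docs_and_scores = docsearch.similarity_search_with_score(query_text, k=2)  # [(Document, score), ...]
response_text = chain.run(query_text)                                      # chatbot answer

record = {
    "text": query_text,
    "text_vector": query_embedding,
    "context_text_0": docs_and_scores[0][0].page_content,
    "context_similarity_0": docs_and_scores[0][1],
    "context_text_1": docs_and_scores[1][0].page_content,
    "context_similarity_1": docs_and_scores[1][1],
    "response": response_text,
}
```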
Okay, hopefully that's pretty clear so far. The next step is to talk about the data. Again, I don't want to dive too deep into the details of the notebook; the notebook is pretty complex, but the tl;dr is that we're trying to get your data to look like this. That's basically what the whole notebook is about.

First, let's go over what our database data looks like. This is the data from our knowledge base, and the knowledge base really is quite simple: it consists of chunks of documentation. Here's an example; the first one says "to get started quickly you can use the scripts provided with the distribution, extract the tar file provided by the Arize team," and so on, and then here's the actual embedding vector that is stored inside Pinecone. That's what the database data looks like.

Here's what our query data looks like. Each query has a piece of text, the question asked by the user; in this case the user is asking "how do I use the SDK to upload a ranking model?", once again a question someone might ask of our documentation. We've got the associated embedding vector for that query, the retrieved context, the similarity score, and the second piece of context with its score, plus the response from the chatbot; here the chatbot responded "to use the SDK to upload a ranking model you will need to first install the SDK," and so on. We've also got user feedback, which in this case is just thumbs up or thumbs down: a plus one means the user gave the response a thumbs up, a minus one means the user gave it a thumbs down. And then we'll get to these last two columns in a moment.
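(As a rough picture of the two tables being described, with column names that are assumptions rather than the notebook's exact ones:)

```python
import pandas as pd

# Knowledge-base dataframe: one row per documentation chunk stored in Pinecone.
database_df = pd.DataFrame(columns=[
    "text",         # the documentation chunk, e.g. "To get started quickly ..."
    "text_vector",  # the embedding vector stored in Pinecone
])

# Query dataframe: one row per user question handled by the chatbot.
query_df = pd.DataFrame(columns=[
    "text",                                      # the user's question
    "text_vector",                               # embedding of the question
    "context_text_0", "context_similarity_0",    # first retrieved chunk + cosine similarity
    "context_text_1", "context_similarity_1",    # second retrieved chunk + cosine similarity
    "response",                                  # the chatbot's answer
    "user_feedback",                             # +1 thumbs up, -1 thumbs down
    # plus the two columns discussed later: LLM-assisted relevance and precision@2
])
```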
Okay, awesome. Now that we've gone over what the data looks like, it looks like my Colab has finished running, so I'm going to scroll all the way down to the bottom of the Colab, and I encourage you to do the same. Scrolling all the way down, you see this link right here: to view the Phoenix app in your browser, visit this link. Again, Phoenix is just an application that's running, in this case, on this very Colab server.

So now we get to the fun part, which is actually using Phoenix to debug your application. This just takes a minute, and what you're seeing here is the Phoenix home page. The first thing I'm going to do is click here in the embeddings section on "centered text vector," and that's going to bring me back to that page, the embeddings view I showed you a couple of minutes ago.
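(Under the hood, that link comes from launching Phoenix against the two dataframes. A minimal sketch, assuming the Phoenix API of mid-2023; the exact Dataset and Schema arguments may differ between versions, and query_schema / database_schema are assumed to be built earlier in the notebook.)

```python
import phoenix as px

# Wrap each dataframe in a Dataset, with a Schema describing which columns
# hold the raw text and the embedding vectors.
query_ds = px.Dataset(dataframe=query_df, schema=query_schema, name="queries")
database_ds = px.Dataset(dataframe=database_df, schema=database_schema, name="database")

# Launch the app; the returned session exposes the URL shown in the notebook.
session = px.launch_app(primary=query_ds, reference=database_ds)
print(session.url)
```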
Right now what's happening is that we're taking those high-dimensional embeddings we just defined in the notebook and reducing the dimension down to three dimensions so that you can see them, and this is what you see: query data points, the query embeddings, in blue, and database entries, the knowledge base entries, chunks of your documentation, in purple. It's spinning around, which is pretty cool, but let's see what we can actually do with this, what value you can actually get out of it.

Let me go back to the slides for a minute; I'm going to tell you a couple of stories about the kinds of questions you can answer using Phoenix. The first story is: how do I catch bad responses? Let's suppose for a minute that I have some user feedback, a thumbs-up or thumbs-down signal from my users telling me this was a good response or this was a bad response. How can I actually visualize those responses and understand them in a meaningful way?

Let's do that; let's hop back into Phoenix. The first thing I'll do is call your attention to the display section down here in the bottom-left corner. You'll see there's a "color by" dropdown, and in that dropdown let's go ahead and select "dimension." All a dimension is, is a field of your data.
The field of my data that I care about right now is user feedback. Let me pull this up a little bit so we can see it. What you're seeing over here in the point cloud now, if I toggle off the database entries for a minute, are the queries, and only the queries. The purple queries are the ones where our users gave a thumbs down to the response from the chatbot, the green queries are the points where users gave a thumbs up, and there are also some gray points that didn't get any feedback from the user at all.

If what I care about is where my bad responses are, what kinds of questions my users are asking where the chatbot produces responses that get thumbs down, then what you can do is come up to this metric dropdown up here and, once again, select user feedback. You'll notice that these clusters over on the left just flipped around, and maybe I should explain them a little. Once we reduce the dimension of your embeddings, we then cluster in this low-dimensional space in order to surface meaningful groups, in this case groups of queries. Let's go ahead and sort these clusters by lowest metric value, which means the clusters are now sorted in ascending order by average user feedback.
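(Conceptually, that reduce-then-cluster step looks something like the sketch below, using umap-learn and hdbscan; Phoenix does this for you, and its actual implementation and parameters may differ.)

```python
import numpy as np
import umap
import hdbscan

# Project the query embeddings to 3-D for display, then cluster in the
# low-dimensional space to surface groups of related queries.
vectors = np.vstack(query_df["text_vector"].to_list())
points_3d = umap.UMAP(n_components=3).fit_transform(vectors)
cluster_labels = hdbscan.HDBSCAN(min_cluster_size=10).fit_predict(points_3d)

# Rank clusters by average user feedback, worst first, to surface the most
# problematic groups of queries (HDBSCAN labels noise points as -1).
query_df["cluster"] = cluster_labels
worst_first = (query_df[query_df["cluster"] >= 0]
               .groupby("cluster")["user_feedback"].mean()
               .sort_values())
```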
You can see that this cluster is up at the top, meaning it's the most problematic cluster: it has an average user feedback of minus one, which means every point in this cluster got a thumbs down. So let's take a look at it. Here's the cluster, and here are the points in it, and, with a little foreshadowing from earlier, the theme of this cluster should be pretty familiar: "how much does the Arize platform cost and how do you charge?", "is there a cost for Arize beyond an annual subscription price?", "what's the difference between Arize's Pro and Enterprise plans?", "how much does Arize charge?" All of the questions in this cluster are around cost, and again, all of these purple points are the ones where people gave the response a thumbs down.

And you can notice something really interesting here. Even though, as I've already told you, we don't have any pricing information in our documentation, a few of these examples actually give you a response. Let me see if I can find a good one; how about this. The question from the user is: "is there a cost for Arize beyond an annual subscription price?"
The response is: "Arize offers different subscription plans to choose from depending on your needs; however, there may be additional costs associated with using certain features," and so on. So it's giving what sounds like a pretty confident response. But here's the catch: if you actually look at the retrieved pieces of context, the first piece says "Arize is an open platform that works with your machine learning infrastructure," et cetera; it doesn't have any information about cost. The second piece of retrieved context is "Arize supports full role-based access control using organizations and spaces," et cetera; also no information about cost. So it turns out this particular response is actually a hallucination. The chatbot in this case is just making things up; we don't actually have different subscription plans, and you can imagine how having a chatbot that makes up this kind of information could make customer calls pretty awkward. This is really undesirable behavior.
When your chatbot doesn't retrieve relevant context, you want it to say "I don't know," or "I'm not sure of the answer," and that's not what's happening in this case.

So hopefully at this point I've shown you how we can help you pinpoint the subjects your users are asking about where you're getting a lot of negative user feedback. There is a bit of a drawback here, though, which is that to do this you actually need user feedback, that thumbs-up or thumbs-down signal from your users, and you're not always going to have it. And even if you do have it, it's not always going to help you pinpoint the exact mode of failure: all you know is that the user didn't like the response, but you don't know where in the search and retrieval process the bad response came from, which part of the application failed.

So the next idea I'm going to introduce is using query density to highlight broad topics that your documentation does not cover, and this is something you can do even when you don't have ground truth. To do that, let's come back to our display settings and color by dataset, so once again I'm looking at my queries in blue and my documentation chunks in purple. Then I'll come up to the metric dropdown and select Euclidean distance.
Oh, and also select "most drift" here. What that's doing is sorting the clusters by the purity of the cluster, meaning the proportion of queries in the cluster versus database entries. You can see from this blue bar right here that this cluster contains all blue data points, all query data points. Why is that a problem? It means we have this cluster of queries out here that is really far away from our knowledge base; the embeddings for these queries are distant from anything in our knowledge base. And if we click into the questions being asked, it's actually identifying the exact same cluster of questions: we don't have any information about cost in our documentation, and so all of these questions about the cost of our platform wind up all the way out here, far away from any purple data points, out in left field. This method lets you detect questions, areas of user interest, that are not covered by your documentation, even if you don't have any feedback signal from the user.

Okay, so that was the second story I wanted to tell you: I showed you how you can use Arize to detect areas of user interest that are far away from anything in your knowledge base.
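(Outside of Phoenix, one way to approximate this coverage check is to measure, for each query embedding, the distance to its nearest documentation chunk and flag the queries that sit far from everything in the knowledge base. A sketch; Phoenix's own drift and distance metrics are computed differently.)

```python
import numpy as np

query_vecs = np.vstack(query_df["text_vector"].to_list())
doc_vecs = np.vstack(database_df["text_vector"].to_list())

# Pairwise Euclidean distances (queries x docs), then the nearest doc per query.
dists = np.linalg.norm(query_vecs[:, None, :] - doc_vecs[None, :, :], axis=-1)
query_df["nearest_doc_distance"] = dists.min(axis=1)

# Queries with an unusually large nearest-neighbor distance are likely topics the
# knowledge base does not cover (for example, the pricing questions in this demo).
uncovered = query_df.sort_values("nearest_doc_distance", ascending=False).head(20)
```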
The downside of this approach is that it only identifies really broad topics; it's not going to show you the small gaps. Sometimes you have a topic that is covered by your documentation, but the user is asking a very particular question that isn't answered by that particular piece of documentation. So now let's start thinking about how to get something more fine-grained, something that really helps pinpoint the cause of failure for individual queries.

One first thing you might try in this situation is cosine similarity. You might think: during the retrieval step we're using cosine similarity as a heuristic to retrieve documents, so maybe I should expect the retrieved documents with high cosine similarity to the query embedding to be highly relevant, good retrievals, and conversely, maybe documents with low cosine similarity to the query are not going to be very relevant, not very good retrievals. It's going to turn out that this is a bit of a mixed bag: cosine similarity works in some cases and not in others. Let's actually dive in.
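(For reference, the similarity score being discussed is plain cosine similarity between the query embedding and each retrieved chunk's embedding, i.e. something like:)

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```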
Back down in the display section, I'm going to go back to "dimension," and this time the dimension I care about is context similarity 0. This number is the cosine similarity between each query and the most similar piece of context in the knowledge base, which is the first piece of context retrieved by this search and retrieval application. Here in the legend you can see that the cosine similarity runs from about 0.75 up to about 0.9, and those are the colors you're seeing over here in the point cloud. Let's toggle off our database entries for a moment; I'm going to click on the dimension, which deselects all the points, and then select just the lowest cosine similarity scores. Now you should see a bunch of purple data points over here, so let's go ahead and take a look at some of them.

All right, I'm seeing some purple data points. Let's look at this one: "what counts against my plan usage?"
You can see that the cosine similarity is quite low, and indeed, if you look at the question and the retrieved piece of context, they're not actually that relevant to each other. The question is asking about plan usage, what counts against it, which is kind of a pricing question, and the context says "model performance metrics measure how well your model performs in production." It's talking about something totally different, and it does indeed have a low cosine similarity score, so that's pretty encouraging.

Conversely, let's look at what the good retrievals look like, the high cosine similarity retrievals. These green data points represent retrievals where the retrieved piece of context had a very high similarity with the query. Let's take a look at this one: the question is asking what drift metrics are supported in Arize, the cosine similarity is quite high at 0.88, and the very first piece of context actually answers the question; it's a relevant piece of context: "Arize calculates drift metrics such as population stability index (PSI), KL divergence," and so on. The important thing to note here is that this piece of context directly answers the question and it has a high cosine similarity. But already you're starting to see that there's a bit of a gap.
Because here's the second piece of retrieved context, and you can see it has basically the same cosine similarity score, very close. But if you actually read it, it says "drift monitors measure distribution shift, which is the difference between two statistical distributions," and so on. It turns out this one doesn't actually answer the question, but it has a lot of the same words in common: it has "drift," it has "metrics," and those superficial similarities are causing the cosine similarity to be high even though the context is not directly answering the question.

You can really see this if you zoom out and look at our pricing cluster, the cluster we've been looking at with all of those questions about pricing, way out in left field. All of these retrievals have to be irrelevant retrievals, because again, we don't have any information about price in our documentation, but you can see that the cosine similarity here runs the gamut: some retrievals have really low cosine similarity, others have really high cosine similarity. And if you look at one of the ones with really high cosine similarity, the question is "what is the cost of the Arize platform," and the retrieved piece of context says "Arize is an open platform
that works with your machine learning infrastructure." Again, this isn't talking about cost at all; it's not a relevant retrieval, but it has superficial similarities in common with the query: it's talking about Arize, it's talking about platforms, and those superficial similarities cause it to have a high cosine similarity even though the retrieved piece of context isn't really relevant to the user's query. So hopefully at this point I've convinced you that similarity is not the same as relevance, and when we talk about the retrieval step, the thing we really care about is whether the context is relevant.

So the last idea I want to leave you with is this new thing we're working on: using ranking metrics, and in particular LLM-assisted ranking metrics, to directly measure the effectiveness of the retrieval step, and I'm going to walk you through what that means right here. It looks like I'm running a little long on time; Amanda, are we still good? I think I can wrap up in about five minutes. (Yeah, you're fine.) Sounds good.

Let me explain this idea of LLM-assisted ranking metrics with an example. Here once again we've got some queries, queries about the Arize documentation.
The particular query we're looking at is "what format should the prediction timestamp be?" The prediction timestamp is something you use with Arize, and the user is asking what format to use for it. Here are the two pieces of context that were retrieved by this application. The first piece of context says the timestamp indicates when the data will show up in the UI, and, here's the important part, that it is sent as an integer representing the Unix timestamp in seconds. So there you go: "the Unix timestamp in seconds" is telling you the format right there. And here is the actual relevant / not-relevant classification. Where did we get this? We get it by asking GPT-4: here's a question, here's a piece of context, does that piece of context answer the question, relevant or not relevant? So we're getting this relevant / not-relevant signal from another LLM, and I'll show you the place in the notebook where we do that in a second. But let me show you the second example first to make it more concrete.
Again, same query: "what format should the prediction timestamp be?" The second piece of context says the prediction timestamp represents when your model's prediction was made; it's a column you send to Arize. This particular piece of context does not talk about the format, and so in this case GPT-4 correctly says that this is an irrelevant piece of context: it does not address the query from the user.

Why are we going through the trouble of getting these relevant / not-relevant classifications? The really cool thing is that once you have this information, you can compute ranking metrics on your queries and directly measure the effectiveness of that retrieval step. There are many different ranking metrics you could choose from; the one we're going to talk about today is called precision at k, and in particular precision at 2, because for this particular application we're only retrieving two pieces of context. So what is precision at 2? It's just the number of the two retrieved pieces of context that are actually relevant to your query, divided by the total number of retrieved documents, which is two. In other words, what percentage of the retrieved documents are actually relevant; that's what precision at 2 is.
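(Written out as code, the definition is just:)

```python
def precision_at_k(relevance_labels: list[int], k: int = 2) -> float:
    """Precision@k for one query, given binary relevance labels (1 = relevant)
    for the retrieved documents in rank order."""
    return sum(relevance_labels[:k]) / k

precision_at_k([1, 0])  # one of the two retrievals is relevant -> 0.5
```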
Let me quickly show you where this actually is in the notebook so you can take a look for yourself. We compute it right here in this cell, and all we're doing is setting a system message for GPT-4 that says: you will be given a query and a reference text; you must determine whether the reference text contains an answer to the input query; your response must be binary and should not contain any characters other than zero and one; zero means the reference text does not contain an answer, one means the reference text contains an answer to the query. Then we paste in the query, we paste in the context, and we ask it for a response. That's how we get the relevant / not-relevant signal. I don't claim this is the exact prompt you should be using for this particular task; it's just the one that came to mind, but that's how we're getting the relevant / not-relevant classification.
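(A sketch of that relevance check using the 2023-era openai ChatCompletion API; the system message below paraphrases the one just described, and the exact prompt in the notebook may differ.)

```python
import openai

SYSTEM_MESSAGE = (
    "You will be given a query and a reference text. You must determine whether "
    "the reference text contains an answer to the input query. Your response must "
    "be binary and should not contain any characters other than 0 and 1. "
    "0 means the reference text does not contain an answer; 1 means it does."
)

def is_relevant(query: str, reference_text: str) -> int:
    """Ask GPT-4 whether a retrieved chunk answers the query (1) or not (0)."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": f"Query: {query}\nReference text: {reference_text}"},
        ],
    )
    return int(resp["choices"][0]["message"]["content"].strip())
```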
Okay, so now we've talked about what precision at 2 is and how we use an LLM to get it; now let's actually use it. To do that I'm once again going to come down to "color by dimension" and select "openai precision at 2." What you're seeing now over here is our point cloud, and let me once again deselect my database entries. You're going to see a couple of different colors. Purple means a precision at 2 of zero, which just means that none of my retrievals were relevant. You also see blue points, which are a precision at 2 of 0.5, meaning one of my two retrievals was relevant, and the green points are the good ones: green means both retrieved pieces of context were relevant and the precision at 2 score is one.

Once again, let's select the metric we care about, in this case openai precision at 2, and sort our clusters by lowest metric value, so that the clusters are sorted in ascending order by that precision at 2 value. And boom: right off the bat we've identified that same cluster, the cluster of questions that have no grounding, no relevant documents, in the Arize documentation. You can see that all of these data points are purple, meaning that for each one, the query is something like "do you have a pricing calculator," the OpenAI relevance label is "irrelevant," and GPT-4 is correctly saying: this piece of context is not actually relevant; this is a problematic point you need to look at, because the user is asking a question and getting a response for something that doesn't have context in your knowledge base.

Okay, that just about wraps up the presentation. I think we want to reserve a little bit of time for questions and answers, but before we do that, I want to say thank you so much
to our partners LangChain and Pinecone, and thank you so much to Lance and Kevin; we really appreciate it. If you liked the demo, please go ahead and try out Phoenix. Once again, you can just pip install it, it's fully open source, and consider leaving us a star on GitHub. Stay tuned for more integrations, because we've got a lot planned for tighter integrations with LangChain and Pinecone. And if you run into any issues with the notebook or with Phoenix, please join our Slack community; you can ask those questions in the Phoenix support channel and I will get back to you really quickly. All right.

Awesome, thanks so much Xander. We have time for a few questions, but just to reiterate what I said at the beginning, this session has been recorded; we will be sharing it with you after the event, and we'll also share it on our social channels and YouTube. So let's dive on in. How do you guys feel
about domain adaptation, fine-tuning small LLMs to increase the accuracy of the vectors?

That's an interesting question. So the question is really around fine-tuning: when would you use fine-tuning, in what circumstance? You know, I think the answer is that I'm not quite certain; I think we're still waiting to see best practices emerge around that. It's certainly not the first thing I would try. Fine-tuning is a relatively heavy lift, and there are likely other areas of the system I would try to improve first. That's my initial thought, but Lance or Kevin, do you have any thoughts there?

I can add something quickly. I think what Xander just showed is a very nice overview of how to think about this. Remember the flow: we start with documents, we do the retrieval step, you get your documents, you pass those to the model, and the model synthesizes an answer. The retrieval step can be debugged independently, and should be, and that's what Xander showed very nicely: you have lots of tools to ask, am I retrieving relevant docs from my vector DB? So that's stage one, and it should be independently evaluated. Stage two is taking those docs, putting them in the context window, and using an LLM to generate the answer. That's where you might think about fine-tuning.
If, for example, you wanted answers of a very particular type, if you had very domain-specific documentation that required certain nuances in the answers, you might fine-tune for that answer stage. But the document retrieval stage should be independently evaluated, and that's what Xander showed very nicely here. So I think fine-tuning is definitely an interesting tool for the answer-synthesis stage of this retrieval-augmented generation flow.

I'm curious too, Lance, because I think this is a really interesting question: there's another way you can try to tweak the output at the response-generation stage, which is to tweak the system message, or tweak the prompt, and that's certainly going to be less work than fine-tuning. So in your opinion, do you recommend that people tweak that system message in the response-generation step first? Intuitively, that seems like where I would start.

Yeah, okay, so this is an important point. There are some very nice tweets out there from Travis Fisher, and Karpathy has also mentioned this: there's kind of a hierarchy of things you might try to improve your answer quality. Prompt engineering is the first thing, and it's the easiest, so you should certainly start there.
In a sense, when you're providing this context you're giving the model a lot of information, and you can also try few-shot learning, where you give it example pairs. Prompt engineering and few-shot approaches are certainly advised before the heavier-weight approach of actually fine-tuning a new model. There are some really cool platforms for fine-tuning; Mosaic in particular has done a lot there. But it's certainly heavier weight than tuning the parameters of your overall chain, which is another thing you can do: you can tune your chunk size, you can tune your chunk overlap, you can tune your prompt. There's a ton you can do before you get into fine-tuning, of course.
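(Chunk size and overlap, for instance, are typically set on the text splitter used when building the index; in LangChain that looks something like the following, with illustrative values and an assumed raw_documentation_text variable.)

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Two of the easiest retrieval knobs to tune: how big each chunk is and how
# much adjacent chunks overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(raw_documentation_text)  # raw_documentation_text is assumed
```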
Yeah, thanks Lance. Hopefully that answered the question.

Great, and if you have any links to those tweets, that's something folks are asking for in the chat. Now we have a question about hybrid search: do you have any experience in improving the efficiency of hybrid search methods when dealing with large numbers of legal documents, for example with the use of search prompt generation? And while this is obviously about legal docs, you can apply it to anything with essentially the same formula.

Yeah, I'm actually not deeply familiar with hybrid search, and I haven't worked in the legal space in particular, so I'm not sure I'm able to answer that one, but go ahead and follow up with me and we can talk about it more.

I can chime in with a couple of ideas here. We do have several legal customers who are trying to do things like chatbots over contracts and agreements. With respect to actual hybrid search, there's a traditional way to do it, and that's with the document stores you may have today, such as Elasticsearch, where you have those documents stored, you already have lexical search, and you have the ability to do things like faceting and pagination and all the other search-engine techniques that come with those tools. One thing Pinecone has done over the last several months is introduce our own version of a hybrid search capability. The way it works is that not only are you creating your dense embeddings, which is what you're already doing today when you upload your vectors to Pinecone, there's also the notion of a sparse embedding, and we recommend the Hugging Face model SPLADE for that. We're then able to store essentially two vectors per record: a sparse vector, which is the lexical representation, and the traditional dense vector, which is the semantic representation.
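(The rough shape of such a hybrid query with the 2023-era pinecone-client is shown below; it assumes the index object and a dense_query_vector from the earlier sketches, the dense vector comes from your embedding model, the sparse vector from something like SPLADE or BM25, and the values here are toy placeholders.)

```python
hybrid_results = index.query(
    vector=dense_query_vector,                                            # semantic part
    sparse_vector={"indices": [10, 45, 160], "values": [0.5, 0.3, 0.2]},  # lexical part
    top_k=5,
    include_metadata=True,
)
```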
Then at query time we do the same thing: we take the customer's query, generate a sparse version and a dense version, and send both to Pinecone, and what you get back is an automatically re-ranked result that combines the best of the lexical representation, for your very specific domain vocabulary, with the semantic representation, the intent of the incoming query. It works very well; one of our data scientists has written a really nice paper on it, and we're very happy to share that information. In fact, someone asked a Q&A question earlier and I dropped the link there, but I can drop it in the chat here as well.

Yeah, and we have a ton of really great resources in our Learning Center on our website, so I highly recommend checking it out. One last question, not super deep on embeddings here: any intuition on what the common causes of poor retrievals are, common words, clauses, sentences, especially for poor retrievals that have high cosine similarity scores?

Yeah, that's a good question. Okay, so what are the common causes of poor retrievals? I think part of it really is just that in these applications we're using cosine similarity, we're using similarity, and similarity could be capturing a lot of different things, as you're mentioning.
It could be capturing things like which words are there; one interesting thing is that it could be capturing something about the length of the piece of text, or whether it's a question; it could be capturing tone or sentiment. Depending on what model you're using, the embeddings could be capturing all of this kind of information, and we're using similarity as a proxy for relevance when we try to retrieve pieces of context. I think what it really boils down to is that similarity is just a proxy for relevance; it isn't relevance itself, and so we can't expect that similarity step to always perform really well, because, after all, being similar is not the same as being relevant. That's my intuition there, a little bit.

Great. With that, we are at time. I want to say thank you to everybody who participated; we really appreciate you being here and sharing your knowledge. Also, for those who are in the Bay Area, on July 13th we are having a Pinecone Summit centered on chatbots and hallucinations, so if you are out there I hope you can join us. Thank you again, and a round of applause for our hosts.

Thanks, everybody. Bye. Thanks.