d7bac2a60dbd-4
- [Gallery](https://python.langchain.com/en/latest/additional_resources/gallery.html): A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- [Deployments](https://python.langchain.com/en/latest/additional_resources/deployments.html): A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- [Tracing](https://python.langchain.com/en/latest/additional_resources/tracing.html): A guide on using tracing in LangChain to visualize the execution of chains and agents.
- [Model Laboratory](https://python.langchain.com/en/latest/additional_resources/model_laboratory.html): Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- [Discord](https://discord.gg/6adMQxSpJS): Join us on our Discord to discuss all things LangChain!
- [YouTube](https://python.langchain.com/en/latest/additional_resources/youtube.html): A collection of LangChain tutorials and videos.
- [Production Support](https://forms.gle/57d8AmXBYp8PP8tZA): As you move your LangChain applications into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/tomarkdown.html
79280504db46-0
Jupyter Notebook

Jupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents. This notebook covers how to load data from a Jupyter notebook (.ipynb) into a format suitable for use with LangChain.

from langchain.document_loaders import NotebookLoader

loader = NotebookLoader("example_data/notebook.ipynb", include_outputs=True, max_output_length=20, remove_newline=True)

NotebookLoader.load() loads the .ipynb notebook file into a Document object.

Parameters:

- include_outputs (bool): whether to include cell outputs in the resulting document (default is False).
- max_output_length (int): the maximum number of characters to include from each cell output (default is 10).
- remove_newline (bool): whether to remove newline characters from the cell sources and outputs (default is False).
- traceback (bool): whether to include the full traceback (default is False).

loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html
79280504db46-1
[Document(page_content='\'markdown\' cell: \'[\'# Notebook\', \'\', \'This notebook covers how to load data from an .ipynb notebook into a format suitable by LangChain.\']\'\n\n \'code\' cell: \'[\'from langchain.document_loaders import NotebookLoader\']\'\n\n \'code\' cell: \'[\'loader = NotebookLoader("example_data/notebook.ipynb")\']\'\n\n \'markdown\' cell: \'[\'`NotebookLoader.load()` loads the `.ipynb` notebook file into a `Document` object.\', \'\', \'**Parameters**:\', \'\', \'* `include_outputs` (bool): whether to include cell outputs in the resulting document (default is False).\', \'* `max_output_length` (int): the maximum number of characters to include from each cell output (default is 10).\', \'* `remove_newline` (bool): whether to remove newline characters from the cell sources and outputs (default is False).\', \'* `traceback` (bool): whether to include full traceback (default is False).\']\'\n\n \'code\' cell: \'[\'loader.load(include_outputs=True, max_output_length=20, remove_newline=True)\']\'\n\n', metadata={'source': 'example_data/notebook.ipynb'})]
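For contrast, a minimal load with the defaults described in the parameter list above, so cell outputs are omitted (a sketch, mirroring the call shown in the output):

from langchain.document_loaders import NotebookLoader

loader = NotebookLoader("example_data/notebook.ipynb")  # include_outputs defaults to False
docs = loader.load()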
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/jupyter_notebook.html
2ba962768247-0
OpenAIWhisperParser

This notebook goes over how to load data from an audio file, such as an mp3. We use the OpenAIWhisperParser, which calls the OpenAI Whisper API to transcribe audio to text. Note: you will need to supply an OPENAI_API_KEY.

from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser

# Directory contains audio for the first 20 minutes of one Andrej Karpathy video
# "The spelled-out intro to neural networks and backpropagation: building micrograd"
# https://www.youtube.com/watch?v=VMj-3S1tku0
audio_file_path = "example_data/"
loader = GenericLoader.from_filesystem(audio_file_path, glob="*.mp3", parser=OpenAIWhisperParser())
docs = loader.load()
docs
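Before running the loader above, one common way to supply the key is via the OPENAI_API_KEY environment variable named in the note; a minimal sketch (the key value is a placeholder):

import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder; use your own key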
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html
2ba962768247-1
[Document(page_content="Hello, my name is Andrej and I've been training deep neural networks for a bit more than a decade. And in this lecture I'd like to show you what neural network training looks like under the hood. So in particular we are going to start with a blank Jupyter notebook and by the end of this lecture we will define and train a neural net and you'll get to see everything that goes on under the hood and exactly sort of how that works on an intuitive level. Now specifically what I would like to do is I would like to take you through building of micrograd. Now micrograd is this library that I released on GitHub about two years ago but at the time I only uploaded the source code and you'd have to go in by yourself and really figure out how it works. So in this lecture I will take you through it step by step and kind of comment on all the pieces of it. So what is micrograd and why is it interesting? Thank you. Micrograd is basically an autograd engine. Autograd is short for automatic gradient and really what it does is it implements back propagation. Now back propagation is this algorithm that allows you to efficiently evaluate the gradient of some kind of a loss function with respect to the weights of a neural network and what that allows us to do then is we can iteratively tune the weights of that neural network to minimize the loss function and therefore improve the accuracy of the network. So back propagation would be at the mathematical core of any modern deep neural network library like say PyTorch or JAX. So the functionality of micrograd is I think best illustrated by an example. So if we just scroll down here you'll see that micrograd basically allows you to build out mathematical expressions and here what we are doing is we have an expression that we're building out where you have two inputs a and b and you'll see that a and b are negative four and two but we are wrapping those values into
this value object that we are going to build out as part of micrograd. So this value object will wrap the numbers themselves and then we are going to build out a mathematical expression here where a and b are transformed into c d and eventually e f and g and I'm showing some of the functionality of micrograd and the operations that it supports. So you can add two value objects, you can multiply them, you can raise them to a constant power, you can offset by one, negate, squash at zero, square, divide by constant, divide by it, etc. And so we're building out an expression graph with these two inputs a and b and we're creating an output value of g and micrograd will in the background build out this entire mathematical expression. So it will for example know that c is also a value, c was a result of an addition operation and the child nodes of c are a and b because the and it will maintain pointers to a and b value objects. So we'll basically know exactly how all of this is laid out and then not only can we do what we call the forward pass where we actually look at the value of g of course, that's pretty straightforward, we will access that using the dot data attribute and so the output of the forward pass, the value of g, is 24.7 it turns out. But the big deal is that we can also take this g value object and we can call dot backward and this will basically initialize backpropagation at the node g. And what backpropagation is going to do is it's going to start at g and it's going to go backwards through that expression graph and it's going to recursively apply the chain rule from calculus. And what that allows us to do then is we're going to evaluate basically the derivative of g with respect to all the internal nodes like e, d,
and c but also with respect to the inputs a and b. And then we can actually query this derivative of g with respect to a, for example that's a.grad, in this case it happens to be 138, and the derivative of g with respect to b which also happens to be here 645. And this derivative we'll see soon is very important information because it's telling us how a and b are affecting g through this mathematical expression. So in particular a.grad is 138, so if we slightly nudge a and make it slightly larger, 138 is telling us that g will grow and the slope of that growth is going to be 138 and the slope of growth of b is going to be 645. So that's going to tell us about how g will respond if a and b get tweaked a tiny amount in a positive direction. Now you might be confused about what this expression is that we built out here and this expression by the way is completely meaningless. I just made it up, I'm just flexing about the kinds of operations that are supported by micrograd. What we actually really care about are neural networks but it turns out that neural networks are just mathematical expressions just like this one but actually slightly a bit less crazy even. Neural networks are just a mathematical expression, they take the input data as an input and they take the weights of a neural network as an input and it's a mathematical expression and the output are your predictions of your neural net or the loss function, we'll see this in a bit. But basically neural networks just happen to be a certain class of mathematical expressions but back propagation is actually significantly more general. It doesn't actually care about neural networks at all, it only cares about arbitrary mathematical expressions and then we happen to use that machinery for training of neural networks. Now one more note I would like to make at this stage is
that as you see here micrograd is a scalar valued autograd engine so it's working on the you know level of individual scalars like negative 4 and 2 and we're taking neural nets and we're breaking them down all the way to these atoms of individual scalars and all the little pluses and times and it's just excessive and so obviously you would never be doing any of this in production. It's really just done for pedagogical reasons because it allows us to not have to deal with these n-dimensional tensors that you would use in modern deep neural network library. So this is really done so that you understand and refactor out back propagation and chain rule and understanding of neural training and then if you actually want to train bigger networks you have to be using these tensors but none of the math changes, this is done purely for efficiency. We are basically taking all the scalars all the scalar values we're packaging them up into tensors which are just arrays of these scalars and then because we have these large arrays we're making operations on those large arrays that allows us to take advantage of the parallelism in a computer and all those operations can be done in parallel and then the whole thing runs faster but really none of the math changes and they're done purely for efficiency so I don't think that it's pedagogically useful to be dealing with tensors from scratch and I think and that's why I fundamentally wrote micrograd because you can understand how things work at the fundamental level and then you can speed it up later. Okay so here's the fun part. My claim is that micrograd is what you need to train neural networks and everything else is just efficiency so you'd think that micrograd would be a very complex piece of code and that turns out to not be the case. So if we just go to micrograd and you'll see that there's only two files here
in micrograd. This is the actual engine, it doesn't know anything about neural nets and this is the entire neural nets library on top of micrograd. So engine and nn.py. So the actual back propagation autograd engine that gives you the power of neural networks is literally 100 lines of code of like very simple python which we'll understand by the end of this lecture and then nn.py, this neural network library built on top of the autograd engine is like a joke. It's like we have to define what is a neuron and then we have to define what is a layer of neurons and then we define what is a multilayer perceptron which is just a sequence of layers of neurons and so it's just a total joke. So basically there's a lot of power that comes from only 150 lines of code and that's all you need to understand to understand neural network training and everything else is just efficiency and of course there's a lot to efficiency but fundamentally that's all that's happening. Okay so now let's dive right in and implement micrograd step by step. The first thing I'd like to do is I'd like to make sure that you have a very good understanding intuitively of what a derivative is and exactly what information it gives you. So let's start with some basic imports that I copy-paste in every jupyter notebook always and let's define a function, a scalar valued function f of x as follows. So I just made this up randomly. I just wanted a scalar valued function that takes a single scalar x and returns a single scalar y and we can call this function of course so we can pass in say 3.0 and get 20 back. Now we can also plot this function to get a sense of its shape. You can tell from the mathematical expression that this is probably a parabola, it's a quadratic and
so if we just create a set of scalar values that we can feed in using for example a range from negative 5 to 5 in steps of 0.25. So this is so x is just from negative 5 to 5 not including 5 in steps of 0.25 and we can actually call this function on this numpy array as well so we get a set of y's if we call f on x's and these y's are basically also applying the function on every one of these elements independently and we can plot this using matplotlib. So plt.plot x's and y's and we get a nice parabola. So previously here we fed in 3.0 somewhere here and we received 20 back which is here the y-coordinate. So now I'd like to think through what is the derivative of this function at any single input point x. So what is the derivative at different points x of this function? Now if you remember back to your calculus class you've probably derived derivatives so we take this mathematical expression 3x squared minus 4x plus 5 and you would write out on a piece of paper and you would apply the product rule and all the other rules and derive the mathematical expression of the great derivative of the original function and then you could plug in different texts and see what the derivative is. We're not going to actually do that because no one in neural networks actually writes out the expression for the neural net. It would be a massive expression, it would be thousands, tens of thousands of terms. No one actually derives the derivative of course and so we're not going to take this kind of like symbolic approach. Instead what I'd like to do is I'd like to look at the definition of derivative and just make sure that we really understand what the derivative is measuring, what it's telling you about the function. And so if
we just look up derivative we see that okay so this is not a very good definition of derivative. This is a definition of what it means to be differentiable but if you remember from your calculus it is the limit as h goes to zero of f of x plus h minus f of x over h. So basically what it's saying is if you slightly bump up your at some point x that you're interested in or a and if you slightly bump up you know you slightly increase it by small number h how does the function respond with what sensitivity does it respond where is the slope at that point does the function go up or does it go down and by how much and that's the slope of that function the the slope of that response at that point and so we can basically evaluate the derivative here numerically by taking a very small h of course the definition would ask us to take h to zero we're just going to pick a very small h 0.001 and let's say we're interested in 0.3.0 so we can look at f of x of course as 20 and now f of x plus h so if we slightly nudge x in a positive direction how is the function going to respond and just looking at this do you expand do you expect f of x plus h to be slightly greater than 20 or do you expect it to be slightly lower than 20 and since this 3 is here and this is 20 if we slightly go positively the function will respond positively so you'd expect this to be slightly greater than 20 and now by how much is telling you the sort of the the strength of that slope right the the size of the slope so f of x plus h minus f of x this is how much the function responded in a positive direction and we have to normalize by the run so we have the rise over run to get the slope so this
of course is just a numerical approximation of the slope because we have to make h very very small to converge to the exact amount now if i'm doing too many zeros at some point i'm going to i'm going to get an incorrect answer because we're using floating point arithmetic and the representations of all these numbers in computer memory is finite and at some point we get into trouble so we can converge towards the right answer with this approach but basically at 3 the slope is 14 and you can see that by taking 3x squared minus 4x plus 5 and differentiating it in our head so 3x squared would be 6x minus 4 and then we plug in x equals 3 so that's 18 minus 4 is 14 so this is correct so that's at 3 now how about the slope at say negative 3 would you expect what would you expect for the slope now telling the exact value is really hard but what is the sign of that slope so at negative 3 if we slightly go in the positive direction at x the function would actually go down and so that tells you that the slope would be negative so we'll get a slight number below below 20 and so if we take the slope we expect something negative negative 22 okay and at some point here of course the slope would be zero now for this specific function i looked it up previously and it's at point uh 2 over 3 so at roughly 2 over 3 that's somewhere here this this derivative would be zero so basically at that precise point yeah at that precise point if we nudge in a positive direction the function doesn't respond this stays the same almost and so that's why the slope is zero okay now let's look at a bit more complex case so we're going to start you know complexifying a bit so now we have a function here with output variable
d that is a function of three scalar inputs a b and c so a b and c are some specific values three inputs into our expression graph and a single output d and so if we just print d we get four and now what i like to do is i'd like to again look at the derivatives of d with respect to a b and c and uh think through uh again just the intuition of what this derivative is telling us so in order to evaluate this derivative we're going to get a bit hacky here we're going to again have a very small value of h and then we're going to fix the inputs at some values that we're interested in so these are the this is the point a b c at which we're going to be evaluating the the derivative of d with respect to all a b and c at that point so there are the inputs and now we have d1 is that expression and then we're going to for example look at the derivative of d with respect to a so we'll take a and we'll bump it by h and then we'll get d2 to be the exact same function and now we're going to print um you know f1 d1 is d1 d2 is d2 and print slope so the derivative or slope here will be um of course d2 minus d1 divide h so d2 minus d1 is how much the function increased uh when we bumped the uh the specific input that we're interested in by a tiny amount and this is the normalized by this is the normalized by h to get the slope so um yeah so this so i just run this we're going to print d1 which we know is four now d2 will be bumped a will be bumped by h so let's just think through a little bit uh what d2 will be uh printed out here in particular d1 will be four will d2 be a number slightly greater than
four or slightly lower than four and that's going to tell us the sign of the derivative so we're bumping a by h b is minus three c is 10 so you can just intuitively think through this derivative and what it's doing a will be slightly more positive and but b is a negative number so if a is slightly more positive because b is negative three we're actually going to be adding less to d so you'd actually expect that the value of the function will go down so let's just see this yeah and so we went from four to 3.9996 and that tells you that the slope will be negative and then um will be a negative number because we went down and then the exact number of slope will be exact amount of slope is negative three and you can also convince yourself that negative three is the right answer um mathematically and analytically because if you have a times b plus c and you are you know you have calculus then uh differentiating a times b plus c with respect to a gives you just b and indeed the value of b is negative three which is the derivative that we have so you can tell that that's correct so now if we do this with b so if we bump b by a little bit in a positive direction we'd get different slopes so what is the influence of b on the output d so if we bump b by a tiny amount in a positive direction then because a is positive we'll be adding more to d right so um and now what is the what is the sensitivity what is the slope of that addition and it might not surprise you that this should be two and why is it two because d of d by db differentiating with respect to b would be would give us a and the value of a is two so that's also working well and then if c gets bumped a tiny amount in h by h then of course a times
b is unaffected and now c becomes slightly bit higher what does that do to the function it makes it slightly bit higher because we're simply adding c and it makes it slightly bit higher by the exact same amount that we added to c and so that tells you that the slope is one that will be the the rate at which d will increase as we scale c okay so we now have some intuitive sense of what this derivative is telling you about the function and we'd like to move to neural networks now as i mentioned neural networks will be pretty massive expressions mathematical expressions so we need some data structures that maintain these expressions and that's what we're going to start to build out now so we're going to build out this value object that i showed you in the readme page of micrograd so let me copy paste a skeleton of the first very simple value object so class value takes a single scalar value that it wraps and keeps track of and that's it so we can for example do value of 2.0 and then we can get we can look at its content and python will internally use the wrapper function to return this string like that so this is a value object that we're going to call value object", metadata={'source': 'example_data/Lecture_1_0.mp3'})]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/audio.html
138364f3fabc-0
Unstructured File

This notebook covers how to use the Unstructured package to load files of many types. Unstructured currently supports loading text files, PowerPoints, HTML, PDFs, images, and more.

# Install package
!pip install "unstructured[local-inference]"
!pip install layoutparser[layoutmodels,tesseract]

# Install other dependencies
# https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst
# !brew install libmagic
# !brew install poppler
# !brew install tesseract
# If parsing xml / html documents:
# !brew install libxml2
# !brew install libxslt

# import nltk
# nltk.download('punkt')

from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt")
docs = loader.load()
docs[0].page_content[:400]

'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'

Retain Elements

Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

loader = UnstructuredFileLoader("./example_data/state_of_the_union.txt", mode="elements")
docs = loader.load()
docs[:5]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
138364f3fabc-1
[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]

Define a Partitioning Strategy

The Unstructured document loader allows users to pass in a strategy parameter that lets unstructured know how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi-res partitioning strategies are more accurate but take longer to process. Fast strategies partition the document more quickly but trade off accuracy. Not all document types have separate hi-res and fast partitioning strategies; for those document types, the strategy kwarg is ignored. In some cases, the hi-res strategy will fall back to fast if a dependency is missing (e.g. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.

from langchain.document_loaders import UnstructuredFileLoader
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
138364f3fabc-2
loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")
docs = loader.load()
docs[:5]

[Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]

PDF Example

Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements.

!wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../"

loader = UnstructuredFileLoader("./example_data/layout-parser-paper.pdf", mode="elements")
docs = loader.load()
docs[:5]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
138364f3fabc-3
[Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Allen Institute for AI shannons@allenai.org', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Brown University ruochen zhang@brown.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]

Unstructured API

If you want to get up and running with less setup, you can simply run pip install unstructured and use UnstructuredAPIFileLoader or UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API. Note that currently (as of 11 May 2023) the Unstructured API is open, but it will soon require an API key. The Unstructured documentation page will have instructions on how to generate an API key once they’re available. Check out the instructions here if you’d like to self-host the Unstructured API or run it locally.

from langchain.document_loaders import UnstructuredAPIFileLoader

filenames = ["example_data/fake.docx", "example_data/fake-email.eml"]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
138364f3fabc-4
loader = UnstructuredAPIFileLoader(
    file_path=filenames[0],
    api_key="FAKE_API_KEY",
)
docs = loader.load()
docs[0]

Document(page_content='Lorem ipsum dolor sit amet.', metadata={'source': 'example_data/fake.docx'})

You can also batch multiple files through the Unstructured API in a single API call using UnstructuredAPIFileLoader.

loader = UnstructuredAPIFileLoader(
    file_path=filenames,
    api_key="FAKE_API_KEY",
)
docs = loader.load()
docs[0]

Document(page_content='Lorem ipsum dolor sit amet.\n\nThis is a test email to use for unit tests.\n\nImportant points:\n\nRoses are red\n\nViolets are blue', metadata={'source': ['example_data/fake.docx', 'example_data/fake-email.eml']})
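UnstructuredAPIFileIOLoader, mentioned above, works with file-like objects rather than paths. A minimal sketch, assuming it accepts a file argument by analogy with the file-path loader shown here:

from langchain.document_loaders import UnstructuredAPIFileIOLoader

with open("example_data/fake.docx", "rb") as f:
    # Pass the open file object instead of a path (argument name assumed).
    loader = UnstructuredAPIFileIOLoader(file=f, api_key="FAKE_API_KEY")
    docs = loader.load()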
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html
da658218a0ce-0
WhatsApp Chat

WhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content. This notebook covers how to load data from WhatsApp chat exports into a format that can be ingested into LangChain.

from langchain.document_loaders import WhatsAppChatLoader

loader = WhatsAppChatLoader("example_data/whatsapp_chat.txt")
loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/whatsapp_chat.html
3de0ead61219-0
Telegram

Telegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features. This notebook covers how to load data from Telegram into a format that can be ingested into LangChain.

from langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoader

loader = TelegramChatFileLoader("example_data/telegram.json")
loader.load()

[Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 🍒 on 2020-01-01T00:00:05: You're a minute late!\n\n", metadata={'source': 'example_data/telegram.json'})]

TelegramChatApiLoader loads data directly from any specified chat in Telegram. In order to export the data, you will need to authenticate your Telegram account. You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps

chat_entity – recommended to be the entity of a channel.

loader = TelegramChatApiLoader(
    chat_entity="<CHAT_URL>",  # recommended to use Entity here
    api_hash="<API_HASH>",
    api_id="<API_ID>",
    user_name="",  # needed only for caching the session.
)

loader.load()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/telegram.html
4c53e9105228-0
Arxiv

arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.

Installation

First, you need to install the arxiv python package.

#!pip install arxiv

Second, you need to install the PyMuPDF python package, which converts the PDF files downloaded from arxiv.org into text.

#!pip install pymupdf

Examples

ArxivLoader has these arguments:

- query: free text used to find documents on arXiv.
- optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.
- optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded.

from langchain.document_loaders import ArxivLoader

docs = ArxivLoader(query="1605.08386", load_max_docs=2).load()
len(docs)

docs[0].metadata  # meta-information of the Document

{'Published': '2016-05-26',
 'Title': 'Heat-bath random walks with Markov bases',
 'Authors': 'Caprice Stanley, Tobias Windisch',
 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}

docs[0].page_content[:400]  # all pages of the Document content

'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
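As a sketch of the load_all_available_meta flag described above (the extra fields vary by article, so treat the result as illustrative):

docs = ArxivLoader(query="1605.08386", load_max_docs=1, load_all_available_meta=True).load()
docs[0].metadata.keys()  # now includes fields beyond Published, Title, Authors, Summary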
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/arxiv.html
882ec2509c30-0
Trello

Trello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a “board” where users can create lists and cards to represent their tasks and activities.

The TrelloLoader allows you to load cards from a Trello board and is implemented on top of py-trello. This currently supports api_key/token only.

Credentials generation: https://trello.com/power-ups/admin/ (click the manual token generation link to get the token).

To specify the API key and token you can either set the environment variables TRELLO_API_KEY and TRELLO_TOKEN, or you can pass api_key and token directly into the from_credentials convenience constructor method.

This loader allows you to provide the board name to pull in the corresponding cards into Document objects. Notice that the board “name” is also called “title” in the official documentation: https://support.atlassian.com/trello/docs/changing-a-boards-title-and-description/

You can also specify several load parameters to include/remove different fields both from the document page_content properties and metadata.

Features

- Load cards from a Trello board.
- Filter cards based on their status (open or closed).
- Include card names, comments, and checklists in the loaded documents.
- Customize the additional metadata fields to include in the document.

By default all card fields are included for the full text page_content and metadata accordingly.

#!pip install py-trello beautifulsoup4

# If you have already set the API key and token using environment variables,
# you can skip this cell and comment out the `api_key` and `token` named arguments
# in the initialization steps below.
from getpass import getpass

API_KEY = getpass()
TOKEN = getpass()
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html
882ec2509c30-1
········
········

from langchain.document_loaders import TrelloLoader

# Get the open cards from "Awesome Board"
loader = TrelloLoader.from_credentials(
    "Awesome Board",
    api_key=API_KEY,
    token=TOKEN,
    card_filter="open",
)
documents = loader.load()
print(documents[0].page_content)
print(documents[0].metadata)

Review Tech partner pages
Comments:
{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'labels': ['Demand Marketing'], 'list': 'Done', 'closed': False, 'due_date': ''}

# Get all the cards from "Awesome Board" but only include the
# card list (column) as extra metadata.
loader = TrelloLoader.from_credentials(
    "Awesome Board",
    api_key=API_KEY,
    token=TOKEN,
    extra_metadata=("list",),  # trailing comma makes this a one-element tuple
)
documents = loader.load()
print(documents[0].page_content)
print(documents[0].metadata)

Review Tech partner pages
Comments:
{'title': 'Review Tech partner pages', 'id': '6475357890dc8d17f73f2dcc', 'url': 'https://trello.com/c/b0OTZwkZ/1-review-tech-partner-pages', 'list': 'Done'}

# Get the cards from "Another Board" and exclude the card name,
# checklist and comments from the Document page_content text.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html
882ec2509c30-2
loader = TrelloLoader.from_credentials(
    "test",
    api_key=API_KEY,
    token=TOKEN,
    include_card_name=False,
    include_checklist=False,
    include_comments=False,
)
documents = loader.load()
print("Document: " + documents[0].page_content)
print(documents[0].metadata)
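If you prefer the environment-variable route mentioned above, a minimal sketch (the values are placeholders):

import os

os.environ["TRELLO_API_KEY"] = "<your-api-key>"
os.environ["TRELLO_TOKEN"] = "<your-token>"

# With the variables set, from_credentials can be called without api_key/token.
loader = TrelloLoader.from_credentials("Awesome Board")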
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/trello.html
f4c8407a149b-0
Microsoft Excel

The UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the text_as_html key.

from langchain.document_loaders import UnstructuredExcelLoader

loader = UnstructuredExcelLoader(
    "example_data/stanley-cups.xlsx",
    mode="elements"
)
docs = loader.load()
docs[0]
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/excel.html
f4c8407a149b-1
mode="elements" ) docs = loader.load() docs[0] Document(page_content='\n \n \n Team\n Location\n Stanley Cups\n \n \n Blues\n STL\n 1\n \n \n Flyers\n PHI\n 2\n \n \n Maple Leafs\n TOR\n 13\n \n \n', metadata={'source': 'example_data/stanley-cups.xlsx', 'filename': 'stanley-cups.xlsx', 'file_directory': 'example_data', 'filetype': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'page_number': 1, 'page_name': 'Stanley Cups', 'text_as_html': '<table border="1" class="dataframe">\n <tbody>\n <tr>\n <td>Team</td>\n <td>Location</td>\n <td>Stanley Cups</td>\n </tr>\n <tr>\n <td>Blues</td>\n <td>STL</td>\n <td>1</td>\n </tr>\n <tr>\n <td>Flyers</td>\n <td>PHI</td>\n <td>2</td>\n </tr>\n <tr>\n <td>Maple Leafs</td>\n <td>TOR</td>\n <td>13</td>\n </tr>\n </tbody>\n</table>', 'category': 'Table'}) previous EverNote next Facebook Chat By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 11, 2023.
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/excel.html
b42eea8c12d6-0
Microsoft PowerPoint

Microsoft PowerPoint is a presentation program by Microsoft. This covers how to load Microsoft PowerPoint documents into a document format that we can use downstream.

from langchain.document_loaders import UnstructuredPowerPointLoader

loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx")
data = loader.load()
data

[Document(page_content='Adding a Bullet Slide\n\nFind the bullet slide layout\n\nUse _TextFrame.text for first bullet\n\nUse _TextFrame.add_paragraph() for subsequent bullets\n\nHere is a lot of text!\n\nHere is some text in a text box!', metadata={'source': 'example_data/fake-power-point.pptx'})]

Retain Elements

Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx", mode="elements")
data = loader.load()
data[0]

Document(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)
https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/microsoft_powerpoint.html
fc717c0565f5-0
Getting Started

The default recommended text splitter is the RecursiveCharacterTextSplitter. This text splitter takes a list of characters. It tries to create chunks by splitting on the first character, but if any chunks are too large it then moves on to the next character, and so forth. By default the characters it tries to split on are ["\n\n", "\n", " ", ""].

In addition to controlling which characters you can split on, you can also control a few other things:

- length_function: how the length of chunks is calculated. Defaults to just counting the number of characters, but it’s pretty common to pass a token counter here (a sketch follows the example below).
- chunk_size: the maximum size of your chunks (as measured by the length function).
- chunk_overlap: the maximum overlap between chunks. It can be nice to have some overlap to maintain continuity between chunks (e.g. do a sliding window).
- add_start_index: whether to include the starting position of each chunk within the original document in the metadata.

# This is a long document we can split up.
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size = 100,
    chunk_overlap = 20,
    length_function = len,
    add_start_index = True,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])

page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' metadata={'start_index': 0}
https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html
fc717c0565f5-1
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' metadata={'start_index': 82}
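The length_function bullet above mentions passing a token counter; here is a minimal sketch using the tiktoken package covered later in this section (the choice of encoding is an assumption):

import tiktoken

from langchain.text_splitter import RecursiveCharacterTextSplitter

enc = tiktoken.get_encoding("gpt2")

def tiktoken_len(text: str) -> int:
    # Measure chunk length in tokens instead of characters.
    return len(enc.encode(text))

token_splitter = RecursiveCharacterTextSplitter(
    chunk_size=100,
    chunk_overlap=20,
    length_function=tiktoken_len,
)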
https://python.langchain.com/en/latest/modules/indexes/text_splitters/getting_started.html
ddead15725ed-0
Hugging Face tokenizer

Hugging Face has many tokenizers. Here we use the Hugging Face GPT2TokenizerFast to count text length in tokens.

How the text is split: by the character passed in.
How the chunk size is measured: by the number of tokens calculated by the Hugging Face tokenizer.

from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(tokenizer, chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.
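To sanity-check the measurement above, you can count the tokens in the first chunk directly (an illustrative check, not from the original page; chunks should normally stay at or under the chunk_size of 100):

print(len(tokenizer.encode(texts[0])))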
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/huggingface_length_function.html
bb0ed53381be-0
Tiktoken

tiktoken is a fast BPE tokenizer created by OpenAI.

How the text is split: by tiktoken tokens.
How the chunk size is measured: by tiktoken tokens.

#!pip install tiktoken

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import TokenTextSplitter

text_splitter = TokenTextSplitter(chunk_size=10, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken_splitter.html
1f296b407e44-0
NLTK

The Natural Language Toolkit, or more commonly NLTK, is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.

Rather than just splitting on "\n\n", we can use NLTK to split based on NLTK tokenizers.

How the text is split: by NLTK tokenizer.
How the chunk size is measured: by number of characters.

#!pip install nltk

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import NLTKTextSplitter

text_splitter = NLTKTextSplitter(chunk_size=1000)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html
1f296b407e44-1
Groups of citizens blocking tanks with their bodies.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/nltk.html
e5b2c36f394e-0
Character

This is the simplest method. It splits on a single character (by default "\n\n") and measures chunk length by number of characters.

How the text is split: by single character.
How the chunk size is measured: by number of characters.

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator = "\n\n",
    chunk_size = 1000,
    chunk_overlap = 200,
    length_function = len,
)

texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
e5b2c36f394e-1
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={} lookup_index=0

Here’s an example of passing metadata along with the documents; notice that it is split along with the documents.

metadatas = [{"document": 1}, {"document": 2}]
documents = text_splitter.create_documents([state_of_the_union, state_of_the_union], metadatas=metadatas)
print(documents[0])
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
e5b2c36f394e-2
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' lookup_str='' metadata={'document': 1} lookup_index=0

text_splitter.split_text(state_of_the_union)[0]
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
e5b2c36f394e-3
'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/character_text_splitter.html
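Here’s a minimal sketch that makes the chunk_size / chunk_overlap interaction easy to see on a toy string. Splitting on single spaces (instead of the default "\n\n" separator) is an illustrative choice for the demo, not a recommendation.

from langchain.text_splitter import CharacterTextSplitter

# Split on spaces so the overlap between neighboring chunks is easy to inspect.
toy_splitter = CharacterTextSplitter(
    separator=" ",     # the default separator is "\n\n"
    chunk_size=20,     # maximum characters per chunk
    chunk_overlap=5,   # characters shared between consecutive chunks
)
for chunk in toy_splitter.split_text("one two three four five six seven eight nine ten"):
    print(repr(chunk), len(chunk))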
75f162d065b1-0
tiktoken (OpenAI) tokenizer# tiktoken is a fast BPE tokenizer created by OpenAI. We can use it to estimate the number of tokens used; the estimate will be most accurate for the OpenAI models. How the text is split: by the character passed in How the chunk size is measured: by the tiktoken tokenizer #!pip install tiktoken # This is a long document we can split up. with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read() from langchain.text_splitter import CharacterTextSplitter text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) print(texts[0]) Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/tiktoken.html
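As a sanity check, you can count tokens in the resulting chunks directly with tiktoken. The cl100k_base encoding below is an assumption; pick the encoding that matches your target model, for example via tiktoken.encoding_for_model.

import tiktoken

# "cl100k_base" is an illustrative choice of encoding, not tied to the splitter above.
enc = tiktoken.get_encoding("cl100k_base")
# `texts` is the list of chunks produced by the splitter above.
for chunk in texts[:3]:
    print(len(enc.encode(chunk)), "tokens")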
8017eeb4dbd1-0
spaCy# spaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython. Another alternative to NLTK is the spaCy tokenizer. How the text is split: by the spaCy tokenizer How the chunk size is measured: by number of characters #!pip install spacy # This is a long document we can split up. with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read() from langchain.text_splitter import SpacyTextSplitter text_splitter = SpacyTextSplitter(chunk_size=1000) texts = text_splitter.split_text(state_of_the_union) print(texts[0]) Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/spacy.html
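Because the chunk size here is measured in characters, a quick check like the sketch below confirms the splitter stayed within its budget. Note that a single sentence longer than chunk_size can still yield an oversized chunk, so this is a diagnostic rather than a guarantee.

# `texts` comes from the SpacyTextSplitter above.
lengths = [len(t) for t in texts]
print(f"{len(texts)} chunks; longest is {max(lengths)} characters")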
2ad46927ed10-0
CodeTextSplitter# CodeTextSplitter allows you to split your code with support for multiple languages. Import the Language enum and specify the language. from langchain.text_splitter import ( RecursiveCharacterTextSplitter, Language, ) # Full list of supported languages [e.value for e in Language] ['cpp', 'go', 'java', 'js', 'php', 'proto', 'python', 'rst', 'ruby', 'rust', 'scala', 'swift', 'markdown', 'latex', 'html'] # You can also see the separators used for a given language RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON) ['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', ''] Python# Here’s an example of splitting Python code PYTHON_CODE = """ def hello_world(): print("Hello, World!") # Call the function hello_world() """ python_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.PYTHON, chunk_size=50, chunk_overlap=0 ) python_docs = python_splitter.create_documents([PYTHON_CODE]) python_docs [Document(page_content='def hello_world():\n print("Hello, World!")', metadata={}), Document(page_content='# Call the function\nhello_world()', metadata={})] JS# Here’s an example using the JS text splitter JS_CODE = """ function helloWorld() { console.log("Hello, World!"); } // Call the function helloWorld(); """ js_splitter = RecursiveCharacterTextSplitter.from_language(
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html
2ad46927ed10-1
helloWorld(); """ js_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.JS, chunk_size=60, chunk_overlap=0 ) js_docs = js_splitter.create_documents([JS_CODE]) js_docs [Document(page_content='function helloWorld() {\n console.log("Hello, World!");\n}', metadata={}), Document(page_content='// Call the function\nhelloWorld();', metadata={})] Markdown# Here’s an example using the Markdown text splitter. markdown_text = """ # 🦜️🔗 LangChain ⚡ Building applications with LLMs through composability ⚡ ## Quick Install ```bash # Hopefully this code block isn't split pip install langchain ``` As an open source project in a rapidly developing field, we are extremely open to contributions. """ md_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0 ) md_docs = md_splitter.create_documents([markdown_text]) md_docs [Document(page_content='# 🦜️🔗 LangChain', metadata={}), Document(page_content='⚡ Building applications with LLMs through composability ⚡', metadata={}), Document(page_content='## Quick Install', metadata={}), Document(page_content="```bash\n# Hopefully this code block isn't split", metadata={}), Document(page_content='pip install langchain', metadata={}), Document(page_content='```', metadata={}), Document(page_content='As an open source project in a rapidly developing field, we', metadata={}), Document(page_content='are extremely open to contributions.', metadata={})] Latex# Here’s an example of splitting LaTeX text latex_text = """ \documentclass{article}
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html
2ad46927ed10-2
latex_text = """ \documentclass{article} \begin{document} \maketitle \section{Introduction} Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis. \subsection{History of LLMs} The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance. \subsection{Applications of LLMs} LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics. \end{document} """ latex_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.LATEX, chunk_size=60, chunk_overlap=0 ) latex_docs = latex_splitter.create_documents([latex_text]) latex_docs [Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', metadata={}), Document(page_content='\\section{Introduction}', metadata={}), Document(page_content='Large language models (LLMs) are a type of machine learning', metadata={}), Document(page_content='model that can be trained on vast amounts of text data to', metadata={}), Document(page_content='generate human-like language. In recent years, LLMs have', metadata={}),
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html
2ad46927ed10-3
Document(page_content='made significant advances in a variety of natural language', metadata={}), Document(page_content='processing tasks, including language translation, text', metadata={}), Document(page_content='generation, and sentiment analysis.', metadata={}), Document(page_content='\\subsection{History of LLMs}', metadata={}), Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,', metadata={}), Document(page_content='but they were limited by the amount of data that could be', metadata={}), Document(page_content='processed and the computational power available at the', metadata={}), Document(page_content='time. In the past decade, however, advances in hardware and', metadata={}), Document(page_content='software have made it possible to train LLMs on massive', metadata={}), Document(page_content='datasets, leading to significant improvements in', metadata={}), Document(page_content='performance.', metadata={}), Document(page_content='\\subsection{Applications of LLMs}', metadata={}), Document(page_content='LLMs have many applications in industry, including', metadata={}), Document(page_content='chatbots, content creation, and virtual assistants. They', metadata={}), Document(page_content='can also be used in academia for research in linguistics,', metadata={}), Document(page_content='psychology, and computational linguistics.', metadata={}), Document(page_content='\\end{document}', metadata={})] HTML# Here’s an example using an HTML text splitter html_text = """ <!DOCTYPE html> <html> <head> <title>🦜️🔗 LangChain</title> <style> body { font-family: Arial, sans-serif; } h1 { color: darkblue; } </style> </head>
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html
2ad46927ed10-4
color: darkblue; } </style> </head> <body> <div> <h1>🦜️🔗 LangChain</h1> <p>⚡ Building applications with LLMs through composability ⚡</p> </div> <div> As an open source project in a rapidly developing field, we are extremely open to contributions. </div> </body> </html> """ html_splitter = RecursiveCharacterTextSplitter.from_language( language=Language.HTML, chunk_size=60, chunk_overlap=0 ) html_docs = html_splitter.create_documents([html_text]) html_docs [Document(page_content='<!DOCTYPE html>\n<html>\n <head>', metadata={}), Document(page_content='<title>🦜️🔗 LangChain</title>\n <style>', metadata={}), Document(page_content='body {', metadata={}), Document(page_content='font-family: Arial, sans-serif;', metadata={}), Document(page_content='}\n h1 {', metadata={}), Document(page_content='color: darkblue;\n }', metadata={}), Document(page_content='</style>\n </head>\n <body>\n <div>', metadata={}), Document(page_content='<h1>🦜️🔗 LangChain</h1>', metadata={}), Document(page_content='<p>⚡ Building applications with LLMs through', metadata={}), Document(page_content='composability ⚡</p>', metadata={}), Document(page_content='</div>\n <div>', metadata={}), Document(page_content='As an open source project in a rapidly', metadata={}),
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html
2ad46927ed10-5
Document(page_content='As an open source project in a rapidly', metadata={}), Document(page_content='developing field, we are extremely open to contributions.', metadata={}), Document(page_content='</div>\n </body>\n</html>', metadata={})]
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/code_splitter.html
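Since the language-aware splitters are just RecursiveCharacterTextSplitter instances with different separator lists, it can be instructive to print the separators for a few languages side by side; the exact lists may vary between LangChain versions.

from langchain.text_splitter import RecursiveCharacterTextSplitter, Language

# Compare what each language-aware splitter keys on.
for lang in (Language.JS, Language.MARKDOWN, Language.LATEX, Language.HTML):
    print(lang.value, RecursiveCharacterTextSplitter.get_separators_for_language(lang))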
19367f03b39c-0
Recursive Character# This text splitter is the recommended one for generic text. It is parameterized by a list of characters. It tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, as those are generally the most semantically related pieces of text. How the text is split: by list of characters How the chunk size is measured: by number of characters # This is a long document we can split up. with open('../../../state_of_the_union.txt') as f: state_of_the_union = f.read() from langchain.text_splitter import RecursiveCharacterTextSplitter text_splitter = RecursiveCharacterTextSplitter( # Set a really small chunk size, just to show. chunk_size = 100, chunk_overlap = 20, length_function = len, ) texts = text_splitter.create_documents([state_of_the_union]) print(texts[0]) print(texts[1]) page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0 page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0 text_splitter.split_text(state_of_the_union)[:2] ['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and', 'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
https://python.langchain.com/en/latest/modules/indexes/text_splitters/examples/recursive_text_splitter.html
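The separator list is an ordinary constructor argument, so you can also bias the splitter toward sentence boundaries. The list below is an illustrative choice, not the default.

from langchain.text_splitter import RecursiveCharacterTextSplitter

sentence_biased_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", ". ", " ", ""],  # try sentence ends before single words
    chunk_size=100,
    chunk_overlap=20,
)
print(sentence_biased_splitter.split_text(state_of_the_union)[0])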
988884a574d1-0
Self-querying with Chroma# Chroma is a database for building AI applications with embeddings. In this notebook we’ll demo the SelfQueryRetriever wrapped around a Chroma vector store. Creating a Chroma vectorstore# First we’ll want to create a Chroma VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies. NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the chromadb package. #!pip install lark #!pip install chromadb We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import os import getpass os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:') from langchain.schema import Document from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma embeddings = OpenAIEmbeddings() docs = [ Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}), Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}), Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
988884a574d1-1
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}), Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}), Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction"}) ] vectorstore = Chroma.from_documents( docs, embeddings ) Using embedded DuckDB without persistence: data will be transient Creating our self-querying retriever# Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents. from langchain.llms import OpenAI from langchain.retrievers.self_query.base import SelfQueryRetriever from langchain.chains.query_constructor.base import AttributeInfo metadata_field_info=[ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ), ]
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
988884a574d1-2
type="float" ), ] document_content_description = "Brief summary of a movie" llm = OpenAI(temperature=0) retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True) Testing it out# And now we can try actually using our retriever! # This example only specifies a relevant query retriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})] # This example only specifies a filter retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
988884a574d1-3
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})] # This example specifies a query and a filter retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') [Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'year': 2019, 'director': 'Greta Gerwig', 'rating': 8.3})] # This example specifies a composite filter retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'year': 1979, 'rating': 9.9, 'director': 'Andrei Tarkovsky', 'genre': 'science fiction'})] # This example specifies a query and composite filter retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
988884a574d1-4
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) [Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})] Filter k# We can also use the self query retriever to specify k: the number of documents to fetch. We can do this by passing enable_limit=True to the constructor. retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True ) # This example only specifies a relevant query retriever.get_relevant_documents("what are two movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}), Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'year': 2006, 'director': 'Satoshi Kon', 'rating': 8.6}),
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
988884a574d1-5
Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'year': 2010, 'director': 'Christopher Nolan', 'rating': 8.2})]
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query.html
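A self-query retriever can be dropped into any chain that accepts a retriever. Here’s a rough sketch with RetrievalQA; the "stuff" chain type is a common default, but verify the exact API against your LangChain version.

from langchain.chains import RetrievalQA

# `llm` and `retriever` are the objects constructed above.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
qa.run("Which movies about dreams are in the collection?")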
01bc45200067-0
Self-querying# In this notebook we’ll demo the SelfQueryRetriever, which, as the name suggests, has the ability to query itself. Specifically, given any natural language query, the retriever uses a query-constructing LLM chain to write a structured query and then applies that structured query to its underlying VectorStore. This allows the retriever to not only use the user-input query for semantic similarity comparison with the contents of stored documents, but also to extract filters from the user query on the metadata of stored documents and to execute those filters. Creating a Pinecone index# First we’ll want to create a Pinecone VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies. To use Pinecone, you need to have the pinecone package installed, an API key, and an environment. Here are the installation instructions. NOTE: The self-query retriever requires you to have the lark package installed. # !pip install lark #!pip install pinecone-client import os import pinecone pinecone.init(api_key=os.environ["PINECONE_API_KEY"], environment=os.environ["PINECONE_ENV"]) /Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console) from tqdm.autonotebook import tqdm from langchain.schema import Document from langchain.embeddings.openai import OpenAIEmbeddings
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html
01bc45200067-1
from langchain.schema import Document from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Pinecone embeddings = OpenAIEmbeddings() # create new index pinecone.create_index("langchain-self-retriever-demo", dimension=1536) docs = [ Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": ["action", "science fiction"]}), Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}), Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}), Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}), Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}), Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": ["science fiction", "thriller"]}) ] vectorstore = Pinecone.from_documents( docs, embeddings, index_name="langchain-self-retriever-demo" ) Creating our self-querying retriever#
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html
01bc45200067-2
) Creating our self-querying retriever# Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents. from langchain.llms import OpenAI from langchain.retrievers.self_query.base import SelfQueryRetriever from langchain.chains.query_constructor.base import AttributeInfo metadata_field_info=[ AttributeInfo( name="genre", description="The genre of the movie", type="string or list[string]", ), AttributeInfo( name="year", description="The year the movie was released", type="integer", ), AttributeInfo( name="director", description="The name of the movie director", type="string", ), AttributeInfo( name="rating", description="A 1-10 rating for the movie", type="float" ), ] document_content_description = "Brief summary of a movie" llm = OpenAI(temperature=0) retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True) Testing it out# And now we can try actually using our retriever! # This example only specifies a relevant query retriever.get_relevant_documents("What are some movies about dinosaurs") query='dinosaur' filter=None [Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': ['action', 'science fiction'], 'rating': 7.7, 'year': 1993.0}),
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html
01bc45200067-3
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0}), Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Leo DiCaprio gets lost in a dream within a dream within a dream within a ...', metadata={'director': 'Christopher Nolan', 'rating': 8.2, 'year': 2010.0})] # This example only specifies a filter retriever.get_relevant_documents("I want to watch a movie rated higher than 8.5") query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) [Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0}), Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})] # This example specifies a query and a filter retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women") query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html
01bc45200067-4
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'director': 'Greta Gerwig', 'rating': 8.3, 'year': 2019.0})] # This example specifies a composite filter retriever.get_relevant_documents("What's a highly rated (above 8.5) science fiction film?") query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)]) [Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': ['science fiction', 'thriller'], 'rating': 9.9, 'year': 1979.0})] # This example specifies a query and composite filter retriever.get_relevant_documents("What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated") query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990.0), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005.0), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) [Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0})] Filter k# We can also use the self query retriever to specify k: the number of documents to fetch.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html
01bc45200067-5
We can do this by passing enable_limit=True to the constructor. retriever = SelfQueryRetriever.from_llm( llm, vectorstore, document_content_description, metadata_field_info, enable_limit=True, verbose=True ) # This example only specifies a relevant query retriever.get_relevant_documents("What are two movies about dinosaurs")
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query.html
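When you’re done experimenting, it’s worth tearing down the demo index so it doesn’t keep accruing costs. delete_index is part of the same classic pinecone-client API used above, but double-check the call against your client version.

# Remove the demo index created earlier in this notebook.
pinecone.delete_index("langchain-self-retriever-demo")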
ac246f227c88-0
VectorStore# The index - and therefore the retriever - that LangChain has the most support for is the VectorStoreRetriever. As the name suggests, this retriever is backed heavily by a VectorStore. Once you construct a VectorStore, it’s very easy to construct a retriever. Let’s walk through an example. from langchain.document_loaders import TextLoader loader = TextLoader('../../../state_of_the_union.txt') from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores import FAISS from langchain.embeddings import OpenAIEmbeddings documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() db = FAISS.from_documents(texts, embeddings) Exiting: Cleaning up .chroma directory retriever = db.as_retriever() docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson") Maximum Marginal Relevance Retrieval# By default, the vectorstore retriever uses similarity search. If the underlying vectorstore supports maximum marginal relevance search, you can specify that as the search type. retriever = db.as_retriever(search_type="mmr") docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson") Similarity Score Threshold Retrieval# You can also use a retrieval method that sets a similarity score threshold and only returns documents with a score above that threshold retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": .5})
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore.html
ac246f227c88-1
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson") Specifying top k# You can also specify search kwargs like k to use when doing retrieval. retriever = db.as_retriever(search_kwargs={"k": 1}) docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson") len(docs) 1
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore.html
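Search type and search kwargs can be combined. The sketch below pairs MMR with a candidate pool size; fetch_k (how many documents MMR considers before reranking down to k) is a FAISS-specific kwarg, so treat it as an assumption for other vector stores.

retriever = db.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 2, "fetch_k": 10},  # fetch 10 candidates, rerank down to 2
)
docs = retriever.get_relevant_documents("what did he say about ketanji brown jackson")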
d52edacd440a-0
Contextual Compression# This notebook introduces the concept of DocumentCompressors and the ContextualCompressionRetriever. The core idea is simple: given a specific query, we should be able to return only the documents relevant to that query, and only the parts of those documents that are relevant. The ContextualCompressionRetriever is a wrapper for another retriever that iterates over the initial output of the base retriever and filters and compresses those initial documents, so that only the most relevant information is returned. # Helper function for printing docs def pretty_print_docs(docs): print(f"\n{'-' * 100}\n".join([f"Document {i+1}:\n\n" + d.page_content for i, d in enumerate(docs)])) Using a vanilla vector store retriever# Let’s start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them. from langchain.text_splitter import CharacterTextSplitter from langchain.embeddings import OpenAIEmbeddings from langchain.document_loaders import TextLoader from langchain.vectorstores import FAISS documents = TextLoader('../../../state_of_the_union.txt').load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents)
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-1
texts = text_splitter.split_documents(documents) retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever() docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson") pretty_print_docs(docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-2
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. ---------------------------------------------------------------------------------------------------- Document 3: And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. ---------------------------------------------------------------------------------------------------- Document 4: Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. That ends on my watch. Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-3
Let’s pass the Paycheck Fairness Act and paid leave. Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges. Adding contextual compression with an LLMChainExtractor# Now let’s wrap our base retriever with a ContextualCompressionRetriever. We’ll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query. from langchain.llms import OpenAI from langchain.retrievers import ContextualCompressionRetriever from langchain.retrievers.document_compressors import LLMChainExtractor llm = OpenAI(temperature=0) compressor = LLMChainExtractor.from_llm(llm) compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=retriever) compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson") pretty_print_docs(compressed_docs) Document 1: "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence." ---------------------------------------------------------------------------------------------------- Document 2:
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-4
---------------------------------------------------------------------------------------------------- Document 2: "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." More built-in compressors: filters# LLMChainFilter# The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents. from langchain.retrievers.document_compressors import LLMChainFilter _filter = LLMChainFilter.from_llm(llm) compression_retriever = ContextualCompressionRetriever(base_compressor=_filter, base_retriever=retriever) compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson") pretty_print_docs(compressed_docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. EmbeddingsFilter#
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-5
EmbeddingsFilter# Making an extra LLM call over each retrieved document is expensive and slow. The EmbeddingsFilter provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query. from langchain.embeddings import OpenAIEmbeddings from langchain.retrievers.document_compressors import EmbeddingsFilter embeddings = OpenAIEmbeddings() embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76) compression_retriever = ContextualCompressionRetriever(base_compressor=embeddings_filter, base_retriever=retriever) compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson") pretty_print_docs(compressed_docs) Document 1: Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2:
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-6
---------------------------------------------------------------------------------------------------- Document 2: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. ---------------------------------------------------------------------------------------------------- Document 3: And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. Stringing compressors and document transformers together#
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-7
First, beat the opioid epidemic. Stringing compressors and document transformers together# Using the DocumentCompressorPipeline we can also easily combine multiple compressors in sequence. Along with compressors we can add BaseDocumentTransformers to our pipeline, which don’t perform any contextual compression but simply perform some transformation on a set of documents. For example TextSplitters can be used as document transformers to split documents into smaller pieces, and the EmbeddingsRedundantFilter can be used to filter out redundant documents based on embedding similarity between documents. Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query. from langchain.document_transformers import EmbeddingsRedundantFilter from langchain.retrievers.document_compressors import DocumentCompressorPipeline from langchain.text_splitter import CharacterTextSplitter splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". ") redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings) relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76) pipeline_compressor = DocumentCompressorPipeline( transformers=[splitter, redundant_filter, relevant_filter] ) compression_retriever = ContextualCompressionRetriever(base_compressor=pipeline_compressor, base_retriever=retriever) compressed_docs = compression_retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson") pretty_print_docs(compressed_docs) Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson ---------------------------------------------------------------------------------------------------- Document 2:
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
d52edacd440a-8
---------------------------------------------------------------------------------------------------- Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/contextual-compression.html
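A quick way to see how much a pipeline compresses is to compare the total characters returned by the base retriever and the compression retriever for the same query; a rough sketch:

query = "What did the president say about Ketanji Brown Jackson"
base_docs = retriever.get_relevant_documents(query)
compressed_docs = compression_retriever.get_relevant_documents(query)
print(sum(len(d.page_content) for d in base_docs), "characters before compression")
print(sum(len(d.page_content) for d in compressed_docs), "characters after compression")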
543779df7dd9-0
ElasticSearch BM25# Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others. The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London’s City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval. This notebook shows how to use a retriever that uses ElasticSearch and BM25. For more information on the details of BM25 see this blog post. #!pip install elasticsearch from langchain.retrievers import ElasticSearchBM25Retriever Create New Retriever# elasticsearch_url="http://localhost:9200" retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4") # Alternatively, you can load an existing index # import elasticsearch # elasticsearch_url="http://localhost:9200"
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html
543779df7dd9-1
# import elasticsearch # elasticsearch_url="http://localhost:9200" # retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index") Add texts (if necessary)# We can optionally add texts to the retriever (if they aren’t already in there) retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"]) ['cbd4cb47-8d9f-4f34-b80e-ea871bc49856', 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365', '8631bfc8-7c12-48ee-ab56-8ad5f373676e', '8be8374c-3253-4d87-928d-d73550a2ecf0', 'd79f457b-2842-4eab-ae10-77aa420b53d7'] Use Retriever# We can now use the retriever! result = retriever.get_relevant_documents("foo") result [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/elastic_search_bm25.html
df6a4725e1f4-0
Time Weighted VectorStore# This retriever uses a combination of semantic similarity and a time decay. The scoring algorithm is: semantic_similarity + (1.0 - decay_rate) ** hours_passed Notably, hours_passed refers to the hours passed since the object in the retriever was last accessed, not since it was created. This means that frequently accessed objects remain “fresh.” import faiss from datetime import datetime, timedelta from langchain.docstore import InMemoryDocstore from langchain.embeddings import OpenAIEmbeddings from langchain.retrievers import TimeWeightedVectorStoreRetriever from langchain.schema import Document from langchain.vectorstores import FAISS Low Decay Rate# A low decay rate (here, to be extreme, we set it close to 0) means memories will be “remembered” for longer. A decay rate of 0 means memories are never forgotten, making this retriever equivalent to a vector lookup. # Define your embedding model embeddings_model = OpenAIEmbeddings() # Initialize the vectorstore as empty embedding_size = 1536 index = faiss.IndexFlatL2(embedding_size) vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}) retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.0000000000000000000000001, k=1) yesterday = datetime.now() - timedelta(days=1) retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})]) retriever.add_documents([Document(page_content="hello foo")])
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html
df6a4725e1f4-1
retriever.add_documents([Document(page_content="hello foo")]) ['d7f85756-2371-4bdf-9140-052780a0f9b3'] # "Hello World" is returned first because it is most salient, and the decay rate is close to 0., meaning it's still recent enough retriever.get_relevant_documents("hello world") [Document(page_content='hello world', metadata={'last_accessed_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 678341), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})] High Decay Rate# With a high decay rate (e.g., several 9’s), the recency score quickly goes to 0! If you set this all the way to 1, recency is 0 for all objects, once again making this equivalent to a vector lookup. # Define your embedding model embeddings_model = OpenAIEmbeddings() # Initialize the vectorstore as empty embedding_size = 1536 index = faiss.IndexFlatL2(embedding_size) vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {}) retriever = TimeWeightedVectorStoreRetriever(vectorstore=vectorstore, decay_rate=.999, k=1) yesterday = datetime.now() - timedelta(days=1) retriever.add_documents([Document(page_content="hello world", metadata={"last_accessed_at": yesterday})]) retriever.add_documents([Document(page_content="hello foo")]) ['40011466-5bbe-4101-bfd1-e22e7f505de2']
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html
df6a4725e1f4-2
# "Hello Foo" is returned first because "hello world" is mostly forgotten retriever.get_relevant_documents("hello world") [Document(page_content='hello foo', metadata={'last_accessed_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 494798), 'created_at': datetime.datetime(2023, 4, 16, 22, 9, 2, 178722), 'buffer_idx': 1})] Virtual Time# Using some utils in LangChain, you can mock out the time component from langchain.utils import mock_now import datetime # Notice the last access time is that date time with mock_now(datetime.datetime(2011, 2, 3, 10, 11)): print(retriever.get_relevant_documents("hello world")) [Document(page_content='hello world', metadata={'last_accessed_at': MockDateTime(2011, 2, 3, 10, 11), 'created_at': datetime.datetime(2023, 5, 13, 21, 0, 27, 279596), 'buffer_idx': 0})]
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/time_weighted_vectorstore.html
bce31c051c51-0
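To make the decay arithmetic concrete, here is a minimal sketch that simply evaluates the scoring formula from the top of this section by hand; it illustrates the math and is not the retriever’s internal implementation:

# Illustrative only: evaluates semantic_similarity + (1.0 - decay_rate) ** hours_passed
def time_weighted_score(semantic_similarity, decay_rate, hours_passed):
    return semantic_similarity + (1.0 - decay_rate) ** hours_passed

# A document last accessed 24 hours ago, with semantic similarity 0.9:
print(time_weighted_score(0.9, 1e-25, 24))   # ~1.9: the recency term is still ~1.0
print(time_weighted_score(0.9, 0.999, 24))   # ~0.9: the recency term is 1e-72, effectively 0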
Wikipedia#

Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.

This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.

Installation#

First, you need to install the wikipedia Python package.

#!pip install wikipedia

WikipediaRetriever has these arguments:

optional lang: default="en". Use it to search in a specific language section of Wikipedia.
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is currently a hard limit of 300.
optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), title, Summary. If True, other fields are also downloaded.

get_relevant_documents() has one argument, query: the free text used to find documents in Wikipedia. (These options are combined in a short sketch after the first example below.)

Examples#

Running retriever#

from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query='HUNTER X HUNTER')
docs[0].metadata  # meta-information of the Document

{'title': 'Hunter × Hunter',
'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media,
with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\n'}
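As mentioned above, the optional constructor arguments can be combined. A minimal sketch; the parameter values here are illustrative rather than from the original notebook:

from langchain.retrievers import WikipediaRetriever

# Illustrative values: search the German-language Wikipedia, download at most
# two pages, and keep all available metadata fields rather than just the defaults.
retriever_de = WikipediaRetriever(
    lang="de",
    load_max_docs=2,
    load_all_available_meta=True,
)
docs_de = retriever_de.get_relevant_documents(query="Alan Turing")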
docs[0].page_content[:400]  # the content of the Document

'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'

Question Answering on facts#

# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass

OPENAI_API_KEY = getpass()

········

import os

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name='gpt-3.5-turbo')  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

questions = [
    "What is Apify?",
    "When the Monument to the Martyrs of the 1830 Revolution was created?",
    "What is the Abhayagiri Vihāra?",
    # "How big is Wikipédia en français?",
]
chat_history = []

for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What is Apify? 

**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services. 

-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created? 

**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient. 

-> **Question**: What is the Abhayagiri Vihāra? 

**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka. 
Databerry#

The Databerry platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources). Your Datastores can then be connected to ChatGPT via Plugins, or to any other Large Language Model (LLM) via the Databerry API.

This notebook shows how to use Databerry’s retriever.

First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL. You will need the API key.

Query#

Now that our index is set up, we can set up a retriever and start querying it.

from langchain.retrievers import DataberryRetriever

retriever = DataberryRetriever(
    datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
    # api_key="DATABERRY_API_KEY", # optional if datastore is public
    # top_k=10 # optional
)

retriever.get_relevant_documents("What is Daftpage?")
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})] previous Contextual Compression next ElasticSearch BM25 Contents Query By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 11, 2023.
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/databerry.html
c54e50d36f0e-0
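Like any retriever, the DataberryRetriever can be plugged into a chain. A hedged sketch using RetrievalQA; the LLM choice and question are illustrative, and this assumes OPENAI_API_KEY is set:

from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# `retriever` is the DataberryRetriever constructed above.
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever)
qa.run("What is Daftpage?")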
Pinecone Hybrid Search#

Pinecone is a vector database with broad functionality.

This notebook goes over how to use a retriever that under the hood uses Pinecone and hybrid search. The logic of this retriever is taken from this documentation.

To use Pinecone, you must have an API key and an Environment. Here are the installation instructions.

#!pip install pinecone-client pinecone-text

import os
import getpass

os.environ['PINECONE_API_KEY'] = getpass.getpass('Pinecone API Key:')

from langchain.retrievers import PineconeHybridSearchRetriever

os.environ['PINECONE_ENVIRONMENT'] = getpass.getpass('Pinecone Environment:')

We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.

os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')

Setup Pinecone#

You should only have to do this part once.

Note: it’s important to make sure that the “context” field that holds the document text in the metadata is not indexed. Currently you need to explicitly specify the fields you do want to index. For more information check out Pinecone’s docs.

import os
import pinecone

api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY"
# find the environment next to your API key in the Pinecone console
env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT"
index_name = "langchain-pinecone-hybrid-search"
pinecone.init(api_key=api_key, environment=env)
pinecone.whoami()

WhoAmIResponse(username='load', user_label='label', projectname='load-test')

# create the index
pinecone.create_index(
    name=index_name,
    dimension=1536,  # dimensionality of dense model
    metric="dotproduct",  # sparse values supported only for dotproduct
    pod_type="s1",
    metadata_config={"indexed": []},  # see explanation above
)

Now that it’s created, we can use it.

index = pinecone.Index(index_name)

Get embeddings and sparse encoders#

Embeddings are used for the dense vectors, and a sparse encoder is used for the sparse vectors.

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

To encode the text to sparse values you can choose either SPLADE or BM25. For out-of-domain tasks we recommend using BM25.

For more information about the sparse encoders you can check out the pinecone-text library docs.

from pinecone_text.sparse import BM25Encoder
# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE

# use default tf-idf values
bm25_encoder = BM25Encoder().default()

The above code uses default tf-idf values. It’s highly recommended to fit the tf-idf values to your own corpus. You can do so as follows:

corpus = ["foo", "bar", "world", "hello"]

# fit tf-idf values on your corpus
bm25_encoder.fit(corpus)

# store the values to a json file
bm25_encoder.dump("bm25_values.json")

# load to your BM25Encoder object
bm25_encoder = BM25Encoder().load("bm25_values.json")
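To inspect what a sparse encoding looks like, you can encode text directly. As far as we understand the pinecone-text API, BM25Encoder exposes encode_queries and encode_documents; treat the exact call and return shape below as an assumption rather than documented behavior:

# Assumption: encode_queries accepts a query string and returns a sparse
# vector of the form {"indices": [...], "values": [...]}.
sparse_query = bm25_encoder.encode_queries("foo bar")
print(sparse_query)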
Load Retriever#

We can now construct the retriever!

retriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)

Add texts (if necessary)#

We can optionally add texts to the retriever (if they aren’t already in there).

retriever.add_texts(["foo", "bar", "world", "hello"])

100%|██████████| 1/1 [00:02<00:00, 2.27s/it]

Use Retriever#

We can now use the retriever!

result = retriever.get_relevant_documents("foo")
result[0]

Document(page_content='foo', metadata={})
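For intuition about how the two signals combine: hybrid search scores each document with both its dense and its sparse representation and blends the results. The sketch below is a plain-Python illustration of that idea, using a blending weight we call alpha here; it is not Pinecone’s actual implementation:

import numpy as np

def hybrid_score(dense_query, dense_doc, sparse_query, sparse_doc, alpha=0.5):
    # Dense contribution: dot product of the embedding vectors.
    dense = float(np.dot(dense_query, dense_doc))
    # Sparse contribution: dot product over shared term weights (e.g. BM25),
    # with sparse vectors represented as {term: weight} dicts for clarity.
    sparse = sum(w * sparse_doc.get(term, 0.0) for term, w in sparse_query.items())
    # alpha=1.0 -> purely dense (semantic); alpha=0.0 -> purely sparse (keyword).
    return alpha * dense + (1 - alpha) * sparse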
kNN#

In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression.

This notebook goes over how to use a retriever that under the hood uses kNN, largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.ipynb.

from langchain.retrievers import KNNRetriever
from langchain.embeddings import OpenAIEmbeddings

Create New Retriever with Texts#

retriever = KNNRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings())

Use Retriever#

We can now use the retriever!

result = retriever.get_relevant_documents("foo")
result

[Document(page_content='foo', metadata={}),
 Document(page_content='foo bar', metadata={}),
 Document(page_content='hello', metadata={}),
 Document(page_content='bar', metadata={})]
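Under the hood, the approach in the notebook linked above amounts to: embed the query and documents, normalize, take inner products, and keep the top k. A minimal NumPy sketch of that idea (an illustration, not KNNRetriever’s actual source):

import numpy as np

def knn_search(query_vec, doc_vecs, k=4):
    # Normalize so the inner product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q
    # Indices of the k most similar documents, best first.
    return np.argsort(-sims)[:k]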
Zep#

Zep - A long-term memory store for LLM applications.

More on Zep:

Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.

Key Features:

Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
Vector search over memories, with messages automatically embedded on creation.
Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
Python and JavaScript SDKs.

Zep’s Go Extractor model is easily extensible, with a simple, clean interface available to build new enrichment functionality, such as summarizers, entity extractors, embedders, and more.

Zep project: getzep/zep

Retriever Example#

This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store.

We’ll demonstrate:

Adding conversation history to the Zep memory store.
Vector search over the conversation history.

from langchain.memory.chat_message_histories import ZepChatMessageHistory
from langchain.schema import HumanMessage, AIMessage
from uuid import uuid4

# Set this to your Zep server URL
ZEP_API_URL = "http://localhost:8000"
Initialize the Zep Chat Message History Class and add a chat message history to the memory store#

NOTE: Unlike other Retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the Retriever.

session_id = str(uuid4())  # This is a unique identifier for the user/session

# Set up Zep Chat History. We'll use this to add chat histories to the memory store
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
)

# Preload some messages into the memory. The default message window is 12 messages. We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
    {
        "role": "ai",
        "content": (
            "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
            " science fiction author."
        ),
    },
    {"role": "human", "content": "Which books of hers were made into movies?"},
    {
        "role": "ai",
        "content": (
            "The most well-known adaptation of Octavia Butler's work is the FX series"
            " Kindred, based on her novel of the same name."
        ),
    },
    {"role": "human", "content": "Who were her contemporaries?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
            " Delany, and Joanna Russ."
        ),
    },
    {"role": "human", "content": "What awards did she win?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
            " Fellowship."
        ),
    },
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
    },
]

for msg in test_history:
    zep_chat_history.append(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"])
    )

Use the Zep Retriever to vector search over the Zep memory#
Zep provides native vector search over historical conversation memory. Embedding happens automatically.

NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.

from langchain.retrievers import ZepRetriever

zep_retriever = ZepRetriever(
    session_id=session_id,  # Ensure that you provide the session_id when instantiating the Retriever
    url=ZEP_API_URL,
    top_k=5,
)

await zep_retriever.aget_relevant_documents("Who wrote Parable of the Sower?")

[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759001673780126, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),
 Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602262941130749, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),
 Document(page_content='Who were her contemporaries?', metadata={'score': 0.757553366415519, 'uuid': '41f9c41a-a205-41e1-b48b-a0a4cd943fc8', 'created_at': '2023-05-25T15:03:30.243995Z', 'role': 'human', 'token_count': 8}),
 Document(page_content='Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546211059317948, 'uuid': '34678311-0098-4f1a-8fd4-5615ac692deb', 'created_at': '2023-05-25T15:03:30.231427Z', 'role': 'ai', 'token_count': 31}),
 Document(page_content='Which books of hers were made into movies?', metadata={'score': 0.7496714959247069, 'uuid': '18046c3a-9666-4d3e-b4f0-43d1394732b7', 'created_at': '2023-05-25T15:03:30.236837Z', 'role': 'human', 'token_count': 11})]

We can also use the Zep sync API to retrieve results:

zep_retriever.get_relevant_documents("Who wrote Parable of the Sower?")
[Document(page_content='Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', metadata={'score': 0.8897321402776546, 'uuid': '1c09603a-52c1-40d7-9d69-29f26256029c', 'created_at': '2023-05-25T15:03:30.268257Z', 'role': 'ai', 'token_count': 56}), Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857628682610436, 'uuid': 'f6706e8c-6c91-452f-8c1b-9559fd924657', 'created_at': '2023-05-25T15:03:30.265302Z', 'role': 'human', 'token_count': 23}), Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7759670375149477, 'uuid': '3a82a02f-056e-4c6a-b960-67ebdf3b2b93', 'created_at': '2023-05-25T15:03:30.2041Z', 'role': 'human', 'token_count': 8}),
 Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602854653476563, 'uuid': 'a2fc9c21-0897-46c8-bef7-6f5c0f71b04a', 'created_at': '2023-05-25T15:03:30.248065Z', 'role': 'ai', 'token_count': 27}),
 Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7595293992240313, 'uuid': 'f22f2498-6118-4c74-8718-aa89ccd7e3d6', 'created_at': '2023-05-25T15:03:30.261198Z', 'role': 'ai', 'token_count': 18})]
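Because message embedding happens asynchronously (see the note above), a query issued immediately after writing messages may come back empty. Below is a minimal polling sketch; the helper function, retry count, and delay are illustrative and not part of the Zep API:

import time

def search_with_retry(retriever, query, attempts=5, delay=1.0):
    # Retry until the asynchronous embedding has caught up (or we give up).
    for _ in range(attempts):
        docs = retriever.get_relevant_documents(query)
        if docs:
            return docs
        time.sleep(delay)
    return []

results = search_with_retry(zep_retriever, "Who wrote Parable of the Sower?")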