ae8b65ca644d-2
print( chain.run( {"question": """Write a message to remind John to do password reset for his website to stay secure."""}, callbacks=[StdOutCallbackHandler()], ) ) From the output, you can see that the following context from the user input contains sensitive data. # Context from user input During our recent meeting on February 23, 2023, at 10:30 AM, John Doe provided me with his personal details. His email is johndoe@example.com and his contact number is 650-456-7890. He lives in New York City, USA, and belongs to the American nationality with Christian beliefs and a leaning towards the Democratic party. He mentioned that he recently made a transaction using his credit card 4111 1111 1111 1111 and transferred bitcoins to the wallet address 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. While discussing his European travels, he noted down his IBAN as GB29 NWBK 6016 1331 9268 19. Additionally, he provided his website as https://johndoeportfolio.com. John also discussed some of his US-specific details. He said his bank account number is 1234567890123456 and his drivers license is Y12345678. His ITIN is 987-65-4321, and he recently renewed his passport, the number for which is 123456789. He emphasized not to share his SSN, which is 669-45-6789. Furthermore, he mentioned that he accesses his work files remotely through the IP 192.168.1.1 and has a medical license number MED-123456. OpaquePrompts will automatically detect the sensitive data and replace it with a placeholder. # Context after OpaquePrompts
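For reference, here is a minimal sketch of how a chain like the one run above might be constructed. The prompt template is an assumption, since the construction cell is not part of this excerpt; wrapping a base LLM with OpaquePrompts is the documented drop-in pattern.

from langchain.chains import LLMChain
from langchain.llms import OpenAI, OpaquePrompts
from langchain.prompts import PromptTemplate

# Assumed prompt template -- the notebook's actual template may differ.
prompt_template = """Answer the question using the context below.

Context: {question}

Answer:"""

chain = LLMChain(
    prompt=PromptTemplate.from_template(prompt_template),
    # OpaquePrompts sanitizes sensitive data before it reaches the base LLM
    llm=OpaquePrompts(base_llm=OpenAI()),
)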
https://python.langchain.com/docs/integrations/llms/opaqueprompts
ae8b65ca644d-3
During our recent meeting on DATE_TIME_3, at DATE_TIME_2, PERSON_3 provided me with his personal details. His email is EMAIL_ADDRESS_1 and his contact number is PHONE_NUMBER_1. He lives in LOCATION_3, LOCATION_2, and belongs to the NRP_3 nationality with NRP_2 beliefs and a leaning towards the Democratic party. He mentioned that he recently made a transaction using his credit card CREDIT_CARD_1 and transferred bitcoins to the wallet address CRYPTO_1. While discussing his NRP_1 travels, he noted down his IBAN as IBAN_CODE_1. Additionally, he provided his website as URL_1. PERSON_2 also discussed some of his LOCATION_1-specific details. He said his bank account number is US_BANK_NUMBER_1 and his drivers license is US_DRIVER_LICENSE_2. His ITIN is US_ITIN_1, and he recently renewed his passport, the number for which is DATE_TIME_1. He emphasized not to share his SSN, which is US_SSN_1. Furthermore, he mentioned that he accesses his work files remotely through the IP IP_ADDRESS_1 and has a medical license number MED-US_DRIVER_LICENSE_1. Placeholders are used in the LLM response. # response returned by LLM Hey PERSON_1, just wanted to remind you to do a password reset for your website URL_1 through your email EMAIL_ADDRESS_1. It's important to stay secure online, so don't forget to do it! The response is desanitized by replacing the placeholders with the original sensitive data. # desanitized LLM response from OpaquePrompts Hey John, just wanted to remind you to do a password reset for your website https://johndoeportfolio.com through your email johndoe@example.com. It's important to stay secure online, so don't forget to do it! Use OpaquePrompts in LangChain expressions There are also functions that can be used with LangChain expressions if the drop-in replacement doesn't offer the flexibility you need. import langchain.utilities.opaqueprompts as op from langchain.schema.runnable import RunnableMap from langchain.schema.output_parser import StrOutputParser
https://python.langchain.com/docs/integrations/llms/opaqueprompts
ae8b65ca644d-4
prompt = PromptTemplate.from_template(prompt_template)
llm = OpenAI()
pg_chain = (
    op.sanitize
    | RunnableMap(
        {
            "response": (lambda x: x["sanitized_input"]) | prompt | llm | StrOutputParser(),
            "secure_context": lambda x: x["secure_context"],
        }
    )
    | (lambda x: op.desanitize(x["response"], x["secure_context"]))
)
pg_chain.invoke(
    {
        "question": "Write a text message to remind John to do password reset for his website through his email to stay secure.",
        "history": "",
    }
)
https://python.langchain.com/docs/integrations/llms/opaqueprompts
11d7b842c5f0-0
OpenAI OpenAI offers a spectrum of models with different levels of power suitable for different tasks. This example goes over how to use LangChain to interact with OpenAI models. # get a token: https://platform.openai.com/account/api-keys from getpass import getpass OPENAI_API_KEY = getpass() import os os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY Should you need to specify your organization ID, you can use the following cell. However, it is not required if you are only part of a single organization or intend to use your default organization. You can check your default organization here. To specify your organization, you can use this: OPENAI_ORGANIZATION = getpass() os.environ["OPENAI_ORGANIZATION"] = OPENAI_ORGANIZATION from langchain.llms import OpenAI from langchain import PromptTemplate, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) If you want to manually specify your OpenAI API key and/or organization ID, you can use the following: llm = OpenAI(openai_api_key="YOUR_API_KEY", openai_organization="YOUR_ORGANIZATION_ID") Remove the openai_organization parameter should it not apply to you. llm_chain = LLMChain(prompt=prompt, llm=llm) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question) ' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys.' If you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
https://python.langchain.com/docs/integrations/llms/openai
a374f9d94021-0
OpenLLM 🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps. Installation​ Install openllm through PyPI: pip install openllm Launch OpenLLM server locally​ To start an LLM server, use the openllm start command. For example, to start a dolly-v2 server, run the following command from a terminal: openllm start dolly-v2 Wrapper​ from langchain.llms import OpenLLM server_url = "http://localhost:3000" # Replace with remote host if you are running on a remote server llm = OpenLLM(server_url=server_url) Optional: Local LLM Inference​ You may also choose to initialize an LLM managed by OpenLLM locally in the current process. This is useful for development purposes and allows developers to quickly try out different types of LLMs. When moving LLM applications to production, we recommend deploying the OpenLLM server separately and accessing it via the server_url option demonstrated above. To load an LLM locally via the LangChain wrapper: from langchain.llms import OpenLLM llm = OpenLLM( model_name="dolly-v2", model_id="databricks/dolly-v2-3b", temperature=0.94, repetition_penalty=1.2, ) Integrate with an LLMChain​ from langchain import PromptTemplate, LLMChain template = "What is a good name for a company that makes {product}?" prompt = PromptTemplate(template=template, input_variables=["product"]) llm_chain = LLMChain(prompt=prompt, llm=llm) generated = llm_chain.run(product="mechanical keyboard") print(generated)
https://python.langchain.com/docs/integrations/llms/openllm
5020f7e88eea-0
OpenLM OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code. This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both. Setup​ Install dependencies and set API keys. # Uncomment to install openlm and openai if you haven't already # !pip install openlm # !pip install openai from getpass import getpass import os import subprocess # Check if OPENAI_API_KEY environment variable is set if "OPENAI_API_KEY" not in os.environ: print("Enter your OpenAI API key:") os.environ["OPENAI_API_KEY"] = getpass() # Check if HF_API_TOKEN environment variable is set if "HF_API_TOKEN" not in os.environ: print("Enter your HuggingFace Hub API key:") os.environ["HF_API_TOKEN"] = getpass() Using LangChain with OpenLM​ Here we're going to call two models in an LLMChain, text-davinci-003 from OpenAI and gpt2 on HuggingFace. from langchain.llms import OpenLM from langchain import PromptTemplate, LLMChain question = "What is the capital of France?" template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) for model in ["text-davinci-003", "huggingface.co/gpt2"]: llm = OpenLM(model=model) llm_chain = LLMChain(prompt=prompt, llm=llm) result = llm_chain.run(question) print( """Model: {} Result: {}""".format( model, result ) ) Model: text-davinci-003 Result: France is a country in Europe. The capital of France is Paris. Model: huggingface.co/gpt2 Result: Question: What is the capital of France? Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
https://python.langchain.com/docs/integrations/llms/openlm
2fd916aa0e02-0
PipelineAI PipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models. This notebook goes over how to use LangChain with PipelineAI. PipelineAI example​ This example shows how PipelineAI integrates with LangChain; it was created by PipelineAI. Setup​ The pipeline-ai library is required to use the PipelineAI API, AKA Pipeline Cloud. Install pipeline-ai using pip install pipeline-ai. # Install the package pip install pipeline-ai Example​ Imports​ import os from langchain.llms import PipelineAI from langchain import PromptTemplate, LLMChain Set the Environment API Key​ Make sure to get your API key from PipelineAI. Check out the cloud quickstart guide. You'll be given a 30-day free trial with 10 hours of serverless GPU compute to test different models. os.environ["PIPELINE_API_KEY"] = "YOUR_API_KEY_HERE" Create the PipelineAI instance​ When instantiating PipelineAI, you need to specify the id or tag of the pipeline you want to use, e.g. pipeline_key = "public/gpt-j:base". You then have the option of passing additional pipeline-specific keyword arguments: llm = PipelineAI(pipeline_key="YOUR_PIPELINE_KEY", pipeline_kwargs={...}) Create a Prompt Template​ We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) Initiate the LLMChain​ llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain​ Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/pipelineai
0348f2f8eaba-0
Petals Petals runs 100B+ language models at home, BitTorrent-style. This notebook goes over how to use LangChain with Petals. Install petals​ The petals package is required to use the Petals API. Install petals using pip3 install petals. For Apple Silicon (M1/M2) users, please follow this guide https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642 to install petals. Imports​ import os from langchain.llms import Petals from langchain import PromptTemplate, LLMChain Set the Environment API Key​ Make sure to get your API key from Hugging Face. from getpass import getpass HUGGINGFACE_API_KEY = getpass() os.environ["HUGGINGFACE_API_KEY"] = HUGGINGFACE_API_KEY Create the Petals instance​ You can specify different parameters such as the model name, max new tokens, temperature, etc. # this can take several minutes to download big files! llm = Petals(model_name="bigscience/bloom-petals") Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s] Create a Prompt Template​ We will create a prompt template for Question and Answer. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) Initiate the LLMChain​ llm_chain = LLMChain(prompt=prompt, llm=llm) Run the LLMChain​ Provide a question and run the LLMChain. question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.run(question)
https://python.langchain.com/docs/integrations/llms/petals
4bdaff350e3d-0
Predibase Predibase allows you to train, finetune, and deploy any ML model—from linear regression to large language models. This example demonstrates using LangChain with models deployed on Predibase. Setup To run this notebook, you'll need a Predibase account and an API key. You'll also need to install the Predibase Python package: pip install predibase import os os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}" Initial Call​ from langchain.llms import Predibase model = Predibase( model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN") ) response = model("Can you recommend me a nice dry wine?") print(response) Chain Call Setup​ llm = Predibase( model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN") ) SequentialChain​ from langchain.chains import LLMChain from langchain.prompts import PromptTemplate # This is an LLMChain to write a synopsis given a title of a play. template = """You are a playwright. Given the title of a play, it is your job to write a synopsis for that title. Title: {title} Playwright: This is a synopsis for the above play:""" prompt_template = PromptTemplate(input_variables=["title"], template=template) synopsis_chain = LLMChain(llm=llm, prompt=prompt_template) # This is an LLMChain to write a review of a play given a synopsis. template = """You are a play critic from the New York Times. Given the synopsis of a play, it is your job to write a review for that play. Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:""" prompt_template = PromptTemplate(input_variables=["synopsis"], template=template) review_chain = LLMChain(llm=llm, prompt=prompt_template) # This is the overall chain where we run these two chains in sequence. from langchain.chains import SimpleSequentialChain
https://python.langchain.com/docs/integrations/llms/predibase
4bdaff350e3d-1
overall_chain = SimpleSequentialChain( chains=[synopsis_chain, review_chain], verbose=True ) review = overall_chain.run("Tragedy at sunset on the beach") Fine-tuned LLM (Use your own fine-tuned LLM from Predibase)​ from langchain.llms import Predibase model = Predibase( model="my-finetuned-LLM", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN") ) # replace my-finetuned-LLM with the name of your model in Predibase # response = model("Can you help categorize the following emails into positive, negative, and neutral?")
https://python.langchain.com/docs/integrations/llms/predibase
84f9e3daaef8-0
Prediction Guard pip install predictionguard langchain import os import predictionguard as pg from langchain.llms import PredictionGuard from langchain import PromptTemplate, LLMChain Basic LLM usage​ # Optional: add your OpenAI API key. This is optional because Prediction Guard also gives # you access to all the latest open-access models (see https://docs.predictionguard.com) os.environ["OPENAI_API_KEY"] = "<your OpenAI api key>" # Your Prediction Guard API key. Get one at predictionguard.com os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>" pgllm = PredictionGuard(model="OpenAI-text-davinci-003") Control the output structure/type of LLMs​ template = """Respond to the following query based on the context. Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦 Exclusive Candle Box - $80 Monthly Candle Box - $45 (NEW!) Scent of The Month Box - $28 (NEW!) Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉 Query: {query} Result: """ prompt = PromptTemplate(template=template, input_variables=["query"]) # Without "guarding" or controlling the output of the LLM. pgllm(prompt.format(query="What kind of post is this?")) # With "guarding" or controlling the output of the LLM. See the # Prediction Guard docs (https://docs.predictionguard.com) to learn how to # control the output with integer, float, boolean, JSON, and other types and # structures. pgllm = PredictionGuard( model="OpenAI-text-davinci-003", output={ "type": "categorical", "categories": ["product announcement", "apology", "relational"], }, ) pgllm(prompt.format(query="What kind of post is this?")) Chaining​ pgllm = PredictionGuard(model="OpenAI-text-davinci-003") template = """Question: {question}
https://python.langchain.com/docs/integrations/llms/predictionguard
84f9e3daaef8-1
Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True) question = "What NFL team won the Super Bowl in the year Justin Bieber was born?" llm_chain.predict(question=question) template = """Write a {adjective} poem about {subject}.""" prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"]) llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True) llm_chain.predict(adjective="sad", subject="ducks")
https://python.langchain.com/docs/integrations/llms/predictionguard
14245b7f3829-0
PromptLayer OpenAI PromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts as middleware between your code and OpenAI’s python library. PromptLayer records all your OpenAI API requests, allowing you to search and explore request history in the PromptLayer dashboard. This example showcases how to connect to PromptLayer to start recording your OpenAI requests. Another example is here. Install PromptLayer​ The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip: pip install promptlayer Imports​ import os from langchain.llms import PromptLayerOpenAI import promptlayer Set the Environment API Key​ You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY. You also need an OpenAI Key, called OPENAI_API_KEY. from getpass import getpass PROMPTLAYER_API_KEY = getpass() os.environ["PROMPTLAYER_API_KEY"] = PROMPTLAYER_API_KEY from getpass import getpass OPENAI_API_KEY = getpass() os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY Use the PromptLayerOpenAI LLM like normal​ You can optionally pass in pl_tags to track your requests with PromptLayer's tagging feature. llm = PromptLayerOpenAI(pl_tags=["langchain"]) llm("I am a cat and I want") The above request should now appear on your PromptLayer dashboard. Using PromptLayer Track​ If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. llm = PromptLayerOpenAI(return_pl_id=True) llm_results = llm.generate(["Tell me a joke"]) for res in llm_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard.
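As a concrete sketch of attaching a template, assuming a prompt named "example_template" exists in your PromptLayer prompt registry (the template name and input variables below are illustrative placeholders):

import promptlayer

# Attach a registry template to a tracked request; "example_template" and the
# input variables here are placeholders for your own registry entries.
promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="example_template",
    prompt_input_variables={"topic": "jokes"},
)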
https://python.langchain.com/docs/integrations/llms/promptlayer_openai
fb7831ffb230-0
RELLM RELLM is a library that wraps local Hugging Face pipeline models for structured decoding. It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression. Warning - this module is still experimental pip install rellm > /dev/null Hugging Face Baseline​ First, let's establish a qualitative baseline by checking the output of the model without structured decoding. import logging logging.basicConfig(level=logging.ERROR) prompt = """Human: "What's the capital of the United States?" AI Assistant:{ "action": "Final Answer", "action_input": "The capital of the United States is Washington D.C." } Human: "What's the capital of Pennsylvania?" AI Assistant:{ "action": "Final Answer", "action_input": "The capital of Pennsylvania is Harrisburg." } Human: "What 2 + 5?" AI Assistant:{ "action": "Final Answer", "action_input": "2 + 5 = 7." } Human: 'What's the capital of Maryland?' AI Assistant:""" from transformers import pipeline from langchain.llms import HuggingFacePipeline hf_model = pipeline( "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200 ) original_model = HuggingFacePipeline(pipeline=hf_model) generated = original_model.generate([prompt], stop=["Human:"]) print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. generations=[[Generation(text=' "What\'s the capital of Maryland?"\n', generation_info=None)]] llm_output=None That's not so impressive, is it? It didn't answer the question and it didn't follow the JSON format at all! Let's try with the structured decoder. RELLM LLM Wrapper​ Let's try that again, now providing a regex to match the JSON structured format. import regex # Note this is the regex library NOT python's re stdlib module
https://python.langchain.com/docs/integrations/llms/rellm_experimental
fb7831ffb230-1
# We'll choose a regex that matches to a structured json string that looks like: # { # "action": "Final Answer", # "action_input": string or dict # } pattern = regex.compile( r'\{\s*"action":\s*"Final Answer",\s*"action_input":\s*(\{.*\}|"[^"]*")\s*\}\nHuman:' ) from langchain_experimental.llms import RELLM model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200) generated = model.predict(prompt, stop=["Human:"]) print(generated) {"action": "Final Answer", "action_input": "The capital of Maryland is Baltimore." } Voila! Free of parsing errors.
https://python.langchain.com/docs/integrations/llms/rellm_experimental
86562de89390-0
Replicate Replicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale. This example goes over how to use LangChain to interact with Replicate models. Setup​ # magics to auto-reload external modules in case you are making changes to langchain while working on this notebook %load_ext autoreload %autoreload 2 To run this notebook, you'll need to create a Replicate account and install the replicate python client. poetry run pip install replicate Collecting replicate Using cached replicate-0.9.0-py3-none-any.whl (21 kB) Requirement already satisfied: packaging in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (23.1) Requirement already satisfied: pydantic>1 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (1.10.9) Requirement already satisfied: requests>2 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from replicate) (2.28.2) Requirement already satisfied: typing-extensions>=4.2.0 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from pydantic>1->replicate) (4.5.0) Requirement already satisfied: charset-normalizer<4,>=2 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (3.1.0) Requirement already satisfied: idna<4,>=2.5 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (3.4)
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-1
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (1.26.16) Requirement already satisfied: certifi>=2017.4.17 in /root/Source/github/docugami.langchain/libs/langchain/.venv/lib/python3.9/site-packages (from requests>2->replicate) (2023.5.7) Installing collected packages: replicate Successfully installed replicate-0.9.0 # get a token: https://replicate.com/account
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-2
from getpass import getpass REPLICATE_API_TOKEN = getpass() import os
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-3
os.environ["REPLICATE_API_TOKEN"] = REPLICATE_API_TOKEN from langchain.llms import Replicate from langchain import PromptTemplate, LLMChain Calling a model​ Find a model on the replicate explore page, and then paste in the model name and version in this format: model_name/version. For example, here is LLama-V2. llm = Replicate( model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5", input={"temperature": 0.75, "max_length": 500, "top_p": 1}, ) prompt = """ User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car? Assistant: """ llm(prompt) "1. Dogs do not have the ability to operate complex machinery like cars.\n2. Dogs do not have the physical dexterity or coordination to manipulate the controls of a car.\n3. Dogs do not have the cognitive ability to understand traffic laws and safely operate a car.\n4. Therefore, no, a dog cannot drive a car.\nAssistant, please provide the reasoning step by step.\n\nAssistant:\n\n1. Dogs do not have the ability to operate complex machinery like cars.\n\t* This is because dogs do not possess the necessary cognitive abilities to understand how to operate a car.\n2. Dogs do not have the physical dexterity or coordination to manipulate the controls of a car.\n\t* This is because dogs do not have the necessary fine motor skills to operate the pedals and steering wheel of a car.\n3. Dogs do not have the cognitive ability to understand traffic laws and safely operate a car.\n\t* This is because dogs do not have the ability to comprehend and interpret traffic signals, road signs, and other drivers' behaviors.\n4. Therefore, no, a dog cannot drive a car."
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-4
As another example, for this dolly model, click on the API tab. The model name/version would be: replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5 Only the model param is required, but we can add other model params when initializing. For example, if we were running stable diffusion and wanted to change the image dimensions: Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'}) Note that only the first output of a model will be returned. llm = Replicate( model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5" ) prompt = """ Answer the following yes/no question by reasoning step by step. Can a dog drive a car? """ llm(prompt) 'No, dogs are not capable of driving cars since they do not have hands to operate a steering wheel nor feet to control a gas pedal. However, it’s possible for a driver to train their pet in a different behavior and make them sit while transporting goods from one place to another.\n\n' We can call any replicate model using this syntax. For example, we can call stable diffusion. text2image = Replicate( model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={"image_dimensions": "512x512"}, ) image_output = text2image("A cat riding a motorcycle by Picasso") image_output
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-5
image_output = text2image("A cat riding a motorcycle by Picasso") image_output 'https://replicate.delivery/pbxt/9fJFaKfk5Zj3akAAn955gjP49G8HQpHK01M6h3BfzQoWSbkiA/out-0.png' The model spits out a URL. Let's render it. poetry run pip install Pillow Collecting Pillow Using cached Pillow-10.0.0-cp39-cp39-manylinux_2_28_x86_64.whl (3.4 MB) Installing collected packages: Pillow Successfully installed Pillow-10.0.0 from PIL import Image import requests from io import BytesIO
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-6
response = requests.get(image_output) img = Image.open(BytesIO(response.content)) img Streaming Response​ You can optionally stream the response as it is produced, which is helpful to show interactivity to users for time-consuming generations. See detailed docs on Streaming for more information. from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = Replicate( streaming=True, callbacks=[StreamingStdOutCallbackHandler()], model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5", input={"temperature": 0.75, "max_length": 500, "top_p": 1}, ) prompt = """ User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car? Assistant: """ _ = llm(prompt) 1. Dogs do not have the ability to operate complex machinery like cars. 2. Dogs do not have the physical dexterity to manipulate the controls of a car. 3. Dogs do not have the cognitive ability to understand traffic laws and drive safely. Therefore, the answer is no, a dog cannot drive a car. Stop Sequences You can also specify stop sequences. If you have a definite stop sequence for the generation that you are going to parse with anyway, it is better (cheaper and faster!) to just cancel the generation once one or more stop sequences are reached, rather than letting the model ramble on till the specified max_length. Stop sequences work regardless of whether you are in streaming mode or not, and Replicate only charges you for the generation up until the stop sequence. import time llm = Replicate( model="a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5", input={"temperature": 0.01, "max_length": 500, "top_p": 1}, )
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-7
prompt = """ User: What is the best way to learn python? Assistant: """ start_time = time.perf_counter() raw_output = llm(prompt) # raw output, no stop end_time = time.perf_counter() print(f"Raw output:\n {raw_output}") print(f"Raw output runtime: {end_time - start_time} seconds") start_time = time.perf_counter() stopped_output = llm(prompt, stop=["\n\n"]) # stop on double newlines end_time = time.perf_counter() print(f"Stopped output:\n {stopped_output}") print(f"Stopped output runtime: {end_time - start_time} seconds") Raw output: There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions:
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-8
1. Online tutorials and courses: Websites such as Codecademy, Coursera, and edX offer interactive coding lessons and courses on Python. These can be a great way to get started, especially if you prefer a self-paced approach. 2. Books: There are many excellent books on Python that can provide a comprehensive introduction to the language. Some popular options include "Python Crash Course" by Eric Matthes, "Learning Python" by Mark Lutz, and "Automate the Boring Stuff with Python" by Al Sweigart. 3. Online communities: Participating in online communities such as Reddit's r/learnpython community or Python communities on Discord can be a great way to get support and feedback as you learn. 4. Practice: The best way to learn Python is by doing. Start by writing simple programs and gradually work your way up to more complex projects. 5. Find a mentor: Having a mentor who is experienced in Python can be a great way to get guidance and feedback as you learn. 6. Join online meetups and events: Joining online meetups and events can be a great way to connect with other Python learners and get a sense of the community. 7. Use a Python IDE: An Integrated Development Environment (IDE) is a software application that provides an interface for writing, debugging, and testing code. Using a Python IDE such as PyCharm, VSCode, or Spyder can make writing and debugging Python code much easier. 8. Learn by building: One of the best ways to learn Python is by building projects. Start with small projects and gradually work your way up to more complex ones. 9. Learn from others: Look at other people's code, understand how it works and try to implement it in your own way. 10. Be patient: Learning a programming language takes time and practice, so be patient with yourself and don't get discouraged if you don't understand something at first.
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-9
Please let me know if you have any other questions or if there is anything Raw output runtime: 32.74260359999607 seconds Stopped output: There are several ways to learn Python, and the best method for you will depend on your learning style and goals. Here are a few suggestions: Stopped output runtime: 3.2350128999969456 seconds Chaining Calls​ The whole point of LangChain is to... chain! Here's an example of how to do that. from langchain.chains import SimpleSequentialChain First, let's define the LLM as a Dolly model, and text2image as a Stable Diffusion model. dolly_llm = Replicate( model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5" ) text2image = Replicate( model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf" ) First prompt in the chain prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) chain = LLMChain(llm=dolly_llm, prompt=prompt) Second prompt, to get a logo description for the company second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}", ) chain_two = LLMChain(llm=dolly_llm, prompt=second_prompt) Third prompt, let's create the image based on the description output from prompt 2 third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}", ) chain_three = LLMChain(llm=text2image, prompt=third_prompt) Now let's run it! # Run the chain specifying only the input variable for the first chain. overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three], verbose=True ) catchphrase = overall_chain.run("colorful socks") print(catchphrase)
https://python.langchain.com/docs/integrations/llms/replicate
86562de89390-10
> Entering new SimpleSequentialChain chain... Colorful socks could be named "Dazzle Socks" A logo featuring bright colorful socks could be named Dazzle Socks https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png > Finished chain. https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png response = requests.get( "https://replicate.delivery/pbxt/682XgeUlFela7kmZgPOf39dDdGDDkwjsCIJ0aQ0AO5bTbbkiA/out-0.png" ) img = Image.open(BytesIO(response.content)) img
https://python.langchain.com/docs/integrations/llms/replicate
c0ab64047b42-0
Arxiv arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics. This notebook shows how to retrieve scientific articles from Arxiv.org into the Document format that is used downstream. Installation​ First, you need to install the arxiv python package: pip install arxiv ArxivRetriever has these arguments: optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is a hard limit of 300 for now. optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (date when document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded. get_relevant_documents() has one argument, query: free text which is used to find documents on Arxiv.org Examples​ Running retriever​ from langchain.retrievers import ArxivRetriever retriever = ArxivRetriever(load_max_docs=2) docs = retriever.get_relevant_documents(query="1605.08386") docs[0].metadata # meta-information of the Document {'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'} docs[0].page_content[:400] # a content of the Document
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-1
docs[0].page_content[:400] # a content of the Document 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on fibers of a\nfixed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b' Question Answering on facts​ # get a token: https://platform.openai.com/account/api-keys
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-2
from getpass import getpass OPENAI_API_KEY = getpass() import os os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY from langchain.chat_models import ChatOpenAI from langchain.chains import ConversationalRetrievalChain model = ChatOpenAI(model_name="gpt-3.5-turbo") # switch to 'gpt-4' qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever) questions = [ "What are Heat-bath random walks with Markov base?", "What is the ImageBind model?", "How does Compositional Reasoning with Large Language Models works?", ] chat_history = [] for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What are Heat-bath random walks with Markov base? **Answer**: I'm not sure, as I don't have enough context to provide a definitive answer. The term "Heat-bath random walks with Markov base" is not mentioned in the given text. Could you provide more information or context about where you encountered this term? -> **Question**: What is the ImageBind model? **Answer**: ImageBind is an approach developed by Facebook AI Research to learn a joint embedding across six different modalities, including images, text, audio, depth, thermal, and IMU data. The approach uses the binding property of images to align each modality's embedding to image embeddings and achieve an emergent alignment across all modalities. This enables novel multimodal capabilities, including cross-modal retrieval, embedding-space arithmetic, and audio-to-image generation, among others. The approach sets a new state-of-the-art on emergent zero-shot recognition tasks across modalities, outperforming specialist supervised models. Additionally, it shows strong few-shot recognition results and serves as a new way to evaluate vision models for visual and non-visual tasks. -> **Question**: How does Compositional Reasoning with Large Language Models works?
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-3
-> **Question**: How does Compositional Reasoning with Large Language Models works? **Answer**: Compositional reasoning with large language models refers to the ability of these models to correctly identify and represent complex concepts by breaking them down into smaller, more basic parts and combining them in a structured way. This involves understanding the syntax and semantics of language and using that understanding to build up more complex meanings from simpler ones. In the context of the paper "Does CLIP Bind Concepts? Probing Compositionality in Large Image Models", the authors focus specifically on the ability of a large pretrained vision and language model (CLIP) to encode compositional concepts and to bind variables in a structure-sensitive way. They examine CLIP's ability to compose concepts in a single-object setting, as well as in situations where concept binding is needed. The authors situate their work within the tradition of research on compositional distributional semantics models (CDSMs), which seek to bridge the gap between distributional models and formal semantics by building architectures which operate over vectors yet still obey traditional theories of linguistic composition. They compare the performance of CLIP with several architectures from research on CDSMs to evaluate its ability to encode and reason about compositional concepts. questions = [ "What are Heat-bath random walks with Markov base? Include references to answer.", ] chat_history = [] for question in questions: result = qa({"question": question, "chat_history": chat_history}) chat_history.append((question, result["answer"])) print(f"-> **Question**: {question} \n") print(f"**Answer**: {result['answer']} \n") -> **Question**: What are Heat-bath random walks with Markov base? Include references to answer. **Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
https://python.langchain.com/docs/integrations/retrievers/arxiv
c0ab64047b42-4
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties. References: Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18. Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
https://python.langchain.com/docs/integrations/retrievers/arxiv
8b2aa2e7aa85-0
With Kendra, users can search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and contextual meanings to provide highly relevant search results. import boto3 from langchain.retrievers import AmazonKendraRetriever
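A minimal sketch of creating and querying the retriever follows; the index id and query are placeholders, and boto3 is assumed to find AWS credentials in your environment:

from langchain.retrievers import AmazonKendraRetriever

# Placeholder index id -- use the id of your own Kendra index.
retriever = AmazonKendraRetriever(index_id="<YOUR_KENDRA_INDEX_ID>")
docs = retriever.get_relevant_documents("What is Amazon Kendra?")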
https://python.langchain.com/docs/integrations/retrievers/amazon_kendra_retriever
ddf3d6daa172-0
Azure Cognitive Search Azure Cognitive Search (formerly known as Azure Search) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications. Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities: A search engine for full text search over a search index containing user-owned content Rich indexing, with lexical analysis and optional AI enrichment for content extraction and transformation Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more Programmability through REST APIs and client libraries in Azure SDKs Azure integration at the data layer, machine learning layer, and AI (Cognitive Services) This notebook shows how to use Azure Cognitive Search (ACS) within LangChain. Set up Azure Cognitive Search​ To set up ACS, please follow the instructions here. Please note the name of your ACS service, the name of your ACS index, and your API key. Your API key can be either an Admin or a Query key, but since we only read data, it is recommended to use a Query key. Using the Azure Cognitive Search Retriever​ import os from langchain.retrievers import AzureCognitiveSearchRetriever Set Service Name, Index Name and API key as environment variables (alternatively, you can pass them as arguments to AzureCognitiveSearchRetriever). os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = "<YOUR_ACS_SERVICE_NAME>" os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = "<YOUR_ACS_INDEX_NAME>" os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = "<YOUR_API_KEY>" Create the Retriever retriever = AzureCognitiveSearchRetriever(content_key="content", top_k=10) Now you can use it to retrieve documents from Azure Cognitive Search. retriever.get_relevant_documents("what is langchain") You can change the number of results returned with the top_k parameter. The default value is None, which returns all results.
https://python.langchain.com/docs/integrations/retrievers/azure_cognitive_search
f468fc335bbd-0
BM25 BM25, also known as Okapi BM25, is a ranking function used in information retrieval systems to estimate the relevance of documents to a given search query. This notebook goes over how to use a retriever that uses BM25 under the hood, via the rank_bm25 package. from langchain.retrievers import BM25Retriever Create New Retriever with Texts​ retriever = BM25Retriever.from_texts(["foo", "bar", "world", "hello", "foo bar"]) Create a New Retriever with Documents​ You can now create a new retriever with the documents you created. from langchain.schema import Document retriever = BM25Retriever.from_documents( [ Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar"), ] ) Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})]
https://python.langchain.com/docs/integrations/retrievers/bm25
331c9696c849-0
This notebook shows how to use the ChatGPT Retriever Plugin within LangChain. # STEP 1: Load # Load documents using LangChain's DocumentLoaders # This is from https://langchain.readthedocs.io/en/latest/modules/document_loaders/examples/csv.html from langchain.document_loaders.csv_loader import CSVLoader loader = CSVLoader( file_path="../../document_loaders/examples/example_data/mlb_teams_2012.csv" ) data = loader.load() # STEP 2: Convert # Convert Document to format expected by https://github.com/openai/chatgpt-retrieval-plugin from typing import List from langchain.docstore.document import Document import json def write_json(path: str, documents: List[Document]) -> None: results = [{"text": doc.page_content} for doc in documents] with open(path, "w") as f: json.dump(results, f, indent=2) write_json("foo.json", data) # STEP 3: Use
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
331c9696c849-1
write_json("foo.json", data) # STEP 3: Use # Ingest this as you would any other json file in https://github.com/openai/chatgpt-retrieval-plugin/tree/main/scripts/process_json Okay, so we've created the ChatGPT Retriever Plugin, but how do we actually use it? The below code walks through how to do that. We want to use ChatGPTPluginRetriever so we have to get the OpenAI API Key. [Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata={'id': '456_0', 'metadata': {'source': 'email', 'source_id': '567', 'url': None, 'created_at': '1609592400.0', 'author': 'Alice', 'document_id': '456'}, 'embedding': None, 'score': 0.925571561}, lookup_index=0), Document(page_content='This is a document about something', lookup_str='', metadata={'id': '123_0', 'metadata': {'source': 'file', 'source_id': 'https://example.com/doc1', 'url': 'https://example.com/doc1', 'created_at': '1609502400.0', 'author': 'Alice', 'document_id': '123'}, 'embedding': None, 'score': 0.6987589}, lookup_index=0), Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata={'id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631_0', 'metadata': {'source': None, 'source_id': None, 'url': None, 'created_at': None, 'author': None, 'document_id': '59c2c0c1-ae3f-4272-a1da-f44a723ea631'}, 'embedding': None, 'score': 0.697888613}, lookup_index=0)]
https://python.langchain.com/docs/integrations/retrievers/chatgpt-plugin
c7c5f52df5af-0
The Chaindesk platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources). Then your Datastores can be connected to ChatGPT via Plugins or to any other Large Language Model (LLM) via the Chaindesk API. First, you will need to sign up for Chaindesk, create a datastore, add some data, and get your datastore API endpoint URL. You will also need the API key. Now that our index is set up, we can set up a retriever and start querying it. [Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
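The querying code is not part of this excerpt; a minimal sketch using LangChain's ChaindeskRetriever follows, where the datastore URL, API key, and query are placeholders:

from langchain.retrievers import ChaindeskRetriever

retriever = ChaindeskRetriever(
    datastore_url="https://<your-datastore-id>.chaindesk.ai/query",  # placeholder
    api_key="<CHAINDESK_API_KEY>",  # optional if the datastore is public
)
docs = retriever.get_relevant_documents("What is Daftpage?")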
https://python.langchain.com/docs/integrations/retrievers/chaindesk
c7c5f52df5af-1
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}), Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
https://python.langchain.com/docs/integrations/retrievers/chaindesk
2e32f06a0cac-0
Let's start by initializing a simple vector store retriever and storing the 2022 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs. Document 1: One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. ---------------------------------------------------------------------------------------------------- Document 2: As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. ---------------------------------------------------------------------------------------------------- Document 3: A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. ---------------------------------------------------------------------------------------------------- Document 4: He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. ---------------------------------------------------------------------------------------------------- Document 5: I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. I’ve worked on these issues a long time. I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
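The setup code is not part of this excerpt; here is a hedged sketch of the base retriever that would produce numbered documents like those shown, where the file path, chunk sizes, and query are assumptions modeled on the standard example:

from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Assumed path and chunking parameters.
documents = TextLoader("state_of_the_union.txt").load()
texts = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100).split_documents(documents)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 20})
docs = retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson?")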
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
2e32f06a0cac-1
So let’s not abandon our streets. Or choose between safety and equal justice. ---------------------------------------------------------------------------------------------------- Document 6: Vice President Harris and I ran for office with a new economic vision for America. Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up and the middle out, not from the top down. Because we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. America used to have the best roads, bridges, and airports on Earth. Now our infrastructure is ranked 13th in the world. ---------------------------------------------------------------------------------------------------- Document 7: And tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. By the end of this year, the deficit will be down to less than half what it was before I took office. The only president ever to cut the deficit by more than one trillion dollars in a single year. Lowering your costs also means demanding more competition. I’m a capitalist, but capitalism without competition isn’t capitalism. It’s exploitation—and it drives up prices. ---------------------------------------------------------------------------------------------------- Document 8: For the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. Vice President Harris and I ran for office with a new economic vision for America. ---------------------------------------------------------------------------------------------------- Document 9: All told, we created 369,000 new manufacturing jobs in America just last year. Powered by people I’ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who’s here with us tonight. As Ohio Senator Sherrod Brown says, “It’s time to bury the label “Rust Belt.” It’s time. But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. ---------------------------------------------------------------------------------------------------- Document 10: I’m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve.
And fourth, let’s end cancer as we know it. This is personal to me and Jill, to Kamala, and to so many of you. Cancer is the #2 cause of death in America–second only to heart disease.
----------------------------------------------------------------------------------------------------
Document 11:

He will never extinguish their love of freedom. He will never weaken the resolve of the free world. We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. The pandemic has been punishing. And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. I understand.
----------------------------------------------------------------------------------------------------
Document 12:

Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny.
----------------------------------------------------------------------------------------------------
Document 13:

I know. One of those soldiers was my son Major Beau Biden. We don’t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. But I’m committed to finding out everything we can. Committed to military families like Danielle Robinson from Ohio. The widow of Sergeant First Class Heath Robinson. He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq.
----------------------------------------------------------------------------------------------------
Document 14:

And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. First, beat the opioid epidemic. There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.
----------------------------------------------------------------------------------------------------
Document 15:

Third, support our veterans. Veterans are the best of us.
I’ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. Our troops in Iraq and Afghanistan faced many dangers.
----------------------------------------------------------------------------------------------------
Document 16:

When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven’t done in a long time: build a better America. For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. And I know you’re tired, frustrated, and exhausted. But I also know this.
----------------------------------------------------------------------------------------------------
Document 17:

Now is the hour. Our moment of responsibility. Our test of resolve and conscience, of history itself. It is in this moment that our character is formed. Our purpose is found. Our future is forged. Well I know this nation. We will meet the test. To protect freedom and liberty, to expand fairness and opportunity. We will save democracy. As hard as these times have been, I am more optimistic about America today than I have been my whole life.
----------------------------------------------------------------------------------------------------
Document 18:

He didn’t know how to stop fighting, and neither did she. Through her pain she found purpose to demand we do better. Tonight, Danielle—we are. The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. And tonight, I’m announcing we’re expanding eligibility to veterans suffering from nine respiratory cancers.
----------------------------------------------------------------------------------------------------
Document 19:

I understand. I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. That’s why one of the first things I did as President was fight to pass the American Rescue Plan. Because people were hurting. We needed to act, and we did. Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.
----------------------------------------------------------------------------------------------------
Document 20:
So let’s not abandon our streets. Or choose between safety and equal justice. Let’s come together to protect our communities, restore trust, and hold law enforcement accountable. That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.

Now let's wrap our base retriever with a ContextualCompressionRetriever, adding a CohereRerank compressor that uses the Cohere rerank endpoint to re-rank the returned results.
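The wiring for that step isn't shown in this excerpt; a minimal sketch, assuming a Cohere API key is configured in the environment and retriever is the base vector store retriever set up above:

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank

# CohereRerank scores each candidate document against the query and
# keeps only the highest-ranked ones.
compressor = CohereRerank()
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "What did the president say about Ketanji Brown Jackson"
)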
https://python.langchain.com/docs/integrations/retrievers/cohere-reranker
DocArray Retriever DocArray is a versatile, open-source tool for managing your multi-modal data. It lets you shape your data however you want, and offers the flexibility to store and search it using various document index backends. Plus, it gets even better - you can utilize your DocArray document index to create a DocArrayRetriever, and build awesome Langchain apps! This notebook is split into two sections. The first section offers an introduction to all five supported document index backends. It provides guidance on setting up and indexing each backend, and also instructs you on how to build a DocArrayRetriever for finding relevant documents. In the second section, we'll select one of these backends and illustrate how to use it through a basic example. Document Index Backends InMemoryExactNNIndex HnswDocumentIndex WeaviateDocumentIndex ElasticDocIndex QdrantDocumentIndex Movie Retrieval using HnswDocumentIndex Normal Retriever Retriever with Filters Retriever with MMR Search Document Index Backends from langchain.retrievers import DocArrayRetriever from docarray import BaseDoc from docarray.typing import NdArray import numpy as np from langchain.embeddings import FakeEmbeddings import random embeddings = FakeEmbeddings(size=32) Before you start building the index, it's important to define your document schema. This determines what fields your documents will have and what type of data each field will hold. For this demonstration, we'll create a somewhat random schema containing 'title' (str), 'title_embedding' (numpy array), 'year' (int), and 'color' (str) class MyDoc(BaseDoc): title: str title_embedding: NdArray[32] year: int color: str InMemoryExactNNIndex​ InMemoryExactNNIndex stores all Documents in memory. It is a great starting point for small datasets, where you may not want to launch a database server. Learn more here: https://docs.docarray.org/user_guide/storing/index_in_memory/ from docarray.index import InMemoryExactNNIndex
# initialize the index db = InMemoryExactNNIndex[MyDoc]() # index data db.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = {"year": {"$lte": 90}} # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query, ) # find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 56', metadata={'id': '1f33e58b6468ab722f3786b96b20afe6', 'year': 56, 'color': 'red'})] HnswDocumentIndex​ HnswDocumentIndex is a lightweight Document Index implementation that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. Learn more here: https://docs.docarray.org/user_guide/storing/index_hnswlib/ from docarray.index import HnswDocumentIndex # initialize the index db = HnswDocumentIndex[MyDoc](work_dir="hnsw_index") # index data db.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = {"year": {"$lte": 90}} # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query, )
# find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 28', metadata={'id': 'ca9f3f4268eec7c97a7d6e77f541cb82', 'year': 28, 'color': 'red'})] WeaviateDocumentIndex​ WeaviateDocumentIndex is a document index that is built upon the Weaviate vector database. Learn more here: https://docs.docarray.org/user_guide/storing/index_weaviate/ # There's a small difference with the Weaviate backend compared to the others. # Here, you need to 'mark' the field used for vector search with 'is_embedding=True'. # So, let's create a new schema for Weaviate that takes care of this requirement. from pydantic import Field class WeaviateDoc(BaseDoc): title: str title_embedding: NdArray[32] = Field(is_embedding=True) year: int color: str from docarray.index import WeaviateDocumentIndex # initialize the index dbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080") db = WeaviateDocumentIndex[WeaviateDoc](db_config=dbconfig) # index data (note: the documents must use the WeaviateDoc schema defined above) db.index( [ WeaviateDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = {"path": ["year"], "operator": "LessThanEqual", "valueInt": "90"} # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query, )
# find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 17', metadata={'id': '3a5b76e85f0d0a01785dc8f9d965ce40', 'year': 17, 'color': 'red'})] ElasticDocIndex​ ElasticDocIndex is a document index that is built upon ElasticSearch. Learn more here: https://docs.docarray.org/user_guide/storing/index_elastic/ from docarray.index import ElasticDocIndex # initialize the index db = ElasticDocIndex[MyDoc]( hosts="http://localhost:9200", index_name="docarray_retriever" ) # index data db.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = {"range": {"year": {"lte": 90}}} # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query, ) # find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 46', metadata={'id': 'edbc721bac1c2ad323414ad1301528a4', 'year': 46, 'color': 'green'})] QdrantDocumentIndex​ QdrantDocumentIndex is a document index that is built upon the Qdrant vector database. Learn more here: https://docs.docarray.org/user_guide/storing/index_qdrant/ from docarray.index import QdrantDocumentIndex from qdrant_client.http import models as rest # initialize the index qdrant_config = QdrantDocumentIndex.DBConfig(path=":memory:") db = QdrantDocumentIndex[MyDoc](qdrant_config)
# index data db.index( [ MyDoc( title=f"My document {i}", title_embedding=embeddings.embed_query(f"query {i}"), year=i, color=random.choice(["red", "green", "blue"]), ) for i in range(100) ] ) # optionally, you can create a filter query filter_query = rest.Filter( must=[ rest.FieldCondition( key="year", range=rest.Range( gte=10, lt=90, ), ) ] ) WARNING:root:Payload indexes have no effect in the local Qdrant. Please use server Qdrant if you need payload indexes. # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="title_embedding", content_field="title", filters=filter_query, )
# find the relevant document doc = retriever.get_relevant_documents("some query") print(doc) [Document(page_content='My document 80', metadata={'id': '97465f98d0810f1f330e4ecc29b13d20', 'year': 80, 'color': 'blue'})] Movie Retrieval using HnswDocumentIndex movies = [ { "title": "Inception", "description": "A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.", "director": "Christopher Nolan", "rating": 8.8, }, { "title": "The Dark Knight", "description": "When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.", "director": "Christopher Nolan", "rating": 9.0, }, { "title": "Interstellar", "description": "Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.", "director": "Christopher Nolan", "rating": 8.6, }, { "title": "Pulp Fiction", "description": "The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", "director": "Quentin Tarantino", "rating": 8.9, }, { "title": "Reservoir Dogs", "description": "When a simple jewelry heist goes horribly wrong, the surviving criminals begin to suspect that one of them is a police informant.", "director": "Quentin Tarantino", "rating": 8.3, }, { "title": "The Godfather", "description": "An aging patriarch of an organized crime dynasty transfers control of his empire to his reluctant son.", "director": "Francis Ford Coppola", "rating": 9.2, }, ] import getpass import os os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") from docarray import BaseDoc, DocList from docarray.typing import NdArray from langchain.embeddings.openai import OpenAIEmbeddings
# define schema for your movie documents class MyDoc(BaseDoc): title: str description: str description_embedding: NdArray[1536] rating: float director: str embeddings = OpenAIEmbeddings() # get "description" embeddings, and create documents docs = DocList[MyDoc]( [ MyDoc( description_embedding=embeddings.embed_query(movie["description"]), **movie ) for movie in movies ] ) from docarray.index import HnswDocumentIndex # initialize the index db = HnswDocumentIndex[MyDoc](work_dir="movie_search") # add data db.index(docs) Normal Retriever​ from langchain.retrievers import DocArrayRetriever # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="description_embedding", content_field="description", ) # find the relevant document doc = retriever.get_relevant_documents("movie about dreams") print(doc) [Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})] Retriever with Filters​ from langchain.retrievers import DocArrayRetriever # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="description_embedding", content_field="description", filters={"director": {"$eq": "Christopher Nolan"}}, top_k=2, )
# find relevant documents docs = retriever.get_relevant_documents("space travel") print(docs) [Document(page_content='Interstellar explores the boundaries of human exploration as a group of astronauts venture through a wormhole in space. In their quest to ensure the survival of humanity, they confront the vastness of space-time and grapple with love and sacrifice.', metadata={'id': 'ab704cc7ae8573dc617f9a5e25df022a', 'title': 'Interstellar', 'rating': 8.6, 'director': 'Christopher Nolan'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'})] Retriever with MMR search​ from langchain.retrievers import DocArrayRetriever # create a retriever retriever = DocArrayRetriever( index=db, embeddings=embeddings, search_field="description_embedding", content_field="description", filters={"rating": {"$gte": 8.7}}, search_type="mmr", top_k=3, )
# find relevant documents docs = retriever.get_relevant_documents("action movies") print(docs) [Document(page_content="The lives of two mob hitmen, a boxer, a gangster's wife, and a pair of diner bandits intertwine in four tales of violence and redemption.", metadata={'id': 'e6aa313bbde514e23fbc80ab34511afd', 'title': 'Pulp Fiction', 'rating': 8.9, 'director': 'Quentin Tarantino'}), Document(page_content='A thief who steals corporate secrets through the use of dream-sharing technology is given the task of planting an idea into the mind of a CEO.', metadata={'id': 'f1649d5b6776db04fec9a116bbb6bbe5', 'title': 'Inception', 'rating': 8.8, 'director': 'Christopher Nolan'}), Document(page_content='When the menace known as the Joker wreaks havoc and chaos on the people of Gotham, Batman must accept one of the greatest psychological and physical tests of his ability to fight injustice.', metadata={'id': '91dec17d4272041b669fd113333a65f7', 'title': 'The Dark Knight', 'rating': 9.0, 'director': 'Christopher Nolan'})]
https://python.langchain.com/docs/integrations/retrievers/docarray_retriever
ElasticSearch BM25 Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others. The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval. This notebook shows how to use a retriever that uses ElasticSearch and BM25. For more information on the details of BM25 see this blog post. #!pip install elasticsearch from langchain.retrievers import ElasticSearchBM25Retriever Create New Retriever​ elasticsearch_url = "http://localhost:9200" retriever = ElasticSearchBM25Retriever.create(elasticsearch_url, "langchain-index-4") # Alternatively, you can load an existing index # import elasticsearch # elasticsearch_url="http://localhost:9200" # retriever = ElasticSearchBM25Retriever(elasticsearch.Elasticsearch(elasticsearch_url), "langchain-index") Add texts (if necessary)​ We can optionally add texts to the retriever (if they aren't already in there) retriever.add_texts(["foo", "bar", "world", "hello", "foo bar"]) ['cbd4cb47-8d9f-4f34-b80e-ea871bc49856', 'f3bd2e24-76d1-4f9b-826b-ec4c0e8c7365',
'8631bfc8-7c12-48ee-ab56-8ad5f373676e', '8be8374c-3253-4d87-928d-d73550a2ecf0', 'd79f457b-2842-4eab-ae10-77aa420b53d7'] Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={})]
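For reference, the Okapi BM25 ranking function described above is commonly written as

$$\text{score}(D, Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot \frac{f(q_i, D)\,(k_1 + 1)}{f(q_i, D) + k_1 \left(1 - b + b \cdot \frac{|D|}{\mathrm{avgdl}}\right)}$$

where $f(q_i, D)$ is the frequency of query term $q_i$ in document $D$, $|D|$ is the document length, $\mathrm{avgdl}$ is the average document length in the collection, and $k_1$ and $b$ are free parameters (typically $k_1 \in [1.2, 2.0]$ and $b = 0.75$).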
https://python.langchain.com/docs/integrations/retrievers/elastic_search_bm25
Google Cloud Enterprise Search Enterprise Search is a part of the Generative AI App Builder suite of tools offered by Google Cloud. Gen AI App Builder lets developers, even those with limited machine learning skills, quickly and easily tap into the power of Google’s foundation models, search expertise, and conversational AI technologies to create enterprise-grade generative AI applications. Enterprise Search lets organizations quickly build generative AI powered search engines for customers and employees. Enterprise Search is underpinned by a variety of Google Search technologies, including semantic search, which helps deliver more relevant results than traditional keyword-based search techniques by using natural language processing and machine learning techniques to infer relationships within the content and intent from the user’s query input. Enterprise Search also benefits from Google’s expertise in understanding how users search and factors in content relevance to order displayed results. Google Cloud offers Enterprise Search via Gen App Builder in Google Cloud Console and via an API for enterprise workflow integration. This notebook demonstrates how to configure Enterprise Search and use the Enterprise Search retriever. The Enterprise Search retriever encapsulates the Generative AI App Builder Python client library and uses it to access the Enterprise Search Search Service API. Install pre-requisites​ You need to install the google-cloud-discoveryengine package to use the Enterprise Search retriever. pip install google-cloud-discoveryengine Configure access to Google Cloud and Google Cloud Enterprise Search​ Enterprise Search is generally available on an allowlist basis (which means customers need to be approved for access) as of June 6, 2023. Contact your Google Cloud sales team for access and pricing details. We are previewing additional features that are coming soon to the generally available offering as part of our Trusted Tester program. Sign up for Trusted Tester and contact your Google Cloud sales team for an expedited trial. Before you can run this notebook you need to: Set or create a Google Cloud project and turn on Gen App Builder Create and populate an unstructured data store Set credentials to access the Enterprise Search API Set or create a Google Cloud project and turn on Gen App Builder​ Follow the instructions in the Enterprise Search Getting Started guide to set/create a GCP project and enable Gen App Builder.
Create and populate an unstructured data store​ Use Google Cloud Console to create an unstructured data store and populate it with the example PDF documents from the gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs Cloud Storage folder. Make sure to use the Cloud Storage (without metadata) option. Set credentials to access Enterprise Search API​ The Gen App Builder client libraries used by the Enterprise Search retriever provide high-level language support for authenticating to Gen App Builder programmatically. Client libraries support Application Default Credentials (ADC); the libraries look for credentials in a set of defined locations and use those credentials to authenticate requests to the API. With ADC, you can make credentials available to your application in a variety of environments, such as local development or production, without needing to modify your application code. If running in Google Colab, authenticate with google.colab.google.auth; otherwise follow one of the supported methods to make sure that your Application Default Credentials are properly set. import sys
if "google.colab" in sys.modules: from google.colab import auth as google_auth
google_auth.authenticate_user() Configure and use the Enterprise Search retriever​ The Enterprise Search retriever is implemented in the langchain.retrievers.GoogleCloudEnterpriseSearchRetriever class. The get_relevant_documents method returns a list of langchain.schema.Document documents where the page_content field of each document is populated with the document content. Depending on the data type used in Enterprise Search (structured or unstructured) the page_content field is populated as follows: Unstructured data source: either an extractive segment or an extractive answer that matches a query. The metadata field is populated with metadata (if any) of the document from which the segments or answers were extracted. Structured data source: a JSON string containing all the fields returned from the structured data source. The metadata field is populated with metadata (if any) of the document. Only for Unstructured data sources:​ An extractive answer is verbatim text that is returned with each search result. It is extracted directly from the original document. Extractive answers are typically displayed near the top of web pages to provide an end user with a brief answer that is contextually relevant to their query. Extractive answers are available for website and unstructured search. An extractive segment is verbatim text that is returned with each search result. An extractive segment is usually more verbose than an extractive answer. Extractive segments can be displayed as an answer to a query, and can be used to perform post-processing tasks and as input for large language models to generate answers or new text. Extractive segments are available for unstructured search. For more information about extractive segments and extractive answers refer to product documentation. When creating an instance of the retriever you can specify a number of parameters that control which Enterprise data store to access and how a natural language query is processed, including configurations for extractive answers and segments. The mandatory parameters are:​ project_id - Your Google Cloud PROJECT_ID search_engine_id - The ID of the data store you want to use. The project_id and search_engine_id parameters can be provided explicitly in the retriever's constructor or through the environment variables - PROJECT_ID and SEARCH_ENGINE_ID. You can also configure a number of optional parameters, including: max_documents - The maximum number of documents used to provide extractive segments or extractive answers
get_extractive_answers - By default, the retriever is configured to return extractive segments. Set this field to True to return extractive answers. This is used only when engine_data_type is set to 0 (unstructured) max_extractive_answer_count - The maximum number of extractive answers returned in each search result. At most 5 answers will be returned. This is used only when engine_data_type is set to 0 (unstructured) max_extractive_segment_count - The maximum number of extractive segments returned in each search result. Currently one segment will be returned. This is used only when engine_data_type is set to 0 (unstructured) filter - The filter expression that allows you to filter the search results based on the metadata associated with the documents in the searched data store. query_expansion_condition - Specification to determine under which conditions query expansion should occur. 0 - Unspecified query expansion condition. In this case, server behavior defaults to disabled. 1 - Disabled query expansion. Only the exact search query is used, even if SearchResponse.total_size is zero. 2 - Automatic query expansion built by the Search API. engine_data_type - Defines the Enterprise Search data type 0 - Unstructured data 1 - Structured data Configure and use the retriever for unstructured data with extractive segments​ from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever
PROJECT_ID = "<YOUR PROJECT ID>" # Set to your Project ID SEARCH_ENGINE_ID = "<YOUR SEARCH ENGINE ID>" # Set to your data store ID retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, ) query = "What are Alphabet's Other Bets?" result = retriever.get_relevant_documents(query) for doc in result: print(doc) Configure and use the retriever for unstructured data with extractive answers​ retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, max_extractive_answer_count=3, get_extractive_answers=True, ) query = "What are Alphabet's Other Bets?" result = retriever.get_relevant_documents(query) for doc in result: print(doc) Configure and use the retriever for structured data with extractive answers​ retriever = GoogleCloudEnterpriseSearchRetriever( project_id=PROJECT_ID, search_engine_id=SEARCH_ENGINE_ID, max_documents=3, engine_data_type=1 ) result = retriever.get_relevant_documents(query) for doc in result: print(doc)
https://python.langchain.com/docs/integrations/retrievers/google_cloud_enterprise_search
Google Drive Retriever This notebook covers how to retrieve documents from Google Drive. Prerequisites​ Create a Google Cloud project or use an existing project Enable the Google Drive API Authorize credentials for desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib Instructions for retrieving your Google Docs data​ By default, the GoogleDriveRetriever expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the GOOGLE_ACCOUNT_FILE environment variable. token.json is stored in the same directory (or set the parameter token_path). Note that token.json will be created automatically the first time you use the retriever. GoogleDriveRetriever can retrieve a selection of files with some requests. By default, if you use a folder_id, all the files inside this folder are retrieved as Documents. You can obtain your folder and document id from the URL: Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is "1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5" Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is "1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw" The special value root is for your personal home. #!pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib from langchain_googledrive.retrievers import GoogleDriveRetriever folder_id="root" #folder_id='1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5' retriever = GoogleDriveRetriever( num_results=2, ) By default, all files with these mime-types can be converted to Document: text/text text/plain text/html text/csv text/markdown image/png image/jpeg application/epub+zip application/pdf application/rtf
application/vnd.google-apps.document (GDoc) application/vnd.google-apps.presentation (GSlide) application/vnd.google-apps.spreadsheet (GSheet) application/vnd.google.colaboratory (Notebook colab) application/vnd.openxmlformats-officedocument.presentationml.presentation (PPTX) application/vnd.openxmlformats-officedocument.wordprocessingml.document (DOCX) It's possible to update or customize this. See the documentation of GDriveRetriever. But the corresponding packages must be installed. #!pip install unstructured retriever.get_relevant_documents("machine learning") You can customize the criteria used to select the files. A set of predefined filters is provided:

| template | description |
| -------------------------------------- | --------------------------------------------------------------------- |
| gdrive-all-in-folder | Return all compatible files from a folder_id |
| gdrive-query | Search query in all drives |
| gdrive-by-name | Search file with name query |
| gdrive-query-in-folder | Search query in folder_id (and sub-folders if _recursive=true) |
| gdrive-mime-type | Search a specific mime_type |
| gdrive-mime-type-in-folder | Search a specific mime_type in folder_id |
| gdrive-query-with-mime-type | Search query with a specific mime_type |
| gdrive-query-with-mime-type-and-folder | Search query with a specific mime_type and in folder_id |

retriever = GoogleDriveRetriever( template="gdrive-query", # Search everywhere num_results=2, # But take only 2 documents ) for doc in retriever.get_relevant_documents("machine learning"): print("---") print(doc.page_content.strip()[:60]+"...") Alternatively, you can customize the prompt with a specialized PromptTemplate: from langchain import PromptTemplate retriever = GoogleDriveRetriever( template=PromptTemplate(input_variables=['query'], # See https://developers.google.com/drive/api/guides/search-files template="(fullText contains '{query}') " "and mimeType='application/vnd.google-apps.document' " "and modifiedTime > '2000-01-01T00:00:00' " "and trashed=false"), num_results=2,
"and trashed=false"), num_results=2, # See https://developers.google.com/drive/api/v3/reference/files/list includeItemsFromAllDrives=False, supportsAllDrives=False, ) for doc in retriever.get_relevant_documents("machine learning"): print(f"{doc.metadata['name']}:") print("---") print(doc.page_content.strip()[:60]+"...") Use GDrive 'description' metadata Each Google Drive has a description field in metadata (see the details of a file). Use the snippets mode to return the description of selected files. retriever = GoogleDriveRetriever( template='gdrive-mime-type-in-folder', folder_id=folder_id, mime_type='application/vnd.google-apps.document', # Only Google Docs num_results=2, mode='snippets', includeItemsFromAllDrives=False, supportsAllDrives=False, ) retriever.get_relevant_documents("machine learning")
https://python.langchain.com/docs/integrations/retrievers/google_drive
kNN In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. This notebook goes over how to use a retriever that under the hood uses a kNN. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html from langchain.retrievers import KNNRetriever from langchain.embeddings import OpenAIEmbeddings Create New Retriever with Texts​ retriever = KNNRetriever.from_texts( ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings() ) Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='bar', metadata={})]
https://python.langchain.com/docs/integrations/retrievers/knn
Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers. The MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first. import os import chromadb from langchain.retrievers.merger_retriever import MergerRetriever from langchain.vectorstores import Chroma from langchain.embeddings import HuggingFaceEmbeddings from langchain.embeddings import OpenAIEmbeddings from langchain.document_transformers import ( EmbeddingsRedundantFilter, EmbeddingsClusteringFilter, ) from langchain.retrievers.document_compressors import DocumentCompressorPipeline from langchain.retrievers import ContextualCompressionRetriever # Get 3 different embeddings. all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2") multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1") filter_embeddings = OpenAIEmbeddings() ABS_PATH = os.path.dirname(os.path.abspath(__file__)) DB_DIR = os.path.join(ABS_PATH, "db") # Instantiate 2 different chromadb indexes, each with a different embedding. client_settings = chromadb.config.Settings( is_persistent=True, persist_directory=DB_DIR, anonymized_telemetry=False, ) db_all = Chroma( collection_name="project_store_all", persist_directory=DB_DIR, client_settings=client_settings, embedding_function=all_mini, ) db_multi_qa = Chroma( collection_name="project_store_multi", persist_directory=DB_DIR, client_settings=client_settings, embedding_function=multi_qa_mini, )
# Define 2 different retrievers with 2 different embeddings and different search types. retriever_all = db_all.as_retriever( search_type="similarity", search_kwargs={"k": 5, "include_metadata": True} ) retriever_multi_qa = db_multi_qa.as_retriever( search_type="mmr", search_kwargs={"k": 5, "include_metadata": True} ) # The Lord of the Retrievers will hold the output of both retrievers and can be used as any other # retriever on different types of chains. lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa]) # We can remove redundant results from both retrievers using yet another embedding. # Using multiple embeddings in different steps could help reduce bias. filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings) pipeline = DocumentCompressorPipeline(transformers=[filter]) compression_retriever = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr ) # This filter will divide the documents vectors into clusters or "centers" of meaning. # Then it will pick the closest document to that center for the final results. # By default the result documents will be ordered/grouped by clusters. filter_ordered_cluster = EmbeddingsClusteringFilter( embeddings=filter_embeddings, num_clusters=10, num_closest=1, ) # If you want the final documents to be ordered by the original retriever scores # you need to add the "sorted" parameter. filter_ordered_by_retriever = EmbeddingsClusteringFilter( embeddings=filter_embeddings, num_clusters=10, num_closest=1, sorted=True, ) pipeline = DocumentCompressorPipeline(transformers=[filter_ordered_by_retriever]) compression_retriever = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr ) No matter the architecture of your model, there is a substantial performance degradation when you include 10+ retrieved documents. In brief: when models must access relevant information in the middle of long contexts, they tend to ignore the provided documents. See: https://arxiv.org/abs//2307.03172 # You can use an additional document transformer to reorder documents after removing redundancy. from langchain.document_transformers import LongContextReorder
filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings) reordering = LongContextReorder() pipeline = DocumentCompressorPipeline(transformers=[filter, reordering]) compression_retriever_reordered = ContextualCompressionRetriever( base_compressor=pipeline, base_retriever=lotr )
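Once assembled, the reordered pipeline can be queried like any other retriever; a minimal usage sketch (the query string is only a placeholder):

# Redundant results are filtered out, then the survivors are reordered so the
# most relevant documents sit at the beginning and end of the context.
docs = compression_retriever_reordered.get_relevant_documents(
    "what are the main topics of the project?"
)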
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
Metal Metal is a managed service for ML Embeddings. This notebook shows how to use Metal's retriever. First, you will need to sign up for Metal and get an API key. You can do so here from metal_sdk.metal import Metal API_KEY = "" CLIENT_ID = "" INDEX_ID = "" metal = Metal(API_KEY, CLIENT_ID, INDEX_ID) Ingest Documents​ You only need to do this if you haven't already set up an index. metal.index({"text": "foo1"}) metal.index({"text": "foo"}) {'data': {'id': '642739aa7559b026b4430e42', 'text': 'foo', 'createdAt': '2023-03-31T19:51:06.748Z'}} Query​ Now that our index is set up, we can set up a retriever and start querying it. from langchain.retrievers import MetalRetriever retriever = MetalRetriever(metal, params={"limit": 2}) retriever.get_relevant_documents("foo1") [Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}), Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
https://python.langchain.com/docs/integrations/retrievers/metal
Pinecone Hybrid Search Pinecone is a vector database with broad functionality. This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search. The logic of this retriever is taken from this documentation To use Pinecone, you must have an API key and an Environment. Here are the installation instructions. #!pip install pinecone-client pinecone-text import os import getpass os.environ["PINECONE_API_KEY"] = getpass.getpass("Pinecone API Key:") from langchain.retrievers import PineconeHybridSearchRetriever os.environ["PINECONE_ENVIRONMENT"] = getpass.getpass("Pinecone Environment:") We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") Setup Pinecone​ You should only have to do this part once. Note: it's important to make sure that the "context" field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information checkout Pinecone's docs. import os import pinecone api_key = os.getenv("PINECONE_API_KEY") or "PINECONE_API_KEY" # find environment next to your API key in the Pinecone console env = os.getenv("PINECONE_ENVIRONMENT") or "PINECONE_ENVIRONMENT" index_name = "langchain-pinecone-hybrid-search" pinecone.init(api_key=api_key, environment=env) pinecone.whoami() WhoAmIResponse(username='load', user_label='label', projectname='load-test') # create the index pinecone.create_index( name=index_name, dimension=1536, # dimensionality of dense model metric="dotproduct", # sparse values supported only for dotproduct pod_type="s1", metadata_config={"indexed": []}, # see explanation above ) Now that it's created, we can use it index = pinecone.Index(index_name) Get embeddings and sparse encoders​ Embeddings are used for the dense vectors, a tokenizer is used for the sparse vector from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings() To encode the text to sparse values you can either choose SPLADE or BM25. For out-of-domain tasks we recommend using BM25. For more information about the sparse encoders you can check out the pinecone-text library docs. from pinecone_text.sparse import BM25Encoder # or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE # use default tf-idf values bm25_encoder = BM25Encoder().default() The above code is using default tf-idf values. It's highly recommended to fit the tf-idf values to your own corpus. You can do it as follows: corpus = ["foo", "bar", "world", "hello"] # fit tf-idf values on your corpus bm25_encoder.fit(corpus) # store the values to a json file bm25_encoder.dump("bm25_values.json") # load to your BM25Encoder object bm25_encoder = BM25Encoder().load("bm25_values.json") Load Retriever​ We can now construct the retriever! retriever = PineconeHybridSearchRetriever( embeddings=embeddings, sparse_encoder=bm25_encoder, index=index ) Add texts (if necessary)​ We can optionally add texts to the retriever (if they aren't already in there) retriever.add_texts(["foo", "bar", "world", "hello"]) 100%|██████████| 1/1 [00:02<00:00, 2.27s/it] Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") Document(page_content='foo', metadata={})
https://python.langchain.com/docs/integrations/retrievers/pinecone_hybrid_search
PubMed® by The National Center for Biotechnology Information, National Library of Medicine comprises more than 35 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites.

[Document(page_content='', metadata={'uid': '37549050', 'Title': 'ChatGPT: "To Be or Not to Be" in Bikini Bottom.', 'Published': '--', 'Copyright Information': ''}),
Document(page_content="BACKGROUND: ChatGPT is a large language model that has performed well on professional examinations in the fields of medicine, law, and business. However, it is unclear how ChatGPT would perform on an examination assessing professionalism and situational judgement for doctors.\nOBJECTIVE: We evaluated the performance of ChatGPT on the Situational Judgement Test (SJT): a national examination taken by all final-year medical students in the United Kingdom. This examination is designed to assess attributes such as communication, teamwork, patient safety, prioritization skills, professionalism, and ethics.\nMETHODS: All questions from the UK Foundation Programme Office's (UKFPO's) 2023 SJT practice examination were inputted into ChatGPT. For each question, ChatGPT's answers and rationales were recorded and assessed on the basis of the official UK Foundation Programme Office scoring template. Questions were categorized into domains of Good Medical Practice on the basis of the domains referenced in the rationales provided in the scoring sheet. Questions without clear domain links were screened by reviewers and assigned one or multiple domains. ChatGPT's overall performance, as well as its performance across the domains of Good Medical Practice, was evaluated.\nRESULTS: Overall, ChatGPT performed well, scoring 76% on the SJT but scoring full marks on only a few questions (9%), which may reflect possible flaws in ChatGPT's situational judgement or inconsistencies in the reasoning across questions (or both) in the examination itself. ChatGPT demonstrated consistent performance across the 4 outlined domains in Good Medical Practice for doctors.\nCONCLUSIONS: Further research is needed to understand the potential applications of large language models, such as ChatGPT, in medical education for standardizing questions and providing consistent rationales for examinations assessing professionalism and ethics.", metadata={'uid': '37548997', 'Title': 'Performance of ChatGPT on the Situational Judgement Test-A Professional Dilemmas-Based Examination for Doctors in the United Kingdom.', 'Published': '2023-08-07', 'Copyright Information': '©Robin J Borchert, Charlotte R Hickman, Jack Pepys, Timothy J Sadler. Originally published in JMIR Medical Education (https://mededu.jmir.org), 07.08.2023.'}),
Document(page_content='', metadata={'uid': '37548971', 'Title': "Large Language Models Answer Medical Questions Accurately, but Can't Match Clinicians' Knowledge.", 'Published': '2023-08-07', 'Copyright Information': ''})]
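The code that produced the output above is not included in this excerpt; a minimal sketch that yields results of this shape, assuming the PubMedRetriever integration and a query about ChatGPT:

from langchain.retrievers import PubMedRetriever

retriever = PubMedRetriever()
# Each result is a Document whose metadata carries the PubMed uid, title,
# publication date, and copyright information.
docs = retriever.get_relevant_documents("chatgpt")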
https://python.langchain.com/docs/integrations/retrievers/pubmed
RePhraseQueryRetriever Simple retriever that applies an LLM between the user input and the query passed to the retriever. It can be used to pre-process the user input in any way. The default prompt used in the from_llm classmethod: DEFAULT_TEMPLATE = """You are an assistant tasked with taking a natural language \ query from a user and converting it into a query for a vectorstore. \ In this process, you strip out information that is not relevant for \ the retrieval task. Here is the user query: {question}""" Create a vectorstore. from langchain.document_loaders import WebBaseLoader loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/") data = loader.load() from langchain.text_splitter import RecursiveCharacterTextSplitter text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0) all_splits = text_splitter.split_documents(data) from langchain.vectorstores import Chroma from langchain.embeddings import OpenAIEmbeddings vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings()) import logging logging.basicConfig() logging.getLogger("langchain.retrievers.re_phraser").setLevel(logging.INFO) from langchain.chat_models import ChatOpenAI from langchain.retrievers import RePhraseQueryRetriever Using the default prompt​ llm = ChatOpenAI(temperature=0) retriever_from_llm = RePhraseQueryRetriever.from_llm( retriever=vectorstore.as_retriever(), llm=llm ) docs = retriever_from_llm.get_relevant_documents( "Hi I'm Lance. What are the approaches to Task Decomposition?" ) INFO:langchain.retrievers.re_phraser:Re-phrased question: The user query can be converted into a query for a vectorstore as follows: "approaches to Task Decomposition" docs = retriever_from_llm.get_relevant_documents( "I live in San Francisco. What are the Types of Memory?" ) INFO:langchain.retrievers.re_phraser:Re-phrased question: Query for vectorstore: "Types of Memory" Supply a prompt​ from langchain import LLMChain from langchain.prompts import PromptTemplate
QUERY_PROMPT = PromptTemplate( input_variables=["question"], template="""You are an assistant tasked with taking a natural language query from a user and converting it into a query for a vectorstore. In the process, strip out all information that is not relevant for the retrieval task and return a new, simplified question for vectorstore retrieval. The new user query should be in pirate speech. Here is the user query: {question} """, ) llm = ChatOpenAI(temperature=0) llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT) retriever_from_llm_chain = RePhraseQueryRetriever( retriever=vectorstore.as_retriever(), llm_chain=llm_chain ) docs = retriever_from_llm_chain.get_relevant_documents( "Hi I'm Lance. What is Maximum Inner Product Search?" ) INFO:langchain.retrievers.re_phraser:Re-phrased question: Ahoy matey! What be Maximum Inner Product Search, ye scurvy dog?
https://python.langchain.com/docs/integrations/retrievers/re_phrase
SVM Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression and outlier detection. This notebook goes over how to use a retriever that under the hood uses an SVM, using the scikit-learn package. Largely based on https://github.com/karpathy/randomfun/blob/master/knn_vs_svm.html #!pip install scikit-learn We want to use OpenAIEmbeddings so we have to get the OpenAI API Key. import os import getpass os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:") from langchain.retrievers import SVMRetriever from langchain.embeddings import OpenAIEmbeddings Create New Retriever with Texts​ retriever = SVMRetriever.from_texts( ["foo", "bar", "world", "hello", "foo bar"], OpenAIEmbeddings() ) Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})]
https://python.langchain.com/docs/integrations/retrievers/svm
Vespa Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query. This notebook shows how to use Vespa.ai as a LangChain retriever. In order to create a retriever, we use pyvespa to create a connection to a Vespa service. from vespa.application import Vespa vespa_app = Vespa(url="https://doc-search.vespa.oath.cloud") This creates a connection to a Vespa service, here the Vespa documentation search service. Using the pyvespa package, you can also connect to a Vespa Cloud instance or a local Docker instance. After connecting to the service, you can set up the retriever: from langchain.retrievers.vespa_retriever import VespaRetriever vespa_query_body = { "yql": "select content from paragraph where userQuery()", "hits": 5, "ranking": "documentation", "locale": "en-us", } vespa_content_field = "content" retriever = VespaRetriever(vespa_app, vespa_query_body, vespa_content_field) This sets up a LangChain retriever that fetches documents from the Vespa application. Here, up to 5 results are retrieved from the content field in the paragraph document type, using documentation as the ranking method. The userQuery() is replaced with the actual query passed from LangChain. Please refer to the pyvespa documentation for more information. Now you can return the results and continue using them in LangChain. retriever.get_relevant_documents("what is vespa?")
https://python.langchain.com/docs/integrations/retrievers/vespa
TF-IDF TF-IDF means term-frequency times inverse document-frequency. This notebook goes over how to use a retriever that under the hood uses TF-IDF, using the scikit-learn package. For more information on the details of TF-IDF see this blog post. # !pip install scikit-learn from langchain.retrievers import TFIDFRetriever Create New Retriever with Texts​ retriever = TFIDFRetriever.from_texts(["foo", "bar", "world", "hello", "foo bar"]) Create a New Retriever with Documents​ You can now create a new retriever with the documents you created. from langchain.schema import Document retriever = TFIDFRetriever.from_documents( [ Document(page_content="foo"), Document(page_content="bar"), Document(page_content="world"), Document(page_content="hello"), Document(page_content="foo bar"), ] ) Use Retriever​ We can now use the retriever! result = retriever.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})] Save and load​ You can easily save and load this retriever, making it handy for local development! retriever.save_local("testing.pkl") retriever_copy = TFIDFRetriever.load_local("testing.pkl") retriever_copy.get_relevant_documents("foo") [Document(page_content='foo', metadata={}), Document(page_content='foo bar', metadata={}), Document(page_content='hello', metadata={}), Document(page_content='world', metadata={})]
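For reference, assuming the retriever relies on scikit-learn's TfidfVectorizer with default settings, the score of term t in document d is computed as

$$\text{tf-idf}(t, d) = \mathrm{tf}(t, d) \cdot \left( \ln \frac{1 + n}{1 + \mathrm{df}(t)} + 1 \right)$$

where $\mathrm{tf}(t, d)$ is the count of t in d, n is the number of documents, and $\mathrm{df}(t)$ is the number of documents containing t; the resulting vectors are L2-normalized before similarity is computed.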
https://python.langchain.com/docs/integrations/retrievers/tf_idf
1bc7053edd3f-0
Weaviate Hybrid Search

Weaviate is an open source vector database.

Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results, drawing on the best features of both keyword-based and vector search techniques. Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.

This notebook shows how to use Weaviate hybrid search as a LangChain retriever.

Set up the retriever:

#!pip install weaviate-client

import weaviate
import os

WEAVIATE_URL = os.getenv("WEAVIATE_URL")
auth_client_secret = weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY"))
client = weaviate.Client(
    url=WEAVIATE_URL,
    auth_client_secret=auth_client_secret,
    additional_headers={
        "X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
    },
)
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-1
# client.schema.delete_all()

from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document

retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",
    text_key="text",
    attributes=[],
    create_schema_if_missing=True,
)

Note that with attributes=[], no extra properties are returned, which is why the search results further below come back with empty metadata; listing property names such as "title" or "author" there would surface them in each Document's metadata.

Add some data:

docs = [
    Document(
        metadata={
            "title": "Embracing The Future: AI Unveiled",
            "author": "Dr. Rebecca Simmons",
        },
        page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.",
    ),
    Document(
        metadata={
            "title": "Symbiosis: Harmonizing Humans and AI",
            "author": "Prof. Jonathan K. Sterling",
        },
        page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.",
    ),
    Document(
        metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"},
        page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.",
    ),
    Document(
        metadata={
            "title": "Conscious Constructs: The Search for AI Sentience",
            "author": "Dr. Samuel Cortez",
        },
        page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.",
    ),
    Document(
        metadata={
            "title": "Invisible Routines: Hidden AI in Everyday Life",
            "author": "Prof. Jonathan K. Sterling",
        },
        page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.",
    ),
]

retriever.add_documents(docs)
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-2
['3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be', 'eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907', '7ebbdae7-1061-445f-a046-1989f2343d8f', 'c2ab315b-3cab-467f-b23a-b26ed186318d', 'b83765f2-e5d2-471f-8c02-c3350ade4c4f']

Do a hybrid search:

retriever.get_relevant_documents("the ethical implications of AI")

[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}), Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}), Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}), Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]

Do a hybrid search with where filter:

retriever.get_relevant_documents(
    "AI integration in society",
    where_filter={
        "path": ["author"],
        "operator": "Equal",
        "valueString": "Prof. Jonathan K. Sterling",
    },
)

[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-3
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]

Do a hybrid search with scores:

retriever.get_relevant_documents(
    "AI integration in society",
    score=True,
)

[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score', 'score': '0.016393442'}}), Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.0078125 to the score\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.008064516129032258 to the score', 'score': '0.015877016'}}),
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
1bc7053edd3f-4
Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={'_additional': {'explainScore': '(bm25)\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.008064516129032258 to the score\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.0078125 to the score', 'score': '0.015877016'}}), Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={'_additional': {'explainScore': '(vector) [-0.0071824766 -0.0006682752 0.001723625 -0.01897258 -0.0045127636 0.0024410256 -0.020503938 0.013768672 0.009520169 -0.037972264]... \n(hybrid) Document 3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be contributed 0.007936507936507936 to the score', 'score': '0.007936508'}})]
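As the output above shows, the score and explainScore fields land under the _additional key of each Document's metadata. A short sketch of reading them back out:

for doc in retriever.get_relevant_documents("AI integration in society", score=True):
    additional = doc.metadata["_additional"]
    # print the hybrid score alongside a preview of the matched text
    print(additional["score"], doc.page_content[:60])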
https://python.langchain.com/docs/integrations/retrievers/weaviate-hybrid
ab896a0a20fd-0
This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.

First, you need to install the wikipedia Python package.

get_relevant_documents() has one argument, query: free text which is used to find documents in Wikipedia.
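The excerpt omits the retriever's construction. A minimal sketch, assuming the WikipediaRetriever class from langchain.retrievers:

from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query="HUNTER X HUNTER")
docs[0].metadata  # includes the page title and summary, as shown in the next chunk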
https://python.langchain.com/docs/integrations/retrievers/wikipedia
ab896a0a20fd-1
{'title': 'Hunter × Hunter', 'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\n'}
https://python.langchain.com/docs/integrations/retrievers/wikipedia
ab896a0a20fd-2
questions = [
    "What is Apify?",
    "When the Monument to the Martyrs of the 1830 Revolution was created?",
    "What is the Abhayagiri Vihāra?",
    # "How big is Wikipédia en français?",
]
chat_history = []
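The loop in the next chunk calls a qa object that is never defined in this excerpt. A minimal sketch, assuming a ConversationalRetrievalChain over the Wikipedia retriever with an OpenAI chat model:

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)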
https://python.langchain.com/docs/integrations/retrievers/wikipedia
ab896a0a20fd-3
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
https://python.langchain.com/docs/integrations/retrievers/wikipedia
887c59c1d21b-0
OpenAI

Let's load the OpenAI Embedding class.

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

text = "This is a test document."

query_result = embeddings.embed_query(text)

[-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]

doc_result = embeddings.embed_documents([text])

[-0.003186025367556387, 0.011071979803637493, -0.004020420763285827, -0.011658221276953042, -0.0010534035786864363]

Let's load the OpenAI Embedding class with first-generation models (e.g. text-search-ada-doc-001/text-search-ada-query-001). Note: these are not recommended models - see here.

from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-search-ada-doc-001")

text = "This is a test document."

query_result = embeddings.embed_query(text)

[0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212, 0.005785414075152477]

doc_result = embeddings.embed_documents([text])

[0.004452846988523035, 0.034550655976098514, -0.015029939040690051, 0.03827273883655212, 0.005785414075152477]

# if you are behind an explicit proxy, you can use the OPENAI_PROXY environment variable to pass through
import os

os.environ["OPENAI_PROXY"] = "http://proxy.yourcompany.com:8080"
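Note that embed_query returns a single vector, while embed_documents returns one vector per input string. A quick sketch checking the shapes — the 1536-dimension figure assumes the default second-generation text-embedding-ada-002 model:

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()  # default second-generation model
doc_results = embeddings.embed_documents(["first document", "second document"])
print(len(doc_results))     # 2: one vector per input document
print(len(doc_results[0]))  # 1536 dimensions for text-embedding-ada-002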
https://python.langchain.com/docs/integrations/text_embedding/openai
5ccb869a618c-0
Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. This notebook demonstrates how to search historical chat message histories using the Zep Long-term Memory Store.

NOTE: Unlike other retrievers, the content returned by the Zep Retriever is session/user specific. A session_id is required when instantiating the retriever.

# Preload some messages into the memory. The default message window is 12 messages.
# We want to push beyond this to demonstrate auto-summarization.
test_history = [
    {"role": "human", "content": "Who was Octavia Butler?"},
    {
        "role": "ai",
        "content": (
            "Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American"
            " science fiction author."
        ),
    },
    {"role": "human", "content": "Which books of hers were made into movies?"},
    {
        "role": "ai",
        "content": (
            "The most well-known adaptation of Octavia Butler's work is the FX series"
            " Kindred, based on her novel of the same name."
        ),
    },
    {"role": "human", "content": "Who were her contemporaries?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R."
            " Delany, and Joanna Russ."
        ),
    },
    {"role": "human", "content": "What awards did she win?"},
    {
        "role": "ai",
        "content": (
            "Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur"
            " Fellowship."
        ),
    },
    {
        "role": "human",
        "content": "Which other women sci-fi writers might I want to read?",
    },
    {
        "role": "ai",
        "content": "You might want to read Ursula K. Le Guin or Joanna Russ.",
    },
    {
        "role": "human",
        "content": (
            "Write a short synopsis of Butler's book, Parable of the Sower. What is it"
            " about?"
        ),
    },
    {
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-1
        "role": "ai",
        "content": (
            "Parable of the Sower is a science fiction novel by Octavia Butler,"
            " published in 1993. It follows the story of Lauren Olamina, a young woman"
            " living in a dystopian future where society has collapsed due to"
            " environmental disasters, poverty, and violence."
        ),
    },
]
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
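The next chunk writes into a zep_memory object whose construction the excerpt omits. A minimal sketch, assuming Zep's LangChain integration via ZepMemory — the session_id value and ZEP_API_URL are placeholders:

import uuid

from langchain.memory import ZepMemory

ZEP_API_URL = "http://localhost:8000"  # placeholder: your Zep server URL
session_id = str(uuid.uuid4())  # placeholder: an identifier for this chat session

zep_memory = ZepMemory(session_id=session_id, url=ZEP_API_URL)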
5ccb869a618c-2
from langchain.schema import AIMessage, HumanMessage

for msg in test_history:
    zep_memory.chat_memory.add_message(
        HumanMessage(content=msg["content"])
        if msg["role"] == "human"
        else AIMessage(content=msg["content"])
    )
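The search results shown in the next chunk come from a Zep retriever, whose construction the excerpt also omits. A minimal sketch, assuming ZepRetriever's session_id, url, and top_k parameters:

from langchain.retrievers import ZepRetriever

zep_retriever = ZepRetriever(
    session_id=session_id,  # the same session the messages were written to
    url=ZEP_API_URL,
    top_k=5,
)
docs = zep_retriever.get_relevant_documents("Who wrote Parable of the Sower?")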
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-3
import time

time.sleep(2)  # Wait for the messages to be embedded

Zep provides native vector search over historical conversation memory. Embedding happens automatically.

NOTE: Embedding of messages occurs asynchronously, so the first query may not return results. Subsequent queries will return results as the embeddings are generated.

[Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7758688965570713, 'uuid': 'b3322d28-f589-48c7-9daf-5eb092d65976', 'created_at': '2023-08-11T20:31:12.3856Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}]}}, 'token_count': 8}), Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602672137411663, 'uuid': '756b7136-0b4c-4664-ad33-c4431670356c', 'created_at': '2023-08-11T20:31:12.420717Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 27}),
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-4
Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7596040989115522, 'uuid': '166d9556-2d48-4237-8a84-5d8a1024d5f4', 'created_at': '2023-08-11T20:31:12.434522Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18}), Document(page_content='Who were her contemporaries?', metadata={'score': 0.7575531381951208, 'uuid': 'c6a16691-4012-439f-b223-84fd4e79c4cf', 'created_at': '2023-08-11T20:31:12.410336Z', 'role': 'human', 'token_count': 8}),
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-5
Document(page_content='Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', metadata={'score': 0.7546476914454683, 'uuid': '7c093a2a-0099-415a-95c5-615a8026a894', 'created_at': '2023-08-11T20:31:12.399979Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 0, 'Text': 'Octavia Estelle Butler'}], 'Name': 'Octavia Estelle Butler'}, {'Label': 'DATE', 'Matches': [{'End': 37, 'Start': 24, 'Text': 'June 22, 1947'}], 'Name': 'June 22, 1947'}, {'Label': 'DATE', 'Matches': [{'End': 57, 'Start': 40, 'Text': 'February 24, 2006'}], 'Name': 'February 24, 2006'}, {'Label': 'NORP', 'Matches': [{'End': 74, 'Start': 66, 'Text': 'American'}], 'Name': 'American'}]}}, 'token_count': 31})] [Document(page_content="Write a short synopsis of Butler's book, Parable of the Sower. What is it about?", metadata={'score': 0.8857504413268114, 'uuid': '82f07ab5-9d4b-4db6-aaae-6028e6fd836b', 'created_at': '2023-08-11T20:31:12.437365Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 32, 'Start': 26, 'Text': 'Butler'}], 'Name': 'Butler'}, {'Label': 'WORK_OF_ART', 'Matches': [{'End': 61, 'Start': 41, 'Text': 'Parable of the Sower'}], 'Name': 'Parable of the Sower'}]}}, 'token_count': 23}),
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-6
Document(page_content='Who was Octavia Butler?', metadata={'score': 0.7758688965570713, 'uuid': 'b3322d28-f589-48c7-9daf-5eb092d65976', 'created_at': '2023-08-11T20:31:12.3856Z', 'role': 'human', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 22, 'Start': 8, 'Text': 'Octavia Butler'}], 'Name': 'Octavia Butler'}]}}, 'token_count': 8}), Document(page_content="Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", metadata={'score': 0.7602672137411663, 'uuid': '756b7136-0b4c-4664-ad33-c4431670356c', 'created_at': '2023-08-11T20:31:12.420717Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'PERSON', 'Matches': [{'End': 16, 'Start': 0, 'Text': "Octavia Butler's"}], 'Name': "Octavia Butler's"}, {'Label': 'ORG', 'Matches': [{'End': 58, 'Start': 41, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 76, 'Start': 60, 'Text': 'Samuel R. Delany'}], 'Name': 'Samuel R. Delany'}, {'Label': 'PERSON', 'Matches': [{'End': 93, 'Start': 82, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 27}),
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
5ccb869a618c-7
Document(page_content='You might want to read Ursula K. Le Guin or Joanna Russ.', metadata={'score': 0.7596040989115522, 'uuid': '166d9556-2d48-4237-8a84-5d8a1024d5f4', 'created_at': '2023-08-11T20:31:12.434522Z', 'role': 'ai', 'metadata': {'system': {'entities': [{'Label': 'ORG', 'Matches': [{'End': 40, 'Start': 23, 'Text': 'Ursula K. Le Guin'}], 'Name': 'Ursula K. Le Guin'}, {'Label': 'PERSON', 'Matches': [{'End': 55, 'Start': 44, 'Text': 'Joanna Russ'}], 'Name': 'Joanna Russ'}]}}, 'token_count': 18}), Document(page_content='Who were her contemporaries?', metadata={'score': 0.7575531381951208, 'uuid': 'c6a16691-4012-439f-b223-84fd4e79c4cf', 'created_at': '2023-08-11T20:31:12.410336Z', 'role': 'human', 'token_count': 8})]
https://python.langchain.com/docs/integrations/retrievers/zep_memorystore
da1d3a44835f-0
Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here.

Note: In order to handle batched requests, you will need to adjust the return line in the predict_fn() function within the custom inference.py script:

return {"vectors": sentence_embeddings.tolist()}

from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
import json


class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: list[str], model_kwargs: Dict) -> bytes:
        """
        Transforms the input into bytes that can be consumed by SageMaker endpoint.

        Args:
            inputs: List of input strings.
            model_kwargs: Additional keyword arguments to be passed to the endpoint.

        Returns:
            The transformed bytes input.
        """
        # Example: inference.py expects a JSON string with a "inputs" key:
        input_str = json.dumps({"inputs": inputs, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        """
        Transforms the bytes output from the endpoint into a list of embeddings.

        Args:
            output: The bytes output from SageMaker endpoint.

        Returns:
            The transformed output - list of embeddings.

        Note:
            The length of the outer list is the number of input strings.
            The length of the inner lists is the embedding dimension.
        """
        # Example: inference.py returns a JSON string with the list of
        # embeddings in a "vectors" key:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]


content_handler = ContentHandler()

embeddings = SagemakerEndpointEmbeddings(
    # credentials_profile_name="credentials-profile-name",
    endpoint_name="huggingface-pytorch-inference-2023-03-21-16-14-03-834",
    region_name="us-east-1",
    content_handler=content_handler,
)
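Once constructed, the class behaves like any other LangChain embeddings object. A short usage sketch — note that the batched call relies on the predict_fn() adjustment described above:

query_result = embeddings.embed_query("This is a test document.")
doc_results = embeddings.embed_documents(
    ["This is a test document.", "This is another one."]  # batched request
)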
https://python.langchain.com/docs/integrations/text_embedding/sagemaker-endpoint
19bf10187d3a-0
Self Hosted Embeddings

Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.

from langchain.embeddings import (
    SelfHostedEmbeddings,
    SelfHostedHuggingFaceEmbeddings,
    SelfHostedHuggingFaceInstructEmbeddings,
)
import runhouse as rh

# For an on-demand A100 with GCP, Azure, or Lambda
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# For an on-demand A10G with AWS (no single A100s on AWS)
# gpu = rh.cluster(name='rh-a10x', instance_type='g5.2xlarge', provider='aws')

# For an existing cluster
# gpu = rh.cluster(ips=['<ip of the cluster>'],
#                  ssh_creds={'ssh_user': '...', 'ssh_private_key': '<path_to_key>'},
#                  name='my-cluster')

embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)

text = "This is a test document."

query_result = embeddings.embed_query(text)

And similarly for SelfHostedHuggingFaceInstructEmbeddings:

embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)

Now let's load an embedding model with a custom load function:

def get_pipeline():
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        pipeline,
    )  # Must be inside the function in notebooks

    model_id = "facebook/bart-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)


def inference_fn(pipeline, prompt):
    # Return last hidden state of the model
    if isinstance(prompt, list):
        return [emb[0][-1] for emb in pipeline(prompt)]
    return pipeline(prompt)[0][-1]


embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
    inference_fn=inference_fn,
)

query_result = embeddings.embed_query(text)
https://python.langchain.com/docs/integrations/text_embedding/self-hosted
0ee100f94808-0
Sentence Transformers Embeddings

SentenceTransformers embeddings are called using the HuggingFaceEmbeddings integration. We have also added an alias, SentenceTransformerEmbeddings, for users who are more familiar with directly using that package. SentenceTransformers is a Python package that can generate text and image embeddings; it originated from Sentence-BERT.

pip install sentence_transformers > /dev/null

from langchain.embeddings import HuggingFaceEmbeddings, SentenceTransformerEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
# Equivalent to SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text, "This is not a test document."])
https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers
3a6f3d83aa1b-0
Spacy Embedding

Loading the Spacy embedding class to generate and query embeddings

Import the necessary classes:

from langchain.embeddings.spacy_embeddings import SpacyEmbeddings

Initialize SpacyEmbeddings. This will load the Spacy model into memory.

embedder = SpacyEmbeddings()

Define some example texts. These could be any documents that you want to analyze - for example, news articles, social media posts, or product reviews.

texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Pack my box with five dozen liquor jugs.",
    "How vexingly quick daft zebras jump!",
    "Bright vixens jump; dozy fowl quack.",
]

Generate and print embeddings for the texts. The SpacyEmbeddings class generates an embedding for each document, which is a numerical representation of the document's content. These embeddings can be used for various natural language processing tasks, such as document similarity comparison or text classification.

embeddings = embedder.embed_documents(texts)
for i, embedding in enumerate(embeddings):
    print(f"Embedding for document {i+1}: {embedding}")

Generate and print an embedding for a single piece of text. You can also generate an embedding for a single piece of text, such as a search query. This can be useful for tasks like information retrieval, where you want to find documents that are similar to a given query.

query = "Quick foxes and lazy dogs."
query_embedding = embedder.embed_query(query)
print(f"Embedding for query: {query_embedding}")
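To make the information-retrieval use case concrete, here is a sketch that ranks the example texts against the query by cosine similarity; the numpy helper is illustrative and not part of SpacyEmbeddings:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the example documents by similarity to the query embedding
sims = [cosine_similarity(query_embedding, emb) for emb in embeddings]
print(texts[int(np.argmax(sims))])  # the document most similar to the query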
https://python.langchain.com/docs/integrations/text_embedding/spacy_embedding