Dataset columns: `url` (string, length 30–161) · `markdown` (string, length 27–670k) · `last_modified` (string, 1 distinct value)
https://github.com/langchain-ai/langchain/blob/master/templates/cohere-librarian/cohere_librarian/rag.py
from langchain.retrievers import CohereRagRetriever
from langchain_community.chat_models import ChatCohere

rag = CohereRagRetriever(llm=ChatCohere())


def get_docs_message(message):
    docs = rag.invoke(message)
    message_doc = next(
        (x for x in docs if x.metadata.get("type") == "model_response"), None
    )
    return message_doc.page_content


def librarian_rag(x):
    return get_docs_message(x["message"])
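A minimal invocation sketch for the retriever chain above (assumes a `COHERE_API_KEY` is available; the import path mirrors this package layout and the question is purely illustrative):

```python
# Hypothetical usage of librarian_rag; assumes COHERE_API_KEY is set in the environment.
from cohere_librarian.rag import librarian_rag

answer = librarian_rag({"message": "Do you have any books about lighthouses?"})
print(answer)
```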
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cohere-librarian/cohere_librarian/router.py
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableBranch

from .blurb_matcher import book_rec_chain
from .chat import chat
from .library_info import library_info
from .rag import librarian_rag

chain = (
    ChatPromptTemplate.from_template(
        """Given the user message below, classify it as either being about `recommendation`, `library` or `other`.

'{message}'

Respond with just one word. For example, if the message is about a book recommendation, respond with `recommendation`.
"""
    )
    | chat
    | StrOutputParser()
)


def extract_op_field(x):
    return x["output_text"]


branch = RunnableBranch(
    (
        lambda x: "recommendation" in x["topic"].lower(),
        book_rec_chain | extract_op_field,
    ),
    (
        lambda x: "library" in x["topic"].lower(),
        {"message": lambda x: x["message"]} | library_info,
    ),
    librarian_rag,
)

branched_chain = {"topic": chain, "message": lambda x: x["message"]} | branch
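A sketch of how the router might be called end to end (assuming the sibling modules `blurb_matcher`, `chat`, and `library_info` import cleanly; the message is a made-up example):

```python
# Illustrative invocation of the routing chain; the message is a made-up example.
from cohere_librarian.router import branched_chain

result = branched_chain.invoke({"message": "Can you recommend a good mystery novel?"})
print(result)
```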
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/README.md
# Chat Bot Feedback Template

This template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py) and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling `with_config` on the chat bot before serving. You can also directly deploy your chat app using this template.

[Chat bots](https://python.langchain.com/docs/use_cases/chatbots) are one of the most common interfaces for deploying LLMs. The quality of chat bots varies, making continuous development important. But users seldom leave explicit feedback through mechanisms like thumbs-up or thumbs-down buttons. Furthermore, traditional analytics such as "session length" or "conversation length" often lack clarity. However, multi-turn conversations with a chat bot can provide a wealth of information, which we can transform into metrics for fine-tuning, evaluation, and product analytics.

Taking [Chat Langchain](https://chat.langchain.com/) as a case study, only about 0.04% of all queries receive explicit feedback. Yet, approximately 70% of the queries are follow-ups to previous questions. A significant portion of these follow-up queries contain useful information we can use to infer the quality of the previous AI response. This template helps solve this "feedback scarcity" problem. Below is an example invocation of this chat bot:

[![Screenshot of a chat bot interaction where the AI responds in a pirate accent to a user asking where their keys are.](./static/chat_interaction.png "Chat Bot Interaction Example")](https://smith.langchain.com/public/3378daea-133c-4fe8-b4da-0a3044c5dbe8/r?runtab=1)

When the user responds to this ([link](https://smith.langchain.com/public/a7e2df54-4194-455d-9978-cecd8be0df1e/r)), the response evaluator is invoked, resulting in the following evaluation run:

[![Screenshot of an evaluator run showing the AI's response effectiveness score based on the user's follow-up message expressing frustration.](./static/evaluator.png "Chat Bot Evaluator Run")](https://smith.langchain.com/public/534184ee-db8f-4831-a386-3f578145114c/r)

As shown, the evaluator sees that the user is increasingly frustrated, indicating that the prior response was not effective.

## LangSmith Feedback

[LangSmith](https://smith.langchain.com/) is a platform for building production-grade LLM applications. Beyond its debugging and offline evaluation features, LangSmith helps you capture both user and model-assisted feedback to refine your LLM application. This template uses an LLM to generate feedback for your application, which you can use to continuously improve your service. For more examples on collecting feedback using LangSmith, consult the [documentation](https://docs.smith.langchain.com/cookbook/feedback-examples).

## Evaluator Implementation

The user feedback is inferred by a custom `RunEvaluator`. This evaluator is called using the `EvaluatorCallbackHandler`, which runs it in a separate thread to avoid interfering with the chat bot's runtime. You can use this custom evaluator on any compatible chat bot by calling the following function on your LangChain object:

```python
my_chain.with_config(
    callbacks=[
        EvaluatorCallbackHandler(
            evaluators=[
                ResponseEffectivenessEvaluator(evaluate_response_effectiveness)
            ]
        )
    ],
)
```

The evaluator instructs an LLM, specifically `gpt-3.5-turbo`, to evaluate the AI's most recent chat message based on the user's followup response. It generates a score and accompanying reasoning that is converted to feedback in LangSmith, applied to the value provided as the `last_run_id`.

The prompt used within the LLM [is available on the hub](https://smith.langchain.com/hub/wfh/response-effectiveness). Feel free to customize it with things like additional app context (such as the goal of the app or the types of questions it should respond to) or "symptoms" you'd like the LLM to focus on. This evaluator also utilizes OpenAI's function-calling API to ensure a more consistent, structured output for the grade.

## Environment Variables

Ensure that `OPENAI_API_KEY` is set to use OpenAI models. Also, configure LangSmith by setting your `LANGSMITH_API_KEY`.

```bash
export OPENAI_API_KEY=sk-...
export LANGSMITH_API_KEY=...
export LANGCHAIN_TRACING_V2=true

export LANGCHAIN_PROJECT=my-project # Set to the project you want to save to
```

## Usage

If deploying via `LangServe`, we recommend configuring the server to return callback events as well. This will ensure the backend traces are included in whatever traces you generate using the `RemoteRunnable`.

```python
from chat_bot_feedback.chain import chain

add_routes(app, chain, path="/chat-bot-feedback", include_callback_events=True)
```

With the server running, you can use the following code snippet to stream the chat bot responses for a 2-turn conversation.

```python
from functools import partial
from typing import Dict, Optional, Callable, List

from langserve import RemoteRunnable
from langchain.callbacks.manager import tracing_v2_enabled
from langchain_core.messages import BaseMessage, AIMessage, HumanMessage

# Update with the URL provided by your LangServe server
chain = RemoteRunnable("http://127.0.0.1:8031/chat-bot-feedback")


def stream_content(
    text: str,
    chat_history: Optional[List[BaseMessage]] = None,
    last_run_id: Optional[str] = None,
    on_chunk: Callable = None,
):
    results = []
    with tracing_v2_enabled() as cb:
        for chunk in chain.stream(
            {"text": text, "chat_history": chat_history, "last_run_id": last_run_id},
        ):
            on_chunk(chunk)
            results.append(chunk)
        last_run_id = cb.latest_run.id if cb.latest_run else None
    return last_run_id, "".join(results)


chat_history = []
text = "Where are my keys?"
last_run_id, response_message = stream_content(text, on_chunk=partial(print, end=""))
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
text = "I CAN'T FIND THEM ANYWHERE"
# The previous response will likely receive a low score,
# as the user's frustration appears to be escalating.
last_run_id, response_message = stream_content(
    text,
    chat_history=chat_history,
    last_run_id=str(last_run_id),
    on_chunk=partial(print, end=""),
)
print()
chat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])
```

This uses the `tracing_v2_enabled` callback manager to get the run ID of the call, which we provide in subsequent calls in the same chat thread, so the evaluator can assign feedback to the appropriate trace.

## Conclusion

This template provides a simple chat bot definition you can directly deploy using LangServe. It defines a custom evaluator to log evaluation feedback for the bot without any explicit user ratings. This is an effective way to augment your analytics and to better select data points for fine-tuning and evaluation.
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/__init__.py
from chat_bot_feedback.chain import chain

__all__ = ["chain"]
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py
from __future__ import annotations

from typing import List, Optional

from langchain import hub
from langchain.callbacks.tracers.evaluation import EvaluatorCallbackHandler
from langchain.callbacks.tracers.schemas import Run
from langchain.schema import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    StrOutputParser,
    get_buffer_string,
)
from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import Runnable
from langsmith.evaluation import EvaluationResult, RunEvaluator
from langsmith.schemas import Example

###############################################################################
# | Chat Bot Evaluator Definition
# | This section defines an evaluator that evaluates any chat bot
# | without explicit user feedback. It formats the dialog up to
# | the current message and then instructs an LLM to grade the last AI response
# | based on the subsequent user response. If no chat history is present,
# V the evaluator is not called.
###############################################################################


class ResponseEffectiveness(BaseModel):
    """Score the effectiveness of the AI chat bot response."""

    reasoning: str = Field(
        ...,
        description="Explanation for the score.",
    )
    score: int = Field(
        ...,
        min=0,
        max=5,
        description="Effectiveness of AI's final response.",
    )


def format_messages(input: dict) -> List[BaseMessage]:
    """Format the messages for the evaluator."""
    chat_history = input.get("chat_history") or []
    results = []
    for message in chat_history:
        if message["type"] == "human":
            results.append(HumanMessage.parse_obj(message))
        else:
            results.append(AIMessage.parse_obj(message))
    return results


def format_dialog(input: dict) -> dict:
    """Format messages and convert to a single string."""
    chat_history = format_messages(input)
    formatted_dialog = get_buffer_string(chat_history) + f"\nhuman: {input['text']}"
    return {"dialog": formatted_dialog}


def normalize_score(response: dict) -> dict:
    """Normalize the score to be between 0 and 1."""
    response["score"] = int(response["score"]) / 5
    return response


# To view the prompt in the playground: https://smith.langchain.com/hub/wfh/response-effectiveness
evaluation_prompt = hub.pull("wfh/response-effectiveness")
evaluate_response_effectiveness = (
    format_dialog
    | evaluation_prompt
    # bind_functions formats the schema for the OpenAI function
    # calling endpoint, which returns more reliable structured data.
    | ChatOpenAI(model="gpt-3.5-turbo").bind_functions(
        functions=[ResponseEffectiveness],
        function_call="ResponseEffectiveness",
    )
    # Convert the model's output to a dict
    | JsonOutputFunctionsParser(args_only=True)
    | normalize_score
)


class ResponseEffectivenessEvaluator(RunEvaluator):
    """Evaluate the chat bot based on the subsequent user responses."""

    def __init__(self, evaluator_runnable: Runnable) -> None:
        super().__init__()
        self.runnable = evaluator_runnable

    def evaluate_run(
        self, run: Run, example: Optional[Example] = None
    ) -> EvaluationResult:
        # This evaluator grades the AI's PREVIOUS response.
        # If no chat history is present, there isn't anything to evaluate
        # (it's the user's first message)
        if not run.inputs.get("chat_history"):
            return EvaluationResult(
                key="response_effectiveness", comment="No chat history present."
            )
        # This only occurs if the client isn't correctly sending the run IDs
        # of the previous calls.
        elif "last_run_id" not in run.inputs:
            return EvaluationResult(
                key="response_effectiveness", comment="No last run ID present."
            )
        # Call the LLM to evaluate the response
        eval_grade: Optional[dict] = self.runnable.invoke(run.inputs)
        target_run_id = run.inputs["last_run_id"]
        return EvaluationResult(
            **eval_grade,
            key="response_effectiveness",
            target_run_id=target_run_id,  # Requires langsmith >= 0.0.54
        )


###############################################################################
# | The chat bot definition
# | This is what is actually exposed by LangServe in the API
# | It can be any chain that accepts the ChainInput schema and returns a str
# | all that is required is the with_config() call at the end to add the
# V evaluators as "listeners" to the chain.
###############################################################################


class ChainInput(BaseModel):
    """Input for the chat bot."""

    chat_history: Optional[List[BaseMessage]] = Field(
        description="Previous chat messages."
    )
    text: str = Field(..., description="User's latest query.")
    last_run_id: Optional[str] = Field("", description="Run ID of the last run.")


_prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant who speaks like a pirate",
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{text}"),
    ]
)
_model = ChatOpenAI()


def format_chat_history(chain_input: dict) -> dict:
    messages = format_messages(chain_input)
    return {
        "chat_history": messages,
        "text": chain_input.get("text"),
    }


# if you update the name of this, you MUST also update ../pyproject.toml
# with the new `tool.langserve.export_attr`
chain = (
    (format_chat_history | _prompt | _model | StrOutputParser())
    # This is to add the evaluators as "listeners"
    # and to customize the name of the chain.
    # Any chain that accepts a compatible input type works here.
    .with_config(
        run_name="ChatBot",
        callbacks=[
            EvaluatorCallbackHandler(
                evaluators=[
                    ResponseEffectivenessEvaluator(evaluate_response_effectiveness)
                ]
            )
        ],
    )
)

chain = chain.with_types(input_type=ChainInput)
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/chain-of-note-wiki/README.md
# Chain-of-Note (Wikipedia)

Implements Chain-of-Note as described in https://arxiv.org/pdf/2311.09210.pdf by Yu, et al. Uses Wikipedia for retrieval.

Check out the prompt being used here https://smith.langchain.com/hub/bagatur/chain-of-note-wiki.

## Environment Setup

Uses the Anthropic claude-3-sonnet-20240229 chat model. Set the Anthropic API key:

```bash
export ANTHROPIC_API_KEY="..."
```

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package chain-of-note-wiki
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add chain-of-note-wiki
```

And add the following code to your `server.py` file:

```python
from chain_of_note_wiki import chain as chain_of_note_wiki_chain

add_routes(app, chain_of_note_wiki_chain, path="/chain-of-note-wiki")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)

We can access the playground at [http://127.0.0.1:8000/chain-of-note-wiki/playground](http://127.0.0.1:8000/chain-of-note-wiki/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/chain-of-note-wiki")
```
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/chain-of-note-wiki/chain_of_note_wiki/__init__.py
from chain_of_note_wiki.chain import chain

__all__ = ["chain"]
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/chain-of-note-wiki/chain_of_note_wiki/chain.py
from langchain import hub
from langchain_anthropic import ChatAnthropic
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough


class Question(BaseModel):
    __root__: str


wiki = WikipediaAPIWrapper(top_k_results=5)
prompt = hub.pull("bagatur/chain-of-note-wiki")
llm = ChatAnthropic(model="claude-3-sonnet-20240229")


def format_docs(docs):
    return "\n\n".join(
        f"Wikipedia {i+1}:\n{doc.page_content}" for i, doc in enumerate(docs)
    )


chain = (
    {
        "passages": RunnableLambda(wiki.load) | format_docs,
        "question": RunnablePassthrough(),
    }
    | prompt
    | llm
    | StrOutputParser()
).with_types(input_type=Question)
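A minimal sketch of invoking this chain directly (assumes `ANTHROPIC_API_KEY` is set; the question is illustrative):

```python
# Illustrative invocation; assumes ANTHROPIC_API_KEY is set in the environment.
from chain_of_note_wiki.chain import chain

print(chain.invoke("Who proved the incompleteness theorems?"))
```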
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-synonym-caching/README.md
# cassandra-synonym-caching

This template provides a simple chain showcasing the usage of LLM Caching backed by Apache Cassandra® or Astra DB through CQL.

## Environment Setup

To set up your environment, you will need the following:

- an [Astra](https://astra.datastax.com) Vector Database (free tier is fine!). **You need a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)**, in particular the string starting with `AstraCS:...`;
- likewise, get your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) ready, you will have to enter it below;
- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access), note that out-of-the-box this demo supports OpenAI unless you tinker with the code.)

_Note:_ you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package cassandra-synonym-caching
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add cassandra-synonym-caching
```

And add the following code to your `server.py` file:

```python
from cassandra_synonym_caching import chain as cassandra_synonym_caching_chain

add_routes(app, cassandra_synonym_caching_chain, path="/cassandra-synonym-caching")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)

We can access the playground at [http://127.0.0.1:8000/cassandra-synonym-caching/playground](http://127.0.0.1:8000/cassandra-synonym-caching/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/cassandra-synonym-caching")
```

## Reference

Stand-alone LangServe template repo: [here](https://github.com/hemidactylus/langserve_cassandra_synonym_caching).
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-synonym-caching/cassandra_synonym_caching/__init__.py
import os

import cassio
import langchain
from langchain_community.cache import CassandraCache
from langchain_community.chat_models import ChatOpenAI
from langchain_core.messages import BaseMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

use_cassandra = int(os.environ.get("USE_CASSANDRA_CLUSTER", "0"))
if use_cassandra:
    from .cassandra_cluster_init import get_cassandra_connection

    session, keyspace = get_cassandra_connection()
    cassio.init(
        session=session,
        keyspace=keyspace,
    )
else:
    cassio.init(
        token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
        database_id=os.environ["ASTRA_DB_ID"],
        keyspace=os.environ.get("ASTRA_DB_KEYSPACE"),
    )

# inits
langchain.llm_cache = CassandraCache(session=None, keyspace=None)
llm = ChatOpenAI()


# custom runnables
def msg_splitter(msg: BaseMessage):
    return [w.strip() for w in msg.content.split(",") if w.strip()]


# synonym-route preparation
synonym_prompt = ChatPromptTemplate.from_template(
    "List up to five comma-separated synonyms of this word: {word}"
)

chain = synonym_prompt | llm | RunnableLambda(msg_splitter)
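A sketch of a direct invocation (assumes the Astra DB or Cassandra variables and the OpenAI key from the README are set; the word is an arbitrary example). Because the module installs a `CassandraCache`, a repeated call with the same prompt should be served from the cache:

```python
# Illustrative invocation; assumes the Astra/Cassandra and OpenAI environment
# variables from the README are set. The word is an arbitrary example.
from cassandra_synonym_caching import chain

print(chain.invoke({"word": "happy"}))  # first call hits the LLM
print(chain.invoke({"word": "happy"}))  # same prompt may now be served from the cache
```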
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-synonym-caching/cassandra_synonym_caching/cassandra_cluster_init.py
import os

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster


def get_cassandra_connection():
    contact_points = [
        cp.strip()
        for cp in os.environ.get("CASSANDRA_CONTACT_POINTS", "").split(",")
        if cp.strip()
    ]
    CASSANDRA_KEYSPACE = os.environ["CASSANDRA_KEYSPACE"]
    CASSANDRA_USERNAME = os.environ.get("CASSANDRA_USERNAME")
    CASSANDRA_PASSWORD = os.environ.get("CASSANDRA_PASSWORD")

    #
    if CASSANDRA_USERNAME and CASSANDRA_PASSWORD:
        auth_provider = PlainTextAuthProvider(
            CASSANDRA_USERNAME,
            CASSANDRA_PASSWORD,
        )
    else:
        auth_provider = None

    c_cluster = Cluster(
        contact_points if contact_points else None, auth_provider=auth_provider
    )
    session = c_cluster.connect()
    return (session, CASSANDRA_KEYSPACE)
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-entomology-rag/README.md
# cassandra-entomology-rag

This template will perform RAG using Apache Cassandra® or Astra DB through CQL (`Cassandra` vector store class).

## Environment Setup

For the setup, you will require:

- an [Astra](https://astra.datastax.com) Vector Database. You must have a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure), specifically the string starting with `AstraCS:...`.
- [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier).
- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access))

You may also use a regular Cassandra cluster. In this case, provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.

The connection parameters and secrets must be provided through environment variables. Refer to `.env.template` for the required variables.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package cassandra-entomology-rag
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add cassandra-entomology-rag
```

And add the following code to your `server.py` file:

```python
from cassandra_entomology_rag import chain as cassandra_entomology_rag_chain

add_routes(app, cassandra_entomology_rag_chain, path="/cassandra-entomology-rag")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)

We can access the playground at [http://127.0.0.1:8000/cassandra-entomology-rag/playground](http://127.0.0.1:8000/cassandra-entomology-rag/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/cassandra-entomology-rag")
```

## Reference

Stand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_cassandra_entomology_rag).
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-entomology-rag/cassandra_entomology_rag/__init__.py
import os

import cassio
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Cassandra
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

from .populate_vector_store import populate

use_cassandra = int(os.environ.get("USE_CASSANDRA_CLUSTER", "0"))
if use_cassandra:
    from .cassandra_cluster_init import get_cassandra_connection

    session, keyspace = get_cassandra_connection()
    cassio.init(
        session=session,
        keyspace=keyspace,
    )
else:
    cassio.init(
        token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
        database_id=os.environ["ASTRA_DB_ID"],
        keyspace=os.environ.get("ASTRA_DB_KEYSPACE"),
    )

# inits
llm = ChatOpenAI()
embeddings = OpenAIEmbeddings()
vector_store = Cassandra(
    session=None,
    keyspace=None,
    embedding=embeddings,
    table_name="langserve_rag_demo",
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

# For demo reasons, let's ensure there are rows on the vector store.
# Please remove this and/or adapt to your use case!
inserted_lines = populate(vector_store)
if inserted_lines:
    print(f"Done ({inserted_lines} lines inserted).")

entomology_template = """
You are an expert entomologist, tasked with answering enthusiast biologists' questions.
You must answer based only on the provided context, do not make up any fact.
Your answers must be concise and to the point, but strive to provide scientific details
(such as family, order, Latin names, and so on when appropriate).
You MUST refuse to answer questions on other topics than entomology,
as well as questions whose answer is not found in the provided context.

CONTEXT:
{context}

QUESTION: {question}

YOUR ANSWER:"""

entomology_prompt = ChatPromptTemplate.from_template(entomology_template)

chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | entomology_prompt
    | llm
    | StrOutputParser()
)
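A minimal invocation sketch (assumes the environment variables from the README are set and the vector store has been populated; the question is a made-up example):

```python
# Illustrative invocation; assumes the Astra/Cassandra and OpenAI environment
# variables from the README are set. The question is a made-up example.
from cassandra_entomology_rag import chain

print(chain.invoke("Are carpenter bees solitary?"))
```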
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-entomology-rag/cassandra_entomology_rag/cassandra_cluster_init.py
import os

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster


def get_cassandra_connection():
    contact_points = [
        cp.strip()
        for cp in os.environ.get("CASSANDRA_CONTACT_POINTS", "").split(",")
        if cp.strip()
    ]
    CASSANDRA_KEYSPACE = os.environ["CASSANDRA_KEYSPACE"]
    CASSANDRA_USERNAME = os.environ.get("CASSANDRA_USERNAME")
    CASSANDRA_PASSWORD = os.environ.get("CASSANDRA_PASSWORD")

    #
    if CASSANDRA_USERNAME and CASSANDRA_PASSWORD:
        auth_provider = PlainTextAuthProvider(
            CASSANDRA_USERNAME,
            CASSANDRA_PASSWORD,
        )
    else:
        auth_provider = None

    c_cluster = Cluster(
        contact_points if contact_points else None, auth_provider=auth_provider
    )
    session = c_cluster.connect()
    return (session, CASSANDRA_KEYSPACE)
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/cassandra-entomology-rag/cassandra_entomology_rag/populate_vector_store.py
import os

BASE_DIR = os.path.abspath(os.path.dirname(__file__))


def populate(vector_store):
    # is the store empty? find out with a probe search
    hits = vector_store.similarity_search_by_vector(
        embedding=[0.001] * 1536,
        k=1,
    )
    #
    if len(hits) == 0:
        # this seems a first run:
        # must populate the vector store
        src_file_name = os.path.join(BASE_DIR, "..", "sources.txt")
        lines = [
            line.strip()
            for line in open(src_file_name).readlines()
            if line.strip()
            if line[0] != "#"
        ]
        # deterministic IDs to prevent duplicates on multiple runs
        ids = ["_".join(line.split(" ")[:2]).lower().replace(":", "") for line in lines]
        #
        vector_store.add_texts(texts=lines, ids=ids)
        return len(lines)
    else:
        return 0
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/bedrock-jcvd/README.md
# Bedrock JCVD 🕺🥋

## Overview

LangChain template that uses [Anthropic's Claude on Amazon Bedrock](https://aws.amazon.com/bedrock/claude/) to behave like JCVD.

> I am the Fred Astaire of Chatbots! 🕺

![Animated GIF of Jean-Claude Van Damme dancing.](https://media.tenor.com/CVp9l7g3axwAAAAj/jean-claude-van-damme-jcvd.gif "Jean-Claude Van Damme Dancing")

## Environment Setup

### AWS Credentials

This template uses [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), the AWS SDK for Python, to call [Amazon Bedrock](https://aws.amazon.com/bedrock/). You **must** configure both AWS credentials *and* an AWS Region in order to make requests.

> For information on how to do this, see [AWS Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) (Developer Guide > Credentials).

### Foundation Models

By default, this template uses [Anthropic's Claude v2](https://aws.amazon.com/about-aws/whats-new/2023/08/claude-2-foundation-model-anthropic-amazon-bedrock/) (`anthropic.claude-v2`).

> To request access to a specific model, check out the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) (Model access)

To use a different model, set the environment variable `BEDROCK_JCVD_MODEL_ID`. A list of base models is available in the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html) (Use the API > API operations > Run inference > Base Model IDs).

> The full list of available models (including base and [custom models](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)) is available in the [Amazon Bedrock Console](https://docs.aws.amazon.com/bedrock/latest/userguide/using-console.html) under **Foundation Models** or by calling [`aws bedrock list-foundation-models`](https://docs.aws.amazon.com/cli/latest/reference/bedrock/list-foundation-models.html).

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package bedrock-jcvd
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add bedrock-jcvd
```

And add the following code to your `server.py` file:

```python
from bedrock_jcvd import chain as bedrock_jcvd_chain

add_routes(app, bedrock_jcvd_chain, path="/bedrock-jcvd")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).

We can also access the playground at [http://127.0.0.1:8000/bedrock-jcvd/playground](http://127.0.0.1:8000/bedrock-jcvd/playground)

![Screenshot of the LangServe Playground interface with an example input and output demonstrating a Jean-Claude Van Damme voice imitation.](jcvd_langserve.png "JCVD Playground")
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/bedrock-jcvd/bedrock_jcvd/chain.py
import os

from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField

# For a description of each inference parameter, see
# https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html
_model_kwargs = {
    "temperature": float(os.getenv("BEDROCK_JCVD_TEMPERATURE", "0.1")),
    "top_p": float(os.getenv("BEDROCK_JCVD_TOP_P", "1")),
    "top_k": int(os.getenv("BEDROCK_JCVD_TOP_K", "250")),
    "max_tokens_to_sample": int(os.getenv("BEDROCK_JCVD_MAX_TOKENS_TO_SAMPLE", "300")),
}

# Full list of base model IDs is available at
# https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html
_model_alts = {
    "claude_2_1": ChatBedrock(
        model_id="anthropic.claude-v2:1", model_kwargs=_model_kwargs
    ),
    "claude_1": ChatBedrock(model_id="anthropic.claude-v1", model_kwargs=_model_kwargs),
    "claude_instant_1": ChatBedrock(
        model_id="anthropic.claude-instant-v1", model_kwargs=_model_kwargs
    ),
}

# For some tips on how to construct effective prompts for Claude,
# check out Anthropic's Claude Prompt Engineering deck (Bedrock edition)
# https://docs.google.com/presentation/d/1tjvAebcEyR8la3EmVwvjC7PHR8gfSrcsGKfTPAaManw
_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "You are JCVD. {input}"),
    ]
)

_model = ChatBedrock(
    model_id="anthropic.claude-v2", model_kwargs=_model_kwargs
).configurable_alternatives(
    which=ConfigurableField(
        id="model", name="Model", description="The model that will be used"
    ),
    default_key="claude_2",
    **_model_alts,
)

chain = _prompt | _model
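A sketch of invoking the chain and switching to one of the configured alternatives (assumes AWS credentials, a default region, and Bedrock model access are in place; the inputs are illustrative):

```python
# Illustrative usage; assumes AWS credentials and Bedrock model access are configured.
from bedrock_jcvd.chain import chain

# Default model (anthropic.claude-v2)
print(chain.invoke({"input": "What is your best martial arts move?"}))

# Select one of the alternatives declared via configurable_alternatives above
claude_instant = chain.with_config(configurable={"model": "claude_instant_1"})
print(claude_instant.invoke({"input": "Tell me about your split kicks."}))
```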
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/basic-critique-revise/README.md
# basic-critique-revise

Iteratively generate schema candidates and revise them based on errors.

## Environment Setup

This template uses OpenAI function calling, so you will need to set the `OPENAI_API_KEY` environment variable in order to use this template.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package basic-critique-revise
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add basic-critique-revise
```

And add the following code to your `server.py` file:

```python
from basic_critique_revise import chain as basic_critique_revise_chain

add_routes(app, basic_critique_revise_chain, path="/basic-critique-revise")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)

We can access the playground at [http://127.0.0.1:8000/basic-critique-revise/playground](http://127.0.0.1:8000/basic-critique-revise/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/basic-critique-revise")
```
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/basic-critique-revise/basic_critique_revise/__init__.py
from basic_critique_revise.chain import chain

__all__ = ["chain"]
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/basic-critique-revise/basic_critique_revise/chain.py
import json
from datetime import datetime
from enum import Enum
from operator import itemgetter
from typing import Any, Dict, Sequence

from langchain.chains.openai_functions import convert_to_openai_function
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field, ValidationError, conint
from langchain_core.runnables import (
    Runnable,
    RunnableBranch,
    RunnableLambda,
    RunnablePassthrough,
)


class TaskType(str, Enum):
    call = "Call"
    message = "Message"
    todo = "Todo"
    in_person_meeting = "In-Person Meeting"
    email = "Email"
    mail = "Mail"
    text = "Text"
    open_house = "Open House"


class Task(BaseModel):
    title: str = Field(..., description="The title of the tasks, reminders and alerts")
    due_date: datetime = Field(
        ..., description="Due date. Must be a valid ISO date string with timezone"
    )
    task_type: TaskType = Field(None, description="The type of task")


class Tasks(BaseModel):
    """JSON definition for creating tasks, reminders and alerts"""

    tasks: Sequence[Task]


template = """Respond to the following user query to the best of your ability: {query}"""

generate_prompt = ChatPromptTemplate.from_template(template)

function_args = {"functions": [convert_to_openai_function(Tasks)]}

task_function_call_model = ChatOpenAI(model="gpt-3.5-turbo").bind(**function_args)

output_parser = RunnableLambda(
    lambda x: json.loads(
        x.additional_kwargs.get("function_call", {}).get("arguments", '""')
    )
)

revise_template = """
Based on the provided context, fix the incorrect result of the original prompt
and the provided errors. Only respond with an answer that satisfies the
constraints laid out in the original prompt and fixes the Pydantic errors.

Hint: Datetime fields must be valid ISO date strings.

<context>
<original_prompt>
{original_prompt}
</original_prompt>
<incorrect_result>
{completion}
</incorrect_result>
<errors>
{error}
</errors>
</context>"""

revise_prompt = ChatPromptTemplate.from_template(revise_template)

revise_chain = revise_prompt | task_function_call_model | output_parser


def output_validator(output):
    try:
        Tasks.validate(output["completion"])
    except ValidationError as e:
        return str(e)

    return None


class IntermediateType(BaseModel):
    error: str
    completion: Dict
    original_prompt: str
    max_revisions: int


validation_step = RunnablePassthrough().assign(error=RunnableLambda(output_validator))


def revise_loop(input: IntermediateType) -> IntermediateType:
    revise_step = RunnablePassthrough().assign(completion=revise_chain)

    else_step: Runnable[IntermediateType, IntermediateType] = RunnableBranch(
        (lambda x: x["error"] is None, RunnablePassthrough()),
        revise_step | validation_step,
    ).with_types(input_type=IntermediateType)

    for _ in range(max(0, input["max_revisions"] - 1)):
        else_step = RunnableBranch(
            (lambda x: x["error"] is None, RunnablePassthrough()),
            revise_step | validation_step | else_step,
        )
    return else_step


revise_lambda = RunnableLambda(revise_loop)


class InputType(BaseModel):
    query: str
    max_revisions: conint(ge=1, le=10) = 5


chain: Runnable[Any, Any] = (
    {
        "original_prompt": generate_prompt,
        "max_revisions": itemgetter("max_revisions"),
    }
    | RunnablePassthrough().assign(
        completion=(
            RunnableLambda(itemgetter("original_prompt"))
            | task_function_call_model
            | output_parser
        )
    )
    | validation_step
    | revise_lambda
    | RunnableLambda(itemgetter("completion"))
).with_types(input_type=InputType)
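A sketch of invoking the critique-and-revise chain (assumes `OPENAI_API_KEY` is set; the query and `max_revisions` value are made-up examples):

```python
# Illustrative invocation; assumes OPENAI_API_KEY is set in the environment.
from basic_critique_revise import chain

result = chain.invoke(
    {"query": "Remind me to call the plumber next Tuesday at 9am", "max_revisions": 3}
)
print(result)  # a dict matching the Tasks schema, revised until it validates or retries run out
```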
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/README.md
# anthropic-iterative-search

This template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.

It is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).

## Environment Setup

Set the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package anthropic-iterative-search
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add anthropic-iterative-search
```

And add the following code to your `server.py` file:

```python
from anthropic_iterative_search import chain as anthropic_iterative_search_chain

add_routes(app, anthropic_iterative_search_chain, path="/anthropic-iterative-search")
```

(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)

We can access the playground at [http://127.0.0.1:8000/anthropic-iterative-search/playground](http://127.0.0.1:8000/anthropic-iterative-search/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/anthropic-iterative-search")
```
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/main.py
from anthropic_iterative_search import final_chain

if __name__ == "__main__":
    query = (
        "Which movie came out first: Oppenheimer, or "
        "Are You There God It's Me Margaret?"
    )
    print(
        final_chain.with_config(configurable={"chain": "retrieve"}).invoke(
            {"query": query}
        )
    )
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/__init__.py
from .chain import chain

__all__ = ["chain"]
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/agent_scratchpad.py
def _format_docs(docs):
    result = "\n".join(
        [
            f'<item index="{i+1}">\n<page_content>\n{r}\n</page_content>\n</item>'
            for i, r in enumerate(docs)
        ]
    )
    return result


def format_agent_scratchpad(intermediate_steps):
    thoughts = ""
    for action, observation in intermediate_steps:
        thoughts += action.log
        thoughts += "</search_query>" + _format_docs(observation)
    return thoughts
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/chain.py
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import ConfigurableField

from .prompts import answer_prompt
from .retriever_agent import executor

prompt = ChatPromptTemplate.from_template(answer_prompt)

model = ChatAnthropic(
    model="claude-3-sonnet-20240229", temperature=0, max_tokens_to_sample=1000
)

chain = (
    {"query": lambda x: x["query"], "information": executor | (lambda x: x["output"])}
    | prompt
    | model
    | StrOutputParser()
)


# Add typing for the inputs to be used in the playground
class Inputs(BaseModel):
    query: str


chain = chain.with_types(input_type=Inputs)

chain = chain.configurable_alternatives(
    ConfigurableField(id="chain"),
    default_key="response",
    # This adds a new option, named `retrieve`, that runs only the retrieval executor
    retrieve=executor,
)
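A sketch of invoking the chain with its default `response` alternative, and with the `retrieve` alternative that exposes only the retrieval executor (assumes `ANTHROPIC_API_KEY` is set; the query is illustrative):

```python
# Illustrative usage; assumes ANTHROPIC_API_KEY is set. The query is a made-up example.
from anthropic_iterative_search.chain import chain

query = {"query": "Who designed the Eiffel Tower?"}

# Default alternative: retrieve, then answer
print(chain.invoke(query))

# "retrieve" alternative: run only the retrieval agent
retrieval_only = chain.with_config(configurable={"chain": "retrieve"})
print(retrieval_only.invoke(query))
```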
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/output_parser.py
import re

from langchain_core.agents import AgentAction, AgentFinish

from .agent_scratchpad import _format_docs


def extract_between_tags(tag: str, string: str, strip: bool = True) -> str:
    ext_list = re.findall(f"<{tag}\s?>(.+?)</{tag}\s?>", string, re.DOTALL)
    if strip:
        ext_list = [e.strip() for e in ext_list]
    if ext_list:
        if len(ext_list) != 1:
            raise ValueError
        # Only return the first one
        return ext_list[0]


def parse_output(outputs):
    partial_completion = outputs["partial_completion"]
    steps = outputs["intermediate_steps"]
    search_query = extract_between_tags(
        "search_query", partial_completion + "</search_query>"
    )
    if search_query is None:
        docs = []
        str_output = ""
        for action, observation in steps:
            docs.extend(observation)
            str_output += action.log
            str_output += "</search_query>" + _format_docs(observation)
        str_output += partial_completion
        return AgentFinish({"docs": docs, "output": str_output}, log=partial_completion)
    else:
        return AgentAction(
            tool="search", tool_input=search_query, log=partial_completion
        )
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/prompts.py
retrieval_prompt = """{retriever_description}

Before beginning to research the user's question, first think for a moment inside <scratchpad> tags about what information is necessary for a well-informed answer. If the user's question is complex, you may need to decompose the query into multiple subqueries and execute them individually. Sometimes the search engine will return empty search results, or the search results may not contain the information you need. In such cases, feel free to try again with a different query.

After each call to the Search Engine Tool, reflect briefly inside <search_quality></search_quality> tags about whether you now have enough information to answer, or whether more information is needed. If you have all the relevant information, write it in <information></information> tags, WITHOUT actually answering the question. Otherwise, issue a new search.

Here is the user's question: <question>{query}</question> Remind yourself to make short queries in your scratchpad as you plan out your strategy."""  # noqa: E501

answer_prompt = "Here is a user query: <query>{query}</query>. Here is some relevant information: <information>{information}</information>. Please answer the question using the relevant information."  # noqa: E501
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/retriever.py
from langchain.retrievers import WikipediaRetriever
from langchain.tools import tool

# This is used to tell the model how to best use the retriever.

retriever_description = """You will be asked a question by a human user. You have access to the following tool to help answer the question. <tool_description> Search Engine Tool * The search engine will exclusively search over Wikipedia for pages similar to your query. It returns for each page its title and full page content. Use this tool if you want to get up-to-date and comprehensive information on a topic to help answer queries. Queries should be as atomic as possible -- they only need to address one part of the user's question. For example, if the user's query is "what is the color of a basketball?", your search query should be "basketball". Here's another example: if the user's question is "Who created the first neural network?", your first query should be "neural network". As you can see, these queries are quite short. Think keywords, not phrases. * At any time, you can make a call to the search engine using the following syntax: <search_query>query_word</search_query>. * You'll then get results back in <search_result> tags.</tool_description>"""  # noqa: E501

retriever = WikipediaRetriever()

# This should be the same as the function name below
RETRIEVER_TOOL_NAME = "search"


@tool
def search(query):
    """Search with the retriever."""
    return retriever.invoke(query)
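A sketch of calling the `search` tool directly (this performs a live Wikipedia lookup; the query is illustrative):

```python
# Illustrative direct call of the search tool; runs a live Wikipedia query.
from anthropic_iterative_search.retriever import search

docs = search.invoke("neural network")
print(len(docs), docs[0].metadata.get("title") if docs else None)
```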
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/templates/anthropic-iterative-search/anthropic_iterative_search/retriever_agent.py
from langchain.agents import AgentExecutor
from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

from .agent_scratchpad import format_agent_scratchpad
from .output_parser import parse_output
from .prompts import retrieval_prompt
from .retriever import retriever_description, search

prompt = ChatPromptTemplate.from_messages(
    [
        ("user", retrieval_prompt),
        ("ai", "{agent_scratchpad}"),
    ]
)
prompt = prompt.partial(retriever_description=retriever_description)

model = ChatAnthropic(
    model="claude-3-sonnet-20240229", temperature=0, max_tokens_to_sample=1000
)

chain = (
    RunnablePassthrough.assign(
        agent_scratchpad=lambda x: format_agent_scratchpad(x["intermediate_steps"])
    )
    | prompt
    | model.bind(stop_sequences=["</search_query>"])
    | StrOutputParser()
)

agent_chain = (
    RunnableParallel(
        {
            "partial_completion": chain,
            "intermediate_steps": lambda x: x["intermediate_steps"],
        }
    )
    | parse_output
)

executor = AgentExecutor(agent=agent_chain, tools=[search], verbose=True)
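A sketch of running the retrieval executor on its own (assumes `ANTHROPIC_API_KEY` is set; the query is illustrative):

```python
# Illustrative direct use of the retrieval executor; assumes ANTHROPIC_API_KEY is set.
from anthropic_iterative_search.retriever_agent import executor

result = executor.invoke({"query": "When was the first neural network created?"})
print(result["output"])  # the gathered information; result["docs"] holds the raw pages
```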
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/HTML_header_metadata_splitter.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "c95fcd15cd52c944", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "# How to split by HTML header \n", "## Description and motivation\n", "\n", "[HTMLHeaderTextSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) is a \"structure-aware\" chunker that splits text at the HTML element level and adds metadata for each header \"relevant\" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.\n", "\n", "It is analogous to the [MarkdownHeaderTextSplitter](/docs/how_to/markdown_header_metadata_splitter) for markdown files.\n", "\n", "To specify what headers to split on, specify `headers_to_split_on` when instantiating `HTMLHeaderTextSplitter` as shown below.\n", "\n", "## Usage examples\n", "### 1) How to split HTML strings:" ] }, { "cell_type": "code", "execution_count": null, "id": "2e55d44c-1fff-449a-bf52-0d6df488323f", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-text-splitters" ] }, { "cell_type": "code", "execution_count": 1, "id": "initial_id", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T18:57:49.208965400Z", "start_time": "2023-10-02T18:57:48.899756Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Foo'),\n", " Document(page_content='Some intro text about Foo. \\nBar main section Bar subsection 1 Bar subsection 2', metadata={'Header 1': 'Foo'}),\n", " Document(page_content='Some intro text about Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section'}),\n", " Document(page_content='Some text about the first subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 1'}),\n", " Document(page_content='Some text about the second subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 2'}),\n", " Document(page_content='Baz', metadata={'Header 1': 'Foo'}),\n", " Document(page_content='Some text about Baz', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'}),\n", " Document(page_content='Some concluding text about Foo', metadata={'Header 1': 'Foo'})]" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_text_splitters import HTMLHeaderTextSplitter\n", "\n", "html_string = \"\"\"\n", "<!DOCTYPE html>\n", "<html>\n", "<body>\n", " <div>\n", " <h1>Foo</h1>\n", " <p>Some intro text about Foo.</p>\n", " <div>\n", " <h2>Bar main section</h2>\n", " <p>Some intro text about Bar.</p>\n", " <h3>Bar subsection 1</h3>\n", " <p>Some text about the first subtopic of Bar.</p>\n", " <h3>Bar subsection 2</h3>\n", " <p>Some text about the second subtopic of Bar.</p>\n", " </div>\n", " <div>\n", " <h2>Baz</h2>\n", " <p>Some text about Baz</p>\n", " </div>\n", " <br>\n", " <p>Some concluding text about Foo</p>\n", " </div>\n", "</body>\n", "</html>\n", "\"\"\"\n", "\n", "headers_to_split_on = [\n", " (\"h1\", \"Header 1\"),\n", " (\"h2\", \"Header 2\"),\n", " (\"h3\", \"Header 3\"),\n", "]\n", "\n", "html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)\n", "html_header_splits = 
html_splitter.split_text(html_string)\n", "html_header_splits" ] }, { "cell_type": "markdown", "id": "7126f179-f4d0-4b5d-8bef-44e83b59262c", "metadata": {}, "source": [ "To return each element together with their associated headers, specify `return_each_element=True` when instantiating `HTMLHeaderTextSplitter`:" ] }, { "cell_type": "code", "execution_count": 2, "id": "90c23088-804c-4c89-bd09-b820587ceeef", "metadata": {}, "outputs": [], "source": [ "html_splitter = HTMLHeaderTextSplitter(\n", " headers_to_split_on,\n", " return_each_element=True,\n", ")\n", "html_header_splits_elements = html_splitter.split_text(html_string)" ] }, { "cell_type": "markdown", "id": "b776c54e-9159-4d88-9d6c-3a1d0b639dfe", "metadata": {}, "source": [ "Comparing with the above, where elements are aggregated by their headers:" ] }, { "cell_type": "code", "execution_count": 3, "id": "711abc74-a7b0-4dc5-a4bb-af3cafe4e0f4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Foo'\n", "page_content='Some intro text about Foo. \\nBar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'}\n" ] } ], "source": [ "for element in html_header_splits[:2]:\n", " print(element)" ] }, { "cell_type": "markdown", "id": "fe5528db-187c-418a-9480-fc0267645d42", "metadata": {}, "source": [ "Now each element is returned as a distinct `Document`:" ] }, { "cell_type": "code", "execution_count": 4, "id": "24722d8e-d073-46a8-a821-6b722412f1be", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Foo'\n", "page_content='Some intro text about Foo.' metadata={'Header 1': 'Foo'}\n", "page_content='Bar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'}\n" ] } ], "source": [ "for element in html_header_splits_elements[:3]:\n", " print(element)" ] }, { "cell_type": "markdown", "id": "e29b4aade2a0070c", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "#### 2) How to split from a URL or HTML file:\n", "\n", "To read directly from a URL, pass the URL string into the `split_text_from_url` method.\n", "\n", "Similarly, a local HTML file can be passed to the `split_text_from_file` method." 
] }, { "cell_type": "code", "execution_count": 5, "id": "6ecb9fb2-32ff-4249-a4b4-d5e5e191f013", "metadata": {}, "outputs": [], "source": [ "url = \"https://plato.stanford.edu/entries/goedel/\"\n", "\n", "headers_to_split_on = [\n", " (\"h1\", \"Header 1\"),\n", " (\"h2\", \"Header 2\"),\n", " (\"h3\", \"Header 3\"),\n", " (\"h4\", \"Header 4\"),\n", "]\n", "\n", "html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)\n", "\n", "# for local file use html_splitter.split_text_from_file(<path_to_file>)\n", "html_header_splits = html_splitter.split_text_from_url(url)" ] }, { "cell_type": "markdown", "id": "c6e3dd41-0c57-472a-a3d4-4e7e8ea6914f", "metadata": {}, "source": [ "### 2) How to constrain chunk sizes:\n", "\n", "`HTMLHeaderTextSplitter`, which splits based on HTML headers, can be composed with another splitter which constrains splits based on character lengths, such as `RecursiveCharacterTextSplitter`.\n", "\n", "This can be done using the `.split_documents` method of the second splitter:" ] }, { "cell_type": "code", "execution_count": 6, "id": "6ada8ea093ea0475", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T18:57:51.016141300Z", "start_time": "2023-10-02T18:57:50.647495400Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='We see that Gödel first tried to reduce the consistency problem for analysis to that of arithmetic. This seemed to require a truth definition for arithmetic, which in turn led to paradoxes, such as the Liar paradox (“This sentence is false”) and Berry’s paradox (“The least number not defined by an expression consisting of just fourteen English words”). Gödel then noticed that such paradoxes would not necessarily arise if truth were replaced by provability. But this means that arithmetic truth', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n", " Document(page_content='means that arithmetic truth and arithmetic provability are not co-extensive — whence the First Incompleteness Theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n", " Document(page_content='This account of Gödel’s discovery was told to Hao Wang very much after the fact; but in Gödel’s contemporary correspondence with Bernays and Zermelo, essentially the same description of his path to the theorems is given. (See Gödel 2003a and Gödel 2003b respectively.) From those accounts we see that the undefinability of truth in arithmetic, a result credited to Tarski, was likely obtained in some form by Gödel by 1931. But he neither publicized nor published the result; the biases logicians', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n", " Document(page_content='result; the biases logicians had expressed at the time concerning the notion of truth, biases which came vehemently to the fore when Tarski announced his results on the undefinability of truth in formal systems 1935, may have served as a deterrent to Gödel’s publication of that theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. 
Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n", " Document(page_content='We now describe the proof of the two theorems, formulating Gödel’s results in Peano arithmetic. Gödel himself used a system related to that defined in Principia Mathematica, but containing Peano arithmetic. In our presentation of the First and Second Incompleteness Theorems we refer to Peano arithmetic as P, following Gödel’s notation.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödel’s Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.2 The proof of the First Incompleteness Theorem'})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "chunk_size = 500\n", "chunk_overlap = 30\n", "text_splitter = RecursiveCharacterTextSplitter(\n", " chunk_size=chunk_size, chunk_overlap=chunk_overlap\n", ")\n", "\n", "# Split\n", "splits = text_splitter.split_documents(html_header_splits)\n", "splits[80:85]" ] }, { "cell_type": "markdown", "id": "ac0930371d79554a", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "## Limitations\n", "\n", "There can be quite a bit of structural variation from one HTML document to another, and while `HTMLHeaderTextSplitter` will attempt to attach all \"relevant\" headers to any given chunk, it can sometimes miss certain headers. For example, the algorithm assumes an informational hierarchy in which headers are always at nodes \"above\" associated text, i.e. prior siblings, ancestors, and combinations thereof. In the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, while tagged \"h1\", is in a *distinct* subtree from the text elements that we'd expect it to be *\"above\"*&mdash;so we can observe that the \"h1\" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see \"h2\" and its associated text): \n" ] }, { "cell_type": "code", "execution_count": 6, "id": "5a5ec1482171b119", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T19:03:25.943524300Z", "start_time": "2023-10-02T19:03:25.691641Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "No two El Niño winters are the same, but many have temperature and precipitation trends in common. \n", "Average conditions during an El Niño winter across the continental US. \n", "One of the major reasons is the position of the jet stream, which often shifts south during an El Niño winter. This shift typically brings wetter and cooler weather to the South while the North becomes drier and warmer, according to NOAA. 
\n", "Because the jet stream is essentially a river of air that storms flow through, they c\n" ] } ], "source": [ "url = \"https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html\"\n", "\n", "headers_to_split_on = [\n", " (\"h1\", \"Header 1\"),\n", " (\"h2\", \"Header 2\"),\n", "]\n", "\n", "html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)\n", "html_header_splits = html_splitter.split_text_from_url(url)\n", "print(html_header_splits[1].page_content[:500])" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/HTML_section_aware_splitter.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "c95fcd15cd52c944", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "# How to split by HTML sections\n", "## Description and motivation\n", "Similar in concept to the [HTMLHeaderTextSplitter](/docs/how_to/HTML_header_metadata_splitter), the `HTMLSectionSplitter` is a \"structure-aware\" chunker that splits text at the element level and adds metadata for each header \"relevant\" to any given chunk.\n", "\n", "It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures.\n", "\n", "Use `xslt_path` to provide an absolute path to transform the HTML so that it can detect sections based on provided tags. The default is to use the `converting_to_header.xslt` file in the `data_connection/document_transformers` directory. This is for converting the html to a format/layout that is easier to detect sections. For example, `span` based on their font size can be converted to header tags to be detected as a section.\n", "\n", "## Usage examples\n", "### 1) How to split HTML strings:" ] }, { "cell_type": "code", "execution_count": 1, "id": "initial_id", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T18:57:49.208965400Z", "start_time": "2023-10-02T18:57:48.899756Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Foo \\n Some intro text about Foo.', metadata={'Header 1': 'Foo'}),\n", " Document(page_content='Bar main section \\n Some intro text about Bar. \\n Bar subsection 1 \\n Some text about the first subtopic of Bar. \\n Bar subsection 2 \\n Some text about the second subtopic of Bar.', metadata={'Header 2': 'Bar main section'}),\n", " Document(page_content='Baz \\n Some text about Baz \\n \\n \\n Some concluding text about Foo', metadata={'Header 2': 'Baz'})]" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_text_splitters import HTMLSectionSplitter\n", "\n", "html_string = \"\"\"\n", " <!DOCTYPE html>\n", " <html>\n", " <body>\n", " <div>\n", " <h1>Foo</h1>\n", " <p>Some intro text about Foo.</p>\n", " <div>\n", " <h2>Bar main section</h2>\n", " <p>Some intro text about Bar.</p>\n", " <h3>Bar subsection 1</h3>\n", " <p>Some text about the first subtopic of Bar.</p>\n", " <h3>Bar subsection 2</h3>\n", " <p>Some text about the second subtopic of Bar.</p>\n", " </div>\n", " <div>\n", " <h2>Baz</h2>\n", " <p>Some text about Baz</p>\n", " </div>\n", " <br>\n", " <p>Some concluding text about Foo</p>\n", " </div>\n", " </body>\n", " </html>\n", "\"\"\"\n", "\n", "headers_to_split_on = [(\"h1\", \"Header 1\"), (\"h2\", \"Header 2\")]\n", "\n", "html_splitter = HTMLSectionSplitter(headers_to_split_on)\n", "html_header_splits = html_splitter.split_text(html_string)\n", "html_header_splits" ] }, { "cell_type": "markdown", "id": "e29b4aade2a0070c", "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "source": [ "### 2) How to constrain chunk sizes:\n", "\n", "`HTMLSectionSplitter` can be used with other text splitters as part of a chunking pipeline. Internally, it uses the `RecursiveCharacterTextSplitter` when the section size is larger than the chunk size. 
It also considers the font size of the text to determine whether it is a section or not based on the determined font size threshold." ] }, { "cell_type": "code", "execution_count": 3, "id": "6ada8ea093ea0475", "metadata": { "ExecuteTime": { "end_time": "2023-10-02T18:57:51.016141300Z", "start_time": "2023-10-02T18:57:50.647495400Z" }, "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Foo \\n Some intro text about Foo.', metadata={'Header 1': 'Foo'}),\n", " Document(page_content='Bar main section \\n Some intro text about Bar.', metadata={'Header 2': 'Bar main section'}),\n", " Document(page_content='Bar subsection 1 \\n Some text about the first subtopic of Bar.', metadata={'Header 3': 'Bar subsection 1'}),\n", " Document(page_content='Bar subsection 2 \\n Some text about the second subtopic of Bar.', metadata={'Header 3': 'Bar subsection 2'}),\n", " Document(page_content='Baz \\n Some text about Baz \\n \\n \\n Some concluding text about Foo', metadata={'Header 2': 'Baz'})]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "html_string = \"\"\"\n", " <!DOCTYPE html>\n", " <html>\n", " <body>\n", " <div>\n", " <h1>Foo</h1>\n", " <p>Some intro text about Foo.</p>\n", " <div>\n", " <h2>Bar main section</h2>\n", " <p>Some intro text about Bar.</p>\n", " <h3>Bar subsection 1</h3>\n", " <p>Some text about the first subtopic of Bar.</p>\n", " <h3>Bar subsection 2</h3>\n", " <p>Some text about the second subtopic of Bar.</p>\n", " </div>\n", " <div>\n", " <h2>Baz</h2>\n", " <p>Some text about Baz</p>\n", " </div>\n", " <br>\n", " <p>Some concluding text about Foo</p>\n", " </div>\n", " </body>\n", " </html>\n", "\"\"\"\n", "\n", "headers_to_split_on = [\n", " (\"h1\", \"Header 1\"),\n", " (\"h2\", \"Header 2\"),\n", " (\"h3\", \"Header 3\"),\n", " (\"h4\", \"Header 4\"),\n", "]\n", "\n", "html_splitter = HTMLSectionSplitter(headers_to_split_on)\n", "\n", "html_header_splits = html_splitter.split_text(html_string)\n", "\n", "chunk_size = 500\n", "chunk_overlap = 30\n", "text_splitter = RecursiveCharacterTextSplitter(\n", " chunk_size=chunk_size, chunk_overlap=chunk_overlap\n", ")\n", "\n", "# Split\n", "splits = text_splitter.split_documents(html_header_splits)\n", "splits" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
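The `xslt_path` option described above is not exercised in the cells; the following is a rough sketch of how it might be wired up, assuming `xslt_path` is passed when the splitter is constructed as the description suggests. The stylesheet path is a placeholder, and omitting the argument falls back to the default stylesheet.

```python
from langchain_text_splitters import HTMLSectionSplitter

headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]

# Placeholder path: an XSLT stylesheet that rewrites, e.g., large-font <span> elements
# into <h1>/<h2> tags so they can be detected as section headers.
html_splitter = HTMLSectionSplitter(
    headers_to_split_on,
    xslt_path="/absolute/path/to/convert_spans_to_headers.xslt",
)

# html_string as defined in the cells above
sections = html_splitter.split_text(html_string)
```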
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/MultiQueryRetriever.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "8cc82b48", "metadata": {}, "source": [ "# How to use the MultiQueryRetriever\n", "\n", "Distance-based vector database retrieval embeds (represents) queries in high-dimensional space and finds similar embedded documents based on a distance metric. But, retrieval may produce different results with subtle changes in query wording, or if the embeddings do not capture the semantics of the data well. Prompt engineering / tuning is sometimes done to manually address these problems, but can be tedious.\n", "\n", "The [MultiQueryRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.multi_query.MultiQueryRetriever.html) automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. For each query, it retrieves a set of relevant documents and takes the unique union across all queries to get a larger set of potentially relevant documents. By generating multiple perspectives on the same question, the `MultiQueryRetriever` can mitigate some of the limitations of the distance-based retrieval and get a richer set of results.\n", "\n", "Let's build a vectorstore using the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng from the [RAG tutorial](/docs/tutorials/rag):" ] }, { "cell_type": "code", "execution_count": 1, "id": "994d6c74", "metadata": {}, "outputs": [], "source": [ "# Build a sample vectorDB\n", "from langchain_chroma import Chroma\n", "from langchain_community.document_loaders import WebBaseLoader\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "# Load blog post\n", "loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n", "data = loader.load()\n", "\n", "# Split\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n", "splits = text_splitter.split_documents(data)\n", "\n", "# VectorDB\n", "embedding = OpenAIEmbeddings()\n", "vectordb = Chroma.from_documents(documents=splits, embedding=embedding)" ] }, { "cell_type": "markdown", "id": "cca8f56c", "metadata": {}, "source": [ "#### Simple usage\n", "\n", "Specify the LLM to use for query generation, and the retriever will do the rest." ] }, { "cell_type": "code", "execution_count": 2, "id": "edbca101", "metadata": {}, "outputs": [], "source": [ "from langchain.retrievers.multi_query import MultiQueryRetriever\n", "from langchain_openai import ChatOpenAI\n", "\n", "question = \"What are the approaches to Task Decomposition?\"\n", "llm = ChatOpenAI(temperature=0)\n", "retriever_from_llm = MultiQueryRetriever.from_llm(\n", " retriever=vectordb.as_retriever(), llm=llm\n", ")" ] }, { "cell_type": "code", "execution_count": 3, "id": "9e6d3b69", "metadata": {}, "outputs": [], "source": [ "# Set logging for the queries\n", "import logging\n", "\n", "logging.basicConfig()\n", "logging.getLogger(\"langchain.retrievers.multi_query\").setLevel(logging.INFO)" ] }, { "cell_type": "code", "execution_count": 4, "id": "bc93dc2b-9407-48b0-9f9a-338247e7eb69", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:langchain.retrievers.multi_query:Generated queries: ['1. How can Task Decomposition be achieved through different methods?', '2. What strategies are commonly used for Task Decomposition?', '3. 
What are the various techniques for breaking down tasks in Task Decomposition?']\n" ] }, { "data": { "text/plain": [ "5" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "unique_docs = retriever_from_llm.invoke(question)\n", "len(unique_docs)" ] }, { "cell_type": "markdown", "id": "7e170263-facd-4065-bb68-d11fb9123a45", "metadata": {}, "source": [ "Note that the underlying queries generated by the retriever are logged at the `INFO` level." ] }, { "cell_type": "markdown", "id": "c54a282f", "metadata": {}, "source": [ "#### Supplying your own prompt\n", "\n", "Under the hood, `MultiQueryRetriever` generates queries using a specific [prompt](https://api.python.langchain.com/en/latest/_modules/langchain/retrievers/multi_query.html#MultiQueryRetriever). To customize this prompt:\n", "\n", "1. Make a [PromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html) with an input variable for the question;\n", "2. Implement an [output parser](/docs/concepts#output-parsers) like the one below to split the result into a list of queries.\n", "\n", "The prompt and output parser together must support the generation of a list of queries." ] }, { "cell_type": "code", "execution_count": 5, "id": "d9afb0ca", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.output_parsers import BaseOutputParser\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_core.pydantic_v1 import BaseModel, Field\n", "\n", "\n", "# Output parser will split the LLM result into a list of queries\n", "class LineListOutputParser(BaseOutputParser[List[str]]):\n", " \"\"\"Output parser for a list of lines.\"\"\"\n", "\n", " def parse(self, text: str) -> List[str]:\n", " lines = text.strip().split(\"\\n\")\n", " return lines\n", "\n", "\n", "output_parser = LineListOutputParser()\n", "\n", "QUERY_PROMPT = PromptTemplate(\n", " input_variables=[\"question\"],\n", " template=\"\"\"You are an AI language model assistant. Your task is to generate five \n", " different versions of the given user question to retrieve relevant documents from a vector \n", " database. By generating multiple perspectives on the user question, your goal is to help\n", " the user overcome some of the limitations of the distance-based similarity search. \n", " Provide these alternative questions separated by newlines.\n", " Original question: {question}\"\"\",\n", ")\n", "llm = ChatOpenAI(temperature=0)\n", "\n", "# Chain\n", "llm_chain = QUERY_PROMPT | llm | output_parser\n", "\n", "# Other inputs\n", "question = \"What are the approaches to Task Decomposition?\"" ] }, { "cell_type": "code", "execution_count": 6, "id": "59c75c56-dbd7-4887-b9ba-0b5b21069f51", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:langchain.retrievers.multi_query:Generated queries: ['1. Can you provide insights on regression from the course material?', '2. How is regression discussed in the course content?', '3. What information does the course offer about regression?', '4. In what way is regression covered in the course?', '5. 
What are the teachings of the course regarding regression?']\n" ] }, { "data": { "text/plain": [ "9" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Run\n", "retriever = MultiQueryRetriever(\n", " retriever=vectordb.as_retriever(), llm_chain=llm_chain, parser_key=\"lines\"\n", ") # \"lines\" is the key (attribute name) of the parsed output\n", "\n", "# Results\n", "unique_docs = retriever.invoke(\"What does the course say about regression?\")\n", "len(unique_docs)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
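To see the retriever in context, here is one possible sketch of a small RAG chain built on top of the `retriever_from_llm` and `llm` objects defined above; the prompt wording is illustrative only.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)


def format_docs(docs):
    # Concatenate the retrieved documents into a single context string
    return "\n\n".join(doc.page_content for doc in docs)


rag_chain = (
    {"context": retriever_from_llm | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

rag_chain.invoke("What are the approaches to Task Decomposition?")
```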
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/add_scores_retriever.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "9d59582a-6473-4b34-929b-3e94cb443c3d", "metadata": {}, "source": [ "# How to add scores to retriever results\n", "\n", "Retrievers will return sequences of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects, which by default include no information about the process that retrieved them (e.g., a similarity score against a query). Here we demonstrate how to add retrieval scores to the `.metadata` of documents:\n", "1. From [vectorstore retrievers](/docs/how_to/vectorstore_retriever);\n", "2. From higher-order LangChain retrievers, such as [SelfQueryRetriever](/docs/how_to/self_query) or [MultiVectorRetriever](/docs/how_to/multi_vector).\n", "\n", "For (1), we will implement a short wrapper function around the corresponding vector store. For (2), we will update a method of the corresponding class.\n", "\n", "## Create vector store\n", "\n", "First we populate a vector store with some data. We will use a [PineconeVectorStore](https://api.python.langchain.com/en/latest/vectorstores/langchain_pinecone.vectorstores.PineconeVectorStore.html), but this guide is compatible with any LangChain vector store that implements a `.similarity_search_with_score` method." ] }, { "cell_type": "code", "execution_count": 2, "id": "b8cfcb1b-64ee-4b91-8d82-ce7803834985", "metadata": {}, "outputs": [], "source": [ "from langchain_core.documents import Document\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_pinecone import PineconeVectorStore\n", "\n", "docs = [\n", " Document(\n", " page_content=\"A bunch of scientists bring back dinosaurs and mayhem breaks loose\",\n", " metadata={\"year\": 1993, \"rating\": 7.7, \"genre\": \"science fiction\"},\n", " ),\n", " Document(\n", " page_content=\"Leo DiCaprio gets lost in a dream within a dream within a dream within a ...\",\n", " metadata={\"year\": 2010, \"director\": \"Christopher Nolan\", \"rating\": 8.2},\n", " ),\n", " Document(\n", " page_content=\"A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea\",\n", " metadata={\"year\": 2006, \"director\": \"Satoshi Kon\", \"rating\": 8.6},\n", " ),\n", " Document(\n", " page_content=\"A bunch of normal-sized women are supremely wholesome and some men pine after them\",\n", " metadata={\"year\": 2019, \"director\": \"Greta Gerwig\", \"rating\": 8.3},\n", " ),\n", " Document(\n", " page_content=\"Toys come alive and have a blast doing so\",\n", " metadata={\"year\": 1995, \"genre\": \"animated\"},\n", " ),\n", " Document(\n", " page_content=\"Three men walk into the Zone, three men walk out of the Zone\",\n", " metadata={\n", " \"year\": 1979,\n", " \"director\": \"Andrei Tarkovsky\",\n", " \"genre\": \"thriller\",\n", " \"rating\": 9.9,\n", " },\n", " ),\n", "]\n", "\n", "vectorstore = PineconeVectorStore.from_documents(\n", " docs, index_name=\"sample\", embedding=OpenAIEmbeddings()\n", ")" ] }, { "cell_type": "markdown", "id": "22ac5ef6-ce18-427f-a91c-62b38a8b41e9", "metadata": {}, "source": [ "## Retriever\n", "\n", "To obtain scores from a vector store retriever, we wrap the underlying vector store's `.similarity_search_with_score` method in a short function that packages scores into the associated document's metadata.\n", "\n", "We add a `@chain` decorator to the function to create a [Runnable](/docs/concepts/#langchain-expression-language) that can be used similarly to a typical retriever." 
] }, { "cell_type": "code", "execution_count": 3, "id": "7e5677c3-f6ee-4974-ab5f-a0f50c199d45", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.documents import Document\n", "from langchain_core.runnables import chain\n", "\n", "\n", "@chain\n", "def retriever(query: str) -> List[Document]:\n", " docs, scores = zip(*vectorstore.similarity_search_with_score(query))\n", " for doc, score in zip(docs, scores):\n", " doc.metadata[\"score\"] = score\n", "\n", " return docs" ] }, { "cell_type": "code", "execution_count": 4, "id": "c9cad75e-b955-4012-989c-3c1820b49ba9", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993.0, 'score': 0.84429127}),\n", " Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'year': 1995.0, 'score': 0.792038262}),\n", " Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'director': 'Andrei Tarkovsky', 'genre': 'thriller', 'rating': 9.9, 'year': 1979.0, 'score': 0.751571238}),\n", " Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'director': 'Satoshi Kon', 'rating': 8.6, 'year': 2006.0, 'score': 0.747471571}))" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result = retriever.invoke(\"dinosaur\")\n", "result" ] }, { "cell_type": "markdown", "id": "6671308a-be8d-4c15-ae1f-5bd07b342560", "metadata": {}, "source": [ "Note that similarity scores from the retrieval step are included in the metadata of the above documents." ] }, { "cell_type": "markdown", "id": "af2e73a0-46a1-47e2-8103-68aaa637642a", "metadata": {}, "source": [ "## SelfQueryRetriever\n", "\n", "`SelfQueryRetriever` will use a LLM to generate a query that is potentially structured-- for example, it can construct filters for the retrieval on top of the usual semantic-similarity driven selection. See [this guide](/docs/how_to/self_query) for more detail.\n", "\n", "`SelfQueryRetriever` includes a short (1 - 2 line) method `_get_docs_with_query` that executes the `vectorstore` search. We can subclass `SelfQueryRetriever` and override this method to propagate similarity scores.\n", "\n", "First, following the [how-to guide](/docs/how_to/self_query), we will need to establish some metadata on which to filter:" ] }, { "cell_type": "code", "execution_count": 5, "id": "8280b829-2e81-4454-8adc-9a0930047fa2", "metadata": {}, "outputs": [], "source": [ "from langchain.chains.query_constructor.base import AttributeInfo\n", "from langchain.retrievers.self_query.base import SelfQueryRetriever\n", "from langchain_openai import ChatOpenAI\n", "\n", "metadata_field_info = [\n", " AttributeInfo(\n", " name=\"genre\",\n", " description=\"The genre of the movie. 
One of ['science fiction', 'comedy', 'drama', 'thriller', 'romance', 'action', 'animated']\",\n", " type=\"string\",\n", " ),\n", " AttributeInfo(\n", " name=\"year\",\n", " description=\"The year the movie was released\",\n", " type=\"integer\",\n", " ),\n", " AttributeInfo(\n", " name=\"director\",\n", " description=\"The name of the movie director\",\n", " type=\"string\",\n", " ),\n", " AttributeInfo(\n", " name=\"rating\", description=\"A 1-10 rating for the movie\", type=\"float\"\n", " ),\n", "]\n", "document_content_description = \"Brief summary of a movie\"\n", "llm = ChatOpenAI(temperature=0)" ] }, { "cell_type": "markdown", "id": "0a6c6fa8-1e2f-45ee-83e9-a6cbd82292d2", "metadata": {}, "source": [ "We then override the `_get_docs_with_query` to use the `similarity_search_with_score` method of the underlying vector store: " ] }, { "cell_type": "code", "execution_count": 6, "id": "62c8f3fa-8b64-4afb-87c4-ccbbf9a8bc54", "metadata": {}, "outputs": [], "source": [ "from typing import Any, Dict\n", "\n", "\n", "class CustomSelfQueryRetriever(SelfQueryRetriever):\n", " def _get_docs_with_query(\n", " self, query: str, search_kwargs: Dict[str, Any]\n", " ) -> List[Document]:\n", " \"\"\"Get docs, adding score information.\"\"\"\n", " docs, scores = zip(\n", " *vectorstore.similarity_search_with_score(query, **search_kwargs)\n", " )\n", " for doc, score in zip(docs, scores):\n", " doc.metadata[\"score\"] = score\n", "\n", " return docs" ] }, { "cell_type": "markdown", "id": "56e40109-1db6-44c7-a6e6-6989175e267c", "metadata": {}, "source": [ "Invoking this retriever will now include similarity scores in the document metadata. Note that the underlying structured-query capabilities of `SelfQueryRetriever` are retained." ] }, { "cell_type": "code", "execution_count": 7, "id": "3359a1ee-34ff-41b6-bded-64c05785b333", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "(Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993.0, 'score': 0.84429127}),)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever = CustomSelfQueryRetriever.from_llm(\n", " llm,\n", " vectorstore,\n", " document_content_description,\n", " metadata_field_info,\n", ")\n", "\n", "\n", "result = retriever.invoke(\"dinosaur movie with rating less than 8\")\n", "result" ] }, { "cell_type": "markdown", "id": "689ab3ba-3494-448b-836e-05fbe1ffd51c", "metadata": {}, "source": [ "## MultiVectorRetriever\n", "\n", "`MultiVectorRetriever` allows you to associate multiple vectors with a single document. This can be useful in a number of applications. For example, we can index small chunks of a larger document and run the retrieval on the chunks, but return the larger \"parent\" document when invoking the retriever. [ParentDocumentRetriever](/docs/how_to/parent_document_retriever/), a subclass of `MultiVectorRetriever`, includes convenience methods for populating a vector store to support this. Further applications are detailed in this [how-to guide](/docs/how_to/multi_vector/).\n", "\n", "To propagate similarity scores through this retriever, we can again subclass `MultiVectorRetriever` and override a method. This time we will override `_get_relevant_documents`.\n", "\n", "First, we prepare some fake data. 
We generate fake \"whole documents\" and store them in a document store; here we will use a simple [InMemoryStore](https://api.python.langchain.com/en/latest/stores/langchain_core.stores.InMemoryBaseStore.html)." ] }, { "cell_type": "code", "execution_count": 8, "id": "a112e545-7b53-4fcd-9c4a-7a42a5cc646d", "metadata": {}, "outputs": [], "source": [ "from langchain.storage import InMemoryStore\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "# The storage layer for the parent documents\n", "docstore = InMemoryStore()\n", "fake_whole_documents = [\n", " (\"fake_id_1\", Document(page_content=\"fake whole document 1\")),\n", " (\"fake_id_2\", Document(page_content=\"fake whole document 2\")),\n", "]\n", "docstore.mset(fake_whole_documents)" ] }, { "cell_type": "markdown", "id": "453b7415-4a6d-45d4-a329-9c1d7271d1b2", "metadata": {}, "source": [ "Next we will add some fake \"sub-documents\" to our vector store. We can link these sub-documents to the parent documents by populating the `\"doc_id\"` key in its metadata." ] }, { "cell_type": "code", "execution_count": 9, "id": "314519c0-dde4-41ea-a1ab-d3cf1c17c63f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['62a85353-41ff-4346-bff7-be6c8ec2ed89',\n", " '5d4a0e83-4cc5-40f1-bc73-ed9cbad0ee15',\n", " '8c1d9a56-120f-45e4-ba70-a19cd19a38f4']" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "docs = [\n", " Document(\n", " page_content=\"A snippet from a larger document discussing cats.\",\n", " metadata={\"doc_id\": \"fake_id_1\"},\n", " ),\n", " Document(\n", " page_content=\"A snippet from a larger document discussing discourse.\",\n", " metadata={\"doc_id\": \"fake_id_1\"},\n", " ),\n", " Document(\n", " page_content=\"A snippet from a larger document discussing chocolate.\",\n", " metadata={\"doc_id\": \"fake_id_2\"},\n", " ),\n", "]\n", "\n", "vectorstore.add_documents(docs)" ] }, { "cell_type": "markdown", "id": "e391f7f3-5a58-40fd-89fa-a0815c5146f7", "metadata": {}, "source": [ "To propagate the scores, we subclass `MultiVectorRetriever` and override its `_get_relevant_documents` method. Here we will make two changes:\n", "\n", "1. We will add similarity scores to the metadata of the corresponding \"sub-documents\" using the `similarity_search_with_score` method of the underlying vector store as above;\n", "2. We will include a list of these sub-documents in the metadata of the retrieved parent document. This surfaces what snippets of text were identified by the retrieval, together with their corresponding similarity scores." 
] }, { "cell_type": "code", "execution_count": 10, "id": "1de61de7-1b58-41d6-9dea-939fef7d741d", "metadata": {}, "outputs": [], "source": [ "from collections import defaultdict\n", "\n", "from langchain.retrievers import MultiVectorRetriever\n", "from langchain_core.callbacks import CallbackManagerForRetrieverRun\n", "\n", "\n", "class CustomMultiVectorRetriever(MultiVectorRetriever):\n", " def _get_relevant_documents(\n", " self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n", " ) -> List[Document]:\n", " \"\"\"Get documents relevant to a query.\n", " Args:\n", " query: String to find relevant documents for\n", " run_manager: The callbacks handler to use\n", " Returns:\n", " List of relevant documents\n", " \"\"\"\n", " results = self.vectorstore.similarity_search_with_score(\n", " query, **self.search_kwargs\n", " )\n", "\n", " # Map doc_ids to list of sub-documents, adding scores to metadata\n", " id_to_doc = defaultdict(list)\n", " for doc, score in results:\n", " doc_id = doc.metadata.get(\"doc_id\")\n", " if doc_id:\n", " doc.metadata[\"score\"] = score\n", " id_to_doc[doc_id].append(doc)\n", "\n", " # Fetch documents corresponding to doc_ids, retaining sub_docs in metadata\n", " docs = []\n", " for _id, sub_docs in id_to_doc.items():\n", " docstore_docs = self.docstore.mget([_id])\n", " if docstore_docs:\n", " if doc := docstore_docs[0]:\n", " doc.metadata[\"sub_docs\"] = sub_docs\n", " docs.append(doc)\n", "\n", " return docs" ] }, { "cell_type": "markdown", "id": "7af27b38-631c-463f-9d66-bcc985f06a4f", "metadata": {}, "source": [ "Invoking this retriever, we can see that it identifies the correct parent document, including the relevant snippet from the sub-document with similarity score." ] }, { "cell_type": "code", "execution_count": 11, "id": "dc42a1be-22e1-4ade-b1bd-bafb85f2424f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='fake whole document 1', metadata={'sub_docs': [Document(page_content='A snippet from a larger document discussing cats.', metadata={'doc_id': 'fake_id_1', 'score': 0.831276655})]})]" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever = CustomMultiVectorRetriever(vectorstore=vectorstore, docstore=docstore)\n", "\n", "retriever.invoke(\"cat\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/agent_executor.ipynb
{ "cells": [ { "cell_type": "raw", "id": "17546ebb", "metadata": {}, "source": [ "---\n", "sidebar_position: 4\n", "---" ] }, { "cell_type": "markdown", "id": "f4c03f40-1328-412d-8a48-1db0cd481b77", "metadata": {}, "source": [ "# Build an Agent with AgentExecutor (Legacy)\n", "\n", ":::{.callout-important}\n", "This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n", ":::\n", "\n", "By themselves, language models can't take actions - they just output text.\n", "A big use case for LangChain is creating **agents**.\n", "Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.\n", "The results of those actions can then be fed back into the agent and it determines whether more actions are needed, or whether it is okay to finish.\n", "\n", "In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n", "\n", "## Concepts\n", "\n", "Concepts we will cover are:\n", "- Using [language models](/docs/concepts/#chat-models), in particular their tool calling ability\n", "- Creating a [Retriever](/docs/concepts/#retrievers) to expose specific information to our agent\n", "- Using a Search [Tool](/docs/concepts/#tools) to look up things online\n", "- [`Chat History`](/docs/concepts/#chat-history), which allows a chatbot to \"remember\" past interactions and take them into account when responding to follow-up questions. \n", "- Debugging and tracing your application using [LangSmith](/docs/concepts/#langsmith)\n", "\n", "## Setup\n", "\n", "### Jupyter Notebook\n", "\n", "This guide (and most of the other guides in the documentation) uses [Jupyter notebooks](https://jupyter.org/) and assumes the reader is as well. Jupyter notebooks are perfect for learning how to work with LLM systems because oftentimes things can go wrong (unexpected output, API down, etc) and going through guides in an interactive environment is a great way to better understand them.\n", "\n", "This and other tutorials are perhaps most conveniently run in a Jupyter notebook. 
See [here](https://jupyter.org/install) for instructions on how to install.\n", "\n", "### Installation\n", "\n", "To install LangChain run:\n", "\n", "```{=mdx}\n", "import Tabs from '@theme/Tabs';\n", "import TabItem from '@theme/TabItem';\n", "import CodeBlock from \"@theme/CodeBlock\";\n", "\n", "<Tabs>\n", " <TabItem value=\"pip\" label=\"Pip\" default>\n", " <CodeBlock language=\"bash\">pip install langchain</CodeBlock>\n", " </TabItem>\n", " <TabItem value=\"conda\" label=\"Conda\">\n", " <CodeBlock language=\"bash\">conda install langchain -c conda-forge</CodeBlock>\n", " </TabItem>\n", "</Tabs>\n", "\n", "```\n", "\n", "\n", "For more details, see our [Installation guide](/docs/how_to/installation).\n", "\n", "### LangSmith\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n", "As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n", "The best way to do this is with [LangSmith](https://smith.langchain.com).\n", "\n", "After you sign up at the link above, make sure to set your environment variables to start logging traces:\n", "\n", "```shell\n", "export LANGCHAIN_TRACING_V2=\"true\"\n", "export LANGCHAIN_API_KEY=\"...\"\n", "```\n", "\n", "Or, if in a notebook, you can set them with:\n", "\n", "```python\n", "import getpass\n", "import os\n", "\n", "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "```\n" ] }, { "cell_type": "markdown", "id": "c335d1bf", "metadata": {}, "source": [ "## Define tools\n", "\n", "We first need to create the tools we want to use. We will use two tools: [Tavily](/docs/integrations/tools/tavily_search) (to search online) and then a retriever over a local index we will create\n", "\n", "### [Tavily](/docs/integrations/tools/tavily_search)\n", "\n", "We have a built-in tool in LangChain to easily use Tavily search engine as tool.\n", "Note that this requires an API key - they have a free tier, but if you don't have one or don't want to create one, you can always ignore this step.\n", "\n", "Once you create your API key, you will need to export that as:\n", "\n", "```bash\n", "export TAVILY_API_KEY=\"...\"\n", "```" ] }, { "cell_type": "code", "execution_count": 5, "id": "482ce13d", "metadata": {}, "outputs": [], "source": [ "from langchain_community.tools.tavily_search import TavilySearchResults" ] }, { "cell_type": "code", "execution_count": 6, "id": "9cc86c0b", "metadata": {}, "outputs": [], "source": [ "search = TavilySearchResults(max_results=2)" ] }, { "cell_type": "code", "execution_count": 7, "id": "e593bbf6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'url': 'https://www.weatherapi.com/',\n", " 'content': \"{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714000492, 'localtime': '2024-04-24 16:14'}, 'current': {'last_updated_epoch': 1713999600, 'last_updated': '2024-04-24 16:00', 'temp_c': 15.6, 'temp_f': 60.1, 'is_day': 1, 'condition': {'text': 'Overcast', 'icon': '//cdn.weatherapi.com/weather/64x64/day/122.png', 'code': 1009}, 'wind_mph': 10.5, 'wind_kph': 16.9, 'wind_degree': 330, 'wind_dir': 'NNW', 'pressure_mb': 1018.0, 'pressure_in': 30.06, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 72, 'cloud': 100, 'feelslike_c': 15.6, 'feelslike_f': 60.1, 'vis_km': 16.0, 
'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 14.8, 'gust_kph': 23.8}}\"},\n", " {'url': 'https://www.weathertab.com/en/c/e/04/united-states/california/san-francisco/',\n", " 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "search.invoke(\"what is the weather in SF\")" ] }, { "cell_type": "markdown", "id": "e8097977", "metadata": {}, "source": [ "### Retriever\n", "\n", "We will also create a retriever over some data of our own. For a deeper explanation of each step here, see [this tutorial](/docs/tutorials/rag)." ] }, { "cell_type": "code", "execution_count": 8, "id": "9c9ce713", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import WebBaseLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n", "docs = loader.load()\n", "documents = RecursiveCharacterTextSplitter(\n", " chunk_size=1000, chunk_overlap=200\n", ").split_documents(docs)\n", "vector = FAISS.from_documents(documents, OpenAIEmbeddings())\n", "retriever = vector.as_retriever()" ] }, { "cell_type": "code", "execution_count": 9, "id": "dae53ec6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(page_content='# The data to predict and grade over evaluators=[exact_match], # The evaluators to score the results experiment_prefix=\"sample-experiment\", # The name of the experiment metadata={ \"version\": \"1.0.0\", \"revision_id\": \"beta\" },)import { Client, Run, Example } from \\'langsmith\\';import { runOnDataset } from \\'langchain/smith\\';import { EvaluationResult } from \\'langsmith/evaluation\\';const client = new Client();// Define dataset: these are your test casesconst datasetName = \"Sample Dataset\";const dataset = await client.createDataset(datasetName, { description: \"A sample dataset in LangSmith.\"});await client.createExamples({ inputs: [ { postfix: \"to LangSmith\" }, { postfix: \"to Evaluations in LangSmith\" }, ], outputs: [ { output: \"Welcome to LangSmith\" }, { output: \"Welcome to Evaluations in LangSmith\" }, ], datasetId: dataset.id,});// Define your evaluatorconst exactMatch = async ({ run, example }: { run: Run; example?:', metadata={'source': 'https://docs.smith.langchain.com/overview', 'title': 'Getting started with LangSmith | \\uf8ffü¶úÔ∏è\\uf8ffüõ†Ô∏è LangSmith', 'description': 'Introduction', 'language': 'en'})" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.invoke(\"how to upload a dataset\")[0]" ] }, { "cell_type": "markdown", "id": "04aeca39", "metadata": {}, "source": [ "Now that we have populated our index that we will do doing retrieval over, we can easily turn it into a tool (the format needed for an agent to properly use it)" ] }, { "cell_type": "code", "execution_count": 10, "id": "117594b5", "metadata": {}, "outputs": [], "source": [ "from langchain.tools.retriever import create_retriever_tool" ] }, { "cell_type": "code", "execution_count": 11, "id": "7280b031", "metadata": 
{}, "outputs": [], "source": [ "retriever_tool = create_retriever_tool(\n", " retriever,\n", " \"langsmith_search\",\n", " \"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!\",\n", ")" ] }, { "cell_type": "markdown", "id": "c3b47c1d", "metadata": {}, "source": [ "### Tools\n", "\n", "Now that we have created both, we can create a list of tools that we will use downstream." ] }, { "cell_type": "code", "execution_count": 12, "id": "b8e8e710", "metadata": {}, "outputs": [], "source": [ "tools = [search, retriever_tool]" ] }, { "cell_type": "markdown", "id": "e00068b0", "metadata": {}, "source": [ "## Using Language Models\n", "\n", "Next, let's learn how to use a language model by to call tools. LangChain supports many different language models that you can use interchangably - select the one you want to use below!\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs openaiParams={`model=\"gpt-4\"`} />\n", "```" ] }, { "cell_type": "code", "execution_count": 4, "id": "69185491", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "model = ChatOpenAI(model=\"gpt-4\")" ] }, { "cell_type": "markdown", "id": "642ed8bf", "metadata": {}, "source": [ "You can call the language model by passing in a list of messages. By default, the response is a `content` string." ] }, { "cell_type": "code", "execution_count": 13, "id": "c96c960b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Hello! How can I assist you today?'" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import HumanMessage\n", "\n", "response = model.invoke([HumanMessage(content=\"hi!\")])\n", "response.content" ] }, { "cell_type": "markdown", "id": "47bf8210", "metadata": {}, "source": [ "We can now see what it is like to enable this model to do tool calling. In order to enable that we use `.bind_tools` to give the language model knowledge of these tools" ] }, { "cell_type": "code", "execution_count": 14, "id": "ba692a74", "metadata": {}, "outputs": [], "source": [ "model_with_tools = model.bind_tools(tools)" ] }, { "cell_type": "markdown", "id": "fd920b69", "metadata": {}, "source": [ "We can now call the model. Let's first call it with a normal message, and see how it responds. We can look at both the `content` field as well as the `tool_calls` field." ] }, { "cell_type": "code", "execution_count": 18, "id": "b6a7e925", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ContentString: Hello! How can I assist you today?\n", "ToolCalls: []\n" ] } ], "source": [ "response = model_with_tools.invoke([HumanMessage(content=\"Hi!\")])\n", "\n", "print(f\"ContentString: {response.content}\")\n", "print(f\"ToolCalls: {response.tool_calls}\")" ] }, { "cell_type": "markdown", "id": "e8c81e76", "metadata": {}, "source": [ "Now, let's try calling it with some input that would expect a tool to be called." 
] }, { "cell_type": "code", "execution_count": 19, "id": "688b465d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ContentString: \n", "ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_4HteVahXkRAkWjp6dGXryKZX'}]\n" ] } ], "source": [ "response = model_with_tools.invoke([HumanMessage(content=\"What's the weather in SF?\")])\n", "\n", "print(f\"ContentString: {response.content}\")\n", "print(f\"ToolCalls: {response.tool_calls}\")" ] }, { "cell_type": "markdown", "id": "83c4bcd3", "metadata": {}, "source": [ "We can see that there's now no content, but there is a tool call! It wants us to call the Tavily Search tool.\n", "\n", "This isn't calling that tool yet - it's just telling us to. In order to actually calll it, we'll want to create our agent." ] }, { "cell_type": "markdown", "id": "40ccec80", "metadata": {}, "source": [ "## Create the agent\n", "\n", "Now that we have defined the tools and the LLM, we can create the agent. We will be using a tool calling agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts/#agent_types/).\n", "\n", "We can first choose the prompt we want to use to guide the agent.\n", "\n", "If you want to see the contents of this prompt and have access to LangSmith, you can go to:\n", "\n", "[https://smith.langchain.com/hub/hwchase17/openai-functions-agent](https://smith.langchain.com/hub/hwchase17/openai-functions-agent)" ] }, { "cell_type": "code", "execution_count": 20, "id": "af83d3e3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')),\n", " MessagesPlaceholder(variable_name='chat_history', optional=True),\n", " HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),\n", " MessagesPlaceholder(variable_name='agent_scratchpad')]" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain import hub\n", "\n", "# Get the prompt to use - you can modify this!\n", "prompt = hub.pull(\"hwchase17/openai-functions-agent\")\n", "prompt.messages" ] }, { "cell_type": "markdown", "id": "f8014c9d", "metadata": {}, "source": [ "Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts/#agents).\n", "\n", "Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_agent` will call `.bind_tools` for us under the hood." ] }, { "cell_type": "code", "execution_count": 23, "id": "89cf72b4-6046-4b47-8f27-5522d8cb8036", "metadata": {}, "outputs": [], "source": [ "from langchain.agents import create_tool_calling_agent\n", "\n", "agent = create_tool_calling_agent(model, tools, prompt)" ] }, { "cell_type": "markdown", "id": "1a58c9f8", "metadata": {}, "source": [ "Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools)." 
] }, { "cell_type": "code", "execution_count": 24, "id": "ce33904a", "metadata": {}, "outputs": [], "source": [ "from langchain.agents import AgentExecutor\n", "\n", "agent_executor = AgentExecutor(agent=agent, tools=tools)" ] }, { "cell_type": "markdown", "id": "e4df0e06", "metadata": {}, "source": [ "## Run the agent\n", "\n", "We can now run the agent on a few queries! Note that for now, these are all **stateless** queries (it won't remember previous interactions).\n", "\n", "First up, let's how it responds when there's no need to call a tool:" ] }, { "cell_type": "code", "execution_count": 25, "id": "114ba50d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'hi!', 'output': 'Hello! How can I assist you today?'}" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke({\"input\": \"hi!\"})" ] }, { "cell_type": "markdown", "id": "71493a42", "metadata": {}, "source": [ "In order to see exactly what is happening under the hood (and to make sure it's not calling a tool) we can take a look at the [LangSmith trace](https://smith.langchain.com/public/8441812b-94ce-4832-93ec-e1114214553a/r)\n", "\n", "Let's now try it out on an example where it should be invoking the retriever" ] }, { "cell_type": "code", "execution_count": 26, "id": "3fa4780a", "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "{'input': 'how can langsmith help with testing?',\n", " 'output': 'LangSmith is a platform that aids in building production-grade Language Learning Model (LLM) applications. It can assist with testing in several ways:\\n\\n1. **Monitoring and Evaluation**: LangSmith allows close monitoring and evaluation of your application. This helps you to ensure the quality of your application and deploy it with confidence.\\n\\n2. **Tracing**: LangSmith has tracing capabilities that can be beneficial for debugging and understanding the behavior of your application.\\n\\n3. **Evaluation Capabilities**: LangSmith has built-in tools for evaluating the performance of your LLM. \\n\\n4. **Prompt Hub**: This is a prompt management tool built into LangSmith that can help in testing different prompts and their responses.\\n\\nPlease note that to use LangSmith, you would need to install it and create an API key. The platform offers Python and Typescript SDKs for utilization. It works independently and does not require the use of LangChain.'}" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke({\"input\": \"how can langsmith help with testing?\"})" ] }, { "cell_type": "markdown", "id": "f2d94242", "metadata": {}, "source": [ "Let's take a look at the [LangSmith trace](https://smith.langchain.com/public/762153f6-14d4-4c98-8659-82650f860c62/r) to make sure it's actually calling that.\n", "\n", "Now let's try one where it needs to call the search tool:" ] }, { "cell_type": "code", "execution_count": 27, "id": "77c2f769", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'whats the weather in sf?',\n", " 'output': 'The current weather in San Francisco is partly cloudy with a temperature of 16.1°C (61.0°F). The wind is coming from the WNW at a speed of 10.5 mph. The humidity is at 67%. 
[source](https://www.weatherapi.com/)'}" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke({\"input\": \"whats the weather in sf?\"})" ] }, { "cell_type": "markdown", "id": "c174f838", "metadata": {}, "source": [ "We can check out the [LangSmith trace](https://smith.langchain.com/public/36df5b1a-9a0b-4185-bae2-964e1d53c665/r) to make sure it's calling the search tool effectively." ] }, { "cell_type": "markdown", "id": "022cbc8a", "metadata": {}, "source": [ "## Adding in memory\n", "\n", "As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory we need to pass in previous `chat_history`. Note: it needs to be called `chat_history` because of the prompt we are using. If we use a different prompt, we could change the variable name" ] }, { "cell_type": "code", "execution_count": 28, "id": "c4073e35", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'hi! my name is bob',\n", " 'chat_history': [],\n", " 'output': 'Hello Bob! How can I assist you today?'}" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Here we pass in an empty list of messages for chat_history because it is the first message in the chat\n", "agent_executor.invoke({\"input\": \"hi! my name is bob\", \"chat_history\": []})" ] }, { "cell_type": "code", "execution_count": 29, "id": "9dc5ed68", "metadata": {}, "outputs": [], "source": [ "from langchain_core.messages import AIMessage, HumanMessage" ] }, { "cell_type": "code", "execution_count": 30, "id": "550e0c6e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'chat_history': [HumanMessage(content='hi! my name is bob'),\n", " AIMessage(content='Hello Bob! How can I assist you today?')],\n", " 'input': \"what's my name?\",\n", " 'output': 'Your name is Bob. How can I assist you further?'}" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke(\n", " {\n", " \"chat_history\": [\n", " HumanMessage(content=\"hi! my name is bob\"),\n", " AIMessage(content=\"Hello Bob! How can I assist you today?\"),\n", " ],\n", " \"input\": \"what's my name?\",\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "07b3bcf2", "metadata": {}, "source": [ "If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see [this guide](/docs/how_to/message_history). " ] }, { "cell_type": "code", "execution_count": 36, "id": "8edd96e6", "metadata": {}, "outputs": [], "source": [ "from langchain_community.chat_message_histories import ChatMessageHistory\n", "from langchain_core.chat_history import BaseChatMessageHistory\n", "from langchain_core.runnables.history import RunnableWithMessageHistory\n", "\n", "store = {}\n", "\n", "\n", "def get_session_history(session_id: str) -> BaseChatMessageHistory:\n", " if session_id not in store:\n", " store[session_id] = ChatMessageHistory()\n", " return store[session_id]" ] }, { "cell_type": "markdown", "id": "c450d6a5", "metadata": {}, "source": [ "Because we have multiple inputs, we need to specify two things:\n", "\n", "- `input_messages_key`: The input key to use to add to the conversation history.\n", "- `history_messages_key`: The key to add the loaded messages into." 
] }, { "cell_type": "code", "execution_count": 37, "id": "828d1e95", "metadata": {}, "outputs": [], "source": [ "agent_with_chat_history = RunnableWithMessageHistory(\n", " agent_executor,\n", " get_session_history,\n", " input_messages_key=\"input\",\n", " history_messages_key=\"chat_history\",\n", ")" ] }, { "cell_type": "code", "execution_count": 38, "id": "1f5932b6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': \"hi! I'm bob\",\n", " 'chat_history': [],\n", " 'output': 'Hello Bob! How can I assist you today?'}" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_with_chat_history.invoke(\n", " {\"input\": \"hi! I'm bob\"},\n", " config={\"configurable\": {\"session_id\": \"<foo>\"}},\n", ")" ] }, { "cell_type": "code", "execution_count": 39, "id": "ae627966", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': \"what's my name?\",\n", " 'chat_history': [HumanMessage(content=\"hi! I'm bob\"),\n", " AIMessage(content='Hello Bob! How can I assist you today?')],\n", " 'output': 'Your name is Bob.'}" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_with_chat_history.invoke(\n", " {\"input\": \"what's my name?\"},\n", " config={\"configurable\": {\"session_id\": \"<foo>\"}},\n", ")" ] }, { "cell_type": "markdown", "id": "6de2798e", "metadata": {}, "source": [ "Example LangSmith trace: https://smith.langchain.com/public/98c8d162-60ae-4493-aa9f-992d87bd0429/r" ] }, { "cell_type": "markdown", "id": "c029798f", "metadata": {}, "source": [ "## Conclusion\n", "\n", "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n", "\n", ":::{.callout-important}\n", "This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n", ":::\n", "\n", "If you want to continue using LangChain agents, some good advanced guides are:\n", "\n", "- [How to use LangGraph's built-in versions of `AgentExecutor`](/docs/how_to/migrate_agent)\n", "- [How to create a custom agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/custom_agent/)\n", "- [How to stream responses from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/)\n", "- [How to return structured output from an agent](https://python.langchain.com/v0.1/docs/modules/agents/how_to/agent_structured/)" ] }, { "cell_type": "code", "execution_count": null, "id": "e3ec3244", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.1" } }, "nbformat": 4, "nbformat_minor": 5 }
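A small follow-up sketch for the memory section above (it assumes the `get_session_history` / `store` setup and the illustrative `"<foo>"` session id from the notebook): `RunnableWithMessageHistory` writes both the inputs and the outputs into the per-session history, which you can inspect directly.

```python
# Hedged sketch: relies on the `get_session_history` helper defined in the notebook above.
history = get_session_history("<foo>")

# After the two invocations above, the session history holds the accumulated turns.
for message in history.messages:
    print(type(message).__name__ + ":", message.content)
# e.g.
# HumanMessage: hi! I'm bob
# AIMessage: Hello Bob! How can I assist you today?
# HumanMessage: what's my name?
# AIMessage: Your name is Bob.
```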
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/assign.ipynb
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 6\n", "keywords: [RunnablePassthrough, assign, LCEL]\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add values to a chain's state\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Calling runnables in parallel](/docs/how_to/parallel/)\n", "- [Custom functions](/docs/how_to/functions/)\n", "- [Passing data through](/docs/how_to/passthrough)\n", "\n", ":::\n", "\n", "An alternate way of [passing data through](/docs/how_to/passthrough) steps of a chain is to leave the current values of the chain state unchanged while assigning a new value under a given key. The [`RunnablePassthrough.assign()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html#langchain_core.runnables.passthrough.RunnablePassthrough.assign) static method takes an input value and adds the extra arguments passed to the assign function.\n", "\n", "This is useful in the common [LangChain Expression Language](/docs/concepts/#langchain-expression-language) pattern of additively creating a dictionary to use as input to a later step.\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'extra': {'num': 1, 'mult': 3}, 'modified': 2}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n", "\n", "runnable = RunnableParallel(\n", " extra=RunnablePassthrough.assign(mult=lambda x: x[\"num\"] * 3),\n", " modified=lambda x: x[\"num\"] + 1,\n", ")\n", "\n", "runnable.invoke({\"num\": 1})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's break down what's happening here.\n", "\n", "- The input to the chain is `{\"num\": 1}`. This is passed into a `RunnableParallel`, which invokes the runnables it is passed in parallel with that input.\n", "- The value under the `extra` key is invoked. `RunnablePassthrough.assign()` keeps the original keys in the input dict (`{\"num\": 1}`), and assigns a new key called `mult`. The value is `lambda x: x[\"num\"] * 3)`, which is `3`. Thus, the result is `{\"num\": 1, \"mult\": 3}`.\n", "- `{\"num\": 1, \"mult\": 3}` is returned to the `RunnableParallel` call, and is set as the value to the key `extra`.\n", "- At the same time, the `modified` key is called. The result is `2`, since the lambda extracts a key called `\"num\"` from its input and adds one.\n", "\n", "Thus, the result is `{'extra': {'num': 1, 'mult': 3}, 'modified': 2}`.\n", "\n", "## Streaming\n", "\n", "One convenient feature of this method is that it allows values to pass through as soon as they are available. 
To show this off, we'll use `RunnablePassthrough.assign()` to immediately return source docs in a retrieval chain:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'question': 'where did harrison work?'}\n", "{'context': [Document(page_content='harrison worked at kensho')]}\n", "{'output': ''}\n", "{'output': 'H'}\n", "{'output': 'arrison'}\n", "{'output': ' worked'}\n", "{'output': ' at'}\n", "{'output': ' Kens'}\n", "{'output': 'ho'}\n", "{'output': '.'}\n", "{'output': ''}\n" ] } ], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "model = ChatOpenAI()\n", "\n", "generation_chain = prompt | model | StrOutputParser()\n", "\n", "retrieval_chain = {\n", " \"context\": retriever,\n", " \"question\": RunnablePassthrough(),\n", "} | RunnablePassthrough.assign(output=generation_chain)\n", "\n", "stream = retrieval_chain.stream(\"where did harrison work?\")\n", "\n", "for chunk in stream:\n", " print(chunk)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the first chunk contains the original `\"question\"` since that is immediately available. The second chunk contains `\"context\"` since the retriever finishes second. Finally, the output from the `generation_chain` streams in chunks as soon as it is available.\n", "\n", "## Next steps\n", "\n", "Now you've learned how to pass data through your chains to help format the data flowing through your chains.\n", "\n", "To learn more, see the other how-to guides on runnables in this section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 4 }
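The same additive pattern works without any external services. Here is a minimal, self-contained sketch (the keys `a`, `b`, `total`, and `doubled` are purely illustrative) showing how each `assign` step adds a key while keeping the existing ones:

```python
from langchain_core.runnables import RunnablePassthrough

# Each assign step receives the full dict and returns it with one extra key.
chain = (
    RunnablePassthrough.assign(total=lambda x: x["a"] + x["b"])
    | RunnablePassthrough.assign(doubled=lambda x: x["total"] * 2)
)

print(chain.invoke({"a": 1, "b": 2}))
# -> {'a': 1, 'b': 2, 'total': 3, 'doubled': 6}
```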
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/binding.ipynb
{ "cells": [ { "cell_type": "raw", "id": "fe63ffaf", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "keywords: [RunnableBinding, LCEL]\n", "---" ] }, { "cell_type": "markdown", "id": "711752cb-4f15-42a3-9838-a0c67f397771", "metadata": {}, "source": [ "# How to add default invocation args to a Runnable\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Tool calling](/docs/how_to/tool_calling)\n", "\n", ":::\n", "\n", "Sometimes we want to invoke a [`Runnable`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) within a [RunnableSequence](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSequence.html) with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use the [`Runnable.bind()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.bind) method to set these arguments ahead of time.\n", "\n", "## Binding stop sequences\n", "\n", "Suppose we have a simple prompt + model chain:" ] }, { "cell_type": "code", "execution_count": null, "id": "c5dad8b5", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "f3fdf86d-155f-4587-b7cd-52d363970c1d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "EQUATION: x^3 + 7 = 12\n", "\n", "SOLUTION: \n", "Subtract 7 from both sides:\n", "x^3 = 5\n", "\n", "Take the cube root of both sides:\n", "x = ∛5\n" ] } ], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Write out the following equation using algebraic symbols then solve it. Use the format\\n\\nEQUATION:...\\nSOLUTION:...\\n\\n\",\n", " ),\n", " (\"human\", \"{equation_statement}\"),\n", " ]\n", ")\n", "\n", "model = ChatOpenAI(temperature=0)\n", "\n", "runnable = (\n", " {\"equation_statement\": RunnablePassthrough()} | prompt | model | StrOutputParser()\n", ")\n", "\n", "print(runnable.invoke(\"x raised to the third plus seven equals 12\"))" ] }, { "cell_type": "markdown", "id": "929c9aba-a4a0-462c-adac-2cfc2156e117", "metadata": {}, "source": [ "and want to call the model with certain `stop` words so that we shorten the output as is useful in certain types of prompting techniques. 
While we can pass some arguments into the constructor, other runtime args use the `.bind()` method as follows:" ] }, { "cell_type": "code", "execution_count": 3, "id": "32e0484a-78c5-4570-a00b-20d597245a96", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "EQUATION: x^3 + 7 = 12\n", "\n", "\n" ] } ], "source": [ "runnable = (\n", " {\"equation_statement\": RunnablePassthrough()}\n", " | prompt\n", " | model.bind(stop=\"SOLUTION\")\n", " | StrOutputParser()\n", ")\n", "\n", "print(runnable.invoke(\"x raised to the third plus seven equals 12\"))" ] }, { "cell_type": "markdown", "id": "f07d7528-9269-4d6f-b12e-3669592a9e03", "metadata": {}, "source": [ "What you can bind to a Runnable will depend on the extra parameters you can pass when invoking it.\n", "\n", "## Attaching OpenAI tools\n", "\n", "Another common use-case is tool calling. While you should generally use the [`.bind_tools()`](/docs/how_to/tool_calling) method for tool-calling models, you can also bind provider-specific args directly if you want lower level control:" ] }, { "cell_type": "code", "execution_count": 4, "id": "2cdeeb4c-0c1f-43da-bd58-4f591d9e0671", "metadata": {}, "outputs": [], "source": [ "tools = [\n", " {\n", " \"type\": \"function\",\n", " \"function\": {\n", " \"name\": \"get_current_weather\",\n", " \"description\": \"Get the current weather in a given location\",\n", " \"parameters\": {\n", " \"type\": \"object\",\n", " \"properties\": {\n", " \"location\": {\n", " \"type\": \"string\",\n", " \"description\": \"The city and state, e.g. San Francisco, CA\",\n", " },\n", " \"unit\": {\"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"]},\n", " },\n", " \"required\": [\"location\"],\n", " },\n", " },\n", " }\n", "]" ] }, { "cell_type": "code", "execution_count": 5, "id": "2b65beab-48bb-46ff-a5a4-ef8ac95a513c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_z0OU2CytqENVrRTI6T8DkI3u', 'function': {'arguments': '{\"location\": \"San Francisco, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_ft96IJBh0cMKkQWrZjNg4bsw', 'function': {'arguments': '{\"location\": \"New York, NY\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}, {'id': 'call_tfbtGgCLmuBuWgZLvpPwvUMH', 'function': {'arguments': '{\"location\": \"Los Angeles, CA\", \"unit\": \"celsius\"}', 'name': 'get_current_weather'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 84, 'prompt_tokens': 85, 'total_tokens': 169}, 'model_name': 'gpt-3.5-turbo-1106', 'system_fingerprint': 'fp_77a673219d', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d57ad5fa-b52a-4822-bc3e-74f838697e18-0', tool_calls=[{'name': 'get_current_weather', 'args': {'location': 'San Francisco, CA', 'unit': 'celsius'}, 'id': 'call_z0OU2CytqENVrRTI6T8DkI3u'}, {'name': 'get_current_weather', 'args': {'location': 'New York, NY', 'unit': 'celsius'}, 'id': 'call_ft96IJBh0cMKkQWrZjNg4bsw'}, {'name': 'get_current_weather', 'args': {'location': 'Los Angeles, CA', 'unit': 'celsius'}, 'id': 'call_tfbtGgCLmuBuWgZLvpPwvUMH'}])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(tools=tools)\n", "model.invoke(\"What's the weather in SF, NYC and LA?\")" ] }, { "cell_type": "markdown", "id": "095001f7", "metadata": {}, "source": [ "## Next steps\n", "\n", "You now know how to 
bind runtime arguments to a Runnable.\n", "\n", "To learn more, see the other how-to guides on runnables in this section, including:\n", "\n", "- [Using configurable fields and alternatives](/docs/how_to/configure) to change parameters of a step in a chain, or even swap out entire steps, at runtime" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
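As a rough rule of thumb, `.bind()` simply pre-fills keyword arguments that you could otherwise pass at invocation time. A short sketch of that equivalence (assuming an OpenAI API key is configured; the prompt text is just an example):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)

# Binding the kwarg ahead of time...
bound_model = model.bind(stop=["SOLUTION"])
bound_model.invoke("Write out and solve: x raised to the third plus seven equals 12")

# ...behaves like passing the same kwarg when invoking the model directly.
model.invoke(
    "Write out and solve: x raised to the third plus seven equals 12",
    stop=["SOLUTION"],
)
```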
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/caching_embeddings.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "bf4061ce", "metadata": {}, "source": [ "# Caching\n", "\n", "Embeddings can be stored or temporarily cached to avoid needing to recompute them.\n", "\n", "Caching embeddings can be done using a `CacheBackedEmbeddings`. The cache backed embedder is a wrapper around an embedder that caches\n", "embeddings in a key-value store. The text is hashed and the hash is used as the key in the cache.\n", "\n", "The main supported way to initialize a `CacheBackedEmbeddings` is `from_bytes_store`. It takes the following parameters:\n", "\n", "- underlying_embedder: The embedder to use for embedding.\n", "- document_embedding_cache: Any [`ByteStore`](/docs/integrations/stores/) for caching document embeddings.\n", "- batch_size: (optional, defaults to `None`) The number of documents to embed between store updates.\n", "- namespace: (optional, defaults to `\"\"`) The namespace to use for document cache. This namespace is used to avoid collisions with other caches. For example, set it to the name of the embedding model used.\n", "- query_embedding_cache: (optional, defaults to `None` or not caching) A [`ByteStore`](/docs/integrations/stores/) for caching query embeddings, or `True` to use the same store as `document_embedding_cache`.\n", "\n", "**Attention**:\n", "\n", "- Be sure to set the `namespace` parameter to avoid collisions of the same text embedded using different embeddings models.\n", "- `CacheBackedEmbeddings` does not cache query embeddings by default. To enable query caching, one need to specify a `query_embedding_cache`." ] }, { "cell_type": "code", "execution_count": 1, "id": "a463c3c2-749b-40d1-a433-84f68a1cd1c7", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain.embeddings import CacheBackedEmbeddings" ] }, { "cell_type": "markdown", "id": "9ddf07dd-3e72-41de-99d4-78e9521e272f", "metadata": {}, "source": [ "## Using with a Vector Store\n", "\n", "First, let's see an example that uses the local file system for storing embeddings and uses FAISS vector store for retrieval." ] }, { "cell_type": "code", "execution_count": null, "id": "50183825", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain-openai faiss-cpu" ] }, { "cell_type": "code", "execution_count": 3, "id": "9e4314d8-88ef-4f52-81ae-0be771168bb6", "metadata": {}, "outputs": [], "source": [ "from langchain.storage import LocalFileStore\n", "from langchain_community.document_loaders import TextLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "underlying_embeddings = OpenAIEmbeddings()\n", "\n", "store = LocalFileStore(\"./cache/\")\n", "\n", "cached_embedder = CacheBackedEmbeddings.from_bytes_store(\n", " underlying_embeddings, store, namespace=underlying_embeddings.model\n", ")" ] }, { "cell_type": "markdown", "id": "f8cdf33c-321d-4d2c-b76b-d6f5f8b42a92", "metadata": {}, "source": [ "The cache is empty prior to embedding:" ] }, { "cell_type": "code", "execution_count": 4, "id": "f9ad627f-ced2-4277-b336-2434f22f2c8a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(store.yield_keys())" ] }, { "cell_type": "markdown", "id": "a4effe04-b40f-42f8-a449-72fe6991cf20", "metadata": {}, "source": [ "Load the document, split it into chunks, embed each chunk and load it into the vector store." 
] }, { "cell_type": "code", "execution_count": 5, "id": "cf958ac2-e60e-4668-b32c-8bb2d78b3c61", "metadata": {}, "outputs": [], "source": [ "raw_documents = TextLoader(\"state_of_the_union.txt\").load()\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "documents = text_splitter.split_documents(raw_documents)" ] }, { "cell_type": "markdown", "id": "f526444b-93f8-423f-b6d1-dab539450921", "metadata": {}, "source": [ "Create the vector store:" ] }, { "cell_type": "code", "execution_count": 6, "id": "3a1d7bb8-3b72-4bb5-9013-cf7729caca61", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 218 ms, sys: 29.7 ms, total: 248 ms\n", "Wall time: 1.02 s\n" ] } ], "source": [ "%%time\n", "db = FAISS.from_documents(documents, cached_embedder)" ] }, { "cell_type": "markdown", "id": "64fc53f5-d559-467f-bf62-5daef32ffbc0", "metadata": {}, "source": [ "If we try to create the vector store again, it'll be much faster since it does not need to re-compute any embeddings." ] }, { "cell_type": "code", "execution_count": 7, "id": "714cb2e2-77ba-41a8-bb83-84e75342af2d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 15.7 ms, sys: 2.22 ms, total: 18 ms\n", "Wall time: 17.2 ms\n" ] } ], "source": [ "%%time\n", "db2 = FAISS.from_documents(documents, cached_embedder)" ] }, { "cell_type": "markdown", "id": "1acc76b9-9c70-4160-b593-5f932c75e2b6", "metadata": {}, "source": [ "And here are some of the embeddings that got created:" ] }, { "cell_type": "code", "execution_count": 8, "id": "f2ca32dd-3712-4093-942b-4122f3dc8a8e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['text-embedding-ada-00217a6727d-8916-54eb-b196-ec9c9d6ca472',\n", " 'text-embedding-ada-0025fc0d904-bd80-52da-95c9-441015bfb438',\n", " 'text-embedding-ada-002e4ad20ef-dfaa-5916-9459-f90c6d8e8159',\n", " 'text-embedding-ada-002ed199159-c1cd-5597-9757-f80498e8f17b',\n", " 'text-embedding-ada-0021297d37a-2bc1-5e19-bf13-6c950f075062']" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(store.yield_keys())[:5]" ] }, { "cell_type": "markdown", "id": "c1a7fafd", "metadata": {}, "source": [ "# Swapping the `ByteStore`\n", "\n", "In order to use a different `ByteStore`, just use it when creating your `CacheBackedEmbeddings`. Below, we create an equivalent cached embeddings object, except using the non-persistent `InMemoryByteStore` instead:" ] }, { "cell_type": "code", "execution_count": 9, "id": "336a0538", "metadata": {}, "outputs": [], "source": [ "from langchain.embeddings import CacheBackedEmbeddings\n", "from langchain.storage import InMemoryByteStore\n", "\n", "store = InMemoryByteStore()\n", "\n", "cached_embedder = CacheBackedEmbeddings.from_bytes_store(\n", " underlying_embeddings, store, namespace=underlying_embeddings.model\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/callbacks_async.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to use callbacks in async environments\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", ":::\n", "\n", "If you are planning to use the async APIs, it is recommended to use and extend [`AsyncCallbackHandler`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) to avoid blocking the event.\n", "\n", "\n", ":::{.callout-warning}\n", "If you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe.\n", ":::\n", "\n", ":::{.callout-danger}\n", "\n", "If you're on `python<=3.10`, you need to remember to propagate `config` or `callbacks` when invoking other `runnable` from within a `RunnableLambda`, `RunnableGenerator` or `@tool`. If you do not do this,\n", "the callbacks will not be propagated to the child runnables being invoked.\n", ":::" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "zzzz....\n", "Hi! I just woke up. 
Your llm is starting\n", "Sync handler being called in a `thread_pool_executor`: token: Here\n", "Sync handler being called in a `thread_pool_executor`: token: 's\n", "Sync handler being called in a `thread_pool_executor`: token: a\n", "Sync handler being called in a `thread_pool_executor`: token: little\n", "Sync handler being called in a `thread_pool_executor`: token: joke\n", "Sync handler being called in a `thread_pool_executor`: token: for\n", "Sync handler being called in a `thread_pool_executor`: token: you\n", "Sync handler being called in a `thread_pool_executor`: token: :\n", "Sync handler being called in a `thread_pool_executor`: token: \n", "\n", "Why\n", "Sync handler being called in a `thread_pool_executor`: token: can\n", "Sync handler being called in a `thread_pool_executor`: token: 't\n", "Sync handler being called in a `thread_pool_executor`: token: a\n", "Sync handler being called in a `thread_pool_executor`: token: bicycle\n", "Sync handler being called in a `thread_pool_executor`: token: stan\n", "Sync handler being called in a `thread_pool_executor`: token: d up\n", "Sync handler being called in a `thread_pool_executor`: token: by\n", "Sync handler being called in a `thread_pool_executor`: token: itself\n", "Sync handler being called in a `thread_pool_executor`: token: ?\n", "Sync handler being called in a `thread_pool_executor`: token: Because\n", "Sync handler being called in a `thread_pool_executor`: token: it\n", "Sync handler being called in a `thread_pool_executor`: token: 's\n", "Sync handler being called in a `thread_pool_executor`: token: two\n", "Sync handler being called in a `thread_pool_executor`: token: -\n", "Sync handler being called in a `thread_pool_executor`: token: tire\n", "zzzz....\n", "Hi! I just woke up. Your llm is ending\n" ] }, { "data": { "text/plain": [ "LLMResult(generations=[[ChatGeneration(text=\"Here's a little joke for you:\\n\\nWhy can't a bicycle stand up by itself? Because it's two-tire\", message=AIMessage(content=\"Here's a little joke for you:\\n\\nWhy can't a bicycle stand up by itself? Because it's two-tire\", id='run-8afc89e8-02c0-4522-8480-d96977240bd4-0'))]], llm_output={}, run=[RunInfo(run_id=UUID('8afc89e8-02c0-4522-8480-d96977240bd4'))])" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import asyncio\n", "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import AsyncCallbackHandler, BaseCallbackHandler\n", "from langchain_core.messages import HumanMessage\n", "from langchain_core.outputs import LLMResult\n", "\n", "\n", "class MyCustomSyncHandler(BaseCallbackHandler):\n", " def on_llm_new_token(self, token: str, **kwargs) -> None:\n", " print(f\"Sync handler being called in a `thread_pool_executor`: token: {token}\")\n", "\n", "\n", "class MyCustomAsyncHandler(AsyncCallbackHandler):\n", " \"\"\"Async callback handler that can be used to handle callbacks from langchain.\"\"\"\n", "\n", " async def on_llm_start(\n", " self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any\n", " ) -> None:\n", " \"\"\"Run when chain starts running.\"\"\"\n", " print(\"zzzz....\")\n", " await asyncio.sleep(0.3)\n", " class_name = serialized[\"name\"]\n", " print(\"Hi! I just woke up. Your llm is starting\")\n", "\n", " async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:\n", " \"\"\"Run when chain ends running.\"\"\"\n", " print(\"zzzz....\")\n", " await asyncio.sleep(0.3)\n", " print(\"Hi! 
I just woke up. Your llm is ending\")\n", "\n", "\n", "# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n", "# Additionally, we pass in a list with our custom handler\n", "chat = ChatAnthropic(\n", " model=\"claude-3-sonnet-20240229\",\n", " max_tokens=25,\n", " streaming=True,\n", " callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],\n", ")\n", "\n", "await chat.agenerate([[HumanMessage(content=\"Tell me a joke\")]])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to create your own custom callback handlers.\n", "\n", "Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/docs/how_to/callbacks_attach)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 4 }
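To illustrate the `python<=3.10` caveat above, here is a minimal sketch (the function and chain names are made up for illustration) of explicitly forwarding `config` so that callbacks reach a child runnable invoked from inside a `RunnableLambda`:

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda

# A trivial child runnable that we want the callbacks to reach.
child_chain = RunnableLambda(lambda text: text.upper())


async def call_child(text: str, config: RunnableConfig) -> str:
    # Forward `config` explicitly so handlers propagate to the child run.
    return await child_chain.ainvoke(text, config=config)


parent = RunnableLambda(call_child)

# Usage (inside an async context), e.g.:
#     await parent.ainvoke("hello", config={"callbacks": [MyCustomAsyncHandler()]})
```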
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/callbacks_attach.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to attach callbacks to a runnable\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "- [Chaining runnables](/docs/how_to/sequence)\n", "- [Attach runtime arguments to a Runnable](/docs/how_to/binding)\n", "\n", ":::\n", "\n", "If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.with_config()`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method. This saves you the need to pass callbacks in each time you invoke the chain.\n", "\n", ":::{.callout-important}\n", "\n", "`with_config()` binds a configuration which will be interpreted as **runtime** configuration. So these callbacks will propagate to all child components.\n", ":::\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Chain RunnableSequence started\n", "Chain ChatPromptTemplate started\n", "Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]\n", "Chat model started\n", "Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'))]] llm_output={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n", "Chain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0'\n" ] }, { "data": { "text/plain": [ "AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01NTYMsH9YxkoWsiPYs4Lemn', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-d6bcfd72-9c94-466d-bac0-f39e456ad6e3-0')" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.messages import BaseMessage\n", "from langchain_core.outputs import LLMResult\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class LoggingHandler(BaseCallbackHandler):\n", " def on_chat_model_start(\n", " self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs\n", " ) -> None:\n", " print(\"Chat model started\")\n", "\n", " def on_llm_end(self, 
response: LLMResult, **kwargs) -> None:\n", " print(f\"Chat model ended, response: {response}\")\n", "\n", " def on_chain_start(\n", " self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs\n", " ) -> None:\n", " print(f\"Chain {serialized.get('name')} started\")\n", "\n", " def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:\n", " print(f\"Chain ended, outputs: {outputs}\")\n", "\n", "\n", "callbacks = [LoggingHandler()]\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n", "prompt = ChatPromptTemplate.from_template(\"What is 1 + {number}?\")\n", "\n", "chain = prompt | llm\n", "\n", "chain_with_callbacks = chain.with_config(callbacks=callbacks)\n", "\n", "chain_with_callbacks.invoke({\"number\": \"2\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The bound callbacks will run for all nested module runs.\n", "\n", "## Next steps\n", "\n", "You've now learned how to attach callbacks to a chain.\n", "\n", "Next, check out the other how-to guides in this section, such as how to [pass callbacks in at runtime](/docs/how_to/callbacks_runtime)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 4 }
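Because `.with_config()` returns a new runnable with the configuration baked in, the handlers fire on every subsequent call without being re-passed. A small usage sketch, reusing the `chain` and `LoggingHandler` defined in the notebook above (the `"math"` tag is just an example of other config you can attach at the same time):

```python
configured_chain = chain.with_config(callbacks=[LoggingHandler()], tags=["math"])

configured_chain.invoke({"number": "2"})
configured_chain.invoke({"number": "7"})  # same handlers fire again, nothing re-passed
```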
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/callbacks_constructor.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to propagate callbacks constructor\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "\n", ":::\n", "\n", "Most LangChain modules allow you to pass `callbacks` directly into the constructor (i.e., initializer). In this case, the callbacks will only be called for that instance (and any nested runs).\n", "\n", ":::{.callout-warning}\n", "Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. This can lead to confusing behavior,\n", "and it's generally better to pass callbacks as a run time argument.\n", ":::\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Chat model started\n", "Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0'))]] llm_output={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n" ] }, { "data": { "text/plain": [ "AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01CdKsRmeS9WRb8BWnHDEHm7', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-2d7fdf2a-7405-4e17-97c0-67e6b2a65305-0')" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.messages import BaseMessage\n", "from langchain_core.outputs import LLMResult\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class LoggingHandler(BaseCallbackHandler):\n", " def on_chat_model_start(\n", " self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs\n", " ) -> None:\n", " print(\"Chat model started\")\n", "\n", " def on_llm_end(self, response: LLMResult, **kwargs) -> None:\n", " print(f\"Chat model ended, response: {response}\")\n", "\n", " def on_chain_start(\n", " self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs\n", " ) -> None:\n", " print(f\"Chain {serialized.get('name')} started\")\n", "\n", " def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:\n", " print(f\"Chain ended, outputs: {outputs}\")\n", "\n", "\n", "callbacks = [LoggingHandler()]\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", callbacks=callbacks)\n", "prompt = ChatPromptTemplate.from_template(\"What is 1 + {number}?\")\n", "\n", "chain = prompt | llm\n", "\n", "chain.invoke({\"number\": 
\"2\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that we only see events from the chat model run - no chain events from the prompt or broader chain.\n", "\n", "## Next steps\n", "\n", "You've now learned how to pass callbacks into a constructor.\n", "\n", "Next, check out the other how-to guides in this section, such as how to [pass callbacks at runtime](/docs/how_to/callbacks_runtime)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 4 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/callbacks_runtime.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to pass callbacks in at runtime\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "- [Custom callback handlers](/docs/how_to/custom_callbacks)\n", "\n", ":::\n", "\n", "In many cases, it is advantageous to pass in handlers instead when running the object. When we pass through [`CallbackHandlers`](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) using the `callbacks` keyword arg when executing an run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent's execution, in this case, the Tools and LLM.\n", "\n", "This prevents us from having to manually attach the handlers to each individual nested object. Here's an example:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Chain RunnableSequence started\n", "Chain ChatPromptTemplate started\n", "Chain ended, outputs: messages=[HumanMessage(content='What is 1 + 2?')]\n", "Chat model started\n", "Chat model ended, response: generations=[[ChatGeneration(text='1 + 2 = 3', message=AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'))]] llm_output={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} run=None\n", "Chain ended, outputs: content='1 + 2 = 3' response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}} id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0'\n" ] }, { "data": { "text/plain": [ "AIMessage(content='1 + 2 = 3', response_metadata={'id': 'msg_01D8Tt5FdtBk5gLTfBPm2tac', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 13}}, id='run-bb0dddd8-85f3-4e6b-8553-eaa79f859ef8-0')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Dict, List\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.messages import BaseMessage\n", "from langchain_core.outputs import LLMResult\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class LoggingHandler(BaseCallbackHandler):\n", " def on_chat_model_start(\n", " self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs\n", " ) -> None:\n", " print(\"Chat model started\")\n", "\n", " def on_llm_end(self, 
response: LLMResult, **kwargs) -> None:\n", " print(f\"Chat model ended, response: {response}\")\n", "\n", " def on_chain_start(\n", " self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs\n", " ) -> None:\n", " print(f\"Chain {serialized.get('name')} started\")\n", "\n", " def on_chain_end(self, outputs: Dict[str, Any], **kwargs) -> None:\n", " print(f\"Chain ended, outputs: {outputs}\")\n", "\n", "\n", "callbacks = [LoggingHandler()]\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n", "prompt = ChatPromptTemplate.from_template(\"What is 1 + {number}?\")\n", "\n", "chain = prompt | llm\n", "\n", "chain.invoke({\"number\": \"2\"}, config={\"callbacks\": callbacks})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If there are already existing callbacks associated with a module, these will run in addition to any passed in at runtime.\n", "\n", "## Next steps\n", "\n", "You've now learned how to pass callbacks at runtime.\n", "\n", "Next, check out the other how-to guides in this section, such as how to [pass callbacks into a module constructor](/docs/how_to/custom_callbacks)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 2 }
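The same runtime config works with the other execution methods as well. For example, a short sketch (reusing the `chain` and `callbacks` from the notebook above) of batching several inputs while sharing one set of handlers:

```python
chain.batch(
    [{"number": "1"}, {"number": "2"}],
    config={"callbacks": callbacks},
)
```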
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/character_text_splitter.ipynb
{ "cells": [ { "cell_type": "raw", "id": "f781411d", "metadata": { "vscode": { "languageId": "raw" } }, "source": [ "---\n", "keywords: [charactertextsplitter]\n", "---" ] }, { "cell_type": "markdown", "id": "c3ee8d00", "metadata": {}, "source": [ "# How to split by character\n", "\n", "This is the simplest method. This splits based on a given character sequence, which defaults to `\"\\n\\n\"`. Chunk length is measured by number of characters.\n", "\n", "1. How the text is split: by single character separator.\n", "2. How the chunk size is measured: by number of characters.\n", "\n", "To obtain the string content directly, use `.split_text`.\n", "\n", "To create LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects (e.g., for use in downstream tasks), use `.create_documents`." ] }, { "cell_type": "code", "execution_count": null, "id": "bf8698ce-44b2-4944-b9a9-254344b537af", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-text-splitters" ] }, { "cell_type": "code", "execution_count": 1, "id": "313fb032", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'\n" ] } ], "source": [ "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "# Load an example document\n", "with open(\"state_of_the_union.txt\") as f:\n", " state_of_the_union = f.read()\n", "\n", "text_splitter = CharacterTextSplitter(\n", " separator=\"\\n\\n\",\n", " chunk_size=1000,\n", " chunk_overlap=200,\n", " length_function=len,\n", " is_separator_regex=False,\n", ")\n", "texts = text_splitter.create_documents([state_of_the_union])\n", "print(texts[0])" ] }, { "cell_type": "markdown", "id": "dadcb9d6", "metadata": {}, "source": [ "Use `.create_documents` to propagate metadata associated with each document to the output chunks:" ] }, { "cell_type": "code", "execution_count": 2, "id": "1affda60", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. 
\\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.' metadata={'document': 1}\n" ] } ], "source": [ "metadatas = [{\"document\": 1}, {\"document\": 2}]\n", "documents = text_splitter.create_documents(\n", " [state_of_the_union, state_of_the_union], metadatas=metadatas\n", ")\n", "print(documents[0])" ] }, { "cell_type": "markdown", "id": "ee080e12-6f44-4311-b1ef-302520a41d66", "metadata": {}, "source": [ "Use `.split_text` to obtain the string content directly:" ] }, { "cell_type": "code", "execution_count": 7, "id": "2a830a9f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \\n\\nLast year COVID-19 kept us apart. This year we are finally together again. \\n\\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \\n\\nWith a duty to one another to the American people to the Constitution. \\n\\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \\n\\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \\n\\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \\n\\nHe met the Ukrainian people. \\n\\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world.'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "text_splitter.split_text(state_of_the_union)[0]" ] }, { "cell_type": "code", "execution_count": null, "id": "a9a3b9cd", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
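For a quick experiment that does not depend on `state_of_the_union.txt`, here is a self-contained sketch splitting an inline string (the text and chunk size are illustrative only):

```python
from langchain_text_splitters import CharacterTextSplitter

text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=20,
    chunk_overlap=0,
)

print(text_splitter.split_text(text))
# roughly: ['First paragraph.', 'Second paragraph.', 'Third paragraph.']
```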
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chat_model_caching.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "dcf87b32", "metadata": {}, "source": [ "# How to cache chat model responses\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "- [LLMs](/docs/concepts/#llms)\n", "\n", ":::\n", "\n", "LangChain provides an optional caching layer for chat models. This is useful for two main reasons:\n", "\n", "- It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. This is especially useful during app development.\n", "- It can speed up your application by reducing the number of API calls you make to the LLM provider.\n", "\n", "This guide will walk you through how to enable this in your apps." ] }, { "cell_type": "markdown", "id": "289b31de", "metadata": {}, "source": [ "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"llm\" />\n", "```" ] }, { "cell_type": "code", "execution_count": 1, "id": "c6641f37", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass()\n", "\n", "llm = ChatOpenAI()" ] }, { "cell_type": "code", "execution_count": 2, "id": "5472a032", "metadata": {}, "outputs": [], "source": [ "# <!-- ruff: noqa: F821 -->\n", "from langchain.globals import set_llm_cache" ] }, { "cell_type": "markdown", "id": "357b89a8", "metadata": {}, "source": [ "## In Memory Cache\n", "\n", "This is an ephemeral cache that stores model calls in memory. It will be wiped when your environment restarts, and is not shared across processes." 
] }, { "cell_type": "code", "execution_count": 3, "id": "113e719a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 645 ms, sys: 214 ms, total: 859 ms\n", "Wall time: 829 ms\n" ] }, { "data": { "text/plain": [ "AIMessage(content=\"Why don't scientists trust atoms?\\n\\nBecause they make up everything!\", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "from langchain.cache import InMemoryCache\n", "\n", "set_llm_cache(InMemoryCache())\n", "\n", "# The first time, it is not yet in cache, so it should take longer\n", "llm.invoke(\"Tell me a joke\")" ] }, { "cell_type": "code", "execution_count": 4, "id": "a2121434", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 822 µs, sys: 288 µs, total: 1.11 ms\n", "Wall time: 1.06 ms\n" ] }, { "data": { "text/plain": [ "AIMessage(content=\"Why don't scientists trust atoms?\\n\\nBecause they make up everything!\", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "# The second time it is, so it goes faster\n", "llm.invoke(\"Tell me a joke\")" ] }, { "cell_type": "markdown", "id": "b88ff8af", "metadata": {}, "source": [ "## SQLite Cache\n", "\n", "This cache implementation uses a `SQLite` database to store responses, and will last across process restarts." ] }, { "cell_type": "code", "execution_count": 5, "id": "99290ab4", "metadata": {}, "outputs": [], "source": [ "!rm .langchain.db" ] }, { "cell_type": "code", "execution_count": 6, "id": "fe826c5c", "metadata": {}, "outputs": [], "source": [ "# We can do the same thing with a SQLite cache\n", "from langchain_community.cache import SQLiteCache\n", "\n", "set_llm_cache(SQLiteCache(database_path=\".langchain.db\"))" ] }, { "cell_type": "code", "execution_count": 7, "id": "eb558734", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 9.91 ms, sys: 7.68 ms, total: 17.6 ms\n", "Wall time: 657 ms\n" ] }, { "data": { "text/plain": [ "AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 11, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "# The first time, it is not yet in cache, so it should take longer\n", "llm.invoke(\"Tell me a joke\")" ] }, { "cell_type": "code", "execution_count": 8, "id": "497c7000", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 52.2 ms, sys: 60.5 ms, total: 113 ms\n", "Wall time: 127 ms\n" ] }, { "data": { "text/plain": [ "AIMessage(content='Why did the scarecrow win an award? 
Because he was outstanding in his field!', id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "# The second time it is, so it goes faster\n", "llm.invoke(\"Tell me a joke\")" ] }, { "cell_type": "markdown", "id": "2950a913", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to cache model responses to save time and money.\n", "\n", "Next, check out the other how-to guides chat models in this section, like [how to get a model to return structured output](/docs/how_to/structured_output) or [how to create your own custom chat model](/docs/how_to/custom_chat_model)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
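One way to put the two caches side by side: the in-memory cache disappears with the process, while the SQLite cache persists on disk. A small sketch of switching between them (the `PERSIST_LLM_CACHE` environment variable is purely illustrative, not a LangChain setting):

```python
import os

from langchain.cache import InMemoryCache
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

if os.environ.get("PERSIST_LLM_CACHE"):
    # Survives process restarts; cached responses live in .langchain.db on disk.
    set_llm_cache(SQLiteCache(database_path=".langchain.db"))
else:
    # Ephemeral; wiped when the interpreter exits.
    set_llm_cache(InMemoryCache())
```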
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chat_models_universal_init.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "cfdf4f09-8125-4ed1-8063-6feed57da8a3", "metadata": {}, "source": [ "# How to init any model in one line\n", "\n", "Many LLM applications let end users specify what model provider and model they want the application to be powered by. This requires writing some logic to initialize different ChatModels based on some user configuration. The `init_chat_model()` helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names.\n", "\n", ":::tip Supported models\n", "\n", "See the [init_chat_model()](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) API reference for a full list of supported integrations.\n", "\n", "Make sure you have the integration packages installed for any model providers you want to support. E.g. you should have `langchain-openai` installed to init an OpenAI model.\n", "\n", ":::" ] }, { "cell_type": "code", "execution_count": null, "id": "165b0de6-9ae3-4e3d-aa98-4fc8a97c4a06", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai langchain-anthropic langchain-google-vertexai" ] }, { "cell_type": "markdown", "id": "ea2c9f57-a796-45f8-b6f4-3efd3f361a9b", "metadata": {}, "source": [ "## Basic usage" ] }, { "cell_type": "code", "execution_count": 5, "id": "79e14913-803c-4382-9009-5c6af3d75d35", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GPT-4o: I'm an AI created by OpenAI, and I don't have a personal name. You can call me Assistant! How can I help you today?\n", "\n", "Claude Opus: My name is Claude. It's nice to meet you!\n", "\n", "Gemini 1.5: I am a large language model, trained by Google. I do not have a name. \n", "\n", "\n" ] } ], "source": [ "from langchain.chat_models import init_chat_model\n", "\n", "# Returns a langchain_openai.ChatOpenAI instance.\n", "gpt_4o = init_chat_model(\"gpt-4o\", model_provider=\"openai\", temperature=0)\n", "# Returns a langchain_anthropic.ChatAnthropic instance.\n", "claude_opus = init_chat_model(\n", " \"claude-3-opus-20240229\", model_provider=\"anthropic\", temperature=0\n", ")\n", "# Returns a langchain_google_vertexai.ChatVertexAI instance.\n", "gemini_15 = init_chat_model(\n", " \"gemini-1.5-pro\", model_provider=\"google_vertexai\", temperature=0\n", ")\n", "\n", "# Since all model integrations implement the ChatModel interface, you can use them in the same way.\n", "print(\"GPT-4o: \" + gpt_4o.invoke(\"what's your name\").content + \"\\n\")\n", "print(\"Claude Opus: \" + claude_opus.invoke(\"what's your name\").content + \"\\n\")\n", "print(\"Gemini 1.5: \" + gemini_15.invoke(\"what's your name\").content + \"\\n\")" ] }, { "cell_type": "markdown", "id": "fff9a4c8-b6ee-4a1a-8d3d-0ecaa312d4ed", "metadata": {}, "source": [ "## Simple config example" ] }, { "cell_type": "code", "execution_count": null, "id": "75c25d39-bf47-4b51-a6c6-64d9c572bfd6", "metadata": {}, "outputs": [], "source": [ "user_config = {\n", " \"model\": \"...user-specified...\",\n", " \"model_provider\": \"...user-specified...\",\n", " \"temperature\": 0,\n", " \"max_tokens\": 1000,\n", "}\n", "\n", "llm = init_chat_model(**user_config)\n", "llm.invoke(\"what's your name\")" ] }, { "cell_type": "markdown", "id": "f811f219-5e78-4b62-b495-915d52a22532", "metadata": {}, "source": [ "## Inferring model provider\n", "\n", "For common and distinct model names `init_chat_model()` will attempt to infer the model provider. 
See the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain.chat_models.base.init_chat_model.html) for a full list of inference behavior. E.g. any model that starts with `gpt-3...` or `gpt-4...` will be inferred as using model provider `openai`." ] }, { "cell_type": "code", "execution_count": 4, "id": "0378ccc6-95bc-4d50-be50-fccc193f0a71", "metadata": {}, "outputs": [], "source": [ "gpt_4o = init_chat_model(\"gpt-4o\", temperature=0)\n", "claude_opus = init_chat_model(\"claude-3-opus-20240229\", temperature=0)\n", "gemini_15 = init_chat_model(\"gemini-1.5-pro\", temperature=0)" ] }, { "cell_type": "code", "execution_count": null, "id": "da07b5c0-d2e6-42e4-bfcd-2efcfaae6221", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "poetry-venv-2", "language": "python", "name": "poetry-venv-2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
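Since everything returned by `init_chat_model()` shares the chat model interface, a small lookup table plus one helper is often all an application needs to let users switch backends. The sketch below is illustrative only: the `MODEL_CHOICES` mapping and `model_for()` helper are hypothetical, and it assumes the relevant provider packages and API keys are configured.

```python
# Sketch: map a user-facing choice onto init_chat_model() (names are hypothetical).
from langchain.chat_models import init_chat_model

MODEL_CHOICES = {
    "fast": {"model": "gpt-4o", "model_provider": "openai"},
    "careful": {"model": "claude-3-opus-20240229", "model_provider": "anthropic"},
}


def model_for(choice: str, temperature: float = 0):
    """Initialize the chat model that corresponds to a user-facing choice."""
    try:
        spec = MODEL_CHOICES[choice]
    except KeyError:
        raise ValueError(f"Unknown model choice: {choice!r}") from None
    return init_chat_model(temperature=temperature, **spec)


llm = model_for("fast")
print(llm.invoke("what's your name").content)
```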
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chat_streaming.ipynb
{ "cells": [ { "cell_type": "raw", "id": "e9437c8a-d8b7-4bf6-8ff4-54068a5a266c", "metadata": {}, "source": [ "---\n", "sidebar_position: 1.5\n", "---" ] }, { "cell_type": "markdown", "id": "d0df7646-b1e1-4014-a841-6dae9b3c50d9", "metadata": {}, "source": [ "# How to stream chat model responses\n", "\n", "\n", "All [chat models](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) implement the [Runnable interface](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable), which comes with a **default** implementations of standard runnable methods (i.e. `ainvoke`, `batch`, `abatch`, `stream`, `astream`, `astream_events`).\n", "\n", "The **default** streaming implementation provides an`Iterator` (or `AsyncIterator` for asynchronous streaming) that yields a single value: the final output from the underlying chat model provider.\n", "\n", ":::{.callout-tip}\n", "\n", "The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the the model can be swapped in for any other model as it supports the same standard interface.\n", "\n", ":::\n", "\n", "The ability to stream the output token-by-token depends on whether the provider has implemented proper streaming support.\n", "\n", "See which [integrations support token-by-token streaming here](/docs/integrations/chat/)." ] }, { "cell_type": "markdown", "id": "7a76660e-7691-48b7-a2b4-2ccdff7875c3", "metadata": {}, "source": [ "## Sync streaming\n", "\n", "Below we use a `|` to help visualize the delimiter between tokens." ] }, { "cell_type": "code", "execution_count": 1, "id": "975c4f32-21f6-4a71-9091-f87b56347c33", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Here| is| a| |1| |verse| song| about| gol|dfish| on| the| moon|:|\n", "\n", "Floating| up| in| the| star|ry| night|,|\n", "Fins| a|-|gl|im|mer| in| the| pale| moon|light|.|\n", "Gol|dfish| swimming|,| peaceful| an|d free|,|\n", "Se|ren|ely| |drif|ting| across| the| lunar| sea|.|" ] } ], "source": [ "from langchain_anthropic.chat_models import ChatAnthropic\n", "\n", "chat = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "for chunk in chat.stream(\"Write me a 1 verse song about goldfish on the moon\"):\n", " print(chunk.content, end=\"|\", flush=True)" ] }, { "cell_type": "markdown", "id": "5482d3a7-ee4f-40ba-b871-4d3f52603cd5", "metadata": { "tags": [] }, "source": [ "## Async Streaming" ] }, { "cell_type": "code", "execution_count": 2, "id": "422f480c-df79-42e8-9bee-d0ebed31c557", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Here| is| a| |1| |verse| song| about| gol|dfish| on| the| moon|:|\n", "\n", "Floating| up| above| the| Earth|,|\n", "Gol|dfish| swim| in| alien| m|irth|.|\n", "In| their| bowl| of| lunar| dust|,|\n", "Gl|it|tering| scales| reflect| the| trust|\n", "Of| swimming| free| in| this| new| worl|d,|\n", "Where| their| aqu|atic| dream|'s| unf|ur|le|d.|" ] } ], "source": [ "from langchain_anthropic.chat_models import ChatAnthropic\n", "\n", "chat = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "async for chunk in chat.astream(\"Write me a 1 verse song about goldfish on the moon\"):\n", " print(chunk.content, end=\"|\", flush=True)" ] }, { "cell_type": "markdown", "id": "c61e1309-3b6e-42fb-820a-2e4e3e6bc074", "metadata": {}, "source": [ "## Astream events\n", "\n", "Chat 
models also support the standard [astream events](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_events) method.\n", "\n", "This method is useful if you're streaming output from a larger LLM application that contains multiple steps (e.g., an LLM chain composed of a prompt, llm and parser)." ] }, { "cell_type": "code", "execution_count": 11, "id": "27bd1dfd-8ae2-49d6-b526-97180c81b5f4", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'event': 'on_chat_model_start', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}, 'data': {'input': 'Write me a 1 verse song about goldfish on the moon'}}\n", "{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content='Here', id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n", "{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content=\"'s\", id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n", "{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content=' a', id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n", "...Truncated\n" ] } ], "source": [ "from langchain_anthropic.chat_models import ChatAnthropic\n", "\n", "chat = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "idx = 0\n", "\n", "async for event in chat.astream_events(\n", " \"Write me a 1 verse song about goldfish on the moon\", version=\"v1\"\n", "):\n", " idx += 1\n", " if idx >= 5: # Truncate the output\n", " print(\"...Truncated\")\n", " break\n", " print(event)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }
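Streamed chunks are `AIMessageChunk` objects, and chunks can be concatenated with `+`, so you can print tokens as they arrive while also reassembling the complete message. Below is a short sketch of that pattern, assuming the same `ChatAnthropic` setup used above and an `ANTHROPIC_API_KEY` in the environment.

```python
# Sketch: stream tokens to the console while accumulating the full message.
from langchain_anthropic.chat_models import ChatAnthropic

chat = ChatAnthropic(model="claude-3-haiku-20240307")

full = None
for chunk in chat.stream("Write me a 1 verse song about goldfish on the moon"):
    print(chunk.content, end="|", flush=True)
    # AIMessageChunk supports "+", which concatenates content as chunks arrive.
    full = chunk if full is None else full + chunk

print()
print("Complete message:", full.content)
```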
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chat_token_usage_tracking.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "e5715368", "metadata": {}, "source": [ "# How to track token usage in ChatModels\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "\n", ":::\n", "\n", "Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n", "\n", "This guide requires `langchain-openai >= 0.1.8`." ] }, { "cell_type": "code", "execution_count": null, "id": "9c7d1338-dd1b-4d06-b33d-d5cffc49fd6a", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-openai" ] }, { "cell_type": "markdown", "id": "598ae1e2-a52d-4459-81fd-cdc68b06742a", "metadata": {}, "source": [ "## Using LangSmith\n", "\n", "You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n", "\n", "## Using AIMessage.usage_metadata\n", "\n", "A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n", "\n", "LangChain `AIMessage` objects include a [usage_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n", "\n", "Examples:\n", "\n", "**OpenAI**:" ] }, { "cell_type": "code", "execution_count": 1, "id": "b39bf807-4125-4db4-bbf7-28a46afff6b4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# # !pip install -qU langchain-openai\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n", "openai_response = llm.invoke(\"hello\")\n", "openai_response.usage_metadata" ] }, { "cell_type": "markdown", "id": "2299c44a-2fe6-4d52-a6a2-99ff6d231c73", "metadata": {}, "source": [ "**Anthropic**:" ] }, { "cell_type": "code", "execution_count": 2, "id": "9c82ff80-ec4e-4049-b019-5f0bbd7df82a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20}" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# !pip install -qU langchain-anthropic\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "anthropic_response = llm.invoke(\"hello\")\n", "anthropic_response.usage_metadata" ] }, { "cell_type": "markdown", "id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f", "metadata": {}, "source": [ "### Using AIMessage.response_metadata\n", "\n", "Metadata from the model response is also included in the AIMessage [response_metadata](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. 
Note that different providers adopt different conventions for representing token counts:" ] }, { "cell_type": "code", "execution_count": 3, "id": "f156f9da-21f2-4c81-a714-54cbf9ad393e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n", "\n", "Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n" ] } ], "source": [ "print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n", "print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')" ] }, { "cell_type": "markdown", "id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7", "metadata": {}, "source": [ "### Streaming\n", "\n", "Some providers support token count metadata in a streaming context.\n", "\n", "#### OpenAI\n", "\n", "For example, OpenAI will return a message [chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html) at the end of a stream with token usage information. This behavior is supported by `langchain-openai >= 0.1.8` and can be enabled by setting `stream_options={\"include_usage\": True}`.\n", "\n", "```{=mdx}\n", ":::note\n", "By default, the last message chunk in a stream will include a `\"finish_reason\"` in the message's `response_metadata` attribute. If we include token usage in streaming mode, an additional chunk containing usage metadata will be added to the end of the stream, such that `\"finish_reason\"` appears on the second to last message chunk.\n", ":::\n", "```" ] }, { "cell_type": "code", "execution_count": 4, "id": "07f0c872-6b6c-4fed-a129-9b5a858505be", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content='Hello' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content='!' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content=' How' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content=' can' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content=' I' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content=' assist' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content=' you' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content=' today' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content='?' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content='' response_metadata={'finish_reason': 'stop'} id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf'\n", "content='' id='run-b40e502e-d30e-4617-94ad-95b4dfee14bf' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n" ] } ], "source": [ "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")\n", "\n", "aggregate = None\n", "for chunk in llm.stream(\"hello\", stream_options={\"include_usage\": True}):\n", " print(chunk)\n", " aggregate = chunk if aggregate is None else aggregate + chunk" ] }, { "cell_type": "markdown", "id": "dd809ded-8b13-4d5f-be5e-277b79d51802", "metadata": {}, "source": [ "Note that the usage metadata will be included in the sum of the individual message chunks:" ] }, { "cell_type": "code", "execution_count": 5, "id": "3db7bc03-a7d4-4704-92ab-f8ba92ef59ae", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hello! 
How can I assist you today?\n", "{'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}\n" ] } ], "source": [ "print(aggregate.content)\n", "print(aggregate.usage_metadata)" ] }, { "cell_type": "markdown", "id": "7dba63e8-0ed7-4533-8f0f-78e19c38a25c", "metadata": {}, "source": [ "To disable streaming token counts for OpenAI, set `\"include_usage\"` to False in `stream_options`, or omit it from the parameters:" ] }, { "cell_type": "code", "execution_count": 6, "id": "67117f2b-ce68-4c1e-9556-2d3849f90e1b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "content='' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content='Hello' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content='!' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content=' How' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content=' can' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content=' I' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content=' assist' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content=' you' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content=' today' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content='?' id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n", "content='' response_metadata={'finish_reason': 'stop'} id='run-0085d64c-13d2-431b-a0fa-399be8cd3c52'\n" ] } ], "source": [ "aggregate = None\n", "for chunk in llm.stream(\"hello\"):\n", " print(chunk)" ] }, { "cell_type": "markdown", "id": "6a5d9617-be3a-419a-9276-de9c29fa50ae", "metadata": {}, "source": [ "You can also enable streaming token usage by setting `model_kwargs` when instantiating the chat model. This can be useful when incorporating chat models into LangChain [chains](/docs/concepts#langchain-expression-language-lcel): usage metadata can be monitored when [streaming intermediate steps](/docs/how_to/streaming#using-stream-events) or using tracing software such as [LangSmith](https://docs.smith.langchain.com/).\n", "\n", "See the below example, where we return output structured to a desired schema, but can still observe token usage streamed from intermediate steps." ] }, { "cell_type": "code", "execution_count": 8, "id": "57dec1fb-bd9c-4c98-8798-8fbbe67f6b2c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Token usage: {'input_tokens': 79, 'output_tokens': 23, 'total_tokens': 102}\n", "\n", "setup='Why was the math book sad?' 
punchline='Because it had too many problems.'\n" ] } ], "source": [ "from langchain_core.pydantic_v1 import BaseModel, Field\n", "\n", "\n", "class Joke(BaseModel):\n", " \"\"\"Joke to tell user.\"\"\"\n", "\n", " setup: str = Field(description=\"question to set up a joke\")\n", " punchline: str = Field(description=\"answer to resolve the joke\")\n", "\n", "\n", "llm = ChatOpenAI(\n", " model=\"gpt-3.5-turbo-0125\",\n", " model_kwargs={\"stream_options\": {\"include_usage\": True}},\n", ")\n", "# Under the hood, .with_structured_output binds tools to the\n", "# chat model and appends a parser.\n", "structured_llm = llm.with_structured_output(Joke)\n", "\n", "async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n", " if event[\"event\"] == \"on_chat_model_end\":\n", " print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n", " elif event[\"event\"] == \"on_chain_end\":\n", " print(event[\"data\"][\"output\"])\n", " else:\n", " pass" ] }, { "cell_type": "markdown", "id": "2bc8d313-4bef-463e-89a5-236d8bb6ab2f", "metadata": {}, "source": [ "Token usage is also visible in the corresponding [LangSmith trace](https://smith.langchain.com/public/fe6513d5-7212-4045-82e0-fefa28bc7656/r) in the payload from the chat model." ] }, { "cell_type": "markdown", "id": "d6845407-af25-4eed-bc3e-50925c6661e0", "metadata": {}, "source": [ "## Using callbacks\n", "\n", "There are also some API-specific callback context managers that allow you to track token usage across multiple calls. It is currently only implemented for the OpenAI API and Bedrock Anthropic API.\n", "\n", "### OpenAI\n", "\n", "Let's first look at an extremely simple example of tracking token usage for a single Chat model call." ] }, { "cell_type": "code", "execution_count": 9, "id": "31667d54", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tokens Used: 27\n", "\tPrompt Tokens: 11\n", "\tCompletion Tokens: 16\n", "Successful Requests: 1\n", "Total Cost (USD): $2.95e-05\n" ] } ], "source": [ "# !pip install -qU langchain-community wikipedia\n", "\n", "from langchain_community.callbacks.manager import get_openai_callback\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n", "\n", "with get_openai_callback() as cb:\n", " result = llm.invoke(\"Tell me a joke\")\n", " print(cb)" ] }, { "cell_type": "markdown", "id": "c0ab6d27", "metadata": {}, "source": [ "Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence." ] }, { "cell_type": "code", "execution_count": 10, "id": "e09420f4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "55\n" ] } ], "source": [ "with get_openai_callback() as cb:\n", " result = llm.invoke(\"Tell me a joke\")\n", " result2 = llm.invoke(\"Tell me a joke\")\n", " print(cb.total_tokens)" ] }, { "cell_type": "markdown", "id": "9ac51188-c8f4-4230-90fd-3cd78cdd955d", "metadata": {}, "source": [ "```{=mdx}\n", ":::note\n", "Cost information is currently not available in streaming mode. This is because model names are currently not propagated through chunks in streaming mode, and the model name is used to look up the correct pricing. 
Token counts however are available:\n", ":::\n", "```" ] }, { "cell_type": "code", "execution_count": 11, "id": "b241069a-265d-4497-af34-b0a5f95ae67f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "28\n" ] } ], "source": [ "with get_openai_callback() as cb:\n", " for chunk in llm.stream(\"Tell me a joke\", stream_options={\"include_usage\": True}):\n", " pass\n", " print(cb.total_tokens)" ] }, { "cell_type": "markdown", "id": "d8186e7b", "metadata": {}, "source": [ "If a chain or agent with multiple steps in it is used, it will track all those steps." ] }, { "cell_type": "code", "execution_count": 12, "id": "5d1125c6", "metadata": {}, "outputs": [], "source": [ "from langchain.agents import AgentExecutor, create_tool_calling_agent, load_tools\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", \"You're a helpful assistant\"),\n", " (\"human\", \"{input}\"),\n", " (\"placeholder\", \"{agent_scratchpad}\"),\n", " ]\n", ")\n", "tools = load_tools([\"wikipedia\"])\n", "agent = create_tool_calling_agent(llm, tools, prompt)\n", "agent_executor = AgentExecutor(\n", " agent=agent, tools=tools, verbose=True, stream_runnable=False\n", ")" ] }, { "cell_type": "markdown", "id": "9c1ae74d-8300-4041-9ff4-66093ee592b1", "metadata": {}, "source": [ "```{=mdx}\n", ":::note\n", "We have to set `stream_runnable=False` for cost information, as described above. By default the AgentExecutor will stream the underlying agent so that you can get the most granular results when streaming events via AgentExecutor.stream_events.\n", ":::\n", "```" ] }, { "cell_type": "code", "execution_count": 13, "id": "3950d88b-8bfb-4294-b75b-e6fd421e633c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3mPage: Hummingbird\n", "Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.\n", "Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.5–13 cm (3–5 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 18–24 grams (0.63–0.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n", "They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. 
They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.\n", "Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 1⁄15 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).\n", "Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.\n", "\n", "Page: Rufous hummingbird\n", "Summary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.\n", "\n", "\n", "\n", "Page: Anna's hummingbird\n", "Summary: Anna's hummingbird (Calypte anna) is a North American species of hummingbird. It was named after Anna Masséna, Duchess of Rivoli.\n", "It is native to western coastal regions of North America. In the early 20th century, Anna's hummingbirds bred only in northern Baja California and Southern California. The transplanting of exotic ornamental plants in residential areas throughout the Pacific coast and inland deserts provided expanded nectar and nesting sites, allowing the species to expand its breeding range. Year-round residence of Anna's hummingbirds in the Pacific Northwest is an example of ecological release dependent on acclimation to colder winter temperatures, introduced plants, and human provision of nectar feeders during winter.\n", "These birds feed on nectar from flowers using a long extendable tongue. They also consume small insects and other arthropods caught in flight or gleaned from vegetation.\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `wikipedia` with `{'query': 'fastest bird species'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3mPage: List of birds by flight speed\n", "Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n", "\n", "\n", "\n", "Page: Fastest animals\n", "Summary: This is a list of the fastest animals in the world, by types of animal.\n", "\n", "\n", "\n", "Page: Falcon\n", "Summary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. 
Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.\n", "Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.\n", "The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial \"tooth\" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.\n", "The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.\n", "Some small falcons with long, narrow wings are called \"hobbies\" and some which hover while hunting are called \"kestrels\".\n", "As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).\u001b[0m\u001b[32;1m\u001b[1;3mThe scientific name for a hummingbird is Trochilidae. 
The fastest bird species is the peregrine falcon (Falco peregrinus), which can exceed speeds of 320 km/h (200 mph) in its dives.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n", "Total Tokens: 1787\n", "Prompt Tokens: 1687\n", "Completion Tokens: 100\n", "Total Cost (USD): $0.0009935\n" ] } ], "source": [ "with get_openai_callback() as cb:\n", " response = agent_executor.invoke(\n", " {\n", " \"input\": \"What's a hummingbird's scientific name and what's the fastest bird species?\"\n", " }\n", " )\n", " print(f\"Total Tokens: {cb.total_tokens}\")\n", " print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n", " print(f\"Completion Tokens: {cb.completion_tokens}\")\n", " print(f\"Total Cost (USD): ${cb.total_cost}\")" ] }, { "cell_type": "markdown", "id": "ebc9122b-050b-4006-b763-264b0b26d9df", "metadata": {}, "source": [ "### Bedrock Anthropic\n", "\n", "The `get_bedrock_anthropic_callback` works very similarly:" ] }, { "cell_type": "code", "execution_count": 12, "id": "1837c807-136a-49d8-9c33-060e58dc16d2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Tokens Used: 96\n", "\tPrompt Tokens: 26\n", "\tCompletion Tokens: 70\n", "Successful Requests: 2\n", "Total Cost (USD): $0.001888\n" ] } ], "source": [ "# !pip install langchain-aws\n", "from langchain_aws import ChatBedrock\n", "from langchain_community.callbacks.manager import get_bedrock_anthropic_callback\n", "\n", "llm = ChatBedrock(model_id=\"anthropic.claude-v2\")\n", "\n", "with get_bedrock_anthropic_callback() as cb:\n", " result = llm.invoke(\"Tell me a joke\")\n", " result2 = llm.invoke(\"Tell me a joke\")\n", " print(cb)" ] }, { "cell_type": "markdown", "id": "33172f31", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now seen a few examples of how to track token usage for supported providers.\n", "\n", "Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/docs/how_to/structured_output) or [how to add caching to your chat models](/docs/how_to/chat_model_caching)." ] }, { "cell_type": "code", "execution_count": null, "id": "bb40375d", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
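For providers that don't have a dedicated callback, the standardized `usage_metadata` on each `AIMessage` is usually enough to keep your own running totals. Here is a minimal sketch of that approach; the prompts and model choice are just placeholders, and it assumes `OPENAI_API_KEY` is set.

```python
# Sketch: provider-agnostic token accounting built on AIMessage.usage_metadata.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

totals = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
for prompt in ["Tell me a joke", "Tell me another joke"]:
    response = llm.invoke(prompt)
    usage = response.usage_metadata or {}  # may be None if the provider omits it
    for key in totals:
        totals[key] += usage.get(key, 0)

print(totals)
```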
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chatbots_memory.ipynb
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 1\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add memory to chatbots\n", "\n", "A key feature of chatbots is their ability to use content of previous conversation turns as context. This state management can take several forms, including:\n", "\n", "- Simply stuffing previous messages into a chat model prompt.\n", "- The above, but trimming old messages to reduce the amount of distracting information the model has to deal with.\n", "- More complex modifications like synthesizing summaries for long running conversations.\n", "\n", "We'll go into more detail on a few techniques below!\n", "\n", "## Setup\n", "\n", "You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n", "You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" ] }, { "data": { "text/plain": [ "True" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%pip install --upgrade --quiet langchain langchain-openai\n", "\n", "# Set env var OPENAI_API_KEY or load from a .env file:\n", "import dotenv\n", "\n", "dotenv.load_dotenv()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also set up a chat model that we'll use for the below examples." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "chat = ChatOpenAI(model=\"gpt-3.5-turbo-0125\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Message passing\n", "\n", "The simplest form of memory is simply passing chat history messages into a chain. Here's an example:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "I said \"J'adore la programmation,\" which means \"I love programming\" in French.\n" ] } ], "source": [ "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n", " ),\n", " (\"placeholder\", \"{messages}\"),\n", " ]\n", ")\n", "\n", "chain = prompt | chat\n", "\n", "ai_msg = chain.invoke(\n", " {\n", " \"messages\": [\n", " (\n", " \"human\",\n", " \"Translate this sentence from English to French: I love programming.\",\n", " ),\n", " (\"ai\", \"J'adore la programmation.\"),\n", " (\"human\", \"What did you just say?\"),\n", " ],\n", " }\n", ")\n", "print(ai_msg.content)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that by passing the previous conversation into a chain, it can use it as context to answer questions. 
This is the basic concept underpinning chatbot memory - the rest of the guide will demonstrate convenient techniques for passing or reformatting messages.\n", "\n", "## Chat history\n", "\n", "It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.memory) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n", "\n", "Here's an example of the API:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content='Translate this sentence from English to French: I love programming.'),\n", " AIMessage(content=\"J'adore la programmation.\")]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.chat_message_histories import ChatMessageHistory\n", "\n", "demo_ephemeral_chat_history = ChatMessageHistory()\n", "\n", "demo_ephemeral_chat_history.add_user_message(\n", " \"Translate this sentence from English to French: I love programming.\"\n", ")\n", "\n", "demo_ephemeral_chat_history.add_ai_message(\"J'adore la programmation.\")\n", "\n", "demo_ephemeral_chat_history.messages" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use it directly to store conversation turns for our chain:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='You just asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 61, 'total_tokens': 79}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5cbb21c2-9c30-4031-8ea8-bfc497989535-0', usage_metadata={'input_tokens': 61, 'output_tokens': 18, 'total_tokens': 79})" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "demo_ephemeral_chat_history = ChatMessageHistory()\n", "\n", "input1 = \"Translate this sentence from English to French: I love programming.\"\n", "\n", "demo_ephemeral_chat_history.add_user_message(input1)\n", "\n", "response = chain.invoke(\n", " {\n", " \"messages\": demo_ephemeral_chat_history.messages,\n", " }\n", ")\n", "\n", "demo_ephemeral_chat_history.add_ai_message(response)\n", "\n", "input2 = \"What did I just ask you?\"\n", "\n", "demo_ephemeral_chat_history.add_user_message(input2)\n", "\n", "chain.invoke(\n", " {\n", " \"messages\": demo_ephemeral_chat_history.messages,\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Automatic history management\n", "\n", "The previous examples pass messages to the chain explicitly. This is a completely acceptable approach, but it does require external management of new messages. LangChain also includes a wrapper for LCEL chains, called `RunnableWithMessageHistory`, that can handle this process automatically.\n", "\n", "To show how it works, let's slightly modify the above prompt to take a final `input` variable that populates a `HumanMessage` template after the chat history. 
This means that we will expect a `chat_history` parameter that contains all messages BEFORE the current messages instead of all messages:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant. Answer all questions to the best of your ability.\",\n", " ),\n", " (\"placeholder\", \"{chat_history}\"),\n", " (\"human\", \"{input}\"),\n", " ]\n", ")\n", "\n", "chain = prompt | chat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " We'll pass the latest input to the conversation here and let the `RunnableWithMessageHistory` class wrap our chain and do the work of appending that `input` variable to the chat history.\n", " \n", " Next, let's declare our wrapped chain:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from langchain_core.runnables.history import RunnableWithMessageHistory\n", "\n", "demo_ephemeral_chat_history_for_chain = ChatMessageHistory()\n", "\n", "chain_with_message_history = RunnableWithMessageHistory(\n", " chain,\n", " lambda session_id: demo_ephemeral_chat_history_for_chain,\n", " input_messages_key=\"input\",\n", " history_messages_key=\"chat_history\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This class takes a few parameters in addition to the chain that we want to wrap:\n", "\n", "- A factory function that returns a message history for a given session id. This allows your chain to handle multiple users at once by loading different messages for different conversations.\n", "- An `input_messages_key` that specifies which part of the input should be tracked and stored in the chat history. In this example, we want to track the string passed in as `input`.\n", "- A `history_messages_key` that specifies what the previous messages should be injected into the prompt as. Our prompt has a `MessagesPlaceholder` named `chat_history`, so we specify this property to match.\n", "- (For chains with multiple outputs) an `output_messages_key` which specifies which output to store as history. This is the inverse of `input_messages_key`.\n", "\n", "We can invoke this new chain as normal, with an additional `configurable` field that specifies the particular `session_id` to pass to the factory function. This is unused for the demo, but in real-world chains, you'll want to return a chat history corresponding to the passed session:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Parent run dc4e2f79-4bcd-4a36-9506-55ace9040588 not found for run 34b5773e-3ced-46a6-8daf-4d464c15c940. 
Treating as a root run.\n" ] }, { "data": { "text/plain": [ "AIMessage(content='\"J\\'adore la programmation.\"', response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 39, 'total_tokens': 48}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-648b0822-b0bb-47a2-8e7d-7d34744be8f2-0', usage_metadata={'input_tokens': 39, 'output_tokens': 9, 'total_tokens': 48})" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain_with_message_history.invoke(\n", " {\"input\": \"Translate this sentence from English to French: I love programming.\"},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Parent run cc14b9d8-c59e-40db-a523-d6ab3fc2fa4f not found for run 5b75e25c-131e-46ee-9982-68569db04330. Treating as a root run.\n" ] }, { "data": { "text/plain": [ "AIMessage(content='You asked me to translate the sentence \"I love programming\" from English to French.', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 63, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5950435c-1dc2-43a6-836f-f989fd62c95e-0', usage_metadata={'input_tokens': 63, 'output_tokens': 17, 'total_tokens': 80})" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain_with_message_history.invoke(\n", " {\"input\": \"What did I just ask you?\"}, {\"configurable\": {\"session_id\": \"unused\"}}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Modifying chat history\n", "\n", "Modifying stored chat messages can help your chatbot handle a variety of situations. Here are some examples:\n", "\n", "### Trimming messages\n", "\n", "LLMs and chat models have limited context windows, and even if you're not directly hitting limits, you may want to limit the amount of distraction the model has to deal with. One solution is trim the historic messages before passing them to the model. Let's use an example history with some preloaded messages:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content=\"Hey there! I'm Nemo.\"),\n", " AIMessage(content='Hello!'),\n", " HumanMessage(content='How are you today?'),\n", " AIMessage(content='Fine thanks!')]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "demo_ephemeral_chat_history = ChatMessageHistory()\n", "\n", "demo_ephemeral_chat_history.add_user_message(\"Hey there! I'm Nemo.\")\n", "demo_ephemeral_chat_history.add_ai_message(\"Hello!\")\n", "demo_ephemeral_chat_history.add_user_message(\"How are you today?\")\n", "demo_ephemeral_chat_history.add_ai_message(\"Fine thanks!\")\n", "\n", "demo_ephemeral_chat_history.messages" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's use this message history with the `RunnableWithMessageHistory` chain we declared above:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Parent run 7ff2d8ec-65e2-4f67-8961-e498e2c4a591 not found for run 3881e990-6596-4326-84f6-2b76949e0657. 
Treating as a root run.\n" ] }, { "data": { "text/plain": [ "AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72})" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain_with_message_history = RunnableWithMessageHistory(\n", " chain,\n", " lambda session_id: demo_ephemeral_chat_history,\n", " input_messages_key=\"input\",\n", " history_messages_key=\"chat_history\",\n", ")\n", "\n", "chain_with_message_history.invoke(\n", " {\"input\": \"What's my name?\"},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see the chain remembers the preloaded name.\n", "\n", "But let's say we have a very small context window, and we want to trim the number of messages passed to the chain to only the 2 most recent ones. We can use the built in [trim_messages](/docs/how_to/trim_messages/) util to trim messages based on their token count before they reach our prompt. In this case we'll count each message as 1 \"token\" and keep only the last two messages:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "from operator import itemgetter\n", "\n", "from langchain_core.messages import trim_messages\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "trimmer = trim_messages(strategy=\"last\", max_tokens=2, token_counter=len)\n", "\n", "chain_with_trimming = (\n", " RunnablePassthrough.assign(chat_history=itemgetter(\"chat_history\") | trimmer)\n", " | prompt\n", " | chat\n", ")\n", "\n", "chain_with_trimmed_history = RunnableWithMessageHistory(\n", " chain_with_trimming,\n", " lambda session_id: demo_ephemeral_chat_history,\n", " input_messages_key=\"input\",\n", " history_messages_key=\"chat_history\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's call this new chain and check the messages afterwards:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Parent run 775cde65-8d22-4c44-80bb-f0b9811c32ca not found for run 5cf71d0e-4663-41cd-8dbe-e9752689cfac. Treating as a root run.\n" ] }, { "data": { "text/plain": [ "AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain_with_trimmed_history.invoke(\n", " {\"input\": \"Where does P. Sherman live?\"},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content=\"Hey there! 
I'm Nemo.\"),\n", " AIMessage(content='Hello!'),\n", " HumanMessage(content='How are you today?'),\n", " AIMessage(content='Fine thanks!'),\n", " HumanMessage(content=\"What's my name?\"),\n", " AIMessage(content='Your name is Nemo.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 66, 'total_tokens': 72}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-f8aabef8-631a-4238-a39b-701e881fbe47-0', usage_metadata={'input_tokens': 66, 'output_tokens': 6, 'total_tokens': 72}),\n", " HumanMessage(content='Where does P. Sherman live?'),\n", " AIMessage(content='P. Sherman is a fictional character from the animated movie \"Finding Nemo\" who lives at 42 Wallaby Way, Sydney.', response_metadata={'token_usage': {'completion_tokens': 27, 'prompt_tokens': 53, 'total_tokens': 80}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5642ef3a-fdbe-43cf-a575-d1785976a1b9-0', usage_metadata={'input_tokens': 53, 'output_tokens': 27, 'total_tokens': 80})]" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "demo_ephemeral_chat_history.messages" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And we can see that our history has removed the two oldest messages while still adding the most recent conversation at the end. The next time the chain is called, `trim_messages` will be called again, and only the two most recent messages will be passed to the model. In this case, this means that the model will forget the name we gave it the next time we invoke it:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Parent run fde7123f-6fd3-421a-a3fc-2fb37dead119 not found for run 061a4563-2394-470d-a3ed-9bf1388ca431. Treating as a root run.\n" ] }, { "data": { "text/plain": [ "AIMessage(content=\"I'm sorry, but I don't have access to your personal information, so I don't know your name. How else may I assist you today?\", response_metadata={'token_usage': {'completion_tokens': 31, 'prompt_tokens': 74, 'total_tokens': 105}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0ab03495-1f7c-4151-9070-56d2d1c565ff-0', usage_metadata={'input_tokens': 74, 'output_tokens': 31, 'total_tokens': 105})" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain_with_trimmed_history.invoke(\n", " {\"input\": \"What is my name?\"},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Check out our [how to guide on trimming messages](/docs/how_to/trim_messages/) for more." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Summary memory\n", "\n", "We can use this same pattern in other ways too. For example, we could use an additional LLM call to generate a summary of the conversation before calling our chain. Let's recreate our chat history and chatbot chain:" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content=\"Hey there! 
I'm Nemo.\"),\n", " AIMessage(content='Hello!'),\n", " HumanMessage(content='How are you today?'),\n", " AIMessage(content='Fine thanks!')]" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "demo_ephemeral_chat_history = ChatMessageHistory()\n", "\n", "demo_ephemeral_chat_history.add_user_message(\"Hey there! I'm Nemo.\")\n", "demo_ephemeral_chat_history.add_ai_message(\"Hello!\")\n", "demo_ephemeral_chat_history.add_user_message(\"How are you today?\")\n", "demo_ephemeral_chat_history.add_ai_message(\"Fine thanks!\")\n", "\n", "demo_ephemeral_chat_history.messages" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll slightly modify the prompt to make the LLM aware that will receive a condensed summary instead of a chat history:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant. Answer all questions to the best of your ability. The provided chat history includes facts about the user you are speaking with.\",\n", " ),\n", " (\"placeholder\", \"{chat_history}\"),\n", " (\"user\", \"{input}\"),\n", " ]\n", ")\n", "\n", "chain = prompt | chat\n", "\n", "chain_with_message_history = RunnableWithMessageHistory(\n", " chain,\n", " lambda session_id: demo_ephemeral_chat_history,\n", " input_messages_key=\"input\",\n", " history_messages_key=\"chat_history\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, let's create a function that will distill previous interactions into a summary. We can add this one to the front of the chain too:" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [], "source": [ "def summarize_messages(chain_input):\n", " stored_messages = demo_ephemeral_chat_history.messages\n", " if len(stored_messages) == 0:\n", " return False\n", " summarization_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"placeholder\", \"{chat_history}\"),\n", " (\n", " \"user\",\n", " \"Distill the above chat messages into a single summary message. Include as many specific details as you can.\",\n", " ),\n", " ]\n", " )\n", " summarization_chain = summarization_prompt | chat\n", "\n", " summary_message = summarization_chain.invoke({\"chat_history\": stored_messages})\n", "\n", " demo_ephemeral_chat_history.clear()\n", "\n", " demo_ephemeral_chat_history.add_message(summary_message)\n", "\n", " return True\n", "\n", "\n", "chain_with_summarization = (\n", " RunnablePassthrough.assign(messages_summarized=summarize_messages)\n", " | chain_with_message_history\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see if it remembers the name we gave it:" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='You introduced yourself as Nemo. How can I assist you today, Nemo?')" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain_with_summarization.invoke(\n", " {\"input\": \"What did I say my name was?\"},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[AIMessage(content='The conversation is between Nemo and an AI. Nemo introduces himself and the AI responds with a greeting. 
Nemo then asks the AI how it is doing, and the AI responds that it is fine.'),\n", " HumanMessage(content='What did I say my name was?'),\n", " AIMessage(content='You introduced yourself as Nemo. How can I assist you today, Nemo?')]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "demo_ephemeral_chat_history.messages" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that invoking the chain again will generate another summary from the initial summary plus any new messages, and so on. You could also design a hybrid approach where a certain number of messages are retained in chat history while others are summarized; a minimal sketch of one such approach follows below." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 4 }
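The hybrid approach mentioned at the end of this guide can be sketched in a few lines. The snippet below is a minimal, illustrative variant of the `summarize_messages` function defined above: it keeps the most recent messages verbatim and folds everything older into a single summary message. It assumes the same `chat` model and `demo_ephemeral_chat_history` from this guide, and the `keep_last` and `max_messages` thresholds are assumptions made for this example, not values from the original notebook.

```python
# A minimal sketch of the hybrid approach, assuming the `chat` model and
# `demo_ephemeral_chat_history` defined earlier in this guide.
# `keep_last` and `max_messages` are illustrative assumptions.
from langchain_core.prompts import ChatPromptTemplate


def summarize_older_messages(chain_input, keep_last=2, max_messages=6):
    stored_messages = demo_ephemeral_chat_history.messages
    if len(stored_messages) <= max_messages:
        # History is still short enough to pass through unchanged.
        return False
    older, recent = stored_messages[:-keep_last], stored_messages[-keep_last:]
    summarization_prompt = ChatPromptTemplate.from_messages(
        [
            ("placeholder", "{chat_history}"),
            (
                "user",
                "Distill the above chat messages into a single summary message. "
                "Include as many specific details as you can.",
            ),
        ]
    )
    # Summarize only the older messages, then rebuild the history as
    # [summary] + the most recent messages kept verbatim.
    summary_message = (summarization_prompt | chat).invoke({"chat_history": older})
    demo_ephemeral_chat_history.clear()
    demo_ephemeral_chat_history.add_message(summary_message)
    for message in recent:
        demo_ephemeral_chat_history.add_message(message)
    return True
```

As with `summarize_messages`, this function could be attached to the front of the chain with `RunnablePassthrough.assign(...)` so the history is compacted before each invocation.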
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chatbots_retrieval.ipynb
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add retrieval to chatbots\n", "\n", "Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/how_to#qa-with-rag) that go into greater depth!\n", "\n", "## Setup\n", "\n", "You'll need to install a few packages, and have your OpenAI API key set as an environment variable named `OPENAI_API_KEY`:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 23.3.2 is available.\n", "You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" ] }, { "data": { "text/plain": [ "True" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%pip install -qU langchain langchain-openai langchain-chroma beautifulsoup4\n", "\n", "# Set env var OPENAI_API_KEY or load from a .env file:\n", "import dotenv\n", "\n", "dotenv.load_dotenv()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's also set up a chat model that we'll use for the below examples." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0.2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating a retriever\n", "\n", "We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. 
Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/how_to#qa-with-rag).\n", "\n", "Let's use a document loader to pull text from the docs:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import WebBaseLoader\n", "\n", "loader = WebBaseLoader(\"https://docs.smith.langchain.com/overview\")\n", "data = loader.load()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we split it into smaller chunks that the LLM's context window can handle and store it in a vector database:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n", "all_splits = text_splitter.split_documents(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then we embed and store those chunks in a vector database:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from langchain_chroma import Chroma\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And finally, let's create a retriever from our initialized vectorstore:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. 
You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# k is the number of chunks to retrieve\n", "retriever = vectorstore.as_retriever(k=4)\n", "\n", "docs = retriever.invoke(\"Can LangSmith help test my LLM applications?\")\n", "\n", "docs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions. And now we've got a retriever that can return related data from the LangSmith docs!\n", "\n", "## Document chains\n", "\n", "Now that we have a retriever that can return LangSmith docs, let's create a chain that can use them as context to answer questions. We'll use a `create_stuff_documents_chain` helper function to \"stuff\" all of the input documents into the prompt. It will also handle formatting the docs as strings.\n", "\n", "In addition to a chat model, the function also expects a prompt that has a `context` variable, as well as a placeholder for chat history messages named `messages`. We'll create an appropriate prompt and pass it as shown below:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "\n", "SYSTEM_TEMPLATE = \"\"\"\n", "Answer the user's questions based on the below context. 
\n", "If the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n", "\n", "<context>\n", "{context}\n", "</context>\n", "\"\"\"\n", "\n", "question_answering_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " SYSTEM_TEMPLATE,\n", " ),\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " ]\n", ")\n", "\n", "document_chain = create_stuff_documents_chain(chat, question_answering_prompt)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can invoke this `document_chain` by itself to answer questions. Let's use the docs we retrieved above and the same question, `how can langsmith help with testing?`:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Yes, LangSmith can help test and evaluate your LLM applications. It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import HumanMessage\n", "\n", "document_chain.invoke(\n", " {\n", " \"context\": docs,\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good! For comparison, we can try it with no context docs and compare the result:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I don't know about LangSmith's specific capabilities for testing LLM applications. It's best to reach out to LangSmith directly to inquire about their services and how they can assist with testing your LLM applications.\"" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "document_chain.invoke(\n", " {\n", " \"context\": [],\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that the LLM does not return any results.\n", "\n", "## Retrieval chains\n", "\n", "Let's combine this document chain with the retriever. Here's one way this can look:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "from typing import Dict\n", "\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "\n", "def parse_retriever_input(params: Dict):\n", " return params[\"messages\"][-1].content\n", "\n", "\n", "retrieval_chain = RunnablePassthrough.assign(\n", " context=parse_retriever_input | retriever,\n", ").assign(\n", " answer=document_chain,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. 
Then, we pass those documents as context to our document chain to generate a final response.\n", "\n", "Invoking this chain combines both steps outlined above:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],\n", " 'context': [Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n", " 'answer': 'Yes, LangSmith can help test and evaluate your LLM applications. It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retrieval_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\")\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looks good!\n", "\n", "## Query transformation\n", "\n", "Our retrieval chain is capable of answering questions about LangSmith, but there's a problem - chatbots interact with users conversationally, and therefore have to deal with followup questions.\n", "\n", "The chain in its current form will struggle with this. Consider a followup question to our original question like `Tell me more!`. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='playground. Here, you can modify the prompt and re-run it to observe the resulting changes to the output - as many times as needed!Currently, this feature supports only OpenAI and Anthropic models and works for LLM and Chat Model calls. We plan to extend its functionality to more LLM types, chains, agents, and retrievers in the future.What is the exact sequence of events?\\u200bIn complicated chains and agents, it can often be hard to understand what is going on under the hood. What calls are being', metadata={'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='however, there is still no complete substitute for human review to get the utmost quality and reliability from your application.', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.invoke(\"Tell me more!\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is because the retriever has no innate concept of state, and will only pull documents most similar to the query given. To solve this, we can transform the query into a standalone query without any external references using an LLM.\n", "\n", "Here's an example:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='\"LangSmith LLM application testing and evaluation\"')" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import AIMessage, HumanMessage\n", "\n", "query_transform_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " (\n", " \"user\",\n", " \"Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.\",\n", " ),\n", " ]\n", ")\n", "\n", "query_transformation_chain = query_transform_prompt | chat\n", "\n", "query_transformation_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n", " AIMessage(\n", " content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. 
Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n", " ),\n", " HumanMessage(content=\"Tell me more!\"),\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Awesome! That transformed query would pull up context documents related to LLM application testing.\n", "\n", "Let's add this to our retrieval chain. We can wrap our retriever as follows:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.runnables import RunnableBranch\n", "\n", "query_transforming_retriever_chain = RunnableBranch(\n", " (\n", " lambda x: len(x.get(\"messages\", [])) == 1,\n", " # If only one message, then we just pass that message's content to retriever\n", " (lambda x: x[\"messages\"][-1].content) | retriever,\n", " ),\n", " # If messages, then we pass inputs to LLM chain to transform the query, then pass to retriever\n", " query_transform_prompt | chat | StrOutputParser() | retriever,\n", ").with_config(run_name=\"chat_retriever_chain\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "SYSTEM_TEMPLATE = \"\"\"\n", "Answer the user's questions based on the below context. \n", "If the context doesn't contain any relevant information to the question, don't make something up and just say \"I don't know\":\n", "\n", "<context>\n", "{context}\n", "</context>\n", "\"\"\"\n", "\n", "question_answering_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " SYSTEM_TEMPLATE,\n", " ),\n", " MessagesPlaceholder(variable_name=\"messages\"),\n", " ]\n", ")\n", "\n", "document_chain = create_stuff_documents_chain(chat, question_answering_prompt)\n", "\n", "conversational_retrieval_chain = RunnablePassthrough.assign(\n", " context=query_transforming_retriever_chain,\n", ").assign(\n", " answer=document_chain,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Awesome! Let's invoke this new chain with the same inputs as earlier:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?')],\n", " 'context': [Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content=\"does that affect the output?\\u200bSo you notice a bad output, and you go into LangSmith to see what's going on. You find the faulty LLM call and are now looking at the exact input. You want to try changing a word or a phrase to see what happens -- what do you do?We constantly ran into this issue. Initially, we copied the prompt to a playground of sorts. But this got annoying, so we built a playground of our own! When examining an LLM call, you can click the Open in Playground button to access this\", metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n", " 'answer': 'Yes, LangSmith can help test and evaluate LLM (Language Model) applications. 
It simplifies the initial setup, and you can use it to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'}" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversational_retrieval_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n", " ]\n", " }\n", ")" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?'),\n", " AIMessage(content='Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'),\n", " HumanMessage(content='Tell me more!')],\n", " 'context': [Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}),\n", " Document(page_content='LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})],\n", " 'answer': 'LangSmith simplifies the initial setup for building reliable LLM applications, but it acknowledges that there is still work needed to bring the performance of prompts, chains, and agents up to the level where they are reliable enough to be used in production. It also provides the capability to manually review and annotate runs through annotation queues, allowing you to select runs based on criteria like model type or automatic evaluation scores for human review. This feature is particularly useful for assessing subjective qualities that automatic evaluators struggle with.'}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversational_retrieval_chain.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n", " AIMessage(\n", " content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n", " ),\n", " HumanMessage(content=\"Tell me more!\"),\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can check out [this LangSmith trace](https://smith.langchain.com/public/bb329a3b-e92a-4063-ad78-43f720fbb5a2/r) to see the internal query transformation step for yourself.\n", "\n", "## Streaming\n", "\n", "Because this chain is constructed with LCEL, you can use familiar methods like `.stream()` with it:" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'messages': [HumanMessage(content='Can LangSmith help test my LLM applications?'), AIMessage(content='Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. 
Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.'), HumanMessage(content='Tell me more!')]}\n", "{'context': [Document(page_content='LangSmith Overview and User Guide | 🦜️🛠️ LangSmith', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}), Document(page_content='You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring\\u200bAfter all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}), Document(page_content='Skip to main content🦜️🛠️ LangSmith DocsPython DocsJS/TS DocsSearchGo to AppLangSmithOverviewTracingTesting & EvaluationOrganizationsHubLangSmith CookbookOverviewOn this pageLangSmith Overview and User GuideBuilding reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.Over the past two months, we at LangChain', metadata={'description': 'Building reliable LLM applications can be challenging. LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'}), Document(page_content='LangSmith makes it easy to manually review and annotate runs through annotation queues.These queues allow you to select any runs based on criteria like model type or automatic evaluation scores, and queue them up for human review. As a reviewer, you can then quickly step through the runs, viewing the input, output, and any existing tags before adding your own feedback.We often use this for a couple of reasons:To assess subjective qualities that automatic evaluators struggle with, like', metadata={'description': 'Building reliable LLM applications can be challenging. 
LangChain simplifies the initial setup, but there is still work needed to bring the performance of prompts, chains and agents up the level where they are reliable enough to be used in production.', 'language': 'en', 'source': 'https://docs.smith.langchain.com/overview', 'title': 'LangSmith Overview and User Guide | 🦜️🛠️ LangSmith'})]}\n", "{'answer': ''}\n", "{'answer': 'Lang'}\n", "{'answer': 'Smith'}\n", "{'answer': ' simpl'}\n", "{'answer': 'ifies'}\n", "{'answer': ' the'}\n", "{'answer': ' initial'}\n", "{'answer': ' setup'}\n", "{'answer': ' for'}\n", "{'answer': ' building'}\n", "{'answer': ' reliable'}\n", "{'answer': ' L'}\n", "{'answer': 'LM'}\n", "{'answer': ' applications'}\n", "{'answer': '.'}\n", "{'answer': ' It'}\n", "{'answer': ' provides'}\n", "{'answer': ' features'}\n", "{'answer': ' for'}\n", "{'answer': ' manually'}\n", "{'answer': ' reviewing'}\n", "{'answer': ' and'}\n", "{'answer': ' annot'}\n", "{'answer': 'ating'}\n", "{'answer': ' runs'}\n", "{'answer': ' through'}\n", "{'answer': ' annotation'}\n", "{'answer': ' queues'}\n", "{'answer': ','}\n", "{'answer': ' allowing'}\n", "{'answer': ' you'}\n", "{'answer': ' to'}\n", "{'answer': ' select'}\n", "{'answer': ' runs'}\n", "{'answer': ' based'}\n", "{'answer': ' on'}\n", "{'answer': ' criteria'}\n", "{'answer': ' like'}\n", "{'answer': ' model'}\n", "{'answer': ' type'}\n", "{'answer': ' or'}\n", "{'answer': ' automatic'}\n", "{'answer': ' evaluation'}\n", "{'answer': ' scores'}\n", "{'answer': ','}\n", "{'answer': ' and'}\n", "{'answer': ' queue'}\n", "{'answer': ' them'}\n", "{'answer': ' up'}\n", "{'answer': ' for'}\n", "{'answer': ' human'}\n", "{'answer': ' review'}\n", "{'answer': '.'}\n", "{'answer': ' As'}\n", "{'answer': ' a'}\n", "{'answer': ' reviewer'}\n", "{'answer': ','}\n", "{'answer': ' you'}\n", "{'answer': ' can'}\n", "{'answer': ' quickly'}\n", "{'answer': ' step'}\n", "{'answer': ' through'}\n", "{'answer': ' the'}\n", "{'answer': ' runs'}\n", "{'answer': ','}\n", "{'answer': ' view'}\n", "{'answer': ' the'}\n", "{'answer': ' input'}\n", "{'answer': ','}\n", "{'answer': ' output'}\n", "{'answer': ','}\n", "{'answer': ' and'}\n", "{'answer': ' any'}\n", "{'answer': ' existing'}\n", "{'answer': ' tags'}\n", "{'answer': ' before'}\n", "{'answer': ' adding'}\n", "{'answer': ' your'}\n", "{'answer': ' own'}\n", "{'answer': ' feedback'}\n", "{'answer': '.'}\n", "{'answer': ' This'}\n", "{'answer': ' can'}\n", "{'answer': ' be'}\n", "{'answer': ' particularly'}\n", "{'answer': ' useful'}\n", "{'answer': ' for'}\n", "{'answer': ' assessing'}\n", "{'answer': ' subjective'}\n", "{'answer': ' qualities'}\n", "{'answer': ' that'}\n", "{'answer': ' automatic'}\n", "{'answer': ' evalu'}\n", "{'answer': 'ators'}\n", "{'answer': ' struggle'}\n", "{'answer': ' with'}\n", "{'answer': '.'}\n", "{'answer': ''}\n" ] } ], "source": [ "stream = conversational_retrieval_chain.stream(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"Can LangSmith help test my LLM applications?\"),\n", " AIMessage(\n", " content=\"Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. 
Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise.\"\n", " ),\n", " HumanMessage(content=\"Tell me more!\"),\n", " ],\n", " }\n", ")\n", "\n", "for chunk in stream:\n", " print(chunk)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Further reading\n", "\n", "This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out the relevant how-to guides [here](/docs/how_to#document-loaders)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 2 }
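One small usage note on the streaming example above: each streamed chunk is a dict keyed by `messages`, `context`, or `answer`, so you can filter the stream if you only want to surface the generated answer tokens. The snippet below is a sketch of that idea using the `conversational_retrieval_chain` built in this guide; it is not part of the original notebook.

```python
# Sketch: print only the incremental "answer" tokens from the streamed chunks,
# skipping the "messages" and "context" chunks shown in the output above.
stream = conversational_retrieval_chain.stream(
    {
        "messages": [
            HumanMessage(content="Can LangSmith help test my LLM applications?"),
        ]
    }
)

for chunk in stream:
    answer_chunk = chunk.get("answer")
    if answer_chunk:
        print(answer_chunk, end="", flush=True)
```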
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/chatbots_tools.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to add tools to chatbots\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Chatbots](/docs/concepts/#messages)\n", "- [Agents](/docs/tutorials/agents)\n", "- [Chat history](/docs/concepts/#chat-history)\n", "\n", ":::\n", "\n", "This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n", "\n", "## Setup\n", "\n", "For this guide, we'll be using a [tool calling agent](/docs/how_to/agent_executor) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.\n", "\n", "You'll need to [sign up for an account](https://tavily.com/) on the Tavily website, and install the following packages:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain-community langchain-openai tavily-python\n", "\n", "# Set env var OPENAI_API_KEY or load from a .env file:\n", "import dotenv\n", "\n", "dotenv.load_dotenv()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You will also need your OpenAI key set as `OPENAI_API_KEY` and your Tavily API key set as `TAVILY_API_KEY`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Creating an agent\n", "\n", "Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.\n", "\n", "First, let's initialize Tavily and an OpenAI chat model capable of tool calling:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from langchain_community.tools.tavily_search import TavilySearchResults\n", "from langchain_openai import ChatOpenAI\n", "\n", "tools = [TavilySearchResults(max_results=1)]\n", "\n", "# Choose the LLM that will drive the agent\n", "# Only certain models support this\n", "chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Here's an example:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "# Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!\",\n", " ),\n", " (\"placeholder\", \"{messages}\"),\n", " (\"placeholder\", \"{agent_scratchpad}\"),\n", " ]\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Great! Now let's assemble our agent:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "from langchain.agents import AgentExecutor, create_tool_calling_agent\n", "\n", "agent = create_tool_calling_agent(chat, tools, prompt)\n", "\n", "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Running the agent\n", "\n", "Now that we've set up our agent, let's try interacting with it! 
It can handle both trivial queries that require no lookup:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3mHello Nemo! It's great to meet you. How can I assist you today?\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'messages': [HumanMessage(content=\"I'm Nemo!\")],\n", " 'output': \"Hello Nemo! It's great to meet you. How can I assist you today?\"}" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import HumanMessage\n", "\n", "agent_executor.invoke({\"messages\": [HumanMessage(content=\"I'm Nemo!\")]})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Or, it can use of the passed search tool to get up to date information if needed:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `tavily_search_results_json` with `{'query': 'current conservation status of the Great Barrier Reef'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186', 'content': 'Great Barrier Reef hit with widespread and severe bleaching event\\n\\'Devastating\\': Over 90pc of reefs on Great Barrier Reef suffered bleaching over summer, report reveals\\nTop Stories\\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career — as it happened\\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\\nOpenAI launches video model that can instantly create short clips from text prompts\\nAntoinette Lattouf loses bid to force ABC to produce emails calling for her dismissal\\nCategory one cyclone makes landfall in Gulf of Carpentaria off NT-Queensland border\\nWhy the RBA may be forced to cut before the Fed\\nBrisbane records \\'wettest day since 2022\\', as woman dies in floodwaters near Mount Isa\\n$45m Sydney beachside home once owned by late radio star is demolished less than a year after sale\\nAnnabel Sutherland\\'s historic double century puts Australia within reach of Test victory over South Africa\\nAlmighty defensive effort delivers Indigenous victory in NRL All Stars clash\\nLisa Wilkinson feared she would have to sell home to pay legal costs of Bruce Lehrmann\\'s defamation case, court documents reveal\\nSupermarkets as you know them are disappearing from our cities\\nNRL issues Broncos\\' Reynolds, Carrigan with breach notices after public scrap\\nPopular Now\\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career — as it happened\\n$45m Sydney beachside home once owned by late radio star is demolished less than a year after sale\\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\\nDealer sentenced for injecting children as young as 12 with 
methylamphetamine\\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\\nTop Stories\\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\\nTaylor Swift puts an Aussie twist on a classic as she packs the MCG for the biggest show of her career — as it happened\\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\\nAustralian Border Force investigates after arrival of more than 20 men by boat north of Broome\\nOpenAI launches video model that can instantly create short clips from text prompts\\nJust In\\nJailed Russian opposition leader Alexei Navalny is dead, says prison service\\nMelbourne comes alive with Swifties, as even those without tickets turn up to soak in the atmosphere\\nTraveller alert after one-year-old in Adelaide reported with measles\\nAntoinette Lattouf loses bid to force ABC to produce emails calling for her dismissal\\nFooter\\nWe acknowledge Aboriginal and Torres Strait Islander peoples as the First Australians and Traditional Custodians of the lands where we live, learn, and work.\\n Increased coral cover could come at a cost\\nThe rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora.\\n Documents obtained by the ABC under Freedom of Information laws revealed the Morrison government had forced AIMS to rush the report\\'s release and orchestrated a \"leak\" of the material to select media outlets ahead of the reef being considered for inclusion on the World Heritage In Danger list.\\n The reef\\'s status and potential inclusion on the In Danger list were due to be discussed at the 45th session of the World Heritage Committee in Russia in June this year, but the meeting was indefinitely postponed due to the war in Ukraine.\\n More from ABC\\nEditorial Policies\\nGreat Barrier Reef coral cover at record levels after mass-bleaching events, report shows\\nGreat Barrier Reef coral cover at record levels after mass-bleaching events, report shows\\nRecord coral cover is being seen across much of the Great Barrier Reef as it recovers from past storms and mass-bleaching events.'}]\u001b[0m\u001b[32;1m\u001b[1;3mThe Great Barrier Reef is currently showing signs of recovery, with record coral cover being seen across much of the reef. This recovery comes after past storms and mass-bleaching events. However, the rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora. There were discussions about the reef's potential inclusion on the World Heritage In Danger list, but the meeting to consider this was indefinitely postponed due to the war in Ukraine.\n", "\n", "You can read more about it in this article: [Great Barrier Reef hit with widespread and severe bleaching event](https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186)\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'messages': [HumanMessage(content='What is the current conservation status of the Great Barrier Reef?')],\n", " 'output': \"The Great Barrier Reef is currently showing signs of recovery, with record coral cover being seen across much of the reef. This recovery comes after past storms and mass-bleaching events. 
However, the rapid growth in coral cover appears to have come at the expense of the diversity of coral on the reef, with most of the increases accounted for by fast-growing branching coral called Acropora. There were discussions about the reef's potential inclusion on the World Heritage In Danger list, but the meeting to consider this was indefinitely postponed due to the war in Ukraine.\\n\\nYou can read more about it in this article: [Great Barrier Reef hit with widespread and severe bleaching event](https://www.abc.net.au/news/2022-08-04/great-barrier-reef-report-says-coral-recovering-after-bleaching/101296186)\"}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(\n", " content=\"What is the current conservation status of the Great Barrier Reef?\"\n", " )\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Conversational responses\n", "\n", "Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3mYour name is Nemo!\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'messages': [HumanMessage(content=\"I'm Nemo!\"),\n", " AIMessage(content='Hello Nemo! How can I assist you today?'),\n", " HumanMessage(content='What is my name?')],\n", " 'output': 'Your name is Nemo!'}" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import AIMessage, HumanMessage\n", "\n", "agent_executor.invoke(\n", " {\n", " \"messages\": [\n", " HumanMessage(content=\"I'm Nemo!\"),\n", " AIMessage(content=\"Hello Nemo! How can I assist you today?\"),\n", " HumanMessage(content=\"What is my name?\"),\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If preferred, you can also wrap the agent executor in a [`RunnableWithMessageHistory`](/docs/how_to/message_history/) class to internally manage history messages. Let's redeclare it this way:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "agent = create_tool_calling_agent(chat, tools, prompt)\n", "\n", "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, because our agent executor has multiple outputs, we also have to set the `output_messages_key` property when initializing the wrapper:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3mHi Nemo! It's great to meet you. How can I assist you today?\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'messages': [HumanMessage(content=\"I'm Nemo!\")],\n", " 'output': \"Hi Nemo! It's great to meet you. 
How can I assist you today?\"}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.chat_message_histories import ChatMessageHistory\n", "from langchain_core.runnables.history import RunnableWithMessageHistory\n", "\n", "demo_ephemeral_chat_history_for_chain = ChatMessageHistory()\n", "\n", "conversational_agent_executor = RunnableWithMessageHistory(\n", " agent_executor,\n", " lambda session_id: demo_ephemeral_chat_history_for_chain,\n", " input_messages_key=\"messages\",\n", " output_messages_key=\"output\",\n", ")\n", "\n", "conversational_agent_executor.invoke(\n", " {\"messages\": [HumanMessage(\"I'm Nemo!\")]},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And then if we rerun our wrapped agent executor:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3mYour name is Nemo! How can I assist you today, Nemo?\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'messages': [HumanMessage(content=\"I'm Nemo!\"),\n", " AIMessage(content=\"Hi Nemo! It's great to meet you. How can I assist you today?\"),\n", " HumanMessage(content='What is my name?')],\n", " 'output': 'Your name is Nemo! How can I assist you today, Nemo?'}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "conversational_agent_executor.invoke(\n", " {\"messages\": [HumanMessage(\"What is my name?\")]},\n", " {\"configurable\": {\"session_id\": \"unused\"}},\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This [LangSmith trace](https://smith.langchain.com/public/1a9f712a-7918-4661-b3ff-d979bcc2af42/r) shows what's going on under the hood.\n", "\n", "## Further reading\n", "\n", "Other types agents can also support conversational responses too - for more, check out the [agents section](/docs/tutorials/agents).\n", "\n", "For more on tool usage, you can also check out [this use case section](/docs/how_to#tools)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 2 }
Wed, 26 Jun 2024 13:15:51 GMT
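The agent notebook above wires `RunnableWithMessageHistory` to a single in-memory `ChatMessageHistory` and passes a fixed `"unused"` session id. As a hedged sketch (not part of the notebook itself), the same wrapper can be pointed at a per-session store so that each session id gets its own history; the `session_store` dict and the `get_session_history` helper below are illustrative names, and the sketch assumes `agent_executor` was built as in the notebook.

```python
# Sketch only: assumes `agent_executor` exists as constructed in the notebook above.
# The per-session dict and `get_session_history` helper are illustrative, not part
# of the original notebook.
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory

session_store: dict[str, ChatMessageHistory] = {}


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    # Create a history the first time a session id is seen, then reuse it.
    if session_id not in session_store:
        session_store[session_id] = ChatMessageHistory()
    return session_store[session_id]


conversational_agent_executor = RunnableWithMessageHistory(
    agent_executor,
    get_session_history,
    input_messages_key="messages",
    output_messages_key="output",
)

# Each distinct session id now keeps an independent conversation history.
conversational_agent_executor.invoke(
    {"messages": [HumanMessage("I'm Nemo!")]},
    {"configurable": {"session_id": "user-123"}},
)
conversational_agent_executor.invoke(
    {"messages": [HumanMessage("What is my name?")]},
    {"configurable": {"session_id": "user-123"}},
)
```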
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/code_splitter.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "44b9976d", "metadata": {}, "source": [ "# How to split code\n", "\n", "[RecursiveCharacterTextSplitter](https://api.python.langchain.com/en/latest/character/langchain_text_splitters.character.RecursiveCharacterTextSplitter.html) includes pre-built lists of separators that are useful for splitting text in a specific programming language.\n", "\n", "Supported languages are stored in the `langchain_text_splitters.Language` enum. They include:\n", "\n", "```\n", "\"cpp\",\n", "\"go\",\n", "\"java\",\n", "\"kotlin\",\n", "\"js\",\n", "\"ts\",\n", "\"php\",\n", "\"proto\",\n", "\"python\",\n", "\"rst\",\n", "\"ruby\",\n", "\"rust\",\n", "\"scala\",\n", "\"swift\",\n", "\"markdown\",\n", "\"latex\",\n", "\"html\",\n", "\"sol\",\n", "\"csharp\",\n", "\"cobol\",\n", "\"c\",\n", "\"lua\",\n", "\"perl\",\n", "\"haskell\"\n", "```\n", "\n", "To view the list of separators for a given language, pass a value from this enum into\n", "```python\n", "RecursiveCharacterTextSplitter.get_separators_for_language`\n", "```\n", "\n", "To instantiate a splitter that is tailored for a specific language, pass a value from the enum into\n", "```python\n", "RecursiveCharacterTextSplitter.from_language\n", "```\n", "\n", "Below we demonstrate examples for the various languages." ] }, { "cell_type": "code", "execution_count": null, "id": "9e4144de-d925-4d4c-91c3-685ef8baa57c", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-text-splitters" ] }, { "cell_type": "code", "execution_count": 1, "id": "a9e37aa1", "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import (\n", " Language,\n", " RecursiveCharacterTextSplitter,\n", ")" ] }, { "cell_type": "markdown", "id": "082807cb-dfba-4495-af12-0441f63f30e1", "metadata": {}, "source": [ "To view the full list of supported languages:" ] }, { "cell_type": "code", "execution_count": 3, "id": "e21a2434", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['cpp',\n", " 'go',\n", " 'java',\n", " 'kotlin',\n", " 'js',\n", " 'ts',\n", " 'php',\n", " 'proto',\n", " 'python',\n", " 'rst',\n", " 'ruby',\n", " 'rust',\n", " 'scala',\n", " 'swift',\n", " 'markdown',\n", " 'latex',\n", " 'html',\n", " 'sol',\n", " 'csharp',\n", " 'cobol',\n", " 'c',\n", " 'lua',\n", " 'perl',\n", " 'haskell']" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "[e.value for e in Language]" ] }, { "cell_type": "markdown", "id": "56669f16-266a-4820-a7e7-d90ade9e642f", "metadata": {}, "source": [ "You can also see the separators used for a given language:" ] }, { "cell_type": "code", "execution_count": 3, "id": "c92fb913", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['\\nclass ', '\\ndef ', '\\n\\tdef ', '\\n\\n', '\\n', ' ', '']" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)" ] }, { "cell_type": "markdown", "id": "dcb8931b", "metadata": {}, "source": [ "## Python\n", "\n", "Here's an example using the PythonTextSplitter:\n", "\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "a58512b9", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='def hello_world():\\n print(\"Hello, World!\")'),\n", " Document(page_content='# Call the function\\nhello_world()')]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "PYTHON_CODE = \"\"\"\n", "def hello_world():\n", " 
print(\"Hello, World!\")\n", "\n", "# Call the function\n", "hello_world()\n", "\"\"\"\n", "python_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.PYTHON, chunk_size=50, chunk_overlap=0\n", ")\n", "python_docs = python_splitter.create_documents([PYTHON_CODE])\n", "python_docs" ] }, { "cell_type": "markdown", "id": "354f60a5", "metadata": {}, "source": [ "## JS\n", "Here's an example using the JS text splitter:" ] }, { "cell_type": "code", "execution_count": 6, "id": "7db0d486", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='function helloWorld() {\\n console.log(\"Hello, World!\");\\n}'),\n", " Document(page_content='// Call the function\\nhelloWorld();')]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "JS_CODE = \"\"\"\n", "function helloWorld() {\n", " console.log(\"Hello, World!\");\n", "}\n", "\n", "// Call the function\n", "helloWorld();\n", "\"\"\"\n", "\n", "js_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.JS, chunk_size=60, chunk_overlap=0\n", ")\n", "js_docs = js_splitter.create_documents([JS_CODE])\n", "js_docs" ] }, { "cell_type": "markdown", "id": "a739f545", "metadata": {}, "source": [ "## TS\n", "Here's an example using the TS text splitter:" ] }, { "cell_type": "code", "execution_count": 7, "id": "aee738a4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='function helloWorld(): void {'),\n", " Document(page_content='console.log(\"Hello, World!\");\\n}'),\n", " Document(page_content='// Call the function\\nhelloWorld();')]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "TS_CODE = \"\"\"\n", "function helloWorld(): void {\n", " console.log(\"Hello, World!\");\n", "}\n", "\n", "// Call the function\n", "helloWorld();\n", "\"\"\"\n", "\n", "ts_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.TS, chunk_size=60, chunk_overlap=0\n", ")\n", "ts_docs = ts_splitter.create_documents([TS_CODE])\n", "ts_docs" ] }, { "cell_type": "markdown", "id": "ee2361f8", "metadata": {}, "source": [ "## Markdown\n", "\n", "Here's an example using the Markdown text splitter:\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "ac9295d3", "metadata": {}, "outputs": [], "source": [ "markdown_text = \"\"\"\n", "# 🦜️🔗 LangChain\n", "\n", "⚡ Building applications with LLMs through composability ⚡\n", "\n", "## Quick Install\n", "\n", "```bash\n", "# Hopefully this code block isn't split\n", "pip install langchain\n", "```\n", "\n", "As an open-source project in a rapidly developing field, we are extremely open to contributions.\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 9, "id": "3a0cb17a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='# 🦜️🔗 LangChain'),\n", " Document(page_content='⚡ Building applications with LLMs through composability ⚡'),\n", " Document(page_content='## Quick Install\\n\\n```bash'),\n", " Document(page_content=\"# Hopefully this code block isn't split\"),\n", " Document(page_content='pip install langchain'),\n", " Document(page_content='```'),\n", " Document(page_content='As an open-source project in a rapidly developing field, we'),\n", " Document(page_content='are extremely open to contributions.')]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "md_splitter = RecursiveCharacterTextSplitter.from_language(\n", " 
language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n", ")\n", "md_docs = md_splitter.create_documents([markdown_text])\n", "md_docs" ] }, { "cell_type": "markdown", "id": "7aa306f6", "metadata": {}, "source": [ "## Latex\n", "\n", "Here's an example on Latex text:\n" ] }, { "cell_type": "code", "execution_count": 10, "id": "77d1049d", "metadata": {}, "outputs": [], "source": [ "latex_text = \"\"\"\n", "\\documentclass{article}\n", "\n", "\\begin{document}\n", "\n", "\\maketitle\n", "\n", "\\section{Introduction}\n", "Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.\n", "\n", "\\subsection{History of LLMs}\n", "The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.\n", "\n", "\\subsection{Applications of LLMs}\n", "LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.\n", "\n", "\\end{document}\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 11, "id": "4dbc47e1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='\\\\documentclass{article}\\n\\n\\x08egin{document}\\n\\n\\\\maketitle'),\n", " Document(page_content='\\\\section{Introduction}'),\n", " Document(page_content='Large language models (LLMs) are a type of machine learning'),\n", " Document(page_content='model that can be trained on vast amounts of text data to'),\n", " Document(page_content='generate human-like language. In recent years, LLMs have'),\n", " Document(page_content='made significant advances in a variety of natural language'),\n", " Document(page_content='processing tasks, including language translation, text'),\n", " Document(page_content='generation, and sentiment analysis.'),\n", " Document(page_content='\\\\subsection{History of LLMs}'),\n", " Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,'),\n", " Document(page_content='but they were limited by the amount of data that could be'),\n", " Document(page_content='processed and the computational power available at the'),\n", " Document(page_content='time. In the past decade, however, advances in hardware and'),\n", " Document(page_content='software have made it possible to train LLMs on massive'),\n", " Document(page_content='datasets, leading to significant improvements in'),\n", " Document(page_content='performance.'),\n", " Document(page_content='\\\\subsection{Applications of LLMs}'),\n", " Document(page_content='LLMs have many applications in industry, including'),\n", " Document(page_content='chatbots, content creation, and virtual assistants. 
They'),\n", " Document(page_content='can also be used in academia for research in linguistics,'),\n", " Document(page_content='psychology, and computational linguistics.'),\n", " Document(page_content='\\\\end{document}')]" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "latex_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0\n", ")\n", "latex_docs = latex_splitter.create_documents([latex_text])\n", "latex_docs" ] }, { "cell_type": "markdown", "id": "c29adadf", "metadata": {}, "source": [ "## HTML\n", "\n", "Here's an example using an HTML text splitter:\n" ] }, { "cell_type": "code", "execution_count": 12, "id": "0fc78794", "metadata": {}, "outputs": [], "source": [ "html_text = \"\"\"\n", "<!DOCTYPE html>\n", "<html>\n", " <head>\n", " <title>🦜️🔗 LangChain</title>\n", " <style>\n", " body {\n", " font-family: Arial, sans-serif;\n", " }\n", " h1 {\n", " color: darkblue;\n", " }\n", " </style>\n", " </head>\n", " <body>\n", " <div>\n", " <h1>🦜️🔗 LangChain</h1>\n", " <p>⚡ Building applications with LLMs through composability ⚡</p>\n", " </div>\n", " <div>\n", " As an open-source project in a rapidly developing field, we are extremely open to contributions.\n", " </div>\n", " </body>\n", "</html>\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 13, "id": "e3e3fca1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='<!DOCTYPE html>\\n<html>'),\n", " Document(page_content='<head>\\n <title>🦜️🔗 LangChain</title>'),\n", " Document(page_content='<style>\\n body {\\n font-family: Aria'),\n", " Document(page_content='l, sans-serif;\\n }\\n h1 {'),\n", " Document(page_content='color: darkblue;\\n }\\n </style>\\n </head'),\n", " Document(page_content='>'),\n", " Document(page_content='<body>'),\n", " Document(page_content='<div>\\n <h1>🦜️🔗 LangChain</h1>'),\n", " Document(page_content='<p>⚡ Building applications with LLMs through composability ⚡'),\n", " Document(page_content='</p>\\n </div>'),\n", " Document(page_content='<div>\\n As an open-source project in a rapidly dev'),\n", " Document(page_content='eloping field, we are extremely open to contributions.'),\n", " Document(page_content='</div>\\n </body>\\n</html>')]" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "html_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.HTML, chunk_size=60, chunk_overlap=0\n", ")\n", "html_docs = html_splitter.create_documents([html_text])\n", "html_docs" ] }, { "cell_type": "markdown", "id": "fcaf7abf", "metadata": {}, "source": [ "## Solidity\n", "Here's an example using the Solidity text splitter:" ] }, { "cell_type": "code", "execution_count": 14, "id": "49a1df11", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='pragma solidity ^0.8.20;'),\n", " Document(page_content='contract HelloWorld {\\n function add(uint a, uint b) pure public returns(uint) {\\n return a + b;\\n }\\n}')]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "SOL_CODE = \"\"\"\n", "pragma solidity ^0.8.20;\n", "contract HelloWorld {\n", " function add(uint a, uint b) pure public returns(uint) {\n", " return a + b;\n", " }\n", "}\n", "\"\"\"\n", "\n", "sol_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.SOL, chunk_size=128, chunk_overlap=0\n", ")\n", "sol_docs = 
sol_splitter.create_documents([SOL_CODE])\n", "sol_docs" ] }, { "cell_type": "markdown", "id": "edd0052c", "metadata": {}, "source": [ "## C#\n", "Here's an example using the C# text splitter:\n" ] }, { "cell_type": "code", "execution_count": 15, "id": "1524ae0f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='using System;'),\n", " Document(page_content='class Program\\n{\\n static void Main()\\n {\\n int age = 30; // Change the age value as needed'),\n", " Document(page_content='// Categorize the age without any console output\\n if (age < 18)\\n {\\n // Age is under 18'),\n", " Document(page_content='}\\n else if (age >= 18 && age < 65)\\n {\\n // Age is an adult\\n }\\n else\\n {'),\n", " Document(page_content='// Age is a senior citizen\\n }\\n }\\n}')]" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "C_CODE = \"\"\"\n", "using System;\n", "class Program\n", "{\n", " static void Main()\n", " {\n", " int age = 30; // Change the age value as needed\n", "\n", " // Categorize the age without any console output\n", " if (age < 18)\n", " {\n", " // Age is under 18\n", " }\n", " else if (age >= 18 && age < 65)\n", " {\n", " // Age is an adult\n", " }\n", " else\n", " {\n", " // Age is a senior citizen\n", " }\n", " }\n", "}\n", "\"\"\"\n", "c_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.CSHARP, chunk_size=128, chunk_overlap=0\n", ")\n", "c_docs = c_splitter.create_documents([C_CODE])\n", "c_docs" ] }, { "cell_type": "markdown", "id": "af9de667-230e-4c2a-8c5f-122a28515d97", "metadata": {}, "source": [ "## Haskell\n", "Here's an example using the Haskell text splitter:" ] }, { "cell_type": "code", "execution_count": 3, "id": "688185b5", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='main :: IO ()'),\n", " Document(page_content='main = do\\n putStrLn \"Hello, World!\"\\n-- Some'),\n", " Document(page_content='sample functions\\nadd :: Int -> Int -> Int\\nadd x y'),\n", " Document(page_content='= x + y')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "HASKELL_CODE = \"\"\"\n", "main :: IO ()\n", "main = do\n", " putStrLn \"Hello, World!\"\n", "-- Some sample functions\n", "add :: Int -> Int -> Int\n", "add x y = x + y\n", "\"\"\"\n", "haskell_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.HASKELL, chunk_size=50, chunk_overlap=0\n", ")\n", "haskell_docs = haskell_splitter.create_documents([HASKELL_CODE])\n", "haskell_docs" ] }, { "cell_type": "markdown", "id": "4a11f7cd-cd85-430c-b307-5b5b5f07f8db", "metadata": {}, "source": [ "## PHP\n", "Here's an example using the PHP text splitter:" ] }, { "cell_type": "code", "execution_count": 2, "id": "90c66e7e-87a5-4a81-bece-7949aabf2369", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='<?php\\nnamespace foo;'),\n", " Document(page_content='class Hello {'),\n", " Document(page_content='public function __construct() { }\\n}'),\n", " Document(page_content='function hello() {\\n echo \"Hello World!\";\\n}'),\n", " Document(page_content='interface Human {\\n public function breath();\\n}'),\n", " Document(page_content='trait Foo { }\\nenum Color\\n{\\n case Red;'),\n", " Document(page_content='case Blue;\\n}')]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "PHP_CODE = \"\"\"<?php\n", "namespace foo;\n", "class Hello {\n", " public function 
__construct() { }\n", "}\n", "function hello() {\n", " echo \"Hello World!\";\n", "}\n", "interface Human {\n", " public function breath();\n", "}\n", "trait Foo { }\n", "enum Color\n", "{\n", " case Red;\n", " case Blue;\n", "}\"\"\"\n", "php_splitter = RecursiveCharacterTextSplitter.from_language(\n", " language=Language.PHP, chunk_size=50, chunk_overlap=0\n", ")\n", "haskell_docs = php_splitter.create_documents([PHP_CODE])\n", "haskell_docs" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
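One detail worth noting in the code-splitting notebook above: its LaTeX cell instantiates the splitter with `Language.MARKDOWN`. A LaTeX-specific splitter would instead pass `Language.LATEX`, which is listed in the supported-language enum. A minimal sketch, using an illustrative LaTeX snippet:

```python
# Minimal sketch: splitting LaTeX text with the LaTeX-specific separators.
# `Language.LATEX` comes from the supported-language enum shown above; the
# sample text here is illustrative only.
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

latex_text = r"""
\documentclass{article}
\begin{document}
\section{Introduction}
Large language models (LLMs) can be trained on vast amounts of text data.
\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s.
\end{document}
"""

# Inspect the separators the splitter will try, in order.
print(RecursiveCharacterTextSplitter.get_separators_for_language(Language.LATEX))

latex_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.LATEX, chunk_size=60, chunk_overlap=0
)
latex_docs = latex_splitter.create_documents([latex_text])
for doc in latex_docs:
    print(doc.page_content)
```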
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/configure.ipynb
{ "cells": [ { "cell_type": "raw", "id": "9ede5870", "metadata": {}, "source": [ "---\n", "sidebar_position: 7\n", "keywords: [ConfigurableField, configurable_fields, ConfigurableAlternatives, configurable_alternatives, LCEL]\n", "---" ] }, { "cell_type": "markdown", "id": "39eaf61b", "metadata": {}, "source": [ "# How to configure runtime chain internals\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "- [Binding runtime arguments](/docs/how_to/binding/)\n", "\n", ":::\n", "\n", "Sometimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things within your chains.\n", "This can include tweaking parameters such as temperature or even swapping out one model for another.\n", "In order to make this experience as easy as possible, we have defined two methods.\n", "\n", "- A `configurable_fields` method. This lets you configure particular fields of a runnable.\n", " - This is related to the [`.bind`](/docs/how_to/binding) method on runnables, but allows you to specify parameters for a given step in a chain at runtime rather than specifying them beforehand.\n", "- A `configurable_alternatives` method. With this method, you can list out alternatives for any particular runnable that can be set during runtime, and swap them for those specified alternatives." ] }, { "cell_type": "markdown", "id": "f2347a11", "metadata": {}, "source": [ "## Configurable Fields\n", "\n", "Let's walk through an example that configures chat model fields like temperature at runtime:" ] }, { "cell_type": "code", "execution_count": 1, "id": "40ed76a2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n", "You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade --quiet langchain langchain-openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "7ba735f4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='17', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba26a0da-0a69-4533-ab7f-21178a73d303-0')" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.prompts import PromptTemplate\n", "from langchain_core.runnables import ConfigurableField\n", "from langchain_openai import ChatOpenAI\n", "\n", "model = ChatOpenAI(temperature=0).configurable_fields(\n", " temperature=ConfigurableField(\n", " id=\"llm_temperature\",\n", " name=\"LLM Temperature\",\n", " description=\"The temperature of the LLM\",\n", " )\n", ")\n", "\n", "model.invoke(\"pick a random number\")" ] }, { "cell_type": "markdown", "id": "b0f74589", "metadata": {}, "source": [ "Above, we defined `temperature` as a 
[`ConfigurableField`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.utils.ConfigurableField.html#langchain_core.runnables.utils.ConfigurableField) that we can set at runtime. To do so, we use the [`with_config`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method like this:" ] }, { "cell_type": "code", "execution_count": 3, "id": "4f83245c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='12', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 11, 'total_tokens': 12}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ba8422ad-be77-4cb1-ac45-ad0aae74e3d9-0')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.with_config(configurable={\"llm_temperature\": 0.9}).invoke(\"pick a random number\")" ] }, { "cell_type": "markdown", "id": "9da1fcd2", "metadata": {}, "source": [ "Note that the passed `llm_temperature` entry in the dict has the same key as the `id` of the `ConfigurableField`.\n", "\n", "We can also do this to affect just one step that's part of a chain:" ] }, { "cell_type": "code", "execution_count": 4, "id": "e75ae678", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='27', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-ecd4cadd-1b72-4f92-b9a0-15e08091f537-0')" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prompt = PromptTemplate.from_template(\"Pick a random number above {x}\")\n", "chain = prompt | model\n", "\n", "chain.invoke({\"x\": 0})" ] }, { "cell_type": "code", "execution_count": 5, "id": "c09fac15", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='35', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 14, 'total_tokens': 15}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-a916602b-3460-46d3-a4a8-7c926ec747c0-0')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.with_config(configurable={\"llm_temperature\": 0.9}).invoke({\"x\": 0})" ] }, { "cell_type": "markdown", "id": "fb9637d0", "metadata": {}, "source": [ "### With HubRunnables\n", "\n", "This is useful to allow for switching of prompts" ] }, { "cell_type": "code", "execution_count": 6, "id": "9a9ea077", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ChatPromptValue(messages=[HumanMessage(content=\"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. 
Use three sentences maximum and keep the answer concise.\\nQuestion: foo \\nContext: bar \\nAnswer:\")])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.runnables.hub import HubRunnable\n", "\n", "prompt = HubRunnable(\"rlm/rag-prompt\").configurable_fields(\n", " owner_repo_commit=ConfigurableField(\n", " id=\"hub_commit\",\n", " name=\"Hub Commit\",\n", " description=\"The Hub commit to pull from\",\n", " )\n", ")\n", "\n", "prompt.invoke({\"question\": \"foo\", \"context\": \"bar\"})" ] }, { "cell_type": "code", "execution_count": 7, "id": "f33f3cf2", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ChatPromptValue(messages=[HumanMessage(content=\"[INST]<<SYS>> You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.<</SYS>> \\nQuestion: foo \\nContext: bar \\nAnswer: [/INST]\")])" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prompt.with_config(configurable={\"hub_commit\": \"rlm/rag-prompt-llama\"}).invoke(\n", " {\"question\": \"foo\", \"context\": \"bar\"}\n", ")" ] }, { "cell_type": "markdown", "id": "79d51519", "metadata": {}, "source": [ "## Configurable Alternatives\n", "\n" ] }, { "cell_type": "markdown", "id": "ac733d35", "metadata": {}, "source": [ "The `configurable_alternatives()` method allows us to swap out steps in a chain with an alternative. Below, we swap out one chat model for another:" ] }, { "cell_type": "code", "execution_count": 8, "id": "3db59f45", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n", "You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade --quiet langchain-anthropic\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 18, "id": "71248a9f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Here's a bear joke for you:\\n\\nWhy don't bears wear socks? \\nBecause they have bear feet!\\n\\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. 
Let me know if you'd like to hear another one!\", response_metadata={'id': 'msg_018edUHh5fUbWdiimhrC3dZD', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-775bc58c-28d7-4e6b-a268-48fa6661f02f-0')" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_core.runnables import ConfigurableField\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatAnthropic(\n", " model=\"claude-3-haiku-20240307\", temperature=0\n", ").configurable_alternatives(\n", " # This gives this field an id\n", " # When configuring the end runnable, we can then use this id to configure this field\n", " ConfigurableField(id=\"llm\"),\n", " # This sets a default_key.\n", " # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used\n", " default_key=\"anthropic\",\n", " # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`\n", " openai=ChatOpenAI(),\n", " # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model=\"gpt-4\")`\n", " gpt4=ChatOpenAI(model=\"gpt-4\"),\n", " # You can add more configuration options here\n", ")\n", "prompt = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n", "chain = prompt | llm\n", "\n", "# By default it will call Anthropic\n", "chain.invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "code", "execution_count": 19, "id": "48b45337", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Why don't bears like fast food?\\n\\nBecause they can't catch it!\", response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 13, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-7bdaa992-19c9-4f0d-9a0c-1f326bc992d4-0')" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# We can use `.with_config(configurable={\"llm\": \"openai\"})` to specify an llm to use\n", "chain.with_config(configurable={\"llm\": \"openai\"}).invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "code", "execution_count": 20, "id": "42647fb7", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Here's a bear joke for you:\\n\\nWhy don't bears wear socks? \\nBecause they have bear feet!\\n\\nHow's that? I tried to come up with a simple, silly pun-based joke about bears. Puns and wordplay are a common way to create humorous bear jokes. 
Let me know if you'd like to hear another one!\", response_metadata={'id': 'msg_01BZvbmnEPGBtcxRWETCHkct', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 80}}, id='run-59b6ee44-a1cd-41b8-a026-28ee67cdd718-0')" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# If we use the `default_key` then it uses the default\n", "chain.with_config(configurable={\"llm\": \"anthropic\"}).invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "id": "a9134559", "metadata": {}, "source": [ "### With Prompts\n", "\n", "We can do a similar thing, but alternate between prompts\n" ] }, { "cell_type": "code", "execution_count": 22, "id": "9f6a7c6c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Here's a bear joke for you:\\n\\nWhy don't bears wear socks? \\nBecause they have bear feet!\", response_metadata={'id': 'msg_01DtM1cssjNFZYgeS3gMZ49H', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 28}}, id='run-8199af7d-ea31-443d-b064-483693f2e0a1-0')" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm = ChatAnthropic(model=\"claude-3-haiku-20240307\", temperature=0)\n", "prompt = PromptTemplate.from_template(\n", " \"Tell me a joke about {topic}\"\n", ").configurable_alternatives(\n", " # This gives this field an id\n", " # When configuring the end runnable, we can then use this id to configure this field\n", " ConfigurableField(id=\"prompt\"),\n", " # This sets a default_key.\n", " # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used\n", " default_key=\"joke\",\n", " # This adds a new option, with name `poem`\n", " poem=PromptTemplate.from_template(\"Write a short poem about {topic}\"),\n", " # You can add more configuration options here\n", ")\n", "chain = prompt | llm\n", "\n", "# By default it will write a joke\n", "chain.invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "code", "execution_count": 23, "id": "927297a1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Here is a short poem about bears:\\n\\nMajestic bears, strong and true,\\nRoaming the forests, wild and free.\\nPowerful paws, fur soft and brown,\\nCommanding respect, nature's crown.\\n\\nForaging for berries, fishing streams,\\nProtecting their young, fierce and keen.\\nMighty bears, a sight to behold,\\nGuardians of the wilderness, untold.\\n\\nIn the wild they reign supreme,\\nEmbodying nature's grand theme.\\nBears, a symbol of strength and grace,\\nCaptivating all who see their face.\", response_metadata={'id': 'msg_01Wck3qPxrjURtutvtodaJFn', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 13, 'output_tokens': 134}}, id='run-69414a1e-51d7-4bec-a307-b34b7d61025e-0')" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# We can configure it write a poem\n", "chain.with_config(configurable={\"prompt\": \"poem\"}).invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "id": "0c77124e", "metadata": {}, "source": [ "### With Prompts and LLMs\n", "\n", "We can also have multiple things configurable!\n", "Here's an example doing that with both prompts and LLMs." 
] }, { "cell_type": "code", "execution_count": 25, "id": "97538c23", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"In the forest deep and wide,\\nBears roam with grace and pride.\\nWith fur as dark as night,\\nThey rule the land with all their might.\\n\\nIn winter's chill, they hibernate,\\nIn spring they emerge, hungry and great.\\nWith claws sharp and eyes so keen,\\nThey hunt for food, fierce and lean.\\n\\nBut beneath their tough exterior,\\nLies a gentle heart, warm and superior.\\nThey love their cubs with all their might,\\nProtecting them through day and night.\\n\\nSo let us admire these majestic creatures,\\nIn awe of their strength and features.\\nFor in the wild, they reign supreme,\\nThe mighty bears, a timeless dream.\", response_metadata={'token_usage': {'completion_tokens': 133, 'prompt_tokens': 13, 'total_tokens': 146}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-5eec0b96-d580-49fd-ac4e-e32a0803b49b-0')" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm = ChatAnthropic(\n", " model=\"claude-3-haiku-20240307\", temperature=0\n", ").configurable_alternatives(\n", " # This gives this field an id\n", " # When configuring the end runnable, we can then use this id to configure this field\n", " ConfigurableField(id=\"llm\"),\n", " # This sets a default_key.\n", " # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used\n", " default_key=\"anthropic\",\n", " # This adds a new option, with name `openai` that is equal to `ChatOpenAI()`\n", " openai=ChatOpenAI(),\n", " # This adds a new option, with name `gpt4` that is equal to `ChatOpenAI(model=\"gpt-4\")`\n", " gpt4=ChatOpenAI(model=\"gpt-4\"),\n", " # You can add more configuration options here\n", ")\n", "prompt = PromptTemplate.from_template(\n", " \"Tell me a joke about {topic}\"\n", ").configurable_alternatives(\n", " # This gives this field an id\n", " # When configuring the end runnable, we can then use this id to configure this field\n", " ConfigurableField(id=\"prompt\"),\n", " # This sets a default_key.\n", " # If we specify this key, the default LLM (ChatAnthropic initialized above) will be used\n", " default_key=\"joke\",\n", " # This adds a new option, with name `poem`\n", " poem=PromptTemplate.from_template(\"Write a short poem about {topic}\"),\n", " # You can add more configuration options here\n", ")\n", "chain = prompt | llm\n", "\n", "# We can configure it write a poem with OpenAI\n", "chain.with_config(configurable={\"prompt\": \"poem\", \"llm\": \"openai\"}).invoke(\n", " {\"topic\": \"bears\"}\n", ")" ] }, { "cell_type": "code", "execution_count": 26, "id": "e4ee9fbc", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 13, 'total_tokens': 26}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-c1b14c9c-4988-49b8-9363-15bfd479973a-0')" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# We can always just configure only one if we want\n", "chain.with_config(configurable={\"llm\": \"openai\"}).invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "id": "02fc4841", "metadata": {}, "source": [ "### Saving configurations\n", "\n", "We can 
also easily save configured chains as their own objects" ] }, { "cell_type": "code", "execution_count": 27, "id": "5cf53202", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=\"Why did the bear break up with his girlfriend? \\nBecause he couldn't bear the relationship anymore!\", response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 13, 'total_tokens': 33}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-391ebd55-9137-458b-9a11-97acaff6a892-0')" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "openai_joke = chain.with_config(configurable={\"llm\": \"openai\"})\n", "\n", "openai_joke.invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "id": "76702b0e", "metadata": {}, "source": [ "## Next steps\n", "\n", "You now know how to configure a chain's internal steps at runtime.\n", "\n", "To learn more, see the other how-to guides on runnables in this section, including:\n", "\n", "- Using [.bind()](/docs/how_to/binding) as a simpler way to set a runnable's runtime parameters" ] }, { "cell_type": "code", "execution_count": null, "id": "a43e3b70", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
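The runtime-configuration notebook above demonstrates `configurable_fields` and `configurable_alternatives` separately. As a sketch (assuming `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set, and that the two methods compose the way the guide implies), a single chain can expose both a tunable temperature on the default model and a runtime-selectable alternative model:

```python
# Sketch under the assumption that the API keys are set and that
# configurable_fields and configurable_alternatives can be chained as shown.
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

llm = (
    ChatAnthropic(model="claude-3-haiku-20240307", temperature=0)
    .configurable_fields(
        temperature=ConfigurableField(
            id="llm_temperature",
            name="LLM Temperature",
            description="Sampling temperature for the default model",
        )
    )
    .configurable_alternatives(
        ConfigurableField(id="llm"),
        default_key="anthropic",
        openai=ChatOpenAI(),
    )
)

chain = PromptTemplate.from_template("Tell me a joke about {topic}") | llm

# Default model at temperature 0:
chain.invoke({"topic": "bears"})

# Same chain, hotter sampling on the default model:
chain.with_config(configurable={"llm_temperature": 0.9}).invoke({"topic": "bears"})

# Or swap the model entirely at runtime:
chain.with_config(configurable={"llm": "openai"}).invoke({"topic": "bears"})
```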
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/contextual_compression.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "612eac0a", "metadata": {}, "source": [ "# How to do retrieval with contextual compression\n", "\n", "One challenge with retrieval is that usually you don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.\n", "\n", "Contextual compression is meant to fix this. The idea is simple: instead of immediately returning retrieved documents as-is, you can compress them using the context of the given query, so that only the relevant information is returned. “Compressing” here refers to both compressing the contents of an individual document and filtering out documents wholesale.\n", "\n", "To use the Contextual Compression Retriever, you'll need:\n", "\n", "- a base retriever\n", "- a Document Compressor\n", "\n", "The Contextual Compression Retriever passes queries to the base retriever, takes the initial documents and passes them through the Document Compressor. The Document Compressor takes a list of documents and shortens it by reducing the contents of documents or dropping documents altogether.\n", "\n", "## Get started" ] }, { "cell_type": "code", "execution_count": 1, "id": "e0029369", "metadata": {}, "outputs": [], "source": [ "# Helper function for printing docs\n", "\n", "\n", "def pretty_print_docs(docs):\n", " print(\n", " f\"\\n{'-' * 100}\\n\".join(\n", " [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n", " )\n", " )" ] }, { "cell_type": "markdown", "id": "9d2360fc", "metadata": {}, "source": [ "## Using a vanilla vector store retriever\n", "Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can see that given an example question our retriever returns one or two relevant docs and a few irrelevant docs. And even the relevant docs have a lot of irrelevant information in them.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "25c26947-958d-4219-8ca0-daa3a51bd344", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", "\n", "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", "\n", "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", "\n", "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n", "----------------------------------------------------------------------------------------------------\n", "Document 2:\n", "\n", "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. 
Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n", "\n", "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n", "\n", "We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n", "\n", "We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n", "\n", "We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n", "\n", "We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n", "----------------------------------------------------------------------------------------------------\n", "Document 3:\n", "\n", "And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n", "\n", "As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n", "\n", "While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n", "\n", "And soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n", "\n", "So tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n", "\n", "First, beat the opioid epidemic.\n", "----------------------------------------------------------------------------------------------------\n", "Document 4:\n", "\n", "Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n", "\n", "And as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n", "\n", "That ends on my watch. \n", "\n", "Medicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n", "\n", "We’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n", "\n", "Let’s pass the Paycheck Fairness Act and paid leave. \n", "\n", "Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. 
\n", "\n", "Let’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.\n" ] } ], "source": [ "from langchain_community.document_loaders import TextLoader\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "documents = TextLoader(\"state_of_the_union.txt\").load()\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n", "texts = text_splitter.split_documents(documents)\n", "retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()\n", "\n", "docs = retriever.invoke(\"What did the president say about Ketanji Brown Jackson\")\n", "pretty_print_docs(docs)" ] }, { "cell_type": "markdown", "id": "3473c553", "metadata": {}, "source": [ "## Adding contextual compression with an `LLMChainExtractor`\n", "Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll add an `LLMChainExtractor`, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "d83e3c63-bcde-43e9-998e-35bf2ebef49b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson.\n" ] } ], "source": [ "from langchain.retrievers import ContextualCompressionRetriever\n", "from langchain.retrievers.document_compressors import LLMChainExtractor\n", "from langchain_openai import OpenAI\n", "\n", "llm = OpenAI(temperature=0)\n", "compressor = LLMChainExtractor.from_llm(llm)\n", "compression_retriever = ContextualCompressionRetriever(\n", " base_compressor=compressor, base_retriever=retriever\n", ")\n", "\n", "compressed_docs = compression_retriever.invoke(\n", " \"What did the president say about Ketanji Jackson Brown\"\n", ")\n", "pretty_print_docs(compressed_docs)" ] }, { "cell_type": "markdown", "id": "8a97cd9b", "metadata": {}, "source": [ "## More built-in compressors: filters\n", "### `LLMChainFilter`\n", "The `LLMChainFilter` is slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents.\n", "\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "39b13654-01d9-4006-9550-5f3e77cb4f23", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", "\n", "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", "\n", "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", "\n", "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. 
One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n" ] } ], "source": [ "from langchain.retrievers.document_compressors import LLMChainFilter\n", "\n", "_filter = LLMChainFilter.from_llm(llm)\n", "compression_retriever = ContextualCompressionRetriever(\n", " base_compressor=_filter, base_retriever=retriever\n", ")\n", "\n", "compressed_docs = compression_retriever.invoke(\n", " \"What did the president say about Ketanji Jackson Brown\"\n", ")\n", "pretty_print_docs(compressed_docs)" ] }, { "cell_type": "markdown", "id": "7194da42", "metadata": {}, "source": [ "### `EmbeddingsFilter`\n", "\n", "Making an extra LLM call over each retrieved document is expensive and slow. The `EmbeddingsFilter` provides a cheaper and faster option by embedding the documents and query and only returning those documents which have sufficiently similar embeddings to the query.\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "ee8d9486-db9a-4e24-aa11-ae40f34cc908", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n", "\n", "Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n", "\n", "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", "\n", "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.\n", "----------------------------------------------------------------------------------------------------\n", "Document 2:\n", "\n", "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n", "\n", "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n", "\n", "We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n", "\n", "We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n", "\n", "We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. 
\n", "\n", "We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.\n" ] } ], "source": [ "from langchain.retrievers.document_compressors import EmbeddingsFilter\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "embeddings = OpenAIEmbeddings()\n", "embeddings_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)\n", "compression_retriever = ContextualCompressionRetriever(\n", " base_compressor=embeddings_filter, base_retriever=retriever\n", ")\n", "\n", "compressed_docs = compression_retriever.invoke(\n", " \"What did the president say about Ketanji Jackson Brown\"\n", ")\n", "pretty_print_docs(compressed_docs)" ] }, { "cell_type": "markdown", "id": "2074462b", "metadata": {}, "source": [ "## Stringing compressors and document transformers together\n", "Using the `DocumentCompressorPipeline` we can also easily combine multiple compressors in sequence. Along with compressors we can add `BaseDocumentTransformer`s to our pipeline, which don't perform any contextual compression but simply perform some transformation on a set of documents. For example `TextSplitter`s can be used as document transformers to split documents into smaller pieces, and the `EmbeddingsRedundantFilter` can be used to filter out redundant documents based on embedding similarity between documents.\n", "\n", "Below we create a compressor pipeline by first splitting our docs into smaller chunks, then removing redundant documents, and then filtering based on relevance to the query.\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "617a1756", "metadata": {}, "outputs": [], "source": [ "from langchain.retrievers.document_compressors import DocumentCompressorPipeline\n", "from langchain_community.document_transformers import EmbeddingsRedundantFilter\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=\". \")\n", "redundant_filter = EmbeddingsRedundantFilter(embeddings=embeddings)\n", "relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76)\n", "pipeline_compressor = DocumentCompressorPipeline(\n", " transformers=[splitter, redundant_filter, relevant_filter]\n", ")" ] }, { "cell_type": "code", "execution_count": 8, "id": "40b9c1db-7ac2-4257-935a-b107da50bb43", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Document 1:\n", "\n", "One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n", "\n", "And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson\n", "----------------------------------------------------------------------------------------------------\n", "Document 2:\n", "\n", "As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n", "\n", "While it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year\n", "----------------------------------------------------------------------------------------------------\n", "Document 3:\n", "\n", "A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. 
A consensus builder\n", "----------------------------------------------------------------------------------------------------\n", "Document 4:\n", "\n", "Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n", "\n", "And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n", "\n", "We can do both\n" ] } ], "source": [ "compression_retriever = ContextualCompressionRetriever(\n", " base_compressor=pipeline_compressor, base_retriever=retriever\n", ")\n", "\n", "compressed_docs = compression_retriever.invoke(\n", " \"What did the president say about Ketanji Jackson Brown\"\n", ")\n", "pretty_print_docs(compressed_docs)" ] }, { "cell_type": "code", "execution_count": null, "id": "78581dcb", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
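For readers starting from scratch, here is a minimal end-to-end sketch of the compressor pipeline described above. It assumes a local `state_of_the_union.txt` file, a FAISS vector store, and OpenAI embeddings; the loader, vector store, chunk sizes, and similarity threshold are illustrative choices rather than requirements:

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    EmbeddingsFilter,
)
from langchain_community.document_loaders import TextLoader
from langchain_community.document_transformers import EmbeddingsRedundantFilter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

# Base retriever: a plain similarity-search retriever over the split document.
docs = TextLoader("state_of_the_union.txt").load()
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
embeddings = OpenAIEmbeddings()
retriever = FAISS.from_documents(texts, embeddings).as_retriever()

# Compressor pipeline: split -> drop near-duplicate chunks -> keep only query-relevant chunks.
pipeline_compressor = DocumentCompressorPipeline(
    transformers=[
        CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". "),
        EmbeddingsRedundantFilter(embeddings=embeddings),
        EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76),
    ]
)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=pipeline_compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.invoke(
    "What did the president say about Ketanji Jackson Brown"
)
for i, doc in enumerate(compressed_docs, start=1):
    print(f"Document {i}:\n\n{doc.page_content}\n" + "-" * 100)
```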
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/custom_callbacks.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to create custom callback handlers\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [Callbacks](/docs/concepts/#callbacks)\n", "\n", ":::\n", "\n", "LangChain has some built-in callback handlers, but you will often want to create your own handlers with custom logic.\n", "\n", "To create a custom callback handler, we need to determine the [event(s)](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) we want our callback handler to handle as well as what we want our callback handler to do when the event is triggered. Then all we need to do is attach the callback handler to the object, for example via [the constructor](/docs/how_to/callbacks_constructor) or [at runtime](/docs/how_to/callbacks_runtime).\n", "\n", "In the example below, we'll implement streaming with a custom handler.\n", "\n", "In our custom callback handler `MyCustomHandler`, we implement the `on_llm_new_token` handler to print the token we have just received. We then attach our custom handler to the model object as a constructor callback." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "%pip install -qU langchain langchain_anthropic\n", "\n", "import getpass\n", "import os\n", "\n", "os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "My custom handler, token: Here\n", "My custom handler, token: 's\n", "My custom handler, token: a\n", "My custom handler, token: bear\n", "My custom handler, token: joke\n", "My custom handler, token: for\n", "My custom handler, token: you\n", "My custom handler, token: :\n", "My custom handler, token: \n", "\n", "Why\n", "My custom handler, token: di\n", "My custom handler, token: d the\n", "My custom handler, token: bear\n", "My custom handler, token: dissol\n", "My custom handler, token: ve\n", "My custom handler, token: in\n", "My custom handler, token: water\n", "My custom handler, token: ?\n", "My custom handler, token: \n", "Because\n", "My custom handler, token: it\n", "My custom handler, token: was\n", "My custom handler, token: a\n", "My custom handler, token: polar\n", "My custom handler, token: bear\n", "My custom handler, token: !\n" ] } ], "source": [ "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.callbacks import BaseCallbackHandler\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "\n", "class MyCustomHandler(BaseCallbackHandler):\n", " def on_llm_new_token(self, token: str, **kwargs) -> None:\n", " print(f\"My custom handler, token: {token}\")\n", "\n", "\n", "prompt = ChatPromptTemplate.from_messages([\"Tell me a joke about {animal}\"])\n", "\n", "# To enable streaming, we pass in `streaming=True` to the ChatModel constructor\n", "# Additionally, we pass in our custom handler as a list to the callbacks parameter\n", "model = ChatAnthropic(\n", " model=\"claude-3-sonnet-20240229\", streaming=True, callbacks=[MyCustomHandler()]\n", ")\n", "\n", "chain = prompt | model\n", "\n", "response = chain.invoke({\"animal\": \"bears\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see [this reference 
page](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html#langchain-core-callbacks-base-basecallbackhandler) for a list of events you can handle. Note that the `handle_chain_*` events run for most LCEL runnables.\n", "\n", "## Next steps\n", "\n", "You've now learned how to create your own custom callback handlers.\n", "\n", "Next, check out the other how-to guides in this section, such as [how to attach callbacks to a runnable](/docs/how_to/callbacks_attach)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.5" } }, "nbformat": 4, "nbformat_minor": 2 }
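The notebook above attaches the handler in the model constructor; the same handler can also be supplied at runtime through the `config` argument, which scopes it to a single invocation. A small sketch, reusing the notebook's illustrative model choice:

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.prompts import ChatPromptTemplate


class MyCustomHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per streamed token.
        print(f"My custom handler, token: {token}")


prompt = ChatPromptTemplate.from_messages(["Tell me a joke about {animal}"])
model = ChatAnthropic(model="claude-3-sonnet-20240229", streaming=True)
chain = prompt | model

# The handler applies only to this call; other invocations of `chain` are unaffected.
response = chain.invoke({"animal": "bears"}, config={"callbacks": [MyCustomHandler()]})
```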
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/custom_chat_model.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "e3da9a3f-f583-4ba6-994e-0e8c1158f5eb", "metadata": {}, "source": [ "# How to create a custom chat model class\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "\n", ":::\n", "\n", "In this guide, we'll learn how to create a custom chat model using LangChain abstractions.\n", "\n", "Wrapping your LLM with the standard [`BaseChatModel`](https://api.python.langchain.com/en/latest/language_models/langchain_core.language_models.chat_models.BaseChatModel.html) interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n", "\n", "As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc.\n", "\n", "## Inputs and outputs\n", "\n", "First, we need to talk about **messages**, which are the inputs and outputs of chat models.\n", "\n", "### Messages\n", "\n", "Chat models take messages as inputs and return a message as output. \n", "\n", "LangChain has a few [built-in message types](/docs/concepts/#message-types):\n", "\n", "| Message Type | Description |\n", "|-----------------------|-------------------------------------------------------------------------------------------------|\n", "| `SystemMessage` | Used for priming AI behavior, usually passed in as the first of a sequence of input messages. |\n", "| `HumanMessage` | Represents a message from a person interacting with the chat model. |\n", "| `AIMessage` | Represents a message from the chat model. This can be either text or a request to invoke a tool.|\n", "| `FunctionMessage` / `ToolMessage` | Message for passing the results of tool invocation back to the model. |\n", "| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |\n", "\n", "\n", "::: {.callout-note}\n", "`ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles.\n", "\n", "This is a rapidly developing field and as more models add function calling capabilities. Expect that there will be additions to this schema.\n", ":::" ] }, { "cell_type": "code", "execution_count": 1, "id": "c5046e6a-8b09-4a99-b6e6-7a605aac5738", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.messages import (\n", " AIMessage,\n", " BaseMessage,\n", " FunctionMessage,\n", " HumanMessage,\n", " SystemMessage,\n", " ToolMessage,\n", ")" ] }, { "cell_type": "markdown", "id": "53033447-8260-4f53-bd6f-b2f744e04e75", "metadata": {}, "source": [ "### Streaming Variant\n", "\n", "All the chat messages have a streaming variant that contains `Chunk` in the name." ] }, { "cell_type": "code", "execution_count": 2, "id": "d4656e9d-bfa1-4703-8f79-762fe6421294", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.messages import (\n", " AIMessageChunk,\n", " FunctionMessageChunk,\n", " HumanMessageChunk,\n", " SystemMessageChunk,\n", " ToolMessageChunk,\n", ")" ] }, { "cell_type": "markdown", "id": "81ebf3f4-c760-4898-b921-fdb469453d4a", "metadata": {}, "source": [ "These chunks are used when streaming output from chat models, and they all define an additive property!" 
] }, { "cell_type": "code", "execution_count": 3, "id": "9c15c299-6f8a-49cf-a072-09924fd44396", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "AIMessageChunk(content='Hello World!')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "AIMessageChunk(content=\"Hello\") + AIMessageChunk(content=\" World!\")" ] }, { "cell_type": "markdown", "id": "bbfebea1", "metadata": {}, "source": [ "## Base Chat Model\n", "\n", "Let's implement a chat model that echoes back the first `n` characetrs of the last message in the prompt!\n", "\n", "To do so, we will inherit from `BaseChatModel` and we'll need to implement the following:\n", "\n", "| Method/Property | Description | Required/Optional |\n", "|------------------------------------|-------------------------------------------------------------------|--------------------|\n", "| `_generate` | Use to generate a chat result from a prompt | Required |\n", "| `_llm_type` (property) | Used to uniquely identify the type of the model. Used for logging.| Required |\n", "| `_identifying_params` (property) | Represent model parameterization for tracing purposes. | Optional |\n", "| `_stream` | Use to implement streaming. | Optional |\n", "| `_agenerate` | Use to implement a native async method. | Optional |\n", "| `_astream` | Use to implement async version of `_stream`. | Optional |\n", "\n", "\n", ":::{.callout-tip}\n", "The `_astream` implementation uses `run_in_executor` to launch the sync `_stream` in a separate thread if `_stream` is implemented, otherwise it fallsback to use `_agenerate`.\n", "\n", "You can use this trick if you want to reuse the `_stream` implementation, but if you're able to implement code that's natively async that's a better solution since that code will run with less overhead.\n", ":::" ] }, { "cell_type": "markdown", "id": "8e7047bd-c235-46f6-85e1-d6d7e0868eb1", "metadata": {}, "source": [ "### Implementation" ] }, { "cell_type": "code", "execution_count": 4, "id": "25ba32e5-5a6d-49f4-bb68-911827b84d61", "metadata": { "tags": [] }, "outputs": [], "source": [ "from typing import Any, AsyncIterator, Dict, Iterator, List, Optional\n", "\n", "from langchain_core.callbacks import (\n", " AsyncCallbackManagerForLLMRun,\n", " CallbackManagerForLLMRun,\n", ")\n", "from langchain_core.language_models import BaseChatModel, SimpleChatModel\n", "from langchain_core.messages import AIMessageChunk, BaseMessage, HumanMessage\n", "from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult\n", "from langchain_core.runnables import run_in_executor\n", "\n", "\n", "class CustomChatModelAdvanced(BaseChatModel):\n", " \"\"\"A custom chat model that echoes the first `n` characters of the input.\n", "\n", " When contributing an implementation to LangChain, carefully document\n", " the model including the initialization parameters, include\n", " an example of how to initialize the model and include any relevant\n", " links to the underlying models documentation or API.\n", "\n", " Example:\n", "\n", " .. 
code-block:: python\n", "\n", " model = CustomChatModel(n=2)\n", " result = model.invoke([HumanMessage(content=\"hello\")])\n", " result = model.batch([[HumanMessage(content=\"hello\")],\n", " [HumanMessage(content=\"world\")]])\n", " \"\"\"\n", "\n", " model_name: str\n", " \"\"\"The name of the model\"\"\"\n", " n: int\n", " \"\"\"The number of characters from the last message of the prompt to be echoed.\"\"\"\n", "\n", " def _generate(\n", " self,\n", " messages: List[BaseMessage],\n", " stop: Optional[List[str]] = None,\n", " run_manager: Optional[CallbackManagerForLLMRun] = None,\n", " **kwargs: Any,\n", " ) -> ChatResult:\n", " \"\"\"Override the _generate method to implement the chat model logic.\n", "\n", " This can be a call to an API, a call to a local model, or any other\n", " implementation that generates a response to the input prompt.\n", "\n", " Args:\n", " messages: the prompt composed of a list of messages.\n", " stop: a list of strings on which the model should stop generating.\n", " If generation stops due to a stop token, the stop token itself\n", " SHOULD BE INCLUDED as part of the output. This is not enforced\n", " across models right now, but it's a good practice to follow since\n", " it makes it much easier to parse the output of the model\n", " downstream and understand why generation stopped.\n", " run_manager: A run manager with callbacks for the LLM.\n", " \"\"\"\n", " # Replace this with actual logic to generate a response from a list\n", " # of messages.\n", " last_message = messages[-1]\n", " tokens = last_message.content[: self.n]\n", " message = AIMessage(\n", " content=tokens,\n", " additional_kwargs={}, # Used to add additional payload (e.g., function calling request)\n", " response_metadata={ # Use for response metadata\n", " \"time_in_seconds\": 3,\n", " },\n", " )\n", " ##\n", "\n", " generation = ChatGeneration(message=message)\n", " return ChatResult(generations=[generation])\n", "\n", " def _stream(\n", " self,\n", " messages: List[BaseMessage],\n", " stop: Optional[List[str]] = None,\n", " run_manager: Optional[CallbackManagerForLLMRun] = None,\n", " **kwargs: Any,\n", " ) -> Iterator[ChatGenerationChunk]:\n", " \"\"\"Stream the output of the model.\n", "\n", " This method should be implemented if the model can generate output\n", " in a streaming fashion. If the model does not support streaming,\n", " do not implement it. In that case streaming requests will be automatically\n", " handled by the _generate method.\n", "\n", " Args:\n", " messages: the prompt composed of a list of messages.\n", " stop: a list of strings on which the model should stop generating.\n", " If generation stops due to a stop token, the stop token itself\n", " SHOULD BE INCLUDED as part of the output. 
This is not enforced\n", " across models right now, but it's a good practice to follow since\n", " it makes it much easier to parse the output of the model\n", " downstream and understand why generation stopped.\n", " run_manager: A run manager with callbacks for the LLM.\n", " \"\"\"\n", " last_message = messages[-1]\n", " tokens = last_message.content[: self.n]\n", "\n", " for token in tokens:\n", " chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))\n", "\n", " if run_manager:\n", " # This is optional in newer versions of LangChain\n", " # The on_llm_new_token will be called automatically\n", " run_manager.on_llm_new_token(token, chunk=chunk)\n", "\n", " yield chunk\n", "\n", " # Let's add some other information (e.g., response metadata)\n", " chunk = ChatGenerationChunk(\n", " message=AIMessageChunk(content=\"\", response_metadata={\"time_in_sec\": 3})\n", " )\n", " if run_manager:\n", " # This is optional in newer versions of LangChain\n", " # The on_llm_new_token will be called automatically\n", " run_manager.on_llm_new_token(token, chunk=chunk)\n", " yield chunk\n", "\n", " @property\n", " def _llm_type(self) -> str:\n", " \"\"\"Get the type of language model used by this chat model.\"\"\"\n", " return \"echoing-chat-model-advanced\"\n", "\n", " @property\n", " def _identifying_params(self) -> Dict[str, Any]:\n", " \"\"\"Return a dictionary of identifying parameters.\n", "\n", " This information is used by the LangChain callback system, which\n", " is used for tracing purposes make it possible to monitor LLMs.\n", " \"\"\"\n", " return {\n", " # The model name allows users to specify custom token counting\n", " # rules in LLM monitoring applications (e.g., in LangSmith users\n", " # can provide per token pricing for their model and monitor\n", " # costs for the given LLM.)\n", " \"model_name\": self.model_name,\n", " }" ] }, { "cell_type": "markdown", "id": "1e9af284-f2d3-44e2-ac6a-09b73d89ada3", "metadata": {}, "source": [ "### Let's test it 🧪\n", "\n", "The chat model will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support!" 
] }, { "cell_type": "code", "execution_count": 6, "id": "27689f30-dcd2-466b-ba9d-f60b7d434110", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='Meo', response_metadata={'time_in_seconds': 3}, id='run-ddb42bd6-4fdd-4bd2-8be5-e11b67d3ac29-0')" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = CustomChatModelAdvanced(n=3, model_name=\"my_custom_model\")\n", "\n", "model.invoke(\n", " [\n", " HumanMessage(content=\"hello!\"),\n", " AIMessage(content=\"Hi there human!\"),\n", " HumanMessage(content=\"Meow!\"),\n", " ]\n", ")" ] }, { "cell_type": "code", "execution_count": 7, "id": "406436df-31bf-466b-9c3d-39db9d6b6407", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='hel', response_metadata={'time_in_seconds': 3}, id='run-4d3cc912-44aa-454b-977b-ca02be06c12e-0')" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.invoke(\"hello\")" ] }, { "cell_type": "code", "execution_count": 8, "id": "a72ffa46-6004-41ef-bbe4-56fa17a029e2", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "[AIMessage(content='hel', response_metadata={'time_in_seconds': 3}, id='run-9620e228-1912-4582-8aa1-176813afec49-0'),\n", " AIMessage(content='goo', response_metadata={'time_in_seconds': 3}, id='run-1ce8cdf8-6f75-448e-82f7-1bb4a121df93-0')]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.batch([\"hello\", \"goodbye\"])" ] }, { "cell_type": "code", "execution_count": 9, "id": "3633be2c-2ea0-42f9-a72f-3b5240690b55", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "c|a|t||" ] } ], "source": [ "for chunk in model.stream(\"cat\"):\n", " print(chunk.content, end=\"|\")" ] }, { "cell_type": "markdown", "id": "3f8a7c42-aec4-4116-adf3-93133d409827", "metadata": {}, "source": [ "Please see the implementation of `_astream` in the model! If you do not implement it, then no output will stream.!" ] }, { "cell_type": "code", "execution_count": 10, "id": "b7d73995-eeab-48c6-a7d8-32c98ba29fc2", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "c|a|t||" ] } ], "source": [ "async for chunk in model.astream(\"cat\"):\n", " print(chunk.content, end=\"|\")" ] }, { "cell_type": "markdown", "id": "f80dc55b-d159-4527-9191-407a7c6d6042", "metadata": {}, "source": [ "Let's try to use the astream events API which will also help double check that all the callbacks were implemented!" 
] }, { "cell_type": "code", "execution_count": 11, "id": "17840eba-8ff4-4e73-8e4f-85f16eb1c9d0", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'event': 'on_chat_model_start', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'name': 'CustomChatModelAdvanced', 'tags': [], 'metadata': {}, 'data': {'input': 'cat'}}\n", "{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='c', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}\n", "{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='a', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}\n", "{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='t', id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}\n", "{'event': 'on_chat_model_stream', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'time_in_sec': 3}, id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}\n", "{'event': 'on_chat_model_end', 'name': 'CustomChatModelAdvanced', 'run_id': '125a2a16-b9cd-40de-aa08-8aa9180b07d0', 'tags': [], 'metadata': {}, 'data': {'output': AIMessageChunk(content='cat', response_metadata={'time_in_sec': 3}, id='run-125a2a16-b9cd-40de-aa08-8aa9180b07d0')}}\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.\n", " warn_beta(\n" ] } ], "source": [ "async for event in model.astream_events(\"cat\", version=\"v1\"):\n", " print(event)" ] }, { "cell_type": "markdown", "id": "44ee559b-b1da-4851-8c97-420ab394aff9", "metadata": {}, "source": [ "## Contributing\n", "\n", "We appreciate all chat model integration contributions. \n", "\n", "Here's a checklist to help make sure your contribution gets added to LangChain:\n", "\n", "Documentation:\n", "\n", "* The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://api.python.langchain.com/en/stable/langchain_api_reference.html).\n", "* The class doc-string for the model contains a link to the model API if the model is powered by a service.\n", "\n", "Tests:\n", "\n", "* [ ] Add unit or integration tests to the overridden methods. Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've over-ridden the corresponding code.\n", "\n", "\n", "Streaming (if you're implementing it):\n", "\n", "* [ ] Implement the _stream method to get streaming working\n", "\n", "Stop Token Behavior:\n", "\n", "* [ ] Stop token should be respected\n", "* [ ] Stop token should be INCLUDED as part of the response\n", "\n", "Secret API Keys:\n", "\n", "* [ ] If your model connects to an API it will likely accept API keys as part of its initialization. 
Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model.\n", "\n", "\n", "Identifying Params:\n", "\n", "* [ ] Include a `model_name` in identifying params\n", "\n", "\n", "Optimizations:\n", "\n", "Consider providing native async support to reduce the overhead from the model!\n", " \n", "* [ ] Provide a native async version of `_agenerate` (used by `ainvoke`)\n", "* [ ] Provide a native async version of `_astream` (used by `astream`)\n", "\n", "## Next steps\n", "\n", "You've now learned how to create your own custom chat models.\n", "\n", "Next, check out the other chat model how-to guides in this section, like [how to get a model to return structured output](/docs/how_to/structured_output) or [how to track chat model token usage](/docs/how_to/chat_token_usage_tracking)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
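To illustrate the native-async items in the checklist above, here is a minimal sketch (an illustration, not part of the original notebook) that subclasses the `CustomChatModelAdvanced` defined earlier and overrides `_agenerate`, so `ainvoke` and `abatch` no longer fall back to running the sync `_generate` in a worker thread:

```python
# Assumes CustomChatModelAdvanced from the notebook above is already defined.
from typing import Any, List, Optional

from langchain_core.callbacks import AsyncCallbackManagerForLLMRun
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class CustomChatModelAsync(CustomChatModelAdvanced):
    """The same echoing model, with a native async `_agenerate`."""

    async def _agenerate(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        # A real integration would `await` the provider's async API here.
        last_message = messages[-1]
        message = AIMessage(
            content=last_message.content[: self.n],
            response_metadata={"time_in_seconds": 3},
        )
        return ChatResult(generations=[ChatGeneration(message=message)])


# Usage (inside an event loop):
# await CustomChatModelAsync(n=3, model_name="my_custom_model").ainvoke("hello!")
```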
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/custom_llm.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "9e9b7651", "metadata": {}, "source": [ "# How to create a custom LLM class\n", "\n", "This notebook goes over how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than one that is supported in LangChain.\n", "\n", "Wrapping your LLM with the standard `LLM` interface allow you to use your LLM in existing LangChain programs with minimal code modifications!\n", "\n", "As an bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box, async support, the `astream_events` API, etc.\n", "\n", "## Implementation\n", "\n", "There are only two required things that a custom LLM needs to implement:\n", "\n", "\n", "| Method | Description |\n", "|---------------|---------------------------------------------------------------------------|\n", "| `_call` | Takes in a string and some optional stop words, and returns a string. Used by `invoke`. |\n", "| `_llm_type` | A property that returns a string, used for logging purposes only. \n", "\n", "\n", "\n", "Optional implementations: \n", "\n", "\n", "| Method | Description |\n", "|----------------------|-----------------------------------------------------------------------------------------------------------|\n", "| `_identifying_params` | Used to help with identifying the model and printing the LLM; should return a dictionary. This is a **@property**. |\n", "| `_acall` | Provides an async native implementation of `_call`, used by `ainvoke`. |\n", "| `_stream` | Method to stream the output token by token. |\n", "| `_astream` | Provides an async native implementation of `_stream`; in newer LangChain versions, defaults to `_stream`. |\n", "\n", "\n", "\n", "Let's implement a simple custom LLM that just returns the first n characters of the input." ] }, { "cell_type": "code", "execution_count": 1, "id": "2e9bb32f-6fd1-46ac-b32f-d175663710c0", "metadata": { "tags": [] }, "outputs": [], "source": [ "from typing import Any, Dict, Iterator, List, Mapping, Optional\n", "\n", "from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n", "from langchain_core.language_models.llms import LLM\n", "from langchain_core.outputs import GenerationChunk\n", "\n", "\n", "class CustomLLM(LLM):\n", " \"\"\"A custom chat model that echoes the first `n` characters of the input.\n", "\n", " When contributing an implementation to LangChain, carefully document\n", " the model including the initialization parameters, include\n", " an example of how to initialize the model and include any relevant\n", " links to the underlying models documentation or API.\n", "\n", " Example:\n", "\n", " .. code-block:: python\n", "\n", " model = CustomChatModel(n=2)\n", " result = model.invoke([HumanMessage(content=\"hello\")])\n", " result = model.batch([[HumanMessage(content=\"hello\")],\n", " [HumanMessage(content=\"world\")]])\n", " \"\"\"\n", "\n", " n: int\n", " \"\"\"The number of characters from the last message of the prompt to be echoed.\"\"\"\n", "\n", " def _call(\n", " self,\n", " prompt: str,\n", " stop: Optional[List[str]] = None,\n", " run_manager: Optional[CallbackManagerForLLMRun] = None,\n", " **kwargs: Any,\n", " ) -> str:\n", " \"\"\"Run the LLM on the given input.\n", "\n", " Override this method to implement the LLM logic.\n", "\n", " Args:\n", " prompt: The prompt to generate from.\n", " stop: Stop words to use when generating. 
Model output is cut off at the\n", " first occurrence of any of the stop substrings.\n", " If stop tokens are not supported consider raising NotImplementedError.\n", " run_manager: Callback manager for the run.\n", " **kwargs: Arbitrary additional keyword arguments. These are usually passed\n", " to the model provider API call.\n", "\n", " Returns:\n", " The model output as a string. Actual completions SHOULD NOT include the prompt.\n", " \"\"\"\n", " if stop is not None:\n", " raise ValueError(\"stop kwargs are not permitted.\")\n", " return prompt[: self.n]\n", "\n", " def _stream(\n", " self,\n", " prompt: str,\n", " stop: Optional[List[str]] = None,\n", " run_manager: Optional[CallbackManagerForLLMRun] = None,\n", " **kwargs: Any,\n", " ) -> Iterator[GenerationChunk]:\n", " \"\"\"Stream the LLM on the given prompt.\n", "\n", " This method should be overridden by subclasses that support streaming.\n", "\n", " If not implemented, the default behavior of calls to stream will be to\n", " fallback to the non-streaming version of the model and return\n", " the output as a single chunk.\n", "\n", " Args:\n", " prompt: The prompt to generate from.\n", " stop: Stop words to use when generating. Model output is cut off at the\n", " first occurrence of any of these substrings.\n", " run_manager: Callback manager for the run.\n", " **kwargs: Arbitrary additional keyword arguments. These are usually passed\n", " to the model provider API call.\n", "\n", " Returns:\n", " An iterator of GenerationChunks.\n", " \"\"\"\n", " for char in prompt[: self.n]:\n", " chunk = GenerationChunk(text=char)\n", " if run_manager:\n", " run_manager.on_llm_new_token(chunk.text, chunk=chunk)\n", "\n", " yield chunk\n", "\n", " @property\n", " def _identifying_params(self) -> Dict[str, Any]:\n", " \"\"\"Return a dictionary of identifying parameters.\"\"\"\n", " return {\n", " # The model name allows users to specify custom token counting\n", " # rules in LLM monitoring applications (e.g., in LangSmith users\n", " # can provide per token pricing for their model and monitor\n", " # costs for the given LLM.)\n", " \"model_name\": \"CustomChatModel\",\n", " }\n", "\n", " @property\n", " def _llm_type(self) -> str:\n", " \"\"\"Get the type of language model used by this chat model. Used for logging purposes only.\"\"\"\n", " return \"custom\"" ] }, { "cell_type": "markdown", "id": "f614fb7b-e476-4d81-821b-57a2ebebe21c", "metadata": { "tags": [] }, "source": [ "### Let's test it 🧪" ] }, { "cell_type": "markdown", "id": "e3feae15-4afc-49f4-8542-93867d4ea769", "metadata": { "tags": [] }, "source": [ "This LLM will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support!" 
] }, { "cell_type": "code", "execution_count": 2, "id": "dfff4a95-99b2-4dba-b80d-9c3855046ef1", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1mCustomLLM\u001b[0m\n", "Params: {'model_name': 'CustomChatModel'}\n" ] } ], "source": [ "llm = CustomLLM(n=5)\n", "print(llm)" ] }, { "cell_type": "code", "execution_count": 3, "id": "8cd49199", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'This '" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.invoke(\"This is a foobar thing\")" ] }, { "cell_type": "code", "execution_count": 4, "id": "511b3cb1-9c6f-49b6-9002-a2ec490632b0", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'world'" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await llm.ainvoke(\"world\")" ] }, { "cell_type": "code", "execution_count": 5, "id": "d9d5bec2-d60a-4ebd-a97d-ac32c98ab02f", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "['woof ', 'meow ']" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.batch([\"woof woof woof\", \"meow meow meow\"])" ] }, { "cell_type": "code", "execution_count": 6, "id": "fe246b29-7a93-4bef-8861-389445598c25", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "['woof ', 'meow ']" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await llm.abatch([\"woof woof woof\", \"meow meow meow\"])" ] }, { "cell_type": "code", "execution_count": 7, "id": "3a67c38f-b83b-4eb9-a231-441c55ee8c82", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "h|e|l|l|o|" ] } ], "source": [ "async for token in llm.astream(\"hello\"):\n", " print(token, end=\"|\", flush=True)" ] }, { "cell_type": "markdown", "id": "b62c282b-3a35-4529-aac4-2c2f0916790e", "metadata": {}, "source": [ "Let's confirm that in integrates nicely with other `LangChain` APIs." 
] }, { "cell_type": "code", "execution_count": 15, "id": "d5578e74-7fa8-4673-afee-7a59d442aaff", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.prompts import ChatPromptTemplate" ] }, { "cell_type": "code", "execution_count": 16, "id": "672ff664-8673-4832-9f4f-335253880141", "metadata": { "tags": [] }, "outputs": [], "source": [ "prompt = ChatPromptTemplate.from_messages(\n", " [(\"system\", \"you are a bot\"), (\"human\", \"{input}\")]\n", ")" ] }, { "cell_type": "code", "execution_count": 17, "id": "c400538a-9146-4c93-9fac-293d8f9ca6bf", "metadata": { "tags": [] }, "outputs": [], "source": [ "llm = CustomLLM(n=7)\n", "chain = prompt | llm" ] }, { "cell_type": "code", "execution_count": 18, "id": "080964af-3e2d-4573-85cb-0d7cc58a6f42", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'event': 'on_chain_start', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'name': 'RunnableSequence', 'tags': [], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}\n", "{'event': 'on_prompt_start', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}}}\n", "{'event': 'on_prompt_end', 'name': 'ChatPromptTemplate', 'run_id': '7e996251-a926-4344-809e-c425a9846d21', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'input': 'hello there!'}, 'output': ChatPromptValue(messages=[SystemMessage(content='you are a bot'), HumanMessage(content='hello there!')])}}\n", "{'event': 'on_llm_start', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'input': {'prompts': ['System: you are a bot\\nHuman: hello there!']}}}\n", "{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'S'}}\n", "{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'S'}}\n", "{'event': 'on_llm_stream', 'name': 'CustomLLM', 'run_id': 'a8766beb-10f4-41de-8750-3ea7cf0ca7e2', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': 'y'}}\n", "{'event': 'on_chain_stream', 'run_id': '05f24b4f-7ea3-4fb6-8417-3aa21633462f', 'tags': [], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': 'y'}}\n" ] } ], "source": [ "idx = 0\n", "async for event in chain.astream_events({\"input\": \"hello there!\"}, version=\"v1\"):\n", " print(event)\n", " idx += 1\n", " if idx > 7:\n", " # Truncate\n", " break" ] }, { "cell_type": "markdown", "id": "a85e848a-5316-4318-b770-3f8fd34f4231", "metadata": {}, "source": [ "## Contributing\n", "\n", "We appreciate all chat model integration contributions. \n", "\n", "Here's a checklist to help make sure your contribution gets added to LangChain:\n", "\n", "Documentation:\n", "\n", "* The model contains doc-strings for all initialization arguments, as these will be surfaced in the [APIReference](https://api.python.langchain.com/en/stable/langchain_api_reference.html).\n", "* The class doc-string for the model contains a link to the model API if the model is powered by a service.\n", "\n", "Tests:\n", "\n", "* [ ] Add unit or integration tests to the overridden methods. 
Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've over-ridden the corresponding code.\n", "\n", "Streaming (if you're implementing it):\n", "\n", "* [ ] Make sure to invoke the `on_llm_new_token` callback\n", "* [ ] `on_llm_new_token` is invoked BEFORE yielding the chunk\n", "\n", "Stop Token Behavior:\n", "\n", "* [ ] Stop token should be respected\n", "* [ ] Stop token should be INCLUDED as part of the response\n", "\n", "Secret API Keys:\n", "\n", "* [ ] If your model connects to an API it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
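The table of optional methods near the top of this notebook lists `_acall` and `_astream`; the sketch below (an illustration building on the `CustomLLM` defined above, not part of the original notebook) shows what native async versions could look like, so `ainvoke` and `astream` avoid the extra worker thread:

```python
# Assumes CustomLLM from the notebook above is already defined.
from typing import Any, AsyncIterator, List, Optional

from langchain_core.callbacks import AsyncCallbackManagerForLLMRun
from langchain_core.outputs import GenerationChunk


class CustomLLMAsync(CustomLLM):
    """The same echoing LLM, with native async `_acall` and `_astream`."""

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # A real model would `await` a provider API here.
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    async def _astream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[GenerationChunk]:
        for char in prompt[: self.n]:
            chunk = GenerationChunk(text=char)
            if run_manager:
                # The async callback manager exposes coroutines, so await it.
                await run_manager.on_llm_new_token(chunk.text, chunk=chunk)
            yield chunk


# Usage (inside an event loop):
# async for token in CustomLLMAsync(n=5).astream("hello"):
#     print(token, end="|", flush=True)
```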
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/custom_retriever.ipynb
{ "cells": [ { "cell_type": "raw", "id": "b5fc1fc7-c4c5-418f-99da-006c604a7ea6", "metadata": {}, "source": [ "---\n", "title: Custom Retriever\n", "---" ] }, { "cell_type": "markdown", "id": "ff6f3c79-0848-4956-9115-54f6b2134587", "metadata": {}, "source": [ "# How to create a custom Retriever\n", "\n", "## Overview\n", "\n", "Many LLM applications involve retrieving information from external data sources using a `Retriever`. \n", "\n", "A retriever is responsible for retrieving a list of relevant `Documents` to a given user `query`.\n", "\n", "The retrieved documents are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the to generate an appropriate response (e.g., answering a user question based on a knowledge base).\n", "\n", "## Interface\n", "\n", "To create your own retriever, you need to extend the `BaseRetriever` class and implement the following methods:\n", "\n", "| Method | Description | Required/Optional |\n", "|--------------------------------|--------------------------------------------------|-------------------|\n", "| `_get_relevant_documents` | Get documents relevant to a query. | Required |\n", "| `_aget_relevant_documents` | Implement to provide async native support. | Optional |\n", "\n", "\n", "The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n", "\n", ":::{.callout-tip}\n", "By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n", ":::\n", "\n", "\n", ":::{.callout-info}\n", "You can use a `RunnableLambda` or `RunnableGenerator` to implement a retriever.\n", "\n", "The main benefit of implementing a retriever as a `BaseRetriever` vs. a `RunnableLambda` (a custom [runnable function](/docs/how_to/functions)) is that a `BaseRetriever` is a well\n", "known LangChain entity so some tooling for monitoring may implement specialized behavior for retrievers. Another difference\n", "is that a `BaseRetriever` will behave slightly differently from `RunnableLambda` in some APIs; e.g., the `start` event\n", "in `astream_events` API will be `on_retriever_start` instead of `on_chain_start`.\n", ":::\n" ] }, { "cell_type": "markdown", "id": "2be9fe82-0757-41d1-a647-15bed11fd3bf", "metadata": {}, "source": [ "## Example\n", "\n", "Let's implement a toy retriever that returns all documents whose text contains the text in the user query." 
] }, { "cell_type": "code", "execution_count": 26, "id": "bdf61902-2984-493b-a002-d4fced6df590", "metadata": {}, "outputs": [], "source": [ "from typing import List\n", "\n", "from langchain_core.callbacks import CallbackManagerForRetrieverRun\n", "from langchain_core.documents import Document\n", "from langchain_core.retrievers import BaseRetriever\n", "\n", "\n", "class ToyRetriever(BaseRetriever):\n", " \"\"\"A toy retriever that contains the top k documents that contain the user query.\n", "\n", " This retriever only implements the sync method _get_relevant_documents.\n", "\n", " If the retriever were to involve file access or network access, it could benefit\n", " from a native async implementation of `_aget_relevant_documents`.\n", "\n", " As usual, with Runnables, there's a default async implementation that's provided\n", " that delegates to the sync implementation running on another thread.\n", " \"\"\"\n", "\n", " documents: List[Document]\n", " \"\"\"List of documents to retrieve from.\"\"\"\n", " k: int\n", " \"\"\"Number of top results to return\"\"\"\n", "\n", " def _get_relevant_documents(\n", " self, query: str, *, run_manager: CallbackManagerForRetrieverRun\n", " ) -> List[Document]:\n", " \"\"\"Sync implementations for retriever.\"\"\"\n", " matching_documents = []\n", " for document in documents:\n", " if len(matching_documents) > self.k:\n", " return matching_documents\n", "\n", " if query.lower() in document.page_content.lower():\n", " matching_documents.append(document)\n", " return matching_documents\n", "\n", " # Optional: Provide a more efficient native implementation by overriding\n", " # _aget_relevant_documents\n", " # async def _aget_relevant_documents(\n", " # self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun\n", " # ) -> List[Document]:\n", " # \"\"\"Asynchronously get documents relevant to a query.\n", "\n", " # Args:\n", " # query: String to find relevant documents for\n", " # run_manager: The callbacks handler to use\n", "\n", " # Returns:\n", " # List of relevant documents\n", " # \"\"\"" ] }, { "cell_type": "markdown", "id": "2eac1f28-29c1-4888-b3aa-b4fa70c73b4c", "metadata": {}, "source": [ "## Test it 🧪" ] }, { "cell_type": "code", "execution_count": 21, "id": "ea868db5-48cc-4ec2-9b0a-1ab94c32b302", "metadata": {}, "outputs": [], "source": [ "documents = [\n", " Document(\n", " page_content=\"Dogs are great companions, known for their loyalty and friendliness.\",\n", " metadata={\"type\": \"dog\", \"trait\": \"loyalty\"},\n", " ),\n", " Document(\n", " page_content=\"Cats are independent pets that often enjoy their own space.\",\n", " metadata={\"type\": \"cat\", \"trait\": \"independence\"},\n", " ),\n", " Document(\n", " page_content=\"Goldfish are popular pets for beginners, requiring relatively simple care.\",\n", " metadata={\"type\": \"fish\", \"trait\": \"low maintenance\"},\n", " ),\n", " Document(\n", " page_content=\"Parrots are intelligent birds capable of mimicking human speech.\",\n", " metadata={\"type\": \"bird\", \"trait\": \"intelligence\"},\n", " ),\n", " Document(\n", " page_content=\"Rabbits are social animals that need plenty of space to hop around.\",\n", " metadata={\"type\": \"rabbit\", \"trait\": \"social\"},\n", " ),\n", "]\n", "retriever = ToyRetriever(documents=documents, k=3)" ] }, { "cell_type": "code", "execution_count": 22, "id": "18be85e9-6ef0-4ee0-ae5d-a0810c38b254", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Cats are independent pets that often enjoy their own 
space.', metadata={'type': 'cat', 'trait': 'independence'}),\n", " Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})]" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.invoke(\"that\")" ] }, { "cell_type": "markdown", "id": "13f76f6e-cf2b-4f67-859b-0ef8be98abbe", "metadata": {}, "source": [ "It's a **runnable** so it'll benefit from the standard Runnable Interface! 🤩" ] }, { "cell_type": "code", "execution_count": 23, "id": "3672e9fe-4365-4628-9d25-31924cfaf784", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'}),\n", " Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'type': 'rabbit', 'trait': 'social'})]" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await retriever.ainvoke(\"that\")" ] }, { "cell_type": "code", "execution_count": 24, "id": "e2c96eed-6813-421c-acf2-6554839840ee", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[[Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'type': 'dog', 'trait': 'loyalty'})],\n", " [Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'type': 'cat', 'trait': 'independence'})]]" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "retriever.batch([\"dog\", \"cat\"])" ] }, { "cell_type": "code", "execution_count": 25, "id": "978b6636-bf36-42c2-969c-207718f084cf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'event': 'on_retriever_start', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'name': 'ToyRetriever', 'tags': [], 'metadata': {}, 'data': {'input': 'bar'}}\n", "{'event': 'on_retriever_stream', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'tags': [], 'metadata': {}, 'name': 'ToyRetriever', 'data': {'chunk': []}}\n", "{'event': 'on_retriever_end', 'name': 'ToyRetriever', 'run_id': 'f96f268d-8383-4921-b175-ca583924d9ff', 'tags': [], 'metadata': {}, 'data': {'output': []}}\n" ] } ], "source": [ "async for event in retriever.astream_events(\"bar\", version=\"v1\"):\n", " print(event)" ] }, { "cell_type": "markdown", "id": "7b45c404-37bf-4370-bb7c-26556777ff46", "metadata": {}, "source": [ "## Contributing\n", "\n", "We appreciate contributions of interesting retrievers!\n", "\n", "Here's a checklist to help make sure your contribution gets added to LangChain:\n", "\n", "Documentation:\n", "\n", "* The retriever contains doc-strings for all initialization arguments, as these will be surfaced in the [API Reference](https://api.python.langchain.com/en/stable/langchain_api_reference.html).\n", "* The class doc-string for the model contains a link to any relevant APIs used for the retriever (e.g., if the retriever is retrieving from wikipedia, it'll be good to link to the wikipedia API!)\n", "\n", "Tests:\n", "\n", "* [ ] Add unit or integration tests to verify that `invoke` and `ainvoke` work.\n", "\n", "Optimizations:\n", "\n", "If the retriever is connecting to external data sources (e.g., an API or a file), it'll almost certainly benefit from an async native optimization!\n", " \n", "* [ ] Provide a native async implementation of `_aget_relevant_documents` (used by 
`ainvoke`)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/custom_tools.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "5436020b", "metadata": {}, "source": [ "# How to create custom tools\n", "\n", "When constructing an agent, you will need to provide it with a list of `Tool`s that it can use. Besides the actual function that is called, the Tool consists of several components:\n", "\n", "| Attribute | Type | Description |\n", "|-----------------|---------------------------|------------------------------------------------------------------------------------------------------------------|\n", "| name | str | Must be unique within a set of tools provided to an LLM or agent. |\n", "| description | str | Describes what the tool does. Used as context by the LLM or agent. |\n", "| args_schema | Pydantic BaseModel | Optional but recommended, can be used to provide more information (e.g., few-shot examples) or validation for expected parameters |\n", "| return_direct | boolean | Only relevant for agents. When True, after invoking the given tool, the agent will stop and return the result direcly to the user. |\n", "\n", "LangChain provides 3 ways to create tools:\n", "\n", "1. Using [@tool decorator](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.tool.html#langchain_core.tools.tool) -- the simplest way to define a custom tool.\n", "2. Using [StructuredTool.from_function](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.StructuredTool.html#langchain_core.tools.StructuredTool.from_function) class method -- this is similar to the `@tool` decorator, but allows more configuration and specification of both sync and async implementations.\n", "3. By sub-classing from [BaseTool](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n", "\n", "The `@tool` or the `StructuredTool.from_function` class method should be sufficient for most use cases.\n", "\n", ":::{.callout-tip}\n", "\n", "Models will perform better if the tools have well chosen names, descriptions and JSON schemas.\n", ":::" ] }, { "cell_type": "markdown", "id": "c7326b23", "metadata": {}, "source": [ "## @tool decorator\n", "\n", "This `@tool` decorator is the simplest way to define a custom tool. The decorator uses the function name as the tool name by default, but this can be overridden by passing a string as the first argument. Additionally, the decorator will use the function's docstring as the tool's description - so a docstring MUST be provided. 
" ] }, { "cell_type": "code", "execution_count": 1, "id": "cc7005cd-072f-4d37-8453-6297468e5192", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "multiply\n", "multiply(a: int, b: int) -> int - Multiply two numbers.\n", "{'a': {'title': 'A', 'type': 'integer'}, 'b': {'title': 'B', 'type': 'integer'}}\n" ] } ], "source": [ "from langchain_core.tools import tool\n", "\n", "\n", "@tool\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "# Let's inspect some of the attributes associated with the tool.\n", "print(multiply.name)\n", "print(multiply.description)\n", "print(multiply.args)" ] }, { "cell_type": "markdown", "id": "96698b67-993a-4c97-b867-333132e1eb14", "metadata": {}, "source": [ "Or create an **async** implementation, like this:" ] }, { "cell_type": "code", "execution_count": 2, "id": "0c0991db-b997-4611-be37-4346e660506b", "metadata": {}, "outputs": [], "source": [ "from langchain_core.tools import tool\n", "\n", "\n", "@tool\n", "async def amultiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b" ] }, { "cell_type": "markdown", "id": "98d6eee9", "metadata": {}, "source": [ "You can also customize the tool name and JSON args by passing them into the tool decorator." ] }, { "cell_type": "code", "execution_count": 3, "id": "9216d03a-f6ea-4216-b7e1-0661823a4c0b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "multiplication-tool\n", "multiplication-tool(a: int, b: int) -> int - Multiply two numbers.\n", "{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n", "True\n" ] } ], "source": [ "from langchain.pydantic_v1 import BaseModel, Field\n", "\n", "\n", "class CalculatorInput(BaseModel):\n", " a: int = Field(description=\"first number\")\n", " b: int = Field(description=\"second number\")\n", "\n", "\n", "@tool(\"multiplication-tool\", args_schema=CalculatorInput, return_direct=True)\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "# Let's inspect some of the attributes associated with the tool.\n", "print(multiply.name)\n", "print(multiply.description)\n", "print(multiply.args)\n", "print(multiply.return_direct)" ] }, { "cell_type": "markdown", "id": "b63fcc3b", "metadata": {}, "source": [ "## StructuredTool\n", "\n", "The `StrurcturedTool.from_function` class method provides a bit more configurability than the `@tool` decorator, without requiring much additional code." 
] }, { "cell_type": "code", "execution_count": 4, "id": "564fbe6f-11df-402d-b135-ef6ff25e1e63", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n", "10\n" ] } ], "source": [ "from langchain_core.tools import StructuredTool\n", "\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "async def amultiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "calculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)\n", "\n", "print(calculator.invoke({\"a\": 2, \"b\": 3}))\n", "print(await calculator.ainvoke({\"a\": 2, \"b\": 5}))" ] }, { "cell_type": "markdown", "id": "26b3712a-b38d-4582-b6e6-bc7cfb1d6680", "metadata": {}, "source": [ "To configure it:" ] }, { "cell_type": "code", "execution_count": 5, "id": "6bc055d4-1fbe-4db5-8881-9c382eba6b1b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n", "Calculator\n", "Calculator(a: int, b: int) -> int - multiply numbers\n", "{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n" ] } ], "source": [ "class CalculatorInput(BaseModel):\n", " a: int = Field(description=\"first number\")\n", " b: int = Field(description=\"second number\")\n", "\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "calculator = StructuredTool.from_function(\n", " func=multiply,\n", " name=\"Calculator\",\n", " description=\"multiply numbers\",\n", " args_schema=CalculatorInput,\n", " return_direct=True,\n", " # coroutine= ... <- you can specify an async method if desired as well\n", ")\n", "\n", "print(calculator.invoke({\"a\": 2, \"b\": 3}))\n", "print(calculator.name)\n", "print(calculator.description)\n", "print(calculator.args)" ] }, { "cell_type": "markdown", "id": "b840074b-9c10-4ca0-aed8-626c52b2398f", "metadata": {}, "source": [ "## Subclass BaseTool\n", "\n", "You can define a custom tool by sub-classing from `BaseTool`. This provides maximal control over the tool definition, but requires writing more code." 
] }, { "cell_type": "code", "execution_count": 16, "id": "1dad8f8e", "metadata": {}, "outputs": [], "source": [ "from typing import Optional, Type\n", "\n", "from langchain.pydantic_v1 import BaseModel\n", "from langchain_core.callbacks import (\n", " AsyncCallbackManagerForToolRun,\n", " CallbackManagerForToolRun,\n", ")\n", "from langchain_core.tools import BaseTool\n", "\n", "\n", "class CalculatorInput(BaseModel):\n", " a: int = Field(description=\"first number\")\n", " b: int = Field(description=\"second number\")\n", "\n", "\n", "class CustomCalculatorTool(BaseTool):\n", " name = \"Calculator\"\n", " description = \"useful for when you need to answer questions about math\"\n", " args_schema: Type[BaseModel] = CalculatorInput\n", " return_direct: bool = True\n", "\n", " def _run(\n", " self, a: int, b: int, run_manager: Optional[CallbackManagerForToolRun] = None\n", " ) -> str:\n", " \"\"\"Use the tool.\"\"\"\n", " return a * b\n", "\n", " async def _arun(\n", " self,\n", " a: int,\n", " b: int,\n", " run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n", " ) -> str:\n", " \"\"\"Use the tool asynchronously.\"\"\"\n", " # If the calculation is cheap, you can just delegate to the sync implementation\n", " # as shown below.\n", " # If the sync calculation is expensive, you should delete the entire _arun method.\n", " # LangChain will automatically provide a better implementation that will\n", " # kick off the task in a thread to make sure it doesn't block other async code.\n", " return self._run(a, b, run_manager=run_manager.get_sync())" ] }, { "cell_type": "code", "execution_count": 7, "id": "bb551c33", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Calculator\n", "useful for when you need to answer questions about math\n", "{'a': {'title': 'A', 'description': 'first number', 'type': 'integer'}, 'b': {'title': 'B', 'description': 'second number', 'type': 'integer'}}\n", "True\n", "6\n", "6\n" ] } ], "source": [ "multiply = CustomCalculatorTool()\n", "print(multiply.name)\n", "print(multiply.description)\n", "print(multiply.args)\n", "print(multiply.return_direct)\n", "\n", "print(multiply.invoke({\"a\": 2, \"b\": 3}))\n", "print(await multiply.ainvoke({\"a\": 2, \"b\": 3}))" ] }, { "cell_type": "markdown", "id": "97aba6cc-4bdf-4fab-aff3-d89e7d9c3a09", "metadata": {}, "source": [ "## How to create async tools\n", "\n", "LangChain Tools implement the [Runnable interface 🏃](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html).\n", "\n", "All Runnables expose the `invoke` and `ainvoke` methods (as well as other methods like `batch`, `abatch`, `astream` etc).\n", "\n", "So even if you only provide an `sync` implementation of a tool, you could still use the `ainvoke` interface, but there\n", "are some important things to know:\n", "\n", "* LangChain's by default provides an async implementation that assumes that the function is expensive to compute, so it'll delegate execution to another thread.\n", "* If you're working in an async codebase, you should create async tools rather than sync tools, to avoid incuring a small overhead due to that thread.\n", "* If you need both sync and async implementations, use `StructuredTool.from_function` or sub-class from `BaseTool`.\n", "* If implementing both sync and async, and the sync code is fast to run, override the default LangChain async implementation and simply call the sync code.\n", "* You CANNOT and SHOULD NOT use the sync `invoke` with an `async` 
tool." ] }, { "cell_type": "code", "execution_count": 8, "id": "6615cb77-fd4c-4676-8965-f92cc71d4944", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n", "10\n" ] } ], "source": [ "from langchain_core.tools import StructuredTool\n", "\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "calculator = StructuredTool.from_function(func=multiply)\n", "\n", "print(calculator.invoke({\"a\": 2, \"b\": 3}))\n", "print(\n", " await calculator.ainvoke({\"a\": 2, \"b\": 5})\n", ") # Uses default LangChain async implementation incurs small overhead" ] }, { "cell_type": "code", "execution_count": 9, "id": "bb2af583-eadd-41f4-a645-bf8748bd3dcd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "6\n", "10\n" ] } ], "source": [ "from langchain_core.tools import StructuredTool\n", "\n", "\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "async def amultiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "calculator = StructuredTool.from_function(func=multiply, coroutine=amultiply)\n", "\n", "print(calculator.invoke({\"a\": 2, \"b\": 3}))\n", "print(\n", " await calculator.ainvoke({\"a\": 2, \"b\": 5})\n", ") # Uses use provided amultiply without additional overhead" ] }, { "cell_type": "markdown", "id": "c80ffdaa-e4ba-4a70-8500-32bf4f60cc1a", "metadata": {}, "source": [ "You should not and cannot use `.invoke` when providing only an async definition." ] }, { "cell_type": "code", "execution_count": 10, "id": "4ad0932c-8610-4278-8c57-f9218f654c8a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Raised not implemented error. You should not be doing this.\n" ] } ], "source": [ "@tool\n", "async def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiply two numbers.\"\"\"\n", " return a * b\n", "\n", "\n", "try:\n", " multiply.invoke({\"a\": 2, \"b\": 3})\n", "except NotImplementedError:\n", " print(\"Raised not implemented error. You should not be doing this.\")" ] }, { "cell_type": "markdown", "id": "f9c746a7-88d7-4afb-bcb8-0e98b891e8b6", "metadata": {}, "source": [ "## Handling Tool Errors \n", "\n", "If you're using tools with agents, you will likely need an error handling strategy, so the agent can recover from the error and continue execution.\n", "\n", "A simple strategy is to throw a `ToolException` from inside the tool and specify an error handler using `handle_tool_error`. \n", "\n", "When the error handler is specified, the exception will be caught and the error handler will decide which output to return from the tool.\n", "\n", "You can set `handle_tool_error` to `True`, a string value, or a function. If it's a function, the function should take a `ToolException` as a parameter and return a value.\n", "\n", "Please note that only raising a `ToolException` won't be effective. You need to first set the `handle_tool_error` of the tool because its default value is `False`." 
] }, { "cell_type": "code", "execution_count": 11, "id": "7094c0e8-6192-4870-a942-aad5b5ae48fd", "metadata": {}, "outputs": [], "source": [ "from langchain_core.tools import ToolException\n", "\n", "\n", "def get_weather(city: str) -> int:\n", " \"\"\"Get weather for the given city.\"\"\"\n", " raise ToolException(f\"Error: There is no city by the name of {city}.\")" ] }, { "cell_type": "markdown", "id": "9d93b217-1d44-4d31-8956-db9ea680ff4f", "metadata": {}, "source": [ "Here's an example with the default `handle_tool_error=True` behavior." ] }, { "cell_type": "code", "execution_count": 12, "id": "b4d22022-b105-4ccc-a15b-412cb9ea3097", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Error: There is no city by the name of foobar.'" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_weather_tool = StructuredTool.from_function(\n", " func=get_weather,\n", " handle_tool_error=True,\n", ")\n", "\n", "get_weather_tool.invoke({\"city\": \"foobar\"})" ] }, { "cell_type": "markdown", "id": "f91d6dc0-3271-4adc-a155-21f2e62ffa56", "metadata": {}, "source": [ "We can set `handle_tool_error` to a string that will always be returned." ] }, { "cell_type": "code", "execution_count": 13, "id": "3fad1728-d367-4e1b-9b54-3172981271cf", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"There is no such city, but it's probably above 0K there!\"" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_weather_tool = StructuredTool.from_function(\n", " func=get_weather,\n", " handle_tool_error=\"There is no such city, but it's probably above 0K there!\",\n", ")\n", "\n", "get_weather_tool.invoke({\"city\": \"foobar\"})" ] }, { "cell_type": "markdown", "id": "b0a640c1-e08f-4413-83b6-f599f304935f", "metadata": {}, "source": [ "Handling the error using a function:" ] }, { "cell_type": "code", "execution_count": 14, "id": "ebfe7c1f-318d-4e58-99e1-f31e69473c46", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'The following errors occurred during tool execution: `Error: There is no city by the name of foobar.`'" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def _handle_error(error: ToolException) -> str:\n", " return f\"The following errors occurred during tool execution: `{error.args[0]}`\"\n", "\n", "\n", "get_weather_tool = StructuredTool.from_function(\n", " func=get_weather,\n", " handle_tool_error=_handle_error,\n", ")\n", "\n", "get_weather_tool.invoke({\"city\": \"foobar\"})" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" }, "vscode": { "interpreter": { "hash": "e90c8aa204a57276aa905271aff2d11799d0acb3547adabc5892e639a5e45e34" } } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/debugging.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to debug your LLM apps\n", "\n", "Like building any type of software, at some point you'll need to debug when building with LLMs. A model call will fail, or model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.\n", "\n", "There are three main methods for debugging:\n", "\n", "- Verbose Mode: This adds print statements for \"important\" events in your chain.\n", "- Debug Mode: This add logging statements for ALL events in your chain.\n", "- LangSmith Tracing: This logs events to [LangSmith](https://docs.smith.langchain.com/) to allow for visualization there.\n", "\n", "| | Verbose Mode | Debug Mode | LangSmith Tracing |\n", "|------------------------|--------------|------------|-------------------|\n", "| Free | ✅ | ✅ | ✅ |\n", "| UI | ❌ | ❌ | ✅ |\n", "| Persisted | ❌ | ❌ | ✅ |\n", "| See all events | ❌ | ✅ | ✅ |\n", "| See \"important\" events | ✅ | ❌ | ✅ |\n", "| Runs Locally | ✅ | ✅ | ❌ |\n", "\n", "\n", "## Tracing\n", "\n", "Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls.\n", "As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent.\n", "The best way to do this is with [LangSmith](https://smith.langchain.com).\n", "\n", "After you sign up at the link above, make sure to set your environment variables to start logging traces:\n", "\n", "```shell\n", "export LANGCHAIN_TRACING_V2=\"true\"\n", "export LANGCHAIN_API_KEY=\"...\"\n", "```\n", "\n", "Or, if in a notebook, you can set them with:\n", "\n", "```python\n", "import getpass\n", "import os\n", "\n", "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n", "os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "```\n", "\n", "Let's suppose we have an agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", "/>\n", "```" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4-turbo\", temperature=0)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?',\n", " 'output': 'The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\\n\\nTo calculate Christopher Nolan\\'s age in days, we first need his birthdate, which is July 30, 1970. Let\\'s calculate his age in days from his birthdate to today\\'s date, December 7, 2023.\\n\\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\\n2. Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\\n3. From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nNow, calculate the total days:\\n- 53 years = 53 x 365 = 19,345 days\\n- Adding leap years from 1970 to 2023: There are 13 leap years (1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020). 
So, add 13 days.\\n- Total days from years and leap years = 19,345 + 13 = 19,358 days\\n- Add the days from July 30, 2023, to December 7, 2023 = 130 days\\n\\nTotal age in days = 19,358 + 130 = 19,488 days\\n\\nChristopher Nolan is 19,488 days old as of December 7, 2023.'}" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.agents import AgentExecutor, create_tool_calling_agent\n", "from langchain_community.tools.tavily_search import TavilySearchResults\n", "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "tools = [TavilySearchResults(max_results=1)]\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant.\",\n", " ),\n", " (\"placeholder\", \"{chat_history}\"),\n", " (\"human\", \"{input}\"),\n", " (\"placeholder\", \"{agent_scratchpad}\"),\n", " ]\n", ")\n", "\n", "# Construct the Tools agent\n", "agent = create_tool_calling_agent(llm, tools, prompt)\n", "\n", "# Create an agent executor by passing in the agent and tools\n", "agent_executor = AgentExecutor(agent=agent, tools=tools)\n", "agent_executor.invoke(\n", " {\"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\"}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We don't get much output, but since we set up LangSmith we can easily see what happened under the hood:\n", "\n", "https://smith.langchain.com/public/a89ff88f-9ddc-4757-a395-3a1b365655bf/r" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## `set_debug` and `set_verbose`\n", "\n", "If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a chain run.\n", "\n", "There are a number of ways to enable printing at varying degrees of verbosity.\n", "\n", "Note: These still work even with LangSmith enabled, so you can have both turned on and running at the same time\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `set_verbose(True)`\n", "\n", "Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `tavily_search_results_json` with `{'query': 'director of the 2023 film Oppenheimer'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://m.imdb.com/title/tt15398776/', 'content': 'Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb.'}]\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `tavily_search_results_json` with `{'query': 'birth date of Christopher Nolan'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. 
Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}]\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `tavily_search_results_json` with `{'query': 'Christopher Nolan birth date'}`\n", "responded: The 2023 film **Oppenheimer** was directed by **Christopher Nolan**.\n", "\n", "To calculate Christopher Nolan's age in days, I need his exact birth date. Let me find that information for you.\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}]\u001b[0m\u001b[32;1m\u001b[1;3m\n", "Invoking: `tavily_search_results_json` with `{'query': 'Christopher Nolan date of birth'}`\n", "responded: It appears that I need to refine my search to get the exact birth date of Christopher Nolan. Let me try again to find that specific information.\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3m[{'url': 'https://m.imdb.com/name/nm0634240/bio/', 'content': 'Christopher Nolan. Writer: Tenet. Best known for his cerebral, often nonlinear, storytelling, acclaimed Academy Award winner writer/director/producer Sir Christopher Nolan CBE was born in London, England. Over the course of more than 25 years of filmmaking, Nolan has gone from low-budget independent films to working on some of the biggest blockbusters ever made and became one of the most ...'}]\u001b[0m\u001b[32;1m\u001b[1;3mI am currently unable to retrieve the exact birth date of Christopher Nolan from the sources available. However, it is widely known that he was born on July 30, 1970. Using this date, I can calculate his age in days as of today.\n", "\n", "Let's calculate:\n", "\n", "- Christopher Nolan's birth date: July 30, 1970.\n", "- Today's date: December 7, 2023.\n", "\n", "The number of days between these two dates can be calculated as follows:\n", "\n", "1. From July 30, 1970, to July 30, 2023, is 53 years.\n", "2. From July 30, 2023, to December 7, 2023, is 130 days.\n", "\n", "Calculating the total days for 53 years (considering leap years):\n", "- 53 years × 365 days/year = 19,345 days\n", "- Adding leap years (1972, 1976, ..., 2020, 2024 - 13 leap years): 13 days\n", "\n", "Total days from birth until July 30, 2023: 19,345 + 13 = 19,358 days\n", "Adding the days from July 30, 2023, to December 7, 2023: 130 days\n", "\n", "Total age in days as of December 7, 2023: 19,358 + 130 = 19,488 days.\n", "\n", "Therefore, Christopher Nolan is 19,488 days old as of December 7, 2023.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?',\n", " 'output': \"I am currently unable to retrieve the exact birth date of Christopher Nolan from the sources available. However, it is widely known that he was born on July 30, 1970. 
Using this date, I can calculate his age in days as of today.\\n\\nLet's calculate:\\n\\n- Christopher Nolan's birth date: July 30, 1970.\\n- Today's date: December 7, 2023.\\n\\nThe number of days between these two dates can be calculated as follows:\\n\\n1. From July 30, 1970, to July 30, 2023, is 53 years.\\n2. From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nCalculating the total days for 53 years (considering leap years):\\n- 53 years × 365 days/year = 19,345 days\\n- Adding leap years (1972, 1976, ..., 2020, 2024 - 13 leap years): 13 days\\n\\nTotal days from birth until July 30, 2023: 19,345 + 13 = 19,358 days\\nAdding the days from July 30, 2023, to December 7, 2023: 130 days\\n\\nTotal age in days as of December 7, 2023: 19,358 + 130 = 19,488 days.\\n\\nTherefore, Christopher Nolan is 19,488 days old as of December 7, 2023.\"}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.globals import set_verbose\n", "\n", "set_verbose(True)\n", "agent_executor = AgentExecutor(agent=agent, tools=tools)\n", "agent_executor.invoke(\n", " {\"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\"}\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `set_debug(True)`\n", "\n", "Setting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad> > 5:chain:RunnableLambda] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad> > 5:chain:RunnableLambda] [1ms] Exiting Chain run with output:\n", "\u001b[0m{\n", " \"output\": []\n", "}\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad> > 4:chain:RunnableParallel<agent_scratchpad>] [2ms] Exiting Chain run with output:\n", "\u001b[0m{\n", " \"agent_scratchpad\": []\n", "}\n", 
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableAssign<agent_scratchpad>] [5ms] Exiting Chain run with output:\n", "\u001b[0m{\n", " \"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\",\n", " \"intermediate_steps\": [],\n", " \"agent_scratchpad\": []\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] Entering Prompt run with input:\n", "\u001b[0m{\n", " \"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\",\n", " \"intermediate_steps\": [],\n", " \"agent_scratchpad\": []\n", "}\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:\n", "\u001b[0m[outputs]\n", "\u001b[32;1m\u001b[1;3m[llm/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] Entering LLM run with input:\n", "\u001b[0m{\n", " \"prompts\": [\n", " \"System: You are a helpful assistant.\\nHuman: Who directed the 2023 film Oppenheimer and what is their age in days?\"\n", " ]\n", "}\n", "\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] [3.17s] Exiting LLM run with output:\n", "\u001b[0m{\n", " \"generations\": [\n", " [\n", " {\n", " \"text\": \"\",\n", " \"generation_info\": {\n", " \"finish_reason\": \"tool_calls\"\n", " },\n", " \"type\": \"ChatGenerationChunk\",\n", " \"message\": {\n", " \"lc\": 1,\n", " \"type\": \"constructor\",\n", " \"id\": [\n", " \"langchain\",\n", " \"schema\",\n", " \"messages\",\n", " \"AIMessageChunk\"\n", " ],\n", " \"kwargs\": {\n", " \"content\": \"\",\n", " \"example\": false,\n", " \"additional_kwargs\": {\n", " \"tool_calls\": [\n", " {\n", " \"index\": 0,\n", " \"id\": \"call_fnfq6GjSQED4iF6lo4rxkUup\",\n", " \"function\": {\n", " \"arguments\": \"{\\\"query\\\": \\\"director of the 2023 film Oppenheimer\\\"}\",\n", " \"name\": \"tavily_search_results_json\"\n", " },\n", " \"type\": \"function\"\n", " },\n", " {\n", " \"index\": 1,\n", " \"id\": \"call_mwhVi6pk49f4OIo5rOWrr4TD\",\n", " \"function\": {\n", " \"arguments\": \"{\\\"query\\\": \\\"birth date of Christopher Nolan\\\"}\",\n", " \"name\": \"tavily_search_results_json\"\n", " },\n", " \"type\": \"function\"\n", " }\n", " ]\n", " },\n", " \"tool_call_chunks\": [\n", " {\n", " \"name\": \"tavily_search_results_json\",\n", " \"args\": \"{\\\"query\\\": \\\"director of the 2023 film Oppenheimer\\\"}\",\n", " \"id\": \"call_fnfq6GjSQED4iF6lo4rxkUup\",\n", " \"index\": 0\n", " },\n", " {\n", " \"name\": \"tavily_search_results_json\",\n", " \"args\": \"{\\\"query\\\": \\\"birth date of Christopher Nolan\\\"}\",\n", " \"id\": \"call_mwhVi6pk49f4OIo5rOWrr4TD\",\n", " \"index\": 1\n", " }\n", " ],\n", " \"response_metadata\": {\n", " \"finish_reason\": \"tool_calls\"\n", " },\n", " \"id\": \"run-6e160323-15f9-491d-aadf-b5d337e9e2a1\",\n", " \"tool_calls\": [\n", " {\n", " \"name\": \"tavily_search_results_json\",\n", " \"args\": {\n", " \"query\": \"director of the 2023 film Oppenheimer\"\n", " },\n", " \"id\": \"call_fnfq6GjSQED4iF6lo4rxkUup\"\n", " },\n", " {\n", " \"name\": \"tavily_search_results_json\",\n", " \"args\": {\n", " \"query\": \"birth date of Christopher Nolan\"\n", " },\n", " \"id\": \"call_mwhVi6pk49f4OIo5rOWrr4TD\"\n", " }\n", " ],\n", " \"invalid_tool_calls\": 
[]\n", " }\n", " }\n", " }\n", " ]\n", " ],\n", " \"llm_output\": null,\n", " \"run\": null\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:ToolsAgentOutputParser] Entering Parser run with input:\n", "\u001b[0m[inputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:ToolsAgentOutputParser] [1ms] Exiting Parser run with output:\n", "\u001b[0m[outputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:chain:RunnableSequence] [3.18s] Exiting Chain run with output:\n", "\u001b[0m[outputs]\n", "\u001b[32;1m\u001b[1;3m[tool/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 9:tool:tavily_search_results_json] Entering Tool run with input:\n", "\u001b[0m\"{'query': 'director of the 2023 film Oppenheimer'}\"\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Error in ConsoleCallbackHandler.on_tool_end callback: AttributeError(\"'list' object has no attribute 'strip'\")\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[32;1m\u001b[1;3m[tool/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 10:tool:tavily_search_results_json] Entering Tool run with input:\n", "\u001b[0m\"{'query': 'birth date of Christopher Nolan'}\"\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Error in ConsoleCallbackHandler.on_tool_end callback: AttributeError(\"'list' object has no attribute 'strip'\")\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad>] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad>] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad> > 14:chain:RunnableLambda] Entering Chain run with input:\n", "\u001b[0m{\n", " \"input\": \"\"\n", "}\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad> > 14:chain:RunnableLambda] [1ms] Exiting Chain run with output:\n", "\u001b[0m[outputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad> > 13:chain:RunnableParallel<agent_scratchpad>] [4ms] Exiting Chain run with output:\n", "\u001b[0m[outputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 12:chain:RunnableAssign<agent_scratchpad>] [8ms] Exiting Chain run with output:\n", "\u001b[0m[outputs]\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] Entering Prompt run with input:\n", "\u001b[0m[inputs]\n", 
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:\n", "\u001b[0m[outputs]\n", "\u001b[32;1m\u001b[1;3m[llm/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 16:llm:ChatOpenAI] Entering LLM run with input:\n", "\u001b[0m{\n", " \"prompts\": [\n", " \"System: You are a helpful assistant.\\nHuman: Who directed the 2023 film Oppenheimer and what is their age in days?\\nAI: \\nTool: [{\\\"url\\\": \\\"https://m.imdb.com/title/tt15398776/fullcredits/\\\", \\\"content\\\": \\\"Oppenheimer (2023) cast and crew credits, including actors, actresses, directors, writers and more. Menu. ... director of photography: behind-the-scenes Jason Gary ... best boy grip ... film loader Luc Poullain ... aerial coordinator\\\"}]\\nTool: [{\\\"url\\\": \\\"https://en.wikipedia.org/wiki/Christopher_Nolan\\\", \\\"content\\\": \\\"In early 2003, Nolan approached Warner Bros. with the idea of making a new Batman film, based on the character's origin story.[58] Nolan was fascinated by the notion of grounding it in a more realistic world than a comic-book fantasy.[59] He relied heavily on traditional stunts and miniature effects during filming, with minimal use of computer-generated imagery (CGI).[60] Batman Begins (2005), the biggest project Nolan had undertaken to that point,[61] was released to critical acclaim and commercial success.[62][63] Starring Christian Bale as Bruce Wayne / Batman—along with Michael Caine, Gary Oldman, Morgan Freeman and Liam Neeson—Batman Begins revived the franchise.[64][65] Batman Begins was 2005's ninth-highest-grossing film and was praised for its psychological depth and contemporary relevance;[63][66] it is cited as one of the most influential films of the 2000s.[67] Film author Ian Nathan wrote that within five years of his career, Nolan \\\\\\\"[went] from unknown to indie darling to gaining creative control over one of the biggest properties in Hollywood, and (perhaps unwittingly) fomenting the genre that would redefine the entire industry\\\\\\\".[68]\\\\nNolan directed, co-wrote and produced The Prestige (2006), an adaptation of the Christopher Priest novel about two rival 19th-century magicians.[69] He directed, wrote and edited the short film Larceny (1996),[19] which was filmed over a weekend in black and white with limited equipment and a small cast and crew.[12][20] Funded by Nolan and shot with the UCL Union Film society's equipment, it appeared at the Cambridge Film Festival in 1996 and is considered one of UCL's best shorts.[21] For unknown reasons, the film has since been removed from public view.[19] Nolan filmed a third short, Doodlebug (1997), about a man seemingly chasing an insect with his shoe, only to discover that it is a miniature of himself.[14][22] Nolan and Thomas first attempted to make a feature in the mid-1990s with Larry Mahoney, which they scrapped.[23] During this period in his career, Nolan had little to no success getting his projects off the ground, facing several rejections; he added, \\\\\\\"[T]here's a very limited pool of finance in the UK. 
Philosophy professor David Kyle Johnson wrote that \\\\\\\"Inception became a classic almost as soon as it was projected on silver screens\\\\\\\", praising its exploration of philosophical ideas, including leap of faith and allegory of the cave.[97] The film grossed over $836 million worldwide.[98] Nominated for eight Academy Awards—including Best Picture and Best Original Screenplay—it won Best Cinematography, Best Sound Mixing, Best Sound Editing and Best Visual Effects.[99] Nolan was nominated for a BAFTA Award and a Golden Globe Award for Best Director, among other accolades.[40]\\\\nAround the release of The Dark Knight Rises (2012), Nolan's third and final Batman film, Joseph Bevan of the British Film Institute wrote a profile on him: \\\\\\\"In the space of just over a decade, Christopher Nolan has shot from promising British indie director to undisputed master of a new brand of intelligent escapism. He further wrote that Nolan's body of work reflect \\\\\\\"a heterogeneity of conditions of products\\\\\\\" extending from low-budget films to lucrative blockbusters, \\\\\\\"a wide range of genres and settings\\\\\\\" and \\\\\\\"a diversity of styles that trumpet his versatility\\\\\\\".[193]\\\\nDavid Bordwell, a film theorist, wrote that Nolan has been able to blend his \\\\\\\"experimental impulses\\\\\\\" with the demands of mainstream entertainment, describing his oeuvre as \\\\\\\"experiments with cinematic time by means of techniques of subjective viewpoint and crosscutting\\\\\\\".[194] Nolan's use of practical, in-camera effects, miniatures and models, as well as shooting on celluloid film, has been highly influential in early 21st century cinema.[195][196] IndieWire wrote in 2019 that, Nolan \\\\\\\"kept a viable alternate model of big-budget filmmaking alive\\\\\\\", in an era where blockbuster filmmaking has become \\\\\\\"a largely computer-generated art form\\\\\\\".[196] Initially reluctant to make a sequel, he agreed after Warner Bros. repeatedly insisted.[78] Nolan wanted to expand on the noir quality of the first film by broadening the canvas and taking on \\\\\\\"the dynamic of a story of the city, a large crime story ... where you're looking at the police, the justice system, the vigilante, the poor people, the rich people, the criminals\\\\\\\".[79] Continuing to minimalise the use of CGI, Nolan employed high-resolution IMAX cameras, making it the first major motion picture to use this technology.[80][81]\\\"}]\"\n", " ]\n", "}\n", "\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 16:llm:ChatOpenAI] [20.22s] Exiting LLM run with output:\n", "\u001b[0m{\n", " \"generations\": [\n", " [\n", " {\n", " \"text\": \"The 2023 film \\\"Oppenheimer\\\" was directed by Christopher Nolan.\\n\\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\\n\\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\\n3. From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nNow, calculate the total days for 53 years:\\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). 
This gives us 14 leap years.\\n- Total days from leap years: 14 days.\\n\\nAdding all together:\\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\\n\\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.\",\n", " \"generation_info\": {\n", " \"finish_reason\": \"stop\"\n", " },\n", " \"type\": \"ChatGenerationChunk\",\n", " \"message\": {\n", " \"lc\": 1,\n", " \"type\": \"constructor\",\n", " \"id\": [\n", " \"langchain\",\n", " \"schema\",\n", " \"messages\",\n", " \"AIMessageChunk\"\n", " ],\n", " \"kwargs\": {\n", " \"content\": \"The 2023 film \\\"Oppenheimer\\\" was directed by Christopher Nolan.\\n\\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\\n\\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\\n3. From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nNow, calculate the total days for 53 years:\\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\\n- Total days from leap years: 14 days.\\n\\nAdding all together:\\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\\n\\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.\",\n", " \"example\": false,\n", " \"additional_kwargs\": {},\n", " \"tool_call_chunks\": [],\n", " \"response_metadata\": {\n", " \"finish_reason\": \"stop\"\n", " },\n", " \"id\": \"run-1c08a44f-db70-4836-935b-417caaf422a5\",\n", " \"tool_calls\": [],\n", " \"invalid_tool_calls\": []\n", " }\n", " }\n", " }\n", " ]\n", " ],\n", " \"llm_output\": null,\n", " \"run\": null\n", "}\n", "\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 17:parser:ToolsAgentOutputParser] Entering Parser run with input:\n", "\u001b[0m[inputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence > 17:parser:ToolsAgentOutputParser] [2ms] Exiting Parser run with output:\n", "\u001b[0m[outputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 11:chain:RunnableSequence] [20.27s] Exiting Chain run with output:\n", "\u001b[0m[outputs]\n", "\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor] [26.37s] Exiting Chain run with output:\n", "\u001b[0m{\n", " \"output\": \"The 2023 film \\\"Oppenheimer\\\" was directed by Christopher Nolan.\\n\\nTo calculate Christopher Nolan's age in days, we first need his birth date, which is July 30, 1970. Let's calculate his age in days from his birth date to today's date, December 7, 2023.\\n\\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\\n3. 
From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nNow, calculate the total days for 53 years:\\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\\n- Total days from leap years: 14 days.\\n\\nAdding all together:\\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\\n\\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.\"\n", "}\n" ] }, { "data": { "text/plain": [ "{'input': 'Who directed the 2023 film Oppenheimer and what is their age in days?',\n", " 'output': 'The 2023 film \"Oppenheimer\" was directed by Christopher Nolan.\\n\\nTo calculate Christopher Nolan\\'s age in days, we first need his birth date, which is July 30, 1970. Let\\'s calculate his age in days from his birth date to today\\'s date, December 7, 2023.\\n\\n1. Calculate the total number of days from July 30, 1970, to December 7, 2023.\\n2. Christopher Nolan was born on July 30, 1970. From July 30, 1970, to July 30, 2023, is 53 years.\\n3. From July 30, 2023, to December 7, 2023, is 130 days.\\n\\nNow, calculate the total days for 53 years:\\n- Each year has 365 days, so 53 years × 365 days/year = 19,345 days.\\n- Adding the leap years from 1970 to 2023: 1972, 1976, 1980, 1984, 1988, 1992, 1996, 2000, 2004, 2008, 2012, 2016, 2020, and 2024 (up to February). This gives us 14 leap years.\\n- Total days from leap years: 14 days.\\n\\nAdding all together:\\n- Total days = 19,345 days (from years) + 14 days (from leap years) + 130 days (from July 30, 2023, to December 7, 2023) = 19,489 days.\\n\\nTherefore, as of December 7, 2023, Christopher Nolan is 19,489 days old.'}" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.globals import set_debug\n", "\n", "set_debug(True)\n", "set_verbose(False)\n", "agent_executor = AgentExecutor(agent=agent, tools=tools)\n", "\n", "agent_executor.invoke(\n", " {\"input\": \"Who directed the 2023 film Oppenheimer and what is their age in days?\"}\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 2 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_csv.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "dfc274c4-0c24-4c5f-865a-ee7fcdaafdac", "metadata": {}, "source": [ "# How to load CSVs\n", "\n", "A [comma-separated values (CSV)](https://en.wikipedia.org/wiki/Comma-separated_values) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.\n", "\n", "LangChain implements a [CSV Loader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.csv_loader.CSVLoader.html) that will load CSV files into a sequence of [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Each row of the CSV file is translated to one document." ] }, { "cell_type": "code", "execution_count": 1, "id": "64a25376-c31a-422e-845b-6538dcc68898", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 0}\n", "page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 1}\n" ] } ], "source": [ "from langchain_community.document_loaders.csv_loader import CSVLoader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv\"\n", ")\n", "\n", "loader = CSVLoader(file_path=file_path)\n", "data = loader.load()\n", "\n", "for record in data[:2]:\n", " print(record)" ] }, { "cell_type": "markdown", "id": "1c716f76-364d-4515-ada9-0ae7c75e61b2", "metadata": {}, "source": [ "## Customizing the CSV parsing and loading\n", "\n", "`CSVLoader` will accept a `csv_args` kwarg that supports customization of arguments passed to Python's `csv.DictReader`. See the [csv module](https://docs.python.org/3/library/csv.html) documentation for more information of what csv args are supported." ] }, { "cell_type": "code", "execution_count": 2, "id": "bf07fdee-d3a6-49c3-a517-bcba6819e8ea", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='MLB Team: Team\\nPayroll in millions: \"Payroll (millions)\"\\nWins: \"Wins\"' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 0}\n", "page_content='MLB Team: Nationals\\nPayroll in millions: 81.34\\nWins: 98' metadata={'source': '../../../docs/integrations/document_loaders/example_data/mlb_teams_2012.csv', 'row': 1}\n" ] } ], "source": [ "loader = CSVLoader(\n", " file_path=file_path,\n", " csv_args={\n", " \"delimiter\": \",\",\n", " \"quotechar\": '\"',\n", " \"fieldnames\": [\"MLB Team\", \"Payroll in millions\", \"Wins\"],\n", " },\n", ")\n", "\n", "data = loader.load()\n", "for record in data[:2]:\n", " print(record)" ] }, { "cell_type": "markdown", "id": "433536be-1531-43ae-920a-14fe4deef844", "metadata": {}, "source": [ "## Specify a column to identify the document source\n", "\n", "The `\"source\"` key on [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) metadata can be set using a column of the CSV. Use the `source_column` argument to specify a source for the document created from each row. 
Otherwise `file_path` will be used as the source for all documents created from the CSV file.\n", "\n", "This is useful when using documents loaded from CSV files for chains that answer questions using sources." ] }, { "cell_type": "code", "execution_count": 3, "id": "d927392c-95e6-4a82-86c2-978387ebe91a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98' metadata={'source': 'Nationals', 'row': 0}\n", "page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97' metadata={'source': 'Reds', 'row': 1}\n" ] } ], "source": [ "loader = CSVLoader(file_path=file_path, source_column=\"Team\")\n", "\n", "data = loader.load()\n", "for record in data[:2]:\n", " print(record)" ] }, { "cell_type": "markdown", "id": "cab6a4bd-476b-4f4c-92e0-5d1cbcd1f6bf", "metadata": {}, "source": [ "## Load from a string\n", "\n", "Python's `tempfile` can be used when working with CSV strings directly." ] }, { "cell_type": "code", "execution_count": 4, "id": "f3fb28b7-8ebe-4af9-9b7d-719e9a252a46", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "page_content='Team: Nationals\\n\"Payroll (millions)\": 81.34\\n\"Wins\": 98' metadata={'source': 'Nationals', 'row': 0}\n", "page_content='Team: Reds\\n\"Payroll (millions)\": 82.20\\n\"Wins\": 97' metadata={'source': 'Reds', 'row': 1}\n" ] } ], "source": [ "import tempfile\n", "from io import StringIO\n", "\n", "string_data = \"\"\"\n", "\"Team\", \"Payroll (millions)\", \"Wins\"\n", "\"Nationals\", 81.34, 98\n", "\"Reds\", 82.20, 97\n", "\"Yankees\", 197.96, 95\n", "\"Giants\", 117.62, 94\n", "\"\"\".strip()\n", "\n", "\n", "with tempfile.NamedTemporaryFile(delete=False, mode=\"w+\") as temp_file:\n", " temp_file.write(string_data)\n", " temp_file_path = temp_file.name\n", "\n", "loader = CSVLoader(file_path=temp_file_path)\n", "data = loader.load()\n", "for record in data[:2]:\n", " print(record)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_custom.ipynb
{ "cells": [ { "cell_type": "raw", "id": "c5990f4f-4430-4bbb-8d25-9703d7d8e95c", "metadata": {}, "source": [ "---\n", "title: Custom Document Loader\n", "sidebar_position: 10\n", "---" ] }, { "cell_type": "markdown", "id": "4be0aa7c-aee3-4e11-b7f4-059611ab8626", "metadata": {}, "source": [ "# How to create a custom Document Loader\n", "\n", "## Overview\n", "\n", "\n", "Applications based on LLMs frequently entail extracting data from databases or files, like PDFs, and converting it into a format that LLMs can utilize. In LangChain, this usually involves creating Document objects, which encapsulate the extracted text (`page_content`) along with metadata—a dictionary containing details about the document, such as the author's name or the date of publication.\n", "\n", "`Document` objects are often formatted into prompts that are fed into an LLM, allowing the LLM to use the information in the `Document` to generate a desired response (e.g., summarizing the document).\n", "`Documents` can be either used immediately or indexed into a vectorstore for future retrieval and use.\n", "\n", "The main abstractions for Document Loading are:\n", "\n", "\n", "| Component | Description |\n", "|----------------|--------------------------------|\n", "| Document | Contains `text` and `metadata` |\n", "| BaseLoader | Use to convert raw data into `Documents` |\n", "| Blob | A representation of binary data that's located either in a file or in memory |\n", "| BaseBlobParser | Logic to parse a `Blob` to yield `Document` objects |\n", "\n", "This guide will demonstrate how to write custom document loading and file parsing logic; specifically, we'll see how to:\n", "\n", "1. Create a standard document Loader by sub-classing from `BaseLoader`.\n", "2. Create a parser using `BaseBlobParser` and use it in conjunction with `Blob` and `BlobLoaders`. This is useful primarily when working with files." ] }, { "cell_type": "markdown", "id": "20dc4c18-accc-4009-805c-961f3e8dc50a", "metadata": {}, "source": [ "## Standard Document Loader\n", "\n", "A document loader can be implemented by sub-classing from a `BaseLoader` which provides a standard interface for loading documents.\n", "\n", "### Interface \n", "\n", "| Method Name | Explanation |\n", "|-------------|-------------|\n", "| lazy_load | Used to load documents one by one **lazily**. Use for production code. |\n", "| alazy_load | Async variant of `lazy_load` |\n", "| load | Used to load all the documents into memory **eagerly**. Use for prototyping or interactive work. |\n", "| aload | Used to load all the documents into memory **eagerly**. Use for prototyping or interactive work. **Added in 2024-04 to LangChain.** |\n", "\n", "* The `load` methods is a convenience method meant solely for prototyping work -- it just invokes `list(self.lazy_load())`.\n", "* The `alazy_load` has a default implementation that will delegate to `lazy_load`. If you're using async, we recommend overriding the default implementation and providing a native async implementation.\n", "\n", "::: {.callout-important}\n", "When implementing a document loader do **NOT** provide parameters via the `lazy_load` or `alazy_load` methods.\n", "\n", "All configuration is expected to be passed through the initializer (__init__). 
This was a design choice made by LangChain to make sure that once a document loader has been instantiated it has all the information needed to load documents.\n", ":::\n", "\n", "\n", "### Implementation\n", "\n", "Let's create an example of a standard document loader that loads a file and creates a document from each line in the file." ] }, { "cell_type": "code", "execution_count": 1, "id": "20f128c1-1a2c-43b9-9e7b-cf9b3a86d1db", "metadata": { "tags": [] }, "outputs": [], "source": [ "from typing import AsyncIterator, Iterator\n", "\n", "from langchain_core.document_loaders import BaseLoader\n", "from langchain_core.documents import Document\n", "\n", "\n", "class CustomDocumentLoader(BaseLoader):\n", " \"\"\"An example document loader that reads a file line by line.\"\"\"\n", "\n", " def __init__(self, file_path: str) -> None:\n", " \"\"\"Initialize the loader with a file path.\n", "\n", " Args:\n", " file_path: The path to the file to load.\n", " \"\"\"\n", " self.file_path = file_path\n", "\n", " def lazy_load(self) -> Iterator[Document]: # <-- Does not take any arguments\n", " \"\"\"A lazy loader that reads a file line by line.\n", "\n", " When you're implementing lazy load methods, you should use a generator\n", " to yield documents one by one.\n", " \"\"\"\n", " with open(self.file_path, encoding=\"utf-8\") as f:\n", " line_number = 0\n", " for line in f:\n", " yield Document(\n", " page_content=line,\n", " metadata={\"line_number\": line_number, \"source\": self.file_path},\n", " )\n", " line_number += 1\n", "\n", " # alazy_load is OPTIONAL.\n", " # If you leave out the implementation, a default implementation which delegates to lazy_load will be used!\n", " async def alazy_load(\n", " self,\n", " ) -> AsyncIterator[Document]: # <-- Does not take any arguments\n", " \"\"\"An async lazy loader that reads a file line by line.\"\"\"\n", " # Requires aiofiles\n", " # Install with `pip install aiofiles`\n", " # https://github.com/Tinche/aiofiles\n", " import aiofiles\n", "\n", " async with aiofiles.open(self.file_path, encoding=\"utf-8\") as f:\n", " line_number = 0\n", " async for line in f:\n", " yield Document(\n", " page_content=line,\n", " metadata={\"line_number\": line_number, \"source\": self.file_path},\n", " )\n", " line_number += 1" ] }, { "cell_type": "markdown", "id": "eb845512-3d46-44fa-a4c6-ff723533abbe", "metadata": { "tags": [] }, "source": [ "### Test 🧪\n", "\n", "\n", "To test out the document loader, we need a file with some quality content." 
] }, { "cell_type": "code", "execution_count": 2, "id": "b1751198-c6dd-4149-95bd-6370ce8fa06f", "metadata": { "tags": [] }, "outputs": [], "source": [ "with open(\"./meow.txt\", \"w\", encoding=\"utf-8\") as f:\n", " quality_content = \"meow meow🐱 \\n meow meow🐱 \\n meow😻😻\"\n", " f.write(quality_content)\n", "\n", "loader = CustomDocumentLoader(\"./meow.txt\")" ] }, { "cell_type": "code", "execution_count": 3, "id": "71ef1482-f9de-4852-b5a4-0938f350612e", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "<class 'langchain_core.documents.base.Document'>\n", "page_content='meow meow🐱 \\n' metadata={'line_number': 0, 'source': './meow.txt'}\n", "\n", "<class 'langchain_core.documents.base.Document'>\n", "page_content=' meow meow🐱 \\n' metadata={'line_number': 1, 'source': './meow.txt'}\n", "\n", "<class 'langchain_core.documents.base.Document'>\n", "page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'}\n" ] } ], "source": [ "## Test out the lazy load interface\n", "for doc in loader.lazy_load():\n", " print()\n", " print(type(doc))\n", " print(doc)" ] }, { "cell_type": "code", "execution_count": 4, "id": "1588e78c-e81a-4d40-b36c-634242c84a6a", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "<class 'langchain_core.documents.base.Document'>\n", "page_content='meow meow🐱 \\n' metadata={'line_number': 0, 'source': './meow.txt'}\n", "\n", "<class 'langchain_core.documents.base.Document'>\n", "page_content=' meow meow🐱 \\n' metadata={'line_number': 1, 'source': './meow.txt'}\n", "\n", "<class 'langchain_core.documents.base.Document'>\n", "page_content=' meow😻😻' metadata={'line_number': 2, 'source': './meow.txt'}\n" ] } ], "source": [ "## Test out the async implementation\n", "async for doc in loader.alazy_load():\n", " print()\n", " print(type(doc))\n", " print(doc)" ] }, { "cell_type": "markdown", "id": "56cb443e-f987-4386-b4ec-975ee129adb2", "metadata": {}, "source": [ "::: {.callout-tip}\n", "\n", "`load()` can be helpful in an interactive environment such as a jupyter notebook.\n", "\n", "Avoid using it for production code since eager loading assumes that all the content\n", "can fit into memory, which is not always the case, especially for enterprise data.\n", ":::" ] }, { "cell_type": "code", "execution_count": 6, "id": "df5ad46a-9e00-4073-8505-489fc4f3799e", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='meow meow🐱 \\n', metadata={'line_number': 0, 'source': './meow.txt'}),\n", " Document(page_content=' meow meow🐱 \\n', metadata={'line_number': 1, 'source': './meow.txt'}),\n", " Document(page_content=' meow😻😻', metadata={'line_number': 2, 'source': './meow.txt'})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loader.load()" ] }, { "cell_type": "markdown", "id": "639fe87c-b65f-4bef-8fe2-d10be85589f4", "metadata": {}, "source": [ "## Working with Files\n", "\n", "Many document loaders invovle parsing files. The difference between such loaders usually stems from how the file is parsed rather than how the file is loaded. 
For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.\n", "\n", "As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded.\n", "\n", "### BaseBlobParser\n", "\n", "A `BaseBlobParser` is an interface that accepts a `blob` and outputs a list of `Document` objects. A `blob` is a representation of data that lives either in memory or in a file. LangChain Python has a `Blob` primitive which is inspired by the [Blob WebAPI spec](https://developer.mozilla.org/en-US/docs/Web/API/Blob)." ] }, { "cell_type": "code", "execution_count": 7, "id": "209f6a91-2f15-4cb2-9237-f79fc9493b82", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.document_loaders import BaseBlobParser, Blob\n", "\n", "\n", "class MyParser(BaseBlobParser):\n", " \"\"\"A simple parser that creates a document from each line.\"\"\"\n", "\n", " def lazy_parse(self, blob: Blob) -> Iterator[Document]:\n", " \"\"\"Parse a blob into a document line by line.\"\"\"\n", " line_number = 0\n", " with blob.as_bytes_io() as f:\n", " for line in f:\n", " line_number += 1\n", " yield Document(\n", " page_content=line,\n", " metadata={\"line_number\": line_number, \"source\": blob.source},\n", " )" ] }, { "cell_type": "code", "execution_count": 8, "id": "b1275c59-06d4-458f-abd2-fcbad0bde442", "metadata": { "tags": [] }, "outputs": [], "source": [ "blob = Blob.from_path(\"./meow.txt\")\n", "parser = MyParser()" ] }, { "cell_type": "code", "execution_count": 8, "id": "56a3d707-2086-413b-ae82-50e92ddb27f6", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='meow meow🐱 \\n', metadata={'line_number': 1, 'source': './meow.txt'}),\n", " Document(page_content=' meow meow🐱 \\n', metadata={'line_number': 2, 'source': './meow.txt'}),\n", " Document(page_content=' meow😻😻', metadata={'line_number': 3, 'source': './meow.txt'})]" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(parser.lazy_parse(blob))" ] }, { "cell_type": "markdown", "id": "433bfb7c-7767-43bc-b71e-42413d7494a8", "metadata": {}, "source": [ "Using the **blob** API also allows one to load content directly from memory without having to read it from a file!" ] }, { "cell_type": "code", "execution_count": 9, "id": "20d03092-ba35-47d7-b612-9d1631c261cd", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='some data from memory\\n', metadata={'line_number': 1, 'source': None}),\n", " Document(page_content='meow', metadata={'line_number': 2, 'source': None})]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob = Blob(data=b\"some data from memory\\nmeow\")\n", "list(parser.lazy_parse(blob))" ] }, { "cell_type": "markdown", "id": "d401c5e9-32cc-41e2-973f-c70d1cd3ba76", "metadata": {}, "source": [ "### Blob\n", "\n", "Let's take a quick look at some of the Blob API." 
] }, { "cell_type": "code", "execution_count": 10, "id": "a9e92e0e-c8da-401c-b8c6-f0676004cf58", "metadata": { "tags": [] }, "outputs": [], "source": [ "blob = Blob.from_path(\"./meow.txt\", metadata={\"foo\": \"bar\"})" ] }, { "cell_type": "code", "execution_count": 11, "id": "6b559d30-8b0c-4e45-86b1-e4602d9aaa7e", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'utf-8'" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob.encoding" ] }, { "cell_type": "code", "execution_count": 12, "id": "2f7b145a-9c6f-47f9-9487-1f4b25aff46f", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "b'meow meow\\xf0\\x9f\\x90\\xb1 \\n meow meow\\xf0\\x9f\\x90\\xb1 \\n meow\\xf0\\x9f\\x98\\xbb\\xf0\\x9f\\x98\\xbb'" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob.as_bytes()" ] }, { "cell_type": "code", "execution_count": 13, "id": "9b9482fa-c49c-42cd-a2ef-80bc93214631", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'meow meow🐱 \\n meow meow🐱 \\n meow😻😻'" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob.as_string()" ] }, { "cell_type": "code", "execution_count": 14, "id": "04cc7a81-290e-4ef8-b7e1-d885fcc59ece", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "<contextlib._GeneratorContextManager at 0x743f34324450>" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob.as_bytes_io()" ] }, { "cell_type": "code", "execution_count": 15, "id": "ec8de0ab-51d7-4e41-82c9-3ce0a6fdc2cd", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "{'foo': 'bar'}" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob.metadata" ] }, { "cell_type": "code", "execution_count": 16, "id": "19eae991-ae48-43c2-8952-7347cdb76a34", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "'./meow.txt'" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "blob.source" ] }, { "cell_type": "markdown", "id": "3ea67645-a367-48ce-b164-0d9f00c17370", "metadata": {}, "source": [ "### Blob Loaders\n", "\n", "While a parser encapsulates the logic needed to parse binary data into documents, *blob loaders* encapsulate the logic that's necessary to load blobs from a given storage location.\n", "\n", "A the moment, `LangChain` only supports `FileSystemBlobLoader`.\n", "\n", "You can use the `FileSystemBlobLoader` to load blobs and then use the parser to parse them." 
] }, { "cell_type": "code", "execution_count": 17, "id": "c093becb-2e84-4329-89e3-956a3bd765e5", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader\n", "\n", "blob_loader = FileSystemBlobLoader(path=\".\", glob=\"*.mdx\", show_progress=True)" ] }, { "cell_type": "code", "execution_count": 18, "id": "77739dab-2a1e-4b64-8daa-fee8aa029972", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "45e85d3f63224bb59db02a40ae2e3268", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/8 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "page_content='# Microsoft Office\\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}\n", "page_content='# Markdown\\n' metadata={'line_number': 1, 'source': 'markdown.mdx'}\n", "page_content='# JSON\\n' metadata={'line_number': 1, 'source': 'json.mdx'}\n", "page_content='---\\n' metadata={'line_number': 1, 'source': 'pdf.mdx'}\n", "page_content='---\\n' metadata={'line_number': 1, 'source': 'index.mdx'}\n", "page_content='# File Directory\\n' metadata={'line_number': 1, 'source': 'file_directory.mdx'}\n", "page_content='# CSV\\n' metadata={'line_number': 1, 'source': 'csv.mdx'}\n", "page_content='# HTML\\n' metadata={'line_number': 1, 'source': 'html.mdx'}\n" ] } ], "source": [ "parser = MyParser()\n", "for blob in blob_loader.yield_blobs():\n", " for doc in parser.lazy_parse(blob):\n", " print(doc)\n", " break" ] }, { "cell_type": "markdown", "id": "f016390c-d38b-4261-946d-34eefe546df7", "metadata": {}, "source": [ "### Generic Loader\n", "\n", "LangChain has a `GenericLoader` abstraction which composes a `BlobLoader` with a `BaseBlobParser`.\n", "\n", "`GenericLoader` is meant to provide standardized classmethods that make it easy to use existing `BlobLoader` implementations. At the moment, only the `FileSystemBlobLoader` is supported." ] }, { "cell_type": "code", "execution_count": 19, "id": "1de74daf-70ee-4616-9089-d28e26b16851", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "5f1f6810a71a4909ac9fe1e8f8cb9e0a", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/8 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "page_content='# Microsoft Office\\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}\n", "page_content='\\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}\n", "page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}\n", "page_content='\\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}\n", "page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}\n", "... 
output truncated for demo purposes\n" ] } ], "source": [ "from langchain_community.document_loaders.generic import GenericLoader\n", "\n", "loader = GenericLoader.from_filesystem(\n", " path=\".\", glob=\"*.mdx\", show_progress=True, parser=MyParser()\n", ")\n", "\n", "for idx, doc in enumerate(loader.lazy_load()):\n", " if idx < 5:\n", " print(doc)\n", "\n", "print(\"... output truncated for demo purposes\")" ] }, { "cell_type": "markdown", "id": "902048b7-ff04-46c0-97b5-935b40ff8511", "metadata": {}, "source": [ "#### Custom Generic Loader\n", "\n", "If you really like creating classes, you can sub-class and create a class to encapsulate the logic together.\n", "\n", "You can sub-class from this class to load content using an existing loader." ] }, { "cell_type": "code", "execution_count": 20, "id": "23633102-dc44-4fed-a4e1-8159489101c8", "metadata": { "tags": [] }, "outputs": [], "source": [ "from typing import Any\n", "\n", "\n", "class MyCustomLoader(GenericLoader):\n", " @staticmethod\n", " def get_parser(**kwargs: Any) -> BaseBlobParser:\n", " \"\"\"Override this method to associate a default parser with the class.\"\"\"\n", " return MyParser()" ] }, { "cell_type": "code", "execution_count": 21, "id": "dc95be85-4a29-4c6f-a260-08afa3c95538", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "4320598ea3b44a52b1873e1c801db312", "version_major": 2, "version_minor": 0 }, "text/plain": [ " 0%| | 0/8 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "page_content='# Microsoft Office\\n' metadata={'line_number': 1, 'source': 'office_file.mdx'}\n", "page_content='\\n' metadata={'line_number': 2, 'source': 'office_file.mdx'}\n", "page_content='>[The Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS.\\n' metadata={'line_number': 3, 'source': 'office_file.mdx'}\n", "page_content='\\n' metadata={'line_number': 4, 'source': 'office_file.mdx'}\n", "page_content='This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a document format that we can use downstream.\\n' metadata={'line_number': 5, 'source': 'office_file.mdx'}\n", "... output truncated for demo purposes\n" ] } ], "source": [ "loader = MyCustomLoader.from_filesystem(path=\".\", glob=\"*.mdx\", show_progress=True)\n", "\n", "for idx, doc in enumerate(loader.lazy_load()):\n", " if idx < 5:\n", " print(doc)\n", "\n", "print(\"... output truncated for demo purposes\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
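To complement the tip above about avoiding eager `load()` for large corpora, here is a minimal sketch of consuming `lazy_load()` in bounded batches so that only a small number of documents is held in memory at a time. It reuses the `CustomDocumentLoader` defined earlier; the batch size and the `index_batch` helper are hypothetical stand-ins for whatever downstream step (e.g., writing to a vector store) you actually use.

```python
from langchain_core.documents import Document


def index_batch(batch: list[Document]) -> None:
    """Hypothetical downstream step, e.g. writing a batch to a vector store."""
    print(f"Processed a batch of {len(batch)} documents")


loader = CustomDocumentLoader("./meow.txt")

batch: list[Document] = []
for doc in loader.lazy_load():
    batch.append(doc)
    if len(batch) >= 100:  # keep at most 100 documents in memory at a time
        index_batch(batch)
        batch = []
if batch:  # flush the final partial batch
    index_batch(batch)
```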
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_directory.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "9122e4b9-4883-4e6e-940b-ab44a70f0951", "metadata": {}, "source": [ "# How to load documents from a directory\n", "\n", "LangChain's [DirectoryLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.directory.DirectoryLoader.html) implements functionality for reading files from disk into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. Here we demonstrate:\n", "\n", "- How to load from a filesystem, including use of wildcard patterns;\n", "- How to use multithreading for file I/O;\n", "- How to use custom loader classes to parse specific file types (e.g., code);\n", "- How to handle errors, such as those due to decoding." ] }, { "cell_type": "code", "execution_count": 1, "id": "1c1e3796-bee8-4882-8065-6b98e48ec53a", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import DirectoryLoader" ] }, { "cell_type": "markdown", "id": "e3cdb7bb-1f58-4a7a-af83-599443127834", "metadata": {}, "source": [ "`DirectoryLoader` accepts a `loader_cls` kwarg, which defaults to [UnstructuredLoader](/docs/integrations/document_loaders/unstructured_file). [Unstructured](https://unstructured-io.github.io/unstructured/) supports parsing for a number of formats, such as PDF and HTML. Here we use it to read in a markdown (.md) file.\n", "\n", "We can use the `glob` parameter to control which files to load. Note that here it doesn't load the `.rst` file or the `.html` files." ] }, { "cell_type": "code", "execution_count": 2, "id": "bd2fcd1f-8286-499b-b43a-0c17084ae8ee", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "20" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loader = DirectoryLoader(\"../\", glob=\"**/*.md\")\n", "docs = loader.load()\n", "len(docs)" ] }, { "cell_type": "code", "execution_count": 3, "id": "9ff1503d-3ac0-4172-99ec-15c9a4a707d8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Security\n", "\n", "LangChain has a large ecosystem of integrations with various external resources like local\n" ] } ], "source": [ "print(docs[0].page_content[:100])" ] }, { "cell_type": "markdown", "id": "b8b1cee8-626a-461a-8d33-1c56120f1cc0", "metadata": {}, "source": [ "## Show a progress bar\n", "\n", "By default a progress bar will not be shown. To show a progress bar, install the `tqdm` library (e.g. `pip install tqdm`), and set the `show_progress` parameter to `True`." ] }, { "cell_type": "code", "execution_count": 4, "id": "cfa48224-5d02-4aa7-93c7-ce48241645d5", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 54.56it/s]\n" ] } ], "source": [ "loader = DirectoryLoader(\"../\", glob=\"**/*.md\", show_progress=True)\n", "docs = loader.load()" ] }, { "cell_type": "markdown", "id": "5e02c922-6a4b-48e6-8c46-5015553eafbe", "metadata": {}, "source": [ "## Use multithreading\n", "\n", "By default the loading happens in one thread. In order to utilize several threads set the `use_multithreading` flag to true." 
] }, { "cell_type": "code", "execution_count": 5, "id": "aae1c580-6d7c-409c-bfc8-3049fa8bdbf9", "metadata": {}, "outputs": [], "source": [ "loader = DirectoryLoader(\"../\", glob=\"**/*.md\", use_multithreading=True)\n", "docs = loader.load()" ] }, { "cell_type": "markdown", "id": "5add3f54-f303-4006-90c9-540a90ab8c46", "metadata": {}, "source": [ "## Change loader class\n", "By default this uses the `UnstructuredLoader` class. To customize the loader, specify the loader class in the `loader_cls` kwarg. Below we show an example using [TextLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.text.TextLoader.html):" ] }, { "cell_type": "code", "execution_count": 6, "id": "d369ee78-ea24-48cc-9f46-1f5cd4b56f48", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import TextLoader\n", "\n", "loader = DirectoryLoader(\"../\", glob=\"**/*.md\", loader_cls=TextLoader)\n", "docs = loader.load()" ] }, { "cell_type": "code", "execution_count": 7, "id": "2863d7dd-2d56-4fef-8bfd-95c48a6b4a71", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "# Security\n", "\n", "LangChain has a large ecosystem of integrations with various external resources like loc\n" ] } ], "source": [ "print(docs[0].page_content[:100])" ] }, { "cell_type": "markdown", "id": "c97ed37b-38c0-4f31-9403-d3a5d5444f78", "metadata": {}, "source": [ "Notice that while the `UnstructuredLoader` parses Markdown headers, `TextLoader` does not.\n", "\n", "If you need to load Python source code files, use the `PythonLoader`:" ] }, { "cell_type": "code", "execution_count": 8, "id": "5ef483a8-57d3-45e5-93be-37c8416c543c", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PythonLoader\n", "\n", "loader = DirectoryLoader(\"../../../../../\", glob=\"**/*.py\", loader_cls=PythonLoader)" ] }, { "cell_type": "markdown", "id": "61dd1428-8246-47e3-b1da-f6a3d6f05566", "metadata": {}, "source": [ "## Auto-detect file encodings with TextLoader\n", "\n", "`DirectoryLoader` can help manage errors due to variations in file encodings. Below we will attempt to load in a collection of files, one of which includes non-UTF8 encodings." ] }, { "cell_type": "code", "execution_count": 9, "id": "e69db7ae-0385-4129-968f-17c42c7a635c", "metadata": {}, "outputs": [], "source": [ "path = \"../../../../libs/langchain/tests/unit_tests/examples/\"\n", "\n", "loader = DirectoryLoader(path, glob=\"**/*.txt\", loader_cls=TextLoader)" ] }, { "cell_type": "markdown", "id": "e3b61cf0-809b-4c97-b1a4-17c6aa4343e1", "metadata": {}, "source": [ "### A. 
Default Behavior\n", "\n", "By default we raise an error:" ] }, { "cell_type": "code", "execution_count": 10, "id": "4b8f56be-122a-4c56-86a5-a70631a78ec7", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Error loading file ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt\n" ] }, { "ename": "RuntimeError", "evalue": "Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mUnicodeDecodeError\u001b[0m Traceback (most recent call last)", "File \u001b[0;32m~/repos/langchain/libs/community/langchain_community/document_loaders/text.py:43\u001b[0m, in \u001b[0;36mTextLoader.lazy_load\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 42\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m \u001b[38;5;28mopen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mfile_path, encoding\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mencoding) \u001b[38;5;28;01mas\u001b[39;00m f:\n\u001b[0;32m---> 43\u001b[0m text \u001b[38;5;241m=\u001b[39m \u001b[43mf\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 44\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mUnicodeDecodeError\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n", "File \u001b[0;32m~/.pyenv/versions/3.10.4/lib/python3.10/codecs.py:322\u001b[0m, in \u001b[0;36mBufferedIncrementalDecoder.decode\u001b[0;34m(self, input, final)\u001b[0m\n\u001b[1;32m 321\u001b[0m data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mbuffer \u001b[38;5;241m+\u001b[39m \u001b[38;5;28minput\u001b[39m\n\u001b[0;32m--> 322\u001b[0m (result, consumed) \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_buffer_decode\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdata\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43merrors\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mfinal\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 323\u001b[0m \u001b[38;5;66;03m# keep undecoded input until the next call\u001b[39;00m\n", "\u001b[0;31mUnicodeDecodeError\u001b[0m: 'utf-8' codec can't decode byte 0xca in position 0: invalid continuation byte", "\nThe above exception was the direct cause of the following exception:\n", "\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)", "Cell \u001b[0;32mIn[10], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[43mloader\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mload\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n", "File \u001b[0;32m~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:117\u001b[0m, in \u001b[0;36mDirectoryLoader.load\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 115\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mload\u001b[39m(\u001b[38;5;28mself\u001b[39m) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m List[Document]:\n\u001b[1;32m 116\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"Load documents.\"\"\"\u001b[39;00m\n\u001b[0;32m--> 117\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m 
\u001b[38;5;28;43mlist\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mlazy_load\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n", "File \u001b[0;32m~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:182\u001b[0m, in \u001b[0;36mDirectoryLoader.lazy_load\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 180\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 181\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m i \u001b[38;5;129;01min\u001b[39;00m items:\n\u001b[0;32m--> 182\u001b[0m \u001b[38;5;28;01myield from\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_lazy_load_file(i, p, pbar)\n\u001b[1;32m 184\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m pbar:\n\u001b[1;32m 185\u001b[0m pbar\u001b[38;5;241m.\u001b[39mclose()\n", "File \u001b[0;32m~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:220\u001b[0m, in \u001b[0;36mDirectoryLoader._lazy_load_file\u001b[0;34m(self, item, path, pbar)\u001b[0m\n\u001b[1;32m 218\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 219\u001b[0m logger\u001b[38;5;241m.\u001b[39merror(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mError loading file \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mstr\u001b[39m(item)\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m--> 220\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[1;32m 221\u001b[0m \u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[1;32m 222\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m pbar:\n", "File \u001b[0;32m~/repos/langchain/libs/community/langchain_community/document_loaders/directory.py:210\u001b[0m, in \u001b[0;36mDirectoryLoader._lazy_load_file\u001b[0;34m(self, item, path, pbar)\u001b[0m\n\u001b[1;32m 208\u001b[0m loader \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mloader_cls(\u001b[38;5;28mstr\u001b[39m(item), \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mloader_kwargs)\n\u001b[1;32m 209\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 210\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m subdoc \u001b[38;5;129;01min\u001b[39;00m loader\u001b[38;5;241m.\u001b[39mlazy_load():\n\u001b[1;32m 211\u001b[0m \u001b[38;5;28;01myield\u001b[39;00m subdoc\n\u001b[1;32m 212\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mNotImplementedError\u001b[39;00m:\n", "File \u001b[0;32m~/repos/langchain/libs/community/langchain_community/document_loaders/text.py:56\u001b[0m, in \u001b[0;36mTextLoader.lazy_load\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 54\u001b[0m \u001b[38;5;28;01mcontinue\u001b[39;00m\n\u001b[1;32m 55\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m---> 56\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mRuntimeError\u001b[39;00m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mError loading \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mfile_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01me\u001b[39;00m\n\u001b[1;32m 57\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 58\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m 
\u001b[38;5;167;01mRuntimeError\u001b[39;00m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mError loading \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mfile_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01me\u001b[39;00m\n", "\u001b[0;31mRuntimeError\u001b[0m: Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt" ] } ], "source": [ "loader.load()" ] }, { "cell_type": "markdown", "id": "48308077-2d99-4dd6-9bf1-dd1ad6c64b0f", "metadata": {}, "source": [ "The file `example-non-utf8.txt` uses a different encoding, so the `load()` function fails with a helpful message indicating which file failed decoding.\n", "\n", "With the default behavior of `TextLoader` any failure to load any of the documents will fail the whole loading process and no documents are loaded.\n", "\n", "### B. Silent fail\n", "\n", "We can pass the parameter `silent_errors` to the `DirectoryLoader` to skip the files which could not be loaded and continue the load process." ] }, { "cell_type": "code", "execution_count": 11, "id": "b333c652-a7ad-47f4-8be8-d27c18ef11b7", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Error loading file ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt: Error loading ../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt\n" ] } ], "source": [ "loader = DirectoryLoader(\n", " path, glob=\"**/*.txt\", loader_cls=TextLoader, silent_errors=True\n", ")\n", "docs = loader.load()" ] }, { "cell_type": "code", "execution_count": 12, "id": "b99ef682-b892-4790-8964-40185fea41a2", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['../../../../libs/langchain/tests/unit_tests/examples/example-utf8.txt']" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "doc_sources = [doc.metadata[\"source\"] for doc in docs]\n", "doc_sources" ] }, { "cell_type": "markdown", "id": "da475bff-2f4f-4ea3-a058-2979042c5326", "metadata": {}, "source": [ "### C. Auto detect encodings\n", "\n", "We can also ask `TextLoader` to auto detect the file encoding before failing, by passing the `autodetect_encoding` to the loader class." 
] }, { "cell_type": "code", "execution_count": 13, "id": "832760da-ed9f-4e68-a67c-35493bde2214", "metadata": {}, "outputs": [], "source": [ "text_loader_kwargs = {\"autodetect_encoding\": True}\n", "loader = DirectoryLoader(\n", " path, glob=\"**/*.txt\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs\n", ")\n", "docs = loader.load()" ] }, { "cell_type": "code", "execution_count": 14, "id": "5c4f4dba-f84f-496e-9378-3e6858305619", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['../../../../libs/langchain/tests/unit_tests/examples/example-utf8.txt',\n", " '../../../../libs/langchain/tests/unit_tests/examples/example-non-utf8.txt']" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "doc_sources = [doc.metadata[\"source\"] for doc in docs]\n", "doc_sources" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_html.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "0c6c50fc-15e1-4767-925a-53a37c430b9b", "metadata": {}, "source": [ "# How to load HTML\n", "\n", "The HyperText Markup Language or [HTML](https://en.wikipedia.org/wiki/HTML) is the standard markup language for documents designed to be displayed in a web browser.\n", "\n", "This covers how to load `HTML` documents into a LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.\n", "\n", "Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/integrations/document_loaders/firecrawl).\n", "\n", "## Loading HTML with Unstructured" ] }, { "cell_type": "code", "execution_count": null, "id": "617a5e2b-1e92-4bdd-bd04-95a4d2379410", "metadata": {}, "outputs": [], "source": [ "%pip install \"unstructured[html]\"" ] }, { "cell_type": "code", "execution_count": 1, "id": "7d167ca3-c7c7-4ef0-b509-080629f0f482", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[Document(page_content='My First Heading\\n\\nMy first paragraph.', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html'})]\n" ] } ], "source": [ "from langchain_community.document_loaders import UnstructuredHTMLLoader\n", "\n", "file_path = \"../../../docs/integrations/document_loaders/example_data/fake-content.html\"\n", "\n", "loader = UnstructuredHTMLLoader(file_path)\n", "data = loader.load()\n", "\n", "print(data)" ] }, { "cell_type": "markdown", "id": "cc85f7e8-f62e-49bc-910e-d0b151c9d651", "metadata": {}, "source": [ "## Loading HTML with BeautifulSoup4\n", "\n", "We can also use `BeautifulSoup4` to load HTML documents using the `BSHTMLLoader`. This will extract the text from the HTML into `page_content`, and the page title as `title` into `metadata`." ] }, { "cell_type": "code", "execution_count": null, "id": "06a5e555-8e1f-44a7-b921-4dd8aedd3bca", "metadata": {}, "outputs": [], "source": [ "%pip install bs4" ] }, { "cell_type": "code", "execution_count": 2, "id": "0a2050a8-6df6-4696-9889-ba367d6f9caa", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[Document(page_content='\\nTest Title\\n\\n\\nMy First Heading\\nMy first paragraph.\\n\\n\\n', metadata={'source': '../../../docs/integrations/document_loaders/example_data/fake-content.html', 'title': 'Test Title'})]\n" ] } ], "source": [ "from langchain_community.document_loaders import BSHTMLLoader\n", "\n", "loader = BSHTMLLoader(file_path)\n", "data = loader.load()\n", "\n", "print(data)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_json.mdx
# How to load JSON [JSON (JavaScript Object Notation)](https://en.wikipedia.org/wiki/JSON) is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attribute–value pairs and arrays (or other serializable values). [JSON Lines](https://jsonlines.org/) is a file format where each line is a valid JSON value. LangChain implements a [JSONLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.json_loader.JSONLoader.html) to convert JSON and JSONL data into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects. It uses a specified [jq schema](https://en.wikipedia.org/wiki/Jq_(programming_language)) to parse the JSON files, allowing for the extraction of specific fields into the content and metadata of the LangChain Document. It uses the `jq` python package. Check out this [manual](https://stedolan.github.io/jq/manual/#Basicfilters) for a detailed documentation of the `jq` syntax. Here we will demonstrate: - How to load JSON and JSONL data into the content of a LangChain `Document`; - How to load JSON and JSONL data into metadata associated with a `Document`. ```python #!pip install jq ``` ```python from langchain_community.document_loaders import JSONLoader ``` ```python import json from pathlib import Path from pprint import pprint file_path='./example_data/facebook_chat.json' data = json.loads(Path(file_path).read_text()) ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` {'image': {'creation_timestamp': 1675549016, 'uri': 'image_of_the_chat.jpg'}, 'is_still_participant': True, 'joinable_mode': {'link': '', 'mode': 1}, 'magic_words': [], 'messages': [{'content': 'Bye!', 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}, {'content': 'Oh no worries! Bye', 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}, {'content': 'No Im sorry it was my mistake, the blue one is not ' 'for sale', 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}, {'content': 'I thought you were selling the blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}, {'content': 'Im not interested in this bag. Im interested in the ' 'blue one!', 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}, {'content': 'Here is $129', 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}, {'photos': [{'creation_timestamp': 1675595059, 'uri': 'url_of_some_picture.jpg'}], 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}, {'content': 'Online is at least $100', 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}, {'content': 'How much do you want?', 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}, {'content': 'Goodmorning! $50 is too low.', 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}, {'content': 'Hi! Im interested in your bag. Im offering $50. Let ' 'me know if you are interested. Thanks!', 'sender_name': 'User 1', 'timestamp_ms': 1675549022673}], 'participants': [{'name': 'User 1'}, {'name': 'User 2'}], 'thread_path': 'inbox/User 1 and User 2 chat', 'title': 'User 1 and User 2 chat'} ``` </CodeOutputBlock> ## Using `JSONLoader` Suppose we are interested in extracting the values under the `content` field within the `messages` key of the JSON data. This can easily be done through the `JSONLoader` as shown below. 
### JSON file ```python loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[].content', text_content=False) data = loader.load() ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11})] ``` </CodeOutputBlock> ### JSON Lines file If you want to load documents from a JSON Lines file, you pass `json_lines=True` and specify `jq_schema` to extract `page_content` from a single JSON object. ```python file_path = './example_data/facebook_chat_messages.jsonl' pprint(Path(file_path).read_text()) ``` <CodeOutputBlock lang="python"> ``` ('{"sender_name": "User 2", "timestamp_ms": 1675597571851, "content": "Bye!"}\n' '{"sender_name": "User 1", "timestamp_ms": 1675597435669, "content": "Oh no ' 'worries! 
Bye"}\n' '{"sender_name": "User 2", "timestamp_ms": 1675596277579, "content": "No Im ' 'sorry it was my mistake, the blue one is not for sale"}\n') ``` </CodeOutputBlock> ```python loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.content', text_content=False, json_lines=True) data = loader.load() ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})] ``` </CodeOutputBlock> Another option is set `jq_schema='.'` and provide `content_key`: ```python loader = JSONLoader( file_path='./example_data/facebook_chat_messages.jsonl', jq_schema='.', content_key='sender_name', json_lines=True) data = loader.load() ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` [Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 1}), Document(page_content='User 1', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 2}), Document(page_content='User 2', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat_messages.jsonl', 'seq_num': 3})] ``` </CodeOutputBlock> ### JSON file with jq schema `content_key` To load documents from a JSON file using the content_key within the jq schema, set is_content_key_jq_parsable=True. Ensure that content_key is compatible and can be parsed using the jq schema. ```python file_path = './sample.json' pprint(Path(file_path).read_text()) ``` <CodeOutputBlock lang="python"> ```json {"data": [ {"attributes": { "message": "message1", "tags": [ "tag1"]}, "id": "1"}, {"attributes": { "message": "message2", "tags": [ "tag2"]}, "id": "2"}]} ``` </CodeOutputBlock> ```python loader = JSONLoader( file_path=file_path, jq_schema=".data[]", content_key=".attributes.message", is_content_key_jq_parsable=True, ) data = loader.load() ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` [Document(page_content='message1', metadata={'source': '/path/to/sample.json', 'seq_num': 1}), Document(page_content='message2', metadata={'source': '/path/to/sample.json', 'seq_num': 2})] ``` </CodeOutputBlock> ## Extracting metadata Generally, we want to include metadata available in the JSON file into the documents that we create from the content. The following demonstrates how metadata can be extracted using the `JSONLoader`. There are some key changes to be noted. In the previous example where we didn't collect the metadata, we managed to directly specify in the schema where the value for the `page_content` can be extracted from. ``` .messages[].content ``` In the current example, we have to tell the loader to iterate over the records in the `messages` field. The jq_schema then has to be: ``` .messages[] ``` This allows us to pass the records (dict) into the `metadata_func` that has to be implemented. 
The `metadata_func` is responsible for identifying which pieces of information in the record should be included in the metadata stored in the final `Document` object. Additionally, we now have to explicitly specify in the loader, via the `content_key` argument, the key from the record where the value for the `page_content` needs to be extracted from. ```python # Define the metadata extraction function. def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") return metadata loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func ) data = loader.load() ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` [Document(page_content='Bye!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. Im interested in the blue one!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. 
Thanks!', metadata={'source': '/Users/avsolatorio/WBG/langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})] ``` </CodeOutputBlock> Now, you will see that the documents contain the metadata associated with the content we extracted. ## The `metadata_func` As shown above, the `metadata_func` accepts the default metadata generated by the `JSONLoader`. This allows full control to the user with respect to how the metadata is formatted. For example, the default metadata contains the `source` and the `seq_num` keys. However, it is possible that the JSON data contain these keys as well. The user can then exploit the `metadata_func` to rename the default keys and use the ones from the JSON data. The example below shows how we can modify the `source` to only contain information of the file source relative to the `langchain` directory. ```python # Define the metadata extraction function. def metadata_func(record: dict, metadata: dict) -> dict: metadata["sender_name"] = record.get("sender_name") metadata["timestamp_ms"] = record.get("timestamp_ms") if "source" in metadata: source = metadata["source"].split("/") source = source[source.index("langchain"):] metadata["source"] = "/".join(source) return metadata loader = JSONLoader( file_path='./example_data/facebook_chat.json', jq_schema='.messages[]', content_key="content", metadata_func=metadata_func ) data = loader.load() ``` ```python pprint(data) ``` <CodeOutputBlock lang="python"> ``` [Document(page_content='Bye!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 1, 'sender_name': 'User 2', 'timestamp_ms': 1675597571851}), Document(page_content='Oh no worries! Bye', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 2, 'sender_name': 'User 1', 'timestamp_ms': 1675597435669}), Document(page_content='No Im sorry it was my mistake, the blue one is not for sale', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 3, 'sender_name': 'User 2', 'timestamp_ms': 1675596277579}), Document(page_content='I thought you were selling the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 4, 'sender_name': 'User 1', 'timestamp_ms': 1675595140251}), Document(page_content='Im not interested in this bag. 
Im interested in the blue one!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 5, 'sender_name': 'User 1', 'timestamp_ms': 1675595109305}), Document(page_content='Here is $129', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 6, 'sender_name': 'User 2', 'timestamp_ms': 1675595068468}), Document(page_content='', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 7, 'sender_name': 'User 2', 'timestamp_ms': 1675595060730}), Document(page_content='Online is at least $100', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 8, 'sender_name': 'User 2', 'timestamp_ms': 1675595045152}), Document(page_content='How much do you want?', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 9, 'sender_name': 'User 1', 'timestamp_ms': 1675594799696}), Document(page_content='Goodmorning! $50 is too low.', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 10, 'sender_name': 'User 2', 'timestamp_ms': 1675577876645}), Document(page_content='Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!', metadata={'source': 'langchain/docs/modules/indexes/document_loaders/examples/example_data/facebook_chat.json', 'seq_num': 11, 'sender_name': 'User 1', 'timestamp_ms': 1675549022673})] ``` </CodeOutputBlock> ## Common JSON structures with jq schema The list below provides a reference to the possible `jq_schema` the user can use to extract content from the JSON data depending on the structure. ``` JSON -> [{"text": ...}, {"text": ...}, {"text": ...}] jq_schema -> ".[].text" JSON -> {"key": [{"text": ...}, {"text": ...}, {"text": ...}]} jq_schema -> ".key[].text" JSON -> ["...", "...", "..."] jq_schema -> ".[]" ```
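As a concrete illustration of the second pattern above, here is a small self-contained sketch; the file name and its contents are invented for the example:

```python
import json
from pathlib import Path

from langchain_community.document_loaders import JSONLoader

# Hypothetical file shaped like {"key": [{"text": ...}, {"text": ...}]}
Path("./notes.json").write_text(
    json.dumps({"key": [{"text": "first note"}, {"text": "second note"}]})
)

loader = JSONLoader(
    file_path="./notes.json",
    jq_schema=".key[].text",
    text_content=False,
)
data = loader.load()
# data[0].page_content == 'first note'; data[1].page_content == 'second note'
```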
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_markdown.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "d836a98a-ad14-4bed-af76-e1877f7ef8a4", "metadata": {}, "source": [ "# How to load Markdown\n", "\n", "[Markdown](https://en.wikipedia.org/wiki/Markdown) is a lightweight markup language for creating formatted text using a plain-text editor.\n", "\n", "Here we cover how to load `Markdown` documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.\n", "\n", "We will cover:\n", "\n", "- Basic usage;\n", "- Parsing of Markdown into elements such as titles, list items, and text.\n", "\n", "LangChain implements an [UnstructuredMarkdownLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.markdown.UnstructuredMarkdownLoader.html) object which requires the [Unstructured](https://unstructured-io.github.io/unstructured/) package. First we install it:" ] }, { "cell_type": "code", "execution_count": 19, "id": "c8b147fb-6877-4f7a-b2ee-ee971c7bc662", "metadata": {}, "outputs": [], "source": [ "# !pip install \"unstructured[md]\"" ] }, { "cell_type": "markdown", "id": "ea8c41f8-a8dc-48cc-b78d-7b3e2427a34c", "metadata": {}, "source": [ "Basic usage will ingest a Markdown file to a single document. Here we demonstrate on LangChain's readme:" ] }, { "cell_type": "code", "execution_count": 1, "id": "80c50cc4-7ce9-4418-81b9-29c52c7b3627", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "🦜️🔗 LangChain\n", "\n", "⚡ Build context-aware reasoning applications ⚡\n", "\n", "Looking for the JS/TS library? Check out LangChain.js.\n", "\n", "To help you ship LangChain apps to production faster, check out LangSmith. \n", "LangSmith is a unified developer platform for building,\n" ] } ], "source": [ "from langchain_community.document_loaders import UnstructuredMarkdownLoader\n", "from langchain_core.documents import Document\n", "\n", "markdown_path = \"../../../../README.md\"\n", "loader = UnstructuredMarkdownLoader(markdown_path)\n", "\n", "data = loader.load()\n", "assert len(data) == 1\n", "assert isinstance(data[0], Document)\n", "readme_content = data[0].page_content\n", "print(readme_content[:250])" ] }, { "cell_type": "markdown", "id": "b7560a6e-ca5d-47e1-b176-a9c40e763ff3", "metadata": {}, "source": [ "## Retain Elements\n", "\n", "Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`." 
] }, { "cell_type": "code", "execution_count": 2, "id": "a986bbce-7fd3-41d1-bc47-49f9f57c7cd1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of documents: 65\n", "\n", "page_content='🦜️🔗 LangChain' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'Title'}\n", "\n", "page_content='⚡ Build context-aware reasoning applications ⚡' metadata={'source': '../../../../README.md', 'last_modified': '2024-04-29T13:40:19', 'page_number': 1, 'languages': ['eng'], 'parent_id': 'c3223b6f7100be08a78f1e8c0c28fde1', 'filetype': 'text/markdown', 'file_directory': '../../../..', 'filename': 'README.md', 'category': 'NarrativeText'}\n", "\n" ] } ], "source": [ "loader = UnstructuredMarkdownLoader(markdown_path, mode=\"elements\")\n", "\n", "data = loader.load()\n", "print(f\"Number of documents: {len(data)}\\n\")\n", "\n", "for document in data[:2]:\n", " print(f\"{document}\\n\")" ] }, { "cell_type": "markdown", "id": "117dc6b0-9baa-44a2-9d1d-fc38ecf7a233", "metadata": {}, "source": [ "Note that in this case we recover three distinct element types:" ] }, { "cell_type": "code", "execution_count": 3, "id": "75abc139-3ded-4e8e-9f21-d0c8ec40fdfc", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'Title', 'NarrativeText', 'ListItem'}\n" ] } ], "source": [ "print(set(document.metadata[\"category\"] for document in data))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_office_file.mdx
# How to load Microsoft Office files The [Microsoft Office](https://www.office.com/) suite of productivity software includes Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Microsoft Outlook, and Microsoft OneNote. It is available for Microsoft Windows and macOS operating systems. It is also available on Android and iOS. This covers how to load commonly used file formats including `DOCX`, `XLSX` and `PPTX` documents into a LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) object that we can use downstream. ## Loading DOCX, XLSX, PPTX with AzureAIDocumentIntelligenceLoader [Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is a machine-learning-based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value pairs from digital or scanned PDFs, images, Office and HTML files. Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`. This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode="single"` or `mode="page"` to return pure texts in a single page or document split by page. ### Prerequisite An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't already have one. You will be passing `<endpoint>` and `<key>` as parameters to the loader. ```python %pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader file_path = "<filepath>" endpoint = "<endpoint>" key = "<key>" loader = AzureAIDocumentIntelligenceLoader( api_endpoint=endpoint, api_key=key, file_path=file_path, api_model="prebuilt-layout" ) documents = loader.load() ```
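Because the loader's default output is markdown, it can be chained with `MarkdownHeaderTextSplitter` as mentioned above. A minimal sketch, where the header labels are arbitrary and `documents` is the result of the loader call above:

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
# Split the markdown produced by the loader into header-aware chunks.
chunks = splitter.split_text(documents[0].page_content)
```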
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/document_loader_pdf.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "d3dd7178-8337-44f0-a468-bc1af5c0e811", "metadata": {}, "source": [ "# How to load PDFs\n", "\n", "[Portable Document Format (PDF)](https://en.wikipedia.org/wiki/PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.\n", "\n", "This guide covers how to load `PDF` documents into the LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) format that we use downstream.\n", "\n", "LangChain integrates with a host of PDF parsers. Some are simple and relatively low-level; others will support OCR and image-processing, or perform advanced document layout analysis. The right choice will depend on your application. Below we enumerate the possibilities.\n", "\n", "## Using PyPDF\n", "\n", "Here we load a PDF using `pypdf` into array of documents, where each document contains the page content and metadata with `page` number." ] }, { "cell_type": "code", "execution_count": null, "id": "35c08d82-8b0a-45e2-8167-73e70f88208a", "metadata": {}, "outputs": [], "source": [ "%pip install pypdf" ] }, { "cell_type": "code", "execution_count": 1, "id": "7d8ccd0b-8415-4916-af32-0e6d30b9496b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(page_content='LayoutParser : A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1( \\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1Allen Institute for AI\\nshannons@allenai.org\\n2Brown University\\nruochen zhang@brown.edu\\n3Harvard University\\n{melissadell,jacob carlson }@fas.harvard.edu\\n4University of Washington\\nbcgl@cs.washington.edu\\n5University of Waterloo\\nw422li@uwaterloo.ca\\nAbstract. Recent advances in document image analysis (DIA) have been\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of im-\\nportant innovations by a wide audience. Though there have been on-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser , an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout de-\\ntection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. 
We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io .\\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\\n·Character Recognition ·Open Source library ·Toolkit.\\n1 Introduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classification [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', metadata={'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf', 'page': 0})" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.document_loaders import PyPDFLoader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = PyPDFLoader(file_path)\n", "pages = loader.load_and_split()\n", "\n", "pages[0]" ] }, { "cell_type": "markdown", "id": "78ce6d1d-86cc-45e3-8259-e21fbd2c7e6c", "metadata": {}, "source": [ "An advantage of this approach is that documents can be retrieved with page numbers.\n", "\n", "### Vector search over PDFs\n", "\n", "Once we have loaded PDFs into LangChain `Document` objects, we can index them (e.g., a RAG application) in the usual way:" ] }, { "cell_type": "code", "execution_count": null, "id": "c3b932bb", "metadata": {}, "outputs": [], "source": [ "%pip install faiss-cpu \n", "# use `pip install faiss-gpu` for CUDA GPU support" ] }, { "cell_type": "code", "execution_count": null, "id": "7ba35f1c-0a85-4f2f-a56e-3a994c69180d", "metadata": {}, "outputs": [], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")" ] }, { "cell_type": "code", "execution_count": 4, "id": "e0eaec77-f5cf-4172-8e39-41e1520eabba", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "13: 14 Z. Shen et al.\n", "6 Conclusion\n", "LayoutParser provides a comprehensive toolkit for deep learning-based document\n", "image analysis. The off-the-shelf library is easy to install, and can be used to\n", "build flexible and accurate pipelines for processing documents with complicated\n", "structures. It also supports hi\n", "0: LayoutParser : A Unified Toolkit for Deep\n", "Learning Based Document Image Analysis\n", "Zejiang Shen1( \u0000), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\n", "Lee4, Jacob Carlson3, and Weining Li5\n", "1Allen Institute for AI\n", "shannons@allenai.org\n", "2Brown University\n", "ruochen zhang@brown.edu\n", "3Harvard University\n", "\n" ] } ], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())\n", "docs = faiss_index.similarity_search(\"What is LayoutParser?\", k=2)\n", "for doc in docs:\n", " print(str(doc.metadata[\"page\"]) + \":\", doc.page_content[:300])" ] }, { "cell_type": "markdown", "id": "9ac123ca-386f-4b06-b3a7-9205ea3d6da7", "metadata": {}, "source": [ "### Extract text from images\n", "\n", "Some PDFs contain images of text-- e.g., within scanned documents, or figures. 
Using the `rapidocr-onnxruntime` package we can extract images as text as well:" ] }, { "cell_type": "code", "execution_count": null, "id": "347f67fb-67f3-4be7-9af3-23a73cf00f71", "metadata": {}, "outputs": [], "source": [ "%pip install rapidocr-onnxruntime" ] }, { "cell_type": "code", "execution_count": 9, "id": "babc138a-2188-49f7-a8d6-3570fa3ad802", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'LayoutParser : A Unified Toolkit for DL-Based DIA 5\\nTable 1: Current layout detection models in the LayoutParser model zoo\\nDataset Base Model1Large Model Notes\\nPubLayNet [38] F / M M Layouts of modern scientific documents\\nPRImA [3] M - Layouts of scanned modern magazines and scientific reports\\nNewspaper [17] F - Layouts of scanned US newspapers from the 20th century\\nTableBank [18] F F Table region on modern scientific and business document\\nHJDataset [31] F / M - Layouts of history Japanese documents\\n1For each dataset, we train several models of different sizes for different needs (the trade-off between accuracy\\nvs. computational cost). For “base model” and “large model”, we refer to using the ResNet 50 or ResNet 101\\nbackbones [ 13], respectively. One can train models of different architectures, like Faster R-CNN [ 28] (F) and Mask\\nR-CNN [ 12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained\\nusing the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model\\nzoo in coming months.\\nlayout data structures , which are optimized for efficiency and versatility. 3) When\\nnecessary, users can employ existing or customized OCR models via the unified\\nAPI provided in the OCR module . 4)LayoutParser comes with a set of utility\\nfunctions for the visualization and storage of the layout data. 5) LayoutParser\\nis also highly customizable, via its integration with functions for layout data\\nannotation and model training . We now provide detailed descriptions for each\\ncomponent.\\n3.1 Layout Detection Models\\nInLayoutParser , a layout model takes a document image as an input and\\ngenerates a list of rectangular boxes for the target content regions. Different\\nfrom traditional methods, it relies on deep convolutional neural networks rather\\nthan manually curated rules to identify content regions. It is formulated as an\\nobject detection problem and state-of-the-art models like Faster R-CNN [ 28] and\\nMask R-CNN [ 12] are used. This yields prediction results of high accuracy and\\nmakes it possible to build a concise, generalized interface for layout detection.\\nLayoutParser , built upon Detectron2 [ 35], provides a minimal API that can\\nperform layout detection with only four lines of code in Python:\\n1import layoutparser as lp\\n2image = cv2. imread (\" image_file \") # load images\\n3model = lp. Detectron2LayoutModel (\\n4 \"lp :// PubLayNet / faster_rcnn_R_50_FPN_3x / config \")\\n5layout = model . detect ( image )\\nLayoutParser provides a wealth of pre-trained model weights using various\\ndatasets covering different languages, time periods, and document types. Due to\\ndomain shift [ 7], the prediction performance can notably drop when models are ap-\\nplied to target samples that are significantly different from the training dataset. As\\ndocument structures and layouts vary greatly in different domains, it is important\\nto select models trained on a dataset similar to the test samples. 
A semantic syntax\\nis used for initializing the model weights in LayoutParser , using both the dataset\\nname and model name lp://<dataset-name>/<model-architecture-name> .'" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loader = PyPDFLoader(\"https://arxiv.org/pdf/2103.15348.pdf\", extract_images=True)\n", "pages = loader.load()\n", "pages[4].page_content" ] }, { "cell_type": "markdown", "id": "eaf6c92e-ad2f-4157-ad35-9a2dc4dd1b66", "metadata": {}, "source": [ "## Using PyMuPDF\n", "\n", "This is the fastest of the PDF parsing options, and contains detailed metadata about the PDF and its pages, as well as returns one document per page." ] }, { "cell_type": "code", "execution_count": null, "id": "1be9463c-e08b-432e-be46-dc41f6d0ec28", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PyMuPDFLoader\n", "\n", "loader = PyMuPDFLoader(\"example_data/layout-parser-paper.pdf\")\n", "data = loader.load()\n", "data[0]" ] }, { "cell_type": "markdown", "id": "7839a181-f042-4b30-a31f-4ae8631fba42", "metadata": {}, "source": [ "Additionally, you can pass along any of the options from the [PyMuPDF documentation](https://pymupdf.readthedocs.io/en/latest/app1.html#plain-text/) as keyword arguments in the `load` call, and it will be pass along to the `get_text()` call.\n", "\n", "## Using MathPix\n", "\n", "Inspired by Daniel Gross's [https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21](https://gist.github.com/danielgross/3ab4104e14faccc12b49200843adab21)" ] }, { "cell_type": "code", "execution_count": null, "id": "b5f17610-2b24-43a0-908b-8144a5a79916", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import MathpixPDFLoader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = MathpixPDFLoader(file_path)\n", "data = loader.load()" ] }, { "cell_type": "markdown", "id": "17c40629-09b8-42d0-a3de-3a43939c4cd8", "metadata": {}, "source": [ "## Using Unstructured\n", "\n", "[Unstructured](https://unstructured-io.github.io/unstructured/) supports a common interface for working with unstructured or semi-structured file formats, such as Markdown or PDF. LangChain's [UnstructuredPDFLoader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.pdf.UnstructuredPDFLoader.html) integrates with Unstructured to parse PDF documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html) objects." ] }, { "cell_type": "code", "execution_count": 12, "id": "c6a15bd3-aaa4-49dc-935a-f18617a7dbdd", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import UnstructuredPDFLoader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = UnstructuredPDFLoader(file_path)\n", "data = loader.load()" ] }, { "cell_type": "markdown", "id": "4263ba1f-4ccc-413c-9644-46a3ab3ae6fb", "metadata": {}, "source": [ "### Retain Elements\n", "\n", "Under the hood, Unstructured creates different \"elements\" for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying `mode=\"elements\"`." 
] }, { "cell_type": "code", "execution_count": 13, "id": "efd80620-0bb8-4298-ab3b-07d7ef9c0085", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(page_content='1 2 0 2', metadata={'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf', 'coordinates': {'points': ((16.34, 213.36), (16.34, 253.36), (36.34, 253.36), (36.34, 213.36)), 'system': 'PixelSpace', 'layout_width': 612, 'layout_height': 792}, 'file_directory': '../../../docs/integrations/document_loaders/example_data', 'filename': 'layout-parser-paper.pdf', 'languages': ['eng'], 'last_modified': '2024-03-18T13:22:22', 'page_number': 1, 'filetype': 'application/pdf', 'category': 'UncategorizedText'})" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = UnstructuredPDFLoader(file_path, mode=\"elements\")\n", "\n", "data = loader.load()\n", "data[0]" ] }, { "cell_type": "markdown", "id": "9b269d2a-2385-48a0-95c0-07202e1dff5f", "metadata": {}, "source": [ "See the full set of element types for this particular document:" ] }, { "cell_type": "code", "execution_count": 16, "id": "3c40d9e8-5bf7-466d-b2bb-ce2ae08bea35", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'ListItem', 'NarrativeText', 'Title', 'UncategorizedText'}" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "set(doc.metadata[\"category\"] for doc in data)" ] }, { "cell_type": "markdown", "id": "90fa9e65-6b00-456c-a0ee-23056f7dacdf", "metadata": {}, "source": [ "### Fetching remote PDFs using Unstructured\n", "\n", "This covers how to load online PDFs into a document format that we can use downstream. This can be used for various online PDF sites such as https://open.umn.edu/opentextbooks/textbooks/ and https://arxiv.org/archive/\n", "\n", "Note: all other PDF loaders can also be used to fetch remote PDFs, but `OnlinePDFLoader` is a legacy function, and works specifically with `UnstructuredPDFLoader`." 
] }, { "cell_type": "code", "execution_count": 18, "id": "54737607-072e-4eb9-aac8-6615472fefc1", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import OnlinePDFLoader\n", "\n", "loader = OnlinePDFLoader(\"https://arxiv.org/pdf/2302.03803.pdf\")\n", "data = loader.load()" ] }, { "cell_type": "markdown", "id": "2c7199f9-bbc5-4b03-873a-3d54c1bf4f68", "metadata": {}, "source": [ "## Using PyPDFium2" ] }, { "cell_type": "code", "execution_count": null, "id": "f209821b-1fe9-402b-adf7-d472c8a24939", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PyPDFium2Loader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = PyPDFium2Loader(file_path)\n", "data = loader.load()" ] }, { "cell_type": "markdown", "id": "885a8c0e-25e4-4f3b-bb84-9db3f2c9367d", "metadata": {}, "source": [ "## Using PDFMiner" ] }, { "cell_type": "code", "execution_count": null, "id": "4f465592-15be-4b8f-8f8c-0ffe207d0e4d", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PDFMinerLoader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = PDFMinerLoader(file_path)\n", "data = loader.load()" ] }, { "cell_type": "markdown", "id": "b9345c37-b0ba-4803-813c-f1c344a90a7c", "metadata": {}, "source": [ "### Using PDFMiner to generate HTML text\n", "\n", "This can be helpful for chunking texts semantically into sections as the output html content can be parsed via `BeautifulSoup` to get more structured and rich information about font size, page numbers, PDF headers/footers, etc." ] }, { "cell_type": "code", "execution_count": 19, "id": "2d39159e-61a5-4ac2-a6c2-3981c3aa6f4d", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PDFMinerPDFasHTMLLoader\n", "\n", "file_path = (\n", " \"../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf\"\n", ")\n", "loader = PDFMinerPDFasHTMLLoader(file_path)\n", "data = loader.load()[0]" ] }, { "cell_type": "code", "execution_count": 20, "id": "2f18fc1e-988f-4778-ab79-4fac739bec8f", "metadata": {}, "outputs": [], "source": [ "from bs4 import BeautifulSoup\n", "\n", "soup = BeautifulSoup(data.page_content, \"html.parser\")\n", "content = soup.find_all(\"div\")" ] }, { "cell_type": "code", "execution_count": 21, "id": "0b40f5bd-631e-4444-b79e-ef55e088807e", "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "cur_fs = None\n", "cur_text = \"\"\n", "snippets = [] # first collect all snippets that have the same font size\n", "for c in content:\n", " sp = c.find(\"span\")\n", " if not sp:\n", " continue\n", " st = sp.get(\"style\")\n", " if not st:\n", " continue\n", " fs = re.findall(\"font-size:(\\d+)px\", st)\n", " if not fs:\n", " continue\n", " fs = int(fs[0])\n", " if not cur_fs:\n", " cur_fs = fs\n", " if fs == cur_fs:\n", " cur_text += c.text\n", " else:\n", " snippets.append((cur_text, cur_fs))\n", " cur_fs = fs\n", " cur_text = c.text\n", "snippets.append((cur_text, cur_fs))\n", "# Note: The above logic is very straightforward. 
One can also add more strategies such as removing duplicate snippets (as\n", "# headers/footers in a PDF appear on multiple pages so if we find duplicates it's safe to assume that it is redundant info)" ] }, { "cell_type": "code", "execution_count": 22, "id": "953b168f-4ae1-4279-b370-c21961206c0a", "metadata": {}, "outputs": [], "source": [ "from langchain_core.documents import Document\n", "\n", "cur_idx = -1\n", "semantic_snippets = []\n", "# Assumption: headings have higher font size than their respective content\n", "for s in snippets:\n", " # if current snippet's font size > previous section's heading => it is a new heading\n", " if (\n", " not semantic_snippets\n", " or s[1] > semantic_snippets[cur_idx].metadata[\"heading_font\"]\n", " ):\n", " metadata = {\"heading\": s[0], \"content_font\": 0, \"heading_font\": s[1]}\n", " metadata.update(data.metadata)\n", " semantic_snippets.append(Document(page_content=\"\", metadata=metadata))\n", " cur_idx += 1\n", " continue\n", "\n", " # if current snippet's font size <= previous section's content => content belongs to the same section (one can also create\n", " # a tree like structure for sub sections if needed but that may require some more thinking and may be data specific)\n", " if (\n", " not semantic_snippets[cur_idx].metadata[\"content_font\"]\n", " or s[1] <= semantic_snippets[cur_idx].metadata[\"content_font\"]\n", " ):\n", " semantic_snippets[cur_idx].page_content += s[0]\n", " semantic_snippets[cur_idx].metadata[\"content_font\"] = max(\n", " s[1], semantic_snippets[cur_idx].metadata[\"content_font\"]\n", " )\n", " continue\n", "\n", " # if current snippet's font size > previous section's content but less than previous section's heading than also make a new\n", " # section (e.g. title of a PDF will have the highest font size but we don't want it to subsume all sections)\n", " metadata = {\"heading\": s[0], \"content_font\": 0, \"heading_font\": s[1]}\n", " metadata.update(data.metadata)\n", " semantic_snippets.append(Document(page_content=\"\", metadata=metadata))\n", " cur_idx += 1" ] }, { "cell_type": "code", "execution_count": 28, "id": "9bf28b73-dad4-4f51-9238-4af523fa7225", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Document(page_content='Recently, various DL models and datasets have been developed for layout analysis\\ntasks. The dhSegment [22] utilizes fully convolutional networks [20] for segmen-\\ntation tasks on historical documents. Object detection-based methods like Faster\\nR-CNN [28] and Mask R-CNN [12] are used for identifying document elements [38]\\nand detecting tables [30, 26]. Most recently, Graph Neural Networks [29] have also\\nbeen used in table detection [27]. However, these models are usually implemented\\nindividually and there is no unified framework to load and use such models.\\nThere has been a surge of interest in creating open-source tools for document\\nimage processing: a search of document image analysis in Github leads to 5M\\nrelevant code pieces 6; yet most of them rely on traditional rule-based methods\\nor provide limited functionalities. The closest prior research to our work is the\\nOCR-D project7, which also tries to build a complete toolkit for DIA. However,\\nsimilar to the platform developed by Neudecker et al. [21], it is designed for\\nanalyzing historical documents, and provides no supports for recent DL models.\\nThe DocumentLayoutAnalysis project8 focuses on processing born-digital PDF\\ndocuments via analyzing the stored PDF data. 
Repositories like DeepLayout9\\nand Detectron2-PubLayNet10 are individual deep learning models trained on\\nlayout analysis datasets without support for the full DIA pipeline. The Document\\nAnalysis and Exploitation (DAE) platform [15] and the DeepDIVA project [2]\\naim to improve the reproducibility of DIA methods (or DL models), yet they\\nare not actively maintained. OCR engines like Tesseract [14], easyOCR11 and\\npaddleOCR12 usually do not come with comprehensive functionalities for other\\nDIA tasks like layout analysis.\\nRecent years have also seen numerous efforts to create libraries for promoting\\nreproducibility and reusability in the field of DL. Libraries like Dectectron2 [35],\\n6 The number shown is obtained by specifying the search type as ‘code’.\\n7 https://ocr-d.de/en/about\\n8 https://github.com/BobLd/DocumentLayoutAnalysis\\n9 https://github.com/leonlulu/DeepLayout\\n10 https://github.com/hpanwar08/detectron2\\n11 https://github.com/JaidedAI/EasyOCR\\n12 https://github.com/PaddlePaddle/PaddleOCR\\n4\\nZ. Shen et al.\\nFig. 1: The overall architecture of LayoutParser. For an input document image,\\nthe core LayoutParser library provides a set of off-the-shelf tools for layout\\ndetection, OCR, visualization, and storage, backed by a carefully designed layout\\ndata structure. LayoutParser also supports high level customization via efficient\\nlayout annotation and model training functions. These improve model accuracy\\non the target samples. The community platform enables the easy sharing of DIA\\nmodels and whole digitization pipelines to promote reusability and reproducibility.\\nA collection of detailed documentation, tutorials and exemplar projects make\\nLayoutParser easy to learn and use.\\nAllenNLP [8] and transformers [34] have provided the community with complete\\nDL-based support for developing and deploying models for general computer\\nvision and natural language processing problems. LayoutParser, on the other\\nhand, specializes specifically in DIA tasks. LayoutParser is also equipped with a\\ncommunity platform inspired by established model hubs such as Torch Hub [23]\\nand TensorFlow Hub [1]. It enables the sharing of pretrained models as well as\\nfull document processing pipelines that are unique to DIA tasks.\\nThere have been a variety of document data collections to facilitate the\\ndevelopment of DL models. Some examples include PRImA [3](magazine layouts),\\nPubLayNet [38](academic paper layouts), Table Bank [18](tables in academic\\npapers), Newspaper Navigator Dataset [16, 17](newspaper figure layouts) and\\nHJDataset [31](historical Japanese document layouts). 
A spectrum of models\\ntrained on these datasets are currently available in the LayoutParser model zoo\\nto support different use cases.\\n', metadata={'heading': '2 Related Work\\n', 'content_font': 9, 'heading_font': 11, 'source': '../../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf'})" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "semantic_snippets[4]" ] }, { "cell_type": "markdown", "id": "e87d7447-c620-4f48-b4fd-8933a614e4e1", "metadata": {}, "source": [ "## PyPDF Directory\n", "\n", "Load PDFs from directory" ] }, { "cell_type": "code", "execution_count": 30, "id": "78e5a485-ff53-4b0c-ba5f-9f442079b529", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PyPDFDirectoryLoader" ] }, { "cell_type": "code", "execution_count": 31, "id": "51b2fe13-3755-4031-b7ce-84d9983db71c", "metadata": {}, "outputs": [], "source": [ "directory_path = \"../../../docs/integrations/document_loaders/example_data/\"\n", "loader = PyPDFDirectoryLoader(\"example_data/\")\n", "\n", "\n", "docs = loader.load()" ] }, { "cell_type": "markdown", "id": "78365a16-c011-4de1-8c32-873b88e7fead", "metadata": {}, "source": [ "## Using PDFPlumber\n", "\n", "Like PyMuPDF, the output Documents contain detailed metadata about the PDF and its pages, and returns one document per page." ] }, { "cell_type": "code", "execution_count": null, "id": "c8c1001b-48b1-4777-a34f-2fbdca5457df", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import PDFPlumberLoader\n", "\n", "data = loader.load()\n", "data[0]" ] }, { "cell_type": "markdown", "id": "94795ae5-161d-4d64-963c-dbcf1e60ca15", "metadata": {}, "source": [ "## Using AmazonTextractPDFParser\n", "\n", "The AmazonTextractPDFLoader calls the [Amazon Textract Service](https://aws.amazon.com/textract/) to convert PDFs into a Document structure. The loader does pure OCR at the moment, with more features like layout support planned, depending on demand. Single and multi-page documents are supported with up to 3000 pages and 512 MB of size.\n", "\n", "For the call to be successful an AWS account is required, similar to the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) requirements.\n", "\n", "Besides the AWS configuration, it is very similar to the other PDF loaders, while also supporting JPEG, PNG and TIFF and non-native PDF formats." ] }, { "cell_type": "code", "execution_count": null, "id": "5329e301-4bb6-4d51-aced-c9984ff6808a", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import AmazonTextractPDFLoader\n", "\n", "loader = AmazonTextractPDFLoader(\"example_data/alejandro_rosalez_sample-small.jpeg\")\n", "documents = loader.load()" ] }, { "cell_type": "markdown", "id": "e8291366-e2ec-4460-8e97-3fae3971986e", "metadata": {}, "source": [ "## Using AzureAIDocumentIntelligenceLoader\n", "\n", "[Azure AI Document Intelligence](https://aka.ms/doc-intelligence) (formerly known as `Azure Form Recognizer`) is machine-learning \n", "based service that extracts texts (including handwriting), tables, document structures (e.g., titles, section headings, etc.) and key-value-pairs from\n", "digital or scanned PDFs, images, Office and HTML files. 
Document Intelligence supports `PDF`, `JPEG/JPG`, `PNG`, `BMP`, `TIFF`, `HEIF`, `DOCX`, `XLSX`, `PPTX` and `HTML`.\n", "\n", "This [current implementation](https://aka.ms/di-langchain) of a loader using `Document Intelligence` can incorporate content page-wise and turn it into LangChain documents. The default output format is markdown, which can be easily chained with `MarkdownHeaderTextSplitter` for semantic document chunking. You can also use `mode=\"single\"` or `mode=\"page\"` to return pure texts in a single page or document split by page.\n", "\n", "### Prerequisite\n", "\n", "An Azure AI Document Intelligence resource in one of the 3 preview regions: **East US**, **West US2**, **West Europe** - follow [this document](https://learn.microsoft.com/azure/ai-services/document-intelligence/create-document-intelligence-resource?view=doc-intel-4.0.0) to create one if you don't have. You will be passing `<endpoint>` and `<key>` as parameters to the loader." ] }, { "cell_type": "code", "execution_count": null, "id": "12dfb5ff-ddd5-40a7-a5db-25d149d556ce", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-community azure-ai-documentintelligence" ] }, { "cell_type": "code", "execution_count": null, "id": "b06bd5d4-7093-4d12-8963-1eb41f82d21d", "metadata": {}, "outputs": [], "source": [ "from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader\n", "\n", "file_path = \"<filepath>\"\n", "endpoint = \"<endpoint>\"\n", "key = \"<key>\"\n", "loader = AzureAIDocumentIntelligenceLoader(\n", " api_endpoint=endpoint, api_key=key, file_path=file_path, api_model=\"prebuilt-layout\"\n", ")\n", "\n", "documents = loader.load()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
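The vector search example in this notebook indexes whole pages. For long pages it can help to split the page-level documents into smaller chunks before embedding them. This is a minimal sketch under the assumption that `pages` holds the documents returned by one of the loaders above; the chunk sizes are illustrative:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Break page-level documents into smaller, overlapping chunks before indexing.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(pages)
# `chunks` can then be passed to FAISS.from_documents(...) exactly as `pages` was above.
```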
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/dynamic_chain.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "50d57bf2-7104-4570-b3e5-90fd71e1bea1", "metadata": {}, "source": [ "# How to create a dynamic (self-constructing) chain\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [How to turn any function into a runnable](/docs/how_to/functions)\n", "\n", ":::\n", "\n", "Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambda's, which is that if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example.\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", "/>\n", "```" ] }, { "cell_type": "code", "execution_count": 4, "id": "406bffc2-86d0-4cb9-9262-5c1e3442397a", "metadata": {}, "outputs": [], "source": [ "# | echo: false\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")" ] }, { "cell_type": "code", "execution_count": 10, "id": "0ae6692b-983e-40b8-aa2a-6c078d945b9e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.\"" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import Runnable, RunnablePassthrough, chain\n", "\n", "contextualize_instructions = \"\"\"Convert the latest user question into a standalone question given the chat history. 
Don't answer the question, return the question and nothing else (no descriptive text).\"\"\"\n", "contextualize_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", contextualize_instructions),\n", " (\"placeholder\", \"{chat_history}\"),\n", " (\"human\", \"{question}\"),\n", " ]\n", ")\n", "contextualize_question = contextualize_prompt | llm | StrOutputParser()\n", "\n", "qa_instructions = (\n", " \"\"\"Answer the user question given the following context:\\n\\n{context}.\"\"\"\n", ")\n", "qa_prompt = ChatPromptTemplate.from_messages(\n", " [(\"system\", qa_instructions), (\"human\", \"{question}\")]\n", ")\n", "\n", "\n", "@chain\n", "def contextualize_if_needed(input_: dict) -> Runnable:\n", " if input_.get(\"chat_history\"):\n", " # NOTE: This is returning another Runnable, not an actual output.\n", " return contextualize_question\n", " else:\n", " return RunnablePassthrough()\n", "\n", "\n", "@chain\n", "def fake_retriever(input_: dict) -> str:\n", " return \"egypt's population in 2024 is about 111 million\"\n", "\n", "\n", "full_chain = (\n", " RunnablePassthrough.assign(question=contextualize_if_needed).assign(\n", " context=fake_retriever\n", " )\n", " | qa_prompt\n", " | llm\n", " | StrOutputParser()\n", ")\n", "\n", "full_chain.invoke(\n", " {\n", " \"question\": \"what about egypt\",\n", " \"chat_history\": [\n", " (\"human\", \"what's the population of indonesia\"),\n", " (\"ai\", \"about 276 million\"),\n", " ],\n", " }\n", ")" ] }, { "cell_type": "markdown", "id": "5076ddb4-4a99-47ad-b549-8ac27ca3e2c6", "metadata": {}, "source": [ "The key here is that `contextualize_if_needed` returns another Runnable and not an actual output. This returned Runnable is itself run when the full chain is executed.\n", "\n", "Looking at the trace we can see that, since we passed in chat_history, we executed the contextualize_question chain as part of the full chain: https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r" ] }, { "cell_type": "markdown", "id": "4fe6ca44-a643-4859-a290-be68403f51f0", "metadata": {}, "source": [ "Note that the streaming, batching, etc. capabilities of the returned Runnable are all preserved" ] }, { "cell_type": "code", "execution_count": 11, "id": "6def37fa-5105-4090-9b07-77cb488ecd9c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "What\n", " is\n", " the\n", " population\n", " of\n", " Egypt\n", "?\n" ] } ], "source": [ "for chunk in contextualize_if_needed.stream(\n", " {\n", " \"question\": \"what about egypt\",\n", " \"chat_history\": [\n", " (\"human\", \"what's the population of indonesia\"),\n", " (\"ai\", \"about 276 million\"),\n", " ],\n", " }\n", "):\n", " print(chunk)" ] } ], "metadata": { "kernelspec": { "display_name": "poetry-venv-2", "language": "python", "name": "poetry-venv-2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/embed_text.mdx
# Text embedding models

:::info
Head to [Integrations](/docs/integrations/text_embedding/) for documentation on built-in integrations with text embedding model providers.
:::

The Embeddings class is designed for interfacing with text embedding models. There are many embedding model providers (OpenAI, Cohere, Hugging Face, etc.), and this class provides a standard interface for all of them.

Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.

The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former, `.embed_documents`, takes as input multiple texts, while the latter, `.embed_query`, takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).

`.embed_query` will return a list of floats, whereas `.embed_documents` returns a list of lists of floats.

## Get started

### Setup

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs>
  <TabItem value="openai" label="OpenAI" default>

To start we'll need to install the OpenAI partner package:

```bash
pip install langchain-openai
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:

```bash
export OPENAI_API_KEY="..."
```

If you'd prefer not to set an environment variable you can pass the key in directly via the `api_key` named parameter when instantiating the `OpenAIEmbeddings` class:

```python
from langchain_openai import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings(api_key="...")
```

Otherwise you can initialize without any params:

```python
from langchain_openai import OpenAIEmbeddings

embeddings_model = OpenAIEmbeddings()
```

  </TabItem>
  <TabItem value="cohere" label="Cohere">

To start we'll need to install the Cohere SDK package:

```bash
pip install langchain-cohere
```

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://dashboard.cohere.com/api-keys). Once we have a key we'll want to set it as an environment variable by running:

```shell
export COHERE_API_KEY="..."
```

If you'd prefer not to set an environment variable you can pass the key in directly via the `cohere_api_key` named parameter when instantiating the `CohereEmbeddings` class:

```python
from langchain_cohere import CohereEmbeddings

embeddings_model = CohereEmbeddings(cohere_api_key="...")
```

Otherwise you can initialize without any params:

```python
from langchain_cohere import CohereEmbeddings

embeddings_model = CohereEmbeddings()
```

  </TabItem>
  <TabItem value="huggingface" label="Hugging Face">

To start we'll need to install the Hugging Face partner package:

```bash
pip install langchain-huggingface
```

You can then load any [Sentence Transformers model](https://huggingface.co/models?library=sentence-transformers) from the Hugging Face Hub:

```python
from langchain_huggingface import HuggingFaceEmbeddings

embeddings_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
```

You can also leave the `model_name` blank to use the default [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) model:

```python
from langchain_huggingface import HuggingFaceEmbeddings

embeddings_model = HuggingFaceEmbeddings()
```

  </TabItem>
</Tabs>

### `embed_documents`

#### Embed list of texts

Use `.embed_documents` to embed a list of strings, recovering a list of embeddings:

```python
embeddings = embeddings_model.embed_documents(
    [
        "Hi there!",
        "Oh, hello!",
        "What's your name?",
        "My friends call me World",
        "Hello World!"
    ]
)
len(embeddings), len(embeddings[0])
```

<CodeOutputBlock language="python">

```
(5, 1536)
```

</CodeOutputBlock>

### `embed_query`

#### Embed single query

Use `.embed_query` to embed a single piece of text (e.g., for the purpose of comparing it to other embedded pieces of text):

```python
embedded_query = embeddings_model.embed_query("What was the name mentioned in the conversation?")
embedded_query[:5]
```

<CodeOutputBlock language="python">

```
[0.0053587136790156364, -0.0004999046213924885, 0.038883671164512634, -0.003001077566295862, -0.00900818221271038]
```

</CodeOutputBlock>
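To make the semantic-search idea above concrete, here is a minimal sketch (not part of the `Embeddings` interface) that ranks the documents embedded earlier against the embedded query by cosine similarity, using NumPy and the `embeddings` and `embedded_query` variables from the examples above:

```python
import numpy as np


def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Score each embedded document against the embedded query and pick the best match.
scores = [cosine_similarity(embedded_query, e) for e in embeddings]
best = int(np.argmax(scores))
print(best, scores[best])
```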
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/ensemble_retriever.ipynb
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# How to combine results from multiple retrievers\n", "\n", "The [EnsembleRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.ensemble.EnsembleRetriever.html) supports ensembling of results from multiple retrievers. It is initialized with a list of [BaseRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_core.retrievers.BaseRetriever.html) objects. EnsembleRetrievers rerank the results of the constituent retrievers based on the [Reciprocal Rank Fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) algorithm.\n", "\n", "By leveraging the strengths of different algorithms, the `EnsembleRetriever` can achieve better performance than any single algorithm. \n", "\n", "The most common pattern is to combine a sparse retriever (like BM25) with a dense retriever (like embedding similarity), because their strengths are complementary. It is also known as \"hybrid search\". The sparse retriever is good at finding relevant documents based on keywords, while the dense retriever is good at finding relevant documents based on semantic similarity.\n", "\n", "## Basic usage\n", "\n", "Below we demonstrate ensembling of a [BM25Retriever](https://api.python.langchain.com/en/latest/retrievers/langchain_community.retrievers.bm25.BM25Retriever.html) with a retriever derived from the [FAISS vector store](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html)." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet rank_bm25 > /dev/null" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from langchain.retrievers import EnsembleRetriever\n", "from langchain_community.retrievers import BM25Retriever\n", "from langchain_community.vectorstores import FAISS\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "doc_list_1 = [\n", " \"I like apples\",\n", " \"I like oranges\",\n", " \"Apples and oranges are fruits\",\n", "]\n", "\n", "# initialize the bm25 retriever and faiss retriever\n", "bm25_retriever = BM25Retriever.from_texts(\n", " doc_list_1, metadatas=[{\"source\": 1}] * len(doc_list_1)\n", ")\n", "bm25_retriever.k = 2\n", "\n", "doc_list_2 = [\n", " \"You like apples\",\n", " \"You like oranges\",\n", "]\n", "\n", "embedding = OpenAIEmbeddings()\n", "faiss_vectorstore = FAISS.from_texts(\n", " doc_list_2, embedding, metadatas=[{\"source\": 2}] * len(doc_list_2)\n", ")\n", "faiss_retriever = faiss_vectorstore.as_retriever(search_kwargs={\"k\": 2})\n", "\n", "# initialize the ensemble retriever\n", "ensemble_retriever = EnsembleRetriever(\n", " retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='I like apples', metadata={'source': 1}),\n", " Document(page_content='You like apples', metadata={'source': 2}),\n", " Document(page_content='Apples and oranges are fruits', metadata={'source': 1}),\n", " Document(page_content='You like oranges', metadata={'source': 2})]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "docs = ensemble_retriever.invoke(\"apples\")\n", "docs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Runtime Configuration\n", "\n", "We can also configure the individual retrievers at 
runtime using [configurable fields](/docs/how_to/configure). Below we update the \"top-k\" parameter for the FAISS retriever specifically:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from langchain_core.runnables import ConfigurableField\n", "\n", "faiss_retriever = faiss_vectorstore.as_retriever(\n", " search_kwargs={\"k\": 2}\n", ").configurable_fields(\n", " search_kwargs=ConfigurableField(\n", " id=\"search_kwargs_faiss\",\n", " name=\"Search Kwargs\",\n", " description=\"The search kwargs to use\",\n", " )\n", ")\n", "\n", "ensemble_retriever = EnsembleRetriever(\n", " retrievers=[bm25_retriever, faiss_retriever], weights=[0.5, 0.5]\n", ")" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='I like apples', metadata={'source': 1}),\n", " Document(page_content='You like apples', metadata={'source': 2}),\n", " Document(page_content='Apples and oranges are fruits', metadata={'source': 1})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "config = {\"configurable\": {\"search_kwargs_faiss\": {\"k\": 1}}}\n", "docs = ensemble_retriever.invoke(\"apples\", config=config)\n", "docs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that this only returns one source from the FAISS retriever, because we pass in the relevant configuration at run time" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 4 }
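The notebook above relies on the library's built-in reranking, so the exact weighting is handled for you. Purely to illustrate the Reciprocal Rank Fusion idea it cites, here is a minimal, self-contained sketch that fuses two hypothetical ranked lists of document ids; the constant `c=60` follows the convention in the RRF paper, and none of these names come from the LangChain API:

```python
from collections import defaultdict


def reciprocal_rank_fusion(ranked_lists, weights=None, c=60):
    """Fuse ranked lists: each document earns weight / (c + rank) from every list."""
    weights = weights or [1.0] * len(ranked_lists)
    scores = defaultdict(float)
    for ranking, weight in zip(ranked_lists, weights):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += weight / (c + rank)
    # Return document ids ordered by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)


# Example: fuse a keyword-style ranking with a vector-similarity ranking.
print(reciprocal_rank_fusion([["d1", "d2", "d3"], ["d2", "d4", "d1"]], weights=[0.5, 0.5]))
```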
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/example_selectors.ipynb
{ "cells": [ { "cell_type": "raw", "id": "af408f61", "metadata": {}, "source": [ "---\n", "sidebar_position: 1\n", "---" ] }, { "cell_type": "markdown", "id": "1a65e4c9", "metadata": {}, "source": [ "# How to use example selectors\n", "\n", "If you have a large number of examples, you may need to select which ones to include in the prompt. The Example Selector is the class responsible for doing so.\n", "\n", "The base interface is defined as below:\n", "\n", "```python\n", "class BaseExampleSelector(ABC):\n", " \"\"\"Interface for selecting examples to include in prompts.\"\"\"\n", "\n", " @abstractmethod\n", " def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:\n", " \"\"\"Select which examples to use based on the inputs.\"\"\"\n", " \n", " @abstractmethod\n", " def add_example(self, example: Dict[str, str]) -> Any:\n", " \"\"\"Add new example to store.\"\"\"\n", "```\n", "\n", "The only method it needs to define is a ``select_examples`` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected.\n", "\n", "LangChain has a few different types of example selectors. For an overview of all these types, see the below table.\n", "\n", "In this guide, we will walk through creating a custom example selector." ] }, { "cell_type": "markdown", "id": "638e9039", "metadata": {}, "source": [ "## Examples\n", "\n", "In order to use an example selector, we need to create a list of examples. These should generally be example inputs and outputs. For this demo purpose, let's imagine we are selecting examples of how to translate English to Italian." ] }, { "cell_type": "code", "execution_count": 36, "id": "48658d53", "metadata": {}, "outputs": [], "source": [ "examples = [\n", " {\"input\": \"hi\", \"output\": \"ciao\"},\n", " {\"input\": \"bye\", \"output\": \"arrivederci\"},\n", " {\"input\": \"soccer\", \"output\": \"calcio\"},\n", "]" ] }, { "cell_type": "markdown", "id": "c2830b49", "metadata": {}, "source": [ "## Custom Example Selector\n", "\n", "Let's write an example selector that chooses what example to pick based on the length of the word." 
] }, { "cell_type": "code", "execution_count": 37, "id": "56b740a1", "metadata": {}, "outputs": [], "source": [ "from langchain_core.example_selectors.base import BaseExampleSelector\n", "\n", "\n", "class CustomExampleSelector(BaseExampleSelector):\n", " def __init__(self, examples):\n", " self.examples = examples\n", "\n", " def add_example(self, example):\n", " self.examples.append(example)\n", "\n", " def select_examples(self, input_variables):\n", " # This assumes knowledge that part of the input will be a 'text' key\n", " new_word = input_variables[\"input\"]\n", " new_word_length = len(new_word)\n", "\n", " # Initialize variables to store the best match and its length difference\n", " best_match = None\n", " smallest_diff = float(\"inf\")\n", "\n", " # Iterate through each example\n", " for example in self.examples:\n", " # Calculate the length difference with the first word of the example\n", " current_diff = abs(len(example[\"input\"]) - new_word_length)\n", "\n", " # Update the best match if the current one is closer in length\n", " if current_diff < smallest_diff:\n", " smallest_diff = current_diff\n", " best_match = example\n", "\n", " return [best_match]" ] }, { "cell_type": "code", "execution_count": 38, "id": "ce928187", "metadata": {}, "outputs": [], "source": [ "example_selector = CustomExampleSelector(examples)" ] }, { "cell_type": "code", "execution_count": 39, "id": "37ef3149", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'input': 'bye', 'output': 'arrivederci'}]" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example_selector.select_examples({\"input\": \"okay\"})" ] }, { "cell_type": "code", "execution_count": 40, "id": "c5ad9f35", "metadata": {}, "outputs": [], "source": [ "example_selector.add_example({\"input\": \"hand\", \"output\": \"mano\"})" ] }, { "cell_type": "code", "execution_count": 41, "id": "e4127fe0", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'input': 'hand', 'output': 'mano'}]" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example_selector.select_examples({\"input\": \"okay\"})" ] }, { "cell_type": "markdown", "id": "786c920c", "metadata": {}, "source": [ "## Use in a Prompt\n", "\n", "We can now use this example selector in a prompt" ] }, { "cell_type": "code", "execution_count": 42, "id": "619090e2", "metadata": {}, "outputs": [], "source": [ "from langchain_core.prompts.few_shot import FewShotPromptTemplate\n", "from langchain_core.prompts.prompt import PromptTemplate\n", "\n", "example_prompt = PromptTemplate.from_template(\"Input: {input} -> Output: {output}\")" ] }, { "cell_type": "code", "execution_count": 43, "id": "5934c415", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Translate the following words from English to Italian:\n", "\n", "Input: hand -> Output: mano\n", "\n", "Input: word -> Output:\n" ] } ], "source": [ "prompt = FewShotPromptTemplate(\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " suffix=\"Input: {input} -> Output:\",\n", " prefix=\"Translate the following words from English to Italian:\",\n", " input_variables=[\"input\"],\n", ")\n", "\n", "print(prompt.format(input=\"word\"))" ] }, { "cell_type": "markdown", "id": "e767f69d", "metadata": {}, "source": [ "## Example Selector Types\n", "\n", "| Name | Description |\n", 
"|------------|---------------------------------------------------------------------------------------------|\n", "| Similarity | Uses semantic similarity between inputs and examples to decide which examples to choose. |\n", "| MMR | Uses Max Marginal Relevance between inputs and examples to decide which examples to choose. |\n", "| Length | Selects examples based on how many can fit within a certain length |\n", "| Ngram | Uses ngram overlap between inputs and examples to decide which examples to choose. |" ] }, { "cell_type": "code", "execution_count": null, "id": "8a6e0abe", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/example_selectors_length_based.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "1036fdb2", "metadata": {}, "source": [ "# How to select examples by length\n", "\n", "This example selector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while for shorter inputs it will select more." ] }, { "cell_type": "code", "execution_count": 1, "id": "1bd45644", "metadata": {}, "outputs": [], "source": [ "from langchain_core.example_selectors import LengthBasedExampleSelector\n", "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n", "\n", "# Examples of a pretend task of creating antonyms.\n", "examples = [\n", " {\"input\": \"happy\", \"output\": \"sad\"},\n", " {\"input\": \"tall\", \"output\": \"short\"},\n", " {\"input\": \"energetic\", \"output\": \"lethargic\"},\n", " {\"input\": \"sunny\", \"output\": \"gloomy\"},\n", " {\"input\": \"windy\", \"output\": \"calm\"},\n", "]\n", "\n", "example_prompt = PromptTemplate(\n", " input_variables=[\"input\", \"output\"],\n", " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "example_selector = LengthBasedExampleSelector(\n", " # The examples it has available to choose from.\n", " examples=examples,\n", " # The PromptTemplate being used to format the examples.\n", " example_prompt=example_prompt,\n", " # The maximum length that the formatted examples should be.\n", " # Length is measured by the get_text_length function below.\n", " max_length=25,\n", " # The function used to get the length of a string, which is used\n", " # to determine which examples to include. It is commented out because\n", " # it is provided as a default value if none is specified.\n", " # get_text_length: Callable[[str], int] = lambda x: len(re.split(\"\\n| \", x))\n", ")\n", "dynamic_prompt = FewShotPromptTemplate(\n", " # We provide an ExampleSelector instead of examples.\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " prefix=\"Give the antonym of every input\",\n", " suffix=\"Input: {adjective}\\nOutput:\",\n", " input_variables=[\"adjective\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 3, "id": "f62c140b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: happy\n", "Output: sad\n", "\n", "Input: tall\n", "Output: short\n", "\n", "Input: energetic\n", "Output: lethargic\n", "\n", "Input: sunny\n", "Output: gloomy\n", "\n", "Input: windy\n", "Output: calm\n", "\n", "Input: big\n", "Output:\n" ] } ], "source": [ "# An example with small input, so it selects all examples.\n", "print(dynamic_prompt.format(adjective=\"big\"))" ] }, { "cell_type": "code", "execution_count": 4, "id": "3ca959eb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: happy\n", "Output: sad\n", "\n", "Input: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\n", "Output:\n" ] } ], "source": [ "# An example with long input, so it selects only one example.\n", "long_string = \"big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else\"\n", "print(dynamic_prompt.format(adjective=long_string))" ] }, { "cell_type": "code", "execution_count": 5, "id": "da43f9a7", "metadata": {}, "outputs": [ { "name": "stdout", 
"output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: happy\n", "Output: sad\n", "\n", "Input: tall\n", "Output: short\n", "\n", "Input: energetic\n", "Output: lethargic\n", "\n", "Input: sunny\n", "Output: gloomy\n", "\n", "Input: windy\n", "Output: calm\n", "\n", "Input: big\n", "Output: small\n", "\n", "Input: enthusiastic\n", "Output:\n" ] } ], "source": [ "# You can add an example to an example selector as well.\n", "new_example = {\"input\": \"big\", \"output\": \"small\"}\n", "dynamic_prompt.example_selector.add_example(new_example)\n", "print(dynamic_prompt.format(adjective=\"enthusiastic\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "be3cf8aa", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/example_selectors_mmr.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "bc35afd0", "metadata": {}, "source": [ "# How to select examples by maximal marginal relevance (MMR)\n", "\n", "The `MaxMarginalRelevanceExampleSelector` selects examples based on a combination of which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs, and then iteratively adding them while penalizing them for closeness to already selected examples.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "ac95c968", "metadata": {}, "outputs": [], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.example_selectors import (\n", " MaxMarginalRelevanceExampleSelector,\n", " SemanticSimilarityExampleSelector,\n", ")\n", "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "example_prompt = PromptTemplate(\n", " input_variables=[\"input\", \"output\"],\n", " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "\n", "# Examples of a pretend task of creating antonyms.\n", "examples = [\n", " {\"input\": \"happy\", \"output\": \"sad\"},\n", " {\"input\": \"tall\", \"output\": \"short\"},\n", " {\"input\": \"energetic\", \"output\": \"lethargic\"},\n", " {\"input\": \"sunny\", \"output\": \"gloomy\"},\n", " {\"input\": \"windy\", \"output\": \"calm\"},\n", "]" ] }, { "cell_type": "code", "execution_count": 2, "id": "db579bea", "metadata": {}, "outputs": [], "source": [ "example_selector = MaxMarginalRelevanceExampleSelector.from_examples(\n", " # The list of examples available to select from.\n", " examples,\n", " # The embedding class used to produce embeddings which are used to measure semantic similarity.\n", " OpenAIEmbeddings(),\n", " # The VectorStore class that is used to store the embeddings and do a similarity search over.\n", " FAISS,\n", " # The number of examples to produce.\n", " k=2,\n", ")\n", "mmr_prompt = FewShotPromptTemplate(\n", " # We provide an ExampleSelector instead of examples.\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " prefix=\"Give the antonym of every input\",\n", " suffix=\"Input: {adjective}\\nOutput:\",\n", " input_variables=[\"adjective\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 3, "id": "cd76e344", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: happy\n", "Output: sad\n", "\n", "Input: windy\n", "Output: calm\n", "\n", "Input: worried\n", "Output:\n" ] } ], "source": [ "# Input is a feeling, so should select the happy/sad example as the first one\n", "print(mmr_prompt.format(adjective=\"worried\"))" ] }, { "cell_type": "code", "execution_count": 4, "id": "cf82956b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: happy\n", "Output: sad\n", "\n", "Input: sunny\n", "Output: gloomy\n", "\n", "Input: worried\n", "Output:\n" ] } ], "source": [ "# Let's compare this to what we would just get if we went solely off of similarity,\n", "# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.\n", "example_selector = SemanticSimilarityExampleSelector.from_examples(\n", " # The list of examples available to select from.\n", " examples,\n", " # The embedding class used to produce embeddings which are 
used to measure semantic similarity.\n", " OpenAIEmbeddings(),\n", " # The VectorStore class that is used to store the embeddings and do a similarity search over.\n", " FAISS,\n", " # The number of examples to produce.\n", " k=2,\n", ")\n", "similar_prompt = FewShotPromptTemplate(\n", " # We provide an ExampleSelector instead of examples.\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " prefix=\"Give the antonym of every input\",\n", " suffix=\"Input: {adjective}\\nOutput:\",\n", " input_variables=[\"adjective\"],\n", ")\n", "print(similar_prompt.format(adjective=\"worried\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "39f30097", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
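The notebook above describes the MMR criterion only in prose. Below is a minimal, self-contained sketch of that criterion for intuition; it is my own illustrative code, not LangChain's internal implementation, and the helper names (`cosine`, `mmr_select`, `lambda_mult`) plus the toy 2-D vectors are assumptions for the example.

```python
# Illustrative sketch of the MMR selection criterion (not the library's internal code).
from math import sqrt
from typing import List


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0


def mmr_select(query_vec, candidate_vecs, k=2, lambda_mult=0.5):
    """Greedily pick k candidates, trading off relevance to the query against redundancy."""
    selected: List[int] = []
    while len(selected) < min(k, len(candidate_vecs)):
        best_idx, best_score = None, float("-inf")
        for i, vec in enumerate(candidate_vecs):
            if i in selected:
                continue
            relevance = cosine(query_vec, vec)
            # Penalty: similarity to the closest already-selected candidate.
            redundancy = max(
                (cosine(vec, candidate_vecs[j]) for j in selected), default=0.0
            )
            score = lambda_mult * relevance - (1 - lambda_mult) * redundancy
            if score > best_score:
                best_idx, best_score = i, score
        selected.append(best_idx)
    return selected


# Toy 2-D "embeddings": the first two candidates are identical, the third is different.
# MMR picks the most relevant vector first, then skips its duplicate in favour of the
# more diverse one, returning [0, 2].
print(mmr_select([1.0, 0.0], [[0.9, 0.44], [0.9, 0.44], [0.85, -0.53]], k=2))
```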
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/example_selectors_ngram.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "4aaeed2f", "metadata": {}, "source": [ "# How to select examples by n-gram overlap\n", "\n", "The `NGramOverlapExampleSelector` selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. \n", "\n", "The selector allows for a threshold score to be set. Examples with an ngram overlap score less than or equal to the threshold are excluded. The threshold is set to -1.0, by default, so will not exclude any examples, only reorder them. Setting the threshold to 0.0 will exclude examples that have no ngram overlaps with the input.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "9cbc0acc", "metadata": {}, "outputs": [], "source": [ "from langchain_community.example_selectors import NGramOverlapExampleSelector\n", "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n", "\n", "example_prompt = PromptTemplate(\n", " input_variables=[\"input\", \"output\"],\n", " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "\n", "# Examples of a fictional translation task.\n", "examples = [\n", " {\"input\": \"See Spot run.\", \"output\": \"Ver correr a Spot.\"},\n", " {\"input\": \"My dog barks.\", \"output\": \"Mi perro ladra.\"},\n", " {\"input\": \"Spot can run.\", \"output\": \"Spot puede correr.\"},\n", "]" ] }, { "cell_type": "code", "execution_count": 3, "id": "bf75e0fe", "metadata": {}, "outputs": [], "source": [ "example_selector = NGramOverlapExampleSelector(\n", " # The examples it has available to choose from.\n", " examples=examples,\n", " # The PromptTemplate being used to format the examples.\n", " example_prompt=example_prompt,\n", " # The threshold, at which selector stops.\n", " # It is set to -1.0 by default.\n", " threshold=-1.0,\n", " # For negative threshold:\n", " # Selector sorts examples by ngram overlap score, and excludes none.\n", " # For threshold greater than 1.0:\n", " # Selector excludes all examples, and returns an empty list.\n", " # For threshold equal to 0.0:\n", " # Selector sorts examples by ngram overlap score,\n", " # and excludes those with no ngram overlap with input.\n", ")\n", "dynamic_prompt = FewShotPromptTemplate(\n", " # We provide an ExampleSelector instead of examples.\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " prefix=\"Give the Spanish translation of every input\",\n", " suffix=\"Input: {sentence}\\nOutput:\",\n", " input_variables=[\"sentence\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "id": "83fb218a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the Spanish translation of every input\n", "\n", "Input: Spot can run.\n", "Output: Spot puede correr.\n", "\n", "Input: See Spot run.\n", "Output: Ver correr a Spot.\n", "\n", "Input: My dog barks.\n", "Output: Mi perro ladra.\n", "\n", "Input: Spot can run fast.\n", "Output:\n" ] } ], "source": [ "# An example input with large ngram overlap with \"Spot can run.\"\n", "# and no overlap with \"My dog barks.\"\n", "print(dynamic_prompt.format(sentence=\"Spot can run fast.\"))" ] }, { "cell_type": "code", "execution_count": 5, "id": "485f5307", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the Spanish translation of every input\n", "\n", "Input: Spot can run.\n", "Output: Spot puede correr.\n", "\n", "Input: See Spot run.\n", "Output: Ver correr a Spot.\n", 
"\n", "Input: Spot plays fetch.\n", "Output: Spot juega a buscar.\n", "\n", "Input: My dog barks.\n", "Output: Mi perro ladra.\n", "\n", "Input: Spot can run fast.\n", "Output:\n" ] } ], "source": [ "# You can add examples to NGramOverlapExampleSelector as well.\n", "new_example = {\"input\": \"Spot plays fetch.\", \"output\": \"Spot juega a buscar.\"}\n", "\n", "example_selector.add_example(new_example)\n", "print(dynamic_prompt.format(sentence=\"Spot can run fast.\"))" ] }, { "cell_type": "code", "execution_count": 6, "id": "606ce697", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the Spanish translation of every input\n", "\n", "Input: Spot can run.\n", "Output: Spot puede correr.\n", "\n", "Input: See Spot run.\n", "Output: Ver correr a Spot.\n", "\n", "Input: Spot plays fetch.\n", "Output: Spot juega a buscar.\n", "\n", "Input: Spot can run fast.\n", "Output:\n" ] } ], "source": [ "# You can set a threshold at which examples are excluded.\n", "# For example, setting threshold equal to 0.0\n", "# excludes examples with no ngram overlaps with input.\n", "# Since \"My dog barks.\" has no ngram overlaps with \"Spot can run fast.\"\n", "# it is excluded.\n", "example_selector.threshold = 0.0\n", "print(dynamic_prompt.format(sentence=\"Spot can run fast.\"))" ] }, { "cell_type": "code", "execution_count": 7, "id": "7f8d72f7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the Spanish translation of every input\n", "\n", "Input: Spot can run.\n", "Output: Spot puede correr.\n", "\n", "Input: Spot plays fetch.\n", "Output: Spot juega a buscar.\n", "\n", "Input: Spot can play fetch.\n", "Output:\n" ] } ], "source": [ "# Setting small nonzero threshold\n", "example_selector.threshold = 0.09\n", "print(dynamic_prompt.format(sentence=\"Spot can play fetch.\"))" ] }, { "cell_type": "code", "execution_count": 8, "id": "09633aa8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the Spanish translation of every input\n", "\n", "Input: Spot can play fetch.\n", "Output:\n" ] } ], "source": [ "# Setting threshold greater than 1.0\n", "example_selector.threshold = 1.0 + 1e-9\n", "print(dynamic_prompt.format(sentence=\"Spot can play fetch.\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "39f30097", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
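The notebook above defines the overlap score only abstractly (a float between 0.0 and 1.0). The toy function below shows what a word-level n-gram overlap score can look like; it is illustrative only and is not the exact formula `NGramOverlapExampleSelector` computes internally.

```python
import re


def ngram_overlap(a: str, b: str, n: int = 2) -> float:
    """Toy score: fraction of `a`'s word n-grams that also occur in `b` (0.0 to 1.0)."""

    def ngrams(text: str) -> set:
        words = re.findall(r"[a-z']+", text.lower())
        return {tuple(words[i : i + n]) for i in range(len(words) - n + 1)}

    a_grams, b_grams = ngrams(a), ngrams(b)
    if not a_grams:
        return 0.0
    return len(a_grams & b_grams) / len(a_grams)


# Shares the bigrams ("spot", "can") and ("can", "run") -> 2/3
print(ngram_overlap("Spot can run fast.", "Spot can run."))
# No shared bigrams -> 0.0, which is why "My dog barks." is dropped once threshold = 0.0
print(ngram_overlap("Spot can run fast.", "My dog barks."))
```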
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/example_selectors_similarity.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "8c1e7149", "metadata": {}, "source": [ "# How to select examples by similarity\n", "\n", "This object selects examples based on similarity to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "abc30764", "metadata": {}, "outputs": [], "source": [ "from langchain_chroma import Chroma\n", "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n", "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "example_prompt = PromptTemplate(\n", " input_variables=[\"input\", \"output\"],\n", " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "\n", "# Examples of a pretend task of creating antonyms.\n", "examples = [\n", " {\"input\": \"happy\", \"output\": \"sad\"},\n", " {\"input\": \"tall\", \"output\": \"short\"},\n", " {\"input\": \"energetic\", \"output\": \"lethargic\"},\n", " {\"input\": \"sunny\", \"output\": \"gloomy\"},\n", " {\"input\": \"windy\", \"output\": \"calm\"},\n", "]" ] }, { "cell_type": "code", "execution_count": 2, "id": "8a37fc84", "metadata": {}, "outputs": [], "source": [ "example_selector = SemanticSimilarityExampleSelector.from_examples(\n", " # The list of examples available to select from.\n", " examples,\n", " # The embedding class used to produce embeddings which are used to measure semantic similarity.\n", " OpenAIEmbeddings(),\n", " # The VectorStore class that is used to store the embeddings and do a similarity search over.\n", " Chroma,\n", " # The number of examples to produce.\n", " k=1,\n", ")\n", "similar_prompt = FewShotPromptTemplate(\n", " # We provide an ExampleSelector instead of examples.\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " prefix=\"Give the antonym of every input\",\n", " suffix=\"Input: {adjective}\\nOutput:\",\n", " input_variables=[\"adjective\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 3, "id": "eabd2020", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: happy\n", "Output: sad\n", "\n", "Input: worried\n", "Output:\n" ] } ], "source": [ "# Input is a feeling, so should select the happy/sad example\n", "print(similar_prompt.format(adjective=\"worried\"))" ] }, { "cell_type": "code", "execution_count": 4, "id": "c02225a8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: tall\n", "Output: short\n", "\n", "Input: large\n", "Output:\n" ] } ], "source": [ "# Input is a measurement, so should select the tall/short example\n", "print(similar_prompt.format(adjective=\"large\"))" ] }, { "cell_type": "code", "execution_count": 5, "id": "09836c64", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Give the antonym of every input\n", "\n", "Input: enthusiastic\n", "Output: apathetic\n", "\n", "Input: passionate\n", "Output:\n" ] } ], "source": [ "# You can add new examples to the SemanticSimilarityExampleSelector as well\n", "similar_prompt.example_selector.add_example(\n", " {\"input\": \"enthusiastic\", \"output\": \"apathetic\"}\n", ")\n", "print(similar_prompt.format(adjective=\"passionate\"))" ] }, { "cell_type": "code", "execution_count": null, "id": "92e2c85f", "metadata": {}, "outputs": [], 
"source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
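As a small optional check on the notebook above, you can also call the selector directly to see which example it would pick, independent of the `FewShotPromptTemplate`. This sketch assumes the `example_selector` built there; the exact result depends on the embeddings, so the printed value is only indicative.

```python
# Inspect the raw selection (k=1) for a given input.
selected = example_selector.select_examples({"adjective": "worried"})
print(selected)  # likely [{'input': 'happy', 'output': 'sad'}]
```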
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/extraction_examples.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "70403d4f-50c1-43f8-a7ea-a211167649a5", "metadata": {}, "source": [ "# How to use reference examples when doing extraction\n", "\n", "The quality of extractions can often be improved by providing reference examples to the LLM.\n", "\n", "Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts#functiontool-calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n", "\n", ":::{.callout-tip}\n", "While this guide focuses how to use examples with a tool calling model, this technique is generally applicable, and will work\n", "also with JSON more or prompt based techniques.\n", ":::\n", "\n", "LangChain implements a [tool-call attribute](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.tool_calls) on messages from LLMs that include tool calls. See our [how-to guide on tool calling](/docs/how_to/tool_calling) for more detail. To build reference examples for data extraction, we build a chat history containing a sequence of: \n", "\n", "- [HumanMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.human.HumanMessage.html) containing example inputs;\n", "- [AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) containing example tool calls;\n", "- [ToolMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolMessage.html) containing example tool outputs.\n", "\n", "LangChain adopts this convention for structuring tool calls into conversation across LLM model providers.\n", "\n", "First we build a prompt template that includes a placeholder for these messages:" ] }, { "cell_type": "code", "execution_count": 2, "id": "89579144-bcb3-490a-8036-86a0a6bcd56b", "metadata": {}, "outputs": [], "source": [ "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "\n", "# Define a custom prompt to provide instructions and any additional context.\n", "# 1) You can add examples into the prompt template to improve extraction quality\n", "# 2) Introduce additional parameters to take context into account (e.g., include metadata\n", "# about the document from which the text was extracted.)\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are an expert extraction algorithm. \"\n", " \"Only extract relevant information from the text. \"\n", " \"If you do not know the value of an attribute asked \"\n", " \"to extract, return null for the attribute's value.\",\n", " ),\n", " # ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓\n", " MessagesPlaceholder(\"examples\"), # <-- EXAMPLES!\n", " # ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑\n", " (\"human\", \"{text}\"),\n", " ]\n", ")" ] }, { "cell_type": "markdown", "id": "2484008c-ba1a-42a5-87a1-628a900de7fd", "metadata": {}, "source": [ "Test out the template:" ] }, { "cell_type": "code", "execution_count": 3, "id": "610c3025-ea63-4cd7-88bd-c8cbcb4d8a3f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "ChatPromptValue(messages=[SystemMessage(content=\"You are an expert extraction algorithm. Only extract relevant information from the text. 
If you do not know the value of an attribute asked to extract, return null for the attribute's value.\"), HumanMessage(content='testing 1 2 3'), HumanMessage(content='this is some text')])" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import (\n", " HumanMessage,\n", ")\n", "\n", "prompt.invoke(\n", " {\"text\": \"this is some text\", \"examples\": [HumanMessage(content=\"testing 1 2 3\")]}\n", ")" ] }, { "cell_type": "markdown", "id": "368abd80-0cf0-41a7-8224-acf90dd6830d", "metadata": {}, "source": [ "## Define the schema\n", "\n", "Let's re-use the person schema from the [extraction tutorial](/docs/tutorials/extraction)." ] }, { "cell_type": "code", "execution_count": 4, "id": "d875a49a-d2cb-4b9e-b5bf-41073bc3905c", "metadata": {}, "outputs": [], "source": [ "from typing import List, Optional\n", "\n", "from langchain_core.pydantic_v1 import BaseModel, Field\n", "from langchain_openai import ChatOpenAI\n", "\n", "\n", "class Person(BaseModel):\n", " \"\"\"Information about a person.\"\"\"\n", "\n", " # ^ Doc-string for the entity Person.\n", " # This doc-string is sent to the LLM as the description of the schema Person,\n", " # and it can help to improve extraction results.\n", "\n", " # Note that:\n", " # 1. Each field is an `optional` -- this allows the model to decline to extract it!\n", " # 2. Each field has a `description` -- this description is used by the LLM.\n", " # Having a good description can help improve extraction results.\n", " name: Optional[str] = Field(..., description=\"The name of the person\")\n", " hair_color: Optional[str] = Field(\n", " ..., description=\"The color of the person's hair if known\"\n", " )\n", " height_in_meters: Optional[str] = Field(..., description=\"Height in METERs\")\n", "\n", "\n", "class Data(BaseModel):\n", " \"\"\"Extracted data about people.\"\"\"\n", "\n", " # Creates a model so that we can extract multiple entities.\n", " people: List[Person]" ] }, { "cell_type": "markdown", "id": "96c42162-e4f6-4461-88fd-c76f5aab7e32", "metadata": {}, "source": [ "## Define reference examples\n", "\n", "Examples can be defined as a list of input-output pairs. 
\n", "\n", "Each example contains an example `input` text and an example `output` showing what should be extracted from the text.\n", "\n", ":::{.callout-important}\n", "This is a bit in the weeds, so feel free to skip.\n", "\n", "The format of the example needs to match the API used (e.g., tool calling or JSON mode etc.).\n", "\n", "Here, the formatted examples will match the format expected for the tool calling API since that's what we're using.\n", ":::" ] }, { "cell_type": "code", "execution_count": 5, "id": "08356810-77ce-4e68-99d9-faa0326f2cee", "metadata": {}, "outputs": [], "source": [ "import uuid\n", "from typing import Dict, List, TypedDict\n", "\n", "from langchain_core.messages import (\n", " AIMessage,\n", " BaseMessage,\n", " HumanMessage,\n", " SystemMessage,\n", " ToolMessage,\n", ")\n", "from langchain_core.pydantic_v1 import BaseModel, Field\n", "\n", "\n", "class Example(TypedDict):\n", " \"\"\"A representation of an example consisting of text input and expected tool calls.\n", "\n", " For extraction, the tool calls are represented as instances of pydantic model.\n", " \"\"\"\n", "\n", " input: str # This is the example text\n", " tool_calls: List[BaseModel] # Instances of pydantic model that should be extracted\n", "\n", "\n", "def tool_example_to_messages(example: Example) -> List[BaseMessage]:\n", " \"\"\"Convert an example into a list of messages that can be fed into an LLM.\n", "\n", " This code is an adapter that converts our example to a list of messages\n", " that can be fed into a chat model.\n", "\n", " The list of messages per example corresponds to:\n", "\n", " 1) HumanMessage: contains the content from which content should be extracted.\n", " 2) AIMessage: contains the extracted information from the model\n", " 3) ToolMessage: contains confirmation to the model that the model requested a tool correctly.\n", "\n", " The ToolMessage is required because some of the chat models are hyper-optimized for agents\n", " rather than for an extraction use case.\n", " \"\"\"\n", " messages: List[BaseMessage] = [HumanMessage(content=example[\"input\"])]\n", " tool_calls = []\n", " for tool_call in example[\"tool_calls\"]:\n", " tool_calls.append(\n", " {\n", " \"id\": str(uuid.uuid4()),\n", " \"args\": tool_call.dict(),\n", " # The name of the function right now corresponds\n", " # to the name of the pydantic model\n", " # This is implicit in the API right now,\n", " # and will be improved over time.\n", " \"name\": tool_call.__class__.__name__,\n", " },\n", " )\n", " messages.append(AIMessage(content=\"\", tool_calls=tool_calls))\n", " tool_outputs = example.get(\"tool_outputs\") or [\n", " \"You have correctly called this tool.\"\n", " ] * len(tool_calls)\n", " for output, tool_call in zip(tool_outputs, tool_calls):\n", " messages.append(ToolMessage(content=output, tool_call_id=tool_call[\"id\"]))\n", " return messages" ] }, { "cell_type": "markdown", "id": "463aa282-51c4-42bf-9463-6ca3b2c08de6", "metadata": {}, "source": [ "Next let's define our examples and then convert them into message format." ] }, { "cell_type": "code", "execution_count": 6, "id": "7f59a745-5c81-4011-a4c5-a33ec1eca7ef", "metadata": {}, "outputs": [], "source": [ "examples = [\n", " (\n", " \"The ocean is vast and blue. It's more than 20,000 feet deep. 
There are many fish in it.\",\n", " Person(name=None, height_in_meters=None, hair_color=None),\n", " ),\n", " (\n", " \"Fiona traveled far from France to Spain.\",\n", " Person(name=\"Fiona\", height_in_meters=None, hair_color=None),\n", " ),\n", "]\n", "\n", "\n", "messages = []\n", "\n", "for text, tool_call in examples:\n", " messages.extend(\n", " tool_example_to_messages({\"input\": text, \"tool_calls\": [tool_call]})\n", " )" ] }, { "cell_type": "markdown", "id": "6fdbda30-e7e3-46b5-a54a-1769c580af93", "metadata": {}, "source": [ "Let's test out the prompt" ] }, { "cell_type": "code", "execution_count": 7, "id": "976bb7b8-09c4-4a3e-80df-49a483705c08", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "system: content=\"You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value.\"\n", "human: content=\"The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.\"\n", "ai: content='' tool_calls=[{'name': 'Person', 'args': {'name': None, 'hair_color': None, 'height_in_meters': None}, 'id': 'b843ba77-4c9c-48ef-92a4-54e534f24521'}]\n", "tool: content='You have correctly called this tool.' tool_call_id='b843ba77-4c9c-48ef-92a4-54e534f24521'\n", "human: content='Fiona traveled far from France to Spain.'\n", "ai: content='' tool_calls=[{'name': 'Person', 'args': {'name': 'Fiona', 'hair_color': None, 'height_in_meters': None}, 'id': '46f00d6b-50e5-4482-9406-b07bb10340f6'}]\n", "tool: content='You have correctly called this tool.' tool_call_id='46f00d6b-50e5-4482-9406-b07bb10340f6'\n", "human: content='this is some text'\n" ] } ], "source": [ "example_prompt = prompt.invoke({\"text\": \"this is some text\", \"examples\": messages})\n", "\n", "for message in example_prompt.messages:\n", " print(f\"{message.type}: {message}\")" ] }, { "cell_type": "markdown", "id": "47b0bbef-bc6b-4535-a8e2-5c84f09d5637", "metadata": {}, "source": [ "## Create an extractor\n", "\n", "Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See [this table](/docs/integrations/chat) for available LLMs.\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", " openaiParams={`model=\"gpt-4-0125-preview\", temperature=0`}\n", "/>\n", "```" ] }, { "cell_type": "code", "execution_count": 8, "id": "df2e1ee1-69e8-4c4d-b349-95f2e320317b", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4-0125-preview\", temperature=0)" ] }, { "cell_type": "markdown", "id": "ef21e8cb-c4df-4e12-9be7-37ac9d291d42", "metadata": {}, "source": [ "Following the [extraction tutorial](/docs/tutorials/extraction), we use the `.with_structured_output` method to structure model outputs according to the desired schema:" ] }, { "cell_type": "code", "execution_count": 9, "id": "dbfea43d-769b-42e9-a76f-ce722f7d6f93", "metadata": {}, "outputs": [], "source": [ "runnable = prompt | llm.with_structured_output(\n", " schema=Data,\n", " method=\"function_calling\",\n", " include_raw=False,\n", ")" ] }, { "cell_type": "markdown", "id": "58a8139e-f201-4b8e-baf0-16a83e5fa987", "metadata": {}, "source": [ "## Without examples 😿\n", "\n", "Notice that even capable models can fail with a **very simple** test case!" 
] }, { "cell_type": "code", "execution_count": 10, "id": "66545cab-af2a-40a4-9dc9-b4110458b7d3", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "people=[Person(name='earth', hair_color='null', height_in_meters='null')]\n", "people=[Person(name='earth', hair_color='null', height_in_meters='null')]\n", "people=[]\n", "people=[Person(name='earth', hair_color='null', height_in_meters='null')]\n", "people=[]\n" ] } ], "source": [ "for _ in range(5):\n", " text = \"The solar system is large, but earth has only 1 moon.\"\n", " print(runnable.invoke({\"text\": text, \"examples\": []}))" ] }, { "cell_type": "markdown", "id": "09840f17-ab26-4ea2-8a39-c747103804ec", "metadata": {}, "source": [ "## With examples 😻\n", "\n", "Reference examples helps to fix the failure!" ] }, { "cell_type": "code", "execution_count": 11, "id": "1c09d805-ec16-4123-aef9-6a5b59499b5c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "people=[]\n", "people=[]\n", "people=[]\n", "people=[]\n", "people=[]\n" ] } ], "source": [ "for _ in range(5):\n", " text = \"The solar system is large, but earth has only 1 moon.\"\n", " print(runnable.invoke({\"text\": text, \"examples\": messages}))" ] }, { "cell_type": "markdown", "id": "3855cad5-dfee-4b42-ad35-b28d4d98902e", "metadata": {}, "source": [ "Note that we can see the few-shot examples as tool-calls in the [Langsmith trace](https://smith.langchain.com/public/4c436bc2-a1ce-440b-82f5-093947542e40/r).\n", "\n", "And we retain performance on a positive sample:" ] }, { "cell_type": "code", "execution_count": 12, "id": "a9b7a762-1b75-4f9f-b9d9-6732dd05802c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Data(people=[Person(name='Harrison', hair_color='black', height_in_meters=None)])" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "runnable.invoke(\n", " {\n", " \"text\": \"My name is Harrison. My hair is black.\",\n", " \"examples\": messages,\n", " }\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
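Because the few-shot history in the notebook above must follow the Human → AI (tool call) → Tool pattern it describes, a small sanity check can catch structural mistakes before the messages reach the model. This is an optional sketch assuming the `messages` list built there, where each example produced exactly one tool call.

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage


def check_example_messages(messages) -> None:
    """Assert that the few-shot history alternates Human -> AI(tool call) -> Tool."""
    assert len(messages) % 3 == 0, "expected triples of (Human, AI, Tool) messages"
    for i in range(0, len(messages), 3):
        human, ai, tool = messages[i : i + 3]
        assert isinstance(human, HumanMessage)
        assert isinstance(ai, AIMessage) and ai.tool_calls, "AI message should carry a tool call"
        assert isinstance(tool, ToolMessage)
        # Each ToolMessage must answer the tool call made by the preceding AI message.
        assert tool.tool_call_id == ai.tool_calls[0]["id"]


check_example_messages(messages)
```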
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/extraction_long_text.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "9e161a8a-fcf0-4d55-933e-da271ce28d7e", "metadata": {}, "source": [ "# How to handle long text when doing extraction\n", "\n", "When working with files, like PDFs, you're likely to encounter text that exceeds your language model's context window. To process this text, consider these strategies:\n", "\n", "1. **Change LLM** Choose a different LLM that supports a larger context window.\n", "2. **Brute Force** Chunk the document, and extract content from each chunk.\n", "3. **RAG** Chunk the document, index the chunks, and only extract content from a subset of chunks that look \"relevant\".\n", "\n", "Keep in mind that these strategies have different trade off and the best strategy likely depends on the application that you're designing!\n", "\n", "This guide demonstrates how to implement strategies 2 and 3." ] }, { "cell_type": "markdown", "id": "57969139-ad0a-487e-97d8-cb30e2af9742", "metadata": {}, "source": [ "## Set up\n", "\n", "We need some example data! Let's download an article about [cars from wikipedia](https://en.wikipedia.org/wiki/Car) and load it as a LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html)." ] }, { "cell_type": "code", "execution_count": 1, "id": "84460db2-36e1-4037-bfa6-2a11883c2ba5", "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "import requests\n", "from langchain_community.document_loaders import BSHTMLLoader\n", "\n", "# Download the content\n", "response = requests.get(\"https://en.wikipedia.org/wiki/Car\")\n", "# Write it to a file\n", "with open(\"car.html\", \"w\", encoding=\"utf-8\") as f:\n", " f.write(response.text)\n", "# Load it with an HTML parser\n", "loader = BSHTMLLoader(\"car.html\")\n", "document = loader.load()[0]\n", "# Clean up code\n", "# Replace consecutive new lines with a single new line\n", "document.page_content = re.sub(\"\\n\\n+\", \"\\n\", document.page_content)" ] }, { "cell_type": "code", "execution_count": 2, "id": "fcb6917b-123d-4630-a0ce-ed8b293d482d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "79174\n" ] } ], "source": [ "print(len(document.page_content))" ] }, { "cell_type": "markdown", "id": "af3ffb8d-587a-4370-886a-e56e617bcb9c", "metadata": {}, "source": [ "## Define the schema\n", "\n", "Following the [extraction tutorial](/docs/tutorials/extraction), we will use Pydantic to define the schema of information we wish to extract. In this case, we will extract a list of \"key developments\" (e.g., important historical events) that include a year and description.\n", "\n", "Note that we also include an `evidence` key and instruct the model to provide in verbatim the relevant sentences of text from the article. This allows us to compare the extraction results to (the model's reconstruction of) text from the original document." 
] }, { "cell_type": "code", "execution_count": 4, "id": "a3b288ed-87a6-4af0-aac8-20921dc370d4", "metadata": {}, "outputs": [], "source": [ "from typing import List, Optional\n", "\n", "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "from langchain_core.pydantic_v1 import BaseModel, Field\n", "\n", "\n", "class KeyDevelopment(BaseModel):\n", " \"\"\"Information about a development in the history of cars.\"\"\"\n", "\n", " year: int = Field(\n", " ..., description=\"The year when there was an important historic development.\"\n", " )\n", " description: str = Field(\n", " ..., description=\"What happened in this year? What was the development?\"\n", " )\n", " evidence: str = Field(\n", " ...,\n", " description=\"Repeat in verbatim the sentence(s) from which the year and description information were extracted\",\n", " )\n", "\n", "\n", "class ExtractionData(BaseModel):\n", " \"\"\"Extracted information about key developments in the history of cars.\"\"\"\n", "\n", " key_developments: List[KeyDevelopment]\n", "\n", "\n", "# Define a custom prompt to provide instructions and any additional context.\n", "# 1) You can add examples into the prompt template to improve extraction quality\n", "# 2) Introduce additional parameters to take context into account (e.g., include metadata\n", "# about the document from which the text was extracted.)\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are an expert at identifying key historic development in text. \"\n", " \"Only extract important historic developments. Extract nothing if no important information can be found in the text.\",\n", " ),\n", " (\"human\", \"{text}\"),\n", " ]\n", ")" ] }, { "cell_type": "markdown", "id": "3909e22e-8a00-4f3d-bbf2-4762a0558af3", "metadata": {}, "source": [ "## Create an extractor\n", "\n", "Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See [this table](/docs/integrations/chat) for available LLMs.\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", " openaiParams={`model=\"gpt-4-0125-preview\", temperature=0`}\n", "/>\n", "```" ] }, { "cell_type": "code", "execution_count": 5, "id": "109f4f05-d0ff-431d-93d9-8f5aa34979a6", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-4-0125-preview\", temperature=0)" ] }, { "cell_type": "code", "execution_count": 6, "id": "aa4ae224-6d3d-4fe2-b210-7db19a9fe580", "metadata": {}, "outputs": [], "source": [ "extractor = prompt | llm.with_structured_output(\n", " schema=ExtractionData,\n", " include_raw=False,\n", ")" ] }, { "cell_type": "markdown", "id": "13aebafb-26b5-42b2-ae8e-9c05cd56e5c5", "metadata": {}, "source": [ "## Brute force approach\n", "\n", "Split the documents into chunks such that each chunk fits into the context window of the LLMs." 
] }, { "cell_type": "code", "execution_count": 7, "id": "27b8a373-14b3-45ea-8bf5-9749122ad927", "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import TokenTextSplitter\n", "\n", "text_splitter = TokenTextSplitter(\n", " # Controls the size of each chunk\n", " chunk_size=2000,\n", " # Controls overlap between chunks\n", " chunk_overlap=20,\n", ")\n", "\n", "texts = text_splitter.split_text(document.page_content)" ] }, { "cell_type": "markdown", "id": "5b43d7e0-3c85-4d97-86c7-e8c984b60b0a", "metadata": {}, "source": [ "Use [batch](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) functionality to run the extraction in **parallel** across each chunk! \n", "\n", ":::{.callout-tip}\n", "You can often use .batch() to parallelize the extractions! `.batch` uses a threadpool under the hood to help you parallelize workloads.\n", "\n", "If your model is exposed via an API, this will likely speed up your extraction flow!\n", ":::" ] }, { "cell_type": "code", "execution_count": 8, "id": "6ba766b5-8d6c-48e6-8d69-f391a66b65d2", "metadata": {}, "outputs": [], "source": [ "# Limit just to the first 3 chunks\n", "# so the code can be re-run quickly\n", "first_few = texts[:3]\n", "\n", "extractions = extractor.batch(\n", " [{\"text\": text} for text in first_few],\n", " {\"max_concurrency\": 5}, # limit the concurrency by passing max concurrency!\n", ")" ] }, { "cell_type": "markdown", "id": "67da8904-e927-406b-a439-2a16f6087ccf", "metadata": {}, "source": [ "### Merge results\n", "\n", "After extracting data from across the chunks, we'll want to merge the extractions together." ] }, { "cell_type": "code", "execution_count": 9, "id": "c3f77470-ce6c-477f-8957-650913218632", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[KeyDevelopment(year=1966, description='The Toyota Corolla began production, becoming the best-selling series of automobile in history.', evidence='The Toyota Corolla, which has been in production since 1966, is the best-selling series of automobile in history.'),\n", " KeyDevelopment(year=1769, description='Nicolas-Joseph Cugnot built the first steam-powered road vehicle.', evidence='The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769.'),\n", " KeyDevelopment(year=1808, description='François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile.', evidence='the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808.'),\n", " KeyDevelopment(year=1886, description='Carl Benz patented his Benz Patent-Motorwagen, inventing the modern car.', evidence='The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventor Carl Benz patented his Benz Patent-Motorwagen.'),\n", " KeyDevelopment(year=1908, description='Ford Model T, one of the first cars affordable by the masses, began production.', evidence='One of the first cars affordable by the masses was the Ford Model T, begun in 1908, an American car manufactured by the Ford Motor Company.'),\n", " KeyDevelopment(year=1888, description=\"Bertha Benz undertook the first road trip by car to prove the road-worthiness of her husband's invention.\", evidence=\"In August 1888, Bertha Benz, the wife of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention.\"),\n", " KeyDevelopment(year=1896, description='Benz designed and patented the 
first internal-combustion flat engine, called boxermotor.', evidence='In 1896, Benz designed and patented the first internal-combustion flat engine, called boxermotor.'),\n", " KeyDevelopment(year=1897, description='Nesselsdorfer Wagenbau produced the Präsident automobil, one of the first factory-made cars in the world.', evidence='The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.'),\n", " KeyDevelopment(year=1890, description='Daimler Motoren Gesellschaft (DMG) was founded by Daimler and Maybach in Cannstatt.', evidence='Daimler and Maybach founded Daimler Motoren Gesellschaft (DMG) in Cannstatt in 1890.'),\n", " KeyDevelopment(year=1891, description='Auguste Doriot and Louis Rigoulot completed the longest trip by a petrol-driven vehicle with a Daimler powered Peugeot Type 3.', evidence='In 1891, Auguste Doriot and his Peugeot colleague Louis Rigoulot completed the longest trip by a petrol-driven vehicle when their self-designed and built Daimler powered Peugeot Type 3 completed 2,100 kilometres (1,300 mi) from Valentigney to Paris and Brest and back again.')]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "key_developments = []\n", "\n", "for extraction in extractions:\n", " key_developments.extend(extraction.key_developments)\n", "\n", "key_developments[:10]" ] }, { "cell_type": "markdown", "id": "48afd4a7-abcd-48b4-8ff1-6ca485f529e3", "metadata": {}, "source": [ "## RAG based approach\n", "\n", "Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the the most relevant chunks.\n", "\n", ":::{.callout-caution}\n", "It can be difficult to identify which chunks are relevant.\n", "\n", "For example, in the `car` article we're using here, most of the article contains key development information. So by using\n", "**RAG**, we'll likely be throwing out a lot of relevant information.\n", "\n", "We suggest experimenting with your use case and determining whether this approach works or not.\n", ":::\n", "\n", "To implement the RAG based approach: \n", "\n", "1. Chunk up your document(s) and index them (e.g., in a vectorstore);\n", "2. Prepend the `extractor` chain with a retrieval step using the vectorstore.\n", "\n", "Here's a simple example that relies on the `FAISS` vectorstore." ] }, { "cell_type": "code", "execution_count": 10, "id": "aaf37c82-625b-4fa1-8e88-73303f08ac16", "metadata": {}, "outputs": [], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.documents import Document\n", "from langchain_core.runnables import RunnableLambda\n", "from langchain_openai import OpenAIEmbeddings\n", "from langchain_text_splitters import CharacterTextSplitter\n", "\n", "texts = text_splitter.split_text(document.page_content)\n", "vectorstore = FAISS.from_texts(texts, embedding=OpenAIEmbeddings())\n", "\n", "retriever = vectorstore.as_retriever(\n", " search_kwargs={\"k\": 1}\n", ") # Only extract from first document" ] }, { "cell_type": "markdown", "id": "013ecad9-f80f-477c-b954-494b46a02a07", "metadata": {}, "source": [ "In this case the RAG extractor is only looking at the top document." 
] }, { "cell_type": "code", "execution_count": 11, "id": "47aad00b-7013-4f7f-a1b0-02ef269093bf", "metadata": {}, "outputs": [], "source": [ "rag_extractor = {\n", " \"text\": retriever | (lambda docs: docs[0].page_content) # fetch content of top doc\n", "} | extractor" ] }, { "cell_type": "code", "execution_count": 12, "id": "68f2de01-0cd8-456e-a959-db236189d41b", "metadata": {}, "outputs": [], "source": [ "results = rag_extractor.invoke(\"Key developments associated with cars\")" ] }, { "cell_type": "code", "execution_count": 13, "id": "1788e2d6-77bb-417f-827c-eb96c035164e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "year=1869 description='Mary Ward became one of the first documented car fatalities in Parsonstown, Ireland.' evidence='Mary Ward became one of the first documented car fatalities in 1869 in Parsonstown, Ireland,'\n", "year=1899 description=\"Henry Bliss one of the US's first pedestrian car casualties in New York City.\" evidence=\"Henry Bliss one of the US's first pedestrian car casualties in 1899 in New York City.\"\n", "year=2030 description='All fossil fuel vehicles will be banned in Amsterdam.' evidence='all fossil fuel vehicles will be banned in Amsterdam from 2030.'\n" ] } ], "source": [ "for key_development in results.key_developments:\n", " print(key_development)" ] }, { "cell_type": "markdown", "id": "cf36e626-cf5d-4324-ba29-9bd602be9b97", "metadata": {}, "source": [ "## Common issues\n", "\n", "Different methods have their own pros and cons related to cost, speed, and accuracy.\n", "\n", "Watch out for these issues:\n", "\n", "* Chunking content means that the LLM can fail to extract information if the information is spread across multiple chunks.\n", "* Large chunk overlap may cause the same information to be extracted twice, so be prepared to de-duplicate!\n", "* LLMs can make up data. If looking for a single fact across a large text and using a brute force approach, you may end up getting more made up data." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
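The "Common issues" note in the notebook above points out that chunk overlap can cause the same development to be extracted twice. A minimal de-duplication pass over the merged results might look like the sketch below (reusing the `key_developments` list built there); it only removes exact repeats of (year, description), so near-duplicates with different wording would still need fuzzier matching.

```python
def dedupe_key_developments(key_developments):
    """Drop KeyDevelopment entries whose (year, description) pair was already seen."""
    seen = set()
    unique = []
    for kd in key_developments:
        fingerprint = (kd.year, kd.description.strip().lower())
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(kd)
    return unique


key_developments = dedupe_key_developments(key_developments)
print(len(key_developments))
```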
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/extraction_parse.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "ea37db49-d389-4291-be73-885d06c1fb7e", "metadata": {}, "source": [ "# How to use prompting alone (no tool calling) to do extraction\n", "\n", "Tool calling features are not required for generating structured output from LLMs. LLMs that are able to follow prompt instructions well can be tasked with outputting information in a given format.\n", "\n", "This approach relies on designing good prompts and then parsing the output of the LLMs to make them extract information well.\n", "\n", "To extract data without tool-calling features: \n", "\n", "1. Instruct the LLM to generate text following an expected format (e.g., JSON with a certain schema);\n", "2. Use [output parsers](/docs/concepts#output-parsers) to structure the model response into a desired Python object.\n", "\n", "First we select a LLM:\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs customVarName=\"model\" />\n", "```" ] }, { "cell_type": "code", "execution_count": 2, "id": "25487939-8713-4ec7-b774-e4a761ac8298", "metadata": {}, "outputs": [], "source": [ "# | output: false\n", "# | echo: false\n", "\n", "from langchain_anthropic.chat_models import ChatAnthropic\n", "\n", "model = ChatAnthropic(model_name=\"claude-3-sonnet-20240229\", temperature=0)" ] }, { "cell_type": "markdown", "id": "3e412374-3beb-4bbf-966b-400c1f66a258", "metadata": {}, "source": [ ":::{.callout-tip}\n", "This tutorial is meant to be simple, but generally should really include reference examples to squeeze out performance!\n", ":::" ] }, { "cell_type": "markdown", "id": "abc1a945-0f80-4953-add4-cd572b6f2a51", "metadata": {}, "source": [ "## Using PydanticOutputParser\n", "\n", "The following example uses the built-in `PydanticOutputParser` to parse the output of a chat model." ] }, { "cell_type": "code", "execution_count": 3, "id": "497eb023-c043-443d-ac62-2d4ea85fe1b0", "metadata": {}, "outputs": [], "source": [ "from typing import List, Optional\n", "\n", "from langchain_core.output_parsers import PydanticOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.pydantic_v1 import BaseModel, Field, validator\n", "\n", "\n", "class Person(BaseModel):\n", " \"\"\"Information about a person.\"\"\"\n", "\n", " name: str = Field(..., description=\"The name of the person\")\n", " height_in_meters: float = Field(\n", " ..., description=\"The height of the person expressed in meters.\"\n", " )\n", "\n", "\n", "class People(BaseModel):\n", " \"\"\"Identifying information about all people in a text.\"\"\"\n", "\n", " people: List[Person]\n", "\n", "\n", "# Set up a parser\n", "parser = PydanticOutputParser(pydantic_object=People)\n", "\n", "# Prompt\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Answer the user query. 
Wrap the output in `json` tags\\n{format_instructions}\",\n", " ),\n", " (\"human\", \"{query}\"),\n", " ]\n", ").partial(format_instructions=parser.get_format_instructions())" ] }, { "cell_type": "markdown", "id": "c31aa2c8-05a9-4a12-80c5-ea1250dea0ae", "metadata": {}, "source": [ "Let's take a look at what information is sent to the model" ] }, { "cell_type": "code", "execution_count": 4, "id": "20b99ffb-a114-49a9-a7be-154c525f8ada", "metadata": {}, "outputs": [], "source": [ "query = \"Anna is 23 years old and she is 6 feet tall\"" ] }, { "cell_type": "code", "execution_count": 5, "id": "4f3a66ce-de19-4571-9e54-67504ae3fba7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "System: Answer the user query. Wrap the output in `json` tags\n", "The output should be formatted as a JSON instance that conforms to the JSON schema below.\n", "\n", "As an example, for the schema {\"properties\": {\"foo\": {\"title\": \"Foo\", \"description\": \"a list of strings\", \"type\": \"array\", \"items\": {\"type\": \"string\"}}}, \"required\": [\"foo\"]}\n", "the object {\"foo\": [\"bar\", \"baz\"]} is a well-formatted instance of the schema. The object {\"properties\": {\"foo\": [\"bar\", \"baz\"]}} is not well-formatted.\n", "\n", "Here is the output schema:\n", "```\n", "{\"description\": \"Identifying information about all people in a text.\", \"properties\": {\"people\": {\"title\": \"People\", \"type\": \"array\", \"items\": {\"$ref\": \"#/definitions/Person\"}}}, \"required\": [\"people\"], \"definitions\": {\"Person\": {\"title\": \"Person\", \"description\": \"Information about a person.\", \"type\": \"object\", \"properties\": {\"name\": {\"title\": \"Name\", \"description\": \"The name of the person\", \"type\": \"string\"}, \"height_in_meters\": {\"title\": \"Height In Meters\", \"description\": \"The height of the person expressed in meters.\", \"type\": \"number\"}}, \"required\": [\"name\", \"height_in_meters\"]}}}\n", "```\n", "Human: Anna is 23 years old and she is 6 feet tall\n" ] } ], "source": [ "print(prompt.format_prompt(query=query).to_string())" ] }, { "cell_type": "markdown", "id": "6f1048e0-1bfd-49f9-b697-74389a5ce69c", "metadata": {}, "source": [ "Having defined our prompt, we simply chain together the prompt, model and output parser:" ] }, { "cell_type": "code", "execution_count": 6, "id": "7e0041eb-37dc-4384-9fe3-6dd8c356371e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "People(people=[Person(name='Anna', height_in_meters=1.83)])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain = prompt | model | parser\n", "chain.invoke({\"query\": query})" ] }, { "cell_type": "markdown", "id": "dd492fe4-110a-4b83-a191-79fffbc1055a", "metadata": {}, "source": [ "Check out the associated [Langsmith trace](https://smith.langchain.com/public/92ed52a3-92b9-45af-a663-0a9c00e5e396/r).\n", "\n", "Note that the schema shows up in two places: \n", "\n", "1. In the prompt, via `parser.get_format_instructions()`;\n", "2. In the chain, to receive the formatted output and structure it into a Python object (in this case, the Pydantic object `People`)." 
] }, { "cell_type": "markdown", "id": "815b3b87-3bc6-4b56-835e-c6b6703cef5d", "metadata": {}, "source": [ "## Custom Parsing\n", "\n", "If desired, it's easy to create a custom prompt and parser with `LangChain` and `LCEL`.\n", "\n", "To create a custom parser, define a function to parse the output from the model (typically an [AIMessage](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html)) into an object of your choice.\n", "\n", "See below for a simple implementation of a JSON parser." ] }, { "cell_type": "code", "execution_count": 7, "id": "b1f11912-c1bb-4a2a-a482-79bf3996961f", "metadata": {}, "outputs": [], "source": [ "import json\n", "import re\n", "from typing import List, Optional\n", "\n", "from langchain_anthropic.chat_models import ChatAnthropic\n", "from langchain_core.messages import AIMessage\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.pydantic_v1 import BaseModel, Field, validator\n", "\n", "\n", "class Person(BaseModel):\n", " \"\"\"Information about a person.\"\"\"\n", "\n", " name: str = Field(..., description=\"The name of the person\")\n", " height_in_meters: float = Field(\n", " ..., description=\"The height of the person expressed in meters.\"\n", " )\n", "\n", "\n", "class People(BaseModel):\n", " \"\"\"Identifying information about all people in a text.\"\"\"\n", "\n", " people: List[Person]\n", "\n", "\n", "# Prompt\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Answer the user query. Output your answer as JSON that \"\n", " \"matches the given schema: ```json\\n{schema}\\n```. \"\n", " \"Make sure to wrap the answer in ```json and ``` tags\",\n", " ),\n", " (\"human\", \"{query}\"),\n", " ]\n", ").partial(schema=People.schema())\n", "\n", "\n", "# Custom parser\n", "def extract_json(message: AIMessage) -> List[dict]:\n", " \"\"\"Extracts JSON content from a string where JSON is embedded between ```json and ``` tags.\n", "\n", " Parameters:\n", " text (str): The text containing the JSON content.\n", "\n", " Returns:\n", " list: A list of extracted JSON strings.\n", " \"\"\"\n", " text = message.content\n", " # Define the regular expression pattern to match JSON blocks\n", " pattern = r\"```json(.*?)```\"\n", "\n", " # Find all non-overlapping matches of the pattern in the string\n", " matches = re.findall(pattern, text, re.DOTALL)\n", "\n", " # Return the list of matched JSON strings, stripping any leading or trailing whitespace\n", " try:\n", " return [json.loads(match.strip()) for match in matches]\n", " except Exception:\n", " raise ValueError(f\"Failed to parse: {message}\")" ] }, { "cell_type": "code", "execution_count": 8, "id": "9260d5e8-3b6c-4639-9f3b-fb2f90239e4b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "System: Answer the user query. 
Output your answer as JSON that matches the given schema: ```json\n", "{'title': 'People', 'description': 'Identifying information about all people in a text.', 'type': 'object', 'properties': {'people': {'title': 'People', 'type': 'array', 'items': {'$ref': '#/definitions/Person'}}}, 'required': ['people'], 'definitions': {'Person': {'title': 'Person', 'description': 'Information about a person.', 'type': 'object', 'properties': {'name': {'title': 'Name', 'description': 'The name of the person', 'type': 'string'}, 'height_in_meters': {'title': 'Height In Meters', 'description': 'The height of the person expressed in meters.', 'type': 'number'}}, 'required': ['name', 'height_in_meters']}}}\n", "```. Make sure to wrap the answer in ```json and ``` tags\n", "Human: Anna is 23 years old and she is 6 feet tall\n" ] } ], "source": [ "query = \"Anna is 23 years old and she is 6 feet tall\"\n", "print(prompt.format_prompt(query=query).to_string())" ] }, { "cell_type": "code", "execution_count": 9, "id": "c523301d-ae0e-45e3-b195-7fd28c67a5c4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'people': [{'name': 'Anna', 'height_in_meters': 1.83}]}]" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain = prompt | model | extract_json\n", "chain.invoke({\"query\": query})" ] }, { "cell_type": "markdown", "id": "d3601bde", "metadata": {}, "source": [ "## Other Libraries\n", "\n", "If you're looking at extracting using a parsing approach, check out the [Kor](https://eyurtsev.github.io/kor/) library. It's written by one of the `LangChain` maintainers and it\n", "helps to craft a prompt that takes examples into account, allows controlling formats (e.g., JSON or CSV) and expresses the schema in TypeScript. It seems to work pretty!" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
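The custom `extract_json` parser in the notebook above returns plain dicts with no schema validation. If you want the guarantees that `PydanticOutputParser` provides, one option (sketched below, reusing the `People` model, `prompt`, `model`, and `query` from that notebook) is to validate each parsed dict against the Pydantic model before returning it; the helper name `extract_people` is an assumption for illustration.

```python
from langchain_core.messages import AIMessage


def extract_people(message: AIMessage) -> People:
    """Parse the fenced JSON block and validate it against the People schema."""
    parsed = extract_json(message)  # list of dicts, one per fenced JSON block
    if not parsed:
        raise ValueError(f"No JSON block found in: {message.content!r}")
    return People.parse_obj(parsed[0])  # raises if the dict does not match the schema


chain = prompt | model | extract_people
chain.invoke({"query": query})  # e.g. People(people=[Person(name='Anna', height_in_meters=1.83)])
```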
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/fallbacks.ipynb
{ "cells": [ { "cell_type": "raw", "id": "018f3868-e60d-4db6-a1c6-c6633c66b1f4", "metadata": {}, "source": [ "---\n", "keywords: [LCEL, fallbacks]\n", "---" ] }, { "cell_type": "markdown", "id": "19c9cbd6", "metadata": {}, "source": [ "# How to add fallbacks to a runnable\n", "\n", "When working with language models, you may often encounter issues from the underlying APIs, whether these be rate limiting or downtime. Therefore, as you go to move your LLM applications into production it becomes more and more important to safeguard against these. That's why we've introduced the concept of fallbacks. \n", "\n", "A **fallback** is an alternative plan that may be used in an emergency.\n", "\n", "Crucially, fallbacks can be applied not only on the LLM level but on the whole runnable level. This is important because often times different models require different prompts. So if your call to OpenAI fails, you don't just want to send the same prompt to Anthropic - you probably want to use a different prompt template and send a different version there." ] }, { "cell_type": "markdown", "id": "a6bb9ba9", "metadata": {}, "source": [ "## Fallback for LLM API Errors\n", "\n", "This is maybe the most common use case for fallbacks. A request to an LLM API can fail for a variety of reasons - the API could be down, you could have hit rate limits, any number of things. Therefore, using fallbacks can help protect against these types of things.\n", "\n", "IMPORTANT: By default, a lot of the LLM wrappers catch errors and retry. You will most likely want to turn those off when working with fallbacks. Otherwise the first wrapper will keep on retrying and not failing." ] }, { "cell_type": "code", "execution_count": null, "id": "3a449a2e", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-openai" ] }, { "cell_type": "code", "execution_count": 1, "id": "d3e893bf", "metadata": {}, "outputs": [], "source": [ "from langchain_anthropic import ChatAnthropic\n", "from langchain_openai import ChatOpenAI" ] }, { "cell_type": "markdown", "id": "4847c82d", "metadata": {}, "source": [ "First, let's mock out what happens if we hit a RateLimitError from OpenAI" ] }, { "cell_type": "code", "execution_count": 2, "id": "dfdd8bf5", "metadata": {}, "outputs": [], "source": [ "from unittest.mock import patch\n", "\n", "import httpx\n", "from openai import RateLimitError\n", "\n", "request = httpx.Request(\"GET\", \"/\")\n", "response = httpx.Response(200, request=request)\n", "error = RateLimitError(\"rate limit\", response=response, body=\"\")" ] }, { "cell_type": "code", "execution_count": 3, "id": "e6fdffc1", "metadata": {}, "outputs": [], "source": [ "# Note that we set max_retries = 0 to avoid retrying on RateLimits, etc\n", "openai_llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", max_retries=0)\n", "anthropic_llm = ChatAnthropic(model=\"claude-3-haiku-20240307\")\n", "llm = openai_llm.with_fallbacks([anthropic_llm])" ] }, { "cell_type": "code", "execution_count": 4, "id": "584461ab", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Hit error\n" ] } ], "source": [ "# Let's use just the OpenAI LLm first, to show that we run into an error\n", "with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n", " try:\n", " print(openai_llm.invoke(\"Why did the chicken cross the road?\"))\n", " except RateLimitError:\n", " print(\"Hit error\")" ] }, { "cell_type": "code", "execution_count": 28, "id": "4fc1e673", "metadata": {}, 
"outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "content=' I don\\'t actually know why the chicken crossed the road, but here are some possible humorous answers:\\n\\n- To get to the other side!\\n\\n- It was too chicken to just stand there. \\n\\n- It wanted a change of scenery.\\n\\n- It wanted to show the possum it could be done.\\n\\n- It was on its way to a poultry farmers\\' convention.\\n\\nThe joke plays on the double meaning of \"the other side\" - literally crossing the road to the other side, or the \"other side\" meaning the afterlife. So it\\'s an anti-joke, with a silly or unexpected pun as the answer.' additional_kwargs={} example=False\n" ] } ], "source": [ "# Now let's try with fallbacks to Anthropic\n", "with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n", " try:\n", " print(llm.invoke(\"Why did the chicken cross the road?\"))\n", " except RateLimitError:\n", " print(\"Hit error\")" ] }, { "cell_type": "markdown", "id": "f00bea25", "metadata": {}, "source": [ "We can use our \"LLM with Fallbacks\" as we would a normal LLM." ] }, { "cell_type": "code", "execution_count": 29, "id": "4f8eaaa0", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "content=\" I don't actually know why the kangaroo crossed the road, but I can take a guess! Here are some possible reasons:\\n\\n- To get to the other side (the classic joke answer!)\\n\\n- It was trying to find some food or water \\n\\n- It was trying to find a mate during mating season\\n\\n- It was fleeing from a predator or perceived threat\\n\\n- It was disoriented and crossed accidentally \\n\\n- It was following a herd of other kangaroos who were crossing\\n\\n- It wanted a change of scenery or environment \\n\\n- It was trying to reach a new habitat or territory\\n\\nThe real reason is unknown without more context, but hopefully one of those potential explanations does the joke justice! Let me know if you have any other animal jokes I can try to decipher.\" additional_kwargs={} example=False\n" ] } ], "source": [ "from langchain_core.prompts import ChatPromptTemplate\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You're a nice assistant who always includes a compliment in your response\",\n", " ),\n", " (\"human\", \"Why did the {animal} cross the road\"),\n", " ]\n", ")\n", "chain = prompt | llm\n", "with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n", " try:\n", " print(chain.invoke({\"animal\": \"kangaroo\"}))\n", " except RateLimitError:\n", " print(\"Hit error\")" ] }, { "cell_type": "markdown", "id": "8d62241b", "metadata": {}, "source": [ "## Fallback for Sequences\n", "\n", "We can also create fallbacks for sequences, that are sequences themselves. Here we do that with two different models: ChatOpenAI and then normal OpenAI (which does not use a chat model). Because OpenAI is NOT a chat model, you likely want a different prompt." 
] }, { "cell_type": "code", "execution_count": 30, "id": "6d0b8056", "metadata": {}, "outputs": [], "source": [ "# First let's create a chain with a ChatModel\n", "# We add in a string output parser here so the outputs between the two are the same type\n", "from langchain_core.output_parsers import StrOutputParser\n", "\n", "chat_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You're a nice assistant who always includes a compliment in your response\",\n", " ),\n", " (\"human\", \"Why did the {animal} cross the road\"),\n", " ]\n", ")\n", "# Here we're going to use a bad model name to easily create a chain that will error\n", "chat_model = ChatOpenAI(model=\"gpt-fake\")\n", "bad_chain = chat_prompt | chat_model | StrOutputParser()" ] }, { "cell_type": "code", "execution_count": 31, "id": "8d1fc2a5", "metadata": {}, "outputs": [], "source": [ "# Now lets create a chain with the normal OpenAI model\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import OpenAI\n", "\n", "prompt_template = \"\"\"Instructions: You should always include a compliment in your response.\n", "\n", "Question: Why did the {animal} cross the road?\"\"\"\n", "prompt = PromptTemplate.from_template(prompt_template)\n", "llm = OpenAI()\n", "good_chain = prompt | llm" ] }, { "cell_type": "code", "execution_count": 32, "id": "283bfa44", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'\\n\\nAnswer: The turtle crossed the road to get to the other side, and I have to say he had some impressive determination.'" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# We can now create a final chain which combines the two\n", "chain = bad_chain.with_fallbacks([good_chain])\n", "chain.invoke({\"animal\": \"turtle\"})" ] }, { "cell_type": "markdown", "id": "ec4685b4", "metadata": {}, "source": [ "## Fallback for Long Inputs\n", "\n", "One of the big limiting factors of LLMs is their context window. Usually, you can count and track the length of prompts before sending them to an LLM, but in situations where that is hard/complicated, you can fallback to a model with a longer context length." ] }, { "cell_type": "code", "execution_count": 34, "id": "564b84c9", "metadata": {}, "outputs": [], "source": [ "short_llm = ChatOpenAI()\n", "long_llm = ChatOpenAI(model=\"gpt-3.5-turbo-16k\")\n", "llm = short_llm.with_fallbacks([long_llm])" ] }, { "cell_type": "code", "execution_count": 38, "id": "5e27a775", "metadata": {}, "outputs": [], "source": [ "inputs = \"What is the next number: \" + \", \".join([\"one\", \"two\"] * 3000)" ] }, { "cell_type": "code", "execution_count": 40, "id": "0a502731", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This model's maximum context length is 4097 tokens. However, your messages resulted in 12012 tokens. Please reduce the length of the messages.\n" ] } ], "source": [ "try:\n", " print(short_llm.invoke(inputs))\n", "except Exception as e:\n", " print(e)" ] }, { "cell_type": "code", "execution_count": 41, "id": "d91ba5d7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "content='The next number in the sequence is two.' 
additional_kwargs={} example=False\n" ] } ], "source": [ "try:\n", " print(llm.invoke(inputs))\n", "except Exception as e:\n", " print(e)" ] }, { "cell_type": "markdown", "id": "2a6735df", "metadata": {}, "source": [ "## Fallback to Better Model\n", "\n", "Often times we ask models to output format in a specific format (like JSON). Models like GPT-3.5 can do this okay, but sometimes struggle. This naturally points to fallbacks - we can try with GPT-3.5 (faster, cheaper), but then if parsing fails we can use GPT-4." ] }, { "cell_type": "code", "execution_count": 42, "id": "867a3793", "metadata": {}, "outputs": [], "source": [ "from langchain.output_parsers import DatetimeOutputParser" ] }, { "cell_type": "code", "execution_count": 67, "id": "b8d9959d", "metadata": {}, "outputs": [], "source": [ "prompt = ChatPromptTemplate.from_template(\n", " \"what time was {event} (in %Y-%m-%dT%H:%M:%S.%fZ format - only return this value)\"\n", ")" ] }, { "cell_type": "code", "execution_count": 75, "id": "98087a76", "metadata": {}, "outputs": [], "source": [ "# In this case we are going to do the fallbacks on the LLM + output parser level\n", "# Because the error will get raised in the OutputParser\n", "openai_35 = ChatOpenAI() | DatetimeOutputParser()\n", "openai_4 = ChatOpenAI(model=\"gpt-4\") | DatetimeOutputParser()" ] }, { "cell_type": "code", "execution_count": 77, "id": "17ec9e8f", "metadata": {}, "outputs": [], "source": [ "only_35 = prompt | openai_35\n", "fallback_4 = prompt | openai_35.with_fallbacks([openai_4])" ] }, { "cell_type": "code", "execution_count": 80, "id": "7e536f0b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Error: Could not parse datetime string: The Super Bowl in 1994 took place on January 30th at 3:30 PM local time. Converting this to the specified format (%Y-%m-%dT%H:%M:%S.%fZ) results in: 1994-01-30T15:30:00.000Z\n" ] } ], "source": [ "try:\n", " print(only_35.invoke({\"event\": \"the superbowl in 1994\"}))\n", "except Exception as e:\n", " print(f\"Error: {e}\")" ] }, { "cell_type": "code", "execution_count": 81, "id": "01355c5e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1994-01-30 15:30:00\n" ] } ], "source": [ "try:\n", " print(fallback_4.invoke({\"event\": \"the superbowl in 1994\"}))\n", "except Exception as e:\n", " print(f\"Error: {e}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "c537f9d0", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
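Supplementary sketch (not part of the notebook above): `with_fallbacks` also accepts an `exceptions_to_handle` tuple, which limits the fallback to specific error types so that unrelated failures still surface. The snippet below reuses the model names from the notebook and assumes `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` are set; treat it as illustrative rather than as part of the original guide.

```python
# Sketch: only fall back when OpenAI reports a rate limit; other exceptions re-raise.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from openai import RateLimitError

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

# max_retries=0 so the primary model fails fast instead of retrying internally.
primary = ChatOpenAI(model="gpt-3.5-turbo-0125", max_retries=0)
fallback = ChatAnthropic(model="claude-3-haiku-20240307")

# Restrict the fallback to rate-limit errors only.
llm = primary.with_fallbacks([fallback], exceptions_to_handle=(RateLimitError,))

print(llm.invoke("Why did the chicken cross the road?").content)
```

Restricting the handled exception types keeps genuine bugs (for example, a malformed prompt) from being silently masked by the fallback model.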
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/few_shot_examples.ipynb
{ "cells": [ { "cell_type": "raw", "id": "94c3ad61", "metadata": {}, "source": [ "---\n", "sidebar_position: 3\n", "---" ] }, { "cell_type": "markdown", "id": "b91e03f1", "metadata": {}, "source": [ "# How to use few shot examples\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Example selectors](/docs/concepts/#example-selectors)\n", "- [LLMs](/docs/concepts/#llms)\n", "- [Vectorstores](/docs/concepts/#vectorstores)\n", "\n", ":::\n", "\n", "In this guide, we'll learn how to create a simple prompt template that provides the model with example inputs and outputs when generating. Providing the LLM with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n", "\n", "A few-shot prompt template can be constructed from either a set of examples, or from an [Example Selector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.base.BaseExampleSelector.html) class responsible for choosing a subset of examples from the defined set.\n", "\n", "This guide will cover few-shotting with string prompt templates. For a guide on few-shotting with chat messages for chat models, see [here](/docs/how_to/few_shot_examples_chat/).\n", "\n", "## Create a formatter for the few-shot examples\n", "\n", "Configure a formatter that will format the few-shot examples into a string. This formatter should be a `PromptTemplate` object." ] }, { "cell_type": "code", "execution_count": 1, "id": "4e70bce2", "metadata": {}, "outputs": [], "source": [ "from langchain_core.prompts import PromptTemplate\n", "\n", "example_prompt = PromptTemplate.from_template(\"Question: {question}\\n{answer}\")" ] }, { "cell_type": "markdown", "id": "50846ad4", "metadata": {}, "source": [ "## Creating the example set\n", "\n", "Next, we'll create a list of few-shot examples. Each example should be a dictionary representing an example input to the formatter prompt we defined above." 
] }, { "cell_type": "code", "execution_count": 2, "id": "a44be840", "metadata": {}, "outputs": [], "source": [ "examples = [\n", " {\n", " \"question\": \"Who lived longer, Muhammad Ali or Alan Turing?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: How old was Muhammad Ali when he died?\n", "Intermediate answer: Muhammad Ali was 74 years old when he died.\n", "Follow up: How old was Alan Turing when he died?\n", "Intermediate answer: Alan Turing was 41 years old when he died.\n", "So the final answer is: Muhammad Ali\n", "\"\"\",\n", " },\n", " {\n", " \"question\": \"When was the founder of craigslist born?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the founder of craigslist?\n", "Intermediate answer: Craigslist was founded by Craig Newmark.\n", "Follow up: When was Craig Newmark born?\n", "Intermediate answer: Craig Newmark was born on December 6, 1952.\n", "So the final answer is: December 6, 1952\n", "\"\"\",\n", " },\n", " {\n", " \"question\": \"Who was the maternal grandfather of George Washington?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the mother of George Washington?\n", "Intermediate answer: The mother of George Washington was Mary Ball Washington.\n", "Follow up: Who was the father of Mary Ball Washington?\n", "Intermediate answer: The father of Mary Ball Washington was Joseph Ball.\n", "So the final answer is: Joseph Ball\n", "\"\"\",\n", " },\n", " {\n", " \"question\": \"Are both the directors of Jaws and Casino Royale from the same country?\",\n", " \"answer\": \"\"\"\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who is the director of Jaws?\n", "Intermediate Answer: The director of Jaws is Steven Spielberg.\n", "Follow up: Where is Steven Spielberg from?\n", "Intermediate Answer: The United States.\n", "Follow up: Who is the director of Casino Royale?\n", "Intermediate Answer: The director of Casino Royale is Martin Campbell.\n", "Follow up: Where is Martin Campbell from?\n", "Intermediate Answer: New Zealand.\n", "So the final answer is: No\n", "\"\"\",\n", " },\n", "]" ] }, { "cell_type": "markdown", "id": "3d1ec9d5", "metadata": {}, "source": [ "Let's test the formatting prompt with one of our examples:" ] }, { "cell_type": "code", "execution_count": 13, "id": "8c6e48ad", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Question: Who lived longer, Muhammad Ali or Alan Turing?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: How old was Muhammad Ali when he died?\n", "Intermediate answer: Muhammad Ali was 74 years old when he died.\n", "Follow up: How old was Alan Turing when he died?\n", "Intermediate answer: Alan Turing was 41 years old when he died.\n", "So the final answer is: Muhammad Ali\n", "\n" ] } ], "source": [ "print(example_prompt.invoke(examples[0]).to_string())" ] }, { "cell_type": "markdown", "id": "dad66af1", "metadata": {}, "source": [ "### Pass the examples and formatter to `FewShotPromptTemplate`\n", "\n", "Finally, create a [`FewShotPromptTemplate`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotPromptTemplate.html) object. This object takes in the few-shot examples and the formatter for the few-shot examples. 
When this `FewShotPromptTemplate` is formatted, it formats the passed examples using the `example_prompt`, then and adds them to the final prompt before `suffix`:" ] }, { "cell_type": "code", "execution_count": 14, "id": "e76fa1ba", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Question: Who lived longer, Muhammad Ali or Alan Turing?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: How old was Muhammad Ali when he died?\n", "Intermediate answer: Muhammad Ali was 74 years old when he died.\n", "Follow up: How old was Alan Turing when he died?\n", "Intermediate answer: Alan Turing was 41 years old when he died.\n", "So the final answer is: Muhammad Ali\n", "\n", "\n", "Question: When was the founder of craigslist born?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the founder of craigslist?\n", "Intermediate answer: Craigslist was founded by Craig Newmark.\n", "Follow up: When was Craig Newmark born?\n", "Intermediate answer: Craig Newmark was born on December 6, 1952.\n", "So the final answer is: December 6, 1952\n", "\n", "\n", "Question: Who was the maternal grandfather of George Washington?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the mother of George Washington?\n", "Intermediate answer: The mother of George Washington was Mary Ball Washington.\n", "Follow up: Who was the father of Mary Ball Washington?\n", "Intermediate answer: The father of Mary Ball Washington was Joseph Ball.\n", "So the final answer is: Joseph Ball\n", "\n", "\n", "Question: Are both the directors of Jaws and Casino Royale from the same country?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who is the director of Jaws?\n", "Intermediate Answer: The director of Jaws is Steven Spielberg.\n", "Follow up: Where is Steven Spielberg from?\n", "Intermediate Answer: The United States.\n", "Follow up: Who is the director of Casino Royale?\n", "Intermediate Answer: The director of Casino Royale is Martin Campbell.\n", "Follow up: Where is Martin Campbell from?\n", "Intermediate Answer: New Zealand.\n", "So the final answer is: No\n", "\n", "\n", "Question: Who was the father of Mary Ball Washington?\n" ] } ], "source": [ "from langchain_core.prompts import FewShotPromptTemplate\n", "\n", "prompt = FewShotPromptTemplate(\n", " examples=examples,\n", " example_prompt=example_prompt,\n", " suffix=\"Question: {input}\",\n", " input_variables=[\"input\"],\n", ")\n", "\n", "print(\n", " prompt.invoke({\"input\": \"Who was the father of Mary Ball Washington?\"}).to_string()\n", ")" ] }, { "cell_type": "markdown", "id": "59c6f332", "metadata": {}, "source": [ "By providing the model with examples like this, we can guide the model to a better response." ] }, { "cell_type": "markdown", "id": "bbe1f843", "metadata": {}, "source": [ "## Using an example selector\n", "\n", "We will reuse the example set and the formatter from the previous section. However, instead of feeding the examples directly into the `FewShotPromptTemplate` object, we will feed them into an implementation of `ExampleSelector` called [`SemanticSimilarityExampleSelector`](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html) instance. This class selects few-shot examples from the initial set based on their similarity to the input. 
It uses an embedding model to compute the similarity between the input and the few-shot examples, as well as a vector store to perform the nearest neighbor search.\n", "\n", "To show what it looks like, let's initialize an instance and call it in isolation:" ] }, { "cell_type": "code", "execution_count": 4, "id": "80c5ac5c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Examples most similar to the input: Who was the father of Mary Ball Washington?\n", "\n", "\n", "answer: \n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the mother of George Washington?\n", "Intermediate answer: The mother of George Washington was Mary Ball Washington.\n", "Follow up: Who was the father of Mary Ball Washington?\n", "Intermediate answer: The father of Mary Ball Washington was Joseph Ball.\n", "So the final answer is: Joseph Ball\n", "\n", "question: Who was the maternal grandfather of George Washington?\n" ] } ], "source": [ "from langchain_chroma import Chroma\n", "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "example_selector = SemanticSimilarityExampleSelector.from_examples(\n", " # This is the list of examples available to select from.\n", " examples,\n", " # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n", " OpenAIEmbeddings(),\n", " # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n", " Chroma,\n", " # This is the number of examples to produce.\n", " k=1,\n", ")\n", "\n", "# Select the most similar example to the input.\n", "question = \"Who was the father of Mary Ball Washington?\"\n", "selected_examples = example_selector.select_examples({\"question\": question})\n", "print(f\"Examples most similar to the input: {question}\")\n", "for example in selected_examples:\n", " print(\"\\n\")\n", " for k, v in example.items():\n", " print(f\"{k}: {v}\")" ] }, { "cell_type": "markdown", "id": "89ac47fe", "metadata": {}, "source": [ "Now, let's create a `FewShotPromptTemplate` object. This object takes in the example selector and the formatter prompt for the few-shot examples." 
] }, { "cell_type": "code", "execution_count": 5, "id": "de69a214", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Question: Who was the maternal grandfather of George Washington?\n", "\n", "Are follow up questions needed here: Yes.\n", "Follow up: Who was the mother of George Washington?\n", "Intermediate answer: The mother of George Washington was Mary Ball Washington.\n", "Follow up: Who was the father of Mary Ball Washington?\n", "Intermediate answer: The father of Mary Ball Washington was Joseph Ball.\n", "So the final answer is: Joseph Ball\n", "\n", "\n", "Question: Who was the father of Mary Ball Washington?\n" ] } ], "source": [ "prompt = FewShotPromptTemplate(\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " suffix=\"Question: {input}\",\n", " input_variables=[\"input\"],\n", ")\n", "\n", "print(\n", " prompt.invoke({\"input\": \"Who was the father of Mary Ball Washington?\"}).to_string()\n", ")" ] }, { "cell_type": "markdown", "id": "1b460794", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to add few-shot examples to your prompts.\n", "\n", "Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on [few shotting with chat models](/docs/how_to/few_shot_examples_chat), or the other [example selector how-to guides](/docs/how_to/example_selectors/)." ] }, { "cell_type": "code", "execution_count": null, "id": "bf06d2a6", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/few_shot_examples_chat.ipynb
{ "cells": [ { "cell_type": "raw", "id": "beba2e0e", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "---" ] }, { "cell_type": "markdown", "id": "bb0735c0", "metadata": {}, "source": [ "# How to use few shot examples in chat models\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Prompt templates](/docs/concepts/#prompt-templates)\n", "- [Example selectors](/docs/concepts/#example-selectors)\n", "- [Chat models](/docs/concepts/#chat-model)\n", "- [Vectorstores](/docs/concepts/#vectorstores)\n", "\n", ":::\n", "\n", "This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shotting, and is a simple yet powerful way to guide generation and in some cases drastically improve model performance.\n", "\n", "There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the [FewShotChatMessagePromptTemplate](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate.html?highlight=fewshot#langchain_core.prompts.few_shot.FewShotChatMessagePromptTemplate) as a flexible starting point, and you can modify or replace them as you see fit.\n", "\n", "The goal of few-shot prompt templates are to dynamically select examples based on an input, and then format the examples in a final prompt to provide for the model.\n", "\n", "**Note:** The following code examples are for chat models only, since `FewShotChatMessagePromptTemplates` are designed to output formatted [chat messages](/docs/concepts/#message-types) rather than pure strings. For similar few-shot prompt examples for pure string templates compatible with completion models (LLMs), see the [few-shot prompt templates](/docs/how_to/few_shot_examples/) guide." ] }, { "cell_type": "markdown", "id": "d716f2de-cc29-4823-9360-a808c7bfdb86", "metadata": { "tags": [] }, "source": [ "## Fixed Examples\n", "\n", "The most basic (and common) few-shot prompting technique is to use fixed prompt examples. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.\n", "\n", "The basic components of the template are:\n", "- `examples`: A list of dictionary examples to include in the final prompt.\n", "- `example_prompt`: converts each example into 1 or more messages through its [`format_messages`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html?highlight=format_messages#langchain_core.prompts.chat.ChatPromptTemplate.format_messages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.\n", "\n", "Below is a simple demonstration. 
First, define the examples you'd like to include:" ] }, { "cell_type": "code", "execution_count": 1, "id": "5b79e400", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 22.0.4; however, version 24.0 is available.\n", "You should consider upgrading via the '/Users/jacoblee/.pyenv/versions/3.10.5/bin/python -m pip install --upgrade pip' command.\u001b[0m\u001b[33m\n", "\u001b[0mNote: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install -qU langchain langchain-openai langchain-chroma\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "0fc5a02a-6249-4e92-95c3-30fff9671e8b", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n", "\n", "examples = [\n", " {\"input\": \"2+2\", \"output\": \"4\"},\n", " {\"input\": \"2+3\", \"output\": \"5\"},\n", "]" ] }, { "cell_type": "markdown", "id": "e8710ecc-2aa0-4172-a74c-250f6bc3d9e2", "metadata": {}, "source": [ "Next, assemble them into the few-shot prompt template." ] }, { "cell_type": "code", "execution_count": 3, "id": "65e72ad1-9060-47d0-91a1-bc130c8b98ac", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[HumanMessage(content='2+2'), AIMessage(content='4'), HumanMessage(content='2+3'), AIMessage(content='5')]\n" ] } ], "source": [ "# This is a prompt template used to format each individual example.\n", "example_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"human\", \"{input}\"),\n", " (\"ai\", \"{output}\"),\n", " ]\n", ")\n", "few_shot_prompt = FewShotChatMessagePromptTemplate(\n", " example_prompt=example_prompt,\n", " examples=examples,\n", ")\n", "\n", "print(few_shot_prompt.invoke({}).to_messages())" ] }, { "cell_type": "markdown", "id": "5490bd59-b28f-46a4-bbdf-0191802dd3c5", "metadata": {}, "source": [ "Finally, we assemble the final prompt as shown below, passing `few_shot_prompt` directly into the `from_messages` factory method, and use it with a model:" ] }, { "cell_type": "code", "execution_count": 4, "id": "9f86d6d9-50de-41b6-b6c7-0f9980cc0187", "metadata": { "tags": [] }, "outputs": [], "source": [ "final_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", \"You are a wondrous wizard of math.\"),\n", " few_shot_prompt,\n", " (\"human\", \"{input}\"),\n", " ]\n", ")" ] }, { "cell_type": "code", "execution_count": 5, "id": "97d443b1-6fae-4b36-bede-3ff7306288a3", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='A triangle does not have a square. 
The square of a number is the result of multiplying the number by itself.', response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 52, 'total_tokens': 75}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-3456c4ef-7b4d-4adb-9e02-8079de82a47a-0')" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "chain = final_prompt | ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n", "\n", "chain.invoke({\"input\": \"What's the square of a triangle?\"})" ] }, { "cell_type": "markdown", "id": "70ab7114-f07f-46be-8874-3705a25aba5f", "metadata": {}, "source": [ "## Dynamic few-shot prompting\n", "\n", "Sometimes you may want to select only a few examples from your overall set to show based on the input. For this, you can replace the `examples` passed into `FewShotChatMessagePromptTemplate` with an `example_selector`. The other components remain the same as above! Our dynamic few-shot prompt template would look like:\n", "\n", "- `example_selector`: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the [BaseExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.base.BaseExampleSelector.html?highlight=baseexampleselector#langchain_core.example_selectors.base.BaseExampleSelector) interface. A common example is the vectorstore-backed [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html?highlight=semanticsimilarityexampleselector#langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector)\n", "- `example_prompt`: convert each example into 1 or more messages through its [`format_messages`](https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.chat.ChatPromptTemplate.html?highlight=chatprompttemplate#langchain_core.prompts.chat.ChatPromptTemplate.format_messages) method. A common example would be to convert each example into one human message and one AI message response, or a human message followed by a function call message.\n", "\n", "These once again can be composed with other messages and chat templates to assemble your final prompt.\n", "\n", "Let's walk through an example with the `SemanticSimilarityExampleSelector`. Since this implementation uses a vectorstore to select examples based on semantic similarity, we will want to first populate the store. 
Since the basic idea here is that we want to search for and return examples most similar to the text input, we embed the `values` of our prompt examples rather than considering the keys:" ] }, { "cell_type": "code", "execution_count": 6, "id": "ad66f06a-66fd-4fcc-8166-5d0e3c801e57", "metadata": { "tags": [] }, "outputs": [], "source": [ "from langchain_chroma import Chroma\n", "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "examples = [\n", " {\"input\": \"2+2\", \"output\": \"4\"},\n", " {\"input\": \"2+3\", \"output\": \"5\"},\n", " {\"input\": \"2+4\", \"output\": \"6\"},\n", " {\"input\": \"What did the cow say to the moon?\", \"output\": \"nothing at all\"},\n", " {\n", " \"input\": \"Write me a poem about the moon\",\n", " \"output\": \"One for the moon, and one for me, who are we to talk about the moon?\",\n", " },\n", "]\n", "\n", "to_vectorize = [\" \".join(example.values()) for example in examples]\n", "embeddings = OpenAIEmbeddings()\n", "vectorstore = Chroma.from_texts(to_vectorize, embeddings, metadatas=examples)" ] }, { "cell_type": "markdown", "id": "2f7e384a-2031-432b-951c-7ea8cf9262f1", "metadata": {}, "source": [ "### Create the `example_selector`\n", "\n", "With a vectorstore created, we can create the `example_selector`. Here we will call it in isolation, and set `k` on it to only fetch the two example closest to the input." ] }, { "cell_type": "code", "execution_count": 7, "id": "7790303a-f722-452e-8921-b14bdf20bdff", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "[{'input': 'What did the cow say to the moon?', 'output': 'nothing at all'},\n", " {'input': '2+4', 'output': '6'}]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example_selector = SemanticSimilarityExampleSelector(\n", " vectorstore=vectorstore,\n", " k=2,\n", ")\n", "\n", "# The prompt template will load examples by passing the input do the `select_examples` method\n", "example_selector.select_examples({\"input\": \"horse\"})" ] }, { "cell_type": "markdown", "id": "cc77c40f-3f58-40a2-b757-a2a2ea43f24a", "metadata": {}, "source": [ "### Create prompt template\n", "\n", "We now assemble the prompt template, using the `example_selector` created above." 
] }, { "cell_type": "code", "execution_count": 14, "id": "253c255e-41d7-45f6-9d88-c7a0ced4b1bd", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[HumanMessage(content='2+3'), AIMessage(content='5'), HumanMessage(content='2+2'), AIMessage(content='4')]\n" ] } ], "source": [ "from langchain_core.prompts import ChatPromptTemplate, FewShotChatMessagePromptTemplate\n", "\n", "# Define the few-shot prompt.\n", "few_shot_prompt = FewShotChatMessagePromptTemplate(\n", " # The input variables select the values to pass to the example_selector\n", " input_variables=[\"input\"],\n", " example_selector=example_selector,\n", " # Define how each example will be formatted.\n", " # In this case, each example will become 2 messages:\n", " # 1 human, and 1 AI\n", " example_prompt=ChatPromptTemplate.from_messages(\n", " [(\"human\", \"{input}\"), (\"ai\", \"{output}\")]\n", " ),\n", ")\n", "\n", "print(few_shot_prompt.invoke(input=\"What's 3+3?\").to_messages())" ] }, { "cell_type": "markdown", "id": "339cae7d-0eb0-44a6-852f-0267c5ff72b3", "metadata": {}, "source": [ "And we can pass this few-shot chat message prompt template into another chat prompt template:" ] }, { "cell_type": "code", "execution_count": 17, "id": "e731cb45-f0ea-422c-be37-42af2a6cb2c4", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "messages=[HumanMessage(content='2+3'), AIMessage(content='5'), HumanMessage(content='2+2'), AIMessage(content='4')]\n" ] } ], "source": [ "final_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", \"You are a wondrous wizard of math.\"),\n", " few_shot_prompt,\n", " (\"human\", \"{input}\"),\n", " ]\n", ")\n", "\n", "print(few_shot_prompt.invoke(input=\"What's 3+3?\"))" ] }, { "cell_type": "markdown", "id": "2408ea69-1880-4ef5-a0fa-ffa8d2026aa9", "metadata": {}, "source": [ "### Use with an chat model\n", "\n", "Finally, you can connect your model to the few-shot prompt." ] }, { "cell_type": "code", "execution_count": 18, "id": "0568cbc6-5354-47f1-ab4d-dfcc616cf583", "metadata": { "tags": [] }, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='6', response_metadata={'token_usage': {'completion_tokens': 1, 'prompt_tokens': 51, 'total_tokens': 52}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-6bcbe158-a8e3-4a85-a754-1ba274a9f147-0')" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain = final_prompt | ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0.0)\n", "\n", "chain.invoke({\"input\": \"What's 3+3?\"})" ] }, { "cell_type": "markdown", "id": "c87fad3c", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to add few-shot examples to your chat prompts.\n", "\n", "Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on [few shotting with text completion models](/docs/how_to/few_shot_examples), or the other [example selector how-to guides](/docs/how_to/example_selectors/)." 
] }, { "cell_type": "code", "execution_count": null, "id": "46e26b53", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/filter_messages.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "e389175d-8a65-4f0d-891c-dbdfabb3c3ef", "metadata": {}, "source": [ "# How to filter messages\n", "\n", "In more complex chains and agents we might track state with a list of messages. This list can start to accumulate messages from multiple different models, speakers, sub-chains, etc., and we may only want to pass subsets of this full list of messages to each model call in the chain/agent.\n", "\n", "The `filter_messages` utility makes it easy to filter messages by type, id, or name.\n", "\n", "## Basic usage" ] }, { "cell_type": "code", "execution_count": 1, "id": "f4ad2fd3-3cab-40d4-a989-972115865b8b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content='example input', name='example_user', id='2'),\n", " HumanMessage(content='real input', name='bob', id='4')]" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import (\n", " AIMessage,\n", " HumanMessage,\n", " SystemMessage,\n", " filter_messages,\n", ")\n", "\n", "messages = [\n", " SystemMessage(\"you are a good assistant\", id=\"1\"),\n", " HumanMessage(\"example input\", id=\"2\", name=\"example_user\"),\n", " AIMessage(\"example output\", id=\"3\", name=\"example_assistant\"),\n", " HumanMessage(\"real input\", id=\"4\", name=\"bob\"),\n", " AIMessage(\"real output\", id=\"5\", name=\"alice\"),\n", "]\n", "\n", "filter_messages(messages, include_types=\"human\")" ] }, { "cell_type": "code", "execution_count": 2, "id": "7b663a1e-a8ae-453e-a072-8dd75dfab460", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[SystemMessage(content='you are a good assistant', id='1'),\n", " HumanMessage(content='real input', name='bob', id='4'),\n", " AIMessage(content='real output', name='alice', id='5')]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "filter_messages(messages, exclude_names=[\"example_user\", \"example_assistant\"])" ] }, { "cell_type": "code", "execution_count": 3, "id": "db170e46-03f8-4710-b967-23c70c3ac054", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content='example input', name='example_user', id='2'),\n", " HumanMessage(content='real input', name='bob', id='4'),\n", " AIMessage(content='real output', name='alice', id='5')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "filter_messages(messages, include_types=[HumanMessage, AIMessage], exclude_ids=[\"3\"])" ] }, { "cell_type": "markdown", "id": "b7c4e5ad-d1b4-4c18-b250-864adde8f0dd", "metadata": {}, "source": [ "## Chaining\n", "\n", "`filter_messages` can be used in an imperatively (like above) or declaratively, making it easy to compose with other components in a chain:" ] }, { "cell_type": "code", "execution_count": 4, "id": "675f8f79-db39-401c-a582-1df2478cba30", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=[], response_metadata={'id': 'msg_01Wz7gBHahAwkZ1KCBNtXmwA', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 16, 'output_tokens': 3}}, id='run-b5d8a3fe-004f-4502-a071-a6c025031827-0', usage_metadata={'input_tokens': 16, 'output_tokens': 3, 'total_tokens': 19})" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# pip install -U langchain-anthropic\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", 
temperature=0)\n", "# Notice we don't pass in messages. This creates\n", "# a RunnableLambda that takes messages as input\n", "filter_ = filter_messages(exclude_names=[\"example_user\", \"example_assistant\"])\n", "chain = filter_ | llm\n", "chain.invoke(messages)" ] }, { "cell_type": "markdown", "id": "4133ab28-f49c-480f-be92-b51eb6559153", "metadata": {}, "source": [ "Looking at the LangSmith trace we can see that before the messages are passed to the model they are filtered: https://smith.langchain.com/public/f808a724-e072-438e-9991-657cc9e7e253/r\n", "\n", "Looking at just the filter_, we can see that it's a Runnable object that can be invoked like all Runnables:" ] }, { "cell_type": "code", "execution_count": 6, "id": "c090116a-1fef-43f6-a178-7265dff9db00", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content='real input', name='bob', id='4'),\n", " AIMessage(content='real output', name='alice', id='5')]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "filter_.invoke(messages)" ] }, { "cell_type": "markdown", "id": "ff339066-d424-4042-8cca-cd4b007c1a8e", "metadata": {}, "source": [ "## API reference\n", "\n", "For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.filter_messages.html" ] } ], "metadata": { "kernelspec": { "display_name": "poetry-venv-2", "language": "python", "name": "poetry-venv-2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/function_calling.ipynb
{ "cells": [ { "cell_type": "raw", "id": "a413ade7-48f0-4d43-a1f3-d87f550a8018", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "---" ] }, { "cell_type": "markdown", "id": "50d59b14-c434-4359-be8e-4a21378e762f", "metadata": {}, "source": [ "# How to do tool/function calling\n", "\n", "```{=mdx}\n", ":::info\n", "We use the term tool calling interchangeably with function calling. Although\n", "function calling is sometimes meant to refer to invocations of a single function,\n", "we treat all models as though they can return multiple tool or function calls in \n", "each message.\n", ":::\n", "```\n", "\n", "Tool calling allows a model to respond to a given prompt by generating output that \n", "matches a user-defined schema. While the name implies that the model is performing \n", "some action, this is actually not the case! The model is coming up with the \n", "arguments to a tool, and actually running the tool (or not) is up to the user - \n", "for example, if you want to [extract output matching some schema](/docs/tutorials/extraction) \n", "from unstructured text, you could give the model an \"extraction\" tool that takes \n", "parameters matching the desired schema, then treat the generated output as your final \n", "result.\n", "\n", "A tool call includes a name, arguments dict, and an optional identifier. The \n", "arguments dict is structured `{argument_name: argument_value}`.\n", "\n", "Many LLM providers, including [Anthropic](https://www.anthropic.com/), \n", "[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), \n", "[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, \n", "support variants of a tool calling feature. These features typically allow requests \n", "to the LLM to include available tools and their schemas, and for responses to include \n", "calls to these tools. For instance, given a search engine tool, an LLM might handle a \n", "query by first issuing a call to the search engine. The system calling the LLM can \n", "receive the tool call, execute it, and return the output to the LLM to inform its \n", "response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n", "and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools). \n", "Tool-calling is extremely useful for building [tool-using chains and agents](/docs/how_to#tools), \n", "and for getting structured outputs from models more generally.\n", "\n", "Providers adopt different conventions for formatting tool schemas and tool calls. 
\n", "For instance, Anthropic returns tool calls as parsed structures within a larger content block:\n", "```python\n", "[\n", " {\n", " \"text\": \"<thinking>\\nI should use a tool.\\n</thinking>\",\n", " \"type\": \"text\"\n", " },\n", " {\n", " \"id\": \"id_value\",\n", " \"input\": {\"arg_name\": \"arg_value\"},\n", " \"name\": \"tool_name\",\n", " \"type\": \"tool_use\"\n", " }\n", "]\n", "```\n", "whereas OpenAI separates tool calls into a distinct parameter, with arguments as JSON strings:\n", "```python\n", "{\n", " \"tool_calls\": [\n", " {\n", " \"id\": \"id_value\",\n", " \"function\": {\n", " \"arguments\": '{\"arg_name\": \"arg_value\"}',\n", " \"name\": \"tool_name\"\n", " },\n", " \"type\": \"function\"\n", " }\n", " ]\n", "}\n", "```\n", "LangChain implements standard interfaces for defining tools, passing them to LLMs, \n", "and representing tool calls.\n", "\n", "## Passing tools to LLMs\n", "\n", "Chat models supporting tool calling features implement a `.bind_tools` method, which \n", "receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n", "and binds them to the chat model in its expected format. Subsequent invocations of the \n", "chat model will include tool schemas in its calls to the LLM.\n", "\n", "For example, we can define the schema for custom tools using the `@tool` decorator \n", "on Python functions:" ] }, { "cell_type": "code", "execution_count": 22, "id": "841dca72-1b57-4a42-8e22-da4835c4cfe0", "metadata": {}, "outputs": [], "source": [ "from langchain_core.tools import tool\n", "\n", "\n", "@tool\n", "def add(a: int, b: int) -> int:\n", " \"\"\"Adds a and b.\"\"\"\n", " return a + b\n", "\n", "\n", "@tool\n", "def multiply(a: int, b: int) -> int:\n", " \"\"\"Multiplies a and b.\"\"\"\n", " return a * b\n", "\n", "\n", "tools = [add, multiply]" ] }, { "cell_type": "markdown", "id": "48058b7d-048d-48e6-a272-3931ad7ad146", "metadata": {}, "source": [ "Or below, we define the schema using Pydantic:\n" ] }, { "cell_type": "code", "execution_count": 23, "id": "fca56328-85e4-4839-97b7-b5dc55920602", "metadata": {}, "outputs": [], "source": [ "from langchain_core.pydantic_v1 import BaseModel, Field\n", "\n", "\n", "# Note that the docstrings here are crucial, as they will be passed along\n", "# to the model along with the class name.\n", "class Add(BaseModel):\n", " \"\"\"Add two integers together.\"\"\"\n", "\n", " a: int = Field(..., description=\"First integer\")\n", " b: int = Field(..., description=\"Second integer\")\n", "\n", "\n", "class Multiply(BaseModel):\n", " \"\"\"Multiply two integers together.\"\"\"\n", "\n", " a: int = Field(..., description=\"First integer\")\n", " b: int = Field(..., description=\"Second integer\")\n", "\n", "\n", "tools = [Add, Multiply]" ] }, { "cell_type": "markdown", "id": "ead9068d-11f6-42f3-a508-3c1830189947", "metadata": {}, "source": [ "We can bind them to chat models as follows:\n", "\n", "```{=mdx}\n", "import ChatModelTabs from \"@theme/ChatModelTabs\";\n", "\n", "<ChatModelTabs\n", " customVarName=\"llm\"\n", " fireworksParams={`model=\"accounts/fireworks/models/firefunction-v1\", temperature=0`}\n", "/>\n", "```\n", "\n", "We can use the `bind_tools()` method to handle converting\n", "`Multiply` to a \"tool\" and binding it to the model (i.e.,\n", "passing it in each time the model is invoked)." 
] }, { "cell_type": "code", "execution_count": 67, "id": "44eb8327-a03d-4c7c-945e-30f13f455346", "metadata": {}, "outputs": [], "source": [ "# | echo: false\n", "# | output: false\n", "\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)" ] }, { "cell_type": "code", "execution_count": 68, "id": "af2a83ac-e43f-43ce-b107-9ed8376bfb75", "metadata": {}, "outputs": [], "source": [ "llm_with_tools = llm.bind_tools(tools)" ] }, { "cell_type": "markdown", "id": "16208230-f64f-4935-9aa1-280a91f34ba3", "metadata": {}, "source": [ "## Tool calls\n", "\n", "If tool calls are included in a LLM response, they are attached to the corresponding \n", "[message](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage) \n", "or [message chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n", "as a list of [tool call](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall) \n", "objects in the `.tool_calls` attribute. A `ToolCall` is a typed dict that includes a \n", "tool name, dict of argument values, and (optionally) an identifier. Messages with no \n", "tool calls default to an empty list for this attribute.\n", "\n", "Example:" ] }, { "cell_type": "code", "execution_count": 15, "id": "1640a4b4-c201-4b23-b257-738d854fb9fd", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'name': 'Multiply',\n", " 'args': {'a': 3, 'b': 12},\n", " 'id': 'call_1Tdp5wUXbYQzpkBoagGXqUTo'},\n", " {'name': 'Add',\n", " 'args': {'a': 11, 'b': 49},\n", " 'id': 'call_k9v09vYioS3X0Qg35zESuUKI'}]" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "query = \"What is 3 * 12? Also, what is 11 + 49?\"\n", "\n", "llm_with_tools.invoke(query).tool_calls" ] }, { "cell_type": "markdown", "id": "ac3ff0fe-5119-46b8-a578-530245bff23f", "metadata": {}, "source": [ "The `.tool_calls` attribute should contain valid tool calls. Note that on occasion, \n", "model providers may output malformed tool calls (e.g., arguments that are not \n", "valid JSON). When parsing fails in these cases, instances \n", "of [InvalidToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.InvalidToolCall.html#langchain_core.messages.tool.InvalidToolCall) \n", "are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n", "a name, string arguments, identifier, and error message.\n", "\n", "If desired, [output parsers](/docs/how_to#output-parsers) can further \n", "process the output. 
For example, we can convert back to the original Pydantic class:" ] }, { "cell_type": "code", "execution_count": 16, "id": "ca15fcad-74fe-4109-a1b1-346c3eefe238", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Multiply(a=3, b=12), Add(a=11, b=49)]" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n", "\n", "chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n", "chain.invoke(query)" ] }, { "cell_type": "markdown", "id": "0ba3505d-f405-43ba-93c4-7fbd84f6464b", "metadata": {}, "source": [ "### Streaming\n", "\n", "When tools are called in a streaming context, \n", "[message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n", "will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) \n", "objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes \n", "optional string fields for the tool `name`, `args`, and `id`, and includes an optional \n", "integer field `index` that can be used to join chunks together. Fields are optional \n", "because portions of a tool call may be streamed across different chunks (e.g., a chunk \n", "that includes a substring of the arguments may have null values for the tool name and id).\n", "\n", "Because message chunks inherit from their parent message class, an \n", "[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n", "with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. \n", "These fields are parsed best-effort from the message's tool call chunks.\n", "\n", "Note that not all providers currently support streaming for tool calls.\n", "\n", "Example:" ] }, { "cell_type": "code", "execution_count": 17, "id": "4f54a0de-74c7-4f2d-86c5-660aed23840d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[]\n", "[{'name': 'Multiply', 'args': '', 'id': 'call_d39MsxKM5cmeGJOoYKdGBgzc', 'index': 0}]\n", "[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 0}]\n", "[{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}]\n", "[{'name': None, 'args': '\"b\": 1', 'id': None, 'index': 0}]\n", "[{'name': None, 'args': '2}', 'id': None, 'index': 0}]\n", "[{'name': 'Add', 'args': '', 'id': 'call_QJpdxD9AehKbdXzMHxgDMMhs', 'index': 1}]\n", "[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 1}]\n", "[{'name': None, 'args': ': 11,', 'id': None, 'index': 1}]\n", "[{'name': None, 'args': ' \"b\": ', 'id': None, 'index': 1}]\n", "[{'name': None, 'args': '49}', 'id': None, 'index': 1}]\n", "[]\n" ] } ], "source": [ "async for chunk in llm_with_tools.astream(query):\n", " print(chunk.tool_call_chunks)" ] }, { "cell_type": "markdown", "id": "55046320-3466-4ec1-a1f8-336234ba9019", "metadata": {}, "source": [ "Note that adding message chunks will merge their corresponding tool call chunks. 
This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n", "\n", "For example, below we accumulate tool call chunks:" ] }, { "cell_type": "code", "execution_count": 18, "id": "0a944af0-eedd-43c8-8ff3-f4301f129d9b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[]\n", "[{'name': 'Multiply', 'args': '', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n", "[{'name': 'Multiply', 'args': '{\"a\"', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, ', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 1', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\"', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11,', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": ', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n", "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n" ] } ], "source": [ "first = True\n", "async for chunk in llm_with_tools.astream(query):\n", " if first:\n", " gathered = chunk\n", " first = False\n", " else:\n", " gathered = gathered + chunk\n", "\n", " print(gathered.tool_call_chunks)" ] }, { "cell_type": "code", "execution_count": 19, "id": "db4e3e3a-3553-44dc-bd31-149c0981a06a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "<class 'str'>\n" ] } ], "source": [ "print(type(gathered.tool_call_chunks[0][\"args\"]))" ] }, { "cell_type": "markdown", "id": "95e92826-6e55-4684-9498-556f357f73ac", "metadata": {}, "source": [ "And below we accumulate tool calls to demonstrate partial parsing:" ] }, { "cell_type": "code", "execution_count": 20, "id": "e9402bde-d4b5-4564-a99e-f88c9b46b28a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[]\n", "[]\n", "[{'name': 'Multiply', 'args': {}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n", "[{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 
'Add', 'args': {}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n", "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n" ] } ], "source": [ "first = True\n", "async for chunk in llm_with_tools.astream(query):\n", " if first:\n", " gathered = chunk\n", " first = False\n", " else:\n", " gathered = gathered + chunk\n", "\n", " print(gathered.tool_calls)" ] }, { "cell_type": "code", "execution_count": 21, "id": "8c2f21cc-0c6d-416a-871f-e854621c96e2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "<class 'dict'>\n" ] } ], "source": [ "print(type(gathered.tool_calls[0][\"args\"]))" ] }, { "cell_type": "markdown", "id": "97a0c977-0c3c-4011-b49b-db98c609d0ce", "metadata": {}, "source": [ "## Passing tool outputs to model\n", "\n", "If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s." ] }, { "cell_type": "code", "execution_count": 117, "id": "48049192-be28-42ab-9a44-d897924e67cd", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n", " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_K5DsWEmgt6D08EI9AFu9NaL1', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_qywVrsplg0ZMv7LHYYMjyG81', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-1a0b8cdd-9221-4d94-b2ed-5701f67ce9fe-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_K5DsWEmgt6D08EI9AFu9NaL1'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_qywVrsplg0ZMv7LHYYMjyG81'}]),\n", " ToolMessage(content='36', tool_call_id='call_K5DsWEmgt6D08EI9AFu9NaL1'),\n", " ToolMessage(content='60', tool_call_id='call_qywVrsplg0ZMv7LHYYMjyG81')]" ] }, "execution_count": 117, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import HumanMessage, ToolMessage\n", "\n", "messages = [HumanMessage(query)]\n", "ai_msg = llm_with_tools.invoke(messages)\n", "messages.append(ai_msg)\n", "for tool_call in ai_msg.tool_calls:\n", " selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n", " tool_output = selected_tool.invoke(tool_call[\"args\"])\n", " messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n", "messages" ] }, { "cell_type": "code", "execution_count": 118, "id": "611e0f36-d736-48d1-bca1-1cec51d223f3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': 
{'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-a6c8093c-b16a-4c92-8308-7c9ac998118c-0')" ] }, "execution_count": 118, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm_with_tools.invoke(messages)" ] }, { "cell_type": "markdown", "id": "a5937498-d6fe-400a-b192-ef35c314168e", "metadata": {}, "source": [ "## Few-shot prompting\n", "\n", "For more complex tool use it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt.\n", "\n", "For example, even with some special instructions our model can get tripped up by order of operations:" ] }, { "cell_type": "code", "execution_count": 112, "id": "5ef2e7c3-0925-49da-ab8f-e42c4fa40f29", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'name': 'Multiply',\n", " 'args': {'a': 119, 'b': 8},\n", " 'id': 'call_Dl3FXRVkQCFW4sUNYOe4rFr7'},\n", " {'name': 'Add',\n", " 'args': {'a': 952, 'b': -20},\n", " 'id': 'call_n03l4hmka7VZTCiP387Wud2C'}]" ] }, "execution_count": 112, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm_with_tools.invoke(\n", " \"Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations\"\n", ").tool_calls" ] }, { "cell_type": "markdown", "id": "a5249069-b5f8-40ac-ae74-30d67c4e9168", "metadata": {}, "source": [ "The model shouldn't be trying to add anything yet, since it technically can't know the results of 119 * 8 yet.\n", "\n", "By adding a prompt with some examples we can correct this behavior:" ] }, { "cell_type": "code", "execution_count": 107, "id": "7b2e8b19-270f-4e1a-8be7-7aad704c1cf4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'name': 'Multiply',\n", " 'args': {'a': 119, 'b': 8},\n", " 'id': 'call_MoSgwzIhPxhclfygkYaKIsGZ'}]" ] }, "execution_count": 107, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.messages import AIMessage\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "examples = [\n", " HumanMessage(\n", " \"What's the product of 317253 and 128472 plus four\", name=\"example_user\"\n", " ),\n", " AIMessage(\n", " \"\",\n", " name=\"example_assistant\",\n", " tool_calls=[\n", " {\"name\": \"Multiply\", \"args\": {\"x\": 317253, \"y\": 128472}, \"id\": \"1\"}\n", " ],\n", " ),\n", " ToolMessage(\"16505054784\", tool_call_id=\"1\"),\n", " AIMessage(\n", " \"\",\n", " name=\"example_assistant\",\n", " tool_calls=[{\"name\": \"Add\", \"args\": {\"x\": 16505054784, \"y\": 4}, \"id\": \"2\"}],\n", " ),\n", " ToolMessage(\"16505054788\", tool_call_id=\"2\"),\n", " AIMessage(\n", " \"The product of 317253 and 128472 plus four is 16505054788\",\n", " name=\"example_assistant\",\n", " ),\n", "]\n", "\n", "system = \"\"\"You are bad at math but are an expert at using a calculator. 
\n", "\n", "Use past tool usage as an example of how to correctly use the tools.\"\"\"\n", "few_shot_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", system),\n", " *examples,\n", " (\"human\", \"{query}\"),\n", " ]\n", ")\n", "\n", "chain = {\"query\": RunnablePassthrough()} | few_shot_prompt | llm_with_tools\n", "chain.invoke(\"Whats 119 times 8 minus 20\").tool_calls" ] }, { "cell_type": "markdown", "id": "19160e3e-3eb5-4e9a-ae56-74a2dce0af32", "metadata": {}, "source": [ "Seems like we get the correct output this time.\n", "\n", "Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like." ] }, { "cell_type": "markdown", "id": "020cfd3b-0838-49d0-96bb-7cd919921833", "metadata": {}, "source": [ "## Next steps\n", "\n", "- **Output parsing**: See [OpenAI Tools output\n", " parsers](/docs/how_to/output_parser_structured)\n", " to learn about extracting the function calling API responses into\n", " various formats.\n", "- **Structured output chains**: [Some models have constructors](/docs/how_to/structured_output) that\n", " handle creating a structured output chain for you.\n", "- **Tool use**: See how to construct chains and agents that\n", " call the invoked tools in [these\n", " guides](/docs/how_to#tools)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/functions.ipynb
{ "cells": [ { "cell_type": "raw", "id": "ce0e08fd", "metadata": {}, "source": [ "---\n", "sidebar_position: 3\n", "keywords: [RunnableLambda, LCEL]\n", "---" ] }, { "cell_type": "markdown", "id": "fbc4bf6e", "metadata": {}, "source": [ "# How to run custom functions\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "\n", ":::\n", "\n", "You can use arbitrary functions as [Runnables](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable). This is useful for formatting or when you need functionality not provided by other LangChain components, and custom functions used as Runnables are called [`RunnableLambdas`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html).\n", "\n", "Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single dict input and unpacks it into multiple argument.\n", "\n", "This guide will cover:\n", "\n", "- How to explicitly create a runnable from a custom function using the `RunnableLambda` constructor and the convenience `@chain` decorator\n", "- Coercion of custom functions into runnables when used in chains\n", "- How to accept and use run metadata in your custom function\n", "- How to stream with custom functions by having them return generators\n", "\n", "## Using the constructor\n", "\n", "Below, we explicitly wrap our custom logic using the `RunnableLambda` constructor:" ] }, { "cell_type": "code", "execution_count": null, "id": "5c34d2af", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain_openai\n", "\n", "import os\n", "from getpass import getpass\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass()" ] }, { "cell_type": "code", "execution_count": 2, "id": "6bb221b3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content='3 + 9 equals 12.', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 14, 'total_tokens': 22}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-73728de3-e483-49e3-ad54-51bd9570e71a-0')" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from operator import itemgetter\n", "\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnableLambda\n", "from langchain_openai import ChatOpenAI\n", "\n", "\n", "def length_function(text):\n", " return len(text)\n", "\n", "\n", "def _multiple_length_function(text1, text2):\n", " return len(text1) * len(text2)\n", "\n", "\n", "def multiple_length_function(_dict):\n", " return _multiple_length_function(_dict[\"text1\"], _dict[\"text2\"])\n", "\n", "\n", "model = ChatOpenAI()\n", "\n", "prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n", "\n", "chain1 = prompt | model\n", "\n", "chain = (\n", " {\n", " \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),\n", " \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")}\n", " | RunnableLambda(multiple_length_function),\n", " }\n", " | prompt\n", " | model\n", ")\n", "\n", "chain.invoke({\"foo\": \"bar\", \"bar\": \"gah\"})" ] }, { 
"cell_type": "markdown", "id": "b7926002", "metadata": {}, "source": [ "## The convenience `@chain` decorator\n", "\n", "You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionaly equivalent to wrapping the function in a `RunnableLambda` constructor as shown above. Here's an example:" ] }, { "cell_type": "code", "execution_count": 3, "id": "3142a516", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'The subject of the joke is the bear and his girlfriend.'" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.runnables import chain\n", "\n", "prompt1 = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n", "prompt2 = ChatPromptTemplate.from_template(\"What is the subject of this joke: {joke}\")\n", "\n", "\n", "@chain\n", "def custom_chain(text):\n", " prompt_val1 = prompt1.invoke({\"topic\": text})\n", " output1 = ChatOpenAI().invoke(prompt_val1)\n", " parsed_output1 = StrOutputParser().invoke(output1)\n", " chain2 = prompt2 | ChatOpenAI() | StrOutputParser()\n", " return chain2.invoke({\"joke\": parsed_output1})\n", "\n", "\n", "custom_chain.invoke(\"bears\")" ] }, { "cell_type": "markdown", "id": "4728ddd9-914d-42ce-ae9b-72c9ce8ec940", "metadata": {}, "source": [ "Above, the `@chain` decorator is used to convert `custom_chain` into a runnable, which we invoke with the `.invoke()` method.\n", "\n", "If you are using a tracing with [LangSmith](https://docs.smith.langchain.com/), you should see a `custom_chain` trace in there, with the calls to OpenAI nested underneath.\n", "\n", "## Automatic coercion in chains\n", "\n", "When using custom functions in chains with the pipe operator (`|`), you can omit the `RunnableLambda` or `@chain` constructor and rely on coercion. Here's a simple example with a function that takes the output from the model and returns the first five letters of it:" ] }, { "cell_type": "code", "execution_count": 4, "id": "5ab39a87", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Once '" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "prompt = ChatPromptTemplate.from_template(\"tell me a story about {topic}\")\n", "\n", "model = ChatOpenAI()\n", "\n", "chain_with_coerced_function = prompt | model | (lambda x: x.content[:5])\n", "\n", "chain_with_coerced_function.invoke({\"topic\": \"bears\"})" ] }, { "cell_type": "markdown", "id": "c9a481d1", "metadata": {}, "source": [ "Note that we didn't need to wrap the custom function `(lambda x: x.content[:5])` in a `RunnableLambda` constructor because the `model` on the left of the pipe operator is already a Runnable. The custom function is **coerced** into a runnable. See [this section](/docs/how_to/sequence/#coercion) for more information.\n", "\n", "## Passing run metadata\n", "\n", "Runnable lambdas can optionally accept a [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig) parameter, which they can use to pass callbacks, tags, and other configuration information to nested runs." 
] }, { "cell_type": "code", "execution_count": 5, "id": "ff0daf0c-49dd-4d21-9772-e5fa133c5f36", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'foo': 'bar'}\n", "Tokens Used: 62\n", "\tPrompt Tokens: 56\n", "\tCompletion Tokens: 6\n", "Successful Requests: 1\n", "Total Cost (USD): $9.6e-05\n" ] } ], "source": [ "import json\n", "\n", "from langchain_core.runnables import RunnableConfig\n", "\n", "\n", "def parse_or_fix(text: str, config: RunnableConfig):\n", " fixing_chain = (\n", " ChatPromptTemplate.from_template(\n", " \"Fix the following text:\\n\\n```text\\n{input}\\n```\\nError: {error}\"\n", " \" Don't narrate, just respond with the fixed data.\"\n", " )\n", " | model\n", " | StrOutputParser()\n", " )\n", " for _ in range(3):\n", " try:\n", " return json.loads(text)\n", " except Exception as e:\n", " text = fixing_chain.invoke({\"input\": text, \"error\": e}, config)\n", " return \"Failed to parse\"\n", "\n", "\n", "from langchain_community.callbacks import get_openai_callback\n", "\n", "with get_openai_callback() as cb:\n", " output = RunnableLambda(parse_or_fix).invoke(\n", " \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n", " )\n", " print(output)\n", " print(cb)" ] }, { "cell_type": "code", "execution_count": 6, "id": "1a5e709e-9d75-48c7-bb9c-503251990505", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'foo': 'bar'}\n", "Tokens Used: 62\n", "\tPrompt Tokens: 56\n", "\tCompletion Tokens: 6\n", "Successful Requests: 1\n", "Total Cost (USD): $9.6e-05\n" ] } ], "source": [ "from langchain_community.callbacks import get_openai_callback\n", "\n", "with get_openai_callback() as cb:\n", " output = RunnableLambda(parse_or_fix).invoke(\n", " \"{foo: bar}\", {\"tags\": [\"my-tag\"], \"callbacks\": [cb]}\n", " )\n", " print(output)\n", " print(cb)" ] }, { "cell_type": "markdown", "id": "922b48bd", "metadata": {}, "source": [ "## Streaming\n", "\n", ":::{.callout-note}\n", "[RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html) is best suited for code that does not need to support streaming. If you need to support streaming (i.e., be able to operate on chunks of inputs and yield chunks of outputs), use [RunnableGenerator](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableGenerator.html) instead as in the example below.\n", ":::\n", "\n", "You can use generator functions (ie. functions that use the `yield` keyword, and behave like iterators) in a chain.\n", "\n", "The signature of these generators should be `Iterator[Input] -> Iterator[Output]`. Or for async generators: `AsyncIterator[Input] -> AsyncIterator[Output]`.\n", "\n", "These are useful for:\n", "- implementing a custom output parser\n", "- modifying the output of a previous step, while preserving streaming capabilities\n", "\n", "Here's an example of a custom output parser for comma-separated lists. First, we create a chain that generates such a list as text:" ] }, { "cell_type": "code", "execution_count": 7, "id": "29f55c38", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "lion, tiger, wolf, gorilla, panda" ] } ], "source": [ "from typing import Iterator, List\n", "\n", "prompt = ChatPromptTemplate.from_template(\n", " \"Write a comma-separated list of 5 animals similar to: {animal}. 
Do not include numbers\"\n", ")\n", "\n", "str_chain = prompt | model | StrOutputParser()\n", "\n", "for chunk in str_chain.stream({\"animal\": \"bear\"}):\n", " print(chunk, end=\"\", flush=True)" ] }, { "cell_type": "markdown", "id": "46345323", "metadata": {}, "source": [ "Next, we define a custom function that will aggregate the currently streamed output and yield it when the model generates the next comma in the list:" ] }, { "cell_type": "code", "execution_count": 8, "id": "f08b8a5b", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['lion']\n", "['tiger']\n", "['wolf']\n", "['gorilla']\n", "['raccoon']\n" ] } ], "source": [ "# This is a custom parser that splits an iterator of llm tokens\n", "# into a list of strings separated by commas\n", "def split_into_list(input: Iterator[str]) -> Iterator[List[str]]:\n", " # hold partial input until we get a comma\n", " buffer = \"\"\n", " for chunk in input:\n", " # add current chunk to buffer\n", " buffer += chunk\n", " # while there are commas in the buffer\n", " while \",\" in buffer:\n", " # split buffer on comma\n", " comma_index = buffer.index(\",\")\n", " # yield everything before the comma\n", " yield [buffer[:comma_index].strip()]\n", " # save the rest for the next iteration\n", " buffer = buffer[comma_index + 1 :]\n", " # yield the last chunk\n", " yield [buffer.strip()]\n", "\n", "\n", "list_chain = str_chain | split_into_list\n", "\n", "for chunk in list_chain.stream({\"animal\": \"bear\"}):\n", " print(chunk, flush=True)" ] }, { "cell_type": "markdown", "id": "0a5adb69", "metadata": {}, "source": [ "Invoking it gives a full array of values:" ] }, { "cell_type": "code", "execution_count": 9, "id": "9ea4ddc6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['lion', 'tiger', 'wolf', 'gorilla', 'raccoon']" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list_chain.invoke({\"animal\": \"bear\"})" ] }, { "cell_type": "markdown", "id": "96e320ed", "metadata": {}, "source": [ "## Async version\n", "\n", "If you are working in an `async` environment, here is an `async` version of the above example:" ] }, { "cell_type": "code", "execution_count": 10, "id": "569dbbef", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['lion']\n", "['tiger']\n", "['wolf']\n", "['gorilla']\n", "['panda']\n" ] } ], "source": [ "from typing import AsyncIterator\n", "\n", "\n", "async def asplit_into_list(\n", " input: AsyncIterator[str],\n", ") -> AsyncIterator[List[str]]: # async def\n", " buffer = \"\"\n", " async for (\n", " chunk\n", " ) in input: # `input` is a `async_generator` object, so use `async for`\n", " buffer += chunk\n", " while \",\" in buffer:\n", " comma_index = buffer.index(\",\")\n", " yield [buffer[:comma_index].strip()]\n", " buffer = buffer[comma_index + 1 :]\n", " yield [buffer.strip()]\n", "\n", "\n", "list_chain = str_chain | asplit_into_list\n", "\n", "async for chunk in list_chain.astream({\"animal\": \"bear\"}):\n", " print(chunk, flush=True)" ] }, { "cell_type": "code", "execution_count": 11, "id": "3a650482", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['lion', 'tiger', 'wolf', 'gorilla', 'panda']" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "await list_chain.ainvoke({\"animal\": \"bear\"})" ] }, { "cell_type": "markdown", "id": "3306ac3b", "metadata": {}, "source": [ "## Next steps\n", "\n", "Now you've learned a few different ways to use 
custom logic within your chains, and how to implement streaming.\n", "\n", "To learn more, see the other how-to guides on runnables in this section." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
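To recap the coercion and streaming behaviour without needing a model or API key, here is a small self-contained sketch: a plain function wrapped in `RunnableLambda` is piped into a generator function, which is coerced into a streaming-capable runnable the same way `split_into_list` was above. The function names are illustrative.

```python
from typing import Iterator, List

from langchain_core.runnables import RunnableLambda


def to_words(text: str) -> List[str]:
    return text.split()


def shout(chunks: Iterator[List[str]]) -> Iterator[List[str]]:
    # A generator transform: it receives chunks as they arrive from the
    # previous step and yields transformed chunks, preserving streaming.
    for chunk in chunks:
        yield [word.upper() for word in chunk]


# The generator function on the right of the pipe is coerced automatically.
chain = RunnableLambda(to_words) | shout

print(chain.invoke("hello streaming world"))  # ['HELLO', 'STREAMING', 'WORLD']

for chunk in chain.stream("hello streaming world"):
    print(chunk)
```

The same pattern scales up to the LLM chains above: as long as every step either streams or is a cheap transform, the chain keeps streaming end to end.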
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/graph_constructing.ipynb
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 4\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to construct knowledge graphs\n", "\n", "In this guide we'll go over the basic ways of constructing a knowledge graph based on unstructured text. The constructured graph can then be used as knowledge base in a RAG application.\n", "\n", "## ⚠️ Security note ⚠️\n", "\n", "Constructing knowledge graphs requires executing write access to the database. There are inherent risks in doing this. Make sure that you verify and validate data before importing it. For more on general security best practices, [see here](/docs/security).\n", "\n", "\n", "## Architecture\n", "\n", "At a high-level, the steps of constructing a knowledge are from text are:\n", "\n", "1. **Extracting structured information from text**: Model is used to extract structured graph information from text.\n", "2. **Storing into graph database**: Storing the extracted structured graph information into a graph database enables downstream RAG applications\n", "\n", "## Setup\n", "\n", "First, get required packages and set environment variables.\n", "In this example, we will be using Neo4j graph database." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchain-openai langchain-experimental neo4j" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We default to OpenAI models in this guide." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdin", "output_type": "stream", "text": [ " ········\n" ] } ], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n", "\n", "# Uncomment the below to use LangSmith. Not required.\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we need to define Neo4j credentials and connection.\n", "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "from langchain_community.graphs import Neo4jGraph\n", "\n", "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n", "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n", "os.environ[\"NEO4J_PASSWORD\"] = \"password\"\n", "\n", "graph = Neo4jGraph()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## LLM Graph Transformer\n", "\n", "Extracting graph data from text enables the transformation of unstructured information into structured formats, facilitating deeper insights and more efficient navigation through complex relationships and patterns. The `LLMGraphTransformer` converts text documents into structured graph documents by leveraging a LLM to parse and categorize entities and their relationships. 
The selection of the LLM model significantly influences the output by determining the accuracy and nuance of the extracted graph data.\n" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "from langchain_experimental.graph_transformers import LLMGraphTransformer\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(temperature=0, model_name=\"gpt-4-turbo\")\n", "\n", "llm_transformer = LLMGraphTransformer(llm=llm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can pass in example text and examine the results." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n", "Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='MARRIED'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='PROFESSOR')]\n" ] } ], "source": [ "from langchain_core.documents import Document\n", "\n", "text = \"\"\"\n", "Marie Curie, born in 1867, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.\n", "She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields.\n", "Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first-ever married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes.\n", "She was, in 1906, the first woman to become a professor at the University of Paris.\n", "\"\"\"\n", "documents = [Document(page_content=text)]\n", "graph_documents = llm_transformer.convert_to_graph_documents(documents)\n", "print(f\"Nodes:{graph_documents[0].nodes}\")\n", "print(f\"Relationships:{graph_documents[0].relationships}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Examine the following image to better grasp the structure of the generated knowledge graph. \n", "\n", "![graph_construction1.png](../../static/img/graph_construction1.png)\n", "\n", "Note that the graph construction process is non-deterministic since we are using LLM. Therefore, you might get slightly different results on each execution.\n", "\n", "Additionally, you have the flexibility to define specific types of nodes and relationships for extraction according to your requirements." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Nodes:[Node(id='Marie Curie', type='Person'), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n", "Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n" ] } ], "source": [ "llm_transformer_filtered = LLMGraphTransformer(\n", " llm=llm,\n", " allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n", " allowed_relationships=[\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n", ")\n", "graph_documents_filtered = llm_transformer_filtered.convert_to_graph_documents(\n", " documents\n", ")\n", "print(f\"Nodes:{graph_documents_filtered[0].nodes}\")\n", "print(f\"Relationships:{graph_documents_filtered[0].relationships}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a better understanding of the generated graph, we can again visualize it.\n", "\n", "![graph_construction2.png](../../static/img/graph_construction2.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `node_properties` parameter enables the extraction of node properties, allowing the creation of a more detailed graph.\n", "When set to `True`, LLM autonomously identifies and extracts relevant node properties.\n", "Conversely, if `node_properties` is defined as a list of strings, the LLM selectively retrieves only the specified properties from the text." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Nodes:[Node(id='Marie Curie', type='Person', properties={'born_year': '1867'}), Node(id='Pierre Curie', type='Person'), Node(id='University Of Paris', type='Organization')]\n", "Relationships:[Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='Pierre Curie', type='Person'), type='SPOUSE'), Relationship(source=Node(id='Marie Curie', type='Person'), target=Node(id='University Of Paris', type='Organization'), type='WORKED_AT')]\n" ] } ], "source": [ "llm_transformer_props = LLMGraphTransformer(\n", " llm=llm,\n", " allowed_nodes=[\"Person\", \"Country\", \"Organization\"],\n", " allowed_relationships=[\"NATIONALITY\", \"LOCATED_IN\", \"WORKED_AT\", \"SPOUSE\"],\n", " node_properties=[\"born_year\"],\n", ")\n", "graph_documents_props = llm_transformer_props.convert_to_graph_documents(documents)\n", "print(f\"Nodes:{graph_documents_props[0].nodes}\")\n", "print(f\"Relationships:{graph_documents_props[0].relationships}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Storing to graph database\n", "\n", "The generated graph documents can be stored to a graph database using the `add_graph_documents` method." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "graph.add_graph_documents(graph_documents_props)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 4 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/graph_mapping.ipynb
{ "cells": [ { "cell_type": "raw", "id": "5e61b0f2-15b9-4241-9ab5-ff0f3f732232", "metadata": {}, "source": [ "---\n", "sidebar_position: 1\n", "---" ] }, { "cell_type": "markdown", "id": "846ef4f4-ee38-4a42-a7d3-1a23826e4830", "metadata": {}, "source": [ "# How to map values to a graph database\n", "\n", "In this guide we'll go over strategies to improve graph database query generation by mapping values from user inputs to database.\n", "When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database.\n", "Therefore, we can introduce a new step in graph database QA system to accurately map values.\n", "\n", "## Setup\n", "\n", "First, get required packages and set environment variables:" ] }, { "cell_type": "code", "execution_count": null, "id": "18294435-182d-48da-bcab-5b8945b6d9cf", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchain-openai neo4j" ] }, { "cell_type": "markdown", "id": "d86dd771-4001-4a34-8680-22e9b50e1e88", "metadata": {}, "source": [ "We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice." ] }, { "cell_type": "code", "execution_count": 2, "id": "9346f8e9-78bf-4667-b3d3-72807a73b718", "metadata": {}, "outputs": [ { "name": "stdin", "output_type": "stream", "text": [ " ········\n" ] } ], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n", "\n", "# Uncomment the below to use LangSmith. Not required.\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "markdown", "id": "271c8a23-e51c-4ead-a76e-cf21107db47e", "metadata": {}, "source": [ "Next, we need to define Neo4j credentials.\n", "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database." ] }, { "cell_type": "code", "execution_count": 3, "id": "a2a3bb65-05c7-4daf-bac2-b25ae7fe2751", "metadata": {}, "outputs": [], "source": [ "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n", "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n", "os.environ[\"NEO4J_PASSWORD\"] = \"password\"" ] }, { "cell_type": "markdown", "id": "50fa4510-29b7-49b6-8496-5e86f694e81f", "metadata": {}, "source": [ "The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors." 
] }, { "cell_type": "code", "execution_count": 4, "id": "4ee9ef7a-eef9-4289-b9fd-8fbc31041688", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.graphs import Neo4jGraph\n", "\n", "graph = Neo4jGraph()\n", "\n", "# Import movie information\n", "\n", "movies_query = \"\"\"\n", "LOAD CSV WITH HEADERS FROM \n", "'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n", "AS row\n", "MERGE (m:Movie {id:row.movieId})\n", "SET m.released = date(row.released),\n", " m.title = row.title,\n", " m.imdbRating = toFloat(row.imdbRating)\n", "FOREACH (director in split(row.director, '|') | \n", " MERGE (p:Person {name:trim(director)})\n", " MERGE (p)-[:DIRECTED]->(m))\n", "FOREACH (actor in split(row.actors, '|') | \n", " MERGE (p:Person {name:trim(actor)})\n", " MERGE (p)-[:ACTED_IN]->(m))\n", "FOREACH (genre in split(row.genres, '|') | \n", " MERGE (g:Genre {name:trim(genre)})\n", " MERGE (m)-[:IN_GENRE]->(g))\n", "\"\"\"\n", "\n", "graph.query(movies_query)" ] }, { "cell_type": "markdown", "id": "0cb0ea30-ca55-4f35-aad6-beb57453de66", "metadata": {}, "source": [ "## Detecting entities in the user input\n", "We have to extract the types of entities/values we want to map to a graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database." ] }, { "cell_type": "code", "execution_count": 5, "id": "e1a19424-6046-40c2-81d1-f3b88193a293", "metadata": {}, "outputs": [], "source": [ "from typing import List, Optional\n", "\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.pydantic_v1 import BaseModel, Field\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n", "\n", "\n", "class Entities(BaseModel):\n", " \"\"\"Identifying information about entities.\"\"\"\n", "\n", " names: List[str] = Field(\n", " ...,\n", " description=\"All the person or movies appearing in the text\",\n", " )\n", "\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are extracting person and movies from the text.\",\n", " ),\n", " (\n", " \"human\",\n", " \"Use the given format to extract information from the following \"\n", " \"input: {question}\",\n", " ),\n", " ]\n", ")\n", "\n", "\n", "entity_chain = prompt | llm.with_structured_output(Entities)" ] }, { "cell_type": "markdown", "id": "9c14084c-37a7-4a9c-a026-74e12961c781", "metadata": {}, "source": [ "We can test the entity extraction chain." ] }, { "cell_type": "code", "execution_count": 6, "id": "bbfe0d8f-982e-46e6-88fb-8a4f0d850b07", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Entities(names=['Casino'])" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "entities = entity_chain.invoke({\"question\": \"Who played in Casino movie?\"})\n", "entities" ] }, { "cell_type": "markdown", "id": "a8afbf13-05d0-4383-8050-f88b8c2f6fab", "metadata": {}, "source": [ "We will utilize a simple `CONTAINS` clause to match entities to database. In practice, you might want to use a fuzzy search or a fulltext index to allow for minor misspellings." 
] }, { "cell_type": "code", "execution_count": 7, "id": "6f92929f-74fb-4db2-b7e1-eb1e9d386a67", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Casino maps to Casino Movie in database\\n'" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "match_query = \"\"\"MATCH (p:Person|Movie)\n", "WHERE p.name CONTAINS $value OR p.title CONTAINS $value\n", "RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type\n", "LIMIT 1\n", "\"\"\"\n", "\n", "\n", "def map_to_database(entities: Entities) -> Optional[str]:\n", " result = \"\"\n", " for entity in entities.names:\n", " response = graph.query(match_query, {\"value\": entity})\n", " try:\n", " result += f\"{entity} maps to {response[0]['result']} {response[0]['type']} in database\\n\"\n", " except IndexError:\n", " pass\n", " return result\n", "\n", "\n", "map_to_database(entities)" ] }, { "cell_type": "markdown", "id": "f66c6756-6efb-4b1e-9b5d-87ed914a5212", "metadata": {}, "source": [ "## Custom Cypher generating chain\n", "\n", "We need to define a custom Cypher prompt that takes the entity mapping information along with the schema and the user question to construct a Cypher statement.\n", "We will be using the LangChain expression language to accomplish that." ] }, { "cell_type": "code", "execution_count": 8, "id": "8ef3e21d-f1c2-45e2-9511-4920d1cf6e7e", "metadata": {}, "outputs": [], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.runnables import RunnablePassthrough\n", "\n", "# Generate Cypher statement based on natural language input\n", "cypher_template = \"\"\"Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:\n", "{schema}\n", "Entities in the question map to the following database values:\n", "{entities_list}\n", "Question: {question}\n", "Cypher query:\"\"\"\n", "\n", "cypher_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Given an input question, convert it to a Cypher query. No pre-amble.\",\n", " ),\n", " (\"human\", cypher_template),\n", " ]\n", ")\n", "\n", "cypher_response = (\n", " RunnablePassthrough.assign(names=entity_chain)\n", " | RunnablePassthrough.assign(\n", " entities_list=lambda x: map_to_database(x[\"names\"]),\n", " schema=lambda _: graph.get_schema,\n", " )\n", " | cypher_prompt\n", " | llm.bind(stop=[\"\\nCypherResult:\"])\n", " | StrOutputParser()\n", ")" ] }, { "cell_type": "code", "execution_count": 9, "id": "1f0011e3-9660-4975-af2a-486b1bc3b954", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'MATCH (:Movie {title: \"Casino\"})<-[:ACTED_IN]-(actor)\\nRETURN actor.name'" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "cypher = cypher_response.invoke({\"question\": \"Who played in Casino movie?\"})\n", "cypher" ] }, { "cell_type": "markdown", "id": "38095678-611f-4847-a4de-e51ef7ef727c", "metadata": {}, "source": [ "## Generating answers based on database results\n", "\n", "Now that we have a chain that generates the Cypher statement, we need to execute the Cypher statement against the database and send the database results back to an LLM to generate the final answer.\n", "Again, we will be using LCEL." 
] }, { "cell_type": "code", "execution_count": 10, "id": "d1fa97c0-1c9c-41d3-9ee1-5f1905d17434", "metadata": {}, "outputs": [], "source": [ "from langchain.chains.graph_qa.cypher_utils import CypherQueryCorrector, Schema\n", "\n", "# Cypher validation tool for relationship directions\n", "corrector_schema = [\n", " Schema(el[\"start\"], el[\"type\"], el[\"end\"])\n", " for el in graph.structured_schema.get(\"relationships\")\n", "]\n", "cypher_validation = CypherQueryCorrector(corrector_schema)\n", "\n", "# Generate natural language response based on database results\n", "response_template = \"\"\"Based on the the question, Cypher query, and Cypher response, write a natural language response:\n", "Question: {question}\n", "Cypher query: {query}\n", "Cypher Response: {response}\"\"\"\n", "\n", "response_prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"Given an input question and Cypher response, convert it to a natural\"\n", " \" language answer. No pre-amble.\",\n", " ),\n", " (\"human\", response_template),\n", " ]\n", ")\n", "\n", "chain = (\n", " RunnablePassthrough.assign(query=cypher_response)\n", " | RunnablePassthrough.assign(\n", " response=lambda x: graph.query(cypher_validation(x[\"query\"])),\n", " )\n", " | response_prompt\n", " | llm\n", " | StrOutputParser()\n", ")" ] }, { "cell_type": "code", "execution_count": 11, "id": "918146e5-7918-46d2-a774-53f9547d8fcb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Robert De Niro, James Woods, Joe Pesci, and Sharon Stone played in the movie \"Casino\".'" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke({\"question\": \"Who played in Casino movie?\"})" ] }, { "cell_type": "code", "execution_count": null, "id": "c7ba75cd-8399-4e54-a6f8-8a411f159f56", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.18" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/graph_prompting.ipynb
{ "cells": [ { "cell_type": "raw", "metadata": {}, "source": [ "---\n", "sidebar_position: 2\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# How to best prompt for Graph-RAG\n", "\n", "In this guide we'll go over prompting strategies to improve graph database query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.\n", "\n", "## Setup\n", "\n", "First, get required packages and set environment variables:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchain-openai neo4j" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdin", "output_type": "stream", "text": [ " ········\n" ] } ], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n", "\n", "# Uncomment the below to use LangSmith. Not required.\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we need to define Neo4j credentials.\n", "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n", "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n", "os.environ[\"NEO4J_PASSWORD\"] = \"password\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors." 
] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.graphs import Neo4jGraph\n", "\n", "graph = Neo4jGraph()\n", "\n", "# Import movie information\n", "\n", "movies_query = \"\"\"\n", "LOAD CSV WITH HEADERS FROM \n", "'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n", "AS row\n", "MERGE (m:Movie {id:row.movieId})\n", "SET m.released = date(row.released),\n", " m.title = row.title,\n", " m.imdbRating = toFloat(row.imdbRating)\n", "FOREACH (director in split(row.director, '|') | \n", " MERGE (p:Person {name:trim(director)})\n", " MERGE (p)-[:DIRECTED]->(m))\n", "FOREACH (actor in split(row.actors, '|') | \n", " MERGE (p:Person {name:trim(actor)})\n", " MERGE (p)-[:ACTED_IN]->(m))\n", "FOREACH (genre in split(row.genres, '|') | \n", " MERGE (g:Genre {name:trim(genre)})\n", " MERGE (m)-[:IN_GENRE]->(g))\n", "\"\"\"\n", "\n", "graph.query(movies_query)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Filtering graph schema\n", "\n", "At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements.\n", "Let's say we are dealing with the following graph schema:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Node properties are the following:\n", "Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING},Genre {name: STRING}\n", "Relationship properties are the following:\n", "\n", "The relationships are the following:\n", "(:Movie)-[:IN_GENRE]->(:Genre),(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n" ] } ], "source": [ "graph.refresh_schema()\n", "print(graph.schema)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's say we want to exclude the _Genre_ node from the schema representation we pass to an LLM.\n", "We can achieve that using the `exclude` parameter of the GraphCypherQAChain chain." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "from langchain.chains import GraphCypherQAChain\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n", "chain = GraphCypherQAChain.from_llm(\n", " graph=graph, llm=llm, exclude_types=[\"Genre\"], verbose=True\n", ")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Node properties are the following:\n", "Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING}\n", "Relationship properties are the following:\n", "\n", "The relationships are the following:\n", "(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n" ] } ], "source": [ "print(chain.graph_schema)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Few-shot examples\n", "\n", "Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries.\n", "\n", "Let's say we have the following examples:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "examples = [\n", " {\n", " \"question\": \"How many artists are there?\",\n", " \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\",\n", " },\n", " {\n", " \"question\": \"Which actors played in the movie Casino?\",\n", " \"query\": \"MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name\",\n", " },\n", " {\n", " \"question\": \"How many movies has Tom Hanks acted in?\",\n", " \"query\": \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n", " },\n", " {\n", " \"question\": \"List all the genres of the movie Schindler's List\",\n", " \"query\": \"MATCH (m:Movie {{title: 'Schindler\\\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name\",\n", " },\n", " {\n", " \"question\": \"Which actors have worked in movies from both the comedy and action genres?\",\n", " \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n", " },\n", " {\n", " \"question\": \"Which directors have made movies with at least three different actors named 'John'?\",\n", " \"query\": \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n", " },\n", " {\n", " \"question\": \"Identify movies where directors also played a role in the film.\",\n", " \"query\": \"MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name\",\n", " },\n", " {\n", " \"question\": \"Find the actor with the highest number of movies in the database.\",\n", " \"query\": \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\",\n", " },\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can create a few-shot prompt with them like so:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n", "\n", "example_prompt = PromptTemplate.from_template(\n", " \"User input: {question}\\nCypher query: {query}\"\n", ")\n", "prompt = 
FewShotPromptTemplate(\n", " examples=examples[:5],\n", " example_prompt=example_prompt,\n", " prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n", " suffix=\"User input: {question}\\nCypher query: \",\n", " input_variables=[\"question\", \"schema\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n", "\n", "Here is the schema information\n", "foo.\n", "\n", "Below are a number of examples of questions and their corresponding Cypher queries.\n", "\n", "User input: How many artists are there?\n", "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n", "\n", "User input: Which actors played in the movie Casino?\n", "Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name\n", "\n", "User input: How many movies has Tom Hanks acted in?\n", "Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n", "\n", "User input: List all the genres of the movie Schindler's List\n", "Cypher query: MATCH (m:Movie {title: 'Schindler\\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.name\n", "\n", "User input: Which actors have worked in movies from both the comedy and action genres?\n", "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n", "\n", "User input: How many artists are there?\n", "Cypher query: \n" ] } ], "source": [ "print(prompt.format(question=\"How many artists are there?\", schema=\"foo\"))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dynamic few-shot examples\n", "\n", "If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.\n", "\n", "We can do just this using an ExampleSelector. In this case we'll use a [SemanticSimilarityExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. 
At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones: " ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "from langchain_community.vectorstores import Neo4jVector\n", "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "example_selector = SemanticSimilarityExampleSelector.from_examples(\n", " examples,\n", " OpenAIEmbeddings(),\n", " Neo4jVector,\n", " k=5,\n", " input_keys=[\"question\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'query': 'MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)',\n", " 'question': 'How many artists are there?'},\n", " {'query': \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n", " 'question': 'How many movies has Tom Hanks acted in?'},\n", " {'query': \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n", " 'question': 'Which actors have worked in movies from both the comedy and action genres?'},\n", " {'query': \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n", " 'question': \"Which directors have made movies with at least three different actors named 'John'?\"},\n", " {'query': 'MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1',\n", " 'question': 'Find the actor with the highest number of movies in the database.'}]" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "example_selector.select_examples({\"question\": \"how many artists are there?\"})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To use it, we can pass the ExampleSelector directly in to our FewShotPromptTemplate:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [], "source": [ "prompt = FewShotPromptTemplate(\n", " example_selector=example_selector,\n", " example_prompt=example_prompt,\n", " prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n", " suffix=\"User input: {question}\\nCypher query: \",\n", " input_variables=[\"question\", \"schema\"],\n", ")" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "You are a Neo4j expert. 
Given an input question, create a syntactically correct Cypher query to run.\n", "\n", "Here is the schema information\n", "foo.\n", "\n", "Below are a number of examples of questions and their corresponding Cypher queries.\n", "\n", "User input: How many artists are there?\n", "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n", "\n", "User input: How many movies has Tom Hanks acted in?\n", "Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n", "\n", "User input: Which actors have worked in movies from both the comedy and action genres?\n", "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n", "\n", "User input: Which directors have made movies with at least three different actors named 'John'?\n", "Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\n", "\n", "User input: Find the actor with the highest number of movies in the database.\n", "Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\n", "\n", "User input: how many artists are there?\n", "Cypher query: \n" ] } ], "source": [ "print(prompt.format(question=\"how many artists are there?\", schema=\"foo\"))" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [], "source": [ "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n", "chain = GraphCypherQAChain.from_llm(\n", " graph=graph, llm=llm, cypher_prompt=prompt, verbose=True\n", ")" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n", "Generated Cypher:\n", "\u001b[32;1m\u001b[1;3mMATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\u001b[0m\n", "Full Context:\n", "\u001b[32;1m\u001b[1;3m[{'count(DISTINCT a)': 967}]\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'query': 'How many actors are in the graph?',\n", " 'result': 'There are 967 actors in the graph.'}" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke(\"How many actors are in the graph?\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 4 }
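A minimal sketch of an alternative wiring, in case you want to inspect the generated Cypher before anything is executed: instead of handing the few-shot prompt to `GraphCypherQAChain`, you can compose it into a small LCEL chain yourself. This assumes the `graph`, `prompt` (the dynamic few-shot template above), and `llm` objects from this guide, and it fills the `{schema}` variable from the schema string kept on the connected graph; treat it as an illustration rather than part of the chain above.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

# Assumes `graph`, `prompt`, and `llm` are already defined as in this guide.
# The schema string stored on the Neo4jGraph object fills the {schema} prompt variable.
text_to_cypher = (
    {"question": RunnablePassthrough(), "schema": lambda _: graph.schema}
    | prompt
    | llm
    | StrOutputParser()
)

generated_cypher = text_to_cypher.invoke("How many artists are there?")
print(generated_cypher)  # review the query first
# graph.query(generated_cypher)  # then run it manually once you are happy with it
```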
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/graph_semantic.ipynb
{ "cells": [ { "cell_type": "raw", "id": "19cc5b11-3822-454b-afb3-7bebd7f17b5c", "metadata": {}, "source": [ "---\n", "sidebar_position: 1\n", "---" ] }, { "cell_type": "markdown", "id": "2e17a273-bcfc-433f-8d42-2ba9533feeb8", "metadata": {}, "source": [ "# How to add a semantic layer over graph database\n", "\n", "You can use database queries to retrieve information from a graph database like Neo4j.\n", "One option is to use LLMs to generate Cypher statements.\n", "While that option provides excellent flexibility, the solution could be brittle and not consistently generating precise Cypher statements.\n", "Instead of generating Cypher statements, we can implement Cypher templates as tools in a semantic layer that an LLM agent can interact with.\n", "\n", "![graph_semantic.png](../../static/img/graph_semantic.png)\n", "\n", "## Setup\n", "\n", "First, get required packages and set environment variables:" ] }, { "cell_type": "code", "execution_count": 1, "id": "ffdd48f6-bd05-4e5c-b846-d41183398a55", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install --upgrade --quiet langchain langchain-community langchain-openai neo4j" ] }, { "cell_type": "markdown", "id": "4575b174-01e6-4061-aebf-f81e718de777", "metadata": {}, "source": [ "We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice." ] }, { "cell_type": "code", "execution_count": 2, "id": "eb11c4a8-c00c-4c2d-9309-74a6acfff91c", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " ········\n" ] } ], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n", "\n", "# Uncomment the below to use LangSmith. Not required.\n", "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n", "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "markdown", "id": "76bb62ba-0060-41a2-a7b9-1f9c1faf571a", "metadata": {}, "source": [ "Next, we need to define Neo4j credentials.\n", "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database." ] }, { "cell_type": "code", "execution_count": 3, "id": "ef59a3af-31a8-4ad8-8eb9-132aca66956e", "metadata": {}, "outputs": [], "source": [ "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n", "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n", "os.environ[\"NEO4J_PASSWORD\"] = \"password\"" ] }, { "cell_type": "markdown", "id": "1e8fbc2c-b8e8-4c53-8fce-243cf99d3c1c", "metadata": {}, "source": [ "The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors." 
] }, { "cell_type": "code", "execution_count": 4, "id": "c84b1449-6fcd-4140-b591-cb45e8dce207", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.graphs import Neo4jGraph\n", "\n", "graph = Neo4jGraph()\n", "\n", "# Import movie information\n", "\n", "movies_query = \"\"\"\n", "LOAD CSV WITH HEADERS FROM \n", "'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n", "AS row\n", "MERGE (m:Movie {id:row.movieId})\n", "SET m.released = date(row.released),\n", " m.title = row.title,\n", " m.imdbRating = toFloat(row.imdbRating)\n", "FOREACH (director in split(row.director, '|') | \n", " MERGE (p:Person {name:trim(director)})\n", " MERGE (p)-[:DIRECTED]->(m))\n", "FOREACH (actor in split(row.actors, '|') | \n", " MERGE (p:Person {name:trim(actor)})\n", " MERGE (p)-[:ACTED_IN]->(m))\n", "FOREACH (genre in split(row.genres, '|') | \n", " MERGE (g:Genre {name:trim(genre)})\n", " MERGE (m)-[:IN_GENRE]->(g))\n", "\"\"\"\n", "\n", "graph.query(movies_query)" ] }, { "cell_type": "markdown", "id": "403b9acd-aa0d-4157-b9de-6ec426835c43", "metadata": {}, "source": [ "## Custom tools with Cypher templates\n", "\n", "A semantic layer consists of various tools exposed to an LLM that it can use to interact with a knowledge graph.\n", "They can be of various complexity. You can think of each tool in a semantic layer as a function.\n", "\n", "The function we will implement is to retrieve information about movies or their cast." ] }, { "cell_type": "code", "execution_count": 5, "id": "d1dc1c8c-f343-4024-924b-a8a86cf5f1af", "metadata": {}, "outputs": [], "source": [ "from typing import Optional, Type\n", "\n", "# Import things that are needed generically\n", "from langchain.pydantic_v1 import BaseModel, Field\n", "from langchain_core.callbacks import (\n", " AsyncCallbackManagerForToolRun,\n", " CallbackManagerForToolRun,\n", ")\n", "from langchain_core.tools import BaseTool\n", "\n", "description_query = \"\"\"\n", "MATCH (m:Movie|Person)\n", "WHERE m.title CONTAINS $candidate OR m.name CONTAINS $candidate\n", "MATCH (m)-[r:ACTED_IN|HAS_GENRE]-(t)\n", "WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names\n", "WITH m, type+\": \"+reduce(s=\"\", n IN names | s + n + \", \") as types\n", "WITH m, collect(types) as contexts\n", "WITH m, \"type:\" + labels(m)[0] + \"\\ntitle: \"+ coalesce(m.title, m.name) \n", " + \"\\nyear: \"+coalesce(m.released,\"\") +\"\\n\" +\n", " reduce(s=\"\", c in contexts | s + substring(c, 0, size(c)-2) +\"\\n\") as context\n", "RETURN context LIMIT 1\n", "\"\"\"\n", "\n", "\n", "def get_information(entity: str) -> str:\n", " try:\n", " data = graph.query(description_query, params={\"candidate\": entity})\n", " return data[0][\"context\"]\n", " except IndexError:\n", " return \"No information was found\"" ] }, { "cell_type": "markdown", "id": "bdecc24b-8065-4755-98cc-9c6d093d4897", "metadata": {}, "source": [ "You can observe that we have defined the Cypher statement used to retrieve information.\n", "Therefore, we can avoid generating Cypher statements and use the LLM agent to only populate the input parameters.\n", "To provide additional information to an LLM agent about when to use the tool and their input parameters, we wrap the function as a tool." 
] }, { "cell_type": "code", "execution_count": 6, "id": "f4cde772-0d05-475d-a2f0-b53e1669bd13", "metadata": {}, "outputs": [], "source": [ "from typing import Optional, Type\n", "\n", "# Import things that are needed generically\n", "from langchain.pydantic_v1 import BaseModel, Field\n", "from langchain_core.callbacks import (\n", " AsyncCallbackManagerForToolRun,\n", " CallbackManagerForToolRun,\n", ")\n", "from langchain_core.tools import BaseTool\n", "\n", "\n", "class InformationInput(BaseModel):\n", " entity: str = Field(description=\"movie or a person mentioned in the question\")\n", "\n", "\n", "class InformationTool(BaseTool):\n", " name = \"Information\"\n", " description = (\n", " \"useful for when you need to answer questions about various actors or movies\"\n", " )\n", " args_schema: Type[BaseModel] = InformationInput\n", "\n", " def _run(\n", " self,\n", " entity: str,\n", " run_manager: Optional[CallbackManagerForToolRun] = None,\n", " ) -> str:\n", " \"\"\"Use the tool.\"\"\"\n", " return get_information(entity)\n", "\n", " async def _arun(\n", " self,\n", " entity: str,\n", " run_manager: Optional[AsyncCallbackManagerForToolRun] = None,\n", " ) -> str:\n", " \"\"\"Use the tool asynchronously.\"\"\"\n", " return get_information(entity)" ] }, { "cell_type": "markdown", "id": "ff4820aa-2b57-4558-901f-6d984b326738", "metadata": {}, "source": [ "## OpenAI Agent\n", "\n", "LangChain expression language makes it very convenient to define an agent to interact with a graph database over the semantic layer." ] }, { "cell_type": "code", "execution_count": 7, "id": "6e959ac2-537d-4358-a43b-e3a47f68e1d6", "metadata": {}, "outputs": [], "source": [ "from typing import List, Tuple\n", "\n", "from langchain.agents import AgentExecutor\n", "from langchain.agents.format_scratchpad import format_to_openai_function_messages\n", "from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser\n", "from langchain_core.messages import AIMessage, HumanMessage\n", "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", "from langchain_core.utils.function_calling import convert_to_openai_function\n", "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n", "tools = [InformationTool()]\n", "\n", "llm_with_tools = llm.bind(functions=[convert_to_openai_function(t) for t in tools])\n", "\n", "prompt = ChatPromptTemplate.from_messages(\n", " [\n", " (\n", " \"system\",\n", " \"You are a helpful assistant that finds information about movies \"\n", " \" and recommends them. If tools require follow up questions, \"\n", " \"make sure to ask the user for clarification. Make sure to include any \"\n", " \"available options that need to be clarified in the follow up questions \"\n", " \"Do only the things the user specifically requested. 
\",\n", " ),\n", " MessagesPlaceholder(variable_name=\"chat_history\"),\n", " (\"user\", \"{input}\"),\n", " MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n", " ]\n", ")\n", "\n", "\n", "def _format_chat_history(chat_history: List[Tuple[str, str]]):\n", " buffer = []\n", " for human, ai in chat_history:\n", " buffer.append(HumanMessage(content=human))\n", " buffer.append(AIMessage(content=ai))\n", " return buffer\n", "\n", "\n", "agent = (\n", " {\n", " \"input\": lambda x: x[\"input\"],\n", " \"chat_history\": lambda x: _format_chat_history(x[\"chat_history\"])\n", " if x.get(\"chat_history\")\n", " else [],\n", " \"agent_scratchpad\": lambda x: format_to_openai_function_messages(\n", " x[\"intermediate_steps\"]\n", " ),\n", " }\n", " | prompt\n", " | llm_with_tools\n", " | OpenAIFunctionsAgentOutputParser()\n", ")\n", "\n", "agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)" ] }, { "cell_type": "code", "execution_count": 8, "id": "b0459833-fe84-4ebc-9823-a3a3ffd929e9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n", "\u001b[32;1m\u001b[1;3m\n", "Invoking: `Information` with `{'entity': 'Casino'}`\n", "\n", "\n", "\u001b[0m\u001b[36;1m\u001b[1;3mtype:Movie\n", "title: Casino\n", "year: 1995-11-22\n", "ACTED_IN: Joe Pesci, Robert De Niro, Sharon Stone, James Woods\n", "\u001b[0m\u001b[32;1m\u001b[1;3mThe movie \"Casino\" starred Joe Pesci, Robert De Niro, Sharon Stone, and James Woods.\u001b[0m\n", "\n", "\u001b[1m> Finished chain.\u001b[0m\n" ] }, { "data": { "text/plain": [ "{'input': 'Who played in Casino?',\n", " 'output': 'The movie \"Casino\" starred Joe Pesci, Robert De Niro, Sharon Stone, and James Woods.'}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent_executor.invoke({\"input\": \"Who played in Casino?\"})" ] }, { "cell_type": "code", "execution_count": null, "id": "c2759973-de8a-4624-8930-c90a21d6caa3", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
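As a hedged sketch of how the semantic layer can grow, here is a second templated tool that recommends highly rated movies for a genre. The Cypher template, the `RecommendationInput` schema, and the limit of five results are illustrative choices rather than part of the original guide; the tool reuses the `graph` connection and follows the same `BaseTool` pattern as `InformationTool` above.

```python
from typing import Optional, Type

from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool

# Fixed Cypher template: the agent only supplies parameters, it never writes Cypher.
recommendation_query = """
MATCH (m:Movie)-[:IN_GENRE]->(g:Genre)
WHERE toLower(g.name) = toLower($genre) AND m.imdbRating IS NOT NULL
RETURN m.title AS movie, m.imdbRating AS rating
ORDER BY rating DESC LIMIT $k
"""


class RecommendationInput(BaseModel):
    genre: str = Field(description="movie genre mentioned in the question")


class RecommendationTool(BaseTool):
    name = "Recommendation"
    description = "useful for when you need to recommend movies of a particular genre"
    args_schema: Type[BaseModel] = RecommendationInput

    def _run(
        self,
        genre: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Run the fixed Cypher template with the genre extracted by the agent."""
        data = graph.query(recommendation_query, params={"genre": genre, "k": 5})
        if not data:
            return "No movies were found for that genre"
        return "\n".join(f"{row['movie']} ({row['rating']})" for row in data)
```

Registering the new tool only requires adding it to the list before binding the functions to the model, e.g. `tools = [InformationTool(), RecommendationTool()]`.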
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/hybrid.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "14d3fd06", "metadata": { "id": "14d3fd06" }, "source": [ "# Hybrid Search\n", "\n", "The standard search in LangChain is done by vector similarity. However, a number of vectorstores implementations (Astra DB, ElasticSearch, Neo4J, AzureSearch, ...) also support more advanced search combining vector similarity search and other search techniques (full-text, BM25, and so on). This is generally referred to as \"Hybrid\" search.\n", "\n", "**Step 1: Make sure the vectorstore you are using supports hybrid search**\n", "\n", "At the moment, there is no unified way to perform hybrid search in LangChain. Each vectorstore may have their own way to do it. This is generally exposed as a keyword argument that is passed in during `similarity_search`. By reading the documentation or source code, figure out whether the vectorstore you are using supports hybrid search, and, if so, how to use it.\n", "\n", "**Step 2: Add that parameter as a configurable field for the chain**\n", "\n", "This will let you easily call the chain and configure any relevant flags at runtime. See [this documentation](/docs/how_to/configure) for more information on configuration.\n", "\n", "**Step 3: Call the chain with that configurable field**\n", "\n", "Now, at runtime you can call this chain with configurable field.\n", "\n", "## Code Example\n", "\n", "Let's see a concrete example of what this looks like in code. We will use the Cassandra/CQL interface of Astra DB for this example.\n", "\n", "Install the following Python package:" ] }, { "cell_type": "code", "execution_count": null, "id": "c2efe35eea197769", "metadata": { "id": "c2efe35eea197769", "outputId": "527275b4-076e-4b22-945c-e41a59188116" }, "outputs": [], "source": [ "!pip install \"cassio>=0.1.7\"" ] }, { "cell_type": "markdown", "id": "b4ef96d44341cd84", "metadata": { "collapsed": false, "id": "b4ef96d44341cd84" }, "source": [ "Get the [connection secrets](https://docs.datastax.com/en/astra/astra-db-vector/get-started/quickstart.html).\n", "\n", "Initialize cassio:" ] }, { "cell_type": "code", "execution_count": null, "id": "cb2cef097277c32e", "metadata": { "id": "cb2cef097277c32e", "outputId": "4c3d05a0-319a-44a0-8ec3-0a9c78453132" }, "outputs": [], "source": [ "import cassio\n", "\n", "cassio.init(\n", " database_id=\"Your database ID\",\n", " token=\"Your application token\",\n", " keyspace=\"Your key space\",\n", ")" ] }, { "cell_type": "markdown", "id": "e1e51444877f45eb", "metadata": { "collapsed": false, "id": "e1e51444877f45eb" }, "source": [ "Create the Cassandra VectorStore with a standard [index analyzer](https://docs.datastax.com/en/astra/astra-db-vector/cql/use-analyzers-with-cql.html). The index analyzer is needed to enable term matching." 
] }, { "cell_type": "code", "execution_count": null, "id": "7345de3c", "metadata": { "id": "7345de3c", "outputId": "d38bcee0-0134-4ac6-8d35-afcce282481b" }, "outputs": [], "source": [ "from cassio.table.cql import STANDARD_ANALYZER\n", "from langchain_community.vectorstores import Cassandra\n", "from langchain_openai import OpenAIEmbeddings\n", "\n", "embeddings = OpenAIEmbeddings()\n", "vectorstore = Cassandra(\n", " embedding=embeddings,\n", " table_name=\"test_hybrid\",\n", " body_index_options=[STANDARD_ANALYZER],\n", " session=None,\n", " keyspace=None,\n", ")\n", "\n", "vectorstore.add_texts(\n", " [\n", " \"In 2023, I visited Paris\",\n", " \"In 2022, I visited New York\",\n", " \"In 2021, I visited New Orleans\",\n", " ]\n", ")" ] }, { "cell_type": "markdown", "id": "73887f23bbab978c", "metadata": { "collapsed": false, "id": "73887f23bbab978c" }, "source": [ "If we do a standard similarity search, we get all the documents:" ] }, { "cell_type": "code", "execution_count": null, "id": "3c2a39fa", "metadata": { "id": "3c2a39fa", "outputId": "5290085b-896c-4c81-9b40-c315331b7009" }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='In 2022, I visited New York'),\n", "Document(page_content='In 2023, I visited Paris'),\n", "Document(page_content='In 2021, I visited New Orleans')]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vectorstore.as_retriever().invoke(\"What city did I visit last?\")" ] }, { "cell_type": "markdown", "id": "78d4c3c79e67d8c3", "metadata": { "collapsed": false, "id": "78d4c3c79e67d8c3" }, "source": [ "The Astra DB vectorstore `body_search` argument can be used to filter the search on the term `new`." ] }, { "cell_type": "code", "execution_count": null, "id": "56393baa", "metadata": { "id": "56393baa", "outputId": "d1c939f3-342f-4df4-94a3-d25429b5a25e" }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='In 2022, I visited New York'),\n", "Document(page_content='In 2021, I visited New Orleans')]" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vectorstore.as_retriever(search_kwargs={\"body_search\": \"new\"}).invoke(\n", " \"What city did I visit last?\"\n", ")" ] }, { "cell_type": "markdown", "id": "88ae97ed", "metadata": { "id": "88ae97ed" }, "source": [ "We can now create the chain that we will use to do question-answering over" ] }, { "cell_type": "code", "execution_count": null, "id": "62707b4f", "metadata": { "id": "62707b4f" }, "outputs": [], "source": [ "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import (\n", " ConfigurableField,\n", " RunnablePassthrough,\n", ")\n", "from langchain_openai import ChatOpenAI" ] }, { "cell_type": "markdown", "id": "b6778ffa", "metadata": { "id": "b6778ffa" }, "source": [ "This is basic question-answering chain set up." ] }, { "cell_type": "code", "execution_count": null, "id": "44a865f6", "metadata": { "id": "44a865f6" }, "outputs": [], "source": [ "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "model = ChatOpenAI()\n", "\n", "retriever = vectorstore.as_retriever()" ] }, { "cell_type": "markdown", "id": "72125166", "metadata": { "id": "72125166" }, "source": [ "Here we mark the retriever as having a configurable field. 
All vectorstore retrievers have `search_kwargs` as a field. This is just a dictionary, with vectorstore specific fields" ] }, { "cell_type": "code", "execution_count": null, "id": "babbadff", "metadata": { "id": "babbadff" }, "outputs": [], "source": [ "configurable_retriever = retriever.configurable_fields(\n", " search_kwargs=ConfigurableField(\n", " id=\"search_kwargs\",\n", " name=\"Search Kwargs\",\n", " description=\"The search kwargs to use\",\n", " )\n", ")" ] }, { "cell_type": "markdown", "id": "2d481b70", "metadata": { "id": "2d481b70" }, "source": [ "We can now create the chain using our configurable retriever" ] }, { "cell_type": "code", "execution_count": null, "id": "210b0446", "metadata": { "id": "210b0446" }, "outputs": [], "source": [ "chain = (\n", " {\"context\": configurable_retriever, \"question\": RunnablePassthrough()}\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")" ] }, { "cell_type": "code", "execution_count": null, "id": "a38037b2", "metadata": { "id": "a38037b2", "outputId": "1ea14996-5965-4a5e-9678-b9c35ce5c6de" }, "outputs": [ { "data": { "text/plain": [ "Paris" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke(\"What city did I visit last?\")" ] }, { "cell_type": "markdown", "id": "7f6458c3", "metadata": { "id": "7f6458c3" }, "source": [ "We can now invoke the chain with configurable options. `search_kwargs` is the id of the configurable field. The value is the search kwargs to use for Astra DB." ] }, { "cell_type": "code", "execution_count": null, "id": "9gYLqBTH8BFz", "metadata": { "id": "9gYLqBTH8BFz", "outputId": "4358a2e6-f306-48f1-dd5c-781ac8a33e89" }, "outputs": [ { "data": { "text/plain": [ "New York" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke(\n", " \"What city did I visit last?\",\n", " config={\"configurable\": {\"search_kwargs\": {\"body_search\": \"new\"}}},\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
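The same three-step pattern applies to any retriever option, not just hybrid-search flags. Below is a hedged, self-contained sketch that makes the number of returned documents configurable per call; the in-memory FAISS store and the `k` values are assumptions made for this example, not something the guide above prescribes (it requires the `faiss-cpu` package).

```python
from langchain_community.vectorstores import FAISS
from langchain_core.runnables import ConfigurableField
from langchain_openai import OpenAIEmbeddings

# Small in-memory store so the sketch runs without an external database.
vectorstore = FAISS.from_texts(
    [
        "In 2023, I visited Paris",
        "In 2022, I visited New York",
        "In 2021, I visited New Orleans",
    ],
    OpenAIEmbeddings(),
)

# Step 2: expose the retriever's search_kwargs as a configurable field.
retriever = vectorstore.as_retriever(search_kwargs={"k": 1}).configurable_fields(
    search_kwargs=ConfigurableField(
        id="search_kwargs",
        name="Search Kwargs",
        description="The search kwargs to use",
    )
)

# Step 3: override the search kwargs at runtime, per invocation.
retriever.invoke("What city did I visit last?")  # uses the default k=1
retriever.invoke(
    "What city did I visit last?",
    config={"configurable": {"search_kwargs": {"k": 3}}},  # returns all three documents
)
```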
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/index.mdx
--- sidebar_position: 0 sidebar_class_name: hidden --- # How-to guides Here you’ll find answers to “How do I….?” types of questions. These guides are *goal-oriented* and *concrete*; they're meant to help you complete a specific task. For conceptual explanations see the [Conceptual guide](/docs/concepts/). For end-to-end walkthroughs see [Tutorials](/docs/tutorials). For comprehensive descriptions of every class and function see the [API Reference](https://api.python.langchain.com/en/latest/). ## Installation - [How to: install LangChain packages](/docs/how_to/installation/) - [How to: use LangChain with different Pydantic versions](/docs/how_to/pydantic_compatibility) ## Key features This highlights functionality that is core to using LangChain. - [How to: return structured data from a model](/docs/how_to/structured_output/) - [How to: use a model to call tools](/docs/how_to/tool_calling) - [How to: stream runnables](/docs/how_to/streaming) - [How to: debug your LLM apps](/docs/how_to/debugging/) ## LangChain Expression Language (LCEL) [LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) protocol. [**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives. - [How to: chain runnables](/docs/how_to/sequence) - [How to: stream runnables](/docs/how_to/streaming) - [How to: invoke runnables in parallel](/docs/how_to/parallel/) - [How to: add default invocation args to runnables](/docs/how_to/binding/) - [How to: turn any function into a runnable](/docs/how_to/functions) - [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough) - [How to: configure runnable behavior at runtime](/docs/how_to/configure) - [How to: add message history (memory) to a chain](/docs/how_to/message_history) - [How to: route between sub-chains](/docs/how_to/routing) - [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/) - [How to: inspect runnables](/docs/how_to/inspect) - [How to: add fallbacks to a runnable](/docs/how_to/fallbacks) ## Components These are the core building blocks you can use when building applications. ### Prompt templates [Prompt Templates](/docs/concepts/#prompt-templates) are responsible for formatting user input into a format that can be passed to a language model. - [How to: use few shot examples](/docs/how_to/few_shot_examples) - [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/) - [How to: partially format prompt templates](/docs/how_to/prompts_partial) - [How to: compose prompts together](/docs/how_to/prompts_composition) ### Example selectors [Example Selectors](/docs/concepts/#example-selectors) are responsible for selecting the correct few shot examples to pass to the prompt. - [How to: use example selectors](/docs/how_to/example_selectors) - [How to: select examples by length](/docs/how_to/example_selectors_length_based) - [How to: select examples by semantic similarity](/docs/how_to/example_selectors_similarity) - [How to: select examples by semantic ngram overlap](/docs/how_to/example_selectors_ngram) - [How to: select examples by maximal marginal relevance](/docs/how_to/example_selectors_mmr) ### Chat models [Chat Models](/docs/concepts/#chat-models) are newer forms of language models that take messages in and output a message. 
- [How to: do function/tool calling](/docs/how_to/tool_calling) - [How to: get models to return structured output](/docs/how_to/structured_output) - [How to: cache model responses](/docs/how_to/chat_model_caching) - [How to: get log probabilities](/docs/how_to/logprobs) - [How to: create a custom chat model class](/docs/how_to/custom_chat_model) - [How to: stream a response back](/docs/how_to/chat_streaming) - [How to: track token usage](/docs/how_to/chat_token_usage_tracking) - [How to: track response metadata across providers](/docs/how_to/response_metadata) - [How to: let your end users choose their model](/docs/how_to/chat_models_universal_init/) - [How to: use chat model to call tools](/docs/how_to/tool_calling) - [How to: stream tool calls](/docs/how_to/tool_streaming) - [How to: few shot prompt tool behavior](/docs/how_to/tools_few_shot) - [How to: bind model-specific formatted tools](/docs/how_to/tools_model_specific) - [How to: force specific tool call](/docs/how_to/tool_choice) - [How to: init any model in one line](/docs/how_to/chat_models_universal_init/) ### Messages [Messages](/docs/concepts/#messages) are the input and output of chat models. They have some `content` and a `role`, which describes the source of the message. - [How to: trim messages](/docs/how_to/trim_messages/) - [How to: filter messages](/docs/how_to/filter_messages/) - [How to: merge consecutive messages of the same type](/docs/how_to/merge_message_runs/) ### LLMs What LangChain calls [LLMs](/docs/concepts/#llms) are older forms of language models that take a string in and output a string. - [How to: cache model responses](/docs/how_to/llm_caching) - [How to: create a custom LLM class](/docs/how_to/custom_llm) - [How to: stream a response back](/docs/how_to/streaming_llm) - [How to: track token usage](/docs/how_to/llm_token_usage_tracking) - [How to: work with local LLMs](/docs/how_to/local_llms) ### Output parsers [Output Parsers](/docs/concepts/#output-parsers) are responsible for taking the output of an LLM and parsing it into a more structured format. - [How to: use output parsers to parse an LLM response into structured format](/docs/how_to/output_parser_structured) - [How to: parse JSON output](/docs/how_to/output_parser_json) - [How to: parse XML output](/docs/how_to/output_parser_xml) - [How to: parse YAML output](/docs/how_to/output_parser_yaml) - [How to: retry when output parsing errors occur](/docs/how_to/output_parser_retry) - [How to: try to fix errors in output parsing](/docs/how_to/output_parser_fixing) - [How to: write a custom output parser class](/docs/how_to/output_parser_custom) ### Document loaders [Document Loaders](/docs/concepts/#document-loaders) are responsible for loading documents from a variety of sources. - [How to: load CSV data](/docs/how_to/document_loader_csv) - [How to: load data from a directory](/docs/how_to/document_loader_directory) - [How to: load HTML data](/docs/how_to/document_loader_html) - [How to: load JSON data](/docs/how_to/document_loader_json) - [How to: load Markdown data](/docs/how_to/document_loader_markdown) - [How to: load Microsoft Office data](/docs/how_to/document_loader_office_file) - [How to: load PDF files](/docs/how_to/document_loader_pdf) - [How to: write a custom document loader](/docs/how_to/document_loader_custom) ### Text splitters [Text Splitters](/docs/concepts/#text-splitters) take a document and split it into chunks that can be used for retrieval.
- [How to: recursively split text](/docs/how_to/recursive_text_splitter) - [How to: split by HTML headers](/docs/how_to/HTML_header_metadata_splitter) - [How to: split by HTML sections](/docs/how_to/HTML_section_aware_splitter) - [How to: split by character](/docs/how_to/character_text_splitter) - [How to: split code](/docs/how_to/code_splitter) - [How to: split Markdown by headers](/docs/how_to/markdown_header_metadata_splitter) - [How to: recursively split JSON](/docs/how_to/recursive_json_splitter) - [How to: split text into semantic chunks](/docs/how_to/semantic-chunker) - [How to: split by tokens](/docs/how_to/split_by_token) ### Embedding models [Embedding Models](/docs/concepts/#embedding-models) take a piece of text and create a numerical representation of it. - [How to: embed text data](/docs/how_to/embed_text) - [How to: cache embedding results](/docs/how_to/caching_embeddings) ### Vector stores [Vector stores](/docs/concepts/#vector-stores) are databases that can efficiently store and retrieve embeddings. - [How to: use a vector store to retrieve data](/docs/how_to/vectorstores) ### Retrievers [Retrievers](/docs/concepts/#retrievers) are responsible for taking a query and returning relevant documents. - [How to: use a vector store to retrieve data](/docs/how_to/vectorstore_retriever) - [How to: generate multiple queries to retrieve data for](/docs/how_to/MultiQueryRetriever) - [How to: use contextual compression to compress the data retrieved](/docs/how_to/contextual_compression) - [How to: write a custom retriever class](/docs/how_to/custom_retriever) - [How to: add similarity scores to retriever results](/docs/how_to/add_scores_retriever) - [How to: combine the results from multiple retrievers](/docs/how_to/ensemble_retriever) - [How to: reorder retrieved results to mitigate the "lost in the middle" effect](/docs/how_to/long_context_reorder) - [How to: generate multiple embeddings per document](/docs/how_to/multi_vector) - [How to: retrieve the whole document for a chunk](/docs/how_to/parent_document_retriever) - [How to: generate metadata filters](/docs/how_to/self_query) - [How to: create a time-weighted retriever](/docs/how_to/time_weighted_vectorstore) - [How to: use hybrid vector and keyword retrieval](/docs/how_to/hybrid) ### Indexing Indexing is the process of keeping your vectorstore in-sync with the underlying data source. - [How to: reindex data to keep your vectorstore in-sync with the underlying data source](/docs/how_to/indexing) ### Tools LangChain [Tools](/docs/concepts/#tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. 
- [How to: create custom tools](/docs/how_to/custom_tools) - [How to: use built-in tools and built-in toolkits](/docs/how_to/tools_builtin) - [How to: use chat model to call tools](/docs/how_to/tool_calling) - [How to: pass tool results back to model](/docs/how_to/tool_results_pass_to_model) - [How to: add ad-hoc tool calling capability to LLMs and chat models](/docs/how_to/tools_prompting) - [How to: pass run time values to tools](/docs/how_to/tool_runtime) - [How to: add a human in the loop to tool usage](/docs/how_to/tools_human) - [How to: handle errors when calling tools](/docs/how_to/tools_error) - [How to: disable parallel tool calling](/docs/how_to/tool_choice) ### Multimodal - [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/) - [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/) ### Agents :::note For in depth how-to guides for agents, please check out [LangGraph](https://github.com/langchain-ai/langgraph) documentation. ::: - [How to: use legacy LangChain Agents (AgentExecutor)](/docs/how_to/agent_executor) - [How to: migrate from legacy LangChain agents to LangGraph](/docs/how_to/migrate_agent) ### Callbacks [Callbacks](/docs/concepts/#callbacks) allow you to hook into the various stages of your LLM application's execution. - [How to: pass in callbacks at runtime](/docs/how_to/callbacks_runtime) - [How to: attach callbacks to a module](/docs/how_to/callbacks_attach) - [How to: pass callbacks into a module constructor](/docs/how_to/callbacks_constructor) - [How to: create custom callback handlers](/docs/how_to/custom_callbacks) - [How to: use callbacks in async environments](/docs/how_to/callbacks_async) ### Custom All of LangChain components can easily be extended to support your own versions. - [How to: create a custom chat model class](/docs/how_to/custom_chat_model) - [How to: create a custom LLM class](/docs/how_to/custom_llm) - [How to: write a custom retriever class](/docs/how_to/custom_retriever) - [How to: write a custom document loader](/docs/how_to/document_loader_custom) - [How to: write a custom output parser class](/docs/how_to/output_parser_custom) - [How to: create custom callback handlers](/docs/how_to/custom_callbacks) - [How to: define a custom tool](/docs/how_to/custom_tools) ### Serialization - [How to: save and load LangChain objects](/docs/how_to/serialization) ## Use cases These guides cover use-case specific details. ### Q&A with RAG Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data. For a high-level tutorial on RAG, check out [this guide](/docs/tutorials/rag/). - [How to: add chat history](/docs/how_to/qa_chat_history_how_to/) - [How to: stream](/docs/how_to/qa_streaming/) - [How to: return sources](/docs/how_to/qa_sources/) - [How to: return citations](/docs/how_to/qa_citations/) - [How to: do per-user retrieval](/docs/how_to/qa_per_user/) ### Extraction Extraction is when you use LLMs to extract structured information from unstructured text. For a high level tutorial on extraction, check out [this guide](/docs/tutorials/extraction/). - [How to: use reference examples](/docs/how_to/extraction_examples/) - [How to: handle long text](/docs/how_to/extraction_long_text/) - [How to: do extraction without using function calling](/docs/how_to/extraction_parse) ### Chatbots Chatbots involve using an LLM to have a conversation. For a high-level tutorial on building chatbots, check out [this guide](/docs/tutorials/chatbot/). 
- [How to: manage memory](/docs/how_to/chatbots_memory) - [How to: do retrieval](/docs/how_to/chatbots_retrieval) - [How to: use tools](/docs/how_to/chatbots_tools) - [How to: manage large chat history](/docs/how_to/trim_messages/) ### Query analysis Query Analysis is the task of using an LLM to generate a query to send to a retriever. For a high-level tutorial on query analysis, check out [this guide](/docs/tutorials/query_analysis/). - [How to: add examples to the prompt](/docs/how_to/query_few_shot) - [How to: handle cases where no queries are generated](/docs/how_to/query_no_queries) - [How to: handle multiple queries](/docs/how_to/query_multiple_queries) - [How to: handle multiple retrievers](/docs/how_to/query_multiple_retrievers) - [How to: construct filters](/docs/how_to/query_constructing_filters) - [How to: deal with high cardinality categorical variables](/docs/how_to/query_high_cardinality) ### Q&A over SQL + CSV You can use LLMs to do question answering over tabular data. For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/). - [How to: use prompting to improve results](/docs/how_to/sql_prompting) - [How to: do query validation](/docs/how_to/sql_query_checking) - [How to: deal with large databases](/docs/how_to/sql_large_db) - [How to: deal with CSV files](/docs/how_to/sql_csv) ### Q&A over graph databases You can use an LLM to do question answering over graph databases. For a high-level tutorial, check out [this guide](/docs/tutorials/graph/). - [How to: map values to a database](/docs/how_to/graph_mapping) - [How to: add a semantic layer over the database](/docs/how_to/graph_semantic) - [How to: improve results with prompting](/docs/how_to/graph_prompting) - [How to: construct knowledge graphs](/docs/how_to/graph_constructing) ## [LangGraph](https://langchain-ai.github.io/langgraph) LangGraph is an extension of LangChain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. LangGraph documentation is currently hosted on a separate site. You can peruse [LangGraph how-to guides here](https://langchain-ai.github.io/langgraph/how-tos/). ## [LangSmith](https://docs.smith.langchain.com/) LangSmith allows you to closely trace, monitor and evaluate your LLM application. It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build. LangSmith documentation is hosted on a separate site. You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/). ### Evaluation <span data-heading-keywords="evaluation,evaluate"></span> Evaluating performance is a vital part of building LLM-powered applications. LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators. To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation).
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/indexing.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "0fe57ac5-31c5-4dbb-b96c-78dead32e1bd", "metadata": {}, "source": [ "# How to use the LangChain indexing API\n", "\n", "Here, we will look at a basic indexing workflow using the LangChain indexing API. \n", "\n", "The indexing API lets you load and keep in sync documents from any source into a vector store. Specifically, it helps:\n", "\n", "* Avoid writing duplicated content into the vector store\n", "* Avoid re-writing unchanged content\n", "* Avoid re-computing embeddings over unchanged content\n", "\n", "All of which should save you time and money, as well as improve your vector search results.\n", "\n", "Crucially, the indexing API will work even with documents that have gone through several \n", "transformation steps (e.g., via text chunking) with respect to the original source documents.\n", "\n", "## How it works\n", "\n", "LangChain indexing makes use of a record manager (`RecordManager`) that keeps track of document writes into the vector store.\n", "\n", "When indexing content, hashes are computed for each document, and the following information is stored in the record manager: \n", "\n", "- the document hash (hash of both page content and metadata)\n", "- write time\n", "- the source id -- each document should include information in its metadata to allow us to determine the ultimate source of this document\n", "\n", "## Deletion modes\n", "\n", "When indexing documents into a vector store, it's possible that some existing documents in the vector store should be deleted. In certain situations you may want to remove any existing documents that are derived from the same sources as the new documents being indexed. In others you may want to delete all existing documents wholesale. The indexing API deletion modes let you pick the behavior you want:\n", "\n", "| Cleanup Mode | De-Duplicates Content | Parallelizable | Cleans Up Deleted Source Docs | Cleans Up Mutations of Source Docs and/or Derived Docs | Clean Up Timing |\n", "|-------------|-----------------------|---------------|----------------------------------|----------------------------------------------------|---------------------|\n", "| None | ✅ | ✅ | ❌ | ❌ | - |\n", "| Incremental | ✅ | ✅ | ❌ | ✅ | Continuously |\n", "| Full | ✅ | ❌ | ✅ | ✅ | At end of indexing |\n", "\n", "\n", "`None` does not do any automatic clean up, allowing the user to manually do clean up of old content. \n", "\n", "`incremental` and `full` offer the following automated clean up:\n", "\n", "* If the content of the source document or derived documents has **changed**, both `incremental` or `full` modes will clean up (delete) previous versions of the content.\n", "* If the source document has been **deleted** (meaning it is not included in the documents currently being indexed), the `full` cleanup mode will delete it from the vector store correctly, but the `incremental` mode will not.\n", "\n", "When content is mutated (e.g., the source PDF file was revised) there will be a period of time during indexing when both the new and old versions may be returned to the user. This happens after the new content was written, but before the old version was deleted.\n", "\n", "* `incremental` indexing minimizes this period of time as it is able to do clean up continuously, as it writes.\n", "* `full` mode does the clean up after all batches have been written.\n", "\n", "## Requirements\n", "\n", "1. 
Do not use with a store that has been pre-populated with content independently of the indexing API, as the record manager will not know that records have been inserted previously.\n", "2. Only works with LangChain `vectorstore`'s that support:\n", " * document addition by id (`add_documents` method with `ids` argument)\n", " * delete by id (`delete` method with `ids` argument)\n", "\n", "Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n", " \n", "## Caution\n", "\n", "The record manager relies on a time-based mechanism to determine what content can be cleaned up (when using `full` or `incremental` cleanup modes).\n", "\n", "If two tasks run back-to-back, and the first task finishes before the clock time changes, then the second task may not be able to clean up content.\n", "\n", "This is unlikely to be an issue in actual settings for the following reasons:\n", "\n", "1. The RecordManager uses higher resolution timestamps.\n", "2. The data would need to change between the first and the second tasks runs, which becomes unlikely if the time interval between the tasks is small.\n", "3. Indexing tasks typically take more than a few ms." ] }, { "cell_type": "markdown", "id": "ec2109b4-cbcc-44eb-9dac-3f7345f971dc", "metadata": {}, "source": [ "## Quickstart" ] }, { "cell_type": "code", "execution_count": 1, "id": "15f7263e-c82e-4914-874f-9699ea4de93e", "metadata": {}, "outputs": [], "source": [ "from langchain.indexes import SQLRecordManager, index\n", "from langchain_core.documents import Document\n", "from langchain_elasticsearch import ElasticsearchStore\n", "from langchain_openai import OpenAIEmbeddings" ] }, { "cell_type": "markdown", "id": "f81201ab-d997-433c-9f18-ceea70e61cbd", "metadata": {}, "source": [ "Initialize a vector store and set up the embeddings:" ] }, { "cell_type": "code", "execution_count": 2, "id": "4ffc9659-91c0-41e0-ae4b-f7ff0d97292d", "metadata": {}, "outputs": [], "source": [ "collection_name = \"test_index\"\n", "\n", "embedding = OpenAIEmbeddings()\n", "\n", "vectorstore = ElasticsearchStore(\n", " es_url=\"http://localhost:9200\", index_name=\"test_index\", embedding=embedding\n", ")" ] }, { "cell_type": "markdown", "id": "b9b7564f-2334-428b-b513-13045a08b56c", "metadata": {}, "source": [ "Initialize a record manager with an appropriate namespace.\n", "\n", "**Suggestion:** Use a namespace that takes into account both the vector store and the collection name in the vector store; e.g., 'redis/my_docs', 'chromadb/my_docs' or 'postgres/my_docs'." ] }, { "cell_type": "code", "execution_count": 3, "id": "498cc80e-c339-49ee-893b-b18d06346ef8", "metadata": { "tags": [] }, "outputs": [], "source": [ "namespace = f\"elasticsearch/{collection_name}\"\n", "record_manager = SQLRecordManager(\n", " namespace, db_url=\"sqlite:///record_manager_cache.sql\"\n", ")" ] }, { "cell_type": "markdown", "id": "835c2c19-68ec-4086-9066-f7ba40877fd5", "metadata": {}, "source": [ "Create a schema before using the record manager." 
] }, { "cell_type": "code", "execution_count": 4, "id": "a4be2da3-3a5c-468a-a824-560157290f7f", "metadata": {}, "outputs": [], "source": [ "record_manager.create_schema()" ] }, { "cell_type": "markdown", "id": "7f07c6bd-6ada-4b17-a8c5-fe5e4a5278fd", "metadata": {}, "source": [ "Let's index some test documents:" ] }, { "cell_type": "code", "execution_count": 5, "id": "bbfdf314-14f9-4799-8fb6-d42de4d51287", "metadata": {}, "outputs": [], "source": [ "doc1 = Document(page_content=\"kitty\", metadata={\"source\": \"kitty.txt\"})\n", "doc2 = Document(page_content=\"doggy\", metadata={\"source\": \"doggy.txt\"})" ] }, { "cell_type": "markdown", "id": "c7d572be-a913-4511-ab64-2864a252458a", "metadata": {}, "source": [ "Indexing into an empty vector store:" ] }, { "cell_type": "code", "execution_count": 6, "id": "67d2a5c8-f2bd-489a-b58e-2c7ba7fefe6f", "metadata": {}, "outputs": [], "source": [ "def _clear():\n", " \"\"\"Hacky helper method to clear content. See the `full` mode section to to understand why it works.\"\"\"\n", " index([], record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\")" ] }, { "cell_type": "markdown", "id": "e5e92e76-f23f-4a61-8a2d-f16baf288700", "metadata": {}, "source": [ "### ``None`` deletion mode\n", "\n", "This mode does not do automatic clean up of old versions of content; however, it still takes care of content de-duplication." ] }, { "cell_type": "code", "execution_count": 7, "id": "e2288cee-1738-4054-af72-23b5c5be8840", "metadata": {}, "outputs": [], "source": [ "_clear()" ] }, { "cell_type": "code", "execution_count": 8, "id": "b253483b-5be0-4151-b732-ca93db4457b1", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 1, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(\n", " [doc1, doc1, doc1, doc1, doc1],\n", " record_manager,\n", " vectorstore,\n", " cleanup=None,\n", " source_id_key=\"source\",\n", ")" ] }, { "cell_type": "code", "execution_count": 9, "id": "7abaf351-bf5a-4d9e-95cd-4e3ecbfc1a84", "metadata": {}, "outputs": [], "source": [ "_clear()" ] }, { "cell_type": "code", "execution_count": 10, "id": "55b6873c-5907-4fa6-84ca-df6cdf1810f0", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index([doc1, doc2], record_manager, vectorstore, cleanup=None, source_id_key=\"source\")" ] }, { "cell_type": "markdown", "id": "7be3e55a-5fe9-4f40-beff-577c2aa5e76a", "metadata": {}, "source": [ "Second time around all content will be skipped:" ] }, { "cell_type": "code", "execution_count": 11, "id": "59d74ca1-2e3d-4b4c-ad88-a4907aa20081", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 0, 'num_updated': 0, 'num_skipped': 2, 'num_deleted': 0}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index([doc1, doc2], record_manager, vectorstore, cleanup=None, source_id_key=\"source\")" ] }, { "cell_type": "markdown", "id": "237a809e-575d-4f02-870e-5906a3643f30", "metadata": {}, "source": [ "### ``\"incremental\"`` deletion mode" ] }, { "cell_type": "code", "execution_count": 12, "id": "6bc91073-0ab4-465a-9302-e7f4bbd2285c", "metadata": {}, "outputs": [], "source": [ "_clear()" ] }, { "cell_type": "code", "execution_count": 13, "id": "4a551091-6d46-4cdd-9af9-8672e5866a0a", "metadata": {}, "outputs": 
[ { "data": { "text/plain": [ "{'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(\n", " [doc1, doc2],\n", " record_manager,\n", " vectorstore,\n", " cleanup=\"incremental\",\n", " source_id_key=\"source\",\n", ")" ] }, { "cell_type": "markdown", "id": "d0604ab8-318c-4706-959b-3907af438630", "metadata": {}, "source": [ "Indexing again should result in both documents getting **skipped** -- also skipping the embedding operation!" ] }, { "cell_type": "code", "execution_count": 14, "id": "81785863-391b-4578-a6f6-63b3e5285488", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 0, 'num_updated': 0, 'num_skipped': 2, 'num_deleted': 0}" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(\n", " [doc1, doc2],\n", " record_manager,\n", " vectorstore,\n", " cleanup=\"incremental\",\n", " source_id_key=\"source\",\n", ")" ] }, { "cell_type": "markdown", "id": "b205c1ba-f069-4a4e-af93-dc98afd5c9e6", "metadata": {}, "source": [ "If we provide no documents with incremental indexing mode, nothing will change." ] }, { "cell_type": "code", "execution_count": 15, "id": "1f73ca85-7478-48ab-976c-17b00beec7bd", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 0, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index([], record_manager, vectorstore, cleanup=\"incremental\", source_id_key=\"source\")" ] }, { "cell_type": "markdown", "id": "b8c4ac96-8d60-4ade-8a94-e76ccb536442", "metadata": {}, "source": [ "If we mutate a document, the new version will be written and all old versions sharing the same source will be deleted." ] }, { "cell_type": "code", "execution_count": 16, "id": "27d05bcb-d96d-42eb-88a8-54b33d6cfcdc", "metadata": {}, "outputs": [], "source": [ "changed_doc_2 = Document(page_content=\"puppy\", metadata={\"source\": \"doggy.txt\"})" ] }, { "cell_type": "code", "execution_count": 17, "id": "3809e379-5962-4267-add9-b10f43e24c66", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 1, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 1}" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(\n", " [changed_doc_2],\n", " record_manager,\n", " vectorstore,\n", " cleanup=\"incremental\",\n", " source_id_key=\"source\",\n", ")" ] }, { "cell_type": "markdown", "id": "8bc75b9c-784a-4eb6-b5d6-688e3fbd4658", "metadata": {}, "source": [ "### ``\"full\"`` deletion mode\n", "\n", "In `full` mode the user should pass the `full` universe of content that should be indexed into the indexing function.\n", "\n", "Any documents that are not passed into the indexing function and are present in the vectorstore will be deleted!\n", "\n", "This behavior is useful to handle deletions of source documents." 
] }, { "cell_type": "code", "execution_count": 18, "id": "38a14a3d-11c7-43e2-b7f1-08e487961bb5", "metadata": {}, "outputs": [], "source": [ "_clear()" ] }, { "cell_type": "code", "execution_count": 19, "id": "46b5d7b6-ce91-47d2-a9d0-f390e77d847f", "metadata": {}, "outputs": [], "source": [ "all_docs = [doc1, doc2]" ] }, { "cell_type": "code", "execution_count": 20, "id": "06954765-6155-40a0-b95e-33ef87754c8d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(all_docs, record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\")" ] }, { "cell_type": "markdown", "id": "887c45c6-4363-4389-ac56-9cdad682b4c8", "metadata": {}, "source": [ "Say someone deleted the first doc:" ] }, { "cell_type": "code", "execution_count": 21, "id": "35270e4e-9b03-4486-95de-e819ca5e469f", "metadata": {}, "outputs": [], "source": [ "del all_docs[0]" ] }, { "cell_type": "code", "execution_count": 22, "id": "7d835a6a-f468-4d79-9a3d-47db187edbb8", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='doggy', metadata={'source': 'doggy.txt'})]" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "all_docs" ] }, { "cell_type": "markdown", "id": "d940bcb4-cf6d-4c21-a565-e7f53f6dacf1", "metadata": {}, "source": [ "Using full mode will clean up the deleted content as well." ] }, { "cell_type": "code", "execution_count": 23, "id": "1b660eae-3bed-434d-a6f5-2aec96e5f0d6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 0, 'num_updated': 0, 'num_skipped': 1, 'num_deleted': 1}" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(all_docs, record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\")" ] }, { "cell_type": "markdown", "id": "1a7ecdc9-df3c-4601-b2f3-50fdffc6e5f9", "metadata": {}, "source": [ "## Source " ] }, { "cell_type": "markdown", "id": "4002a4ac-02dd-4599-9b23-9b59f54237c8", "metadata": {}, "source": [ "The metadata attribute contains a field called `source`. This source should be pointing at the *ultimate* provenance associated with the given document.\n", "\n", "For example, if these documents are representing chunks of some parent document, the `source` for both documents should be the same and reference the parent document.\n", "\n", "In general, `source` should always be specified. Only use a `None`, if you **never** intend to use `incremental` mode, and for some reason can't specify the `source` field correctly." 
] }, { "cell_type": "code", "execution_count": 24, "id": "184d3051-7fd1-4db2-a1d5-218ac0e1e641", "metadata": {}, "outputs": [], "source": [ "from langchain_text_splitters import CharacterTextSplitter" ] }, { "cell_type": "code", "execution_count": 25, "id": "11318248-ad2a-4ef0-bd9b-9d4dab97caba", "metadata": {}, "outputs": [], "source": [ "doc1 = Document(\n", " page_content=\"kitty kitty kitty kitty kitty\", metadata={\"source\": \"kitty.txt\"}\n", ")\n", "doc2 = Document(page_content=\"doggy doggy the doggy\", metadata={\"source\": \"doggy.txt\"})" ] }, { "cell_type": "code", "execution_count": 26, "id": "2cbf0902-d17b-44c9-8983-e8d0e831f909", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='kitty kit', metadata={'source': 'kitty.txt'}),\n", " Document(page_content='tty kitty ki', metadata={'source': 'kitty.txt'}),\n", " Document(page_content='tty kitty', metadata={'source': 'kitty.txt'}),\n", " Document(page_content='doggy doggy', metadata={'source': 'doggy.txt'}),\n", " Document(page_content='the doggy', metadata={'source': 'doggy.txt'})]" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "new_docs = CharacterTextSplitter(\n", " separator=\"t\", keep_separator=True, chunk_size=12, chunk_overlap=2\n", ").split_documents([doc1, doc2])\n", "new_docs" ] }, { "cell_type": "code", "execution_count": 27, "id": "0f9d9bc2-ea85-48ab-b4a2-351c8708b1d4", "metadata": {}, "outputs": [], "source": [ "_clear()" ] }, { "cell_type": "code", "execution_count": 28, "id": "58781d81-f273-4aeb-8df6-540236826d00", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 5, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(\n", " new_docs,\n", " record_manager,\n", " vectorstore,\n", " cleanup=\"incremental\",\n", " source_id_key=\"source\",\n", ")" ] }, { "cell_type": "code", "execution_count": 29, "id": "11b81cb6-5f04-499b-b125-1abb22d353bf", "metadata": {}, "outputs": [], "source": [ "changed_doggy_docs = [\n", " Document(page_content=\"woof woof\", metadata={\"source\": \"doggy.txt\"}),\n", " Document(page_content=\"woof woof woof\", metadata={\"source\": \"doggy.txt\"}),\n", "]" ] }, { "cell_type": "markdown", "id": "ab1c0915-3f9e-42ac-bdb5-3017935c6e7f", "metadata": {}, "source": [ "This should delete the old versions of documents associated with `doggy.txt` source and replace them with the new versions." 
] }, { "cell_type": "code", "execution_count": 30, "id": "fec71cb5-6757-4b92-a306-62509f6e867d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 2}" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(\n", " changed_doggy_docs,\n", " record_manager,\n", " vectorstore,\n", " cleanup=\"incremental\",\n", " source_id_key=\"source\",\n", ")" ] }, { "cell_type": "code", "execution_count": 31, "id": "876f5ab6-4b25-423e-8cff-f5a7a014395b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='woof woof', metadata={'source': 'doggy.txt'}),\n", " Document(page_content='woof woof woof', metadata={'source': 'doggy.txt'}),\n", " Document(page_content='tty kitty', metadata={'source': 'kitty.txt'}),\n", " Document(page_content='tty kitty ki', metadata={'source': 'kitty.txt'}),\n", " Document(page_content='kitty kit', metadata={'source': 'kitty.txt'})]" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vectorstore.similarity_search(\"dog\", k=30)" ] }, { "cell_type": "markdown", "id": "c0af4d24-d735-4e5d-ad9b-a2e8b281f9f1", "metadata": {}, "source": [ "## Using with loaders\n", "\n", "Indexing can accept either an iterable of documents or else any loader.\n", "\n", "**Attention:** The loader **must** set source keys correctly." ] }, { "cell_type": "code", "execution_count": 32, "id": "08b68357-27c0-4f07-a51d-61c986aeb359", "metadata": {}, "outputs": [], "source": [ "from langchain_core.document_loaders import BaseLoader\n", "\n", "\n", "class MyCustomLoader(BaseLoader):\n", " def lazy_load(self):\n", " text_splitter = CharacterTextSplitter(\n", " separator=\"t\", keep_separator=True, chunk_size=12, chunk_overlap=2\n", " )\n", " docs = [\n", " Document(page_content=\"woof woof\", metadata={\"source\": \"doggy.txt\"}),\n", " Document(page_content=\"woof woof woof\", metadata={\"source\": \"doggy.txt\"}),\n", " ]\n", " yield from text_splitter.split_documents(docs)\n", "\n", " def load(self):\n", " return list(self.lazy_load())" ] }, { "cell_type": "code", "execution_count": 33, "id": "5dae8e11-c0d6-4fc6-aa0e-68f8d92b5087", "metadata": {}, "outputs": [], "source": [ "_clear()" ] }, { "cell_type": "code", "execution_count": 34, "id": "d8d72f76-6d6e-4a7c-8fea-9bdec05af05b", "metadata": {}, "outputs": [], "source": [ "loader = MyCustomLoader()" ] }, { "cell_type": "code", "execution_count": 35, "id": "945c45cc-5a8d-4bd7-9f36-4ebd4a50e08b", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='woof woof', metadata={'source': 'doggy.txt'}),\n", " Document(page_content='woof woof woof', metadata={'source': 'doggy.txt'})]" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "loader.load()" ] }, { "cell_type": "code", "execution_count": 36, "id": "dcb1ba71-db49-4140-ab4a-c5d64fc2578a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index(loader, record_manager, vectorstore, cleanup=\"full\", source_id_key=\"source\")" ] }, { "cell_type": "code", "execution_count": 37, "id": "441159c1-dd84-48d7-8599-37a65c9fb589", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='woof woof', metadata={'source': 'doggy.txt'}),\n", " Document(page_content='woof woof 
woof', metadata={'source': 'doggy.txt'})]" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "vectorstore.similarity_search(\"dog\", k=30)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/inspect.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "8c5eb99a", "metadata": {}, "source": [ "# How to inspect runnables\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n", "- [Chaining runnables](/docs/how_to/sequence/)\n", "\n", ":::\n", "\n", "Once you create a runnable with [LangChain Expression Language](/docs/concepts/#langchain-expression-language), you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n", "\n", "This guide shows some ways you can programmatically introspect the internal steps of chains. If you are instead interested in debugging issues in your chain, see [this section](/docs/how_to/debugging) instead.\n", "\n", "First, let's create an example chain. We will create one that does retrieval:" ] }, { "cell_type": "code", "execution_count": null, "id": "d816e954", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain langchain-openai faiss-cpu tiktoken" ] }, { "cell_type": "code", "execution_count": 2, "id": "139228c2", "metadata": {}, "outputs": [], "source": [ "from langchain_community.vectorstores import FAISS\n", "from langchain_core.output_parsers import StrOutputParser\n", "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnablePassthrough\n", "from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n", "\n", "vectorstore = FAISS.from_texts(\n", " [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n", ")\n", "retriever = vectorstore.as_retriever()\n", "\n", "template = \"\"\"Answer the question based only on the following context:\n", "{context}\n", "\n", "Question: {question}\n", "\"\"\"\n", "prompt = ChatPromptTemplate.from_template(template)\n", "\n", "model = ChatOpenAI()\n", "\n", "chain = (\n", " {\"context\": retriever, \"question\": RunnablePassthrough()}\n", " | prompt\n", " | model\n", " | StrOutputParser()\n", ")" ] }, { "cell_type": "markdown", "id": "849e3c42", "metadata": {}, "source": [ "## Get a graph\n", "\n", "You can use the `get_graph()` method to get a graph representation of the runnable:" ] }, { "cell_type": "code", "execution_count": null, "id": "2448b6c2", "metadata": {}, "outputs": [], "source": [ "chain.get_graph()" ] }, { "cell_type": "markdown", "id": "065b02fb", "metadata": {}, "source": [ "## Print a graph\n", "\n", "While that is not super legible, you can use the `print_ascii()` method to show that graph in a way that's easier to understand:" ] }, { "cell_type": "code", "execution_count": 5, "id": "d5ab1515", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " +---------------------------------+ \n", " | Parallel<context,question>Input | \n", " +---------------------------------+ \n", " ** ** \n", " *** *** \n", " ** ** \n", "+----------------------+ +-------------+ \n", "| VectorStoreRetriever | | Passthrough | \n", "+----------------------+ +-------------+ \n", " ** ** \n", " *** *** \n", " ** ** \n", " +----------------------------------+ \n", " | Parallel<context,question>Output | \n", " +----------------------------------+ \n", " * \n", " * \n", " * \n", " +--------------------+ \n", " | ChatPromptTemplate | \n", " +--------------------+ \n", " * \n", " * \n", " * \n", " +------------+ \n", " | ChatOpenAI | \n", " +------------+ \n", " * \n", " * \n", " * \n", " +-----------------+ \n", " | StrOutputParser 
| \n", " +-----------------+ \n", " * \n", " * \n", " * \n", " +-----------------------+ \n", " | StrOutputParserOutput | \n", " +-----------------------+ \n" ] } ], "source": [ "chain.get_graph().print_ascii()" ] }, { "cell_type": "markdown", "id": "2babf851", "metadata": {}, "source": [ "## Get the prompts\n", "\n", "You may want to see just the prompts that are used in a chain with the `get_prompts()` method:" ] }, { "cell_type": "code", "execution_count": 6, "id": "34b2118d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n'))])]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.get_prompts()" ] }, { "cell_type": "markdown", "id": "c5a74bd5", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to introspect your composed LCEL chains.\n", "\n", "Next, check out the other how-to guides on runnables in this section, or the related how-to guide on [debugging your chains](/docs/how_to/debugging)." ] }, { "cell_type": "code", "execution_count": null, "id": "ed965769", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/installation.mdx
---
sidebar_position: 2
---

# Installation

## Official release

To install LangChain run:

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import CodeBlock from "@theme/CodeBlock";

<Tabs>
  <TabItem value="pip" label="Pip" default>
    <CodeBlock language="bash">pip install langchain</CodeBlock>
  </TabItem>
  <TabItem value="conda" label="Conda">
    <CodeBlock language="bash">conda install langchain -c conda-forge</CodeBlock>
  </TabItem>
</Tabs>

This will install the bare minimum requirements of LangChain. A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc. By default, the dependencies needed to do that are NOT installed. You will need to install the dependencies for specific integrations separately.

## From source

If you want to install from source, clone the repo and run the following command from the `PATH/TO/REPO/langchain/libs/langchain` directory:

```bash
pip install -e .
```

## LangChain core

The `langchain-core` package contains base abstractions that the rest of the LangChain ecosystem uses, along with the LangChain Expression Language. It is automatically installed by `langchain`, but can also be used separately. Install with:

```bash
pip install langchain-core
```

## LangChain community

The `langchain-community` package contains third-party integrations. Install with:

```bash
pip install langchain-community
```

## LangChain experimental

The `langchain-experimental` package holds experimental LangChain code, intended for research and experimental uses. Install with:

```bash
pip install langchain-experimental
```

## LangGraph

`langgraph` is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. Install with:

```bash
pip install langgraph
```

## LangServe

LangServe helps developers deploy LangChain runnables and chains as a REST API. LangServe is automatically installed by the LangChain CLI. If you are not using the LangChain CLI, install with:

```bash
pip install "langserve[all]"
```

for both client and server dependencies. Or `pip install "langserve[client]"` for client code, and `pip install "langserve[server]"` for server code.

## LangChain CLI

The LangChain CLI is useful for working with LangChain templates and other LangServe projects. Install with:

```bash
pip install langchain-cli
```

## LangSmith SDK

The LangSmith SDK is automatically installed by LangChain. If you are not using LangChain, install with:

```bash
pip install langsmith
```
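## Verifying your installation

As a quick sanity check after installing, you can confirm which of these packages ended up in your environment. The snippet below is only an illustrative sketch using the standard-library `importlib.metadata`; the package list is an assumption and should be adjusted to whatever you actually chose to install.

```python
from importlib.metadata import PackageNotFoundError, version

# Packages discussed above; trim this list to match your own setup.
packages = [
    "langchain",
    "langchain-core",
    "langchain-community",
    "langchain-experimental",
    "langgraph",
    "langserve",
    "langchain-cli",
    "langsmith",
]

for name in packages:
    try:
        print(f"{name}: {version(name)}")
    except PackageNotFoundError:
        print(f"{name}: not installed")
```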
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/lcel_cheatsheet.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "f26a4ba9-f29b-452a-85c0-3b2a188f2348", "metadata": {}, "source": [ "# LangChain Expression Language Cheatsheet\n", "\n", "This is a quick reference for all the most important LCEL primitives. For more advanced usage see the [LCEL how-to guides](/docs/how_to/#langchain-expression-language-lcel) and the [full API reference](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html).\n", "\n", "### Invoke a runnable\n", "#### [Runnable.invoke()](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.invoke) / [Runnable.ainvoke()](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.ainvoke)" ] }, { "cell_type": "code", "execution_count": 6, "id": "b3ac3ad1-3c0e-4279-8fde-125809af9d2a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'5'" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "runnable = RunnableLambda(lambda x: str(x))\n", "runnable.invoke(5)\n", "\n", "# Async variant:\n", "# await runnable.ainvoke(5)" ] }, { "cell_type": "markdown", "id": "aa74c79a-1bf6-4015-84bc-d0df6e6a8433", "metadata": {}, "source": [ "### Batch a runnable\n", "#### [Runnable.batch()](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.batch) / [Runnable.abatch()](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.abatch)" ] }, { "cell_type": "code", "execution_count": 7, "id": "3a184890-da09-4ff7-92f9-0d29ca571ae4", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "['7', '8', '9']" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "runnable = RunnableLambda(lambda x: str(x))\n", "runnable.batch([7, 8, 9])\n", "\n", "# Async variant:\n", "# await runnable.abatch([7, 8, 9])" ] }, { "cell_type": "markdown", "id": "b716b97f-cb58-447a-bef5-96202563cd2d", "metadata": {}, "source": [ "### Stream a runnable\n", "#### [Runnable.stream()](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.stream) / [Runnable.astream()](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream)" ] }, { "cell_type": "code", "execution_count": 8, "id": "983aa18b-e44d-4603-aaea-94e1e2339001", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0\n", "1\n", "2\n", "3\n", "4\n" ] } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "\n", "def func(x):\n", " for y in x:\n", " yield str(y)\n", "\n", "\n", "runnable = RunnableLambda(func)\n", "\n", "for chunk in runnable.stream(range(5)):\n", " print(chunk)\n", "\n", "# Async variant:\n", "# async for chunk in await runnable.astream(range(5)):\n", "# print(chunk)" ] }, { "cell_type": "markdown", "id": "6dc6bd08-98f3-4df6-9758-08b443e4328b", "metadata": {}, "source": [ "### Compose runnables\n", "#### Pipe operator `|`" ] }, { "cell_type": "code", "execution_count": 10, "id": "4f744b0b-16ae-43f5-9856-789973457c96", "metadata": {}, 
"outputs": [ { "data": { "text/plain": [ "[{'foo': 2}, {'foo': 2}]" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "runnable2 = RunnableLambda(lambda x: [x] * 2)\n", "\n", "chain = runnable1 | runnable2\n", "\n", "chain.invoke(2)" ] }, { "cell_type": "markdown", "id": "b01e8694-f820-4457-a27a-7220f789bb2c", "metadata": {}, "source": [ "### Invoke runnables in parallel\n", "#### [RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html)" ] }, { "cell_type": "code", "execution_count": 11, "id": "89e509e5-a9a5-4e56-b6dd-c4f23543c3c8", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'first': {'foo': 2}, 'second': [2, 2]}" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "runnable2 = RunnableLambda(lambda x: [x] * 2)\n", "\n", "chain = RunnableParallel(first=runnable1, second=runnable2)\n", "\n", "chain.invoke(2)" ] }, { "cell_type": "markdown", "id": "103f350b-84b3-421f-b64f-c01575a422bf", "metadata": {}, "source": [ "### Turn any function into a runnable\n", "#### [RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html)" ] }, { "cell_type": "code", "execution_count": 23, "id": "a9c0e43a-8eb5-4985-ad95-43f12c3e05e9", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "7" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "\n", "def func(x):\n", " return x + 5\n", "\n", "\n", "runnable = RunnableLambda(func)\n", "runnable.invoke(2)" ] }, { "cell_type": "markdown", "id": "20399b82-e417-403d-a53b-00f02a6ef2c6", "metadata": {}, "source": [ "### Merge input and output dicts\n", "#### [RunnablePassthrough.assign](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)" ] }, { "cell_type": "code", "execution_count": 13, "id": "ab05d376-9abb-4f26-916a-9aea062e4817", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'foo': 10, 'bar': 17}" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n", "\n", "runnable1 = RunnableLambda(lambda x: x[\"foo\"] + 7)\n", "\n", "chain = RunnablePassthrough.assign(bar=runnable1)\n", "\n", "chain.invoke({\"foo\": 10})" ] }, { "cell_type": "markdown", "id": "7f680f48-654a-44d1-86e3-13e1eb0955cb", "metadata": {}, "source": [ "### Include input dict in output dict\n", "#### [RunnablePassthrough](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html)" ] }, { "cell_type": "code", "execution_count": 15, "id": "3ec00283-e674-461e-a5d1-4876555a2a58", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'bar': 17, 'baz': {'foo': 10}}" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import (\n", " RunnableLambda,\n", " RunnableParallel,\n", " RunnablePassthrough,\n", ")\n", "\n", "runnable1 = RunnableLambda(lambda x: x[\"foo\"] + 7)\n", "\n", "chain = 
RunnableParallel(bar=runnable1, baz=RunnablePassthrough())\n", "\n", "chain.invoke({\"foo\": 10})" ] }, { "cell_type": "markdown", "id": "baa4e967-fcfd-4176-a720-6916fe0df130", "metadata": {}, "source": [ "### Add default invocation args\n", "#### [Runnable.bind](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.bind)" ] }, { "cell_type": "code", "execution_count": 38, "id": "6c03ce55-5258-4361-857e-d6785c968624", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'bar': 'hello', 'foo': 'bye'}" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Optional\n", "\n", "from langchain_core.runnables import RunnableLambda\n", "\n", "\n", "def func(main_arg: dict, other_arg: Optional[str] = None) -> dict:\n", " if other_arg:\n", " return {**main_arg, **{\"foo\": other_arg}}\n", " return main_arg\n", "\n", "\n", "runnable1 = RunnableLambda(func)\n", "bound_runnable1 = runnable1.bind(other_arg=\"bye\")\n", "\n", "bound_runnable1.invoke({\"bar\": \"hello\"})" ] }, { "cell_type": "markdown", "id": "cad5e0af-007e-47aa-93d4-f06bd837c3fb", "metadata": {}, "source": [ "### Add fallbacks\n", "#### [Runnable.with_fallbacks](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_fallbacks)" ] }, { "cell_type": "code", "execution_count": 19, "id": "5d132d96-802b-4952-a6cc-2caaf4612d0a", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'5foo'" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "runnable1 = RunnableLambda(lambda x: x + \"foo\")\n", "runnable2 = RunnableLambda(lambda x: str(x) + \"foo\")\n", "\n", "chain = runnable1.with_fallbacks([runnable2])\n", "\n", "chain.invoke(5)" ] }, { "cell_type": "markdown", "id": "809f6437-4e20-48b9-bdcb-7f9ea6463a19", "metadata": {}, "source": [ "### Add retries\n", "#### [Runnable.with_retry](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_retry)" ] }, { "cell_type": "code", "execution_count": 32, "id": "49a75c66-d335-4115-9d8f-ca07d69223c4", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "attempt with counter=0\n", "attempt with counter=1\n" ] }, { "data": { "text/plain": [ "2.0" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "counter = -1\n", "\n", "\n", "def func(x):\n", " global counter\n", " counter += 1\n", " print(f\"attempt with {counter=}\")\n", " return x / counter\n", "\n", "\n", "chain = RunnableLambda(func).with_retry(stop_after_attempt=2)\n", "\n", "chain.invoke(2)" ] }, { "cell_type": "markdown", "id": "741934ee-e97f-497d-b42f-371c79739a12", "metadata": {}, "source": [ "### Configure runnable execution\n", "#### [RunnableConfig](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.config.RunnableConfig.html)" ] }, { "cell_type": "code", "execution_count": 40, "id": "d85e357e-c125-4199-98d1-bd0db40e4ac0", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'first': {'foo': 7}, 'second': [7, 7], 'third': '7'}" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables 
import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "runnable2 = RunnableLambda(lambda x: [x] * 2)\n", "runnable3 = RunnableLambda(lambda x: str(x))\n", "\n", "chain = RunnableParallel(first=runnable1, second=runnable2, third=runnable3)\n", "\n", "chain.invoke(7, config={\"max_concurrency\": 2})" ] }, { "cell_type": "markdown", "id": "8604edb4-4ffa-4cc7-89ba-e0c6947118ab", "metadata": {}, "source": [ "### Add default config to runnable\n", "#### [Runnable.with_config](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config)" ] }, { "cell_type": "code", "execution_count": 41, "id": "dfe8306c-d77c-479a-90ae-464db2b62605", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'first': {'foo': 7}, 'second': [7, 7], 'third': '7'}" ] }, "execution_count": 41, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "runnable2 = RunnableLambda(lambda x: [x] * 2)\n", "runnable3 = RunnableLambda(lambda x: str(x))\n", "\n", "chain = RunnableParallel(first=runnable1, second=runnable2, third=runnable3)\n", "configured_chain = chain.with_config(max_concurrency=2)\n", "\n", "chain.invoke(7)" ] }, { "cell_type": "markdown", "id": "0f3d3bb1-b4b3-4acd-9dd8-da114e514fff", "metadata": {}, "source": [ "### Make runnable attributes configurable\n", "#### [Runnable.with_configurable_fields](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSerializable.html#langchain_core.runnables.base.RunnableSerializable.configurable_fields)" ] }, { "cell_type": "code", "execution_count": 110, "id": "ca265c51-6192-4b5d-bf4e-048b6630abc6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'not bar': 3}" ] }, "execution_count": 110, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Optional\n", "\n", "from langchain_core.runnables import (\n", " ConfigurableField,\n", " RunnableConfig,\n", " RunnableSerializable,\n", ")\n", "\n", "\n", "class FooRunnable(RunnableSerializable[dict, dict]):\n", " output_key: str\n", "\n", " def invoke(\n", " self, input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any\n", " ) -> list:\n", " return self._call_with_config(self.subtract_seven, input, config, **kwargs)\n", "\n", " def subtract_seven(self, input: dict) -> dict:\n", " return {self.output_key: input[\"foo\"] - 7}\n", "\n", "\n", "runnable1 = FooRunnable(output_key=\"bar\")\n", "configurable_runnable1 = runnable1.configurable_fields(\n", " output_key=ConfigurableField(id=\"output_key\")\n", ")\n", "\n", "configurable_runnable1.invoke(\n", " {\"foo\": 10}, config={\"configurable\": {\"output_key\": \"not bar\"}}\n", ")" ] }, { "cell_type": "code", "execution_count": 111, "id": "e1cf0b01-dc03-40b7-9e0b-629895daa8e5", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'bar': 3}" ] }, "execution_count": 111, "metadata": {}, "output_type": "execute_result" } ], "source": [ "configurable_runnable1.invoke({\"foo\": 10})" ] }, { "cell_type": "markdown", "id": "c7b86f34-4098-4c43-9cde-407dc0e03c0d", "metadata": {}, "source": [ "### Make chain components configurable\n", "#### 
[Runnable.with_configurable_alternatives](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSerializable.html#langchain_core.runnables.base.RunnableSerializable.configurable_alternatives)" ] }, { "cell_type": "code", "execution_count": 106, "id": "98acdc84-b395-4dee-a9c7-d2f88a2486e3", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"{'foo': 7}\"" ] }, "execution_count": 106, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import Any, Optional\n", "\n", "from langchain_core.runnables import RunnableConfig, RunnableLambda, RunnableParallel\n", "\n", "\n", "class ListRunnable(RunnableSerializable[Any, list]):\n", " def invoke(\n", " self, input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any\n", " ) -> list:\n", " return self._call_with_config(self.listify, input, config, **kwargs)\n", "\n", " def listify(self, input: Any) -> list:\n", " return [input]\n", "\n", "\n", "class StrRunnable(RunnableSerializable[Any, str]):\n", " def invoke(\n", " self, input: Any, config: Optional[RunnableConfig] = None, **kwargs: Any\n", " ) -> list:\n", " return self._call_with_config(self.strify, input, config, **kwargs)\n", "\n", " def strify(self, input: Any) -> str:\n", " return str(input)\n", "\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "\n", "configurable_runnable = ListRunnable().configurable_alternatives(\n", " ConfigurableField(id=\"second_step\"), default_key=\"list\", string=StrRunnable()\n", ")\n", "chain = runnable1 | configurable_runnable\n", "\n", "chain.invoke(7, config={\"configurable\": {\"second_step\": \"string\"}})" ] }, { "cell_type": "code", "execution_count": 107, "id": "6e76f8dd-96e2-4b69-8e20-5a0f82c60c9f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'foo': 7}]" ] }, "execution_count": 107, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke(7)" ] }, { "cell_type": "markdown", "id": "33249dee-14dc-4c72-b6ba-a3f10678ab9c", "metadata": {}, "source": [ "### Build a chain dynamically based on input" ] }, { "cell_type": "code", "execution_count": 63, "id": "4611b7ad-c29c-446c-8886-324bfd00eb41", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'foo': 7}" ] }, "execution_count": 63, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "runnable2 = RunnableLambda(lambda x: [x] * 2)\n", "\n", "chain = RunnableLambda(lambda x: runnable1 if x > 6 else runnable2)\n", "\n", "chain.invoke(7)" ] }, { "cell_type": "code", "execution_count": 65, "id": "9655f494-2fba-4c26-a5d8-5683cc61e80d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[5, 5]" ] }, "execution_count": 65, "metadata": {}, "output_type": "execute_result" } ], "source": [ "chain.invoke(5)" ] }, { "cell_type": "markdown", "id": "f1263fe5-6561-419b-a936-687f8b274a5d", "metadata": {}, "source": [ "### Generate a stream of events\n", "#### [Runnable.astream_events](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_events)" ] }, { "cell_type": "code", "execution_count": 66, "id": "b921264a-29e2-41eb-949c-f1bbabe63e54", "metadata": {}, "outputs": [], "source": [ "# | echo: false\n", "\n", "import nest_asyncio\n", "\n", "nest_asyncio.apply()" ] }, { "cell_type": "code", "execution_count": 81, "id": 
"1988b1b2-b189-43c9-8ffd-d7f275881065", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "event=on_chain_start | name=RunnableSequence | data={'input': 'bar'}\n", "event=on_chain_start | name=first | data={}\n", "event=on_chain_stream | name=first | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_start | name=second | data={}\n", "event=on_chain_end | name=first | data={'output': {'foo': 'bar'}, 'input': 'bar'}\n", "event=on_chain_stream | name=second | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=RunnableSequence | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=second | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=RunnableSequence | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=second | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=RunnableSequence | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=second | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=RunnableSequence | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=second | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_stream | name=RunnableSequence | data={'chunk': {'foo': 'bar'}}\n", "event=on_chain_end | name=second | data={'output': {'foo': 'bar'}, 'input': {'foo': 'bar'}}\n", "event=on_chain_end | name=RunnableSequence | data={'output': {'foo': 'bar'}}\n" ] } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x}, name=\"first\")\n", "\n", "\n", "async def func(x):\n", " for _ in range(5):\n", " yield x\n", "\n", "\n", "runnable2 = RunnableLambda(func, name=\"second\")\n", "\n", "chain = runnable1 | runnable2\n", "\n", "async for event in chain.astream_events(\"bar\", version=\"v2\"):\n", " print(f\"event={event['event']} | name={event['name']} | data={event['data']}\")" ] }, { "cell_type": "markdown", "id": "a7575005-ffbd-4a52-a280-a521799fed5d", "metadata": {}, "source": [ "### Yield batched outputs as they complete\n", "#### [Runnable.batch_as_completed](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.batch_as_completed) / [Runnable.abatch_as_completed](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.abatch_as_completed)" ] }, { "cell_type": "code", "execution_count": 87, "id": "826dedad-d654-4f85-b8b4-a53eda2f2837", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "slept 1\n", "1 None\n", "slept 5\n", "0 None\n" ] } ], "source": [ "import time\n", "\n", "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: time.sleep(x) or print(f\"slept {x}\"))\n", "\n", "for idx, result in runnable1.batch_as_completed([5, 1]):\n", " print(idx, result)\n", "\n", "# Async variant:\n", "# async for idx, result in runnable1.abatch_as_completed([5, 1]):\n", "# print(idx, result)" ] }, { "cell_type": "markdown", "id": "cc1cacde-b35e-474a-a14a-ed9e8a858ba8", "metadata": {}, "source": [ "### Return subset of output dict\n", "#### [Runnable.pick](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.pick)" ] }, { "cell_type": "code", "execution_count": 88, "id": "78dd3ed5-5095-4698-ba4f-d0b73f12a608", "metadata": {}, 
"outputs": [ { "data": { "text/plain": [ "{'foo': 7, 'bar': 'hi'}" ] }, "execution_count": 88, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n", "\n", "runnable1 = RunnableLambda(lambda x: x[\"baz\"] + 5)\n", "chain = RunnablePassthrough.assign(foo=runnable1).pick([\"foo\", \"bar\"])\n", "\n", "chain.invoke({\"bar\": \"hi\", \"baz\": 2})" ] }, { "cell_type": "markdown", "id": "5587dbda-d6d7-4480-b2e9-7541af076d36", "metadata": {}, "source": [ "### Declaratively make a batched version of a runnable\n", "#### [Runnable.map](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.map)" ] }, { "cell_type": "code", "execution_count": 20, "id": "fd020416-faf0-4343-945c-823d879d8431", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[5, 6, 7]" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.runnables import RunnableLambda\n", "\n", "runnable1 = RunnableLambda(lambda x: list(range(x)))\n", "runnable2 = RunnableLambda(lambda x: x + 5)\n", "\n", "chain = runnable1 | runnable2.map()\n", "\n", "chain.invoke(3)" ] }, { "cell_type": "markdown", "id": "1e4ed5c3-244e-4491-adbe-83af3fc14265", "metadata": {}, "source": [ "### Get a graph representation of a runnable\n", "#### [Runnable.get_graph](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.get_graph)" ] }, { "cell_type": "code", "execution_count": 100, "id": "11ef1419-a2ee-41bd-a74f-c057a6d737ac", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " +-------------+ \n", " | LambdaInput | \n", " +-------------+ \n", " * \n", " * \n", " * \n", " +------------------------------+ \n", " | Lambda(lambda x: {'foo': x}) | \n", " +------------------------------+ \n", " * \n", " * \n", " * \n", " +-----------------------------+ \n", " | Parallel<second,third>Input | \n", " +-----------------------------+ \n", " **** *** \n", " **** **** \n", " ** ** \n", "+---------------------------+ +--------------------------+ \n", "| Lambda(lambda x: [x] * 2) | | Lambda(lambda x: str(x)) | \n", "+---------------------------+ +--------------------------+ \n", " **** *** \n", " **** **** \n", " ** ** \n", " +------------------------------+ \n", " | Parallel<second,third>Output | \n", " +------------------------------+ \n" ] } ], "source": [ "from langchain_core.runnables import RunnableLambda, RunnableParallel\n", "\n", "runnable1 = RunnableLambda(lambda x: {\"foo\": x})\n", "runnable2 = RunnableLambda(lambda x: [x] * 2)\n", "runnable3 = RunnableLambda(lambda x: str(x))\n", "\n", "chain = runnable1 | RunnableParallel(second=runnable2, third=runnable3)\n", "\n", "chain.get_graph().print_ascii()" ] }, { "cell_type": "markdown", "id": "a2728a5e-e5b4-452d-9e90-afb00f06df44", "metadata": {}, "source": [ "### Get all prompts in a chain\n", "#### [Runnable.get_prompts](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.get_prompts)" ] }, { "cell_type": "code", "execution_count": 102, "id": "9c35a8ec-e0ac-4cdd-a921-19161a66f5bf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "**prompt i=0**\n", "\n", "================================ System Message ================================\n", "\n", "good 
ai\n", "\n", "================================ Human Message =================================\n", "\n", "{input}\n", "\n", "\n", "\n", "\n", "**prompt i=1**\n", "\n", "================================ System Message ================================\n", "\n", "really good ai\n", "\n", "================================ Human Message =================================\n", "\n", "{input}\n", "\n", "================================== AI Message ==================================\n", "\n", "{ai_output}\n", "\n", "================================ Human Message =================================\n", "\n", "{input2}\n", "\n", "\n", "\n", "\n" ] } ], "source": [ "from langchain_core.prompts import ChatPromptTemplate\n", "from langchain_core.runnables import RunnableLambda\n", "\n", "prompt1 = ChatPromptTemplate.from_messages(\n", " [(\"system\", \"good ai\"), (\"human\", \"{input}\")]\n", ")\n", "prompt2 = ChatPromptTemplate.from_messages(\n", " [\n", " (\"system\", \"really good ai\"),\n", " (\"human\", \"{input}\"),\n", " (\"ai\", \"{ai_output}\"),\n", " (\"human\", \"{input2}\"),\n", " ]\n", ")\n", "fake_llm = RunnableLambda(lambda prompt: \"i am good ai\")\n", "chain = prompt1.assign(ai_output=fake_llm) | prompt2 | fake_llm\n", "\n", "for i, prompt in enumerate(chain.get_prompts()):\n", " print(f\"**prompt {i=}**\\n\")\n", " print(prompt.pretty_repr())\n", " print(\"\\n\" * 3)" ] }, { "cell_type": "markdown", "id": "e5add050-b8e0-48eb-94cb-74afd88ed1a8", "metadata": {}, "source": [ "### Add lifecycle listeners\n", "#### [Runnable.with_listeners](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_listeners)" ] }, { "cell_type": "code", "execution_count": 105, "id": "faa37955-d9ce-46a7-bc39-301bf84421d6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "start_time: 2024-05-17 23:04:00.951065+00:00\n", "end_time: 2024-05-17 23:04:02.958765+00:00\n" ] } ], "source": [ "import time\n", "\n", "from langchain_core.runnables import RunnableLambda\n", "from langchain_core.tracers.schemas import Run\n", "\n", "\n", "def on_start(run_obj: Run):\n", " print(\"start_time:\", run_obj.start_time)\n", "\n", "\n", "def on_end(run_obj: Run):\n", " print(\"end_time:\", run_obj.end_time)\n", "\n", "\n", "runnable1 = RunnableLambda(lambda x: time.sleep(x))\n", "chain = runnable1.with_listeners(on_start=on_start, on_end=on_end)\n", "chain.invoke(2)" ] }, { "cell_type": "code", "execution_count": null, "id": "7186123c-99ce-45ed-a64f-9c627b09f92d", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "poetry-venv-2", "language": "python", "name": "poetry-venv-2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/llm_caching.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "b843b5c4", "metadata": {}, "source": [ "# How to cache LLM responses\n", "\n", "LangChain provides an optional caching layer for LLMs. This is useful for two reasons:\n", "\n", "It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.\n", "It can speed up your application by reducing the number of API calls you make to the LLM provider.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "0aa6d335", "metadata": {}, "outputs": [], "source": [ "from langchain.globals import set_llm_cache\n", "from langchain_openai import OpenAI\n", "\n", "# To make the caching really obvious, lets use a slower model.\n", "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\", n=2, best_of=2)" ] }, { "cell_type": "code", "execution_count": 12, "id": "f168ff0d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 13.7 ms, sys: 6.54 ms, total: 20.2 ms\n", "Wall time: 330 ms\n" ] }, { "data": { "text/plain": [ "\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was two-tired!\"" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "from langchain.cache import InMemoryCache\n", "\n", "set_llm_cache(InMemoryCache())\n", "\n", "# The first time, it is not yet in cache, so it should take longer\n", "llm.predict(\"Tell me a joke\")" ] }, { "cell_type": "code", "execution_count": 13, "id": "ce7620fb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 436 µs, sys: 921 µs, total: 1.36 ms\n", "Wall time: 1.36 ms\n" ] }, { "data": { "text/plain": [ "\"\\n\\nWhy couldn't the bicycle stand up by itself? 
Because it was two-tired!\"" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "# The second time it is, so it goes faster\n", "llm.predict(\"Tell me a joke\")" ] }, { "cell_type": "markdown", "id": "4ab452f4", "metadata": {}, "source": [ "## SQLite Cache" ] }, { "cell_type": "code", "execution_count": 8, "id": "2e65de83", "metadata": {}, "outputs": [], "source": [ "!rm .langchain.db" ] }, { "cell_type": "code", "execution_count": 9, "id": "0be83715", "metadata": {}, "outputs": [], "source": [ "# We can do the same thing with a SQLite cache\n", "from langchain_community.cache import SQLiteCache\n", "\n", "set_llm_cache(SQLiteCache(database_path=\".langchain.db\"))" ] }, { "cell_type": "code", "execution_count": 10, "id": "9b427ce7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 29.3 ms, sys: 17.3 ms, total: 46.7 ms\n", "Wall time: 364 ms\n" ] }, { "data": { "text/plain": [ "'\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!'" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "# The first time, it is not yet in cache, so it should take longer\n", "llm.predict(\"Tell me a joke\")" ] }, { "cell_type": "code", "execution_count": 11, "id": "87f52611", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 4.58 ms, sys: 2.23 ms, total: 6.8 ms\n", "Wall time: 4.68 ms\n" ] }, { "data": { "text/plain": [ "'\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!'" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "# The second time it is, so it goes faster\n", "llm.predict(\"Tell me a joke\")" ] }, { "cell_type": "code", "execution_count": null, "id": "6a9bb158", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.1" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/llm_token_usage_tracking.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "90dff237-bc28-4185-a2c0-d5203bbdeacd", "metadata": {}, "source": [ "# How to track token usage for LLMs\n", "\n", "Tracking token usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "\n", "- [LLMs](/docs/concepts/#llms)\n", ":::\n", "\n", "## Using LangSmith\n", "\n", "You can use [LangSmith](https://www.langchain.com/langsmith) to help track token usage in your LLM application. See the [LangSmith quick start guide](https://docs.smith.langchain.com/).\n", "\n", "## Using callbacks\n", "\n", "There are some API-specific callback context managers that allow you to track token usage across multiple calls. You'll need to check whether such an integration is available for your particular model.\n", "\n", "If such an integration is not available for your model, you can create a custom callback manager by adapting the implementation of the [OpenAI callback manager](https://api.python.langchain.com/en/latest/_modules/langchain_community/callbacks/openai_info.html#OpenAICallbackHandler).\n", "\n", "### OpenAI\n", "\n", "Let's first look at an extremely simple example of tracking token usage for a single Chat model call.\n", "\n", ":::{.callout-danger}\n", "\n", "The callback handler does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). For support in a streaming context, refer to the corresponding guide for chat models [here](/docs/how_to/chat_token_usage_tracking).\n", "\n", ":::" ] }, { "cell_type": "markdown", "id": "f790edd9-823e-4bc5-befa-e9529c7237a0", "metadata": {}, "source": [ "### Single call" ] }, { "cell_type": "code", "execution_count": 1, "id": "2eebbee2-6ca1-4fa8-a3aa-0376888ceefb", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "Why don't scientists trust atoms?\n", "\n", "Because they make up everything.\n", "---\n", "\n", "Total Tokens: 18\n", "Prompt Tokens: 4\n", "Completion Tokens: 14\n", "Total Cost (USD): $3.4e-05\n" ] } ], "source": [ "from langchain_community.callbacks import get_openai_callback\n", "from langchain_openai import OpenAI\n", "\n", "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n", "\n", "with get_openai_callback() as cb:\n", " result = llm.invoke(\"Tell me a joke\")\n", " print(result)\n", " print(\"---\")\n", "print()\n", "\n", "print(f\"Total Tokens: {cb.total_tokens}\")\n", "print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n", "print(f\"Completion Tokens: {cb.completion_tokens}\")\n", "print(f\"Total Cost (USD): ${cb.total_cost}\")" ] }, { "cell_type": "markdown", "id": "7df3be35-dd97-4e3a-bd51-52434ab2249d", "metadata": {}, "source": [ "### Multiple calls\n", "\n", "Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence to a chain. This will also work for an agent which may use multiple steps." 
] }, { "cell_type": "code", "execution_count": 2, "id": "3ec10419-294c-44bf-af85-86aabf457cb6", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "Why did the chicken go to the seance?\n", "\n", "To talk to the other side of the road!\n", "--\n", "\n", "\n", "Why did the fish need a lawyer?\n", "\n", "Because it got caught in a net!\n", "\n", "---\n", "Total Tokens: 50\n", "Prompt Tokens: 12\n", "Completion Tokens: 38\n", "Total Cost (USD): $9.400000000000001e-05\n" ] } ], "source": [ "from langchain_community.callbacks import get_openai_callback\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import OpenAI\n", "\n", "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n", "\n", "template = PromptTemplate.from_template(\"Tell me a joke about {topic}\")\n", "chain = template | llm\n", "\n", "with get_openai_callback() as cb:\n", " response = chain.invoke({\"topic\": \"birds\"})\n", " print(response)\n", " response = chain.invoke({\"topic\": \"fish\"})\n", " print(\"--\")\n", " print(response)\n", "\n", "\n", "print()\n", "print(\"---\")\n", "print(f\"Total Tokens: {cb.total_tokens}\")\n", "print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n", "print(f\"Completion Tokens: {cb.completion_tokens}\")\n", "print(f\"Total Cost (USD): ${cb.total_cost}\")" ] }, { "cell_type": "markdown", "id": "ad7a3fba-9fac-4222-8f87-d1d276d27d6e", "metadata": { "tags": [] }, "source": [ "## Streaming\n", "\n", ":::{.callout-danger}\n", "\n", "`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). If you want to count tokens correctly in a streaming context, there are a number of options:\n", "\n", "- Use chat models as described in [this guide](/docs/how_to/chat_token_usage_tracking);\n", "- Implement a [custom callback handler](/docs/how_to/custom_callbacks/) that uses appropriate tokenizers to count the tokens;\n", "- Use a monitoring platform such as [LangSmith](https://www.langchain.com/langsmith).\n", ":::\n", "\n", "Note that when using legacy language models in a streaming context, token counts are not updated:" ] }, { "cell_type": "code", "execution_count": 3, "id": "cd61ed79-7858-49bb-afb5-d41291f597ba", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\n", "Why don't scientists trust atoms?\n", "\n", "Because they make up everything!\n", "\n", "Why don't scientists trust atoms?\n", "\n", "Because they make up everything.\n", "---\n", "\n", "Total Tokens: 0\n", "Prompt Tokens: 0\n", "Completion Tokens: 0\n", "Total Cost (USD): $0.0\n" ] } ], "source": [ "from langchain_community.callbacks import get_openai_callback\n", "from langchain_openai import OpenAI\n", "\n", "llm = OpenAI(model_name=\"gpt-3.5-turbo-instruct\")\n", "\n", "with get_openai_callback() as cb:\n", " for chunk in llm.stream(\"Tell me a joke\"):\n", " print(chunk, end=\"\", flush=True)\n", " print(result)\n", " print(\"---\")\n", "print()\n", "\n", "print(f\"Total Tokens: {cb.total_tokens}\")\n", "print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n", "print(f\"Completion Tokens: {cb.completion_tokens}\")\n", "print(f\"Total Cost (USD): ${cb.total_cost}\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", 
"nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/local_llms.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "b8982428", "metadata": {}, "source": [ "# Run LLMs locally\n", "\n", "## Use case\n", "\n", "The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscore the demand to run LLMs locally (on your own device).\n", "\n", "This has at least two important benefits:\n", "\n", "1. `Privacy`: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n", "2. `Cost`: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n", "\n", "## Overview\n", "\n", "Running an LLM locally requires a few things:\n", "\n", "1. `Open-source LLM`: An open-source LLM that can be freely modified and shared \n", "2. `Inference`: Ability to run this LLM on your device w/ acceptable latency\n", "\n", "### Open-source LLMs\n", "\n", "Users can now gain access to a rapidly growing set of [open-source LLMs](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better). \n", "\n", "These LLMs can be assessed across at least two dimensions (see figure):\n", " \n", "1. `Base model`: What is the base-model and how was it trained?\n", "2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n", "\n", "![Image description](../../static/img/OSS_LLM_overview.png)\n", "\n", "The relative performance of these models can be assessed using several leaderboards, including:\n", "\n", "1. [LmSys](https://chat.lmsys.org/?arena)\n", "2. [GPT4All](https://gpt4all.io/index.html)\n", "3. [HuggingFace](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)\n", "\n", "### Inference\n", "\n", "A few frameworks for this have emerged to support inference of open-source LLMs on various devices:\n", "\n", "1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n", "2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n", "3. [`Ollama`](https://ollama.ai/): Bundles model weights and environment into an app that runs on device and serves the LLM\n", "4. [`llamafile`](https://github.com/Mozilla-Ocho/llamafile): Bundles model weights and everything needed to run the model in a single file, allowing you to run the LLM locally from this file without any additional installation steps\n", "\n", "In general, these frameworks will do a few things:\n", "\n", "1. `Quantization`: Reduce the memory footprint of the raw model weights\n", "2. 
`Efficient implementation for inference`: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n", "\n", "In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n", "\n", "![Image description](../../static/img/llama-memory-weights.png)\n", "\n", "With less precision, we radically decrease the memory needed to store the LLM in memory.\n", "\n", "In addition, we can see the importance of GPU memory bandwidth [sheet](https://docs.google.com/spreadsheets/d/1OehfHHNSn66BP2h3Bxp2NJTVX97icU0GmCXF6pK23H8/edit#gid=0)!\n", "\n", "A Mac M2 Max is 5-6x faster than a M1 for inference due to the larger GPU memory bandwidth.\n", "\n", "![Image description](../../static/img/llama_t_put.png)\n", "\n", "## Quickstart\n", "\n", "[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n", " \n", "The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n", " \n", "* [Download and run](https://ollama.ai/download) the app\n", "* From command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama2`\n", "* When the app is running, all models are automatically served on `localhost:11434`\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "86178adb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "' The first man on the moon was Neil Armstrong, who landed on the moon on July 20, 1969 as part of the Apollo 11 mission. obviously.'" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.llms import Ollama\n", "\n", "llm = Ollama(model=\"llama2\")\n", "llm.invoke(\"The first man on the moon was ...\")" ] }, { "cell_type": "markdown", "id": "343ab645", "metadata": {}, "source": [ "Stream tokens as they are being generated." ] }, { "cell_type": "code", "execution_count": 40, "id": "9cd83603", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon's surface, famously declaring \"That's one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission." ] }, { "data": { "text/plain": [ "' The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he took his first steps. 
He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission.'" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n", "\n", "llm = Ollama(\n", " model=\"llama2\", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n", ")\n", "llm.invoke(\"The first man on the moon was ...\")" ] }, { "cell_type": "markdown", "id": "5cb27414", "metadata": {}, "source": [ "## Environment\n", "\n", "Inference speed is a challenge when running models locally (see above).\n", "\n", "To minimize latency, it is desirable to run models locally on GPU, which ships with many consumer laptops [e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/).\n", "\n", "And even with GPU, the available GPU memory bandwidth (as noted above) is important.\n", "\n", "### Running Apple silicon GPU\n", "\n", "`Ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n", " \n", "Other frameworks require the user to set up the environment to utilize the Apple GPU.\n", "\n", "For example, `llama.cpp` python bindings can be configured to use the GPU via [Metal](https://developer.apple.com/metal/).\n", "\n", "Metal is a graphics and compute API created by Apple providing near-direct access to the GPU. \n", "\n", "See the [`llama.cpp`](docs/integrations/llms/llamacpp) setup [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to enable this.\n", "\n", "In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n", "\n", "E.g., for me:\n", "\n", "```\n", "conda activate /Users/rlm/miniforge3/envs/llama\n", "```\n", "\n", "With the above confirmed, then:\n", "\n", "```\n", "CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir\n", "```" ] }, { "cell_type": "markdown", "id": "c382e79a", "metadata": {}, "source": [ "## LLMs\n", "\n", "There are various ways to gain access to quantized model weights.\n", "\n", "1. [`HuggingFace`](https://huggingface.co/TheBloke) - Many quantized models are available for download and can be run with frameworks such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n", "2. [`gpt4all`](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download \n", "3. 
[`Ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n", "\n", "### Ollama\n", "\n", "With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n", "\n", "* E.g., for Llama 2 7B: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n", "* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n", "* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)" ] }, { "cell_type": "code", "execution_count": 42, "id": "8ecd2f78", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "' Sure! Here\\'s the answer, broken down step by step:\\n\\nThe first man on the moon was... Neil Armstrong.\\n\\nHere\\'s how I arrived at that answer:\\n\\n1. The first manned mission to land on the moon was Apollo 11.\\n2. The mission included three astronauts: Neil Armstrong, Edwin \"Buzz\" Aldrin, and Michael Collins.\\n3. Neil Armstrong was the mission commander and the first person to set foot on the moon.\\n4. On July 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nSo, the first man on the moon was Neil Armstrong!'" ] }, "execution_count": 42, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.llms import Ollama\n", "\n", "llm = Ollama(model=\"llama2:13b\")\n", "llm.invoke(\"The first man on the moon was ... think step by step\")" ] }, { "cell_type": "markdown", "id": "07c8c0d1", "metadata": {}, "source": [ "### Llama.cpp\n", "\n", "Llama.cpp is compatible with a [broad set of models](https://github.com/ggerganov/llama.cpp).\n", "\n", "For example, below we run inference on `llama2-13b` with 4 bit quantization downloaded from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-GGML/tree/main).\n", "\n", "As noted above, see the [API reference](https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html?highlight=llamacpp#langchain.llms.llamacpp.LlamaCpp) for the full set of parameters. \n", "\n", "From the [llama.cpp API reference docs](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.llamacpp.LlamaCpp.html), a few are worth commenting on:\n", "\n", "`n_gpu_layers`: number of layers to be loaded into GPU memory\n", "\n", "* Value: 1\n", "* Meaning: Only one layer of the model will be loaded into GPU memory (1 is often sufficient).\n", "\n", "`n_batch`: number of tokens the model should process in parallel \n", "\n", "* Value: 512\n", "* Meaning: It's recommended to choose a value between 1 and n_ctx (which in this case is set to 2048)\n", "\n", "`n_ctx`: Token context window\n", "\n", "* Value: 2048\n", "* Meaning: The model will consider a window of 2048 tokens at a time\n", "\n", "`f16_kv`: whether the model should use half-precision for the key/value cache\n", "\n", "* Value: True\n", "* Meaning: The model will use half-precision, which can be more memory efficient; Metal only supports True." 
] }, { "cell_type": "code", "execution_count": null, "id": "5eba38dc", "metadata": { "vscode": { "languageId": "plaintext" } }, "outputs": [], "source": [ "%env CMAKE_ARGS=\"-DLLAMA_METAL=on\"\n", "%env FORCE_CMAKE=1\n", "%pip install --upgrade --quiet llama-cpp-python --no-cache-dir" ] }, { "cell_type": "code", "execution_count": null, "id": "a88bf0c8-e989-4bcd-bcb7-4d7757e684f2", "metadata": {}, "outputs": [], "source": [ "from langchain_community.llms import LlamaCpp\n", "from langchain_core.callbacks import CallbackManager, StreamingStdOutCallbackHandler\n", "\n", "llm = LlamaCpp(\n", " model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n", " n_gpu_layers=1,\n", " n_batch=512,\n", " n_ctx=2048,\n", " f16_kv=True,\n", " callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n", " verbose=True,\n", ")" ] }, { "cell_type": "markdown", "id": "f56f5168", "metadata": {}, "source": [ "The console log will show the following to indicate that Metal was enabled properly by the steps above:\n", "```\n", "ggml_metal_init: allocating\n", "ggml_metal_init: using MPS\n", "```" ] }, { "cell_type": "code", "execution_count": 45, "id": "7890a077", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Llama.generate: prefix-match hit\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ " and use logical reasoning to figure out who the first man on the moon was.\n", "\n", "Here are some clues:\n", "\n", "1. The first man on the moon was an American.\n", "2. He was part of the Apollo 11 mission.\n", "3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\n", "4. His last name is Armstrong.\n", "\n", "Now, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.\n", "Therefore, the first man on the moon was Neil Armstrong!" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n", "llama_print_timings: load time = 9623.21 ms\n", "llama_print_timings: sample time = 143.77 ms / 203 runs ( 0.71 ms per token, 1412.01 tokens per second)\n", "llama_print_timings: prompt eval time = 485.94 ms / 7 tokens ( 69.42 ms per token, 14.40 tokens per second)\n", "llama_print_timings: eval time = 6385.16 ms / 202 runs ( 31.61 ms per token, 31.64 tokens per second)\n", "llama_print_timings: total time = 7279.28 ms\n" ] }, { "data": { "text/plain": [ "\" and use logical reasoning to figure out who the first man on the moon was.\\n\\nHere are some clues:\\n\\n1. The first man on the moon was an American.\\n2. He was part of the Apollo 11 mission.\\n3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\\n4. His last name is Armstrong.\\n\\nNow, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. 
And finally, clue #4 gives us his last name: Armstrong.\\nTherefore, the first man on the moon was Neil Armstrong!\"" ] }, "execution_count": 45, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.invoke(\"The first man on the moon was ... Let's think step by step\")" ] }, { "cell_type": "markdown", "id": "831ddf7c", "metadata": {}, "source": [ "### GPT4All\n", "\n", "We can use model weights downloaded from the [GPT4All](/docs/integrations/llms/gpt4all) model explorer.\n", "\n", "Similar to what is shown above, we can run inference and use [the API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gpt4all.GPT4All.html) to set parameters of interest." ] }, { "cell_type": "code", "execution_count": null, "id": "e27baf6e", "metadata": {}, "outputs": [], "source": [ "%pip install gpt4all" ] }, { "cell_type": "code", "execution_count": null, "id": "915ecd4c-8f6b-4de3-a787-b64cb7c682b4", "metadata": {}, "outputs": [], "source": [ "from langchain_community.llms import GPT4All\n", "\n", "llm = GPT4All(\n", " model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\"\n", ")" ] }, { "cell_type": "code", "execution_count": 47, "id": "e3d4526f", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\".\\n1) The United States decides to send a manned mission to the moon.2) They choose their best astronauts and train them for this specific mission.3) They build a spacecraft that can take humans to the moon, called the Lunar Module (LM).4) They also create a larger spacecraft, called the Saturn V rocket, which will launch both the LM and the Command Service Module (CSM), which will carry the astronauts into orbit.5) The mission is planned down to the smallest detail: from the trajectory of the rockets to the exact movements of the astronauts during their moon landing.6) On July 16, 1969, the Saturn V rocket launches from Kennedy Space Center in Florida, carrying the Apollo 11 mission crew into space.7) After one and a half orbits around the Earth, the LM separates from the CSM and begins its descent to the moon's surface.8) On July 20, 1969, at 2:56 pm EDT (GMT-4), Neil Armstrong becomes the first man on the moon. He speaks these\"" ] }, "execution_count": 47, "metadata": {}, "output_type": "execute_result" } ], "source": [ "llm.invoke(\"The first man on the moon was ... Let's think step by step\")" ] }, { "cell_type": "markdown", "id": "056854e2-5e4b-4a03-be7e-03192e5c4e1e", "metadata": {}, "source": [ "### llamafile\n", "\n", "One of the simplest ways to run an LLM locally is using a [llamafile](https://github.com/Mozilla-Ocho/llamafile). All you need to do is:\n", "\n", "1) Download a llamafile from [HuggingFace](https://huggingface.co/models?other=llamafile)\n", "2) Make the file executable\n", "3) Run the file\n", "\n", "llamafiles bundle model weights and a [specially-compiled](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#technical-details) version of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) into a single file that can run on most computers without any additional dependencies. They also come with an embedded inference server that provides an [API](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md#api-endpoints) for interacting with your model. 
\n", "\n", "Here's a simple bash script that shows all 3 setup steps:\n", "\n", "```bash\n", "# Download a llamafile from HuggingFace\n", "wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n", "\n", "# Make the file executable. On Windows, instead just rename the file to end in \".exe\".\n", "chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n", "\n", "# Start the model server. Listens at http://localhost:8080 by default.\n", "./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser\n", "```\n", "\n", "After you run the above setup steps, you can use LangChain to interact with your model:" ] }, { "cell_type": "code", "execution_count": 1, "id": "002e655c-ba18-4db3-ac7b-f33e825d14b6", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"\\nFirstly, let's imagine the scene where Neil Armstrong stepped onto the moon. This happened in 1969. The first man on the moon was Neil Armstrong. We already know that.\\n2nd, let's take a step back. Neil Armstrong didn't have any special powers. He had to land his spacecraft safely on the moon without injuring anyone or causing any damage. If he failed to do this, he would have been killed along with all those people who were on board the spacecraft.\\n3rd, let's imagine that Neil Armstrong successfully landed his spacecraft on the moon and made it back to Earth safely. The next step was for him to be hailed as a hero by his people back home. It took years before Neil Armstrong became an American hero.\\n4th, let's take another step back. Let's imagine that Neil Armstrong wasn't hailed as a hero, and instead, he was just forgotten. This happened in the 1970s. Neil Armstrong wasn't recognized for his remarkable achievement on the moon until after he died.\\n5th, let's take another step back. Let's imagine that Neil Armstrong didn't die in the 1970s and instead, lived to be a hundred years old. This happened in 2036. In the year 2036, Neil Armstrong would have been a centenarian.\\nNow, let's think about the present. Neil Armstrong is still alive. He turned 95 years old on July 20th, 2018. If he were to die now, his achievement of becoming the first human being to set foot on the moon would remain an unforgettable moment in history.\\nI hope this helps you understand the significance and importance of Neil Armstrong's achievement on the moon!\"" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.llms.llamafile import Llamafile\n", "\n", "llm = Llamafile()\n", "\n", "llm.invoke(\"The first man on the moon was ... Let's think step by step.\")" ] }, { "cell_type": "markdown", "id": "6b84e543", "metadata": {}, "source": [ "## Prompts\n", "\n", "Some LLMs will benefit from specific prompts.\n", "\n", "For example, LLaMA will use [special tokens](https://twitter.com/RLanceMartin/status/1681879318493003776?s=20).\n", "\n", "We can use `ConditionalPromptSelector` to set prompt based on the model type." 
] }, { "cell_type": "code", "execution_count": null, "id": "16759b7c-7903-4269-b7b4-f83b313d8091", "metadata": {}, "outputs": [], "source": [ "# Set our LLM\n", "llm = LlamaCpp(\n", " model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n", " n_gpu_layers=1,\n", " n_batch=512,\n", " n_ctx=2048,\n", " f16_kv=True,\n", " callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n", " verbose=True,\n", ")" ] }, { "cell_type": "markdown", "id": "66656084", "metadata": {}, "source": [ "Set the associated prompt based upon the model version." ] }, { "cell_type": "code", "execution_count": 58, "id": "8555f5bf", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='<<SYS>> \\n You are an assistant tasked with improving Google search results. \\n <</SYS>> \\n\\n [INST] Generate THREE Google search queries that are similar to this question. The output should be a numbered list of questions and each should have a question mark at the end: \\n\\n {question} [/INST]', template_format='f-string', validate_template=True)" ] }, "execution_count": 58, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain.chains import LLMChain\n", "from langchain.chains.prompt_selector import ConditionalPromptSelector\n", "from langchain_core.prompts import PromptTemplate\n", "\n", "DEFAULT_LLAMA_SEARCH_PROMPT = PromptTemplate(\n", " input_variables=[\"question\"],\n", " template=\"\"\"<<SYS>> \\n You are an assistant tasked with improving Google search \\\n", "results. \\n <</SYS>> \\n\\n [INST] Generate THREE Google search queries that \\\n", "are similar to this question. The output should be a numbered list of questions \\\n", "and each should have a question mark at the end: \\n\\n {question} [/INST]\"\"\",\n", ")\n", "\n", "DEFAULT_SEARCH_PROMPT = PromptTemplate(\n", " input_variables=[\"question\"],\n", " template=\"\"\"You are an assistant tasked with improving Google search \\\n", "results. Generate THREE Google search queries that are similar to \\\n", "this question. The output should be a numbered list of questions and each \\\n", "should have a question mark at the end: {question}\"\"\",\n", ")\n", "\n", "QUESTION_PROMPT_SELECTOR = ConditionalPromptSelector(\n", " default_prompt=DEFAULT_SEARCH_PROMPT,\n", " conditionals=[(lambda llm: isinstance(llm, LlamaCpp), DEFAULT_LLAMA_SEARCH_PROMPT)],\n", ")\n", "\n", "prompt = QUESTION_PROMPT_SELECTOR.get_prompt(llm)\n", "prompt" ] }, { "cell_type": "code", "execution_count": 59, "id": "d0aedfd2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " Sure! Here are three similar search queries with a question mark at the end:\n", "\n", "1. Which NBA team did LeBron James lead to a championship in the year he was drafted?\n", "2. Who won the Grammy Awards for Best New Artist and Best Female Pop Vocal Performance in the same year that Lady Gaga was born?\n", "3. What MLB team did Babe Ruth play for when he hit 60 home runs in a single season?" 
] }, { "name": "stderr", "output_type": "stream", "text": [ "\n", "llama_print_timings: load time = 14943.19 ms\n", "llama_print_timings: sample time = 72.93 ms / 101 runs ( 0.72 ms per token, 1384.87 tokens per second)\n", "llama_print_timings: prompt eval time = 14942.95 ms / 93 tokens ( 160.68 ms per token, 6.22 tokens per second)\n", "llama_print_timings: eval time = 3430.85 ms / 100 runs ( 34.31 ms per token, 29.15 tokens per second)\n", "llama_print_timings: total time = 18578.26 ms\n" ] }, { "data": { "text/plain": [ "' Sure! Here are three similar search queries with a question mark at the end:\\n\\n1. Which NBA team did LeBron James lead to a championship in the year he was drafted?\\n2. Who won the Grammy Awards for Best New Artist and Best Female Pop Vocal Performance in the same year that Lady Gaga was born?\\n3. What MLB team did Babe Ruth play for when he hit 60 home runs in a single season?'" ] }, "execution_count": 59, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Chain\n", "llm_chain = LLMChain(prompt=prompt, llm=llm)\n", "question = \"What NFL team won the Super Bowl in the year that Justin Bieber was born?\"\n", "llm_chain.run({\"question\": question})" ] }, { "cell_type": "markdown", "id": "6e0d37e7-f1d9-4848-bf2c-c22392ee141f", "metadata": {}, "source": [ "We also can use the LangChain Prompt Hub to fetch and / or store prompts that are model specific.\n", "\n", "This will work with your [LangSmith API key](https://docs.smith.langchain.com/).\n", "\n", "For example, [here](https://smith.langchain.com/hub/rlm/rag-prompt-llama) is a prompt for RAG with LLaMA-specific tokens." ] }, { "cell_type": "markdown", "id": "6ba66260", "metadata": {}, "source": [ "## Use cases\n", "\n", "Given an `llm` created from one of the models above, you can use it for [many use cases](/docs/how_to#use-cases).\n", "\n", "For example, here is a guide to [RAG](/docs/tutorials/local_rag) with local LLMs.\n", "\n", "In general, use cases for local LLMs can be driven by at least two factors:\n", "\n", "* `Privacy`: private data (e.g., journals, etc) that a user does not want to share \n", "* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n", "\n", "In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open-source LLMs." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.7" } }, "nbformat": 4, "nbformat_minor": 5 }
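As a closing illustration (not part of the original notebook), any of the local LLM objects above is a LangChain Runnable, so it can be dropped into a small LCEL pipeline. The sketch below assumes the Ollama app is running and that `llama2` has been pulled as described in the Quickstart; swap in `LlamaCpp`, `GPT4All`, or `Llamafile` without changing the rest of the chain.

```python
# A minimal sketch: a local model composed into an LCEL summarization chain.
# Assumes Ollama is serving on localhost:11434 and `llama2` has been pulled.
from langchain_community.llms import Ollama
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

llm = Ollama(model="llama2")

prompt = PromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# prompt -> local model -> plain string output
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain is a framework for building applications powered by LLMs."}))
```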
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/logprobs.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "78b45321-7740-4399-b2ad-459811131de3", "metadata": {}, "source": [ "# How to get log probabilities\n", "\n", ":::info Prerequisites\n", "\n", "This guide assumes familiarity with the following concepts:\n", "- [Chat models](/docs/concepts/#chat-models)\n", "\n", ":::\n", "\n", "Certain chat models can be configured to return token-level log probabilities representing the likelihood of a given token. This guide walks through how to get this information in LangChain." ] }, { "cell_type": "markdown", "id": "7f5016bf-2a7b-4140-9b80-8c35c7e5c0d5", "metadata": {}, "source": [ "## OpenAI\n", "\n", "Install the LangChain x OpenAI package and set your API key" ] }, { "cell_type": "code", "execution_count": null, "id": "fe5143fe-84d3-4a91-bae8-629807bbe2cb", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-openai" ] }, { "cell_type": "code", "execution_count": 2, "id": "fd1a2bff-7ac8-46cb-ab95-72c616b45f2c", "metadata": {}, "outputs": [], "source": [ "import getpass\n", "import os\n", "\n", "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()" ] }, { "cell_type": "markdown", "id": "f88ffa0d-f4a7-482c-88de-cbec501a79b1", "metadata": {}, "source": [ "For the OpenAI API to return log probabilities we need to configure the `logprobs=True` param. Then, the logprobs are included on each output [`AIMessage`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html) as part of the `response_metadata`:" ] }, { "cell_type": "code", "execution_count": 3, "id": "d1bf0a9a-e402-4931-ab53-32899f8e0326", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[{'token': 'I', 'bytes': [73], 'logprob': -0.26341408, 'top_logprobs': []},\n", " {'token': \"'m\",\n", " 'bytes': [39, 109],\n", " 'logprob': -0.48584133,\n", " 'top_logprobs': []},\n", " {'token': ' just',\n", " 'bytes': [32, 106, 117, 115, 116],\n", " 'logprob': -0.23484154,\n", " 'top_logprobs': []},\n", " {'token': ' a',\n", " 'bytes': [32, 97],\n", " 'logprob': -0.0018291725,\n", " 'top_logprobs': []},\n", " {'token': ' computer',\n", " 'bytes': [32, 99, 111, 109, 112, 117, 116, 101, 114],\n", " 'logprob': -0.052299336,\n", " 'top_logprobs': []}]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_openai import ChatOpenAI\n", "\n", "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\").bind(logprobs=True)\n", "\n", "msg = llm.invoke((\"human\", \"how are you today\"))\n", "\n", "msg.response_metadata[\"logprobs\"][\"content\"][:5]" ] }, { "cell_type": "markdown", "id": "d1ee1c29-d27e-4353-8c3c-2ed7e7f95ff5", "metadata": {}, "source": [ "And are part of streamed Message chunks as well:" ] }, { "cell_type": "code", "execution_count": 4, "id": "4bfaf309-3b23-43b7-b333-01fc4848992d", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[]\n", "[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}]\n", "[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}, {'token': \"'m\", 'bytes': [39, 109], 'logprob': -0.3238896, 'top_logprobs': []}]\n", "[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}, {'token': \"'m\", 'bytes': [39, 109], 'logprob': -0.3238896, 'top_logprobs': []}, {'token': ' just', 'bytes': [32, 106, 117, 115, 116], 'logprob': -0.23778509, 'top_logprobs': []}]\n", "[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}, {'token': \"'m\", 'bytes': [39, 109], 'logprob': 
-0.3238896, 'top_logprobs': []}, {'token': ' just', 'bytes': [32, 106, 117, 115, 116], 'logprob': -0.23778509, 'top_logprobs': []}, {'token': ' a', 'bytes': [32, 97], 'logprob': -0.0022134194, 'top_logprobs': []}]\n" ] } ], "source": [ "ct = 0\n", "full = None\n", "for chunk in llm.stream((\"human\", \"how are you today\")):\n", " if ct < 5:\n", " full = chunk if full is None else full + chunk\n", " if \"logprobs\" in full.response_metadata:\n", " print(full.response_metadata[\"logprobs\"][\"content\"])\n", " else:\n", " break\n", " ct += 1" ] }, { "cell_type": "markdown", "id": "19766435", "metadata": {}, "source": [ "## Next steps\n", "\n", "You've now learned how to get logprobs from OpenAI models in LangChain.\n", "\n", "Next, check out the other how-to guides on chat models in this section, like [how to get a model to return structured output](/docs/how_to/structured_output) or [how to track token usage](/docs/how_to/chat_token_usage_tracking)." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
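Since the `logprob` values shown above are natural logarithms, a common follow-up is converting them back to probabilities or scoring a whole completion. A minimal sketch (not part of the original guide), reusing the `llm` bound with `logprobs=True` above:

```python
import math

# `llm` is the ChatOpenAI model bound with logprobs=True from earlier.
msg = llm.invoke(("human", "how are you today"))
token_logprobs = msg.response_metadata["logprobs"]["content"]

# Per-token probabilities (logprobs are natural logs, so exponentiate).
for entry in token_logprobs[:5]:
    print(f"{entry['token']!r}: p ~= {math.exp(entry['logprob']):.3f}")

# Log-probability of the full completion is the sum of per-token logprobs.
sequence_logprob = sum(entry["logprob"] for entry in token_logprobs)
print("sequence logprob:", sequence_logprob)
```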
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/long_context_reorder.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "fc0db1bc", "metadata": {}, "source": [ "# How to reorder retrieved results to mitigate the \"lost in the middle\" effect\n", "\n", "Substantial performance degradations in [RAG](/docs/tutorials/rag) applications have been [documented](https://arxiv.org/abs/2307.03172) as the number of retrieved documents grows (e.g., beyond ten). In brief: models are liable to miss relevant information in the middle of long contexts.\n", "\n", "By contrast, queries against vector stores will typically return documents in descending order of relevance (e.g., as measured by cosine similarity of [embeddings](/docs/concepts/#embedding-models)).\n", "\n", "To mitigate the [\"lost in the middle\"](https://arxiv.org/abs/2307.03172) effect, you can re-order documents after retrieval such that the most relevant documents are positioned at extrema (e.g., the first and last pieces of context), and the least relevant documents are positioned in the middle. In some cases this can help surface the most relevant information to LLMs.\n", "\n", "The [LongContextReorder](https://api.python.langchain.com/en/latest/document_transformers/langchain_community.document_transformers.long_context_reorder.LongContextReorder.html) document transformer implements this re-ordering procedure. Below we demonstrate an example." ] }, { "cell_type": "code", "execution_count": null, "id": "2074fdaa-edff-468a-970f-6f5f26e93d4a", "metadata": {}, "outputs": [], "source": [ "%pip install --upgrade --quiet sentence-transformers langchain-chroma langchain langchain-openai langchain-huggingface > /dev/null" ] }, { "cell_type": "markdown", "id": "c97eaaf2-34b7-4770-9949-e1abc4ca5226", "metadata": {}, "source": [ "First we embed some artificial documents and index them in an (in-memory) [Chroma](/docs/integrations/providers/chroma/) vector store. We will use [Hugging Face](/docs/integrations/text_embedding/huggingfacehub/) embeddings, but any LangChain vector store or embeddings model will suffice." ] }, { "cell_type": "code", "execution_count": 2, "id": "49cbcd8e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='This is a document about the Boston Celtics'),\n", " Document(page_content='The Celtics are my favourite team.'),\n", " Document(page_content='L. 
Kornet is one of the best Celtics players.'),\n", " Document(page_content='The Boston Celtics won the game by 20 points'),\n", " Document(page_content='Larry Bird was an iconic NBA player.'),\n", " Document(page_content='Elden Ring is one of the best games in the last 15 years.'),\n", " Document(page_content='Basquetball is a great sport.'),\n", " Document(page_content='I simply love going to the movies'),\n", " Document(page_content='Fly me to the moon is one of my favourite songs.'),\n", " Document(page_content='This is just a random text.')]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_chroma import Chroma\n", "from langchain_huggingface import HuggingFaceEmbeddings\n", "\n", "# Get embeddings.\n", "embeddings = HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n", "\n", "texts = [\n", " \"Basquetball is a great sport.\",\n", " \"Fly me to the moon is one of my favourite songs.\",\n", " \"The Celtics are my favourite team.\",\n", " \"This is a document about the Boston Celtics\",\n", " \"I simply love going to the movies\",\n", " \"The Boston Celtics won the game by 20 points\",\n", " \"This is just a random text.\",\n", " \"Elden Ring is one of the best games in the last 15 years.\",\n", " \"L. Kornet is one of the best Celtics players.\",\n", " \"Larry Bird was an iconic NBA player.\",\n", "]\n", "\n", "# Create a retriever\n", "retriever = Chroma.from_texts(texts, embedding=embeddings).as_retriever(\n", " search_kwargs={\"k\": 10}\n", ")\n", "query = \"What can you tell me about the Celtics?\"\n", "\n", "# Get relevant documents ordered by relevance score\n", "docs = retriever.invoke(query)\n", "docs" ] }, { "cell_type": "markdown", "id": "175d031a-43fa-42f4-93c4-2ba52c3c3ee5", "metadata": {}, "source": [ "Note that documents are returned in descending order of relevance to the query. The `LongContextReorder` document transformer will implement the re-ordering described above:" ] }, { "cell_type": "code", "execution_count": 3, "id": "9a1181f2-a3dc-4614-9233-2196ab65939e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='The Celtics are my favourite team.'),\n", " Document(page_content='The Boston Celtics won the game by 20 points'),\n", " Document(page_content='Elden Ring is one of the best games in the last 15 years.'),\n", " Document(page_content='I simply love going to the movies'),\n", " Document(page_content='This is just a random text.'),\n", " Document(page_content='Fly me to the moon is one of my favourite songs.'),\n", " Document(page_content='Basquetball is a great sport.'),\n", " Document(page_content='Larry Bird was an iconic NBA player.'),\n", " Document(page_content='L. 
Kornet is one of the best Celtics players.'),\n", " Document(page_content='This is a document about the Boston Celtics')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from langchain_community.document_transformers import LongContextReorder\n", "\n", "# Reorder the documents:\n", "# Less relevant document will be at the middle of the list and more\n", "# relevant elements at beginning / end.\n", "reordering = LongContextReorder()\n", "reordered_docs = reordering.transform_documents(docs)\n", "\n", "# Confirm that the 4 relevant documents are at beginning and end.\n", "reordered_docs" ] }, { "cell_type": "markdown", "id": "a8d2ef0c-c397-4d8d-8118-3f7acf86d241", "metadata": {}, "source": [ "Below, we show how to incorporate the re-ordered documents into a simple question-answering chain:" ] }, { "cell_type": "code", "execution_count": 5, "id": "8bbea705-d5b9-4ed5-9957-e12547283622", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "The Celtics are a professional basketball team and one of the most iconic franchises in the NBA. They are highly regarded and have a large fan base. The team has had many successful seasons and is often considered one of the top teams in the league. They have a strong history and have produced many great players, such as Larry Bird and L. Kornet. The team is based in Boston and is often referred to as the Boston Celtics.\n" ] } ], "source": [ "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain_core.prompts import PromptTemplate\n", "from langchain_openai import OpenAI\n", "\n", "llm = OpenAI()\n", "\n", "prompt_template = \"\"\"\n", "Given these texts:\n", "-----\n", "{context}\n", "-----\n", "Please answer the following question:\n", "{query}\n", "\"\"\"\n", "\n", "prompt = PromptTemplate(\n", " template=prompt_template,\n", " input_variables=[\"context\", \"query\"],\n", ")\n", "\n", "# Create and invoke the chain:\n", "chain = create_stuff_documents_chain(llm, prompt)\n", "response = chain.invoke({\"context\": reordered_docs, \"query\": query})\n", "print(response)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
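If you want the re-ordering applied automatically on every query, one option (a sketch, not part of the original guide) is to fold the transformer into the retrieval chain itself. This assumes the `retriever`, `prompt`, and `llm` objects defined earlier in the guide:

```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.document_transformers import LongContextReorder
from langchain_core.runnables import RunnableLambda, RunnablePassthrough

# Wrap the transformer so it can sit inside an LCEL pipeline.
reorder = RunnableLambda(lambda docs: LongContextReorder().transform_documents(docs))

# Retrieve -> reorder -> stuff documents into the prompt -> LLM.
qa_chain = {
    "context": retriever | reorder,
    "query": RunnablePassthrough(),
} | create_stuff_documents_chain(llm, prompt)

print(qa_chain.invoke("What can you tell me about the Celtics?"))
```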
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/markdown_header_metadata_splitter.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "70e9b619", "metadata": {}, "source": [ "# How to split Markdown by Headers\n", "\n", "### Motivation\n", "\n", "Many chat or Q+A applications involve chunking input documents prior to embedding and vector storage.\n", "\n", "[These notes](https://www.pinecone.io/learn/chunking-strategies/) from Pinecone provide some useful tips:\n", "\n", "```\n", "When a full paragraph or document is embedded, the embedding process considers both the overall context and the relationships between the sentences and phrases within the text. This can result in a more comprehensive vector representation that captures the broader meaning and themes of the text.\n", "```\n", " \n", "As mentioned, chunking often aims to keep text with common context together. With this in mind, we might want to specifically honor the structure of the document itself. For example, a markdown file is organized by headers. Creating chunks within specific header groups is an intuitive idea. To address this challenge, we can use [MarkdownHeaderTextSplitter](https://api.python.langchain.com/en/latest/markdown/langchain_text_splitters.markdown.MarkdownHeaderTextSplitter.html). This will split a markdown file by a specified set of headers. \n", "\n", "For example, if we want to split this markdown:\n", "```\n", "md = '# Foo\\n\\n ## Bar\\n\\nHi this is Jim \\nHi this is Joe\\n\\n ## Baz\\n\\n Hi this is Molly' \n", "```\n", " \n", "We can specify the headers to split on:\n", "```\n", "[(\"#\", \"Header 1\"),(\"##\", \"Header 2\")]\n", "```\n", "\n", "And content is grouped or split by common headers:\n", "```\n", "{'content': 'Hi this is Jim \\nHi this is Joe', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Bar'}}\n", "{'content': 'Hi this is Molly', 'metadata': {'Header 1': 'Foo', 'Header 2': 'Baz'}}\n", "```\n", "\n", "Let's have a look at some examples below.\n", "\n", "### Basic usage:" ] }, { "cell_type": "code", "execution_count": null, "id": "0cd11819-4d4e-4fc1-aa85-faf69d24db89", "metadata": {}, "outputs": [], "source": [ "%pip install -qU langchain-text-splitters" ] }, { "cell_type": "code", "execution_count": 1, "id": "ceb3c1fb", "metadata": { "ExecuteTime": { "end_time": "2023-09-25T19:12:27.243781300Z", "start_time": "2023-09-25T19:12:24.943559400Z" } }, "outputs": [], "source": [ "from langchain_text_splitters import MarkdownHeaderTextSplitter" ] }, { "cell_type": "code", "execution_count": 2, "id": "2ae3649b", "metadata": { "ExecuteTime": { "end_time": "2023-09-25T19:12:31.917013600Z", "start_time": "2023-09-25T19:12:31.905694500Z" } }, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Hi this is Jim \\nHi this is Joe', metadata={'Header 1': 'Foo', 'Header 2': 'Bar'}),\n", " Document(page_content='Hi this is Lance', metadata={'Header 1': 'Foo', 'Header 2': 'Bar', 'Header 3': 'Boo'}),\n", " Document(page_content='Hi this is Molly', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'})]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "markdown_document = \"# Foo\\n\\n ## Bar\\n\\nHi this is Jim\\n\\nHi this is Joe\\n\\n ### Boo \\n\\n Hi this is Lance \\n\\n ## Baz\\n\\n Hi this is Molly\"\n", "\n", "headers_to_split_on = [\n", " (\"#\", \"Header 1\"),\n", " (\"##\", \"Header 2\"),\n", " (\"###\", \"Header 3\"),\n", "]\n", "\n", "markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on)\n", "md_header_splits = markdown_splitter.split_text(markdown_document)\n", "md_header_splits" ] }, { "cell_type": "code", 
"execution_count": 3, "id": "aac1738c", "metadata": { "ExecuteTime": { "end_time": "2023-09-25T19:12:35.672077100Z", "start_time": "2023-09-25T19:12:35.666731400Z" } }, "outputs": [ { "data": { "text/plain": [ "langchain_core.documents.base.Document" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "type(md_header_splits[0])" ] }, { "cell_type": "markdown", "id": "102aad57-7bef-42d3-ab4e-b50d6dc11718", "metadata": {}, "source": [ "By default, `MarkdownHeaderTextSplitter` strips headers being split on from the output chunk's content. This can be disabled by setting `strip_headers = False`." ] }, { "cell_type": "code", "execution_count": 4, "id": "9fce45ba-a4be-4a69-ad27-f5ff195c4fd7", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='# Foo \\n## Bar \\nHi this is Jim \\nHi this is Joe', metadata={'Header 1': 'Foo', 'Header 2': 'Bar'}),\n", " Document(page_content='### Boo \\nHi this is Lance', metadata={'Header 1': 'Foo', 'Header 2': 'Bar', 'Header 3': 'Boo'}),\n", " Document(page_content='## Baz \\nHi this is Molly', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'})]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on, strip_headers=False)\n", "md_header_splits = markdown_splitter.split_text(markdown_document)\n", "md_header_splits" ] }, { "cell_type": "markdown", "id": "aa67e0cc-d721-4536-9c7a-9fa3a7a69cbe", "metadata": {}, "source": [ "### How to return Markdown lines as separate documents\n", "\n", "By default, `MarkdownHeaderTextSplitter` aggregates lines based on the headers specified in `headers_to_split_on`. We can disable this by specifying `return_each_line`:" ] }, { "cell_type": "code", "execution_count": 5, "id": "940bb609-c9c3-4593-ac2d-d825c80ceb44", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='Hi this is Jim', metadata={'Header 1': 'Foo', 'Header 2': 'Bar'}),\n", " Document(page_content='Hi this is Joe', metadata={'Header 1': 'Foo', 'Header 2': 'Bar'}),\n", " Document(page_content='Hi this is Lance', metadata={'Header 1': 'Foo', 'Header 2': 'Bar', 'Header 3': 'Boo'}),\n", " Document(page_content='Hi this is Molly', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'})]" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "markdown_splitter = MarkdownHeaderTextSplitter(\n", " headers_to_split_on,\n", " return_each_line=True,\n", ")\n", "md_header_splits = markdown_splitter.split_text(markdown_document)\n", "md_header_splits" ] }, { "cell_type": "markdown", "id": "9bd8977a", "metadata": {}, "source": [ "Note that here header information is retained in the `metadata` for each document.\n", "\n", "### How to constrain chunk size:\n", "\n", "Within each markdown group we can then apply any text splitter we want, such as `RecursiveCharacterTextSplitter`, which allows for further control of the chunk size." ] }, { "cell_type": "code", "execution_count": 6, "id": "6f1f62bf-2653-4361-9bb0-964d86cb14db", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[Document(page_content='# Intro \\n## History \\nMarkdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. 
John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9]', metadata={'Header 1': 'Intro', 'Header 2': 'History'}),\n", " Document(page_content='Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files.', metadata={'Header 1': 'Intro', 'Header 2': 'History'}),\n", " Document(page_content='## Rise and divergence \\nAs Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \\nadditional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks.', metadata={'Header 1': 'Intro', 'Header 2': 'Rise and divergence'}),\n", " Document(page_content='#### Standardization \\nFrom 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort.', metadata={'Header 1': 'Intro', 'Header 2': 'Rise and divergence'}),\n", " Document(page_content='## Implementations \\nImplementations of Markdown are available for over a dozen programming languages.', metadata={'Header 1': 'Intro', 'Header 2': 'Implementations'})]" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "markdown_document = \"# Intro \\n\\n ## History \\n\\n Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9] \\n\\n Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files. \\n\\n ## Rise and divergence \\n\\n As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for \\n\\n additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks. \\n\\n #### Standardization \\n\\n From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterised as a standardisation effort. \\n\\n ## Implementations \\n\\n Implementations of Markdown are available for over a dozen programming languages.\"\n", "\n", "headers_to_split_on = [\n", " (\"#\", \"Header 1\"),\n", " (\"##\", \"Header 2\"),\n", "]\n", "\n", "# MD splits\n", "markdown_splitter = MarkdownHeaderTextSplitter(\n", " headers_to_split_on=headers_to_split_on, strip_headers=False\n", ")\n", "md_header_splits = markdown_splitter.split_text(markdown_document)\n", "\n", "# Char-level splits\n", "from langchain_text_splitters import RecursiveCharacterTextSplitter\n", "\n", "chunk_size = 250\n", "chunk_overlap = 30\n", "text_splitter = RecursiveCharacterTextSplitter(\n", " chunk_size=chunk_size, chunk_overlap=chunk_overlap\n", ")\n", "\n", "# Split\n", "splits = text_splitter.split_documents(md_header_splits)\n", "splits" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.4" } }, "nbformat": 4, "nbformat_minor": 5 }
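In practice, the input usually comes from a Markdown file on disk rather than an inline string. A small sketch of that workflow (the `README.md` path is illustrative only, not from the guide above):

```python
from pathlib import Path

from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)

# Split a Markdown file from disk; each chunk keeps its header path in metadata.
splits = splitter.split_text(Path("README.md").read_text())

for doc in splits:
    print(doc.metadata, "->", len(doc.page_content), "characters")
```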
Wed, 26 Jun 2024 13:15:51 GMT
https://github.com/langchain-ai/langchain/blob/master/docs/docs/how_to/merge_message_runs.ipynb
{ "cells": [ { "cell_type": "markdown", "id": "ac47bfab-0f4f-42ce-8bb6-898ef22a0338", "metadata": {}, "source": [ "# How to merge consecutive messages of the same type\n", "\n", "Certain models do not support passing in consecutive messages of the same type (a.k.a. \"runs\" of the same message type).\n", "\n", "The `merge_message_runs` utility makes it easy to merge consecutive messages of the same type.\n", "\n", "## Basic usage" ] }, { "cell_type": "code", "execution_count": 1, "id": "1a215bbb-c05c-40b0-a6fd-d94884d517df", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\")\n", "\n", "HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways'])\n", "\n", "AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')\n" ] } ], "source": [ "from langchain_core.messages import (\n", " AIMessage,\n", " HumanMessage,\n", " SystemMessage,\n", " merge_message_runs,\n", ")\n", "\n", "messages = [\n", " SystemMessage(\"you're a good assistant.\"),\n", " SystemMessage(\"you always respond with a joke.\"),\n", " HumanMessage([{\"type\": \"text\", \"text\": \"i wonder why it's called langchain\"}]),\n", " HumanMessage(\"and who is harrison chasing anyways\"),\n", " AIMessage(\n", " 'Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!'\n", " ),\n", " AIMessage(\"Why, he's probably chasing after the last cup of coffee in the office!\"),\n", "]\n", "\n", "merged = merge_message_runs(messages)\n", "print(\"\\n\\n\".join([repr(x) for x in merged]))" ] }, { "cell_type": "markdown", "id": "0544c811-7112-4b76-8877-cc897407c738", "metadata": {}, "source": [ "Notice that if the contents of one of the messages to merge is a list of content blocks then the merged message will have a list of content blocks. And if both messages to merge have string contents then those are concatenated with a newline character." ] }, { "cell_type": "markdown", "id": "1b2eee74-71c8-4168-b968-bca580c25d18", "metadata": {}, "source": [ "## Chaining\n", "\n", "`merge_message_runs` can be used imperatively (like above) or declaratively, making it easy to compose with other components in a chain:" ] }, { "cell_type": "code", "execution_count": 3, "id": "6d5a0283-11f8-435b-b27b-7b18f7693592", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "AIMessage(content=[], response_metadata={'id': 'msg_01D6R8Naum57q8qBau9vLBUX', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 84, 'output_tokens': 3}}, id='run-ac0c465b-b54f-4b8b-9295-e5951250d653-0', usage_metadata={'input_tokens': 84, 'output_tokens': 3, 'total_tokens': 87})" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# pip install -U langchain-anthropic\n", "from langchain_anthropic import ChatAnthropic\n", "\n", "llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\", temperature=0)\n", "# Notice we don't pass in messages. 
This creates\n", "# a RunnableLambda that takes messages as input\n", "merger = merge_message_runs()\n", "chain = merger | llm\n", "chain.invoke(messages)" ] }, { "cell_type": "markdown", "id": "72e90dce-693c-4842-9526-ce6460fe956b", "metadata": {}, "source": [ "Looking at the LangSmith trace we can see that before the messages are passed to the model they are merged: https://smith.langchain.com/public/ab558677-cac9-4c59-9066-1ecce5bcd87c/r\n", "\n", "Looking at just the merger, we can see that it's a Runnable object that can be invoked like all Runnables:" ] }, { "cell_type": "code", "execution_count": 4, "id": "460817a6-c327-429d-958e-181a8c46059c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[SystemMessage(content=\"you're a good assistant.\\nyou always respond with a joke.\"),\n", " HumanMessage(content=[{'type': 'text', 'text': \"i wonder why it's called langchain\"}, 'and who is harrison chasing anyways']),\n", " AIMessage(content='Well, I guess they thought \"WordRope\" and \"SentenceString\" just didn\\'t have the same ring to it!\\nWhy, he\\'s probably chasing after the last cup of coffee in the office!')]" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "merger.invoke(messages)" ] }, { "cell_type": "markdown", "id": "4548d916-ce21-4dc6-8f19-eedb8003ace6", "metadata": {}, "source": [ "## API reference\n", "\n", "For a complete description of all arguments head to the API reference: https://api.python.langchain.com/en/latest/messages/langchain_core.messages.utils.merge_message_runs.html" ] } ], "metadata": { "kernelspec": { "display_name": "poetry-venv-2", "language": "python", "name": "poetry-venv-2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.1" } }, "nbformat": 4, "nbformat_minor": 5 }
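Beyond one-off calls, the merger is also handy inside a small chat loop that accumulates history. Below is a hypothetical helper (a sketch, not from the guide above) that merges same-type runs before every model call; `llm` can be any LangChain chat model, for example the `ChatAnthropic` instance defined earlier:

```python
from langchain_core.messages import HumanMessage, SystemMessage, merge_message_runs

# Two consecutive SystemMessages: some providers would reject this run as-is.
history = [
    SystemMessage("you're a good assistant."),
    SystemMessage("you always respond with a joke."),
]


def ask(question: str):
    """Append a user turn, merge same-type runs, and call the model."""
    history.append(HumanMessage(question))
    response = llm.invoke(merge_message_runs(history))
    history.append(response)
    return response


print(ask("i wonder why it's called langchain").content)
```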
Wed, 26 Jun 2024 13:15:51 GMT