8d4cff8ae975-2
cache_path = f'{file_prefix}_{i}.txt'
cache_obj.init(
    pre_embedding_func=get_prompt,
    data_manager=get_data_manager(data_path=cache_path),
)
i += 1

langchain.llm_cache = GPTCache(init_gptcache_map)

%%time
# The first time, it is not yet in cache, so it should take longer
llm("Tell me a joke")...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
8d4cff8ae975-3
onnx = Onnx()
cache_base = CacheBase('sqlite')
vector_base = VectorBase('faiss', dimension=onnx.dimension)
data_manager = get_data_manager(cache_base, vector_base, max_size=10, clean_size=2)
cache_obj.init(
    pre_embedding_func=get_prompt,
    embedding_func=onnx.to_embeddings,
    data_ma...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
8d4cff8ae975-4
# from langchain.cache import SQLAlchemyCache
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")
# langchain.llm_cache = SQLAlchemyCache(engine)

Custom SQLAlchemy Schemas#
# You can define your own declarative SQLAlchemyCache child class to customiz...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
8d4cff8ae975-5
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)

%%time
llm("Tell me a joke")

CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms
Wall time: 745 ms
'\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'

%%time
llm("Tell me a joke")

CPU times: user 4.91 ms, sys: 2.64 ms, total: 7...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
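The timings above can be reproduced in miniature with a stdlib-only sketch of prompt caching (CachedLLM and fake_model below are hypothetical stand-ins, not LangChain classes): responses are memoized in a dict keyed by the prompt, so a repeated prompt skips the slow call entirely.

```python
import time

class CachedLLM:
    """Hypothetical stand-in for a cached LLM (not the LangChain API)."""

    def __init__(self, slow_fn):
        self._slow_fn = slow_fn  # the underlying "model" call
        self._cache = {}         # prompt -> completion

    def __call__(self, prompt):
        if prompt not in self._cache:          # miss: pay the full latency
            self._cache[prompt] = self._slow_fn(prompt)
        return self._cache[prompt]             # hit: returned immediately

def fake_model(prompt):
    time.sleep(0.1)  # stand-in for network latency
    return f"echo: {prompt}"

llm = CachedLLM(fake_model)
llm("Tell me a joke")  # first call: slow, hits the model
llm("Tell me a joke")  # second call: fast, served from the cache
```

This is the same trade the real caches (in-memory, SQLite, Redis, GPTCache) make; they differ only in where the prompt-to-completion map lives.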
8d4cff8ae975-6
from langchain.chains.summarize import load_summarize_chain

chain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)

%%time
chain.run(docs)

CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms
Wall time: 5.09 s
'\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infr...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
8d4cff8ae975-7
Redis Cache GPTCache SQLAlchemy Cache Custom SQLAlchemy Schemas Optional Caching Optional Caching in Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 18, 2023.
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_caching.html
417481e5efea-0
.ipynb .pdf How to use the async API for LLMs How to use the async API for LLMs# LangChain provides async support for LLMs by leveraging the asyncio library. Async support is especially useful for calling multiple LLMs concurrently, since these calls are network-bound. Currently OpenAI, PromptLayerOpenAI, ChatOpenAI, and Anthropic are supported, with async support for other LLMs planned for the future. You can call an OpenAI LLM asynchronously with the agenerate method.

import time
import asyncio

from langchain.llms import OpenAI

def generate_serially():
    ...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/async_llm.html
417481e5efea-1
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')

I'm doing well, thank you. How about you? I'm doing well, thank you. How about you? I'm doing well, how about you? I'm doing well, thank you. How about you? I'm doing well, thank you. How a...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/async_llm.html
417481e5efea-2
Next, an async function async_generate is defined; the await keyword indicates that the call executes asynchronously. The function calls OpenAI's agenerate method to generate text and prints the output with print(). Finally, a generate_concurrently function is defined, which works together with async_generate to call the LLM concurrently. When the main routine runs, await generate_concurrently() is called first and its execution time is measured, so it can be compared with the serial version. Then the serial function generate_serially() is called, and its...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/async_llm.html
417481e5efea-3
In asyncio.gather(*tasks), the star operator * unpacks the tasks list so that its items are passed to asyncio.gather() as separate arguments. Each async_generate(llm) call then runs concurrently, which speeds up the whole process. The end result is that the text produced by each async_generate(llm) call comes back in the list returned by gather(), which holds the results of all the concurrent executions.
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/async_llm.html
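The serial-versus-concurrent comparison described above can be sketched with nothing but asyncio (async_generate here is a hypothetical stand-in that sleeps instead of calling OpenAI's agenerate): ten 0.1-second calls finish in roughly 0.1 seconds when dispatched through asyncio.gather(*tasks).

```python
import asyncio
import time

async def async_generate(i):
    # Hypothetical stand-in for an I/O-bound LLM call (e.g. llm.agenerate).
    await asyncio.sleep(0.1)
    return f"response {i}"

async def generate_concurrently(n=10):
    tasks = [async_generate(i) for i in range(n)]
    # The * operator unpacks the task list into separate arguments, so
    # gather() runs all n coroutines concurrently and returns their
    # results in order.
    return await asyncio.gather(*tasks)

s = time.perf_counter()
results = asyncio.run(generate_concurrently())
elapsed = time.perf_counter() - s
print(f"Concurrent executed in {elapsed:0.2f} seconds.")  # ~0.1s rather than ~1.0s
```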
cf7d97ec50e0-0
.ipynb .pdf How to write a custom LLM wrapper How to write a custom LLM wrapper# This notebook covers how to create a custom LLM wrapper, in case you want to use your own LLM or a different wrapper than the ones LangChain supports. A custom LLM only needs to implement one required method: _call, which takes a string input and an optional list of stop words and returns a string. It can also implement one optional method: the _identifying_params property, which helps with printing the class and should return a dictionary. Let's implement a very simple custom LLM that just returns the first N characters of its input.

from langchain.llms.base import LLM
from typing import Opti...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/custom_llm.html
cf7d97ec50e0-1
llm("This is a foobar thing")
'This is a '

We can also print the LLM and see its custom printed representation.

print(llm)
CustomLLM
Params: {'n': 10}
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/custom_llm.html
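A stdlib sketch of the CustomLLM described above (mirroring, not importing, langchain.llms.base.LLM): _call returns the first n characters, and _identifying_params drives the printed representation.

```python
from typing import List, Optional

class CustomLLM:
    """Toy mirror of the custom wrapper described above (the real base
    class is langchain.llms.base.LLM); returns the prompt's first n chars."""

    def __init__(self, n: int):
        self.n = n

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> dict:
        # Helps when printing the class.
        return {"n": self.n}

    def __call__(self, prompt: str) -> str:
        return self._call(prompt)

    def __repr__(self) -> str:
        return f"CustomLLM\nParams: {self._identifying_params}"

llm = CustomLLM(n=10)
print(llm("This is a foobar thing"))  # -> This is a
print(llm)
```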
4cc53d0d0584-0
.ipynb .pdf How to serialize LLM classes Contents Loading Saving How to serialize LLM classes# This notebook walks through how to write and read an LLM Configuration to and from disk. This is useful if you want to save the configuration for a given LLM (e.g., the provider, the temperature, etc). from langchain.llms i...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_serialization.html
4cc53d0d0584-1
llm.save("llm.json")
llm.save("llm.yaml")
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/llm_serialization.html
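What llm.save persists can be pictured with plain json (the field names below are illustrative, not the exact keys LangChain writes): the configuration round-trips through a file.

```python
import json
import os
import tempfile

# Illustrative config fields (provider type, model name, sampling params);
# the exact keys LangChain serializes may differ.
config = {"_type": "openai", "model_name": "text-davinci-003", "temperature": 0.7}

path = os.path.join(tempfile.mkdtemp(), "llm.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)   # save, like llm.save("llm.json")

with open(path) as f:
    loaded = json.load(f)            # read the configuration back
```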
26d853c0e0b9-0
.ipynb .pdf How to stream responses from LLMs and Chat Models How to stream responses from LLMs and Chat Models# LangChain provides streaming support for LLMs. Currently, streaming is supported for the OpenAI, ChatOpenAI, and Anthropic implementations, and streaming support for other LLMs is on the roadmap. To use streaming, implement a CallbackHandler that implements on_llm_new_token. In this example, we use StreamingStdOutCallbackHandler.

from langchain.llms import OpenAI, Anthropic
from langchain.chat_models ...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
26d853c0e0b9-1
It's the perfect way to stay refreshed. Verse 3 I'm sippin' on sparkling water, It's so light and so clear, It's the perfect way to keep me cool On a hot summer night. Chorus Sparkling water, sparkling water, It's the best way to stay hydrated, It's so crisp and so clean, It's the perfect way to stay refreshed. Using gene...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
26d853c0e0b9-2
Verse 2: No sugar, no calories, just H2O A drink that's good for me, don't you know With lemon or lime, you're even better Sparkling water, you're my forever Chorus: Sparkling water, oh how you shine A taste so clean, it's simply divine You quench my thirst, you make me feel alive Sparkling water, you're my favorite vi...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
26d853c0e0b9-3
Sparkling water, a drink quite all right, Bubbles sparkling in the light. '\nSparkling water, bubbles so bright,\n\nFizzing and popping in the light.\n\nNo sugar or calories, a healthy delight,\n\nSparkling water, refreshing and light.\n\nCarbonation that tickles the tongue,\n\nIn flavors of lemon and lime unsung.\n\nS...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/streaming_llm.html
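The streaming setup described in this section can be sketched with a stand-in handler and model (StreamingStdOutHandler and fake_stream are hypothetical; LangChain's real handler is StreamingStdOutCallbackHandler): the handler's on_llm_new_token prints each token the moment it arrives.

```python
import sys

class StreamingStdOutHandler:
    """Hypothetical handler mirroring the on_llm_new_token hook."""

    def on_llm_new_token(self, token: str) -> None:
        # Write each token as soon as it arrives, without buffering a full line.
        sys.stdout.write(token)
        sys.stdout.flush()

def fake_stream(prompt, handler):
    # Stand-in for a streaming LLM: emits a canned completion token by token.
    tokens = ["Sparkling ", "water, ", "bubbles ", "so ", "bright."]
    for t in tokens:
        handler.on_llm_new_token(t)
    return "".join(tokens)

result = fake_stream("Write me a song about sparkling water.", StreamingStdOutHandler())
```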
1af42b0ef692-0
.ipynb .pdf How (and why) to use the fake LLM How (and why) to use the fake LLM# We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way. In this notebook we go over how to use this. We start this with usi...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/fake_llm.html
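The idea of a fake LLM can be sketched in a few lines of stdlib Python (this FakeListLLM mirrors the shape of LangChain's class of the same name but is not it): it replays a fixed list of responses in order, which makes agent behavior deterministic in tests.

```python
class FakeListLLM:
    """Toy fake LLM for tests: replays fixed responses in order
    (LangChain ships a class with this name; this stdlib mirror is not it)."""

    def __init__(self, responses):
        self.responses = responses
        self.i = 0

    def __call__(self, prompt: str) -> str:
        response = self.responses[self.i % len(self.responses)]
        self.i += 1
        return response

llm = FakeListLLM(responses=[
    "Action: Python REPL\nAction Input: print(2 + 2)",  # scripted "agent" step
    "Final Answer: 4",
])
first = llm("what's 2 + 2?")
second = llm("what's 2 + 2?")
```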
eb989c3c76dd-0
.ipynb .pdf How to track token usage How to track token usage# This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API. Let’s first look at an extremely simple example of tracking token usage for a single LLM call. from langchain.llms import OpenAI f...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/token_usage_tracking.html
eb989c3c76dd-1
print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")

> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised t...
https://langchain-cn.readthedocs.io/en/latest/modules/models/llms/examples/token_usage_tracking.html
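The callback pattern above can be imitated with a stdlib context manager (get_callback, TokenCallback, and fake_llm_call are hypothetical stand-ins, and whitespace splitting is a crude proxy for real tokenization): usage accumulates on the callback object while calls run inside the with block.

```python
from contextlib import contextmanager

class TokenCallback:
    """Accumulates usage across every call made inside the context."""

    def __init__(self):
        self.prompt_tokens = 0
        self.completion_tokens = 0

    @property
    def total_tokens(self):
        return self.prompt_tokens + self.completion_tokens

@contextmanager
def get_callback():
    # Mirrors the shape of `with get_openai_callback() as cb:`.
    yield TokenCallback()

def fake_llm_call(prompt, cb):
    completion = "a joke about " + prompt
    # Whitespace splitting is a crude stand-in for real tokenization.
    cb.prompt_tokens += len(prompt.split())
    cb.completion_tokens += len(completion.split())
    return completion

with get_callback() as cb:
    fake_llm_call("Tell me a joke", cb)
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
```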
a2c912b44a1b-0
.ipynb .pdf Getting Started Contents PromptTemplates LLMChain Streaming Getting Started# This notebook covers how to get started with chat models. The interface is based around messages rather than raw text. from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate, LLMChain from langchain.pro...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/getting_started.html
a2c912b44a1b-1
[
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
],
[
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    Hum...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/getting_started.html
a2c912b44a1b-2
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted mes...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/getting_started.html
a2c912b44a1b-3
Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A d...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/getting_started.html
61992c4282d9-0
.rst .pdf How-To Guides How-To Guides# The examples here all address certain “how-to” guides for working with chat models. How to use few shot examples How to stream responses
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/how_to_guides.html
7a3ddb430991-0
.rst .pdf Integrations Integrations# The examples here all highlight how to integrate with different chat models. Azure OpenAI PromptLayer ChatOpenAI
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/integrations.html
1f5379131e0d-0
.ipynb .pdf PromptLayer ChatOpenAI Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerOpenAI LLM like normal Using PromptLayer Track PromptLayer ChatOpenAI# This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests. Install PromptLayer# The promp...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
1f5379131e0d-1
chat = PromptLayerChatOpenAI(return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]])

for res in chat_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    promptlayer.track.score(request_id=pl_request_id, score=100)

Using this allows you to track...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/integrations/promptlayer_chatopenai.html
bfaae92cef96-0
.ipynb .pdf Azure Azure# This notebook goes over how to connect to an Azure hosted OpenAI endpoint from langchain.chat_models import AzureChatOpenAI from langchain.schema import HumanMessage BASE_URL = "https://${TODO}.openai.azure.com" API_KEY = "..." DEPLOYMENT_NAME = "chat" model = AzureChatOpenAI( openai_api_ba...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/integrations/azure_chat_openai.html
181c19d7be1b-0
.ipynb .pdf OpenAI OpenAI# This notebook covers how to get started with OpenAI chat models.

from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema impo...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/integrations/openai.html
181c19d7be1b-1
AIMessage(content="J'adore la programmation.", additional_kwargs={})
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/integrations/openai.html
0276f03ad7fd-0
.ipynb .pdf How to stream responses How to stream responses# This notebook goes over how to use streaming with a chat model.

from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
)
from langchain.callbacks.base import CallbackManager
from langchain.callbacks.streaming_stdout impo...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/examples/streaming.html
0276f03ad7fd-1
Sparkling
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/examples/streaming.html
a509a08ba421-0
.ipynb .pdf How to use few shot examples Contents Alternating Human/AI messages System Messages How to use few shot examples# This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abst...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/examples/few_shot_examples.html
a509a08ba421-1
template = "You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"})
example_ai = SystemMessagePromptTemplate.from_template("Argh m...
https://langchain-cn.readthedocs.io/en/latest/modules/models/chat/examples/few_shot_examples.html
9e7e1cfeb140-0
.ipynb .pdf Jina Jina# Let’s load the Jina Embedding class. from langchain.embeddings import JinaEmbeddings embeddings = JinaEmbeddings(jina_auth_token=jina_auth_token, model_name="ViT-B-32::openai") text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([t...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/jina.html
23f2cf33ab52-0
.ipynb .pdf Self Hosted Embeddings Self Hosted Embeddings# Let’s load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.

from langchain.embeddings import (
    SelfHostedEmbeddings,
    SelfHostedHuggingFaceEmbeddings,
    SelfHostedHuggingFaceInstructEmbeddi...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/self-hosted.html
23f2cf33ab52-1
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)

def inference_fn(pipeline, prompt):
    # Return last hidden state of the model
    if isinstance(prompt, list):
        return [emb[...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/self-hosted.html
6d27e00374ee-0
.ipynb .pdf Hugging Face Hub Hugging Face Hub# Let’s load the Hugging Face Embedding class.

from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings()
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/huggingfacehub.html
9153e439e89b-0
.ipynb .pdf Fake Embeddings Fake Embeddings# LangChain also provides a fake embedding class. You can use this to test your pipelines. from langchain.embeddings import FakeEmbeddings embeddings = FakeEmbeddings(size=1352) query_result = embeddings.embed_query("foo") doc_results = embeddings.embed_documents(["foo"]) prev...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/fake.html
d4257e7d3061-0
.ipynb .pdf TensorflowHub TensorflowHub# Let’s load the TensorflowHub Embedding class. from langchain.embeddings import TensorflowHubEmbeddings embeddings = TensorflowHubEmbeddings() 2023-01-30 23:53:01.652176: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neu...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/tensorflowhub.html
d531c77a22c2-0
.ipynb .pdf Cohere Cohere# Let’s load the Cohere Embedding class.

from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings(cohere_api_key=cohere_api_key)
text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/cohere.html
661f456353f7-0
.ipynb .pdf InstructEmbeddings InstructEmbeddings# Let’s load the HuggingFace instruct Embeddings class. from langchain.embeddings import HuggingFaceInstructEmbeddings embeddings = HuggingFaceInstructEmbeddings( query_instruction="Represent the query for retrieval: " ) load INSTRUCTOR_Transformer max_seq_length 51...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/instruct_embeddings.html
1775a4684b7a-0
.ipynb .pdf Llama-cpp Llama-cpp# This notebook goes over how to use Llama-cpp embeddings within LangChain !pip install llama-cpp-python from langchain.embeddings import LlamaCppEmbeddings llama = LlamaCppEmbeddings(model_path="/path/to/model/ggml-model-q4_0.bin") text = "This is a test document." query_result = llama.e...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/llamacpp.html
369b1a931b55-0
.ipynb .pdf AzureOpenAI AzureOpenAI# Let’s load the OpenAI Embedding class with environment variables set to indicate to use Azure endpoints. # set the environment variables needed for openai package to know to reach out to azure import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https:/...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/azureopenai.html
4cb55ed07378-0
.ipynb .pdf Aleph Alpha Contents Asymmetric Symmetric Aleph Alpha# There are two possible ways to use Aleph Alpha’s semantic embeddings. If you have texts with a dissimilar structure (e.g. a Document and a Query) you would want to use asymmetric embeddings. Conversely, for texts with comparable structures, symmetric ...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/aleph_alpha.html
99dcdb5f59d2-0
.ipynb .pdf OpenAI OpenAI# Let’s load the OpenAI Embedding class. from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text = "This is a test document." query_result = embeddings.embed_query(text) doc_result = embeddings.embed_documents([text]) Let’s load the OpenAI Embedding class with fir...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/openai.html
9d8cb453110f-0
.ipynb .pdf SageMaker Endpoint Embeddings SageMaker Endpoint Embeddings# Let’s load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g., your own Hugging Face model on SageMaker. For instructions on how to do this, please see here

!pip3 install langchain boto3

from typing import Dict
from ...
https://langchain-cn.readthedocs.io/en/latest/modules/models/text_embedding/examples/sagemaker-endpoint.html
96724e09f3c8-0
.ipynb .pdf Chat Prompt Template Chat Prompt Template# Chat models take a list of chat messages as input - this list is commonly referred to as a prompt. Typically this is not simply a hardcoded list of messages but rather a combination of a template, some examples, and user input. LangChain provides several classes and ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/chat_prompt_template.html
96724e09f3c8-1
HumanMessage(content='I love programming.', additional_kwargs={})]

If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, eg:

prompt=PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/chat_prompt_template.html
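Constructing the formatted message list can be sketched with plain str.format (the (role, content) tuples below stand in for LangChain's SystemMessage/HumanMessage objects; format_messages is a hypothetical helper):

```python
# (role, content) tuples stand in for SystemMessage/HumanMessage objects.
system_template = "You are a helpful assistant that translates {input_language} to {output_language}."
human_template = "{text}"

def format_messages(**kwargs):
    # str.format ignores unused keyword arguments, so both templates
    # can share one kwargs dict.
    return [
        ("system", system_template.format(**kwargs)),
        ("human", human_template.format(**kwargs)),
    ]

messages = format_messages(
    input_language="English", output_language="French", text="I love programming."
)
```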
d2d7529cc0db-0
.rst .pdf Prompt Templates Prompt Templates# Note Conceptual Guide Language models take text as input - that text is commonly referred to as a prompt. Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input. LangChain provides several classes and functions t...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates.html
9bf9a4a8bbdd-0
.rst .pdf Output Parsers Output Parsers# Note Conceptual Guide Language models output text. But many times you may want to get more structured information than just text back. This is where output parsers come in. Output parsers are classes that help structure language model responses. There are two main methods an out...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers.html
efc01cb34af0-0
.rst .pdf Example Selectors Example Selectors# Note Conceptual Guide If you have a large number of examples, you may need to select which ones to include in the prompt. The ExampleSelector is the class responsible for doing so. The base interface is defined as below:

class BaseExampleSelector(ABC):
    """Interface for...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors.html
f9c36b300d20-0
.md .pdf Getting Started Contents What is a prompt template? Create a prompt template Load a prompt template from LangChainHub Pass few shot examples to a prompt template Select examples for a prompt template Getting Started# In this tutorial, we will learn about: what a prompt template is, and why it is needed, how ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
f9c36b300d20-1
no_input_prompt.format()  # -> "Tell me a joke."

# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")  # -> "Tell me a funny joke."

# An example prompt with multiple input variables
m...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
f9c36b300d20-2
In this example, we’ll create a prompt to generate word antonyms.

from langchain import PromptTemplate, FewShotPromptTemplate

# First, create the list of few shot examples.
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

# Next, we specify the template to format the exa...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
f9c36b300d20-3
print(few_shot_prompt.format(input="big"))
# -> Give the antonym of every input
# ->
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: tall
# -> Antonym: short
# ->
# -> Word: big
# -> Antonym:

Select examples for a prompt template#
If you have a large number of examples, you can use the ExampleSelector to select a s...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
f9c36b300d20-4
    examples=examples,
    # This is the PromptTemplate being used to format the examples.
    example_prompt=example_prompt,
    # This is the maximum length that the formatted examples should be.
    # Length is measured by the get_text_length function below.
    max_length=25,
)

# We can now use the `example_selector`...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
f9c36b300d20-5
# -> Word: happy
# -> Antonym: sad
# ->
# -> Word: big and huge and massive and large and gigantic and tall and much much much much much bigger than everything else
# -> Antonym:

LangChain comes with a few example selectors that you can use. For more details on how to use them, see Example Selectors. You can create cus...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/getting_started.html
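The prefix/examples/suffix assembly and the length-based selector from this walkthrough can be sketched together in stdlib Python (select_examples and format_prompt are hypothetical names; counting whitespace-separated words stands in for get_text_length):

```python
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
example_template = "Word: {word}\nAntonym: {antonym}"
prefix = "Give the antonym of every input"
suffix = "Word: {input}\nAntonym:"

def get_text_length(text):
    return len(text.split())  # length measured in whitespace-separated words

def select_examples(user_input, max_length=25):
    # Greedily keep examples while the running word count fits the budget,
    # counting the user input first: long inputs leave room for fewer examples.
    selected, used = [], get_text_length(user_input)
    for ex in examples:
        cost = get_text_length(example_template.format(**ex))
        if used + cost > max_length:
            break
        selected.append(ex)
        used += cost
    return selected

def format_prompt(user_input):
    parts = [prefix]
    parts += [example_template.format(**ex) for ex in select_examples(user_input)]
    parts.append(suffix.format(input=user_input))
    return "\n\n".join(parts)

print(format_prompt("big"))
```

With the short input "big" both examples fit the budget; with the long "big and huge and massive..." input only the first example survives, matching the behavior shown above.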
6375c2accb33-0
.rst .pdf How-To Guides How-To Guides# If you’re new to the library, you may want to start with the Quickstart. The user guide here shows more advanced workflows and how to use the library in different ways. How to create a custom prompt template How to create a prompt template that uses few shot examples How to work w...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/how_to_guides.html
fa22aeae1f0c-0
.ipynb .pdf How to work with partial Prompt Templates Contents Partial With Strings Partial With Functions How to work with partial Prompt Templates# A prompt template is a class with a .format method which takes in a key-value map and returns a string (a prompt) to pass to the language model. Like other methods, it ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/partial.html
fa22aeae1f0c-1
print(prompt.format(bar="baz"))
foobaz

Partial With Functions#
The other common use is to partial with a function. The use case for this is when you have a variable you know that you always want to fetch in a common way. A prime example of this is with date or time. Imagine you have a prompt which you always want to ha...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/partial.html
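Both kinds of partialing can be sketched with a small stand-in class (this PromptTemplate is a toy, not langchain.prompts.PromptTemplate): strings are stored as-is, while zero-argument functions are called at format time, which is how an always-fresh date can be injected.

```python
from datetime import datetime

class PromptTemplate:
    """Toy template supporting .partial() with strings or zero-argument
    functions (not the real langchain.prompts.PromptTemplate)."""

    def __init__(self, template, input_variables, _partials=None):
        self.template = template
        self.input_variables = input_variables
        self._partials = _partials or {}

    def partial(self, **kwargs):
        remaining = [v for v in self.input_variables if v not in kwargs]
        return PromptTemplate(self.template, remaining, {**self._partials, **kwargs})

    def format(self, **kwargs):
        # Functions are resolved at format time, so e.g. a date is always fresh.
        values = {k: (v() if callable(v) else v) for k, v in self._partials.items()}
        values.update(kwargs)
        return self.template.format(**values)

prompt = PromptTemplate("{foo}{bar}", ["foo", "bar"])
partial_prompt = prompt.partial(foo="foo")       # partial with a string
print(partial_prompt.format(bar="baz"))           # -> foobaz

def _get_date():
    return datetime.now().strftime("%Y-%m-%d")

dated = PromptTemplate(
    "Tell me a {adjective} joke about the day {date}", ["adjective", "date"]
).partial(date=_get_date)                         # partial with a function
```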
41d4f2384273-0
.ipynb .pdf How to create a prompt template that uses few shot examples Contents Use Case Using an example set Create the example set Create a formatter for the few shot examples Feed examples and formatter to FewShotPromptTemplate Using an example selector Feed examples into ExampleSelector Feed example selector int...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
41d4f2384273-1
"answer": """
Are follow up questions needed here: Yes.
Follow up: Who was the founder of craigslist?
Intermediate answer: Craigslist was founded by Craig Newmark.
Follow up: When was Craig Newmark born?
Intermediate answer: Craig Newmark was born on December 6, 1952.
So the final answer is: December 6, 1952
"""
},
...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
41d4f2384273-2
print(example_prompt.format(**examples[0]))

Question: Who lived longer, Muhammad Ali or Alan Turing?
Are follow up questions needed here: Yes.
Follow up: How old was Muhammad Ali when he died?
Intermediate answer: Muhammad Ali was 74 years old when he died.
Follow up: How old was Alan Turing when he died?
Intermediate ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
41d4f2384273-3
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The mother of George Washington was Mary Ball Washington.
Follow up: Who was the father of Mary Ball Washington?
Intermediate answer: The father of Mary Ball Washington was Joseph Ball.
So the final answer...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
41d4f2384273-4
    # This is the list of examples available to select from.
    examples,
    # This is the embedding class used to produce embeddings which are used to measure semantic similarity.
    OpenAIEmbeddings(),
    # This is the VectorStore class that is used to store the embeddings and do a similarity search over.
    Chroma,...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
41d4f2384273-5
    suffix="Question: {input}",
    input_variables=["input"]
)

print(prompt.format(input="Who was the father of Mary Ball Washington?"))

Question: Who was the maternal grandfather of George Washington?
Are follow up questions needed here: Yes.
Follow up: Who was the mother of George Washington?
Intermediate answer: The m...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/few_shot_examples.html
ff4517bb6660-0
.ipynb .pdf How to create a custom prompt template Contents Why are custom prompt templates needed? Creating a Custom Prompt Template Use the custom prompt template How to create a custom prompt template# Let’s suppose we want the LLM to generate English language explanations of a function given its name. To achieve ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
ff4517bb6660-1
def get_source_code(function_name):
    # Get the source code of the function
    return inspect.getsource(function_name)

Next, we’ll create a custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function.

from langchain.prompts import String...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
ff4517bb6660-2
prompt = fn_explainer.format(function_name=get_source_code)
print(prompt)

Given the function name and source code, generate an English language explanation of the function.
Function Name: get_source_code
Source Code:
def get_source_code(function_name):
    # Get the source code of the fu...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/custom_prompt_template.html
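Since get_source_code is pure stdlib, the idea can be run without LangChain (format_fn_explainer is a hypothetical stand-in for the custom template's format method; a stdlib function is introspected so its source is guaranteed to be available on disk):

```python
import inspect
import textwrap

def get_source_code(function_name):
    # Get the source code of the function
    return inspect.getsource(function_name)

PROMPT = (
    "Given the function name and source code, generate an English language "
    "explanation of the function.\n"
    "Function Name: {function_name}\n"
    "Source Code:\n{source_code}"
)

def format_fn_explainer(fn):
    # Hypothetical stand-in for the custom template's format() method.
    return PROMPT.format(function_name=fn.__name__, source_code=get_source_code(fn))

# Introspect a stdlib function so its source file is always present.
prompt = format_fn_explainer(textwrap.dedent)
print(prompt)
```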
e43397fa456f-0
.ipynb .pdf How to serialize prompts Contents PromptTemplate Loading from YAML Loading from JSON Loading Template from a File FewShotPromptTemplate Examples Loading from YAML Loading from JSON Examples in the Config Example Prompt from a File How to serialize prompts# It is often preferable to store prompts not as p...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
e43397fa456f-1
prompt = load_prompt("simple_prompt.yaml")
print(prompt.format(adjective="funny", content="chickens"))
Tell me a funny joke about chickens.

Loading from JSON#
This shows an example of loading a PromptTemplate from JSON.

!cat simple_prompt.json
{
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
e43397fa456f-2
    output: sad
  - input: tall
    output: short

Loading from YAML#
This shows an example of loading a few shot example from YAML.

!cat few_shot_prompt.yaml
_type: few_shot
input_variables: ["adjective"]
prefix: Write antonyms for the following words.
example_prompt:
    _type: prompt
    input_variables: ["i...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
e43397fa456f-3
!cat few_shot_prompt.json { "_type": "few_shot", "input_variables": ["adjective"], "prefix": "Write antonyms for the following words.", "example_prompt": { "_type": "prompt", "input_variables": ["input", "output"], "template": "Input: {input}\nOutput: {output}" }, "exampl...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
e43397fa456f-4
Output: short Input: funny Output: Example Prompt from a File# This shows an example of loading the PromptTemplate that is used to format the examples from a separate file. Note that the key changes from example_prompt to example_prompt_path. !cat example_prompt.json { "_type": "prompt", "input_variables": ["in...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/prompt_templates/examples/prompt_serialization.html
ac00c0f894dc-0
.ipynb .pdf Maximal Marginal Relevance ExampleSelector Maximal Marginal Relevance ExampleSelector# The MaxMarginalRelevanceExampleSelector selects examples based on which examples are most similar to the inputs, while also optimizing for diversity. It does this by finding the examples with the embeddin...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/mmr.html
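The greedy MMR selection described above can be sketched in plain Python over toy embedding vectors. This is an illustrative standalone sketch, not LangChain's implementation; the trade-off parameter `lam` and the toy vectors are assumptions:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def mmr_select(query_vec, example_vecs, k=2, lam=0.5):
    """Greedily pick k examples balancing relevance against redundancy."""
    selected = []
    candidates = list(range(len(example_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, example_vecs[i])
            redundancy = max((cosine(example_vecs[i], example_vecs[j])
                              for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy vectors: 0 and 1 are near-duplicates, 2 points elsewhere.
vecs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
# MMR picks 0 (most relevant) and then 2 (diverse), skipping the near-duplicate 1.
print(mmr_select([1.0, 0.05], vecs, k=2))
```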
ac00c0f894dc-1
k=2 ) mmr_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) # Input is a feeling,...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/mmr.html
d4db0976013f-0
.ipynb .pdf Similarity ExampleSelector Similarity ExampleSelector# The SemanticSimilarityExampleSelector selects examples based on which examples are most similar to the inputs. It does this by finding the examples with the embeddings that have the greatest cosine similarity with the inputs. from langchain.prompts.exam...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/similarity.html
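Pure similarity selection (without MMR's diversity term) reduces to ranking examples by cosine similarity to the input. A minimal sketch with toy 2-d "embeddings" standing in for real ones (the vectors and helper names are assumptions, not library code):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Each example paired with a toy embedding of its input.
examples = [
    ({"input": "happy", "output": "sad"}, [1.0, 0.0]),
    ({"input": "tall", "output": "short"}, [0.0, 1.0]),
]

def select_most_similar(query_vec, examples, k=1):
    # Rank by cosine similarity, highest first, and keep the top k.
    ranked = sorted(examples, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [ex for ex, _ in ranked[:k]]

# A query near the "happy" embedding selects the happy/sad example.
print(select_most_similar([0.9, 0.1], examples))
```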
d4db0976013f-1
example_prompt=example_prompt, prefix="Give the antonym of every input", suffix="Input: {adjective}\nOutput:", input_variables=["adjective"], ) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. # Input is a feeling, so should select the happy/sad example pr...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/similarity.html
ea36b81674e7-0
.ipynb .pdf NGram Overlap ExampleSelector NGram Overlap ExampleSelector# The NGramOverlapExampleSelector selects and orders examples based on which examples are most similar to the input, according to an ngram overlap score. The ngram overlap score is a float between 0.0 and 1.0, inclusive. The selector allows for a th...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
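One simple way to compute an overlap score in the 0.0–1.0 range is the fraction of the input's n-grams that also appear in an example (a hedged sketch of the idea; the real selector uses a sentence-level overlap metric, and this word-splitting helper is an assumption):

```python
def ngrams(text, n):
    # Word-level n-grams; punctuation stays attached to words in this sketch.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a, b, n=1):
    """Fraction of a's n-grams that also appear in b (0.0..1.0 inclusive)."""
    grams_a, grams_b = ngrams(a, n), ngrams(b, n)
    if not grams_a:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a)

# "spot" and "can" overlap; "run" vs "run." and "fast." do not.
print(overlap_score("Spot can run fast.", "Spot can run."))
```

A threshold just above 1.0, as shown in the excerpt, excludes all examples because no score can exceed 1.0.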
ea36b81674e7-1
{"input": "Spot can run.", "output": "Spot puede correr."}, ] example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", ) example_selector = NGramOverlapExampleSelector( # These are the examples it has available to choose from. examples=examples, ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
ea36b81674e7-2
Output: Ver correr a Spot. Input: My dog barks. Output: Mi perro ladra. Input: Spot can run fast. Output: # You can add examples to NGramOverlapExampleSelector as well. new_example = {"input": "Spot plays fetch.", "output": "Spot juega a buscar."} example_selector.add_example(new_example) print(dynamic_prompt.format(se...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
ea36b81674e7-3
Input: Spot plays fetch. Output: Spot juega a buscar. Input: Spot can play fetch. Output: # Setting threshold greater than 1.0 example_selector.threshold=1.0+1e-9 print(dynamic_prompt.format(sentence="Spot can play fetch.")) Give the Spanish translation of every input Input: Spot can play fetch. Output: previous Maxima...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html
026962e7e9d1-0
.ipynb .pdf LengthBased ExampleSelector LengthBased ExampleSelector# This ExampleSelector selects which examples to use based on length. This is useful when you are worried about constructing a prompt that will go over the length of the context window. For longer inputs, it will select fewer examples to include, while ...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
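The length-based logic can be sketched standalone: measure length with the default word-count function quoted in the excerpt, then add examples until a (hypothetical) budget would be exceeded. `select_by_length` and `max_length=8` are assumptions for illustration, not the library API:

```python
import re

def text_length(text):
    # Default length function from the docs: count newline/space separated words.
    return len(re.split("\n| ", text))

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

def select_by_length(examples, query, max_length=8):
    remaining = max_length - text_length(query)
    chosen = []
    for ex in examples:
        cost = text_length(ex["input"] + " " + ex["output"])
        if cost > remaining:
            break
        chosen.append(ex)
        remaining -= cost
    return chosen

# A short query leaves room for several examples; a long one leaves none.
print(len(select_by_length(examples, "big")))
print(len(select_by_length(examples,
      "big and huge and large and gigantic and big and huge")))
```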
026962e7e9d1-1
# it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)) ) dynamic_prompt = FewShotPromptTemplate( # We provide an ExampleSelector instead of examples. example_selector=example_selector, example_prompt=example_prompt, pref...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
026962e7e9d1-2
Input: sunny Output: gloomy Input: windy Output: calm Input: big Output: small Input: enthusiastic Output: previous How to create a custom example selector next Maximal Marginal Relevance ExampleSelector By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Apr 18, 2023.
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/length_based.html
9f84e87fa6dd-0
.md .pdf How to create a custom example selector Contents Implement custom example selector Use custom example selector How to create a custom example selector# In this tutorial, we’ll create a custom example selector that selects every alternate example from a given list of examples. An ExampleSelector must implemen...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
9f84e87fa6dd-1
# Add new example to the set of examples example_selector.add_example({"foo": "4"}) example_selector.examples # -> [{'foo': '1'}, {'foo': '2'}, {'foo': '3'}, {'foo': '4'}] # Select examples example_selector.select_examples({"foo": "foo"}) # -> array([{'foo': '1'}, {'foo': '4'}], dtype=object) previous Example Selectors...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html
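The tutorial's idea — a selector exposing `add_example` and `select_examples` that keeps every alternate example — can be sketched as a plain class (a standalone sketch of the interface, not the `BaseExampleSelector` subclass from the tutorial; its exact selection output may differ):

```python
class AlternateExampleSelector:
    def __init__(self, examples):
        self.examples = examples

    def add_example(self, example):
        # Add new example to the set of examples.
        self.examples.append(example)

    def select_examples(self, input_variables):
        # Ignore the input and keep every other example.
        return self.examples[::2]

selector = AlternateExampleSelector([{"foo": "1"}, {"foo": "2"}, {"foo": "3"}])
selector.add_example({"foo": "4"})
print(selector.select_examples({"foo": "foo"}))
```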
b10a0bd6da21-0
.ipynb .pdf Output Parsers Output Parsers# Language models output text. But often you want to get back more structured information than just text. This is where output parsers come in. Output parsers are classes that help structure language model responses. There are two main methods an output parser must impl...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/getting_started.html
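The two-method contract — format instructions telling the model how to answer, and a parse method turning raw text into structure — can be sketched with a comma-separated-list parser (a standalone sketch in the spirit of LangChain's list parser, not its actual class):

```python
class CommaSeparatedListParser:
    def get_format_instructions(self):
        # Injected into the prompt so the model answers in a parseable shape.
        return "Your response should be a list of comma separated values."

    def parse(self, text):
        # Turn the raw completion into a Python list.
        return [item.strip() for item in text.split(",")]

parser = CommaSeparatedListParser()
print(parser.parse("red, green, blue"))
```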
b10a0bd6da21-1
punchline: str = Field(description="answer to resolve the joke") # You can add custom validation logic easily with Pydantic. @validator('setup') def question_ends_with_question_mark(cls, field): if field[-1] != '?': raise ValueError("Badly formed question!") return field # S...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/getting_started.html
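The validation rule quoted above (the original uses a Pydantic `@validator` on the `setup` field) boils down to a simple check, sketched here in plain Python to show the logic in isolation:

```python
def validate_setup(setup):
    # Mirrors the @validator shown above: a joke setup must be a question.
    if not setup.endswith("?"):
        raise ValueError("Badly formed question!")
    return setup

print(validate_setup("Why did the chicken cross the road?"))
```

With Pydantic, raising inside the validator makes the whole model construction fail, which is what lets the output parser reject malformed completions.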
868a600c5950-0
.ipynb .pdf Structured Output Parser Structured Output Parser# While the Pydantic/JSON parser is more powerful, we initially experimented with data structures having text fields only. from langchain.output_parsers import StructuredOutputParser, ResponseSchema from langchain.prompts import PromptTemplate, ChatPromptTemplate,...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/structured.html
868a600c5950-1
], input_variables=["question"], partial_variables={"format_instructions": format_instructions} ) _input = prompt.format_prompt(question="what's the capital of france") output = chat_model(_input.to_messages()) output_parser.parse(output.content) {'answer': 'Paris', 'source': 'https://en.wikipedia.org/wiki/Pari...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/structured.html
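What the parse step does with a typical model reply can be sketched as: extract the fenced json code block from the text, then `json.loads` it. A hypothetical standalone sketch, not `StructuredOutputParser` itself:

```python
import json
import re

def parse_json_block(text):
    # Match a {...} object wrapped in triple backticks with a json tag.
    match = re.search(r"`{3}json\s*(\{.*?\})\s*`{3}", text, re.DOTALL)
    if match is None:
        raise ValueError("No json block found in output")
    return json.loads(match.group(1))

fence = "`" * 3  # triple backtick, built here to keep the sketch readable
reply = (fence + 'json\n'
         '{"answer": "Paris", "source": "https://en.wikipedia.org/wiki/Paris"}\n'
         + fence)
print(parse_json_block(reply))
```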
f37758d206cd-0
.ipynb .pdf OutputFixingParser OutputFixingParser# This output parser wraps another output parser and tries to fix any mistakes. The Pydantic guardrail simply tries to parse the LLM response. If it does not parse correctly, then it errors. But we can do other things besides throw errors. Specifically, we can pass the m...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
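The wrapping idea can be sketched without an LLM: try the inner parse, and on failure hand the bad output to a "fixer" (here a stub standing in for the LLM call the real parser makes) and parse its repaired text. `FixingParser` and the quote-swapping fixer are assumptions for illustration:

```python
import json

class FixingParser:
    def __init__(self, parse, fix):
        self.parse_fn = parse
        self.fix_fn = fix

    def parse(self, text):
        try:
            return self.parse_fn(text)
        except Exception:
            # Pass the misformatted output to the fixer, then re-parse.
            return self.parse_fn(self.fix_fn(text))

# Stub fixer: swap single quotes for double quotes, a common model mistake.
# A real OutputFixingParser would ask an LLM to repair the output instead.
fixer = lambda text: text.replace("'", '"')
parser = FixingParser(json.loads, fixer)
print(parser.parse("{'name': 'Tom'}"))
```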
f37758d206cd-1
24 return self.pydantic_object.parse_obj(json_object) File ~/.pyenv/versions/3.9.1/lib/python3.9/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 343 if (cls is None and object_hook is None and 344 parse_int is None and parse_float is N...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
f37758d206cd-2
Cell In[6], line 1 ----> 1 parser.parse(misformatted) File ~/workplace/langchain/langchain/output_parsers/pydantic.py:29, in PydanticOutputParser.parse(self, text) 27 name = self.pydantic_object.__name__ 28 msg = f"Failed to parse {name} from completion {text}. Got: {e}" ---> 29 raise OutputParserException(ms...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/output_fixing_parser.html
88709ace2e22-0
.ipynb .pdf RetryOutputParser RetryOutputParser# While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it can’t. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example. from langchain.prompts...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/retry.html
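The retry idea differs from the fixing parser above: the model is re-asked with BOTH the original prompt and the bad completion, so it can supply what was missing rather than just repair formatting. A hedged sketch with a stub in place of the LLM (`parse_action`, `retry_parse`, and the stub are hypothetical names):

```python
import json

def parse_action(text):
    # A schema-style check: both fields must be present.
    obj = json.loads(text)
    if "action" not in obj or "action_input" not in obj:
        raise ValueError("incomplete output")
    return obj

def retry_parse(parse, llm, prompt, completion):
    try:
        return parse(completion)
    except Exception:
        # Re-ask with the original prompt AND the bad completion.
        return parse(llm(prompt, completion))

# Stub standing in for a real LLM call that completes the partial answer.
stub_llm = lambda prompt, bad: '{"action": "search", "action_input": "query"}'

# The first completion parses as JSON but is only partially complete.
result = retry_parse(parse_action, stub_llm,
                     "Answer with an action json object.",
                     '{"action": "search"}')
print(result)
```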
88709ace2e22-1
23 json_object = json.loads(json_str) ---> 24 return self.pydantic_object.parse_obj(json_object) 26 except (json.JSONDecodeError, ValidationError) as e: File ~/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/pydantic/main.py:527, in pydantic.main.BaseModel.parse_obj() File ~/.pyenv/version...
https://langchain-cn.readthedocs.io/en/latest/modules/prompts/output_parsers/examples/retry.html