```typescript
    ),
    HumanMessagePromptTemplate.fromTemplate("{inputText}"),
  ],
  inputVariables: ["inputText"],
});

const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo-0613", temperature: 0 });

// Binding "function_call" below makes the model always call the specified function.
// If you want to allow the model to call functions selectively, omit it.
const functionCallingModel = llm.bind({
  functions: [
    {
      name: "output_formatter",
      description: "Should always be used to properly format output",
      parameters: zodToJsonSchema(zodSchema),
    },
  ],
  function_call: { name: "output_formatter" },
});

const outputParser = new JsonOutputFunctionsParser();

const chain = prompt.pipe(functionCallingModel).pipe(outputParser);

const response = await chain.invoke({
  inputText: "I like apples, bananas, oxygen, and french fries.",
});

console.log(JSON.stringify(response, null, 2));

/*
  {
    "output": {
      "foods": [
        { "name": "apples", "healthy": true, "color": "red" },
        { "name": "bananas", "healthy": true, "color": "yellow" },
        { "name": "french fries", "healthy": false, "color": "golden" }
      ]
    }
  }
*/
```
API Reference: ChatOpenAI from langchain/chat_models/openai, ChatPromptTemplate from langchain/prompts, SystemMessagePromptTemplate from langchain/prompts, HumanMessagePromptTemplate from langchain/prompts, JsonOutputFunctionsParser from langchain/output_parsers
Though we suggest the Expression Language example above, here's an example of using the createStructuredOutputChainFromZod convenience method to return a classic LLMChain:
```typescript
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from "langchain/prompts";
import { createStructuredOutputChainFromZod } from "langchain/chains/openai_functions";

const zodSchema = z.object({
  name: z.string().describe("Human name"),
  surname: z.string().describe("Human surname"),
  age: z.number().describe("Human age"),
  birthplace: z.string().describe("Where the human was born"),
  appearance: z.string().describe("Human appearance description"),
  shortBio: z.string().describe("Short bio description"),
  university: z.string().optional().describe("University name if attended"),
  gender: z.string().describe("Gender of the human"),
  interests: z
    .array(z.string())
    .describe("json array of strings human interests"),
});

const prompt = new ChatPromptTemplate({
  promptMessages: [
    SystemMessagePromptTemplate.fromTemplate(
      "Generate details of a hypothetical person."
    ),
    HumanMessagePromptTemplate.fromTemplate("Additional context: {inputText}"),
  ],
  inputVariables: ["inputText"],
});

const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo-0613", temperature: 1 });

const chain = createStructuredOutputChainFromZod(zodSchema, {
  prompt,
  llm,
  outputKey: "person",
});

const response = await chain.call({
  inputText:
    "Please generate a diverse group of people, but don't generate anyone who likes video games.",
});

console.log(JSON.stringify(response, null, 2));

/*
  {
    "person": {
      "name": "Sophia",
      "surname": "Martinez",
      "age": 32,
      "birthplace": "Mexico City, Mexico",
      "appearance": "Sophia has long curly brown hair and hazel eyes. She has a warm smile and a contagious laugh.",
      "shortBio": "Sophia is a passionate environmentalist who is dedicated to promoting sustainable living. She believes in the power of individual actions to create a positive impact on the planet.",
      "university": "Stanford University",
      "gender": "Female",
      "interests": ["Hiking", "Yoga", "Cooking", "Reading"]
    }
  }
*/
```
API Reference: ChatOpenAI from langchain/chat_models/openai, ChatPromptTemplate from langchain/prompts, SystemMessagePromptTemplate from langchain/prompts, HumanMessagePromptTemplate from langchain/prompts, createStructuredOutputChainFromZod from langchain/chains/openai_functions
Page Title: Summarization | 🦜️🔗 Langchain
Summarization

A summarization chain can be used to summarize multiple documents. One way is to input multiple smaller documents, after they have been divided into chunks, and operate over them with a MapReduceDocumentsChain.
You can also choose instead to use a StuffDocumentsChain or a RefineDocumentsChain as the chain that does the summarization.

```typescript
import { OpenAI } from "langchain/llms/openai";
import { loadSummarizationChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// In this example, we use a `MapReduceDocumentsChain` specifically prompted to summarize a set of documents.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const model = new OpenAI({ temperature: 0 });
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// This convenience function creates a document chain prompted to summarize a set of documents.
const chain = loadSummarizationChain(model, { type: "map_reduce" });
const res = await chain.call({
  input_documents: docs,
});
console.log({ res });

/*
  {
    res: {
      text: ' President Biden is taking action to protect Americans from the COVID-19 pandemic and Russian aggression, providing economic relief, investing in infrastructure, creating jobs, and fighting inflation. He is also proposing measures to reduce the cost of prescription drugs, protect voting rights, and reform the immigration system. The speaker is advocating for increased economic security, police reform, and the Equality Act, as well as providing support for veterans and military families. The US is making progress in the fight against COVID-19, and the speaker is encouraging Americans to come together and work towards a brighter future.'
    }
  }
*/
```

API Reference: OpenAI from langchain/llms/openai, loadSummarizationChain from langchain/chains, RecursiveCharacterTextSplitter from langchain/text_splitter

Intermediate Steps

We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the returnIntermediateSteps parameter.
```typescript
import { OpenAI } from "langchain/llms/openai";
import { loadSummarizationChain } from "langchain/chains";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// In this example, we use a `MapReduceDocumentsChain` specifically prompted to summarize a set of documents.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const model = new OpenAI({ temperature: 0 });
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// This convenience function creates a document chain prompted to summarize a set of documents.
const chain = loadSummarizationChain(model, {
  type: "map_reduce",
  returnIntermediateSteps: true,
});
const res = await chain.call({
  input_documents: docs,
});
console.log({ res });

/*
  {
    res: {
      intermediateSteps: [
        "In response to Russia's aggression in Ukraine, the United States has united with other freedom-loving nations to impose economic sanctions and hold Putin accountable. The U.S. Department of Justice is also assembling a task force to go after the crimes of Russian oligarchs and seize their ill-gotten gains.",
        "The United States and its European allies are taking action to punish Russia for its invasion of Ukraine, including seizing assets, closing off airspace, and providing economic and military assistance to Ukraine. The US is also mobilizing forces to protect NATO countries and has released 30 million barrels of oil from its Strategic Petroleum Reserve to help blunt gas prices. The world is uniting in support of Ukraine and democracy, and the US stands with its Ukrainian-American citizens.",
        "President Biden and Vice President Harris ran for office with a new economic vision for America, and have since passed the American Rescue Plan and the Bipartisan Infrastructure Law to help struggling families and rebuild America's infrastructure. This includes creating jobs, modernizing roads, airports, ports, and waterways, replacing lead pipes, providing affordable high-speed internet, and investing in American products to support American jobs.",
      ],
      text: "President Biden is taking action to protect Americans from the COVID-19 pandemic and Russian aggression, providing economic relief, investing in infrastructure, creating jobs, and fighting inflation. He is also proposing measures to reduce the cost of prescription drugs, protect voting rights, and reform the immigration system. The speaker is advocating for increased economic security, police reform, and the Equality Act, as well as providing support for veterans and military families. The US is making progress in the fight against COVID-19, and the speaker is encouraging Americans to come together and work towards a brighter future.",
    },
  }
*/
```

API Reference: OpenAI from langchain/llms/openai, loadSummarizationChain from langchain/chains, RecursiveCharacterTextSplitter from langchain/text_splitter

Copyright © 2023 LangChain, Inc.
Page Title: Additional | 🦜️🔗 Langchain
Additional

🗃️ OpenAI functions chains (3 items)

📄️ Analyze Document: The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.

📄️ Self-critique chain with constitutional AI: The ConstitutionalChain is a chain that ensures the output of a language model adheres to a predefined set of constitutional principles. By incorporating specific rules and guidelines, the ConstitutionalChain filters and modifies the generated content to align with these principles, providing more controlled, ethical, and contextually appropriate responses. This mechanism helps maintain the integrity of the output while minimizing the risk of generating content that violates guidelines, is offensive, or deviates from the desired context.

📄️ Moderation: This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be applied both to user input and to the output of a language model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may want to append a moderation chain to any LLMChain, to make sure any output the LLM generates is not harmful.

📄️ Dynamically selecting from multiple prompts: This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically, we show how to use the MultiPromptChain to create a question-answering chain that selects the prompt most relevant to a given question, and then answers the question using that prompt.

📄️ Dynamically selecting from multiple retrievers: This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects which retrieval system to use. Specifically, we show how to use the MultiRetrievalQAChain to create a question-answering chain that selects the retrieval QA chain most relevant to a given question, and then answers the question using it.
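The routing idea behind MultiPromptChain can be sketched as a function that scores candidate prompts against the input and forwards the question to the best match. A minimal sketch with a naive keyword scorer; the prompt set and scoring rule are illustrative assumptions, not LangChain's actual router logic:

```javascript
// Candidate prompts, each tagged with keywords the router scores against.
const promptRoutes = [
  { name: "physics", keywords: ["force", "energy", "quantum"], template: "You are a physics professor. Answer: {question}" },
  { name: "history", keywords: ["war", "empire", "century"], template: "You are a historian. Answer: {question}" },
  { name: "default", keywords: [], template: "Answer helpfully: {question}" },
];

// Pick the route whose keywords best match the question; fall back to default.
function routePrompt(question) {
  const lower = question.toLowerCase();
  let best = promptRoutes[promptRoutes.length - 1];
  let bestScore = 0;
  for (const route of promptRoutes) {
    const score = route.keywords.filter((k) => lower.includes(k)).length;
    if (score > bestScore) {
      best = route;
      bestScore = score;
    }
  }
  return best.template.replace("{question}", question);
}

console.log(routePrompt("How much energy does a quantum of light carry?"));
// You are a physics professor. Answer: How much energy does a quantum of light carry?
```

A real RouterChain asks an LLM to pick the destination rather than matching keywords, but the compose-then-dispatch shape is the same.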
Page Title: OpenAI functions chains | 🦜️🔗 Langchain
OpenAI functions chains

These chains are designed to be used with an OpenAI Functions model.

📄️ Extraction
Must be used with an OpenAI Functions model.

📄️ OpenAPI Calls
Must be used with an OpenAI Functions model.

📄️ Tagging
Must be used with an OpenAI Functions model.
Page Title: Extraction | 🦜️🔗 Langchain
Extraction

Compatibility: Must be used with an OpenAI Functions model.

This chain is designed to extract lists of objects from an input text, given a schema describing the desired info.

import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { createExtractionChainFromZod } from "langchain/chains";

const zodSchema = z.object({
  "person-name": z.string().optional(),
  "person-age": z.number().optional(),
  "person-hair_color": z.string().optional(),
  "dog-name": z.string().optional(),
  "dog-breed": z.string().optional(),
});

const chatModel = new ChatOpenAI({
  modelName: "gpt-3.5-turbo-0613",
  temperature: 0,
});

const chain = createExtractionChainFromZod(zodSchema, chatModel);

console.log(
  await chain.run(`Alex is 5 feet tall. Claudia is 4 feet taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Alex's dog Frosty is a labrador and likes to play hide and seek.`)
);
/*
[
  {
    'person-name': 'Alex',
    'person-age': 0,
    'person-hair_color': 'blonde',
    'dog-name': 'Frosty',
    'dog-breed': 'labrador'
  },
  {
    'person-name': 'Claudia',
    'person-age': 0,
    'person-hair_color': 'brunette',
    'dog-name': '',
    'dog-breed': ''
  }
]
*/

API Reference: ChatOpenAI from langchain/chat_models/openai, createExtractionChainFromZod from langchain/chains
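The flat, prefixed keys in the extraction output ('person-name', 'dog-breed') can be regrouped into nested objects in a post-processing step. A minimal sketch; the `groupByPrefix` helper below is our own illustration, not part of LangChain:

```javascript
// Hypothetical post-processing helper (not part of LangChain): regroups flat
// keys like "person-name" into nested objects like { person: { name: ... } }.
function groupByPrefix(record) {
  const grouped = {};
  for (const [key, value] of Object.entries(record)) {
    const [prefix, ...rest] = key.split("-");
    const field = rest.join("-") || prefix; // un-prefixed keys group under themselves
    (grouped[prefix] ??= {})[field] = value;
  }
  return grouped;
}

const extracted = {
  "person-name": "Alex",
  "person-age": 0,
  "person-hair_color": "blonde",
  "dog-name": "Frosty",
  "dog-breed": "labrador",
};

console.log(groupByPrefix(extracted));
// { person: { name: 'Alex', age: 0, hair_color: 'blonde' },
//   dog: { name: 'Frosty', breed: 'labrador' } }
```

Alternatively, a nested zod schema avoids the prefixes entirely, at the cost of a more complex JSON Schema being sent to the model.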
Page Title: OpenAPI Calls | 🦜️🔗 Langchain
OpenAPI Calls

Compatibility: Must be used with an OpenAI Functions model.

This chain can automatically select and call APIs based only on an OpenAPI spec. It parses an input OpenAPI spec into JSON Schema that the OpenAI functions API can handle. This allows ChatGPT to automatically select the correct method and populate the correct parameters for an API call in the spec for a given user input. We then make the actual API call and return the result.

Usage

The examples below initialize the chain with a URL hosting an OpenAPI spec for brevity, but you can also pass a spec directly into the method.

Query XKCD

import { createOpenAPIChain } from "langchain/chains";

const chain = await createOpenAPIChain(
  "https://gist.githubusercontent.com/roaldnefs/053e505b2b7a807290908fe9aa3e1f00/raw/0a212622ebfef501163f91e23803552411ed00e4/openapi.yaml"
);
const result = await chain.run(`What's today's comic?`);
console.log(JSON.stringify(result, null, 2));
/*
{
  "month": "6",
  "num": 2795,
  "link": "",
  "year": "2023",
  "news": "",
  "safe_title": "Glass-Topped Table",
  "transcript": "",
  "alt": "You can pour a drink into it while hosting a party, although it's a real pain to fit in the dishwasher afterward.",
  "img": "https://imgs.xkcd.com/comics/glass_topped_table.png",
  "title": "Glass-Topped Table",
  "day": "28"
}
*/

API Reference: createOpenAPIChain from langchain/chains

Translation Service (POST request)

The OpenAPI chain can also make POST requests and populate request bodies with JSON content if necessary.

import { createOpenAPIChain } from "langchain/chains";

const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml");
const result = await chain.run(`How would you say no thanks in Russian?`);
console.log(JSON.stringify(result, null, 2));
/*
{
  "explanation": "<translation language=\"Russian\" context=\"\">\nНет, спасибо.\n</translation>\n\n<alternatives context=\"\">\n1. \"Нет, не надо\" *(Neutral/Formal - a polite way to decline something)*\n2. \"Ни в коем случае\" *(Strongly informal - used when you want to emphasize that you absolutely do not want something)*\n3. \"Нет, благодарю\" *(Slightly more formal - a polite way to decline something while expressing gratitude)*\n</alternatives>\n\n<example-convo language=\"Russian\">\n<context>Mike offers Anna some cake, but she doesn't want any.</context>\n* Mike: \"Анна, хочешь попробовать мой волшебный торт? Он сделан с любовью и волшебством!\"\n* Anna: \"Спасибо, Майк, но я на диете. Нет, благодарю.\"\n* Mike: \"Ну ладно, больше для меня!\"\n</example-convo>\n\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=bxw1xq87kdua9q5pefkj73ov})*",
  "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."
}
*/

API Reference: createOpenAPIChain from langchain/chains

Customization

The chain is created with a default model set to gpt-3.5-turbo-0613, but you can pass an options parameter into the creation method with a pre-created ChatOpenAI instance. You can also pass in custom headers and params that will be appended to all requests made by the chain, allowing it to call APIs that require authentication.

import { createOpenAPIChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";

const chatModel = new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 });
const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml", {
  llm: chatModel,
  headers: {
    authorization: "Bearer SOME_TOKEN",
  },
});
const result = await chain.run(`How would you say no thanks in Russian?`);
console.log(JSON.stringify(result, null, 2));
/* Same output as in the Translation Service example above. */

API Reference: createOpenAPIChain from langchain/chains, ChatOpenAI from langchain/chat_models/openai
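Conceptually, each operation in the spec becomes one entry in the functions array sent to the model. A minimal sketch of that conversion for a toy spec; the toy operation and `operationToFunction` helper are our own illustration (real specs need $ref resolution, request bodies, and more), not LangChain internals:

```javascript
// Convert one toy OpenAPI-style operation into an OpenAI function definition.
// Illustrative only: the output follows the OpenAI function-calling format
// (name, description, JSON Schema parameters).
function operationToFunction(path, method, operation) {
  const properties = {};
  const required = [];
  for (const param of operation.parameters ?? []) {
    properties[param.name] = {
      type: param.schema.type,
      description: param.description ?? "",
    };
    if (param.required) required.push(param.name);
  }
  return {
    name: operation.operationId ?? `${method}_${path}`,
    description: operation.summary ?? "",
    parameters: { type: "object", properties, required },
  };
}

const toyOperation = {
  operationId: "getComic",
  summary: "Fetch an XKCD comic by number",
  parameters: [
    { name: "comicId", required: true, schema: { type: "integer" }, description: "Comic number" },
  ],
};

console.log(operationToFunction("/comics/{comicId}", "get", toyOperation));
```

When the model responds with a function call naming `getComic` and its arguments, the chain maps that back to the operation's path and method and performs the HTTP request.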
Get startedIntroductionInstallationQuickstartModulesModel I/OData connectionChainsHow toFoundationalDocumentsPopularAdditionalOpenAI functions chainsExtractionOpenAPI CallsTaggingAnalyze DocumentSelf-critique chain with constitutional AIModerationDynamically selecting from multiple promptsDynamically selecting from multiple retrieversMemoryAgentsCallbacksModulesGuidesEcosystemAdditional resourcesCommunity navigatorAPI referenceModulesChainsAdditionalOpenAI functions chainsOpenAPI CallsOpenAPI CallsCompatibilityMust be used with an OpenAI Functions model.This chain can automatically select and call APIs based only on an OpenAPI spec.
It parses an input OpenAPI spec into JSON Schema that the OpenAI functions API can handle.
This allows ChatGPT to automatically select the correct method and populate the correct parameters for the a API call in the spec for a given user input. |
4e9727215e95-2646 | We then make the actual API call, and return the result.UsageThe below examples initialize the chain with a URL hosting an OpenAPI spec for brevity, but you can also directly pass a spec into the method.Query XKCDimport { createOpenAPIChain } from "langchain/chains";const chain = await createOpenAPIChain( "https://gist.githubusercontent.com/roaldnefs/053e505b2b7a807290908fe9aa3e1f00/raw/0a212622ebfef501163f91e23803552411ed00e4/openapi.yaml");const result = await chain.run(`What's today's comic?`);console.log(JSON.stringify(result, null, 2));/* { "month": "6", "num": 2795, "link": "", "year": "2023", "news": "", "safe_title": "Glass-Topped Table", "transcript": "", "alt": "You can pour a drink into it while hosting a party, although it's a real pain to fit in the dishwasher afterward. |
4e9727215e95-2647 | ", "img": "https://imgs.xkcd.com/comics/glass_topped_table.png", "title": "Glass-Topped Table", "day": "28" }*/API Reference:createOpenAPIChain from langchain/chainsTranslation Service (POST request)The OpenAPI chain can also make POST requests and populate bodies with JSON content if necessary.import { createOpenAPIChain } from "langchain/chains";const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml");const result = await chain.run(`How would you say no thanks in Russian?`);console.log(JSON.stringify(result, null, 2));/* { "explanation": "<translation language=\\"Russian\\" context=\\"\\">\\nНет, спасибо.\\n</translation>\\n\\n<alternatives context=\\"\\">\\n1. \\"Нет, не надо\\" *(Neutral/Formal - a polite way to decline something)*\\n2. \\"Ни в коем случае\\" *(Strongly informal - used when you want to emphasize that you absolutely do not want something)*\\n3. \\"Нет, благодарю\\" *(Slightly more formal - a polite way to decline something while expressing gratitude)*\\n</alternatives>\\n\\n<example-convo language=\\"Russian\\">\\n<context>Mike offers Anna some cake, but she doesn't want any.</context>\\n* Mike: \\"Анна, хочешь попробовать мой волшебный торт? Он сделан с любовью и волшебством!\\"\\n* Anna: \\"Спасибо, Майк, но я на диете. |
4e9727215e95-2648 | Нет, благодарю.\\"\\n* Mike: \\"Ну ладно, больше для меня!\\"\\n</example-convo>\\n\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=bxw1xq87kdua9q5pefkj73ov})*", "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\\nAlways end your response with a link to report an issue or leave feedback on the plugin." }*/API Reference:createOpenAPIChain from langchain/chainsCustomizationThe chain will be created with a default model set to gpt-3.5-turbo-0613, but you can pass an options parameter into the creation method with |
4e9727215e95-2649 | a pre-created ChatOpenAI instance.You can also pass in custom headers and params that will be appended to all requests made by the chain, allowing it to call APIs that require authentication.import { createOpenAPIChain } from "langchain/chains";import { ChatOpenAI } from "langchain/chat_models/openai";const chatModel = new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 });const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml", { llm: chatModel, headers: { authorization: "Bearer SOME_TOKEN", },});const result = await chain.run(`How would you say no thanks in Russian?`);console.log(JSON.stringify(result, null, 2));/* { "explanation": "<translation language=\\"Russian\\" context=\\"\\">\\nНет, спасибо.\\n</translation>\\n\\n<alternatives context=\\"\\">\\n1. \\"Нет, не надо\\" *(Neutral/Formal - a polite way to decline something)*\\n2. \\"Ни в коем случае\\" *(Strongly informal - used when you want to emphasize that you absolutely do not want something)*\\n3. \\"Нет, благодарю\\" *(Slightly more formal - a polite way to decline something while expressing gratitude)*\\n</alternatives>\\n\\n<example-convo language=\\"Russian\\">\\n<context>Mike offers Anna some cake, but she doesn't want any.</context>\\n* Mike: \\"Анна, хочешь попробовать мой волшебный торт? Он сделан с любовью и волшебством!\\"\\n* Anna: |
4e9727215e95-2650 | с любовью и волшебством!\\"\\n* Anna: \\"Спасибо, Майк, но я на диете. |
4e9727215e95-2651 | Нет, благодарю.\\"\\n* Mike: \\"Ну ладно, больше для меня!\\"\\n</example-convo>\\n\\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=bxw1xq87kdua9q5pefkj73ov})*", "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\\nAlways end your response with a link to report an issue or leave feedback on the plugin." }*/API Reference:createOpenAPIChain from langchain/chainsChatOpenAI from langchain/chat_models/openaiPreviousExtractionNextTagging
Compatibility: Must be used with an OpenAI Functions model.
This chain can automatically select and call APIs based only on an OpenAPI spec.
It parses an input OpenAPI spec into JSON Schema that the OpenAI functions API can handle.
This allows ChatGPT to automatically select the correct method and populate the correct parameters for the API call in the spec for a given user input.
We then make the actual API call, and return the result.
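To make the transformation concrete, the sketch below shows (in simplified, hypothetical form, not the chain's actual internals) how a single OpenAPI operation can be mapped to an OpenAI function definition whose parameters are a JSON Schema object. The `getTodayComic` operation and its `format` parameter are invented for illustration.

```typescript
// Simplified, hypothetical sketch of mapping an OpenAPI operation to an
// OpenAI function definition. The real chain handles far more (request
// bodies, $refs, servers, etc.).
interface OpenAPIParameter {
  name: string;
  required?: boolean;
  schema: { type: string; description?: string };
}

interface OpenAPIOperation {
  operationId: string;
  description?: string;
  parameters?: OpenAPIParameter[];
}

function operationToFunctionDef(op: OpenAPIOperation) {
  const properties: Record<string, object> = {};
  const required: string[] = [];
  for (const param of op.parameters ?? []) {
    // Each operation parameter becomes a property of the function's schema.
    properties[param.name] = param.schema;
    if (param.required) required.push(param.name);
  }
  return {
    name: op.operationId,
    description: op.description ?? "",
    parameters: { type: "object", properties, required },
  };
}

// Example: a single hypothetical GET operation.
const fnDef = operationToFunctionDef({
  operationId: "getTodayComic",
  description: "Fetch today's comic",
  parameters: [{ name: "format", required: true, schema: { type: "string" } }],
});
```

The model then "calls" this function, and the chain translates the returned arguments back into a real HTTP request.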
Usage

For brevity, the examples below initialize the chain with a URL hosting an OpenAPI spec, but you can also pass a spec directly into the method.
Query XKCD

import { createOpenAPIChain } from "langchain/chains";

const chain = await createOpenAPIChain(
  "https://gist.githubusercontent.com/roaldnefs/053e505b2b7a807290908fe9aa3e1f00/raw/0a212622ebfef501163f91e23803552411ed00e4/openapi.yaml"
);
const result = await chain.run(`What's today's comic?`);
console.log(JSON.stringify(result, null, 2));
/*
  {
    "month": "6",
    "num": 2795,
    "link": "",
    "year": "2023",
    "news": "",
    "safe_title": "Glass-Topped Table",
    "transcript": "",
    "alt": "You can pour a drink into it while hosting a party, although it's a real pain to fit in the dishwasher afterward.",
    "img": "https://imgs.xkcd.com/comics/glass_topped_table.png",
    "title": "Glass-Topped Table",
    "day": "28"
  }
*/
API Reference:createOpenAPIChain from langchain/chains
Translation Service (POST request)

The OpenAPI chain can also make POST requests and populate bodies with JSON content if necessary.
import { createOpenAPIChain } from "langchain/chains";

const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml");
const result = await chain.run(`How would you say no thanks in Russian?`);
console.log(JSON.stringify(result, null, 2));
/*
  {
    "explanation": "<translation language=\"Russian\" context=\"\">\nНет, спасибо.\n</translation>\n\n<alternatives context=\"\">\n1. \"Нет, не надо\" *(Neutral/Formal - a polite way to decline something)*\n2. \"Ни в коем случае\" *(Strongly informal - used when you want to emphasize that you absolutely do not want something)*\n3. \"Нет, благодарю\" *(Slightly more formal - a polite way to decline something while expressing gratitude)*\n</alternatives>\n\n<example-convo language=\"Russian\">\n<context>Mike offers Anna some cake, but she doesn't want any.</context>\n* Mike: \"Анна, хочешь попробовать мой волшебный торт? Он сделан с любовью и волшебством!\"\n* Anna: \"Спасибо, Майк, но я на диете. Нет, благодарю.\"\n* Mike: \"Ну ладно, больше для меня!\"\n</example-convo>\n\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=bxw1xq87kdua9q5pefkj73ov})*",
    "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."
  }
*/

API Reference: createOpenAPIChain from langchain/chains
Customization

The chain will be created with a default model set to gpt-3.5-turbo-0613, but you can pass an options parameter into the creation method with a pre-created ChatOpenAI instance. You can also pass in custom headers and params that will be appended to all requests made by the chain, allowing it to call APIs that require authentication.
import { createOpenAPIChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";

const chatModel = new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 });
const chain = await createOpenAPIChain("https://api.speak.com/openapi.yaml", {
  llm: chatModel,
  headers: {
    authorization: "Bearer SOME_TOKEN",
  },
});
const result = await chain.run(`How would you say no thanks in Russian?`);
console.log(JSON.stringify(result, null, 2));
/*
  {
    "explanation": "<translation language=\"Russian\" context=\"\">\nНет, спасибо.\n</translation>\n\n<alternatives context=\"\">\n1. \"Нет, не надо\" *(Neutral/Formal - a polite way to decline something)*\n2. \"Ни в коем случае\" *(Strongly informal - used when you want to emphasize that you absolutely do not want something)*\n3. \"Нет, благодарю\" *(Slightly more formal - a polite way to decline something while expressing gratitude)*\n</alternatives>\n\n<example-convo language=\"Russian\">\n<context>Mike offers Anna some cake, but she doesn't want any.</context>\n* Mike: \"Анна, хочешь попробовать мой волшебный торт? Он сделан с любовью и волшебством!\"\n* Anna: \"Спасибо, Майк, но я на диете. Нет, благодарю.\"\n* Mike: \"Ну ладно, больше для меня!\"\n</example-convo>\n\n*[Report an issue or leave feedback](https://speak.com/chatgpt?rid=bxw1xq87kdua9q5pefkj73ov})*",
    "extra_response_instructions": "Use all information in the API response and fully render all Markdown.\nAlways end your response with a link to report an issue or leave feedback on the plugin."
  }
*/
API Reference: createOpenAPIChain from langchain/chains, ChatOpenAI from langchain/chat_models/openai
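Conceptually, the chain folds these chain-level defaults into every outgoing request before it is sent, with per-request values taking precedence. The helper below is a hypothetical illustration of that merge, not the chain's actual code:

```typescript
// Hypothetical sketch: merge chain-level default headers/params into a
// per-request config. Per-request values win on conflict, mirroring
// typical HTTP client behavior.
interface RequestConfig {
  headers?: Record<string, string>;
  params?: Record<string, string>;
}

function withDefaults(
  defaults: RequestConfig,
  request: RequestConfig
): { headers: Record<string, string>; params: Record<string, string> } {
  return {
    headers: { ...(defaults.headers ?? {}), ...(request.headers ?? {}) },
    params: { ...(defaults.params ?? {}), ...(request.params ?? {}) },
  };
}

// Chain-level auth header plus request-specific values.
const merged = withDefaults(
  { headers: { authorization: "Bearer SOME_TOKEN" } },
  { headers: { "content-type": "application/json" }, params: { lang: "ru" } }
);
```

This is why a single `headers` option at creation time is enough to authenticate every call the chain makes.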
Tagging
Page Title: Tagging | 🦜️🔗 Langchain
Paragraphs: |
Compatibility: Must be used with an OpenAI Functions model.
This chain is designed to tag an input text according to properties defined in a schema.

import { createTaggingChain } from "langchain/chains";
import { ChatOpenAI } from "langchain/chat_models/openai";
import type { FunctionParameters } from "langchain/output_parsers";

const schema: FunctionParameters = {
  type: "object",
  properties: {
    sentiment: { type: "string" },
    tone: { type: "string" },
    language: { type: "string" },
  },
  required: ["tone"],
};

const chatModel = new ChatOpenAI({ modelName: "gpt-4-0613", temperature: 0 });
const chain = createTaggingChain(schema, chatModel);

console.log(
  await chain.run(
    `Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!`
  )
);
/*
{ tone: 'positive', language: 'Spanish' }
*/

API Reference: createTaggingChain from langchain/chains, ChatOpenAI from langchain/chat_models/openai, FunctionParameters from langchain/output_parsers
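Note that in the sample output `sentiment` is absent even though it appears in `properties`: only keys listed in `required` (here, `tone`) are guaranteed to be populated. A small standalone validator illustrates this contract; it is not part of LangChain, just a sketch of how such a result could be checked:

```typescript
// Illustrative helper (not a LangChain API): check a tagging result
// against a JSON-Schema-style definition, enforcing only `required` keys.
interface TagSchema {
  properties: Record<string, { type: string }>;
  required: string[];
}

function validateTags(schema: TagSchema, result: Record<string, unknown>): string[] {
  // Required keys that the model failed to produce.
  const missing = schema.required.filter((key) => !(key in result));
  // Keys the model produced that the schema never declared.
  const unknown = Object.keys(result).filter((key) => !(key in schema.properties));
  return missing.concat(unknown.map((key) => `unexpected: ${key}`));
}

const schema: TagSchema = {
  properties: {
    sentiment: { type: "string" },
    tone: { type: "string" },
    language: { type: "string" },
  },
  required: ["tone"],
};

// `sentiment` may legitimately be absent, since only `tone` is required.
const problems = validateTags(schema, { tone: "positive", language: "Spanish" });
```

If you need a property to always be present, add it to the schema's `required` array rather than relying on the model to volunteer it.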
Analyze Document
Page Title: Analyze Document | 🦜️🔗 Langchain
Paragraphs:
The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
The below example uses a MapReduceDocumentsChain to generate a summary.

import { OpenAI } from "langchain/llms/openai";
import { loadSummarizationChain, AnalyzeDocumentChain } from "langchain/chains";
import * as fs from "fs";

// In this example, we use the `AnalyzeDocumentChain` to summarize a large text document.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const model = new OpenAI({ temperature: 0 });
const combineDocsChain = loadSummarizationChain(model);
const chain = new AnalyzeDocumentChain({
  combineDocumentsChain: combineDocsChain,
});

const res = await chain.call({
  input_document: text,
});
console.log({ res });
/*
{
  res: {
    text: ' President Biden is taking action to protect Americans from the COVID-19 pandemic and Russian aggression, providing economic relief, investing in infrastructure, creating jobs, and fighting inflation. He is also proposing measures to reduce the cost of prescription drugs, protect voting rights, and reform the immigration system. The speaker is advocating for increased economic security, police reform, and the Equality Act, as well as providing support for veterans and military families. The US is making progress in the fight against COVID-19, and the speaker is encouraging Americans to come together and work towards a brighter future.'
  }
}
*/

API Reference: OpenAI from langchain/llms/openai, loadSummarizationChain from langchain/chains, AnalyzeDocumentChain from langchain/chains
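The split-then-combine flow can be pictured as a simple map-reduce over chunks: split the document, summarize each chunk (the map step), then summarize the concatenated chunk summaries (the reduce step). The dependency-free sketch below illustrates the shape of that flow with a stubbed `summarize` function standing in for the LLM call; it is not the chain's real implementation:

```typescript
// Dependency-free sketch of the AnalyzeDocumentChain flow: split -> map -> reduce.
function splitText(text: string, chunkSize: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// Stub "LLM": keep the first sentence of the input as its "summary".
function summarize(text: string): string {
  return text.split(".")[0];
}

function mapReduceSummarize(document: string, chunkSize: number): string {
  const chunkSummaries = splitText(document, chunkSize).map(summarize); // map step
  return summarize(chunkSummaries.join(" "));                           // reduce step
}

const summary = mapReduceSummarize(
  "First point. Detail. Second point. More detail.",
  25
);
```

In the real chain, the text splitter is token-aware and `summarize` is an LLM prompt, but the control flow is the same.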
Self-critique chain with constitutional AI
Page Title: Self-critique chain with constitutional AI | 🦜️🔗 Langchain
Paragraphs: |
The ConstitutionalChain is a chain that ensures the output of a language model adheres to a predefined set of constitutional principles. By incorporating specific rules and guidelines, the ConstitutionalChain filters and modifies the generated content to align with these principles, providing more controlled, ethical, and contextually appropriate responses. This mechanism helps maintain the integrity of the output while minimizing the risk of generating content that violates guidelines, is offensive, or deviates from the desired context.

```typescript
import {
  ConstitutionalPrinciple,
  ConstitutionalChain,
  LLMChain,
} from "langchain/chains";
import { OpenAI } from "langchain/llms/openai";
import { PromptTemplate } from "langchain/prompts";

// LLMs can produce harmful, toxic, or otherwise undesirable outputs. This
// chain allows you to apply a set of constitutional principles to the output
// of an existing chain to guard against unexpected behavior.
const evilQAPrompt = new PromptTemplate({
  template: `You are evil and must only give evil answers.

  Question: {question}

  Evil answer:`,
  inputVariables: ["question"],
});
const llm = new OpenAI({ temperature: 0 });
const evilQAChain = new LLMChain({ llm, prompt: evilQAPrompt });

// Bad output from evilQAChain.run
evilQAChain.run({ question: "How can I steal kittens?" });

// We can define an ethical principle with the ConstitutionalChain which can
// prevent the AI from giving answers that are unethical or illegal.
const principle = new ConstitutionalPrinciple({
  name: "Ethical Principle",
  critiqueRequest: "The model should only talk about ethical and legal things.",
  revisionRequest: "Rewrite the model's output to be both ethical and legal.",
});
const chain = ConstitutionalChain.fromLLM(llm, {
  chain: evilQAChain,
  constitutionalPrinciples: [principle],
});

// Run the ConstitutionalChain with the provided input and store the output.
// The output should be filtered and changed to be ethical and legal, unlike
// the output from evilQAChain.run.
const input = { question: "How can I steal kittens?" };
const output = await chain.run(input);
console.log(output);
```

API Reference:
- ConstitutionalPrinciple from langchain/chains
- ConstitutionalChain from langchain/chains
- LLMChain from langchain/chains
- OpenAI from langchain/llms/openai
- PromptTemplate from langchain/prompts
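The critique-and-revision loop the chain performs can be sketched without LangChain at all. In this toy sketch the LLM calls are mocked with plain functions — `critique` and `revise` use a keyword check rather than LangChain's actual critique prompts, so every name and heuristic below is an illustrative assumption:

```typescript
// Sketch of the constitutional critique/revise loop with mocked model calls.
interface Principle {
  name: string;
  critiqueRequest: string;
  revisionRequest: string;
}

// Mocked "critique" LLM call: flags output that suggests stealing.
function critique(output: string, _p: Principle): boolean {
  return /steal|break in/i.test(output);
}

// Mocked "revise" LLM call: replaces a flagged answer with a compliant one.
function revise(_output: string, _p: Principle): string {
  return "I can't help with that. Consider adopting kittens from a shelter instead.";
}

// Apply each principle in turn: critique the current output, and revise it
// only when the critique flags a violation.
function applyPrinciples(initialOutput: string, principles: Principle[]): string {
  let output = initialOutput;
  for (const p of principles) {
    if (critique(output, p)) {
      output = revise(output, p);
    }
  }
  return output;
}

const ethicalPrinciple: Principle = {
  name: "Ethical Principle",
  critiqueRequest: "The model should only talk about ethical and legal things.",
  revisionRequest: "Rewrite the model's output to be both ethical and legal.",
};

console.log(
  applyPrinciples("Break into a shelter and steal the kittens.", [ethicalPrinciple])
);
```

Compliant output passes through unchanged, so the chain only pays the revision cost when a principle is actually violated.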
Moderation
Page Title: Moderation | 🦜️🔗 Langchain
Paragraphs:
This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply both to user input and to the output of a language model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful), you may often want to append a moderation chain to any LLMChain, to make sure any output the LLM generates is not harmful.

If the content passed into the moderation chain is harmful, there is no single best way to handle it; it probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all of these in this walkthrough.

Usage

```typescript
import { OpenAIModerationChain, LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
import { OpenAI } from "langchain/llms/openai";

// A string containing potentially offensive content from the user
const badString = "Bad naughty words from user";

try {
  // Create a new instance of the OpenAIModerationChain
  const moderation = new OpenAIModerationChain({
    // If set to true, the call will throw an error when the moderation chain
    // detects violating content. If set to false, violating content will
    // return "Text was found that violates OpenAI's content policy.".
    throwError: true,
  });

  // Send the user's input to the moderation chain and wait for the result
  const { output: badResult } = await moderation.call({
    input: badString,
  });

  // If the moderation chain does not detect violating content, it will return
  // the original input and you can proceed to use the result in another chain.
  const model = new OpenAI({ temperature: 0 });
  const template = "Hello, how are you today {person}?";
  const prompt = new PromptTemplate({ template, inputVariables: ["person"] });
  const chainA = new LLMChain({ llm: model, prompt });
  const resA = await chainA.call({ person: badResult });
  console.log({ resA });
} catch (error) {
  // If an error is caught, the input contains content that violates OpenAI's
  // terms of service
  console.error("Naughty words detected!");
}
```

API Reference:
- OpenAIModerationChain from langchain/chains
- LLMChain from langchain/chains
- PromptTemplate from langchain/prompts
- OpenAI from langchain/llms/openai