Passing specific options here is completely optional, but can be useful if you want to customize the way the response is presented to the end user, or if you have too many documents for the default StuffDocumentsChain. You can see the API reference of the usable fields here. In case you want to make chat_history available…
…to let it know which values to store.

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
// …
```
```typescript
// … (e.g. gpt-3.5 or gpt-4)
  }),
  questionGeneratorChainOptions: {
    llm: fasterModel,
  },
});

/* Ask it a question */
const question = "What did the president say about Justice Breyer?";
const res = await chain.call({ question });
console.log(res);

const followUpRes = await chain.call({
  question: "Wa…
```
Here's an example:

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ConversationalRetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
// …
```
```typescript
  streaming: true,
  callbacks: [
    {
      handleLLMNewToken(token) {
        streamedResponse += token;
      },
    },
  ],
});
const nonStreamingModel = new ChatOpenAI({});
const chain = ConversationalRetrievalQAChain.fromLLM(
  streamingModel,
  vectorStore.asRetriever(),
  {
    returnSourceDocuments: true,
    // …
```
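The streaming setup above hinges on a `handleLLMNewToken`-style callback. As a minimal LangChain-free sketch (the emitter below is a hypothetical stand-in, not the real API), such a callback simply accumulates tokens into a string as they arrive:

```typescript
// Stand-in for a streaming LLM: emits hard-coded tokens one at a time and
// invokes a handleLLMNewToken-style callback for each.
type TokenCallback = (token: string) => void;

function fakeStreamingModel(onToken: TokenCallback): string {
  const tokens = ["The ", "president ", "thanked ", "Justice ", "Breyer."];
  for (const t of tokens) {
    onToken(t); // fired once per generated token
  }
  return tokens.join("");
}

let streamedResponse = "";
const full = fakeStreamingModel((token) => {
  streamedResponse += token;
});
// streamedResponse was assembled token by token and equals the full response.
```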
4e9727215e95-2505
a chat_history string or array of HumanMessages and AIMessages directly into the chain.call method:import { OpenAI } from "langchain/llms/openai";import { ConversationalRetrievalQAChain } from "langchain/chains";import { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddi...
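Because chat_history can be either a single string or an array of typed messages, it can help to see how the two forms relate. This is a sketch only: neither the `ChatMessage` shape nor `formatChatHistory` is a LangChain API, they just illustrate flattening a message array into the string form:

```typescript
// Hypothetical message shape: each turn is tagged as human or AI.
type ChatMessage = { role: "human" | "ai"; text: string };

// Normalize a chat history that may be a plain string or a message array
// into the single-string form.
function formatChatHistory(history: string | ChatMessage[]): string {
  if (typeof history === "string") return history;
  return history
    .map((m) => `${m.role === "human" ? "Human" : "Assistant"}: ${m.text}`)
    .join("\n");
}

const chatHistory = formatChatHistory([
  { role: "human", text: "What did the president say about Justice Breyer?" },
  { role: "ai", text: "He thanked Justice Breyer for his service." },
]);
```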
4e9727215e95-2506
", chat_history: chatHistory,});console.log(followUpRes);API Reference:OpenAI from langchain/llms/openaiConversationalRetrievalQAChain from langchain/chainsHNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiRecursiveCharacterTextSplitter from langchain/text_splitterPrompt Custo...
4e9727215e95-2507
allowing the QA chain to answer meta questions with the additional context:import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationalRetrievalQAChain } from "langchain/chains";import { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langchain/embeddings/openai";i...
4e9727215e95-2508
make up an answer.----------------<Relevant chat history excerpt as context here>Standalone question: <Rephrased question here>\`\`\`Your answer:`;const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo", temperature: 0,});const vectorStore = await HNSWLib.fromTexts( [ "Mitochondria are the powerhouse of the cel...
4e9727215e95-2509
}*/API Reference:ChatOpenAI from langchain/chat_models/openaiConversationalRetrievalQAChain from langchain/chainsHNSWLib from langchain/vectorstores/hnswlibOpenAIEmbeddings from langchain/embeddings/openaiBufferMemory from langchain/memoryKeep in mind that adding more context to the prompt in this way may distract the ...
4e9727215e95-2510
In the below example, we will create one from a vector store, which can be created from embeddings.import { ChatOpenAI } from "langchain/chat_models/openai";import { ConversationalRetrievalQAChain } from "langchain/chains";import { HNSWLib } from "langchain/vectorstores/hnswlib";import { OpenAIEmbeddings } from "langch...
4e9727215e95-2511
"; const res = await chain.call({ question }); console.log(res); /* Ask it a follow up question */ const followUpRes = await chain.call({ question: "Was that nice? ", }); console.log(followUpRes);};API Reference:ChatOpenAI from langchain/chat_models/openaiConversationalRetrievalQAChain from langchain/chainsHNS...
SQL

Page Title: SQL | 🦜️🔗 Langchain
…Postgres, SQLite, Microsoft SQL Server, MySQL, and SAP HANA. Finally, follow the instructions on https://database.guide/2-sample-databases-sqlite/ to get the sample database for this example.

```typescript
import { DataSource } from "typeorm";
import { OpenAI } from "langchain/llms/openai";
import { SqlDatabase } from "langchain/sql_db";
import { SqlDatabaseChain } from "langchain/chains/sql_db";
// …
```
It can also reduce the number of tokens used in the chain.

```typescript
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
  includesTables: ["Track"],
});
```

If desired, you can return the used SQL command when calling the chain.

```typescript
import { DataSource } from "typeorm";
import { OpenAI } from "langchain/llms/openai";
// …
```
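The `includesTables` filter shown above saves tokens because the chain embeds a schema description of each table in the prompt it sends to the model; fewer tables means a shorter description. A rough sketch of that idea (the helper and table data are hypothetical, not LangChain's implementation):

```typescript
// Toy catalog of tables and their columns.
const allTables: Record<string, string[]> = {
  Track: ["TrackId", "Name", "AlbumId"],
  Album: ["AlbumId", "Title", "ArtistId"],
  Artist: ["ArtistId", "Name"],
};

// Build the schema text that would be stuffed into the prompt, optionally
// restricted to a subset of tables.
function schemaDescription(
  tables: Record<string, string[]>,
  includesTables?: string[]
): string {
  return Object.entries(tables)
    .filter(([name]) => !includesTables || includesTables.includes(name))
    .map(([name, cols]) => `CREATE TABLE ${name} (${cols.join(", ")});`)
    .join("\n");
}

const fullSchema = schemaDescription(allTables);
const trackOnly = schemaDescription(allTables, ["Track"]);
// trackOnly is a strictly shorter prompt fragment than fullSchema.
```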
```typescript
/* …
 *   ',
 *   sql: ' SELECT COUNT(*) FROM "Track";'
 * }
 */
console.log(res);
```

API Reference:
- OpenAI from langchain/llms/openai
- SqlDatabase from langchain/sql_db
- SqlDatabaseChain from langchain/chains/sql_db

SAP Hana

Here's an example of using the chain with a SAP HANA database:

```typescript
import { DataSource } from "typeorm";
import { OpenAI } from "langchain/llms/openai";
// …
```
4e9727215e95-2559
");console.log(res);// There are 3503 tracks.API Reference:OpenAI from langchain/llms/openaiSqlDatabase from langchain/sql_dbSqlDatabaseChain from langchain/chains/sql_dbCustom prompt​You can also customize the prompt that is used. Here is an example prompting the model to understand that "foobar" is the same as the Em...
```typescript
// …
const datasource = new DataSource({
  type: "sqlite",
  database: "data/Chinook.db",
});
const db = await SqlDatabase.fromDataSourceParams({
  appDataSource: datasource,
});
const chain = new SqlDatabaseChain({
  llm: new OpenAI({ temperature: 0 }),
  database: db,
  sqlOutputKey: "sql",
  prompt,
});
const res = await chain.call(/* … */);
```
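The custom `prompt` passed to the chain above is a template with placeholders such as `{input}` that get filled per call. A minimal sketch of that substitution step (`fillTemplate` is a hypothetical helper, not LangChain's PromptTemplate implementation):

```typescript
// Replace {name}-style placeholders in a template with supplied values;
// unknown placeholders are left untouched.
function fillTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match: string, key: string) =>
    key in values ? values[key] : match
  );
}

const template =
  "Given an input question, first create a syntactically correct {dialect} query to run.\nQuestion: {input}";
const prompt = fillTemplate(template, {
  dialect: "sqlite",
  input: "How many employees are there in the foobar table?",
});
```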
Structured Output with OpenAI functions

Page Title: Structured Output with OpenAI functions | 🦜️🔗 Langchain
…library and convert it with the zod-to-json-schema package. To do so, install the following packages:

- npm: `npm install zod zod-to-json-schema`
- Yarn: `yarn add zod zod-to-json-schema`
- pnpm: `pnpm add zod zod-to-json-schema`

Format Text into Structured Data

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// …
```
4e9727215e95-2582
), HumanMessagePromptTemplate.fromTemplate("{inputText}"), ], inputVariables: ["inputText"],});const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo-0613", temperature: 0 });// Binding "function_call" below makes the model always call the specified function.// If you want to allow the model to call functions sele...
4e9727215e95-2583
of using the createStructuredOutputChainFromZod convenience method to return a classic LLMChain:import { z } from "zod";import { ChatOpenAI } from "langchain/chat_models/openai";import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate,} from "langchain/prompts";import { createStructuredOu...
4e9727215e95-2584
), HumanMessagePromptTemplate.fromTemplate("Additional context: {inputText}"), ], inputVariables: ["inputText"],});const llm = new ChatOpenAI({ modelName: "gpt-3.5-turbo-0613", temperature: 1 });const chain = createStructuredOutputChainFromZod(zodSchema, { prompt, llm, outputKey: "person",});const response = aw...
This chain converts the input schema into an OpenAI function, then forces OpenAI to call that function to return a response in the correct format. You can use it where you would use a chain with a StructuredOutputParser, but it doesn't require any special instructions stuffed into the prompt. It will also more reliably output structured…
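The mechanism described above can be sketched without LangChain: a JSON-schema description of the desired output is wrapped as an OpenAI function definition, and forcing `function_call` in the request means the model must reply via that function, i.e. with arguments matching the schema. The builder below is a hypothetical illustration of the request shape, not LangChain's or OpenAI's actual client code:

```typescript
// JSON-schema description of the structured output we want back.
const personSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age: { type: "number" },
  },
  required: ["name", "age"],
};

// Hypothetical request-payload builder for a function-forcing chat call.
function buildFunctionCallRequest(schema: object, inputText: string) {
  return {
    model: "gpt-3.5-turbo-0613",
    messages: [{ role: "user", content: inputText }],
    functions: [{ name: "output_formatter", parameters: schema }],
    // Forcing function_call makes the model always respond through the
    // function, so its arguments are constrained to the schema.
    function_call: { name: "output_formatter" },
  };
}

const request = buildFunctionCallRequest(personSchema, "John is 28 years old.");
```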