.md
# Migrating

## 🚨 Breaking Changes for select chains (SQLDatabase) on 7/28/23

In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`. This migration has already started, but we will remain backwards compatible until 7/28. On that date, we will remove the functionality from `langchain`. Read more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).

### Migrating to `langchain_experimental`

We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`. This guide covers how to migrate.

### Installation

Previously: `pip install -U langchain`

Now (only if you want to access things in experimental): `pip install -U langchain langchain_experimental`

### Things in `langchain.experimental`

Previously: `from langchain.experimental import ...`

Now: `from langchain_experimental import ...`

### PALChain

Previously: `from langchain.chains import PALChain`

Now: `from langchain_experimental.pal_chain import PALChain`

### SQLDatabaseChain

Previously: `from langchain.chains import SQLDatabaseChain`

Now: `from langchain_experimental.sql import SQLDatabaseChain`

Alternatively, if you are just interested in the query-generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb):

`from langchain.chains import create_sql_query_chain`

### `load_prompt` for Python files

Note: this only applies if you want to load Python files as prompts. If you want to load json/yaml files, no change is needed.

Previously: `from langchain.prompts import load_prompt`

Now: `from langchain_experimental.prompts import load_prompt`
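To tie the `SQLDatabaseChain` migration above together, here is a minimal, hedged sketch of the post-migration import in context. The SQLite file, model choice, and question are purely illustrative assumptions, not part of the migration guide itself:

```python
# After 7/28/23 the chain lives in `langchain_experimental`, not `langchain.chains`.
from langchain_experimental.sql import SQLDatabaseChain

from langchain_community.utilities import SQLDatabase
from langchain_openai import OpenAI

# Hypothetical local database and model, used only to show the new import path in use.
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0)

chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
chain.run("How many employees are there?")
```

Only the import line changes; the rest of an existing `SQLDatabaseChain` setup should carry over unchanged.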
C:\Users\wesla\CodePilotAI\repositories\langchain\MIGRATE.md
.md
# 🦜️🔗 LangChain

⚡ Build context-aware reasoning applications ⚡

[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)

Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com). [LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications. Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.

## Quick Install

With pip:

```bash
pip install langchain
```

With conda:

```bash
conda install langchain -c conda-forge
```

## 🤔 What is LangChain?

**LangChain** is a framework for developing applications powered by language models. It enables applications that:

- **Are context-aware**: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)

This framework consists of several parts.

- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.

The LangChain libraries themselves are made up of several different packages.

- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third-party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.

![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")

## 🧱 What can you build with LangChain?

**❓ Retrieval augmented generation**

- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)

**💬 Analyzing structured data**

- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)

**🤖 Chatbots**

- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)

And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.

## 🚀 How does LangChain help?

The main value props of the LangChain libraries are:

1. **Components**: composable tools and integrations for working with language models. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks.

Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.

Components fall into the following **modules**:

**📃 Model I/O:** This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.

**📚 Retrieval:** Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.

**🤖 Agents:** Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
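To make the component composition described above concrete, here is a minimal sketch that wires a prompt, a chat model, and an output parser together with the LangChain Expression Language. It assumes the `langchain-openai` integration package is installed and an `OPENAI_API_KEY` is set in the environment; the model name and prompt text are illustrative only:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes `pip install langchain-openai` and OPENAI_API_KEY

# Compose three components (prompt -> model -> parser) into a single chain
# using LCEL's `|` operator.
prompt = ChatPromptTemplate.from_template("Give a one-sentence summary of {topic}.")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "retrieval augmented generation"}))
```

Each piece is an independent component, so swapping the model provider or adding a retriever changes only that one link in the chain.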
## 📖 Documentation

Please see [here](https://python.langchain.com) for full documentation, which includes:

- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best-practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs

## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).

## 🌟 Contributors

[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)
C:\Users\wesla\CodePilotAI\repositories\langchain\README.md
.md
# Security Policy

## Reporting a Vulnerability

Please report security vulnerabilities by email to `security@langchain.dev`. This email is an alias to a subset of our maintainers, and will ensure the issue is promptly triaged and acted upon as needed.
C:\Users\wesla\CodePilotAI\repositories\langchain\SECURITY.md
.md
# Dev container

This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.

You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).

## GitHub Codespaces

[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)

You may use the button above, or follow these steps to open this repo in a Codespace:

1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.

For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

## VS Code Dev Containers

[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)

Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace it with your username and cloned repo name:

```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
```

Then you will have a local cloned repo where you can contribute and then create pull requests.

If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

Alternatively, you can follow these steps to open this repo in a container using the VS Code Dev Containers extension:

1. If this is your first time using a development container, please ensure your system meets the prerequisites (i.e. Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).

2. Open a locally cloned copy of the code:

   - Fork and clone this repository to your local filesystem.
   - Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
   - Select the cloned copy of this folder, wait for the container to start, and try things out!

You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).

## Tips and tricks

* If you are working with the same repository folder in both a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
C:\Users\wesla\CodePilotAI\repositories\langchain\.devcontainer\README.md
.md
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at conduct@langchain.dev. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior.
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1, available at [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at [https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
C:\Users\wesla\CodePilotAI\repositories\langchain\.github\CODE_OF_CONDUCT.md
.md
# Contributing to LangChain

Hi there! Thank you for even being interested in contributing to LangChain. As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
C:\Users\wesla\CodePilotAI\repositories\langchain\.github\CONTRIBUTING.md
.md
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
  - Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"

- [ ] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!

- [ ] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.

- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:

- Make sure optional dependencies are imported within a function (see the sketch after this checklist).
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.
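As a hedged illustration of the optional-dependency guideline above, the sketch below keeps the third-party import inside the function that needs it, so the package is only required by users of that feature. The `foobar` package and its `generate` call are hypothetical placeholders, not a real integration:

```python
from typing import Any


def _import_foobar() -> Any:
    """Lazily import the optional `foobar` dependency (hypothetical package)."""
    try:
        import foobar  # not imported at module load time, so core installs stay lean
    except ImportError as exc:
        raise ImportError(
            "Could not import foobar python package. "
            "Please install it with `pip install foobar`."
        ) from exc
    return foobar


def call_foobar_llm(prompt: str) -> str:
    """Example integration helper that only needs `foobar` when actually called."""
    foobar = _import_foobar()
    return foobar.generate(prompt)  # hypothetical API, shown for the import pattern only
```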
C:\Users\wesla\CodePilotAI\repositories\langchain\.github\PULL_REQUEST_TEMPLATE.md
.md
# LangChain cookbook

Example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than contained in the [main documentation](https://python.langchain.com).

Notebook | Description
:- | :-
[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, a language model, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.
[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.
[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.
[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.
[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of gpt and activeloop's deep lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools
[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant documents to continue generating the answer.
[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.
[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.
[hugginggpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/hugginggpt.ipynb) | Implement hugginggpt, a system that connects language models like chatgpt with the machine learning community via hugging face.
[hypothetical_document_embeddin...](https://github.com/langchain-ai/langchain/tree/master/cookbook/hypothetical_document_embeddings.ipynb) | Improve document indexing with hypothetical document embeddings (hyde), an embedding technique that generates and embeds hypothetical answers to queries.
[learned_prompt_optimization.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/learned_prompt_optimization.ipynb) | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.
[llm_bash.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_bash.ipynb) | Perform simple filesystem commands using large language models (llms) and a bash process.
[llm_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_checker.ipynb) | Create a self-checking chain using the llmcheckerchain function.
[llm_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_math.ipynb) | Solve complex word math problems using language models and python repls.
[llm_summarization_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_summarization_checker.ipynb) | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.
[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of llms (large language models) and sympy, a python library for symbolic mathematics.
[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.
[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.
[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player dungeons & dragons games, with a custom function determining the speaking schedule of the agents.
[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news network.
[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.
[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.
[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.
[openai_v1_cookbook.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_v1_cookbook.ipynb) | Explore new functionality released alongside the V1 release of the OpenAI Python library.
[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.
[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.
[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.
[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep lake.
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
C:\Users\wesla\CodePilotAI\repositories\langchain\cookbook\README.md
.md
# LangChain Documentation

For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/documentation).
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\README.md
.txt
-e ../libs/langchain
-e ../libs/community
-e ../libs/core
urllib3==1.26.18
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\vercel_requirements.txt
.txt
-e libs/experimental
-e libs/langchain
-e libs/core
-e libs/community
pydantic<2
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9
sphinx>=5
sphinx-autobuild==2021.3.14
sphinx_rtd_theme==1.0.0
sphinx-typlog-theme==0.8.0
sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\api_reference\requirements.txt
.txt
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\api_reference\templates\COPYRIGHT.txt
.txt
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\api_reference\themes\COPYRIGHT.txt
.md
# Security

LangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.

## Best Practices

When building such applications developers should remember to follow good security practices:

* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it’s safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It’s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.

Risks of not doing so include, but are not limited to:

* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.

Example scenarios with mitigation strategies:

* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials (a code sketch of this mitigation appears below).

If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.

## Reporting a Vulnerability

Please report security vulnerabilities by email to security@langchain.dev. This will ensure the issue is promptly triaged and acted upon as needed.

## Enterprise solutions

LangChain may offer enterprise solutions for customers who have additional security requirements. Please contact us at sales@langchain.dev.
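As a deliberately minimal illustration of the database mitigation above, the sketch below connects LangChain's `SQLDatabase` wrapper using a read-only role and exposes only the tables the agent needs. The connection string, role name, and table names are assumptions for the example, not prescriptions:

```python
from langchain_community.utilities import SQLDatabase

# Hypothetical read-only role: the database administrator grants it SELECT on
# these two tables only, so even a misbehaving chain cannot DROP tables or
# mutate the schema.
READONLY_URI = "postgresql+psycopg2://reporting_ro:example-password@localhost:5432/appdb"

db = SQLDatabase.from_uri(
    READONLY_URI,
    include_tables=["orders", "customers"],  # expose only the tables the agent needs
    sample_rows_in_table_info=2,             # limit how much raw data lands in prompts
)

# Any chain or agent built on top of `db` now inherits these narrowed permissions.
print(db.get_usable_table_names())
```

Combining the read-only role with `include_tables` is an example of the layered, defense-in-depth approach recommended above.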
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\security.md
.md
# Debugging

If you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.

Here are a few different tools and functionalities to aid in debugging.

## Tracing

Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.

For anyone building production-grade LLM applications, we highly recommend using a platform like this.

![Screenshot of the LangSmith debugging interface showing an AgentExecutor run with input and output details, and a run tree visualization.](../../static/img/run_details.png "LangSmith Debugging Interface")

## `set_debug` and `set_verbose`

If you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run. There are a number of ways to enable printing at varying degrees of verbosity.

Let's suppose we have a simple agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4", temperature=0)
tools = load_tools(["ddg-search", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```

```python
agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```

<CodeOutputBlock lang="python">

```
'The director of the 2023 film Oppenheimer is Christopher Nolan and he is approximately 19345 days old in 2023.'
```

</CodeOutputBlock>

### `set_debug(True)`

Setting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.

```python
from langchain.globals import set_debug

set_debug(True)

agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?")
```

<details>
<summary>Console output</summary>
<CodeOutputBlock lang="python">

```
[chain/start] [1:RunTypeEnum.chain:AgentExecutor] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?" }
[chain/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "", "stop": [ "\nObservation:", "\n\tObservation:" ] }
[llm/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events.
Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] [5.53s] Exiting LLM run with output: { "generations": [ [ { "text": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 206, "completion_tokens": 71, "total_tokens": 277 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] [5.53s] Exiting Chain run with output: { "text": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input: "Director of the 2023 film Oppenheimer and their age" [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] [1.51s] Exiting Tool run with output: "Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. 
Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age." [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... 
In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] [4.46s] Exiting LLM run with output: { "generations": [ [ { "text": "The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 550, "completion_tokens": 39, "total_tokens": 589 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] [4.46s] Exiting Chain run with output: { "text": "The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input: "Christopher Nolan age" [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] [1.33s] Exiting Tool run with output: "Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as "Dunkirk," "Inception," "Interstellar," and the "Dark Knight" trilogy, has spent the last three years living in Oppenheimer's world, writing ..." 
[chain/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. 
Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. 
Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] [2.69s] Exiting LLM run with output: { "generations": [ [ { "text": "Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 868, "completion_tokens": 46, "total_tokens": 914 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] [2.69s] Exiting Chain run with output: { "text": "Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365" } [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] Entering Tool run with input: "52*365" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] Entering Chain run with input: { "question": "52*365" } [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "question": "52*365", "stop": [ "```output" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Translate a math problem into a expression that can be executed using Python's numexpr library. 
Use the output of running this code to answer the question.\n\nQuestion: ${Question with math problem.}\n```text\n${single line mathematical expression that solves the problem}\n```\n...numexpr.evaluate(text)...\n```output\n${Output of running the code}\n```\nAnswer: ${Answer}\n\nBegin.\n\nQuestion: What is 37593 * 67?\n```text\n37593 * 67\n```\n...numexpr.evaluate(\"37593 * 67\")...\n```output\n2518731\n```\nAnswer: 2518731\n\nQuestion: 37593^(1/5)\n```text\n37593**(1/5)\n```\n...numexpr.evaluate(\"37593**(1/5)\")...\n```output\n8.222831614237718\n```\nAnswer: 8.222831614237718\n\nQuestion: 52*365" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] [2.89s] Exiting LLM run with output: { "generations": [ [ { "text": "```text\n52*365\n```\n...numexpr.evaluate(\"52*365\")...\n", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "```text\n52*365\n```\n...numexpr.evaluate(\"52*365\")...\n", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 203, "completion_tokens": 19, "total_tokens": 222 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] [2.89s] Exiting Chain run with output: { "text": "```text\n52*365\n```\n...numexpr.evaluate(\"52*365\")...\n" } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] [2.90s] Exiting Chain run with output: { "answer": "Answer: 18980" } [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] [2.90s] Exiting Tool run with output: "Answer: 18980" [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] Entering Chain run with input: { "input": "Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?", "agent_scratchpad": "I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. 
Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365\nObservation: Answer: 18980\nThought:", "stop": [ "\nObservation:", "\n\tObservation:" ] } [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input: { "prompts": [ "Human: Answer the following questions as best you can. You have access to the following tools:\n\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\nCalculator: Useful for when you need to answer questions about math.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\nAction: duckduckgo_search\nAction Input: \"Director of the 2023 film Oppenheimer and their age\"\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... 
Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\nAction: duckduckgo_search\nAction Input: \"Christopher Nolan age\"\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\nAction: Calculator\nAction Input: 52*365\nObservation: Answer: 18980\nThought:" ] } [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] [3.52s] Exiting LLM run with output: { "generations": [ [ { "text": "I now know the final answer\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.", "generation_info": { "finish_reason": "stop" }, "message": { "lc": 1, "type": "constructor", "id": [ "langchain", "schema", "messages", "AIMessage" ], "kwargs": { "content": "I now know the final answer\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.", "additional_kwargs": {} } } } ] ], "llm_output": { "token_usage": { "prompt_tokens": 926, "completion_tokens": 43, "total_tokens": 969 }, "model_name": "gpt-4" }, "run": null } [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] [3.52s] Exiting Chain run with output: { "text": "I now know the final answer\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days." 
} [chain/end] [1:RunTypeEnum.chain:AgentExecutor] [21.96s] Exiting Chain run with output: { "output": "The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days." } 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.' ``` </CodeOutputBlock> </details> ### `set_verbose(True)` Setting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic. ```python from langchain.globals import set_verbose set_verbose(True) agent.run("Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?") ``` <details> <summary>Console output</summary> <CodeOutputBlock lang="python"> ``` > Entering new AgentExecutor chain... > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought: > Finished chain. First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. 
You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought: > Finished chain. The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. 
Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ... Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. 
The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ... Thought: > Finished chain. Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days. Action: Calculator Action Input: (2023 - 1970) * 365 > Entering new LLMMathChain chain... (2023 - 1970) * 365 > Entering new LLMChain chain... Prompt after formatting: Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question. Question: ${Question with math problem.} ```text ${single line mathematical expression that solves the problem} ``` ...numexpr.evaluate(text)... ```output ${Output of running the code} ``` Answer: ${Answer} Begin. Question: What is 37593 * 67? ```text 37593 * 67 ``` ...numexpr.evaluate("37593 * 67")... ```output 2518731 ``` Answer: 2518731 Question: 37593^(1/5) ```text 37593**(1/5) ``` ...numexpr.evaluate("37593**(1/5)")... ```output 8.222831614237718 ``` Answer: 8.222831614237718 Question: (2023 - 1970) * 365 > Finished chain. ```text (2023 - 1970) * 365 ``` ...numexpr.evaluate("(2023 - 1970) * 365")... Answer: 19345 > Finished chain. Observation: Answer: 19345 Thought: > Entering new LLMChain chain... Prompt after formatting: Answer the following questions as best you can. You have access to the following tools: duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query. Calculator: Useful for when you need to answer questions about math. Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [duckduckgo_search, Calculator] Action Input: the input to the action Observation: the result of the action ... (this Thought/Action/Action Input/Observation can repeat N times) Thought: I now know the final answer Final Answer: the final answer to the original input question Begin! Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)? Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age. Action: duckduckgo_search Action Input: "Director of the 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... 
J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about "the man who ... Thought:Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days. Action: Calculator Action Input: (2023 - 1970) * 365 Observation: Answer: 19345 Thought: > Finished chain. I now know the final answer Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days. > Finished chain. 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.' ``` </CodeOutputBlock> </details> ### `Chain(..., verbose=True)` You can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callbacks calls made specifically by that object). ```python # Passing verbose=True to initialize_agent will pass that along to the AgentExecutor (which is a Chain). agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, ) agent.run("Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?") ``` <details> <summary>Console output</summary> <CodeOutputBlock lang="python"> ``` > Entering new AgentExecutor chain... First, I need to find out who directed the film Oppenheimer in 2023 and their birth date. Then, I can calculate their age in years and days. Action: duckduckgo_search Action Input: "Director of 2023 film Oppenheimer" Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, "Oppenheimer," Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named "Trinity". A Review of Christopher Nolan's new film 'Oppenheimer' , the story of the man who fathered the Atomic Bomb. Cillian Murphy leads an all star cast ... Release Date: July 21, 2023. Director ... For his new film, "Oppenheimer," starring Cillian Murphy and Emily Blunt, director Christopher Nolan set out to build an entire 1940s western town. Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age. Action: duckduckgo_search Action Input: "Christopher Nolan birth date" Observation: July 30, 1970 (age 52) London England Notable Works: "Dunkirk" "Tenet" "The Prestige" See all related content → Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. Date of Birth: 30 July 1970 . ... Christopher Nolan is a British-American film director, producer, and screenwriter. His films have grossed more than US$5 billion worldwide, and have garnered 11 Academy Awards from 36 nominations. ... Thought:Christopher Nolan was born on July 30, 1970. Now I can calculate his age in years and then in days. Action: Calculator Action Input: {"operation": "subtract", "operands": [2023, 1970]} Observation: Answer: 53 Thought:Christopher Nolan is 53 years old in 2023. Now I need to calculate his age in days. Action: Calculator Action Input: {"operation": "multiply", "operands": [53, 365]} Observation: Answer: 19345 Thought:I now know the final answer Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan. 
He is 53 years old in 2023, which is approximately 19345 days. > Finished chain. 'The director of the 2023 film Oppenheimer is Christopher Nolan. He is 53 years old in 2023, which is approximately 19345 days.' ``` </CodeOutputBlock> </details> ## Other callbacks `Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/filecallbackhandler). You can also implement your own callbacks to execute custom functionality. See here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and customize them.
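As a rough illustration of implementing your own, here is a minimal sketch of a custom handler (the class name `PrintStepsHandler` and the print formatting are illustrative, not part of LangChain); it overrides a few of the standard `BaseCallbackHandler` hooks and is passed at call time just like the built-in handlers:

```python
from langchain.callbacks.base import BaseCallbackHandler


class PrintStepsHandler(BaseCallbackHandler):
    """Print a short line whenever a chain, LLM, or tool starts."""

    def on_chain_start(self, serialized, inputs, **kwargs):
        print(f"Chain started with inputs: {inputs}")

    def on_llm_start(self, serialized, prompts, **kwargs):
        print(f"LLM called with {len(prompts)} prompt(s)")

    def on_tool_start(self, serialized, input_str, **kwargs):
        print(f"Tool called with input: {input_str}")


# Reuse the agent from the examples above and attach the handler per call.
agent.run(
    "Who directed the 2023 film Oppenheimer and what is their age?",
    callbacks=[PrintStepsHandler()],
)
```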
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\guides\debugging.md
.md
# Pydantic compatibility

- Pydantic v2 was released in June 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time

## LangChain Pydantic migration plan

As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.

* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes (a pinning example is shown at the end of this page), or start a partial migration using pydantic v2 throughout their code, while avoiding mixing v1 and v2 code for LangChain (see below).

Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but they must avoid mixing v1 and v2 code for LangChain.

Below are two examples showing how to avoid mixing pydantic v1 and v2 code: one for extending via inheritance and one for passing objects to LangChain.

**Example 1: Extending via inheritance**

**YES**

```python
from langchain_core.tools import BaseTool
from pydantic.v1 import Field, validator  # <-- Uses v1 namespace

class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(self, *args, **kwargs):
        return "hello"

    @validator('x')  # v1 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1

CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```

Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors.

**NO**

```python
from langchain_core.tools import BaseTool
from pydantic import Field, field_validator  # pydantic v2

class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(self, *args, **kwargs):
        return "hello"

    @field_validator('x')  # v2 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1

CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```

**Example 2: Passing objects to LangChain**

**YES**

```python
from langchain_core.tools import Tool
from pydantic.v1 import BaseModel, Field  # <-- Uses v1 namespace

class CalculatorInput(BaseModel):
    question: str = Field()

Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```

**NO**

```python
from langchain_core.tools import Tool
from pydantic import BaseModel, Field  # <-- Uses v2 namespace

class CalculatorInput(BaseModel):
    question: str = Field()

Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput
)
```
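If you prefer to simply pin to Pydantic v1 until LangChain completes its internal migration, a minimal pin might look like this (the exact version specifier is an assumption — use whichever v1 release your project already targets):

```bash
pip install "pydantic>=1.10,<2"
```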
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\guides\pydantic_compatibility.md
.md
# LLMonitor

>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.

<video controls width='100%' >
  <source src='https://llmonitor.com/videos/demo-annotated.mp4'/>
</video>

## Setup

Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.

Once you have it, set it as an environment variable by running:

```bash
export LLMONITOR_APP_ID="..."
```

If you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:

```python
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler(app_id="...")
```

## Usage with LLM/Chat models

```python
from langchain_openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

llm = OpenAI(
    callbacks=[handler],
)

chat = ChatOpenAI(callbacks=[handler])

llm("Tell me a joke")
```

## Usage with chains and agents

Make sure to pass the callback handler to the `run` method so that all related chains and LLM calls are correctly tracked.

It is also recommended to pass `agent_name` in the metadata to be able to distinguish between agents in the dashboard.

Example:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from langchain.agents import OpenAIFunctionsAgent, AgentExecutor, tool
from langchain.callbacks import LLMonitorCallbackHandler

llm = ChatOpenAI(temperature=0)

handler = LLMonitorCallbackHandler()

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]

prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=SystemMessage(
        content="You are a very powerful assistant, but bad at calculating lengths of words."
    )
)

agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    metadata={"agent_name": "WordCount"}  # <- recommended, assign a custom name
)
agent_executor.run("how many letters in the word educa?", callbacks=[handler])
```

Another example:

```python
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain_openai import OpenAI
from langchain.callbacks import LLMonitorCallbackHandler

handler = LLMonitorCallbackHandler()

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    metadata={"agent_name": "GirlfriendAgeFinder"}  # <- recommended, assign a custom name
)

agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?",
    callbacks=[handler],
)
```

## User Tracking

User tracking allows you to identify your users, track their cost, conversations and more.

```python
from langchain.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identify

# `llm` and `agent` are the objects created in the examples above.
with identify("user-123"):
    llm("Tell me a joke")

with identify("user-456", user_props={"email": "user456@test.com"}):
    agent.run("Who is Leo DiCaprio's girlfriend?")
```

## Support

For any question or issue with the integration, you can reach out to the LLMonitor team on [Discord](http://discord.com/invite/8PafSG58kK) or via [email](mailto:vince@llmonitor.com).
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\callbacks\llmonitor.md
.md
# Streamlit > **[Streamlit](https://streamlit.io/) is a faster way to build and share data apps.** > Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required. > See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai). [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/streamlit-agent?quickstart=1) In this guide we will demonstrate how to use `StreamlitCallbackHandler` to display the thoughts and actions of an agent in an interactive Streamlit app. Try it out with the running app below using the MRKL agent: <iframe loading="lazy" src="https://langchain-mrkl.streamlit.app/?embed=true&embed_options=light_theme" style={{ width: 100 + '%', border: 'none', marginBottom: 1 + 'rem', height: 600 }} allow="camera;clipboard-read;clipboard-write;" ></iframe> ## Installation and Setup ```bash pip install langchain streamlit ``` You can run `streamlit hello` to load a sample app and validate your install succeeded. See full instructions in Streamlit's [Getting started documentation](https://docs.streamlit.io/library/get-started). ## Display thoughts and actions To create a `StreamlitCallbackHandler`, you just need to provide a parent container to render the output. ```python from langchain_community.callbacks import StreamlitCallbackHandler import streamlit as st st_callback = StreamlitCallbackHandler(st.container()) ``` Additional keyword arguments to customize the display behavior are described in the [API reference](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html). ### Scenario 1: Using an Agent with Tools The primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the `StreamlitCallbackHandler` to `agent.run()` in order to visualize the thoughts and actions live in your app. ```python import streamlit as st from langchain import hub from langchain.agents import AgentExecutor, create_react_agent, load_tools from langchain_community.callbacks import StreamlitCallbackHandler from langchain_openai import OpenAI llm = OpenAI(temperature=0, streaming=True) tools = load_tools(["ddg-search"]) prompt = hub.pull("hwchase17/react") agent = create_react_agent(llm, tools, prompt) agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True) if prompt := st.chat_input(): st.chat_message("user").write(prompt) with st.chat_message("assistant"): st_callback = StreamlitCallbackHandler(st.container()) response = agent_executor.invoke( {"input": prompt}, {"callbacks": [st_callback]} ) st.write(response["output"]) ``` **Note:** You will need to set `OPENAI_API_KEY` for the above app code to run successfully. The easiest way to do this is via [Streamlit secrets.toml](https://docs.streamlit.io/library/advanced-features/secrets-management), or any other local ENV management tool. ### Additional scenarios Currently `StreamlitCallbackHandler` is geared towards use with a LangChain Agent Executor. Support for additional agent types, use directly with Chains, etc will be added in the future. You may also be interested in using [StreamlitChatMessageHistory](/docs/integrations/memory/streamlit_chat_message_history) for LangChain.
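As a pointer, here is a minimal sketch of using `StreamlitChatMessageHistory` to persist messages in Streamlit session state (the `key` value and the seeded greeting are placeholders):

```python
import streamlit as st
from langchain_community.chat_message_histories import StreamlitChatMessageHistory

# Messages are stored in st.session_state under the given key.
history = StreamlitChatMessageHistory(key="chat_messages")

if len(history.messages) == 0:
    history.add_ai_message("How can I help you?")

for msg in history.messages:
    st.chat_message(msg.type).write(msg.content)
```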
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\callbacks\streamlit.md
.txt
1/22/23, 6:30 PM - User 1: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks! 1/22/23, 8:24 PM - User 2: Goodmorning! $50 is too low. 1/23/23, 2:59 AM - User 1: How much do you want? 1/23/23, 3:00 AM - User 2: Online is at least $100 1/23/23, 3:01 AM - User 2: Here is $129 1/23/23, 3:01 AM - User 2: <Media omitted> 1/23/23, 3:01 AM - User 1: Im not interested in this bag. Im interested in the blue one! 1/23/23, 3:02 AM - User 1: I thought you were selling the blue one! 1/23/23, 3:18 AM - User 2: No Im sorry it was my mistake, the blue one is not for sale 1/23/23, 3:19 AM - User 1: Oh no worries! Bye 1/23/23, 3:19 AM - User 2: Bye! 1/23/23, 3:22_AM - User 1: And let me know if anything changes
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\document_loaders\example_data\whatsapp_chat.txt
.txt
application.json 1023495323659816971/ applications/ avatar.gif user.json events-2023-00000-of-00001.json events-2023-00000-of-00001.json events-2023-00000-of-00001.json events-2023-00000-of-00001.json analytics/ modeling/ reporting/ tns/ channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv channel.json messages.csv c1000084973275058257/ c1000108836771856496/ c1004874234339794977/ c1004874234339794979/ c1004874234339794981/ c1004874234339794982/ c1005785616165896283/ c1011447733393043628/ c1011548022905249822/ c1011650063027687575/ c1011714070182895727/ c1013930263950135346/ c1013930396829884426/ c1014957294745829479/ c1014961384821366794/ c1014974864370712696/ 
c1019288541592817785/ c1024947790767464478/ c1027257686858932255/ c1027927867989962814/ c1032151840999100436/ c1032575808826523662/ c1037561178286739466/ c1038097349660135474/ c1038097372695236729/ c1038689169351913544/ c1038692122452312125/ c1039957371381887049/ c1040989617157066782/ c1047165096452960316/ c1047565374645870743/ c1050225908914589716/ c1050226593668284416/ c1050227353311248404/ c1051632794427723827/ c1052599046717591632/ c1052615516981821531/ c1056285083520217149/ c105765859191975936/ c1061166503753416735/ c1062024667105341502/ c1066640566621835284/ c1070018538758221874/ c1072944049788555314/ c1075121707033042985/ c1075438954632990820/ c1077238309320929342/ c1081432695315386418/ c1082169962157838366/ c1084011585871282256/ c1084352082812878928/ c1085149531437535343/ c1086944178086359060/ c1093214985557123223/ c1093215227555876914/ c1093930791794393089/ c1096323263161978891/ c1096489741710532730/ c1097000752653795358/ c278566343836565505/ c279692806442844161/ c280973436971515906/ c283812709789859851/ c343944376055103488/ c486935104384532502/ c531543370041131008/ c538158613252800512/ c572384192571113512/ c619960843878268950/ c661268593870372876/ c661394153778970624/ c663302088226373632/ c669957895257063445/ c670218237891313664/ c673160333661306880/ c674693947800420363/ c674694138129678375/ c743425228952305695/ c754627904406814770/ c754638493875044503/ c757205803651301436/ c759232323710484531/ c771802926372093973/ c783240623582609416/ c783244379115880448/ c801744322788982814/ c810514969892225024/ c816983218434605057/ c830184175176122389/ c830679381033877564/ c831172308395622480/ c849582819105177650/ c860977555875430492/ c867042653401251880/ c868094992986550322/ c868917941184376842/ c905007686976946176/ c909600839717511211/ c909600931816018031/ c923095048931905557/ c924877027180417035/ c938491245347631114/ c938743368375214110/ c969876184185860107/ c969945714056642580/ c969948939728093214/ c981037338517966889/ c984120044478939146/ c985958948085592064/ c990816829993811978/ c993402018901266436/ c993782366948565102/ c993843360752226364/ c994556806644899870/ index.json audit-log.json guild.json audit-log.json guild.json audit-log.json bans.json channels.json emoji.json guild.json icon.jpeg webhooks.json audit-log.json guild.json audit-log.json bans.json channels.json emoji.json guild.json webhooks.json audit-log.json guild.json audit-log.json bans.json channels.json emoji.json guild.json icon.png webhooks.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json audit-log.json guild.json 1024120160740716544/ 102860784329052160/ 1032575808826523659/ 1038097195422978059/ 1039583521112600638/ 1050224141732687912/ 1069661049827111054/ 267624335836053506/ 278285146518716417/ 486935104384532500/ 531303890453397522/ 669880381649977354/ 727016164215226450/ 743099584242516037/ 753173158198116402/ 830184174198718474/ 860977555293470772/ 887994159741427712/ 909600839717511208/ 974519864045756446/ index.json account/ activities_e/ activities_w/ activity/ messages/ programs/ README.txt servers/
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\document_loaders\example_data\fake_discord_data\output.txt
.md
# Remembrall This page covers how to use the [Remembrall](https://remembrall.dev) ecosystem within LangChain. ## What is Remembrall? Remembrall gives your language model long-term memory, retrieval augmented generation, and complete observability with just a few lines of code. ![Screenshot of the Remembrall dashboard showing request statistics and model interactions.](/img/RemembrallDashboard.png "Remembrall Dashboard Interface") It works as a light-weight proxy on top of your OpenAI calls and simply augments the context of the chat calls at runtime with relevant facts that have been collected. ## Setup To get started, [sign in with Github on the Remembrall platform](https://remembrall.dev/login) and copy your [API key from the settings page](https://remembrall.dev/dashboard/settings). Any request that you send with the modified `openai_api_base` (see below) and Remembrall API key will automatically be tracked in the Remembrall dashboard. You **never** have to share your OpenAI key with our platform and this information is **never** stored by the Remembrall systems. To do this, we need to install the following dependencies: ```bash pip install -U langchain-openai ``` ### Enable Long Term Memory In addition to setting the `openai_api_base` and Remembrall API key via `x-gp-api-key`, you should specify a UID to maintain memory for. This will usually be a unique user identifier (like email). ```python from langchain_openai import ChatOpenAI chat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-remember": "user@email.com", } }) chat_model.predict("My favorite color is blue.") import time; time.sleep(5) # wait for system to save fact via auto save print(chat_model.predict("What is my favorite color?")) ``` ### Enable Retrieval Augmented Generation First, create a document context in the [Remembrall dashboard](https://remembrall.dev/dashboard/spells). Paste in the document texts or upload documents as PDFs to be processed. Save the Document Context ID and insert it as shown below. ```python from langchain_openai import ChatOpenAI chat_model = ChatOpenAI(openai_api_base="https://remembrall.dev/api/openai/v1", model_kwargs={ "headers":{ "x-gp-api-key": "remembrall-api-key-here", "x-gp-context": "document-context-id-goes-here", } }) print(chat_model.predict("This is a question that can be answered with my document.")) ```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\memory\remembrall.md
.md
# Airtable >[Airtable](https://en.wikipedia.org/wiki/Airtable) is a cloud collaboration service. `Airtable` is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet. > The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox', > 'phone number', and 'drop-down list', and can reference file attachments like images. >Users can create a database, set up column types, add records, link tables to one another, collaborate, sort records > and publish views to external websites. ## Installation and Setup ```bash pip install pyairtable ``` * Get your [API key](https://support.airtable.com/docs/creating-and-using-api-keys-and-access-tokens). * Get the [ID of your base](https://airtable.com/developers/web/api/introduction). * Get the [table ID from the table url](https://www.highviewapps.com/kb/where-can-i-find-the-airtable-base-id-and-table-id/#:~:text=Both%20the%20Airtable%20Base%20ID,URL%20that%20begins%20with%20tbl). ## Document Loader ```python from langchain_community.document_loaders import AirtableLoader ``` See an [example](/docs/integrations/document_loaders/airtable).
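As a rough sketch of how the loader can be wired up (the API token, base ID, and table ID below are placeholders, and `AIRTABLE_API_KEY` is just an illustrative environment variable name):

```python
import os

from langchain_community.document_loaders import AirtableLoader

# Placeholder credentials and IDs; substitute your own values.
api_key = os.environ["AIRTABLE_API_KEY"]
base_id = "appXXXXXXXXXXXXXX"
table_id = "tblXXXXXXXXXXXXXX"

# Positional order follows the loader's documented example: token, table ID, base ID.
loader = AirtableLoader(api_key, table_id, base_id)
docs = loader.load()  # each Airtable record becomes a Document
print(len(docs), docs[0].page_content[:100])
```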
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\airtable.md
.md
# AwaDB >[AwaDB](https://github.com/awa-ai/awadb) is an AI Native database for the search and storage of embedding vectors used by LLM Applications. ## Installation and Setup ```bash pip install awadb ``` ## Vector Store ```python from langchain_community.vectorstores import AwaDB ``` See a [usage example](/docs/integrations/vectorstores/awadb). ## Text Embedding Model ```python from langchain_community.embeddings import AwaEmbeddings ``` See a [usage example](/docs/integrations/text_embedding/awadb).
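A minimal sketch, following the pattern in the linked usage examples; AwaDB can embed texts with its built-in models, so no explicit embedding object is passed to the vector store here, and the sample sentences are purely illustrative:

```python
from langchain_community.embeddings import AwaEmbeddings
from langchain_community.vectorstores import AwaDB

# Index a few sentences in a local AwaDB table and run a similarity search.
db = AwaDB.from_texts(
    ["AwaDB stores embedding vectors.", "LangChain connects LLMs to data."]
)
print(db.similarity_search("What does AwaDB store?", k=1))

# The embedding model can also be used on its own.
embedding = AwaEmbeddings()
print(embedding.embed_query("hello world")[:3])
```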
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\awadb.md
.md
# Baseten

[Baseten](https://baseten.co) provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.

As a model inference platform, Baseten is a `Provider` in the LangChain ecosystem. The Baseten integration currently implements a single `Component`, LLMs, but more are planned!

Baseten lets you run both open source models like Llama 2 or Mistral and proprietary or fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:

* Rather than paying per token, you pay per minute of GPU used.
* Every model on Baseten uses [Truss](https://truss.baseten.co/welcome), our open-source model packaging framework, for maximum customizability.
* While we have some [OpenAI ChatCompletions-compatible models](https://docs.baseten.co/api-reference/openai), you can define your own I/O spec with Truss.

You can learn more about Baseten in [our docs](https://docs.baseten.co/) or read on for LangChain-specific info.

## Setup: LangChain + Baseten

You'll need two things to use Baseten models with LangChain:

- A [Baseten account](https://baseten.co)
- An [API key](https://docs.baseten.co/observability/api-keys)

Export your API key as an environment variable called `BASETEN_API_KEY`.

```sh
export BASETEN_API_KEY="paste_your_api_key_here"
```

## Component guide: LLMs

Baseten integrates with LangChain through the [LLM component](https://python.langchain.com/docs/integrations/llms/baseten), which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.

You can deploy foundation models like Mistral and Llama 2 with one click from the [Baseten model library](https://app.baseten.co/explore/) or, if you have your own model, [deploy it with Truss](https://truss.baseten.co/welcome).

In this example, we'll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model's ID, found in the model dashboard.

To use this module, you must:

* Export your Baseten API key as the environment variable `BASETEN_API_KEY`
* Get the model ID for your model from your Baseten dashboard
* Identify the model deployment ("production" for all model library models)

[Learn more](https://docs.baseten.co/deploy/lifecycle) about model IDs and deployments.

Production deployment (standard for model library models):

```python
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="production")
mistral("What is the Mistral wind?")
```

Development deployment:

```python
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="development")
mistral("What is the Mistral wind?")
```

Other published deployment:

```python
from langchain_community.llms import Baseten

mistral = Baseten(model="MODEL_ID", deployment="DEPLOYMENT_ID")
mistral("What is the Mistral wind?")
```

Streaming LLM output, chat completions, embeddings models, and more are all supported on the Baseten platform and coming soon to our LangChain integration. Contact us at [support@baseten.co](mailto:support@baseten.co) with any questions about using Baseten with LangChain.
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\baseten.md
.md
# BREEBS (Open Knowledge)

[BREEBS](https://www.breebs.com/) is an open collaborative knowledge platform.
Anybody can create a Breeb, a knowledge capsule based on PDFs stored in a Google Drive folder.
A Breeb can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources.
Behind the scenes, Breebs implements several Retrieval Augmented Generation (RAG) models to seamlessly provide useful context at each iteration.

## List of available Breebs

To get the full list of Breebs, including their key (`breeb_key`) and description, visit https://breebs.promptbreeders.com/web/listbreebs.
Dozens of Breebs have already been created by the community and are freely available for use. They cover a wide range of expertise, from organic chemistry to mythology, as well as tips on seduction and decentralized finance.

## Creating a new Breeb

To generate a new Breeb, simply compile PDF files in a publicly shared Google Drive folder and initiate the creation process on the [BREEBS website](https://www.breebs.com/) by clicking the "Create Breeb" button. You can currently include up to 120 files, with a total character limit of 15 million.

## Retriever

```python
from langchain.retrievers import BreebsRetriever
```

## Example

[See usage example (Retrieval & ConversationalRetrievalChain)](https://python.langchain.com/docs/integrations/retrievers/breebs)
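A minimal sketch of using the retriever on its own; the `breeb_key` value below is a hypothetical example, so substitute any key from the public list:

```python
from langchain.retrievers import BreebsRetriever

# "Paris_geo" is an illustrative breeb_key; pick one from the public list.
retriever = BreebsRetriever(breeb_key="Paris_geo")

# Fetch context passages relevant to a question.
docs = retriever.get_relevant_documents("What is the highest point in Paris?")
for doc in docs:
    print(doc.page_content[:100])
```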
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\breebs.md
.md
Databricks ========== The [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform. Databricks embraces the LangChain ecosystem in various ways: 1. Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChain 2. Databricks MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer steps 3. Databricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query it as langchain.llms.Databricks 4. Databricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face Hub Databricks connector for the SQLDatabase Chain ---------------------------------------------- You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. Databricks MLflow integrates with LangChain ------------------------------------------- MLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/integrations/providers/mlflow_tracking) for details about MLflow's integration with LangChain. Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details. Databricks MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving. Databricks External Models -------------------------- [Databricks External Models](https://docs.databricks.com/generative-ai/external-models/index.html) is a service that is designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. The following example creates an endpoint that serves OpenAI's GPT-4 model and generates a chat response from it: ```python from langchain_community.chat_models import ChatDatabricks from langchain_core.messages import HumanMessage from mlflow.deployments import get_deploy_client client = get_deploy_client("databricks") name = f"chat" client.create_endpoint( name=name, config={ "served_entities": [ { "name": "test", "external_model": { "name": "gpt-4", "provider": "openai", "task": "llm/v1/chat", "openai_config": { "openai_api_key": "{{secrets/<scope>/<key>}}", }, }, } ], }, ) chat = ChatDatabricks(endpoint=name, temperature=0.1) print(chat([HumanMessage(content="hello")])) # -> content='Hello! How can I assist you today?' 
``` Databricks Foundation Model APIs -------------------------------- [Databricks Foundation Model APIs](https://docs.databricks.com/machine-learning/foundation-models/index.html) allow you to access and query state-of-the-art open source models from dedicated serving endpoints. With Foundation Model APIs, developers can quickly and easily build applications that leverage a high-quality generative AI model without maintaining their own model deployment. The following example uses the `databricks-bge-large-en` endpoint to generate embeddings from text: ```python from langchain_community.embeddings import DatabricksEmbeddings embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en") print(embeddings.embed_query("hello")[:3]) # -> [0.051055908203125, 0.007221221923828125, 0.003879547119140625, ...] ``` Databricks as an LLM provider ----------------------------- The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks#wrapping-a-serving-endpoint-custom-model) demonstrates how to serve a custom model that has been registered by MLflow as a Databricks endpoint. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development. Databricks Vector Search ------------------------ Databricks Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors. See the notebook [Databricks Vector Search](/docs/integrations/vectorstores/databricks_vector_search) for instructions to use it with LangChain.
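As a minimal sketch of the serving-endpoint path described in the "Databricks as an LLM provider" section above (the endpoint name is a placeholder, and the snippet assumes it runs where Databricks credentials are available, such as inside a Databricks notebook):

```python
from langchain_community.llms import Databricks

# Wrap an existing Databricks model serving endpoint as a LangChain LLM.
# "my-llm-endpoint" is a placeholder for your own endpoint name.
llm = Databricks(endpoint_name="my-llm-endpoint")
print(llm("How are you?"))
```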
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\databricks.md
.md
# Fireworks

This page covers how to use [Fireworks](https://fireworks.ai/) models within LangChain.

## Installation and setup

- Install the Fireworks integration package.

```bash
pip install langchain-fireworks
```

- Get a Fireworks API key by signing up at [fireworks.ai](https://fireworks.ai).
- Authenticate by setting the `FIREWORKS_API_KEY` environment variable.

## Authentication

There are two ways to authenticate using your Fireworks API key:

1. Setting the `FIREWORKS_API_KEY` environment variable.

```python
import os

os.environ["FIREWORKS_API_KEY"] = "<KEY>"
```

2. Setting the `fireworks_api_key` field in the Fireworks LLM module.

```python
llm = Fireworks(fireworks_api_key="<KEY>")
```

## Using the Fireworks LLM module

Fireworks integrates with LangChain through the LLM module. In this example, we will work with the mixtral-8x7b-instruct model.

```python
from langchain_fireworks import Fireworks

llm = Fireworks(
    fireworks_api_key="<KEY>",
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    max_tokens=256)
llm("Name 3 sports.")
```

For a more detailed walkthrough, see [here](/docs/integrations/llms/Fireworks).
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\fireworks.md
.md
# Marqo

This page covers how to use the Marqo ecosystem within LangChain.

### What is Marqo?

Marqo is a tensor search engine that uses embeddings stored in in-memory HNSW indexes to achieve cutting-edge search speeds. Marqo can scale to hundred-million document indexes with horizontal index sharding and allows for async and non-blocking data upload and search. Marqo uses the latest machine learning models from PyTorch, Hugging Face, OpenAI and more. You can start with a pre-configured model or bring your own. The built-in ONNX support and conversion allows for faster inference and higher throughput on both CPU and GPU.

Because Marqo includes its own inference, your documents can have a mix of text and images, and you can bring Marqo indexes with data from your other systems into the LangChain ecosystem without having to worry about your embeddings being compatible.

Deployment of Marqo is flexible: you can get started yourself with our Docker image, or [contact us about our managed cloud offering](https://www.marqo.ai/pricing)!

To run Marqo locally with our Docker image, [see our getting started guide](https://docs.marqo.ai/latest/).

## Installation and Setup

- Install the Python SDK with `pip install marqo`

## Wrappers

### VectorStore

There exists a wrapper around Marqo indexes, allowing you to use them within the vectorstore framework. Marqo lets you select from a range of models for generating embeddings and exposes some preprocessing configurations.

The Marqo vectorstore can also work with existing multimodal indexes where your documents have a mix of images and text; for more information refer to [our documentation](https://docs.marqo.ai/latest/#multi-modal-and-cross-modal-search). Note that instantiating the Marqo vectorstore with an existing multimodal index will disable the ability to add any new documents to it via the LangChain vectorstore `add_texts` method.

To import this vectorstore:

```python
from langchain_community.vectorstores import Marqo
```

For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo)
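A minimal sketch of the wrapper against a locally running Marqo instance; the URL, index name, and sample texts are illustrative assumptions, not fixed values:

```python
import marqo
from langchain_community.vectorstores import Marqo

# Assumes a local Marqo instance started with the Docker image.
client = marqo.Client(url="http://localhost:8882")

# Create the index first (this errors if the index already exists).
client.create_index("langchain-demo")

vectorstore = Marqo(client=client, index_name="langchain-demo")

# Add a few documents; Marqo handles the embedding internally.
vectorstore.add_texts(["Marqo is a tensor search engine.", "LangChain builds LLM apps."])

# Retrieve the most similar document for a query.
print(vectorstore.similarity_search("What is Marqo?", k=1))
```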
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\marqo.md
.md
# Predibase

Learn how to use LangChain with models on Predibase.

## Setup

- Create a [Predibase](https://predibase.com/) account and [API key](https://docs.predibase.com/sdk-guide/intro).
- Install the Predibase Python client with `pip install predibase`
- Use your API key to authenticate

### LLM

Predibase integrates with LangChain by implementing the LLM module. You can see a short example below or a full notebook under LLM > Integrations > Predibase.

```python
import os
os.environ["PREDIBASE_API_TOKEN"] = "{PREDIBASE_API_TOKEN}"

from langchain_community.llms import Predibase

model = Predibase(model="vicuna-13b", predibase_api_key=os.environ.get("PREDIBASE_API_TOKEN"))

response = model("Can you recommend me a nice dry wine?")
print(response)
```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\predibase.md
.md
# PubMed

>[PubMed®](https://pubmed.ncbi.nlm.nih.gov/) by `The National Center for Biotechnology Information, National Library of Medicine`
> comprises more than 35 million citations for biomedical literature from `MEDLINE`, life science journals, and online books.
> Citations may include links to full text content from `PubMed Central` and publisher web sites.

## Setup

You need to install a Python package.

```bash
pip install xmltodict
```

### Retriever

See a [usage example](/docs/integrations/retrievers/pubmed).

```python
from langchain.retrievers import PubMedRetriever
```

### Document Loader

See a [usage example](/docs/integrations/document_loaders/pubmed).

```python
from langchain_community.document_loaders import PubMedLoader
```
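A short sketch of both components together; the query string and `load_max_docs` value are illustrative:

```python
from langchain.retrievers import PubMedRetriever
from langchain_community.document_loaders import PubMedLoader

# Retrieve abstracts matching a query.
retriever = PubMedRetriever()
docs = retriever.get_relevant_documents("covid vaccine efficacy")

# Or load a capped number of results as Documents.
loader = PubMedLoader("covid vaccine efficacy", load_max_docs=3)
documents = loader.load()
print(len(documents), documents[0].metadata)
```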
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\pubmed.md
.md
# Shale Protocol

[Shale Protocol](https://shaleprotocol.com) provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure.

Our free tier supports up to 1K daily requests per key as we want to eliminate the barrier for anyone to start building genAI apps with LLMs.

With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.

This page covers how the Shale-Serve API can be incorporated with LangChain.

As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases.

## How to

### 1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and there are no free trials. It's a forever-free tier with a limit of 1K requests per day per API key.

### 2. Use https://shale.live/v1 as the OpenAI API drop-in replacement

For example:

```python
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

import os
os.environ['OPENAI_API_BASE'] = "https://shale.live/v1"
os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"

llm = OpenAI()

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)
```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\shaleprotocol.md
.md
# Vearch

[Vearch](https://github.com/vearch/vearch) is a scalable distributed system for efficient similarity search of deep learning vectors.

## Installation and Setup

The Vearch Python SDK enables Vearch to be used locally. It can be installed with `pip install vearch`.

## Vectorstore

Vearch can also be used as a vectorstore. Most details are in [this notebook](/docs/integrations/vectorstores/vearch).

```python
from langchain_community.vectorstores import Vearch
```
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\vearch.md
.md
# Portkey

>[Portkey](https://docs.portkey.ai/overview/introduction) is a platform designed to streamline the deployment
> and management of Generative AI applications.
> It provides comprehensive features for monitoring, managing models,
> and improving the performance of your AI applications.

## LLMOps for LangChain

Portkey brings production readiness to LangChain. With Portkey, you can

- [x] view detailed **metrics & logs** for all requests,
- [x] enable **semantic cache** to reduce latency & costs,
- [x] implement automatic **retries & fallbacks** for failed requests,
- [x] add **custom tags** to requests for better tracking and analysis, and [more](https://docs.portkey.ai).

### Using Portkey with LangChain

Using Portkey is as simple as choosing which Portkey features you want, enabling them via `headers=Portkey.Config` and passing it in your LLM calls.

To start, get your Portkey API key by [signing up here](https://app.portkey.ai/login). (Click the profile icon on the top left, then click on "Copy API Key")

For OpenAI, a simple integration with the logging feature would look like this:

```python
from langchain_openai import OpenAI
from langchain_community.utilities import Portkey

# Add the Portkey API Key from your account
headers = Portkey.Config(
    api_key="<PORTKEY_API_KEY>"
)

llm = OpenAI(temperature=0.9, headers=headers)
llm.predict("What would be a good company name for a company that makes colorful socks?")
```

Your logs will be captured on your [Portkey dashboard](https://app.portkey.ai).

A common Portkey x LangChain use case is to **trace a chain or an agent** and view all the LLM calls originating from that request.

### Tracing Chains & Agents

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import OpenAI
from langchain_community.utilities import Portkey

# Add the Portkey API Key from your account
headers = Portkey.Config(
    api_key="<PORTKEY_API_KEY>",
    trace_id="fef659"
)

llm = OpenAI(temperature=0, headers=headers)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```

**You can see the requests' logs along with the trace id on the Portkey dashboard:**

<img src="/img/portkey-dashboard.gif" height="250"/>
<img src="/img/portkey-tracing.png" height="250"/>

## Advanced Features

1. **Logging:** Log all your LLM requests automatically by sending them through Portkey. Each request log contains `timestamp`, `model name`, `total cost`, `request time`, `request json`, `response json`, and additional Portkey features.
2. **Tracing:** A trace id can be passed along with each request and is visible in the logs on the Portkey dashboard. You can also set a **distinct trace id** for each request. You can [append user feedback](https://docs.portkey.ai/key-features/feedback-api) to a trace id as well.
3. **Caching:** Respond to previously served customers' queries from cache instead of sending them again to OpenAI. Match exact strings OR semantically similar strings. Cache can save costs and reduce latencies by 20x.
4. **Retries:** Automatically reprocess any unsuccessful API requests **up to 5** times. Uses an **exponential backoff** strategy, which spaces out retry attempts to prevent network overload.
5. **Tagging:** Track and audit each user interaction in high detail with predefined tags.
| Feature | Config Key | Value (Type) | Required/Optional |
| -- | -- | -- | -- |
| API Key | `api_key` | API Key (`string`) | ✅ Required |
| [Tracing Requests](https://docs.portkey.ai/key-features/request-tracing) | `trace_id` | Custom `string` | ❔ Optional |
| [Automatic Retries](https://docs.portkey.ai/key-features/automatic-retries) | `retry_count` | `integer` [1,2,3,4,5] | ❔ Optional |
| [Enabling Cache](https://docs.portkey.ai/key-features/request-caching) | `cache` | `simple` OR `semantic` | ❔ Optional |
| Cache Force Refresh | `cache_force_refresh` | `True` | ❔ Optional |
| Set Cache Expiry | `cache_age` | `integer` (in seconds) | ❔ Optional |
| [Add User](https://docs.portkey.ai/key-features/custom-metadata) | `user` | `string` | ❔ Optional |
| [Add Organisation](https://docs.portkey.ai/key-features/custom-metadata) | `organisation` | `string` | ❔ Optional |
| [Add Environment](https://docs.portkey.ai/key-features/custom-metadata) | `environment` | `string` | ❔ Optional |
| [Add Prompt (version/id/string)](https://docs.portkey.ai/key-features/custom-metadata) | `prompt` | `string` | ❔ Optional |

## Enabling all Portkey Features

```py
headers = Portkey.Config(

    # Mandatory
    api_key="<PORTKEY_API_KEY>",

    # Cache Options
    cache="semantic",
    cache_force_refresh="True",
    cache_age=1729,

    # Advanced
    retry_count=5,
    trace_id="langchain_agent",

    # Metadata
    environment="production",
    user="john",
    organisation="acme",
    prompt="Frost"
)
```

For detailed information on each feature and how to use it, [please refer to the Portkey docs](https://docs.portkey.ai). If you have any questions or need further assistance, [reach out to us on Twitter](https://twitter.com/portkeyai).
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\integrations\providers\portkey\index.md
.md
--- sidebar_class_name: hidden --- # LangSmith [LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you move from prototype to production. Check out the [interactive walkthrough](/docs/langsmith/walkthrough) to get started. For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/). For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow, check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include: - Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)). - Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)). - How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)). - How to fine-tune an LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)). - How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb))
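If you just want to start sending traces from an existing LangChain application to LangSmith, the usual entry point is a couple of environment variables. A minimal sketch, assuming you already have a LangSmith API key; the project name is an arbitrary example:

```python
import os

# Enable tracing and point it at your LangSmith account.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "my-first-project"  # optional; runs land in "default" otherwise

# Any chain or agent invoked after this point is traced automatically.
```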
C:\Users\wesla\CodePilotAI\repositories\langchain\docs\docs\langsmith\index.md
