diff --git "a/langchain_github_repository.json" "b/langchain_github_repository.json" new file mode 100644--- /dev/null +++ "b/langchain_github_repository.json" @@ -0,0 +1,1007 @@ +[ + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\MIGRATE.md", + "filetype": ".md", + "content": "# Migrating\n\n## \ud83d\udea8Breaking Changes for select chains (SQLDatabase) on 7/28/23\n\nIn an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.\nThis migration has already started, but we are remaining backwards compatible until 7/28.\nOn that date, we will remove functionality from `langchain`.\nRead more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).\n\n### Migrating to `langchain_experimental`\n\nWe are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`.\nThis guide covers how to migrate.\n\n### Installation\n\nPreviously:\n\n`pip install -U langchain`\n\nNow (only if you want to access things in experimental):\n\n`pip install -U langchain langchain_experimental`\n\n### Things in `langchain.experimental`\n\nPreviously:\n\n`from langchain.experimental import ...`\n\nNow:\n\n`from langchain_experimental import ...`\n\n### PALChain\n\nPreviously:\n\n`from langchain.chains import PALChain`\n\nNow:\n\n`from langchain_experimental.pal_chain import PALChain`\n\n### SQLDatabaseChain\n\nPreviously:\n\n`from langchain.chains import SQLDatabaseChain`\n\nNow:\n\n`from langchain_experimental.sql import SQLDatabaseChain`\n\nAlternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb)\n\n`from langchain.chains import create_sql_query_chain`\n\n### `load_prompt` for Python files\n\nNote: this only applies if you want to load Python files as prompts.\nIf you want to load json/yaml files, no change is needed.\n\nPreviously:\n\n`from langchain.prompts import load_prompt`\n\nNow:\n\n`from langchain_experimental.prompts import load_prompt`\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\README.md", + "filetype": ".md", + "content": "# \ud83e\udd9c\ufe0f\ud83d\udd17 LangChain\n\n\u26a1 Build context-aware reasoning applications \u26a1\n\n[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)\n[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)\n[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)\n[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)\n[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)\n[![Open in GitHub 
Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)\n[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)\n[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)\n[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)\n\nLooking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).\n\nTo help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com). \n[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications. \nFill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.\n\n## Quick Install\n\nWith pip:\n```bash\npip install langchain\n```\n\nWith conda:\n```bash\nconda install langchain -c conda-forge\n```\n\n## \ud83e\udd14 What is LangChain?\n\n**LangChain** is a framework for developing applications powered by language models. It enables applications that:\n- **Are context-aware**: connect a language model to sources of context (prompt instructions, few-shot examples, content to ground its response in, etc.)\n- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)\n\nThis framework consists of several parts.\n- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.\n- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.\n- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.\n- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.\n- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. 
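\n\nTo see how these pieces fit together, here is a minimal sketch using the LangChain Expression Language (an illustration only: it assumes the separate `langchain-openai` package is installed and an OpenAI API key is configured in the environment):\n\n```python\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_openai import ChatOpenAI\n\n# Compose a prompt template and a chat model into a chain with the | operator (LCEL).\nprompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\nmodel = ChatOpenAI(model=\"gpt-3.5-turbo\")\nchain = prompt | model\n\n# Invoke the chain; the result is a chat message whose text is in .content.\nprint(chain.invoke({\"topic\": \"bears\"}).content)\n```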
\n\nThe LangChain libraries themselves are made up of several different packages.\n- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.\n- **[`langchain-community`](libs/community)**: Third party integrations.\n- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.\n\n![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png \"LangChain Architecture Overview\")\n\n## \ud83e\uddf1 What can you build with LangChain?\n**\u2753 Retrieval augmented generation**\n\n- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)\n- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)\n\n**\ud83d\udcac Analyzing structured data**\n\n- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)\n- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)\n\n**\ud83e\udd16 Chatbots**\n\n- [Documentation](https://python.langchain.com/docs/use_cases/chatbots)\n- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)\n\nAnd much more! Head to the [Use cases](https://python.langchain.com/docs/use_cases/) section of the docs for more.\n\n## \ud83d\ude80 How does LangChain help?\nThe main value props of the LangChain libraries are:\n1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not\n2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks\n\nOff-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones. \n\nComponents fall into the following **modules**:\n\n**\ud83d\udcc3 Model I/O:**\n\nThis includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.\n\n**\ud83d\udcda Retrieval:**\n\nData Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.\n\n**\ud83e\udd16 Agents:**\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. 
LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.\n\n## \ud83d\udcd6 Documentation\n\nPlease see [here](https://python.langchain.com) for full documentation, which includes:\n\n- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples\n- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)\n- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)\n- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews\n- [Reference](https://api.python.langchain.com): full API docs\n\n\n## \ud83d\udc81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).\n\n## \ud83c\udf1f Contributors\n\n[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\SECURITY.md", + "filetype": ".md", + "content": "# Security Policy\n\n## Reporting a Vulnerability\n\nPlease report security vulnerabilities by email to `security@langchain.dev`.\nThis email is an alias to a subset of our maintainers, and will ensure the issue is promptly triaged and acted upon as needed.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\.devcontainer\\README.md", + "filetype": ".md", + "content": "# Dev container\n\nThis project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.\n\nYou can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).\n\n## GitHub Codespaces\n[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)\n\nYou may use the button above, or follow these steps to open this repo in a Codespace:\n1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.\n1. Click on the **Codespaces** tab.\n1. 
Click **Create codespace on master**.\n\nFor more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).\n\n## VS Code Dev Containers\n[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)\n\nNote: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below, replacing `<your-username>` and `<your-cloned-repo>` with your username and cloned repo name:\n```\nhttps://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<your-username>/<your-cloned-repo>\n```\nThen you will have a local cloned repo where you can contribute and then create pull requests.\n\nIf you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.\n\nAlternatively, you can follow these steps to open this repo in a container using the VS Code Dev Containers extension:\n\n1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. Docker is installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).\n\n2. Open a locally cloned copy of the code:\n\n - Fork and clone this repository to your local filesystem.\n - Press F1 and select the **Dev Containers: Open Folder in Container...** command.\n - Select the cloned copy of this folder, wait for the container to start, and try things out!\n\nYou can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).\n\n## Tips and tricks\n\n* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. 
See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.\n* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\.github\\CODE_OF_CONDUCT.md", + "filetype": ".md", + "content": "# Contributor Covenant Code of Conduct\n\n## Our Pledge\n\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, caste, color, religion, or sexual\nidentity and orientation.\n\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\n\n## Our Standards\n\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\n and learning from the experience\n* Focusing on what is best not just for us as individuals, but for the overall\n community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or advances of\n any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email address,\n without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n professional setting\n\n## Enforcement Responsibilities\n\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\n\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, and other contributions that are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\n\n## Scope\n\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\n\n## Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at\nconduct@langchain.dev.\nAll complaints will be reviewed and investigated promptly and fairly.\n\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\n\n## Enforcement Guidelines\n\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they 
deem in violation of this Code of Conduct:\n\n### 1. Correction\n\n**Community Impact**: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\n\n**Consequence**: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n\n### 2. Warning\n\n**Community Impact**: A violation through a single incident or series of\nactions.\n\n**Consequence**: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or permanent\nban.\n\n### 3. Temporary Ban\n\n**Community Impact**: A serious violation of community standards, including\nsustained inappropriate behavior.\n\n**Consequence**: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n\n### 4. Permanent Ban\n\n**Community Impact**: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior, harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\n\n**Consequence**: A permanent ban from any sort of public interaction within the\ncommunity.\n\n## Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage],\nversion 2.1, available at\n[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].\n\nCommunity Impact Guidelines were inspired by\n[Mozilla's code of conduct enforcement ladder][Mozilla CoC].\n\nFor answers to common questions about this code of conduct, see the FAQ at\n[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at\n[https://www.contributor-covenant.org/translations][translations].\n\n[homepage]: https://www.contributor-covenant.org\n[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html\n[Mozilla CoC]: https://github.com/mozilla/diversity\n[FAQ]: https://www.contributor-covenant.org/faq\n[translations]: https://www.contributor-covenant.org/translations" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\.github\\CONTRIBUTING.md", + "filetype": ".md", + "content": "# Contributing to LangChain\n\nHi there! Thank you for even being interested in contributing to LangChain.\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.\n\nTo learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/)." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\.github\\PULL_REQUEST_TEMPLATE.md", + "filetype": ".md", + "content": "Thank you for contributing to LangChain!\n\n- [ ] **PR title**: \"package: description\"\n - Where \"package\" is whichever of langchain, community, core, experimental, etc. is being modified. 
Use \"docs: ...\" for purely docs changes, \"templates: ...\" for template changes, \"infra: ...\" for CI changes.\n - Example: \"community: add foobar LLM\"\n\n\n- [ ] **PR message**: ***Delete this entire checklist*** and replace with\n - **Description:** a description of the change\n - **Issue:** the issue # it fixes, if applicable\n - **Dependencies:** any dependencies required for this change\n - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!\n\n\n- [ ] **Add tests and docs**: If you're adding a new integration, please include\n 1. a test for the integration, preferably unit tests that do not rely on network access,\n 2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.\n\n\n- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/\n\nAdditional guidelines:\n- Make sure optional dependencies are imported within a function.\n- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.\n- Most PRs should not touch more than one package.\n- Changes should be backwards compatible.\n- If you are adding something to community, do not re-import it in langchain.\n\nIf no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\cookbook\\README.md", + "filetype": ".md", + "content": "# LangChain cookbook\n\nExample code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than contained in the [main documentation](https://python.langchain.com).\n\nNotebook | Description\n:- | :-\n[LLaMA2_sql_chat.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/LLaMA2_sql_chat.ipynb) | Build a chat application that interacts with a SQL database using an open source llm (llama2), specifically demonstrated on an SQLite database containing rosters.\n[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data, including text and tables, using unstructured for parsing, multi-vector retriever for storing, and lcel for implementing chains.\n[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using unstructured for parsing, multi-vector retriever for storage and retrieval, and lcel for implementing chains.\n[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (rag) on documents with semi-structured data and images, using various tools and methods such as unstructured for parsing, multi-vector retriever for storing, lcel for implementing chains, and open source language models like llama2, llava, and gpt4all.\n[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.\n[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement 
autogpt, an autonomous ai agent, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.\n[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.\n[baby_agi.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi.ipynb) | Implement babyagi, an ai agent that can generate and execute tasks based on a given objective, with the flexibility to swap out specific vectorstores/model providers.\n[baby_agi_with_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/baby_agi_with_agent.ipynb) | Swap out the execution chain in the babyagi notebook with an agent that has access to tools, aiming to obtain more reliable information.\n[camel_role_playing.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/camel_role_playing.ipynb) | Implement the camel framework for creating autonomous cooperative agents in large-scale language models, using role-playing and inception prompting to guide chat agents towards task completion.\n[causal_program_aided_language_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/causal_program_aided_language_model.ipynb) | Implement the causal program-aided language (cpal) chain, which improves upon the program-aided language (pal) by incorporating causal structure to prevent hallucination in language models, particularly when dealing with complex narratives and math problems with nested dependencies.\n[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze the langchain code base with the help of gpt and activeloop's deep lake.\n[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with ai plugins by retrieving tools and creating natural language wrappers around openapi endpoints.\n[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing ai plugins from the `plugnplai` directory.\n[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to databricks runtimes and databricks sql.\n[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using activeloop's deep lake with gpt4.\n[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with elasticsearch analytics databases in natural language and build search queries via the elasticsearch dsl API.\n[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured Data Extraction with OpenAI Tools.\n[forward_looking_retrieval_augm...](https://github.com/langchain-ai/langchain/tree/master/cookbook/forward_looking_retrieval_augmented_generation.ipynb) | Implement the forward-looking active retrieval augmented generation (flare) method, which generates answers to questions, identifies uncertain tokens, generates hypothetical questions based on these tokens, and retrieves relevant 
documents to continue generating the answer.\n[generative_agents_interactive_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb) | Implement a generative agent that simulates human behavior, based on a research paper, using a time-weighted memory object backed by a langchain retriever.\n[gymnasium_agent_simulation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/gymnasium_agent_simulation.ipynb) | Create a simple agent-environment interaction loop in simulated environments like text-based games with gymnasium.\n[hugginggpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/hugginggpt.ipynb) | Implement hugginggpt, a system that connects language models like chatgpt with the machine learning community via hugging face.\n[hypothetical_document_embeddin...](https://github.com/langchain-ai/langchain/tree/master/cookbook/hypothetical_document_embeddings.ipynb) | Improve document indexing with hypothetical document embeddings (hyde), an embedding technique that generates and embeds hypothetical answers to queries.\n[learned_prompt_optimization.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/learned_prompt_optimization.ipynb) | Automatically enhance language model prompts by injecting specific terms using reinforcement learning, which can be used to personalize responses based on user preferences.\n[llm_bash.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_bash.ipynb) | Perform simple filesystem commands using large language models (llms) and a bash process.\n[llm_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_checker.ipynb) | Create a self-checking chain using the llmcheckerchain function.\n[llm_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_math.ipynb) | Solve complex word math problems using language models and python repls.\n[llm_summarization_checker.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_summarization_checker.ipynb) | Check the accuracy of text summaries, with the option to run the checker multiple times for improved results.\n[llm_symbolic_math.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/llm_symbolic_math.ipynb) | Solve algebraic equations with the help of llms (large language models) and sympy, a python library for symbolic mathematics.\n[meta_prompt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/meta_prompt.ipynb) | Implement the meta-prompt concept, which is a method for building self-improving agents that reflect on their own performance and modify their instructions accordingly.\n[multi_modal_output_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_modal_output_agent.ipynb) | Generate multi-modal outputs, specifically images and text.\n[multi_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multi_player_dnd.ipynb) | Simulate multi-player dungeons & dragons games, with a custom function determining the speaking schedule of the agents.\n[multiagent_authoritarian.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_authoritarian.ipynb) | Implement a multi-agent simulation where a privileged agent controls the conversation, including deciding who speaks and when the conversation ends, in the context of a simulated news 
network.\n[multiagent_bidding.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/multiagent_bidding.ipynb) | Implement a multi-agent simulation where agents bid to speak, with the highest bidder speaking next, demonstrated through a fictitious presidential debate example.\n[myscale_vector_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/myscale_vector_sql.ipynb) | Access and interact with the myscale integrated vector database, which can enhance the performance of language model (llm) applications.\n[openai_functions_retrieval_qa....](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_functions_retrieval_qa.ipynb) | Structure response output in a question-answering system by incorporating openai functions into a retrieval pipeline.\n[openai_v1_cookbook.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/openai_v1_cookbook.ipynb) | Explore new functionality released alongside the V1 release of the OpenAI Python library.\n[petting_zoo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/petting_zoo.ipynb) | Create multi-agent simulations with simulated environments using the petting zoo library.\n[plan_and_execute_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/plan_and_execute_agent.ipynb) | Create plan-and-execute agents that accomplish objectives by planning tasks with a language model (llm) and executing them with a separate agent.\n[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).\n[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.\n[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.\n[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.\n[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.\n[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.\n[smart_llm.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/smart_llm.ipynb) | Implement a smartllmchain, a self-critique chain that generates multiple output proposals, critiques them to find the best one, and then improves upon it to produce a final output.\n[tree_of_thought.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/tree_of_thought.ipynb) | Query a large language model using the tree of thought technique.\n[twitter-the-algorithm-analysis...](https://github.com/langchain-ai/langchain/tree/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb) | Analyze the source code of the Twitter algorithm with the help of gpt4 and activeloop's deep 
lake.\n[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.\n[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.\n[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\README.md", + "filetype": ".md", + "content": "# LangChain Documentation\n\nFor more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/documentation)\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\vercel_requirements.txt", + "filetype": ".txt", + "content": "-e ../libs/langchain\n-e ../libs/community\n-e ../libs/core\nurllib3==1.26.18\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\api_reference\\requirements.txt", + "filetype": ".txt", + "content": "-e libs/experimental\n-e libs/langchain\n-e libs/core\n-e libs/community\npydantic<2\nautodoc_pydantic==1.8.0\nmyst_parser\nnbsphinx==0.8.9\nsphinx>=5\nsphinx-autobuild==2021.3.14\nsphinx_rtd_theme==1.0.0\nsphinx-typlog-theme==0.8.0\nsphinx-panels\ntoml\nmyst_nb\nsphinx_copybutton\npydata-sphinx-theme==0.13.1" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\api_reference\\templates\\COPYRIGHT.txt", + "filetype": ".txt", + "content": "Copyright (c) 2007-2023 The scikit-learn developers.\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\api_reference\\themes\\COPYRIGHT.txt", + "filetype": ".txt", + "content": "Copyright (c) 2007-2023 The scikit-learn developers.\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n this list of conditions and the following disclaimer in the documentation\n and/or other materials provided with the distribution.\n\n* Neither the name of the copyright holder nor the names of its\n contributors may be used to endorse or promote products derived from\n this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\nAND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\nSERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\nCAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\nOR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\nOF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\security.md", + "filetype": ".md", + "content": "# Security\n\nLangChain has a large ecosystem of integrations with various external resources like local and remote file systems, APIs and databases. These integrations allow developers to create versatile applications that combine the power of LLMs with the ability to access, interact with and manipulate external resources.\n\n## Best Practices\n\nWhen building such applications developers should remember to follow good security practices:\n\n* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), etc. as appropriate for your application.\n* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. 
For example, if a pair of database credentials allows deleting data, it\u2019s safest to assume that any LLM able to use those credentials may in fact delete data.\n* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It\u2019s best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.\n\nRisks of not doing so include, but are not limited to:\n* Data corruption or loss.\n* Unauthorized access to confidential information.\n* Compromised performance or availability of critical resources.\n\nExample scenarios with mitigation strategies:\n\n* A user may ask an agent with access to the file system to delete files that should not be deleted or read the content of files that contain sensitive information. To mitigate, limit the agent to only use a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.\n* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.\n* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.\n\nIf you're building applications that access external resources like file systems, APIs\nor databases, consider speaking with your company's security team to determine how to best\ndesign and secure your applications.\n\n## Reporting a Vulnerability\n\nPlease report security vulnerabilities by email to security@langchain.dev. This will ensure the issue is promptly triaged and acted upon as needed.\n\n## Enterprise solutions\n\nLangChain may offer enterprise solutions for customers who have additional security\nrequirements. Please contact us at sales@langchain.dev." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\guides\\debugging.md", + "filetype": ".md", + "content": "# Debugging\n\nIf you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.\n\nHere are a few different tools and functionalities to aid in debugging.\n\n\n\n## Tracing\n\nPlatforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [WandB](/docs/integrations/providers/wandb_tracing) are the most comprehensive solutions for debugging. 
These platforms make it easy to not only log and visualize LLM apps, but also to actively debug, test and refine them.\n\nFor anyone building production-grade LLM applications, we highly recommend using a platform like this.\n\n![Screenshot of the LangSmith debugging interface showing an AgentExecutor run with input and output details, and a run tree visualization.](../../static/img/run_details.png \"LangSmith Debugging Interface\")\n\n## `set_debug` and `set_verbose`\n\nIf you're prototyping in Jupyter Notebooks or running Python scripts, it can be helpful to print out the intermediate steps of a Chain run. \n\nThere are a number of ways to enable printing at varying degrees of verbosity.\n\nLet's suppose we have a simple agent, and want to visualize the actions it takes and tool outputs it receives. Without any debugging, here's what we see:\n\n\n```python\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain_openai import ChatOpenAI\n\nllm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\ntools = load_tools([\"ddg-search\", \"llm-math\"], llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)\n```\n\n\n```python\nagent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")\n```\n\n\n\n```\n 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is approximately 19345 days old in 2023.'\n```\n\n\n\n### `set_debug(True)`\n\nSetting the global `debug` flag will cause all LangChain components with callback support (chains, models, agents, tools, retrievers) to print the inputs they receive and outputs they generate. This is the most verbose setting and will fully log raw inputs and outputs.\n\n\n```python\nfrom langchain.globals import set_debug\n\nset_debug(True)\n\nagent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")\n```\n\n
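Since the flag is global, it is worth switching it back off once you have captured the trace you need. A minimal sketch (the `try`/`finally` wrapper is our own convention here, not something the LangChain API requires):\n\n```python\nfrom langchain.globals import set_debug\n\nset_debug(True)  # log raw inputs and outputs for every component\ntry:\n    agent.run(\"Who directed the 2023 film Oppenheimer?\")\nfinally:\n    set_debug(False)  # restore normal logging even if the run raises\n```\n\nThe console output below comes from the `set_debug(True)` run above.\n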
Console output\n\n\n\n```\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor] Entering Chain run with input:\n {\n \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\"\n }\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] Entering Chain run with input:\n {\n \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\",\n \"agent_scratchpad\": \"\",\n \"stop\": [\n \"\\nObservation:\",\n \"\\n\\tObservation:\"\n ]\n }\n [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input:\n {\n \"prompts\": [\n \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:\"\n ]\n }\n [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain > 3:RunTypeEnum.llm:ChatOpenAI] [5.53s] Exiting LLM run with output:\n {\n \"generations\": [\n [\n {\n \"text\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\",\n \"generation_info\": {\n \"finish_reason\": \"stop\"\n },\n \"message\": {\n \"lc\": 1,\n \"type\": \"constructor\",\n \"id\": [\n \"langchain\",\n \"schema\",\n \"messages\",\n \"AIMessage\"\n ],\n \"kwargs\": {\n \"content\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\",\n \"additional_kwargs\": {}\n }\n }\n }\n ]\n ],\n \"llm_output\": {\n \"token_usage\": {\n \"prompt_tokens\": 206,\n \"completion_tokens\": 71,\n \"total_tokens\": 277\n },\n \"model_name\": \"gpt-4\"\n },\n \"run\": null\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 2:RunTypeEnum.chain:LLMChain] [5.53s] Exiting Chain run with output:\n {\n \"text\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. 
I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\"\n }\n [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input:\n \"Director of the 2023 film Oppenheimer and their age\"\n [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 4:RunTypeEnum.tool:duckduckgo_search] [1.51s] Exiting Tool run with output:\n \"Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\"\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] Entering Chain run with input:\n {\n \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\",\n \"agent_scratchpad\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. 
Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:\",\n \"stop\": [\n \"\\nObservation:\",\n \"\\n\\tObservation:\"\n ]\n }\n [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input:\n {\n \"prompts\": [\n \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:\"\n ]\n }\n [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain > 6:RunTypeEnum.llm:ChatOpenAI] [4.46s] Exiting LLM run with output:\n {\n \"generations\": [\n [\n {\n \"text\": \"The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\",\n \"generation_info\": {\n \"finish_reason\": \"stop\"\n },\n \"message\": {\n \"lc\": 1,\n \"type\": \"constructor\",\n \"id\": [\n \"langchain\",\n \"schema\",\n \"messages\",\n \"AIMessage\"\n ],\n \"kwargs\": {\n \"content\": \"The director of the 2023 film Oppenheimer is Christopher Nolan. 
Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\",\n \"additional_kwargs\": {}\n }\n }\n }\n ]\n ],\n \"llm_output\": {\n \"token_usage\": {\n \"prompt_tokens\": 550,\n \"completion_tokens\": 39,\n \"total_tokens\": 589\n },\n \"model_name\": \"gpt-4\"\n },\n \"run\": null\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 5:RunTypeEnum.chain:LLMChain] [4.46s] Exiting Chain run with output:\n {\n \"text\": \"The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\"\n }\n [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] Entering Tool run with input:\n \"Christopher Nolan age\"\n [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 7:RunTypeEnum.tool:duckduckgo_search] [1.33s] Exiting Tool run with output:\n \"Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \"Dunkirk,\" \"Inception,\" \"Interstellar,\" and the \"Dark Knight\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\"\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] Entering Chain run with input:\n {\n \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\",\n \"agent_scratchpad\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... 
Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:\",\n \"stop\": [\n \"\\nObservation:\",\n \"\\n\\tObservation:\"\n ]\n }\n [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input:\n {\n \"prompts\": [\n \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. 
Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:\"\n ]\n }\n [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain > 9:RunTypeEnum.llm:ChatOpenAI] [2.69s] Exiting LLM run with output:\n {\n \"generations\": [\n [\n {\n \"text\": \"Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\",\n \"generation_info\": {\n \"finish_reason\": \"stop\"\n },\n \"message\": {\n \"lc\": 1,\n \"type\": \"constructor\",\n \"id\": [\n \"langchain\",\n \"schema\",\n \"messages\",\n \"AIMessage\"\n ],\n \"kwargs\": {\n \"content\": \"Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\",\n \"additional_kwargs\": {}\n }\n }\n }\n ]\n ],\n \"llm_output\": {\n \"token_usage\": {\n \"prompt_tokens\": 868,\n \"completion_tokens\": 46,\n \"total_tokens\": 914\n },\n \"model_name\": \"gpt-4\"\n },\n \"run\": null\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 8:RunTypeEnum.chain:LLMChain] [2.69s] Exiting Chain run with output:\n {\n \"text\": \"Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. 
Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\"\n }\n [tool/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] Entering Tool run with input:\n \"52*365\"\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] Entering Chain run with input:\n {\n \"question\": \"52*365\"\n }\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] Entering Chain run with input:\n {\n \"question\": \"52*365\",\n \"stop\": [\n \"```output\"\n ]\n }\n [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input:\n {\n \"prompts\": [\n \"Human: Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question.\\n\\nQuestion: ${Question with math problem.}\\n```text\\n${single line mathematical expression that solves the problem}\\n```\\n...numexpr.evaluate(text)...\\n```output\\n${Output of running the code}\\n```\\nAnswer: ${Answer}\\n\\nBegin.\\n\\nQuestion: What is 37593 * 67?\\n```text\\n37593 * 67\\n```\\n...numexpr.evaluate(\\\"37593 * 67\\\")...\\n```output\\n2518731\\n```\\nAnswer: 2518731\\n\\nQuestion: 37593^(1/5)\\n```text\\n37593**(1/5)\\n```\\n...numexpr.evaluate(\\\"37593**(1/5)\\\")...\\n```output\\n8.222831614237718\\n```\\nAnswer: 8.222831614237718\\n\\nQuestion: 52*365\"\n ]\n }\n [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain > 13:RunTypeEnum.llm:ChatOpenAI] [2.89s] Exiting LLM run with output:\n {\n \"generations\": [\n [\n {\n \"text\": \"```text\\n52*365\\n```\\n...numexpr.evaluate(\\\"52*365\\\")...\\n\",\n \"generation_info\": {\n \"finish_reason\": \"stop\"\n },\n \"message\": {\n \"lc\": 1,\n \"type\": \"constructor\",\n \"id\": [\n \"langchain\",\n \"schema\",\n \"messages\",\n \"AIMessage\"\n ],\n \"kwargs\": {\n \"content\": \"```text\\n52*365\\n```\\n...numexpr.evaluate(\\\"52*365\\\")...\\n\",\n \"additional_kwargs\": {}\n }\n }\n }\n ]\n ],\n \"llm_output\": {\n \"token_usage\": {\n \"prompt_tokens\": 203,\n \"completion_tokens\": 19,\n \"total_tokens\": 222\n },\n \"model_name\": \"gpt-4\"\n },\n \"run\": null\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain > 12:RunTypeEnum.chain:LLMChain] [2.89s] Exiting Chain run with output:\n {\n \"text\": \"```text\\n52*365\\n```\\n...numexpr.evaluate(\\\"52*365\\\")...\\n\"\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator > 11:RunTypeEnum.chain:LLMMathChain] [2.90s] Exiting Chain run with output:\n {\n \"answer\": \"Answer: 18980\"\n }\n [tool/end] [1:RunTypeEnum.chain:AgentExecutor > 10:RunTypeEnum.tool:Calculator] [2.90s] Exiting Tool run with output:\n \"Answer: 18980\"\n [chain/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] Entering Chain run with input:\n {\n \"input\": \"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\",\n \"agent_scratchpad\": \"I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. 
I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\\nObservation: Answer: 18980\\nThought:\",\n \"stop\": [\n \"\\nObservation:\",\n \"\\n\\tObservation:\"\n ]\n }\n [llm/start] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] Entering LLM run with input:\n {\n \"prompts\": [\n \"Human: Answer the following questions as best you can. You have access to the following tools:\\n\\nduckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. 
Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [duckduckgo_search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\\nThought:I need to find out who directed the 2023 film Oppenheimer and their age. Then, I need to calculate their age in days. I will use DuckDuckGo to find out the director and their age.\\nAction: duckduckgo_search\\nAction Input: \\\"Director of the 2023 film Oppenheimer and their age\\\"\\nObservation: Capturing the mad scramble to build the first atomic bomb required rapid-fire filming, strict set rules and the construction of an entire 1940s western town. By Jada Yuan. July 19, 2023 at 5:00 a ... In Christopher Nolan's new film, \\\"Oppenheimer,\\\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. Christopher Nolan goes deep on 'Oppenheimer,' his most 'extreme' film to date. By Kenneth Turan. July 11, 2023 5 AM PT. For Subscribers. Christopher Nolan is photographed in Los Angeles ... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\\nThought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his age.\\nAction: duckduckgo_search\\nAction Input: \\\"Christopher Nolan age\\\"\\nObservation: Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. July 30, 1970 (age 52) London England Notable Works: \\\"Dunkirk\\\" \\\"Tenet\\\" \\\"The Prestige\\\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film July 11, 2023 5 AM PT For Subscribers Christopher Nolan is photographed in Los Angeles. (Joe Pugliese / For The Times) This is not the story I was supposed to write. Oppenheimer director Christopher Nolan, Cillian Murphy, Emily Blunt and Matt Damon on the stakes of making a three-hour, CGI-free summer film. 
Christopher Nolan, the director behind such films as \\\"Dunkirk,\\\" \\\"Inception,\\\" \\\"Interstellar,\\\" and the \\\"Dark Knight\\\" trilogy, has spent the last three years living in Oppenheimer's world, writing ...\\nThought:Christopher Nolan was born on July 30, 1970, which makes him 52 years old in 2023. Now I need to calculate his age in days.\\nAction: Calculator\\nAction Input: 52*365\\nObservation: Answer: 18980\\nThought:\"\n ]\n }\n [llm/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain > 15:RunTypeEnum.llm:ChatOpenAI] [3.52s] Exiting LLM run with output:\n {\n \"generations\": [\n [\n {\n \"text\": \"I now know the final answer\\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\",\n \"generation_info\": {\n \"finish_reason\": \"stop\"\n },\n \"message\": {\n \"lc\": 1,\n \"type\": \"constructor\",\n \"id\": [\n \"langchain\",\n \"schema\",\n \"messages\",\n \"AIMessage\"\n ],\n \"kwargs\": {\n \"content\": \"I now know the final answer\\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\",\n \"additional_kwargs\": {}\n }\n }\n }\n ]\n ],\n \"llm_output\": {\n \"token_usage\": {\n \"prompt_tokens\": 926,\n \"completion_tokens\": 43,\n \"total_tokens\": 969\n },\n \"model_name\": \"gpt-4\"\n },\n \"run\": null\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor > 14:RunTypeEnum.chain:LLMChain] [3.52s] Exiting Chain run with output:\n {\n \"text\": \"I now know the final answer\\nFinal Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\"\n }\n [chain/end] [1:RunTypeEnum.chain:AgentExecutor] [21.96s] Exiting Chain run with output:\n {\n \"output\": \"The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.\"\n }\n\n\n\n\n\n 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 52 years old. His age in days is approximately 18980 days.'\n```\n\n\n\n
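Because `set_debug(True)` flips a process-wide flag, it's worth switching it back off once you've captured the trace you need. Here is a minimal sketch (reusing the `agent` built earlier in this guide):\n\n```python\nfrom langchain.globals import set_debug\n\nset_debug(True) # log raw inputs/outputs for every component\ntry:\n agent.run(\"Who directed the 2023 film Oppenheimer and what is their age?\")\nfinally:\n set_debug(False) # restore the quieter default\n```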
\n\n### `set_verbose(True)`\n\nSetting the `verbose` flag will print out inputs and outputs in a slightly more readable format and will skip logging certain raw outputs (like the token usage stats for an LLM call) so that you can focus on application logic.\n\n\n```python\nfrom langchain.globals import set_verbose\n\nset_verbose(True)\n\nagent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")\n```\n\n
Console output\n\n\n\n```\n \n \n > Entering new AgentExecutor chain...\n \n \n > Entering new LLMChain chain...\n Prompt after formatting:\n Answer the following questions as best you can. You have access to the following tools:\n \n duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\n Calculator: Useful for when you need to answer questions about math.\n \n Use the following format:\n \n Question: the input question you must answer\n Thought: you should always think about what to do\n Action: the action to take, should be one of [duckduckgo_search, Calculator]\n Action Input: the input to the action\n Observation: the result of the action\n ... (this Thought/Action/Action Input/Observation can repeat N times)\n Thought: I now know the final answer\n Final Answer: the final answer to the original input question\n \n Begin!\n \n Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\n Thought:\n \n > Finished chain.\n First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.\n Action: duckduckgo_search\n Action Input: \"Director of the 2023 film Oppenheimer\"\n Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\n Thought:\n \n > Entering new LLMChain chain...\n Prompt after formatting:\n Answer the following questions as best you can. You have access to the following tools:\n \n duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\n Calculator: Useful for when you need to answer questions about math.\n \n Use the following format:\n \n Question: the input question you must answer\n Thought: you should always think about what to do\n Action: the action to take, should be one of [duckduckgo_search, Calculator]\n Action Input: the input to the action\n Observation: the result of the action\n ... (this Thought/Action/Action Input/Observation can repeat N times)\n Thought: I now know the final answer\n Final Answer: the final answer to the original input question\n \n Begin!\n \n Question: Who directed the 2023 film Oppenheimer and what is their age? 
What is their age in days (assume 365 days per year)?\n Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.\n Action: duckduckgo_search\n Action Input: \"Director of the 2023 film Oppenheimer\"\n Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\n Thought:\n \n > Finished chain.\n The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age.\n Action: duckduckgo_search\n Action Input: \"Christopher Nolan birth date\"\n Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about \"the man who ...\n Thought:\n \n > Entering new LLMChain chain...\n Prompt after formatting:\n Answer the following questions as best you can. You have access to the following tools:\n \n duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. 
Input should be a search query.\n Calculator: Useful for when you need to answer questions about math.\n \n Use the following format:\n \n Question: the input question you must answer\n Thought: you should always think about what to do\n Action: the action to take, should be one of [duckduckgo_search, Calculator]\n Action Input: the input to the action\n Observation: the result of the action\n ... (this Thought/Action/Action Input/Observation can repeat N times)\n Thought: I now know the final answer\n Final Answer: the final answer to the original input question\n \n Begin!\n \n Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\n Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.\n Action: duckduckgo_search\n Action Input: \"Director of the 2023 film Oppenheimer\"\n Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\n Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age.\n Action: duckduckgo_search\n Action Input: \"Christopher Nolan birth date\"\n Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. 
July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about \"the man who ...\n Thought:\n \n > Finished chain.\n Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days.\n Action: Calculator\n Action Input: (2023 - 1970) * 365\n \n > Entering new LLMMathChain chain...\n (2023 - 1970) * 365\n \n > Entering new LLMChain chain...\n Prompt after formatting:\n Translate a math problem into a expression that can be executed using Python's numexpr library. Use the output of running this code to answer the question.\n \n Question: ${Question with math problem.}\n ```text\n ${single line mathematical expression that solves the problem}\n ```\n ...numexpr.evaluate(text)...\n ```output\n ${Output of running the code}\n ```\n Answer: ${Answer}\n \n Begin.\n \n Question: What is 37593 * 67?\n ```text\n 37593 * 67\n ```\n ...numexpr.evaluate(\"37593 * 67\")...\n ```output\n 2518731\n ```\n Answer: 2518731\n \n Question: 37593^(1/5)\n ```text\n 37593**(1/5)\n ```\n ...numexpr.evaluate(\"37593**(1/5)\")...\n ```output\n 8.222831614237718\n ```\n Answer: 8.222831614237718\n \n Question: (2023 - 1970) * 365\n \n \n > Finished chain.\n ```text\n (2023 - 1970) * 365\n ```\n ...numexpr.evaluate(\"(2023 - 1970) * 365\")...\n \n Answer: 19345\n > Finished chain.\n \n Observation: Answer: 19345\n Thought:\n \n > Entering new LLMChain chain...\n Prompt after formatting:\n Answer the following questions as best you can. You have access to the following tools:\n \n duckduckgo_search: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.\n Calculator: Useful for when you need to answer questions about math.\n \n Use the following format:\n \n Question: the input question you must answer\n Thought: you should always think about what to do\n Action: the action to take, should be one of [duckduckgo_search, Calculator]\n Action Input: the input to the action\n Observation: the result of the action\n ... (this Thought/Action/Action Input/Observation can repeat N times)\n Thought: I now know the final answer\n Final Answer: the final answer to the original input question\n \n Begin!\n \n Question: Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\n Thought:First, I need to find out who directed the film Oppenheimer in 2023 and their birth date to calculate their age.\n Action: duckduckgo_search\n Action Input: \"Director of the 2023 film Oppenheimer\"\n Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert ... 2023, 12:16 p.m. ET. ... including his role as the director of the Manhattan Engineer District, better ... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". 
In this opening salvo of 2023's Oscar battle, Nolan has enjoined a star-studded cast for a retelling of the brilliant and haunted life of J. Robert Oppenheimer, the American physicist whose... Oppenheimer is a 2023 epic biographical thriller film written and directed by Christopher Nolan.It is based on the 2005 biography American Prometheus by Kai Bird and Martin J. Sherwin about J. Robert Oppenheimer, a theoretical physicist who was pivotal in developing the first nuclear weapons as part of the Manhattan Project and thereby ushering in the Atomic Age.\n Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age.\n Action: duckduckgo_search\n Action Input: \"Christopher Nolan birth date\"\n Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. July 2023 sees the release of Christopher Nolan's new film, Oppenheimer, his first movie since 2020's Tenet and his split from Warner Bros. Billed as an epic thriller about \"the man who ...\n Thought:Christopher Nolan was born on July 30, 1970. Now I need to calculate his age in 2023 and then convert it into days.\n Action: Calculator\n Action Input: (2023 - 1970) * 365\n Observation: Answer: 19345\n Thought:\n \n > Finished chain.\n I now know the final answer\n Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.\n \n > Finished chain.\n\n\n 'The director of the 2023 film Oppenheimer is Christopher Nolan and he is 53 years old in 2023. His age in days is 19345 days.'\n```\n\n\n\n
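`set_verbose` is also a global toggle. If you only want the readable logs around one part of a script, a small sketch using `get_verbose` to save and restore the previous setting (same assumed `agent` as above):\n\n```python\nfrom langchain.globals import get_verbose, set_verbose\n\nprevious = get_verbose() # remember the current global setting\nset_verbose(True)\nagent.run(\"Who directed the 2023 film Oppenheimer and what is their age?\")\nset_verbose(previous) # put the old setting back\n```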
\n\n### `Chain(..., verbose=True)`\n\nYou can also scope verbosity down to a single object, in which case only the inputs and outputs to that object are printed (along with any additional callback calls made specifically by that object).\n\n\n```python\n# Passing verbose=True to initialize_agent will pass that along to the AgentExecutor (which is a Chain).\nagent = initialize_agent(\n tools, \n llm, \n agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n verbose=True,\n)\n\nagent.run(\"Who directed the 2023 film Oppenheimer and what is their age? What is their age in days (assume 365 days per year)?\")\n```\n\n
Console output\n\n\n\n```\n > Entering new AgentExecutor chain...\n First, I need to find out who directed the film Oppenheimer in 2023 and their birth date. Then, I can calculate their age in years and days.\n Action: duckduckgo_search\n Action Input: \"Director of 2023 film Oppenheimer\"\n Observation: Oppenheimer: Directed by Christopher Nolan. With Cillian Murphy, Emily Blunt, Robert Downey Jr., Alden Ehrenreich. The story of American scientist J. Robert Oppenheimer and his role in the development of the atomic bomb. In Christopher Nolan's new film, \"Oppenheimer,\" Cillian Murphy stars as J. Robert Oppenheimer, the American physicist who oversaw the Manhattan Project in Los Alamos, N.M. Universal Pictures... J Robert Oppenheimer was the director of the secret Los Alamos Laboratory. It was established under US president Franklin D Roosevelt as part of the Manhattan Project to build the first atomic bomb. He oversaw the first atomic bomb detonation in the New Mexico desert in July 1945, code-named \"Trinity\". A Review of Christopher Nolan's new film 'Oppenheimer' , the story of the man who fathered the Atomic Bomb. Cillian Murphy leads an all star cast ... Release Date: July 21, 2023. Director ... For his new film, \"Oppenheimer,\" starring Cillian Murphy and Emily Blunt, director Christopher Nolan set out to build an entire 1940s western town.\n Thought:The director of the 2023 film Oppenheimer is Christopher Nolan. Now I need to find out his birth date to calculate his age.\n Action: duckduckgo_search\n Action Input: \"Christopher Nolan birth date\"\n Observation: July 30, 1970 (age 52) London England Notable Works: \"Dunkirk\" \"Tenet\" \"The Prestige\" See all related content \u2192 Recent News Jul. 13, 2023, 11:11 AM ET (AP) Cillian Murphy, playing Oppenheimer, finally gets to lead a Christopher Nolan film Christopher Edward Nolan CBE (born 30 July 1970) is a British and American filmmaker. Known for his Hollywood blockbusters with complex storytelling, Nolan is considered a leading filmmaker of the 21st century. His films have grossed $5 billion worldwide. The recipient of many accolades, he has been nominated for five Academy Awards, five BAFTA Awards and six Golden Globe Awards. Christopher Nolan is currently 52 according to his birthdate July 30, 1970 Sun Sign Leo Born Place Westminster, London, England, United Kingdom Residence Los Angeles, California, United States Nationality Education Chris attended Haileybury and Imperial Service College, in Hertford Heath, Hertfordshire. Christopher Nolan's next movie will study the man who developed the atomic bomb, J. Robert Oppenheimer. Here's the release date, plot, trailers & more. Date of Birth: 30 July 1970 . ... Christopher Nolan is a British-American film director, producer, and screenwriter. His films have grossed more than US$5 billion worldwide, and have garnered 11 Academy Awards from 36 nominations. ...\n Thought:Christopher Nolan was born on July 30, 1970. Now I can calculate his age in years and then in days.\n Action: Calculator\n Action Input: {\"operation\": \"subtract\", \"operands\": [2023, 1970]}\n Observation: Answer: 53\n Thought:Christopher Nolan is 53 years old in 2023. Now I need to calculate his age in days.\n Action: Calculator\n Action Input: {\"operation\": \"multiply\", \"operands\": [53, 365]}\n Observation: Answer: 19345\n Thought:I now know the final answer\n Final Answer: The director of the 2023 film Oppenheimer is Christopher Nolan. 
He is 53 years old in 2023, which is approximately 19345 days.\n \n > Finished chain.\n\n\n 'The director of the 2023 film Oppenheimer is Christopher Nolan. He is 53 years old in 2023, which is approximately 19345 days.'\n```\n\n\n\n
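The same `verbose=True` pattern applies to any individual `Chain`, not just agents created with `initialize_agent`. As a minimal sketch (assuming the `llm` instance used above), a standalone `LLMChain` can log just its own inputs and outputs:\n\n```python\nfrom langchain.chains import LLMChain\nfrom langchain.prompts import PromptTemplate\n\nprompt = PromptTemplate.from_template(\"What year was the film {film} released?\")\n# Only this chain prints its inputs and outputs; everything else stays quiet.\nchain = LLMChain(llm=llm, prompt=prompt, verbose=True)\nchain.run(film=\"Oppenheimer\")\n```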
\n\n## Other callbacks\n\n`Callbacks` are what we use to execute any functionality within a component outside the primary component logic. All of the above solutions use `Callbacks` under the hood to log intermediate steps of components. There are a number of `Callbacks` relevant for debugging that come with LangChain out of the box, like the [FileCallbackHandler](/docs/modules/callbacks/filecallbackhandler). You can also implement your own callbacks to execute custom functionality.\n\nSee here for more info on [Callbacks](/docs/modules/callbacks/), how to use them, and how to customize them.\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\guides\\pydantic_compatibility.md", "filetype": ".md", "content": "# Pydantic compatibility\n\n- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)\n- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)\n- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time\n\n## LangChain Pydantic migration plan\n\nAs of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2. \n * Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).\n * During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial\n migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).\n\nUsers can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but must avoid mixing v1 and v2 code for LangChain.\n\nBelow are two examples showing how to avoid mixing pydantic v1 and v2 code in\nthe case of inheritance and in the case of passing objects to LangChain.\n\n**Example 1: Extending via inheritance**\n\n**YES** \n\n```python\nfrom langchain_core.tools import BaseTool\nfrom pydantic.v1 import Field, validator\n\nclass CustomTool(BaseTool): # BaseTool is v1 code\n x: int = Field(default=1)\n\n def _run(*args, **kwargs):\n return \"hello\"\n\n @validator('x') # v1 code\n @classmethod\n def validate_x(cls, x: int) -> int:\n return 1\n \n\nCustomTool(\n name='custom_tool',\n description=\"hello\",\n x=1,\n)\n```\n\nMixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors.\n\n**NO** \n\n```python\nfrom langchain_core.tools import BaseTool\nfrom pydantic import Field, field_validator # pydantic v2\n\nclass CustomTool(BaseTool): # BaseTool is v1 code\n x: int = Field(default=1)\n\n def _run(*args, **kwargs):\n return \"hello\"\n\n @field_validator('x') # v2 code\n @classmethod\n def validate_x(cls, x: int) -> int:\n return 1\n \n\nCustomTool( \n name='custom_tool',\n description=\"hello\",\n x=1,\n)\n```\n\n**Example 2: Passing objects to LangChain**\n\n**YES**\n\n```python\nfrom langchain_core.tools import Tool\nfrom pydantic.v1 import BaseModel, Field # <-- Uses v1 namespace\n\nclass CalculatorInput(BaseModel):\n question: str = Field()\n\nTool.from_function( # <-- tool uses v1 namespace\n func=lambda question: 'hello',\n name=\"Calculator\",\n description=\"useful for when you need to answer questions about math\",\n args_schema=CalculatorInput\n)\n```\n\n**NO**\n\n```python\nfrom langchain_core.tools import Tool\nfrom pydantic import BaseModel, Field # <-- Uses v2 namespace\n\nclass CalculatorInput(BaseModel):\n question: str = Field()\n\nTool.from_function( # <-- tool uses v1 
namespace\n func=lambda question: 'hello',\n name=\"Calculator\",\n description=\"useful for when you need to answer questions about math\",\n args_schema=CalculatorInput\n)\n```" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\callbacks\\llmonitor.md", "filetype": ".md", "content": "# LLMonitor\n\n>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.\n\n\n\n## Setup\n\nCreate an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.\n\nOnce you have it, set it as an environment variable by running:\n\n```bash\nexport LLMONITOR_APP_ID=\"...\"\n```\n\nIf you'd prefer not to set an environment variable, you can pass the key directly when initializing the callback handler:\n\n```python\nfrom langchain.callbacks import LLMonitorCallbackHandler\n\nhandler = LLMonitorCallbackHandler(app_id=\"...\")\n```\n\n## Usage with LLM/Chat models\n\n```python\nfrom langchain_openai import OpenAI, ChatOpenAI\nfrom langchain.callbacks import LLMonitorCallbackHandler\n\nhandler = LLMonitorCallbackHandler()\n\nllm = OpenAI(\n callbacks=[handler],\n)\n\nchat = ChatOpenAI(callbacks=[handler])\n\nllm(\"Tell me a joke\")\n```\n\n## Usage with chains and agents\n\nMake sure to pass the callback handler to the `run` method so that all related chains and LLM calls are correctly tracked.\n\nIt is also recommended to pass `agent_name` in the metadata to be able to distinguish between agents in the dashboard.\n\nExample:\n\n```python\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.messages import SystemMessage\nfrom langchain.agents import OpenAIFunctionsAgent, AgentExecutor, tool\nfrom langchain.callbacks import LLMonitorCallbackHandler\n\nllm = ChatOpenAI(temperature=0)\n\nhandler = LLMonitorCallbackHandler()\n\n@tool\ndef get_word_length(word: str) -> int:\n \"\"\"Returns the length of a word.\"\"\"\n return len(word)\n\ntools = [get_word_length]\n\nprompt = OpenAIFunctionsAgent.create_prompt(\n system_message=SystemMessage(\n content=\"You are a very powerful assistant, but bad at calculating lengths of words.\"\n )\n)\n\nagent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)\nagent_executor = AgentExecutor(\n agent=agent, tools=tools, verbose=True, metadata={\"agent_name\": \"WordCount\"} # <- recommended, assign a custom name\n)\nagent_executor.run(\"how many letters in the word educa?\", callbacks=[handler])\n```\n\nAnother example:\n\n```python\nfrom langchain.agents import load_tools, initialize_agent, AgentType\nfrom langchain_openai import OpenAI\nfrom langchain.callbacks import LLMonitorCallbackHandler\n\nhandler = LLMonitorCallbackHandler()\n\nllm = OpenAI(temperature=0)\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, metadata={ \"agent_name\": \"GirlfriendAgeFinder\" }) # <- recommended, assign a custom name\n\nagent.run(\n \"Who is Leo DiCaprio's girlfriend? 
What is her current age raised to the 0.43 power?\",\n callbacks=[handler],\n)\n```\n\n## User Tracking\n\nUser tracking allows you to identify your users, track their cost, conversations and more.\n\n```python\nfrom langchain.callbacks.llmonitor_callback import LLMonitorCallbackHandler, identify\n\nwith identify(\"user-123\"):\n llm(\"Tell me a joke\")\n\nwith identify(\"user-456\", user_props={\"email\": \"user456@test.com\"}):\n agent.run(\"Who is Leo DiCaprio's girlfriend?\")\n```\n\n## Support\n\nFor any questions or issues with the integration, you can reach out to the LLMonitor team on [Discord](http://discord.com/invite/8PafSG58kK) or via [email](mailto:vince@llmonitor.com).\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\callbacks\\streamlit.md", "filetype": ".md", "content": "# Streamlit\n\n> **[Streamlit](https://streamlit.io/) is a faster way to build and share data apps.**\n> Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No front‑end experience required.\n> See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai).\n\n[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/streamlit-agent?quickstart=1)\n\nIn this guide we will demonstrate how to use `StreamlitCallbackHandler` to display the thoughts and actions of an agent in an\ninteractive Streamlit app. Try it out with the example app built on the MRKL agent.\n\n\n\n## Installation and Setup\n\n```bash\npip install langchain streamlit\n```\n\nYou can run `streamlit hello` to load a sample app and validate that your install succeeded. See full instructions in Streamlit's\n[Getting started documentation](https://docs.streamlit.io/library/get-started).\n\n## Display thoughts and actions\n\nTo create a `StreamlitCallbackHandler`, you just need to provide a parent container to render the output.\n\n```python\nfrom langchain_community.callbacks import StreamlitCallbackHandler\nimport streamlit as st\n\nst_callback = StreamlitCallbackHandler(st.container())\n```\n\nAdditional keyword arguments to customize the display behavior are described in the\n[API reference](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler.html).\n\n### Scenario 1: Using an Agent with Tools\n\nThe primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). 
You can create an\nagent in your Streamlit app and simply pass the `StreamlitCallbackHandler` in via `callbacks` when invoking the agent in order to visualize the\nthoughts and actions live in your app.\n\n```python\nimport streamlit as st\nfrom langchain import hub\nfrom langchain.agents import AgentExecutor, create_react_agent, load_tools\nfrom langchain_community.callbacks import StreamlitCallbackHandler\nfrom langchain_openai import OpenAI\n\nllm = OpenAI(temperature=0, streaming=True)\ntools = load_tools([\"ddg-search\"])\nprompt = hub.pull(\"hwchase17/react\")\nagent = create_react_agent(llm, tools, prompt)\nagent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n\nif prompt := st.chat_input():\n st.chat_message(\"user\").write(prompt)\n with st.chat_message(\"assistant\"):\n st_callback = StreamlitCallbackHandler(st.container())\n response = agent_executor.invoke(\n {\"input\": prompt}, {\"callbacks\": [st_callback]}\n )\n st.write(response[\"output\"])\n```\n\n**Note:** You will need to set `OPENAI_API_KEY` for the above app code to run successfully.\nThe easiest way to do this is via [Streamlit secrets.toml](https://docs.streamlit.io/library/advanced-features/secrets-management),\nor any other local ENV management tool.\n\n### Additional scenarios\n\nCurrently `StreamlitCallbackHandler` is geared towards use with a LangChain Agent Executor. Support for additional agent types,\ndirect use with Chains, etc. will be added in the future.\n\nYou may also be interested in using\n[StreamlitChatMessageHistory](/docs/integrations/memory/streamlit_chat_message_history) for LangChain.\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\document_loaders\\example_data\\whatsapp_chat.txt", "filetype": ".txt", "content": "1/22/23, 6:30 PM - User 1: Hi! Im interested in your bag. Im offering $50. Let me know if you are interested. Thanks!\n1/22/23, 8:24 PM - User 2: Goodmorning! $50 is too low.\n1/23/23, 2:59 AM - User 1: How much do you want?\n1/23/23, 3:00 AM - User 2: Online is at least $100\n1/23/23, 3:01 AM - User 2: Here is $129\n1/23/23, 3:01 AM - User 2: \n1/23/23, 3:01 AM - User 1: Im not interested in this bag. Im interested in the blue one!\n1/23/23, 3:02 AM - User 1: I thought you were selling the blue one!\n1/23/23, 3:18 AM - User 2: No Im sorry it was my mistake, the blue one is not for sale\n1/23/23, 3:19 AM - User 1: Oh no worries! 
Bye\n1/23/23, 3:19 AM - User 2: Bye!\n1/23/23, 3:22_AM - User 1: And let me know if anything changes" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\document_loaders\\example_data\\fake_discord_data\\output.txt", + "filetype": ".txt", + "content": " application.json\n 1023495323659816971/\n applications/\n avatar.gif\n user.json\n events-2023-00000-of-00001.json\n events-2023-00000-of-00001.json\n events-2023-00000-of-00001.json\n events-2023-00000-of-00001.json\n analytics/\n modeling/\n reporting/\n tns/\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n 
channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n channel.json\n messages.csv\n c1000084973275058257/\n c1000108836771856496/\n c1004874234339794977/\n c1004874234339794979/\n c1004874234339794981/\n c1004874234339794982/\n c1005785616165896283/\n c1011447733393043628/\n c1011548022905249822/\n c1011650063027687575/\n c1011714070182895727/\n c1013930263950135346/\n c1013930396829884426/\n c1014957294745829479/\n c1014961384821366794/\n c1014974864370712696/\n c1019288541592817785/\n c1024947790767464478/\n c1027257686858932255/\n c1027927867989962814/\n c1032151840999100436/\n c1032575808826523662/\n c1037561178286739466/\n c1038097349660135474/\n c1038097372695236729/\n c1038689169351913544/\n c1038692122452312125/\n c1039957371381887049/\n c1040989617157066782/\n c1047165096452960316/\n c1047565374645870743/\n c1050225908914589716/\n c1050226593668284416/\n c1050227353311248404/\n c1051632794427723827/\n c1052599046717591632/\n c1052615516981821531/\n c1056285083520217149/\n c105765859191975936/\n c1061166503753416735/\n c1062024667105341502/\n c1066640566621835284/\n c1070018538758221874/\n c1072944049788555314/\n c1075121707033042985/\n c1075438954632990820/\n c1077238309320929342/\n c1081432695315386418/\n c1082169962157838366/\n c1084011585871282256/\n c1084352082812878928/\n c1085149531437535343/\n c1086944178086359060/\n c1093214985557123223/\n c1093215227555876914/\n c1093930791794393089/\n c1096323263161978891/\n c1096489741710532730/\n c1097000752653795358/\n c278566343836565505/\n c279692806442844161/\n c280973436971515906/\n c283812709789859851/\n c343944376055103488/\n c486935104384532502/\n c531543370041131008/\n c538158613252800512/\n c572384192571113512/\n c619960843878268950/\n c661268593870372876/\n c661394153778970624/\n c663302088226373632/\n c669957895257063445/\n c670218237891313664/\n c673160333661306880/\n c674693947800420363/\n c674694138129678375/\n c743425228952305695/\n c754627904406814770/\n c754638493875044503/\n c757205803651301436/\n c759232323710484531/\n c771802926372093973/\n c783240623582609416/\n c783244379115880448/\n c801744322788982814/\n c810514969892225024/\n c816983218434605057/\n c830184175176122389/\n c830679381033877564/\n c831172308395622480/\n c849582819105177650/\n c860977555875430492/\n c867042653401251880/\n c868094992986550322/\n c868917941184376842/\n c905007686976946176/\n c909600839717511211/\n c909600931816018031/\n c923095048931905557/\n c924877027180417035/\n c938491245347631114/\n c938743368375214110/\n c969876184185860107/\n c969945714056642580/\n c969948939728093214/\n c981037338517966889/\n c984120044478939146/\n c985958948085592064/\n c990816829993811978/\n c993402018901266436/\n c993782366948565102/\n c993843360752226364/\n c994556806644899870/\n index.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n bans.json\n channels.json\n emoji.json\n guild.json\n icon.jpeg\n webhooks.json\n audit-log.json\n guild.json\n audit-log.json\n bans.json\n channels.json\n emoji.json\n guild.json\n webhooks.json\n audit-log.json\n guild.json\n audit-log.json\n bans.json\n channels.json\n emoji.json\n guild.json\n icon.png\n webhooks.json\n 
audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n audit-log.json\n guild.json\n 1024120160740716544/\n 102860784329052160/\n 1032575808826523659/\n 1038097195422978059/\n 1039583521112600638/\n 1050224141732687912/\n 1069661049827111054/\n 267624335836053506/\n 278285146518716417/\n 486935104384532500/\n 531303890453397522/\n 669880381649977354/\n 727016164215226450/\n 743099584242516037/\n 753173158198116402/\n 830184174198718474/\n 860977555293470772/\n 887994159741427712/\n 909600839717511208/\n 974519864045756446/\n index.json\naccount/\nactivities_e/\nactivities_w/\nactivity/\nmessages/\nprograms/\nREADME.txt\nservers/\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\memory\\remembrall.md", + "filetype": ".md", + "content": "# Remembrall\n\nThis page covers how to use the [Remembrall](https://remembrall.dev) ecosystem within LangChain.\n\n## What is Remembrall?\n\nRemembrall gives your language model long-term memory, retrieval augmented generation, and complete observability with just a few lines of code.\n\n![Screenshot of the Remembrall dashboard showing request statistics and model interactions.](/img/RemembrallDashboard.png \"Remembrall Dashboard Interface\")\n\nIt works as a light-weight proxy on top of your OpenAI calls and simply augments the context of the chat calls at runtime with relevant facts that have been collected.\n\n## Setup\n\nTo get started, [sign in with Github on the Remembrall platform](https://remembrall.dev/login) and copy your [API key from the settings page](https://remembrall.dev/dashboard/settings).\n\nAny request that you send with the modified `openai_api_base` (see below) and Remembrall API key will automatically be tracked in the Remembrall dashboard. You **never** have to share your OpenAI key with our platform and this information is **never** stored by the Remembrall systems.\n\nTo do this, we need to install the following dependencies:\n\n```bash\npip install -U langchain-openai\n```\n\n### Enable Long Term Memory\n\nIn addition to setting the `openai_api_base` and Remembrall API key via `x-gp-api-key`, you should specify a UID to maintain memory for. This will usually be a unique user identifier (like email).\n\n```python\nfrom langchain_openai import ChatOpenAI\nchat_model = ChatOpenAI(openai_api_base=\"https://remembrall.dev/api/openai/v1\",\n model_kwargs={\n \"headers\":{\n \"x-gp-api-key\": \"remembrall-api-key-here\",\n \"x-gp-remember\": \"user@email.com\",\n }\n })\n\nchat_model.predict(\"My favorite color is blue.\")\nimport time; time.sleep(5) # wait for system to save fact via auto save\nprint(chat_model.predict(\"What is my favorite color?\"))\n```\n\n### Enable Retrieval Augmented Generation\n\nFirst, create a document context in the [Remembrall dashboard](https://remembrall.dev/dashboard/spells). Paste in the document texts or upload documents as PDFs to be processed. 
Save the Document Context ID and insert it as shown below.\n\n```python\nfrom langchain_openai import ChatOpenAI\n\nchat_model = ChatOpenAI(\n    openai_api_base=\"https://remembrall.dev/api/openai/v1\",\n    model_kwargs={\n        \"headers\": {\n            \"x-gp-api-key\": \"remembrall-api-key-here\",\n            \"x-gp-context\": \"document-context-id-goes-here\",\n        }\n    },\n)\n\nprint(chat_model.predict(\"This is a question that can be answered with my document.\"))\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\airtable.md", "filetype": ".md", "content": "# Airtable\n\n>[Airtable](https://en.wikipedia.org/wiki/Airtable) is a cloud collaboration service.\n>`Airtable` is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet. \n> The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox', \n> 'phone number', and 'drop-down list', and can reference file attachments like images.\n\n>Users can create a database, set up column types, add records, link tables to one another, collaborate, sort records\n> and publish views to external websites.\n\n## Installation and Setup\n\n```bash\npip install pyairtable\n```\n\n* Get your [API key](https://support.airtable.com/docs/creating-and-using-api-keys-and-access-tokens).\n* Get the [ID of your base](https://airtable.com/developers/web/api/introduction).\n* Get the [table ID from the table url](https://www.highviewapps.com/kb/where-can-i-find-the-airtable-base-id-and-table-id/#:~:text=Both%20the%20Airtable%20Base%20ID,URL%20that%20begins%20with%20tbl).\n\n## Document Loader\n\n```python\nfrom langchain_community.document_loaders import AirtableLoader\n```\n\nSee an [example](/docs/integrations/document_loaders/airtable).\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\awadb.md", "filetype": ".md", "content": "# AwaDB\n\n>[AwaDB](https://github.com/awa-ai/awadb) is an AI Native database for the search and storage of embedding vectors used by LLM Applications.\n\n## Installation and Setup\n\n```bash\npip install awadb\n```\n\n## Vector Store\n\n```python\nfrom langchain_community.vectorstores import AwaDB\n```\n\nSee a [usage example](/docs/integrations/vectorstores/awadb).\n\n## Text Embedding Model\n\n```python\nfrom langchain_community.embeddings import AwaEmbeddings\n```\n\nSee a [usage example](/docs/integrations/text_embedding/awadb).\n
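\nAs a rough usage sketch (assuming a local `awadb` install; the sample texts are purely illustrative), the two imports above can be combined like this:\n\n```python\nfrom langchain_community.embeddings import AwaEmbeddings\nfrom langchain_community.vectorstores import AwaDB\n\n# Build a small in-process store from raw texts.\ndb = AwaDB.from_texts(\n    texts=[\"AwaDB is an AI-native vector database.\", \"LangChain supports many vector stores.\"],\n    embedding=AwaEmbeddings(),\n)\n\n# Retrieve the stored text most similar to the query.\nprint(db.similarity_search(\"What is AwaDB?\", k=1)[0].page_content)\n```\n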
" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\baseten.md", "filetype": ".md", "content": "# Baseten\n\n[Baseten](https://baseten.co) provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.\n\nAs a model inference platform, Baseten is a `Provider` in the LangChain ecosystem. The Baseten integration currently implements a single `Component`, LLMs, but more are planned!\n\nBaseten lets you run both open-source models like Llama 2 or Mistral and proprietary or fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:\n\n* Rather than paying per token, you pay per minute of GPU used.\n* Every model on Baseten uses [Truss](https://truss.baseten.co/welcome), our open-source model packaging framework, for maximum customizability.\n* While we have some [OpenAI ChatCompletions-compatible models](https://docs.baseten.co/api-reference/openai), you can define your own I/O spec with Truss.\n\nYou can learn more about Baseten in [our docs](https://docs.baseten.co/) or read on for LangChain-specific info.\n\n## Setup: LangChain + Baseten\n\nYou'll need two things to use Baseten models with LangChain:\n\n- A [Baseten account](https://baseten.co)\n- An [API key](https://docs.baseten.co/observability/api-keys)\n\nExport your API key as an environment variable called `BASETEN_API_KEY`.\n\n```sh\nexport BASETEN_API_KEY=\"paste_your_api_key_here\"\n```\n\n## Component guide: LLMs\n\nBaseten integrates with LangChain through the [LLM component](https://python.langchain.com/docs/integrations/llms/baseten), which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.\n\nYou can deploy foundation models like Mistral and Llama 2 with one click from the [Baseten model library](https://app.baseten.co/explore/) or, if you have your own model, [deploy it with Truss](https://truss.baseten.co/welcome).\n\nIn this example, we'll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model's ID, found in the model dashboard.\n\nTo use this module, you must:\n\n* Export your Baseten API key as the environment variable BASETEN_API_KEY\n* Get the model ID for your model from your Baseten dashboard\n* Identify the model deployment (\"production\" for all model library models)\n\n[Learn more](https://docs.baseten.co/deploy/lifecycle) about model IDs and deployments.\n\nProduction deployment (standard for model library models):\n\n```python\nfrom langchain_community.llms import Baseten\n\nmistral = Baseten(model=\"MODEL_ID\", deployment=\"production\")\nmistral(\"What is the Mistral wind?\")\n```\n\nDevelopment deployment:\n\n```python\nfrom langchain_community.llms import Baseten\n\nmistral = Baseten(model=\"MODEL_ID\", deployment=\"development\")\nmistral(\"What is the Mistral wind?\")\n```\n\nOther published deployment:\n\n```python\nfrom langchain_community.llms import Baseten\n\nmistral = Baseten(model=\"MODEL_ID\", deployment=\"DEPLOYMENT_ID\")\nmistral(\"What is the Mistral wind?\")\n```\n
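\nAs a hedged sketch (reusing the MODEL_ID placeholder above), a Baseten LLM composes with prompts like any other LangChain LLM:\n\n```python\nfrom langchain_community.llms import Baseten\nfrom langchain_core.prompts import PromptTemplate\n\n# MODEL_ID stands in for the ID shown in your Baseten model dashboard.\nllm = Baseten(model=\"MODEL_ID\", deployment=\"production\")\nprompt = PromptTemplate.from_template(\"Answer in one sentence: {question}\")\nchain = prompt | llm  # LangChain Expression Language composition\nprint(chain.invoke({\"question\": \"What is the Mistral wind?\"}))\n```\n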
\nStreaming LLM output, chat completions, embeddings models, and more are all supported on the Baseten platform and coming soon to our LangChain integration. Contact us at [support@baseten.co](mailto:support@baseten.co) with any questions about using Baseten with LangChain.\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\breebs.md", "filetype": ".md", "content": "# BREEBS (Open Knowledge)\n\n[BREEBS](https://www.breebs.com/) is an open collaborative knowledge platform. \nAnybody can create a Breeb, a knowledge capsule based on PDFs stored on a Google Drive folder.\nA Breeb can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources.\nBehind the scenes, Breebs implements several Retrieval Augmented Generation (RAG) models to seamlessly provide useful context at each iteration.\n\n## List of available Breebs\n\nTo get the full list of Breebs, including their key (`breeb_key`) and description, see \nhttps://breebs.promptbreeders.com/web/listbreebs. \nDozens of Breebs have already been created by the community and are freely available for use. They cover a wide range of expertise, from organic chemistry to mythology, as well as tips on seduction and decentralized finance.\n\n## Creating a new Breeb\n\nTo generate a new Breeb, simply compile PDF files in a publicly shared Google Drive folder and initiate the creation process on the [BREEBS website](https://www.breebs.com/) by clicking the \"Create Breeb\" button. You can currently include up to 120 files, with a total character limit of 15 million.\n\n## Retriever\n\n```python\nfrom langchain.retrievers import BreebsRetriever\n```\n
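\nA minimal, hedged usage sketch (the `breeb_key` below is one of the community Breebs and may change):\n\n```python\nfrom langchain.retrievers import BreebsRetriever\n\n# Point the retriever at a specific Breeb by its key.\nretriever = BreebsRetriever(breeb_key=\"Parivoyage\")\n\n# Fetch context documents for a query.\ndocs = retriever.get_relevant_documents(\"What are fun things to do in Paris?\")\nprint(docs[0].page_content)\n```\n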
\n## Example\n\n[See usage example (Retrieval & ConversationalRetrievalChain)](https://python.langchain.com/docs/integrations/retrievers/breebs)\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\databricks.md", "filetype": ".md", "content": "Databricks\n==========\n\nThe [Databricks](https://www.databricks.com/) Lakehouse Platform unifies data, analytics, and AI on one platform.\n\nDatabricks embraces the LangChain ecosystem in various ways:\n\n1. Databricks connector for the SQLDatabase Chain: SQLDatabase.from_databricks() provides an easy way to query your data on Databricks through LangChain\n2. Databricks MLflow integrates with LangChain: Tracking and serving LangChain applications with fewer steps\n3. Databricks as an LLM provider: Deploy your fine-tuned LLMs on Databricks via serving endpoints or cluster driver proxy apps, and query them as langchain.llms.Databricks\n4. Databricks Dolly: Databricks open-sourced Dolly which allows for commercial use, and can be accessed through the Hugging Face Hub\n\nDatabricks connector for the SQLDatabase Chain\n----------------------------------------------\n\nYou can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.\n
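\nA hedged sketch of the connector (assuming it runs inside a Databricks notebook, where the host and credentials are picked up automatically; `samples.nyctaxi` is Databricks demo data):\n\n```python\nfrom langchain_community.utilities import SQLDatabase\n\n# Wrap a Databricks catalog/schema so chains can query it via SQL.\ndb = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")\nprint(db.get_usable_table_names())\n```\n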
\nDatabricks MLflow integrates with LangChain\n-------------------------------------------\n\nMLflow is an open-source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/integrations/providers/mlflow_tracking) for details about MLflow's integration with LangChain.\n\nDatabricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See the [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.\n\nDatabricks MLflow makes it more convenient to develop LangChain applications on Databricks. For MLflow tracking, you don't need to set the tracking uri. For MLflow Model Serving, you can save LangChain Chains in the MLflow langchain flavor, and then register and serve the Chain with a few clicks on Databricks, with credentials securely managed by MLflow Model Serving.\n\nDatabricks External Models\n--------------------------\n\n[Databricks External Models](https://docs.databricks.com/generative-ai/external-models/index.html) is a service that is designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. The following example creates an endpoint that serves OpenAI's GPT-4 model and generates a chat response from it:\n\n```python\nfrom langchain_community.chat_models import ChatDatabricks\nfrom langchain_core.messages import HumanMessage\nfrom mlflow.deployments import get_deploy_client\n\nclient = get_deploy_client(\"databricks\")\nname = \"chat\"\nclient.create_endpoint(\n    name=name,\n    config={\n        \"served_entities\": [\n            {\n                \"name\": \"test\",\n                \"external_model\": {\n                    \"name\": \"gpt-4\",\n                    \"provider\": \"openai\",\n                    \"task\": \"llm/v1/chat\",\n                    \"openai_config\": {\n                        \"openai_api_key\": \"{{secrets/<scope>/<openai_api_key>}}\",\n                    },\n                },\n            }\n        ],\n    },\n)\nchat = ChatDatabricks(endpoint=name, temperature=0.1)\nprint(chat([HumanMessage(content=\"hello\")]))\n# -> content='Hello! How can I assist you today?'\n```\n\nDatabricks Foundation Model APIs\n--------------------------------\n\n[Databricks Foundation Model APIs](https://docs.databricks.com/machine-learning/foundation-models/index.html) allow you to access and query state-of-the-art open source models from dedicated serving endpoints. With Foundation Model APIs, developers can quickly and easily build applications that leverage a high-quality generative AI model without maintaining their own model deployment. The following example uses the `databricks-bge-large-en` endpoint to generate embeddings from text:\n\n```python\nfrom langchain_community.embeddings import DatabricksEmbeddings\n\nembeddings = DatabricksEmbeddings(endpoint=\"databricks-bge-large-en\")\nprint(embeddings.embed_query(\"hello\")[:3])\n# -> [0.051055908203125, 0.007221221923828125, 0.003879547119140625]\n```\n\nDatabricks as an LLM provider\n-----------------------------\n\nThe notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks#wrapping-a-serving-endpoint-custom-model) demonstrates how to serve a custom model that has been registered by MLflow as a Databricks endpoint.\nIt supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.\n\nDatabricks Vector Search\n------------------------\n\nDatabricks Vector Search is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors. See the notebook [Databricks Vector Search](/docs/integrations/vectorstores/databricks_vector_search) for instructions to use it with LangChain.\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\fireworks.md", "filetype": ".md", "content": "# Fireworks\n\nThis page covers how to use [Fireworks](https://fireworks.ai/) models within\nLangchain.\n\n## Installation and setup\n\n- Install the Fireworks integration package.\n\n  ```\n  pip install langchain-fireworks\n  ```\n\n- Get a Fireworks API key by signing up at [fireworks.ai](https://fireworks.ai).\n- Authenticate by setting the FIREWORKS_API_KEY environment variable.\n\n## Authentication\n\nThere are two ways to authenticate using your Fireworks API key:\n\n1. Setting the `FIREWORKS_API_KEY` environment variable.\n\n    ```python\n    import os\n\n    os.environ[\"FIREWORKS_API_KEY\"] = \"<KEY>\"\n    ```\n\n2. Setting the `fireworks_api_key` field in the Fireworks LLM module.\n\n    ```python\n    llm = Fireworks(fireworks_api_key=\"<KEY>\")\n    ```\n\n## Using the Fireworks LLM module\n\nFireworks integrates with Langchain through the LLM module. In this example, we\nwill work with the mixtral-8x7b-instruct model.\n\n```python\nfrom langchain_fireworks import Fireworks\n\nllm = Fireworks(\n    fireworks_api_key=\"<KEY>\",\n    model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n    max_tokens=256,\n)\nllm(\"Name 3 sports.\")\n```\n\nFor a more detailed walkthrough, see [here](/docs/integrations/llms/Fireworks).\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\marqo.md", "filetype": ".md", "content": "# Marqo\n\nThis page covers how to use the Marqo ecosystem within LangChain.\n\n### **What is Marqo?**\n\nMarqo is a tensor search engine that uses embeddings stored in in-memory HNSW indexes to achieve cutting edge search speeds. Marqo can scale to hundred-million document indexes with horizontal index sharding and allows for async and non-blocking data upload and search. Marqo uses the latest machine learning models from PyTorch, Huggingface, OpenAI and more. You can start with a pre-configured model or bring your own. The built-in ONNX support and conversion allows for faster inference and higher throughput on both CPU and GPU.\n\nBecause Marqo includes its own inference, your documents can have a mix of text and images, and you can bring Marqo indexes with data from your other systems into the LangChain ecosystem without having to worry about your embeddings being compatible.\n\nDeployment of Marqo is flexible: you can get started yourself with our docker image, or [contact us about our managed cloud offering!](https://www.marqo.ai/pricing)\n\nTo run Marqo locally with our docker image, [see our getting started.](https://docs.marqo.ai/latest/)\n\n## Installation and Setup\n\n- Install the Python SDK with `pip install marqo`\n\n## Wrappers\n\n### VectorStore\n\nThere exists a wrapper around Marqo indexes, allowing you to use them within the vectorstore framework. Marqo lets you select from a range of models for generating embeddings and exposes some preprocessing configurations.\n\nThe Marqo vectorstore can also work with existing multimodal indexes where your documents have a mix of images and text; for more information refer to [our documentation](https://docs.marqo.ai/latest/#multi-modal-and-cross-modal-search). Note that instantiating the Marqo vectorstore with an existing multimodal index will disable the ability to add any new documents to it via the LangChain vectorstore `add_texts` method.\n\nTo import this vectorstore:\n```python\nfrom langchain_community.vectorstores import Marqo\n```\n
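\nA hedged usage sketch (assuming a Marqo container running locally on the default port; the index name is illustrative):\n\n```python\nimport marqo\nfrom langchain_community.vectorstores import Marqo\n\n# Connect to a locally running Marqo instance and create an index.\nclient = marqo.Client(url=\"http://localhost:8882\")\nclient.create_index(\"langchain-demo\")\n\n# Wrap the index; Marqo computes embeddings server-side.\ndocsearch = Marqo(client, index_name=\"langchain-demo\")\ndocsearch.add_texts([\"Marqo is a tensor search engine.\"])\nprint(docsearch.similarity_search(\"What is Marqo?\", k=1)[0].page_content)\n```\n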
\nFor a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo).\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\predibase.md", "filetype": ".md", "content": "# Predibase\n\nLearn how to use LangChain with models on Predibase.\n\n## Setup\n\n- Create a [Predibase](https://predibase.com/) account and [API key](https://docs.predibase.com/sdk-guide/intro).\n- Install the Predibase Python client with `pip install predibase`\n- Use your API key to authenticate\n\n### LLM\n\nPredibase integrates with LangChain by implementing the LLM module. You can see a short example below, or a full notebook under LLM > Integrations > Predibase.\n\n```python\nimport os\n\nos.environ[\"PREDIBASE_API_TOKEN\"] = \"{PREDIBASE_API_TOKEN}\"\n\nfrom langchain_community.llms import Predibase\n\nmodel = Predibase(model=\"vicuna-13b\", predibase_api_key=os.environ.get(\"PREDIBASE_API_TOKEN\"))\n\nresponse = model(\"Can you recommend me a nice dry wine?\")\nprint(response)\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\pubmed.md", "filetype": ".md", "content": "# PubMed\n\n>[PubMed\u00ae](https://pubmed.ncbi.nlm.nih.gov/) by `The National Center for Biotechnology Information, National Library of Medicine` \n> comprises more than 35 million citations for biomedical literature from `MEDLINE`, life science journals, and online books. \n> Citations may include links to full text content from `PubMed Central` and publisher web sites.\n\n## Setup\n\nYou need to install a Python package.\n\n```bash\npip install xmltodict\n```\n\n### Retriever\n\nSee a [usage example](/docs/integrations/retrievers/pubmed).\n\n```python\nfrom langchain.retrievers import PubMedRetriever\n```\n
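\nA quick, hedged example of the retriever in use (no API key required; the query is illustrative):\n\n```python\nfrom langchain.retrievers import PubMedRetriever\n\nretriever = PubMedRetriever()\n\n# Returns Documents built from matching PubMed citations.\ndocs = retriever.get_relevant_documents(\"covid vaccine efficacy\")\nprint(docs[0].metadata)\n```\n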
\n### Document Loader\n\nSee a [usage example](/docs/integrations/document_loaders/pubmed).\n\n```python\nfrom langchain_community.document_loaders import PubMedLoader\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\shaleprotocol.md", "filetype": ".md", "content": "# Shale Protocol\n\n[Shale Protocol](https://shaleprotocol.com) provides production-ready inference APIs for open LLMs. It's a Plug & Play API, as it's hosted on a highly scalable GPU cloud infrastructure.\n\nOur free tier supports up to 1K daily requests per key, as we want to eliminate the barrier for anyone to start building genAI apps with LLMs.\n\nWith Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.\n\nThis page covers how the Shale-Serve API can be incorporated with LangChain.\n\nAs of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases.\n\n## How to\n\n### 1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the \"Shale Bot\" on our Discord. No credit card is required, and there are no free trials; it's a forever-free tier with a limit of 1K requests per day per API key.\n\n### 2. Use https://shale.live/v1 as an OpenAI API drop-in replacement\n\nFor example:\n\n```python\nimport os\n\nfrom langchain_openai import OpenAI\nfrom langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n\nos.environ[\"OPENAI_API_BASE\"] = \"https://shale.live/v1\"\nos.environ[\"OPENAI_API_KEY\"] = \"ENTER YOUR API KEY\"\n\nllm = OpenAI()\n\ntemplate = \"\"\"Question: {question}\n\nAnswer: Let's think step by step.\"\"\"\n\nprompt = PromptTemplate.from_template(template)\n\nllm_chain = LLMChain(prompt=prompt, llm=llm)\n\nquestion = \"What NFL team won the Super Bowl in the year Justin Bieber was born?\"\n\nllm_chain.run(question)\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\vearch.md", "filetype": ".md", "content": "# Vearch\n\n[Vearch](https://github.com/vearch/vearch) is a scalable distributed system for efficient similarity search of deep learning vectors.\n\n## Installation and Setup\n\nThe Vearch Python SDK enables you to use Vearch locally. It can be installed easily with `pip install vearch`.\n\n## Vectorstore\n\nVearch can also be used as a vectorstore. Most details are in [this notebook](/docs/integrations/vectorstores/vearch).\n\n```python\nfrom langchain_community.vectorstores import Vearch\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\integrations\\providers\\portkey\\index.md", "filetype": ".md", "content": "# Portkey\n\n>[Portkey](https://docs.portkey.ai/overview/introduction) is a platform designed to streamline the deployment \n> and management of Generative AI applications. \n> It provides comprehensive features for monitoring, managing models,\n> and improving the performance of your AI applications.\n\n## LLMOps for Langchain\n\nPortkey brings production readiness to Langchain. With Portkey, you can \n- [x] view detailed **metrics & logs** for all requests, \n- [x] enable **semantic cache** to reduce latency & costs, \n- [x] implement automatic **retries & fallbacks** for failed requests, \n- [x] add **custom tags** to requests for better tracking and analysis, and [more](https://docs.portkey.ai).\n\n### Using Portkey with Langchain\n\nUsing Portkey is as simple as choosing which Portkey features you want, enabling them via `headers=Portkey.Config`, and passing the headers in your LLM calls.\n\nTo start, get your Portkey API key by [signing up here](https://app.portkey.ai/login). (Click the profile icon on the top left, then click on \"Copy API Key\".)\n\nFor OpenAI, a simple integration with the logging feature would look like this:\n\n```python\nfrom langchain_openai import OpenAI\nfrom langchain_community.utilities import Portkey\n\n# Add the Portkey API Key from your account\nheaders = Portkey.Config(\n    api_key=\"<PORTKEY_API_KEY>\"\n)\n\nllm = OpenAI(temperature=0.9, headers=headers)\nllm.predict(\"What would be a good company name for a company that makes colorful socks?\")\n```\n\nYour logs will be captured on your [Portkey dashboard](https://app.portkey.ai).\n\nA common Portkey X Langchain use case is to **trace a chain or an agent** and view all the LLM calls originating from that request. 
\n\n### **Tracing Chains & Agents**\n\n```python\nfrom langchain.agents import AgentType, initialize_agent, load_tools\nfrom langchain_openai import OpenAI\nfrom langchain_community.utilities import Portkey\n\n# Add the Portkey API Key from your account\nheaders = Portkey.Config(\n    api_key=\"<PORTKEY_API_KEY>\",\n    trace_id=\"fef659\",\n)\n\nllm = OpenAI(temperature=0, headers=headers)\ntools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\nagent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n\n# Let's test it out!\nagent.run(\"What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?\")\n```\n\n**You can see the requests' logs along with the trace id on the Portkey dashboard.**\n\n## Advanced Features\n\n1. **Logging:** Log all your LLM requests automatically by sending them through Portkey. Each request log contains `timestamp`, `model name`, `total cost`, `request time`, `request json`, `response json`, and additional Portkey features.\n2. **Tracing:** A trace id can be passed along with each request and is visible on the logs on the Portkey dashboard. You can also set a **distinct trace id** for each request. You can [append user feedback](https://docs.portkey.ai/key-features/feedback-api) to a trace id as well.\n3. **Caching:** Respond to previously served customers' queries from cache instead of sending them again to OpenAI. Match exact strings OR semantically similar strings. Cache can save costs and reduce latencies by 20x.\n4. **Retries:** Automatically reprocess any unsuccessful API requests **up to 5** times. Uses an **exponential backoff** strategy, which spaces out retry attempts to prevent network overload.\n5. **Tagging:** Track and audit each user interaction in high detail with predefined tags.\n\n| Feature | Config Key | Value (Type) | Required/Optional |\n| -- | -- | -- | -- |\n| API Key | `api_key` | API Key (`string`) | \u2705 Required |\n| [Tracing Requests](https://docs.portkey.ai/key-features/request-tracing) | `trace_id` | Custom `string` | \u2754 Optional |\n| [Automatic Retries](https://docs.portkey.ai/key-features/automatic-retries) | `retry_count` | `integer` [1,2,3,4,5] | \u2754 Optional |\n| [Enabling Cache](https://docs.portkey.ai/key-features/request-caching) | `cache` | `simple` OR `semantic` | \u2754 Optional |\n| Cache Force Refresh | `cache_force_refresh` | `True` | \u2754 Optional |\n| Set Cache Expiry | `cache_age` | `integer` (in seconds) | \u2754 Optional |\n| [Add User](https://docs.portkey.ai/key-features/custom-metadata) | `user` | `string` | \u2754 Optional |\n| [Add Organisation](https://docs.portkey.ai/key-features/custom-metadata) | `organisation` | `string` | \u2754 Optional |\n| [Add Environment](https://docs.portkey.ai/key-features/custom-metadata) | `environment` | `string` | \u2754 Optional |\n| [Add Prompt (version/id/string)](https://docs.portkey.ai/key-features/custom-metadata) | `prompt` | `string` | \u2754 Optional |\n\n## **Enabling all Portkey Features:**\n\n```py\nheaders = Portkey.Config(\n    # Mandatory\n    api_key=\"<PORTKEY_API_KEY>\",\n\n    # Cache Options\n    cache=\"semantic\",\n    cache_force_refresh=\"True\",\n    cache_age=1729,\n\n    # Advanced\n    retry_count=5,\n    trace_id=\"langchain_agent\",\n\n    # Metadata\n    environment=\"production\",\n    user=\"john\",\n    organisation=\"acme\",\n    prompt=\"Frost\",\n)\n```\n\nFor detailed information on each feature and how to use it, [please refer to the Portkey docs](https://docs.portkey.ai). 
If you have any questions or need further assistance, [reach out to us on Twitter.](https://twitter.com/portkeyai)." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\langsmith\\index.md", + "filetype": ".md", + "content": "---\nsidebar_class_name: hidden\n---\n\n# LangSmith\n\n[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you\nmove from prototype to production.\n\nCheck out the [interactive walkthrough](/docs/langsmith/walkthrough) to get started.\n\nFor more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).\n\nFor tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow,\ncheck out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:\n\n- Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).\n- Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).\n- How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).\n- How to fine-tune an LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).\n- How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb))\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\modules\\paul_graham_essay.txt", + "filetype": ".txt", + "content": "What I Worked On\n\nFebruary 2021\n\nBefore college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.\n\nThe first programs I tried writing were on the IBM 1401 that our school district used for what was then called \"data processing.\" This was in 9th grade, so I was 13 or 14. The school district's 1401 happened to be in the basement of our junior high school, and my friend Rich Draves and I got permission to use it. It was like a mini Bond villain's lair down there, with all these alien-looking machines \u2014 CPU, disk drives, printer, card reader \u2014 sitting up on a raised floor under bright fluorescent lights.\n\nThe language we used was an early version of Fortran. You had to type programs on punch cards, then stack them in the card reader and press a button to load the program into memory and run it. The result would ordinarily be to print something on the spectacularly loud printer.\n\nI was puzzled by the 1401. I couldn't figure out what to do with it. And in retrospect there's not much I could have done with it. The only form of input to programs was data stored on punched cards, and I didn't have any data stored on punched cards. 
The only other option was to do things that didn't rely on any input, like calculate approximations of pi, but I didn't know enough math to do anything interesting of that type. So I'm not surprised I can't remember any programs I wrote, because they can't have done much. My clearest memory is of the moment I learned it was possible for programs not to terminate, when one of mine didn't. On a machine without time-sharing, this was a social as well as a technical error, as the data center manager's expression made clear.\n\nWith microcomputers, everything changed. Now you could have a computer sitting right in front of you, on a desk, that could respond to your keystrokes as it was running instead of just churning through a stack of punch cards and then stopping. [1]\n\nThe first of my friends to get a microcomputer built it himself. It was sold as a kit by Heathkit. I remember vividly how impressed and envious I felt watching him sitting in front of it, typing programs right into the computer.\n\nComputers were expensive in those days and it took me years of nagging before I convinced my father to buy one, a TRS-80, in about 1980. The gold standard then was the Apple II, but a TRS-80 was good enough. This was when I really started programming. I wrote simple games, a program to predict how high my model rockets would fly, and a word processor that my father used to write at least one book. There was only room in memory for about 2 pages of text, so he'd write 2 pages at a time and then print them out, but it was a lot better than a typewriter.\n\nThough I liked programming, I didn't plan to study it in college. In college I was going to study philosophy, which sounded much more powerful. It seemed, to my naive high school self, to be the study of the ultimate truths, compared to which the things studied in other fields would be mere domain knowledge. What I discovered when I got to college was that the other fields took up so much of the space of ideas that there wasn't much left for these supposed ultimate truths. All that seemed left for philosophy were edge cases that people in other fields felt could safely be ignored.\n\nI couldn't have put this into words when I was 18. All I knew at the time was that I kept taking philosophy courses and they kept being boring. So I decided to switch to AI.\n\nAI was in the air in the mid 1980s, but there were two things especially that made me want to work on it: a novel by Heinlein called The Moon is a Harsh Mistress, which featured an intelligent computer called Mike, and a PBS documentary that showed Terry Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh Mistress, so I don't know how well it has aged, but when I read it I was drawn entirely into its world. It seemed only a matter of time before we'd have Mike, and when I saw Winograd using SHRDLU, it seemed like that time would be a few years at most. All you had to do was teach SHRDLU more words.\n\nThere weren't any classes in AI at Cornell then, not even graduate classes, so I started trying to teach myself. Which meant learning Lisp, since in those days Lisp was regarded as the language of AI. The commonly used programming languages then were pretty primitive, and programmers' ideas correspondingly so. The default language at Cornell was a Pascal-like language called PL/I, and the situation was similar elsewhere. Learning Lisp expanded my concept of a program so fast that it was years before I started to have a sense of where the new limits were. 
This was more like it; this was what I had expected college to do. It wasn't happening in a class, like it was supposed to, but that was ok. For the next couple years I was on a roll. I knew what I was going to do.\n\nFor my undergraduate thesis, I reverse-engineered SHRDLU. My God did I love working on that program. It was a pleasing bit of code, but what made it even more exciting was my belief \u2014 hard to imagine now, but not unique in 1985 \u2014 that it was already climbing the lower slopes of intelligence.\n\nI had gotten into a program at Cornell that didn't make you choose a major. You could take whatever classes you liked, and choose whatever you liked to put on your degree. I of course chose \"Artificial Intelligence.\" When I got the actual physical diploma, I was dismayed to find that the quotes had been included, which made them read as scare-quotes. At the time this bothered me, but now it seems amusingly accurate, for reasons I was about to discover.\n\nI applied to 3 grad schools: MIT and Yale, which were renowned for AI at the time, and Harvard, which I'd visited because Rich Draves went there, and was also home to Bill Woods, who'd invented the type of parser I used in my SHRDLU clone. Only Harvard accepted me, so that was where I went.\n\nI don't remember the moment it happened, or if there even was a specific moment, but during the first year of grad school I realized that AI, as practiced at the time, was a hoax. By which I mean the sort of AI in which a program that's told \"the dog is sitting on the chair\" translates this into some formal representation and adds it to the list of things it knows.\n\nWhat these programs really showed was that there's a subset of natural language that's a formal language. But a very proper subset. It was clear that there was an unbridgeable gap between what they could do and actually understanding natural language. It was not, in fact, simply a matter of teaching SHRDLU more words. That whole way of doing AI, with explicit data structures representing concepts, was not going to work. Its brokenness did, as so often happens, generate a lot of opportunities to write papers about various band-aids that could be applied to it, but it was never going to get us Mike.\n\nSo I looked around to see what I could salvage from the wreckage of my plans, and there was Lisp. I knew from experience that Lisp was interesting for its own sake and not just for its association with AI, even though that was the main reason people cared about it at the time. So I decided to focus on Lisp. In fact, I decided to write a book about Lisp hacking. It's scary to think how little I knew about Lisp hacking when I started writing that book. But there's nothing like writing a book about something to help you learn it. The book, On Lisp, wasn't published till 1993, but I wrote much of it in grad school.\n\nComputer Science is an uneasy alliance between two halves, theory and systems. The theory people prove things, and the systems people build things. I wanted to build things. I had plenty of respect for theory \u2014 indeed, a sneaking suspicion that it was the more admirable of the two halves \u2014 but building things seemed so much more exciting.\n\nThe problem with systems work, though, was that it didn't last. Any program you wrote today, no matter how good, would be obsolete in a couple decades at best. People might mention your software in footnotes, but no one would actually use it. And indeed, it would seem very feeble work. 
Only people with a sense of the history of the field would even realize that, in its time, it had been good.\n\nThere were some surplus Xerox Dandelions floating around the computer lab at one point. Anyone who wanted one to play around with could have one. I was briefly tempted, but they were so slow by present standards; what was the point? No one else wanted one either, so off they went. That was what happened to systems work.\n\nI wanted not just to build things, but to build things that would last.\n\nIn this dissatisfied state I went in 1988 to visit Rich Draves at CMU, where he was in grad school. One day I went to visit the Carnegie Institute, where I'd spent a lot of time as a kid. While looking at a painting there I realized something that might seem obvious, but was a big surprise to me. There, right on the wall, was something you could make that would last. Paintings didn't become obsolete. Some of the best ones were hundreds of years old.\n\nAnd moreover this was something you could make a living doing. Not as easily as you could by writing software, of course, but I thought if you were really industrious and lived really cheaply, it had to be possible to make enough to survive. And as an artist you could be truly independent. You wouldn't have a boss, or even need to get research funding.\n\nI had always liked looking at paintings. Could I make them? I had no idea. I'd never imagined it was even possible. I knew intellectually that people made art \u2014 that it didn't just appear spontaneously \u2014 but it was as if the people who made it were a different species. They either lived long ago or were mysterious geniuses doing strange things in profiles in Life magazine. The idea of actually being able to make art, to put that verb before that noun, seemed almost miraculous.\n\nThat fall I started taking art classes at Harvard. Grad students could take classes in any department, and my advisor, Tom Cheatham, was very easy going. If he even knew about the strange classes I was taking, he never said anything.\n\nSo now I was in a PhD program in computer science, yet planning to be an artist, yet also genuinely in love with Lisp hacking and working away at On Lisp. In other words, like many a grad student, I was working energetically on multiple projects that were not my thesis.\n\nI didn't see a way out of this situation. I didn't want to drop out of grad school, but how else was I going to get out? I remember when my friend Robert Morris got kicked out of Cornell for writing the internet worm of 1988, I was envious that he'd found such a spectacular way to get out of grad school.\n\nThen one day in April 1990 a crack appeared in the wall. I ran into professor Cheatham and he asked if I was far enough along to graduate that June. I didn't have a word of my dissertation written, but in what must have been the quickest bit of thinking in my life, I decided to take a shot at writing one in the 5 weeks or so that remained before the deadline, reusing parts of On Lisp where I could, and I was able to respond, with no perceptible delay \"Yes, I think so. I'll give you something to read in a few days.\"\n\nI picked applications of continuations as the topic. In retrospect I should have written about macros and embedded languages. There's a whole world there that's barely been explored. But all I wanted was to get out of grad school, and my rapidly written dissertation sufficed, just barely.\n\nMeanwhile I was applying to art schools. 
I applied to two: RISD in the US, and the Accademia di Belli Arti in Florence, which, because it was the oldest art school, I imagined would be good. RISD accepted me, and I never heard back from the Accademia, so off to Providence I went.\n\nI'd applied for the BFA program at RISD, which meant in effect that I had to go to college again. This was not as strange as it sounds, because I was only 25, and art schools are full of people of different ages. RISD counted me as a transfer sophomore and said I had to do the foundation that summer. The foundation means the classes that everyone has to take in fundamental subjects like drawing, color, and design.\n\nToward the end of the summer I got a big surprise: a letter from the Accademia, which had been delayed because they'd sent it to Cambridge England instead of Cambridge Massachusetts, inviting me to take the entrance exam in Florence that fall. This was now only weeks away. My nice landlady let me leave my stuff in her attic. I had some money saved from consulting work I'd done in grad school; there was probably enough to last a year if I lived cheaply. Now all I had to do was learn Italian.\n\nOnly stranieri (foreigners) had to take this entrance exam. In retrospect it may well have been a way of excluding them, because there were so many stranieri attracted by the idea of studying art in Florence that the Italian students would otherwise have been outnumbered. I was in decent shape at painting and drawing from the RISD foundation that summer, but I still don't know how I managed to pass the written exam. I remember that I answered the essay question by writing about Cezanne, and that I cranked up the intellectual level as high as I could to make the most of my limited vocabulary. [2]\n\nI'm only up to age 25 and already there are such conspicuous patterns. Here I was, yet again about to attend some august institution in the hopes of learning about some prestigious subject, and yet again about to be disappointed. The students and faculty in the painting department at the Accademia were the nicest people you could imagine, but they had long since arrived at an arrangement whereby the students wouldn't require the faculty to teach anything, and in return the faculty wouldn't require the students to learn anything. And at the same time all involved would adhere outwardly to the conventions of a 19th century atelier. We actually had one of those little stoves, fed with kindling, that you see in 19th century studio paintings, and a nude model sitting as close to it as possible without getting burned. Except hardly anyone else painted her besides me. The rest of the students spent their time chatting or occasionally trying to imitate things they'd seen in American art magazines.\n\nOur model turned out to live just down the street from me. She made a living from a combination of modelling and making fakes for a local antique dealer. She'd copy an obscure old painting out of a book, and then he'd take the copy and maltreat it to make it look old. [3]\n\nWhile I was a student at the Accademia I started painting still lives in my bedroom at night. These paintings were tiny, because the room was, and because I painted them on leftover scraps of canvas, which was all I could afford at the time. Painting still lives is different from painting people, because the subject, as its name suggests, can't move. People can't sit for more than about 15 minutes at a time, and when they do they don't sit very still. So the traditional m.o. 
for painting people is to know how to paint a generic person, which you then modify to match the specific person you're painting. Whereas a still life you can, if you want, copy pixel by pixel from what you're seeing. You don't want to stop there, of course, or you get merely photographic accuracy, and what makes a still life interesting is that it's been through a head. You want to emphasize the visual cues that tell you, for example, that the reason the color changes suddenly at a certain point is that it's the edge of an object. By subtly emphasizing such things you can make paintings that are more realistic than photographs not just in some metaphorical sense, but in the strict information-theoretic sense. [4]\n\nI liked painting still lives because I was curious about what I was seeing. In everyday life, we aren't consciously aware of much we're seeing. Most visual perception is handled by low-level processes that merely tell your brain \"that's a water droplet\" without telling you details like where the lightest and darkest points are, or \"that's a bush\" without telling you the shape and position of every leaf. This is a feature of brains, not a bug. In everyday life it would be distracting to notice every leaf on every bush. But when you have to paint something, you have to look more closely, and when you do there's a lot to see. You can still be noticing new things after days of trying to paint something people usually take for granted, just as you can after days of trying to write an essay about something people usually take for granted.\n\nThis is not the only way to paint. I'm not 100% sure it's even a good way to paint. But it seemed a good enough bet to be worth trying.\n\nOur teacher, professor Ulivi, was a nice guy. He could see I worked hard, and gave me a good grade, which he wrote down in a sort of passport each student had. But the Accademia wasn't teaching me anything except Italian, and my money was running out, so at the end of the first year I went back to the US.\n\nI wanted to go back to RISD, but I was now broke and RISD was very expensive, so I decided to get a job for a year and then return to RISD the next fall. I got one at a company called Interleaf, which made software for creating documents. You mean like Microsoft Word? Exactly. That was how I learned that low end software tends to eat high end software. But Interleaf still had a few years to live yet. [5]\n\nInterleaf had done something pretty bold. Inspired by Emacs, they'd added a scripting language, and even made the scripting language a dialect of Lisp. Now they wanted a Lisp hacker to write things in it. This was the closest thing I've had to a normal job, and I hereby apologize to my boss and coworkers, because I was a bad employee. Their Lisp was the thinnest icing on a giant C cake, and since I didn't know C and didn't want to learn it, I never understood most of the software. Plus I was terribly irresponsible. This was back when a programming job meant showing up every day during certain working hours. That seemed unnatural to me, and on this point the rest of the world is coming around to my way of thinking, but at the time it caused a lot of friction. Toward the end of the year I spent much of my time surreptitiously working on On Lisp, which I had by this time gotten a contract to publish.\n\nThe good part was that I got paid huge amounts of money, especially by art student standards. In Florence, after paying my part of the rent, my budget for everything else had been $7 a day. 
Now I was getting paid more than 4 times that every hour, even when I was just sitting in a meeting. By living cheaply I not only managed to save enough to go back to RISD, but also paid off my college loans.\n\nI learned some useful things at Interleaf, though they were mostly about what not to do. I learned that it's better for technology companies to be run by product people than sales people (though sales is a real skill and people who are good at it are really good at it), that it leads to bugs when code is edited by too many people, that cheap office space is no bargain if it's depressing, that planned meetings are inferior to corridor conversations, that big, bureaucratic customers are a dangerous source of money, and that there's not much overlap between conventional office hours and the optimal time for hacking, or conventional offices and the optimal place for it.\n\nBut the most important thing I learned, and which I used in both Viaweb and Y Combinator, is that the low end eats the high end: that it's good to be the \"entry level\" option, even though that will be less prestigious, because if you're not, someone else will be, and will squash you against the ceiling. Which in turn means that prestige is a danger sign.\n\nWhen I left to go back to RISD the next fall, I arranged to do freelance work for the group that did projects for customers, and this was how I survived for the next several years. When I came back to visit for a project later on, someone told me about a new thing called HTML, which was, as he described it, a derivative of SGML. Markup language enthusiasts were an occupational hazard at Interleaf and I ignored him, but this HTML thing later became a big part of my life.\n\nIn the fall of 1992 I moved back to Providence to continue at RISD. The foundation had merely been intro stuff, and the Accademia had been a (very civilized) joke. Now I was going to see what real art school was like. But alas it was more like the Accademia than not. Better organized, certainly, and a lot more expensive, but it was now becoming clear that art school did not bear the same relationship to art that medical school bore to medicine. At least not the painting department. The textile department, which my next door neighbor belonged to, seemed to be pretty rigorous. No doubt illustration and architecture were too. But painting was post-rigorous. Painting students were supposed to express themselves, which to the more worldly ones meant to try to cook up some sort of distinctive signature style.\n\nA signature style is the visual equivalent of what in show business is known as a \"schtick\": something that immediately identifies the work as yours and no one else's. For example, when you see a painting that looks like a certain kind of cartoon, you know it's by Roy Lichtenstein. So if you see a big painting of this type hanging in the apartment of a hedge fund manager, you know he paid millions of dollars for it. That's not always why artists have a signature style, but it's usually why buyers pay a lot for such work. [6]\n\nThere were plenty of earnest students too: kids who \"could draw\" in high school, and now had come to what was supposed to be the best art school in the country, to learn to draw even better. They tended to be confused and demoralized by what they found at RISD, but they kept going, because painting was what they did. 
I was not one of the kids who could draw in high school, but at RISD I was definitely closer to their tribe than the tribe of signature style seekers.\n\nI learned a lot in the color class I took at RISD, but otherwise I was basically teaching myself to paint, and I could do that for free. So in 1993 I dropped out. I hung around Providence for a bit, and then my college friend Nancy Parmet did me a big favor. A rent-controlled apartment in a building her mother owned in New York was becoming vacant. Did I want it? It wasn't much more than my current place, and New York was supposed to be where the artists were. So yes, I wanted it! [7]\n\nAsterix comics begin by zooming in on a tiny corner of Roman Gaul that turns out not to be controlled by the Romans. You can do something similar on a map of New York City: if you zoom in on the Upper East Side, there's a tiny corner that's not rich, or at least wasn't in 1993. It's called Yorkville, and that was my new home. Now I was a New York artist \u2014 in the strictly technical sense of making paintings and living in New York.\n\nI was nervous about money, because I could sense that Interleaf was on the way down. Freelance Lisp hacking work was very rare, and I didn't want to have to program in another language, which in those days would have meant C++ if I was lucky. So with my unerring nose for financial opportunity, I decided to write another book on Lisp. This would be a popular book, the sort of book that could be used as a textbook. I imagined myself living frugally off the royalties and spending all my time painting. (The painting on the cover of this book, ANSI Common Lisp, is one that I painted around this time.)\n\nThe best thing about New York for me was the presence of Idelle and Julian Weber. Idelle Weber was a painter, one of the early photorealists, and I'd taken her painting class at Harvard. I've never known a teacher more beloved by her students. Large numbers of former students kept in touch with her, including me. After I moved to New York I became her de facto studio assistant.\n\nShe liked to paint on big, square canvases, 4 to 5 feet on a side. One day in late 1994 as I was stretching one of these monsters there was something on the radio about a famous fund manager. He wasn't that much older than me, and was super rich. The thought suddenly occurred to me: why don't I become rich? Then I'll be able to work on whatever I want.\n\nMeanwhile I'd been hearing more and more about this new thing called the World Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where he was now in grad school at Harvard. It seemed to me that the web would be a big deal. I'd seen what graphical user interfaces had done for the popularity of microcomputers. It seemed like the web would do the same for the internet.\n\nIf I wanted to get rich, here was the next train leaving the station. I was right about that part. What I got wrong was the idea. I decided we should start a company to put art galleries online. I can't honestly say, after reading so many Y Combinator applications, that this was the worst startup idea ever, but it was up there. Art galleries didn't want to be online, and still don't, not the fancy ones. That's not how they sell. I wrote some software to generate web sites for galleries, and Robert wrote some to resize images and set up an http server to serve the pages. Then we tried to sign up galleries. To call this a difficult sale would be an understatement. It was difficult to give away. 
A few galleries let us make sites for them for free, but none paid us.\n\nThen some online stores started to appear, and I realized that except for the order buttons they were identical to the sites we'd been generating for galleries. This impressive-sounding thing called an \"internet storefront\" was something we already knew how to build.\n\nSo in the summer of 1995, after I submitted the camera-ready copy of ANSI Common Lisp to the publishers, we started trying to write software to build online stores. At first this was going to be normal desktop software, which in those days meant Windows software. That was an alarming prospect, because neither of us knew how to write Windows software or wanted to learn. We lived in the Unix world. But we decided we'd at least try writing a prototype store builder on Unix. Robert wrote a shopping cart, and I wrote a new site generator for stores \u2014 in Lisp, of course.\n\nWe were working out of Robert's apartment in Cambridge. His roommate was away for big chunks of time, during which I got to sleep in his room. For some reason there was no bed frame or sheets, just a mattress on the floor. One morning as I was lying on this mattress I had an idea that made me sit up like a capital L. What if we ran the software on the server, and let users control it by clicking on links? Then we'd never have to write anything to run on users' computers. We could generate the sites on the same server we'd serve them from. Users wouldn't need anything more than a browser.\n\nThis kind of software, known as a web app, is common now, but at the time it wasn't clear that it was even possible. To find out, we decided to try making a version of our store builder that you could control through the browser. A couple days later, on August 12, we had one that worked. The UI was horrible, but it proved you could build a whole store through the browser, without any client software or typing anything into the command line on the server.\n\nNow we felt like we were really onto something. I had visions of a whole new generation of software working this way. You wouldn't need versions, or ports, or any of that crap. At Interleaf there had been a whole group called Release Engineering that seemed to be at least as big as the group that actually wrote the software. Now you could just update the software right on the server.\n\nWe started a new company we called Viaweb, after the fact that our software worked via the web, and we got $10,000 in seed funding from Idelle's husband Julian. In return for that and doing the initial legal work and giving us business advice, we gave him 10% of the company. Ten years later this deal became the model for Y Combinator's. We knew founders needed something like this, because we'd needed it ourselves.\n\nAt this stage I had a negative net worth, because the thousand dollars or so I had in the bank was more than counterbalanced by what I owed the government in taxes. (Had I diligently set aside the proper proportion of the money I'd made consulting for Interleaf? No, I had not.) So although Robert had his graduate student stipend, I needed that seed funding to live on.\n\nWe originally hoped to launch in September, but we got more ambitious about the software as we worked on it. 
Eventually we managed to build a WYSIWYG site builder, in the sense that as you were creating pages, they looked exactly like the static ones that would be generated later, except that instead of leading to static pages, the links all referred to closures stored in a hash table on the server.\n\nIt helped to have studied art, because the main goal of an online store builder is to make users look legit, and the key to looking legit is high production values. If you get page layouts and fonts and colors right, you can make a guy running a store out of his bedroom look more legit than a big company.\n\n(If you're curious why my site looks so old-fashioned, it's because it's still made with this software. It may look clunky today, but in 1996 it was the last word in slick.)\n\nIn September, Robert rebelled. \"We've been working on this for a month,\" he said, \"and it's still not done.\" This is funny in retrospect, because he would still be working on it almost 3 years later. But I decided it might be prudent to recruit more programmers, and I asked Robert who else in grad school with him was really good. He recommended Trevor Blackwell, which surprised me at first, because at that point I knew Trevor mainly for his plan to reduce everything in his life to a stack of notecards, which he carried around with him. But Rtm was right, as usual. Trevor turned out to be a frighteningly effective hacker.\n\nIt was a lot of fun working with Robert and Trevor. They're the two most independent-minded people I know, and in completely different ways. If you could see inside Rtm's brain it would look like a colonial New England church, and if you could see inside Trevor's it would look like the worst excesses of Austrian Rococo.\n\nWe opened for business, with 6 stores, in January 1996. It was just as well we waited a few months, because although we worried we were late, we were actually almost fatally early. There was a lot of talk in the press then about ecommerce, but not many people actually wanted online stores. [8]\n\nThere were three main parts to the software: the editor, which people used to build sites and which I wrote, the shopping cart, which Robert wrote, and the manager, which kept track of orders and statistics, and which Trevor wrote. In its time, the editor was one of the best general-purpose site builders. I kept the code tight and didn't have to integrate with any other software except Robert's and Trevor's, so it was quite fun to work on. If all I'd had to do was work on this software, the next 3 years would have been the easiest of my life. Unfortunately I had to do a lot more, all of it stuff I was worse at than programming, and the next 3 years were instead the most stressful.\n\nThere were a lot of startups making ecommerce software in the second half of the 90s. We were determined to be the Microsoft Word, not the Interleaf. Which meant being easy to use and inexpensive. It was lucky for us that we were poor, because that caused us to make Viaweb even more inexpensive than we realized. We charged $100 a month for a small store and $300 a month for a big one. This low price was a big attraction, and a constant thorn in the sides of competitors, but it wasn't because of some clever insight that we set the price low. We had no idea what businesses paid for things. $300 a month seemed like a lot of money to us.\n\nWe did a lot of things right by accident like that. 
For example, we did what's now called \"doing things that don't scale,\" although at the time we would have described it as \"being so lame that we're driven to the most desperate measures to get users.\" The most common of which was building stores for them. This seemed particularly humiliating, since the whole raison d'etre of our software was that people could use it to make their own stores. But anything to get users.\n\nWe learned a lot more about retail than we wanted to know. For example, that if you could only have a small image of a man's shirt (and all images were small then by present standards), it was better to have a closeup of the collar than a picture of the whole shirt. The reason I remember learning this was that it meant I had to rescan about 30 images of men's shirts. My first set of scans were so beautiful too.\n\nThough this felt wrong, it was exactly the right thing to be doing. Building stores for users taught us about retail, and about how it felt to use our software. I was initially both mystified and repelled by \"business\" and thought we needed a \"business person\" to be in charge of it, but once we started to get users, I was converted, in much the same way I was converted to fatherhood once I had kids. Whatever users wanted, I was all theirs. Maybe one day we'd have so many users that I couldn't scan their images for them, but in the meantime there was nothing more important to do.\n\nAnother thing I didn't get at the time is that growth rate is the ultimate test of a startup. Our growth rate was fine. We had about 70 stores at the end of 1996 and about 500 at the end of 1997. I mistakenly thought the thing that mattered was the absolute number of users. And that is the thing that matters in the sense that that's how much money you're making, and if you're not making enough, you might go out of business. But in the long term the growth rate takes care of the absolute number. If we'd been a startup I was advising at Y Combinator, I would have said: Stop being so stressed out, because you're doing fine. You're growing 7x a year. Just don't hire too many more people and you'll soon be profitable, and then you'll control your own destiny.\n\nAlas I hired lots more people, partly because our investors wanted me to, and partly because that's what startups did during the Internet Bubble. A company with just a handful of employees would have seemed amateurish. So we didn't reach breakeven until about when Yahoo bought us in the summer of 1998. Which in turn meant we were at the mercy of investors for the entire life of the company. And since both we and our investors were noobs at startups, the result was a mess even by startup standards.\n\nIt was a huge relief when Yahoo bought us. In principle our Viaweb stock was valuable. It was a share in a business that was profitable and growing rapidly. But it didn't feel very valuable to me; I had no idea how to value a business, but I was all too keenly aware of the near-death experiences we seemed to have every few months. Nor had I changed my grad student lifestyle significantly since we started. So when Yahoo bought us it felt like going from rags to riches. Since we were going to California, I bought a car, a yellow 1998 VW GTI. I remember thinking that its leather seats alone were by far the most luxurious thing I owned.\n\nThe next year, from the summer of 1998 to the summer of 1999, must have been the least productive of my life.
I didn't realize it at the time, but I was worn out from the effort and stress of running Viaweb. For a while after I got to California I tried to continue my usual m.o. of programming till 3 in the morning, but fatigue combined with Yahoo's prematurely aged culture and grim cube farm in Santa Clara gradually dragged me down. After a few months it felt disconcertingly like working at Interleaf.\n\nYahoo had given us a lot of options when they bought us. At the time I thought Yahoo was so overvalued that they'd never be worth anything, but to my astonishment the stock went up 5x in the next year. I hung on till the first chunk of options vested, then in the summer of 1999 I left. It had been so long since I'd painted anything that I'd half forgotten why I was doing this. My brain had been entirely full of software and men's shirts for 4 years. But I had done this to get rich so I could paint, I reminded myself, and now I was rich, so I should go paint.\n\nWhen I said I was leaving, my boss at Yahoo had a long conversation with me about my plans. I told him all about the kinds of pictures I wanted to paint. At the time I was touched that he took such an interest in me. Now I realize it was because he thought I was lying. My options at that point were worth about $2 million a month. If I was leaving that kind of money on the table, it could only be to go and start some new startup, and if I did, I might take people with me. This was the height of the Internet Bubble, and Yahoo was ground zero of it. My boss was at that moment a billionaire. Leaving then to start a new startup must have seemed to him an insanely, and yet also plausibly, ambitious plan.\n\nBut I really was quitting to paint, and I started immediately. There was no time to lose. I'd already burned 4 years getting rich. Now when I talk to founders who are leaving after selling their companies, my advice is always the same: take a vacation. That's what I should have done, just gone off somewhere and done nothing for a month or two, but the idea never occurred to me.\n\nSo I tried to paint, but I just didn't seem to have any energy or ambition. Part of the problem was that I didn't know many people in California. I'd compounded this problem by buying a house up in the Santa Cruz Mountains, with a beautiful view but miles from anywhere. I stuck it out for a few more months, then in desperation I went back to New York, where unless you understand about rent control you'll be surprised to hear I still had my apartment, sealed up like a tomb of my old life. Idelle was in New York at least, and there were other people trying to paint there, even though I didn't know any of them.\n\nWhen I got back to New York I resumed my old life, except now I was rich. It was as weird as it sounds. I resumed all my old patterns, except now there were doors where there hadn't been. Now when I was tired of walking, all I had to do was raise my hand, and (unless it was raining) a taxi would stop to pick me up. Now when I walked past charming little restaurants I could go in and order lunch. It was exciting for a while. Painting started to go better. I experimented with a new kind of still life where I'd paint one painting in the old way, then photograph it and print it, blown up, on canvas, and then use that as the underpainting for a second still life, painted from the same objects (which hopefully hadn't rotted yet).\n\nMeanwhile I looked for an apartment to buy. Now I could actually choose what neighborhood to live in. 
Where, I asked myself and various real estate agents, is the Cambridge of New York? Aided by occasional visits to actual Cambridge, I gradually realized there wasn't one. Huh.\n\nAround this time, in the spring of 2000, I had an idea. It was clear from our experience with Viaweb that web apps were the future. Why not build a web app for making web apps? Why not let people edit code on our server through the browser, and then host the resulting applications for them? [9] You could run all sorts of services on the servers that these applications could use just by making an API call: making and receiving phone calls, manipulating images, taking credit card payments, etc.\n\nI got so excited about this idea that I couldn't think about anything else. It seemed obvious that this was the future. I didn't particularly want to start another company, but it was clear that this idea would have to be embodied as one, so I decided to move to Cambridge and start it. I hoped to lure Robert into working on it with me, but there I ran into a hitch. Robert was now a postdoc at MIT, and though he'd made a lot of money the last time I'd lured him into working on one of my schemes, it had also been a huge time sink. So while he agreed that it sounded like a plausible idea, he firmly refused to work on it.\n\nHmph. Well, I'd do it myself then. I recruited Dan Giffin, who had worked for Viaweb, and two undergrads who wanted summer jobs, and we got to work trying to build what it's now clear is about twenty companies and several open-source projects worth of software. The language for defining applications would of course be a dialect of Lisp. But I wasn't so naive as to assume I could spring an overt Lisp on a general audience; we'd hide the parentheses, like Dylan did.\n\nBy then there was a name for the kind of company Viaweb was, an \"application service provider,\" or ASP. This name didn't last long before it was replaced by \"software as a service,\" but it was current for long enough that I named this new company after it: it was going to be called Aspra.\n\nI started working on the application builder, Dan worked on network infrastructure, and the two undergrads worked on the first two services (images and phone calls). But about halfway through the summer I realized I really didn't want to run a company \u2014 especially not a big one, which it was looking like this would have to be. I'd only started Viaweb because I needed the money. Now that I didn't need money anymore, why was I doing this? If this vision had to be realized as a company, then screw the vision. I'd build a subset that could be done as an open-source project.\n\nMuch to my surprise, the time I spent working on this stuff was not wasted after all. After we started Y Combinator, I would often encounter startups working on parts of this new architecture, and it was very useful to have spent so much time thinking about it and even trying to write some of it.\n\nThe subset I would build as an open-source project was the new Lisp, whose parentheses I now wouldn't even have to hide. A lot of Lisp hackers dream of building a new Lisp, partly because one of the distinctive features of the language is that it has dialects, and partly, I think, because we have in our minds a Platonic form of Lisp that all existing dialects fall short of. I certainly did. So at the end of the summer Dan and I switched to working on this new dialect of Lisp, which I called Arc, in a house I bought in Cambridge.\n\nThe following spring, lightning struck. 
I was invited to give a talk at a Lisp conference, so I gave one about how we'd used Lisp at Viaweb. Afterward I put a postscript file of this talk online, on paulgraham.com, which I'd created years before using Viaweb but had never used for anything. In one day it got 30,000 page views. What on earth had happened? The referring urls showed that someone had posted it on Slashdot. [10]\n\nWow, I thought, there's an audience. If I write something and put it on the web, anyone can read it. That may seem obvious now, but it was surprising then. In the print era there was a narrow channel to readers, guarded by fierce monsters known as editors. The only way to get an audience for anything you wrote was to get it published as a book, or in a newspaper or magazine. Now anyone could publish anything.\n\nThis had been possible in principle since 1993, but not many people had realized it yet. I had been intimately involved with building the infrastructure of the web for most of that time, and a writer as well, and it had taken me 8 years to realize it. Even then it took me several years to understand the implications. It meant there would be a whole new generation of essays. [11]\n\nIn the print era, the channel for publishing essays had been vanishingly small. Except for a few officially anointed thinkers who went to the right parties in New York, the only people allowed to publish essays were specialists writing about their specialties. There were so many essays that had never been written, because there had been no way to publish them. Now they could be, and I was going to write them. [12]\n\nI've worked on several different things, but to the extent there was a turning point where I figured out what to work on, it was when I started publishing essays online. From then on I knew that whatever else I did, I'd always write essays too.\n\nI knew that online essays would be a marginal medium at first. Socially they'd seem more like rants posted by nutjobs on their GeoCities sites than the genteel and beautifully typeset compositions published in The New Yorker. But by this point I knew enough to find that encouraging instead of discouraging.\n\nOne of the most conspicuous patterns I've noticed in my life is how well it has worked, for me at least, to work on things that weren't prestigious. Still life has always been the least prestigious form of painting. Viaweb and Y Combinator both seemed lame when we started them. I still get the glassy eye from strangers when they ask what I'm writing, and I explain that it's an essay I'm going to publish on my web site. Even Lisp, though prestigious intellectually in something like the way Latin is, also seems about as hip.\n\nIt's not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it's a sign both that there's something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren't prestigious doesn't guarantee you're on the right track, it at least guarantees you're not on the most common type of wrong one.\n\nOver the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. 
I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.\n\nOne night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.\n\nJessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.\n\nWhen the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital. They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.\n\nOne of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won't waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they'd be able to avoid the worst of the mistakes we'd made.\n\nSo I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they'd be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I'd only known), I blurted out \"But not me!\" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I'd been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn't done one angel investment.\n\nMeanwhile I had been scheming with Robert and Trevor about projects we could work on together. I missed working with them, and it seemed like there had to be something we could collaborate on.\n\nAs Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We'd start our own investment firm and actually implement the ideas we'd been talking about. I'd fund it, and Jessica could quit her job and work for it, and we'd get Robert and Trevor as partners too. [13]\n\nOnce again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. 
So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.\n\nThere are multiple components to Y Combinator, and we didn't figure them all out at once. The part we got first was to be an angel firm. In those days, those two words didn't go together. There were VC firms, which were organized companies with people whose job it was to make investments, but they only did big, million dollar investments. And there were angels, who did smaller investments, but these were individuals who were usually focused on other things and made investments on the side. And neither of them helped founders enough in the beginning. We knew how helpless founders were in some respects, because we remembered how helpless we'd been. For example, one thing Julian had done for us that seemed to us like magic was to get us set up as a company. We were fine writing fairly difficult software, but actually getting incorporated, with bylaws and stock and all that stuff, how on earth did you do that? Our plan was not only to make seed investments, but to do for startups everything Julian had done for us.\n\nYC was not organized as a fund. It was cheap enough to run that we funded it with our own money. That went right by 99% of readers, but professional investors are thinking \"Wow, that means they got all the returns.\" But once again, this was not due to any particular insight on our part. We didn't know how VC firms were organized. It never occurred to us to try to raise a fund, and if it had, we wouldn't have known where to start. [14]\n\nThe most distinctive thing about YC is the batch model: to fund a bunch of startups all at once, twice a year, and then to spend three months focusing intensively on trying to help them. That part we discovered by accident, not merely implicitly but explicitly due to our ignorance about investing. We needed to get experience as investors. What better way, we thought, than to fund a whole bunch of startups at once? We knew undergrads got temporary jobs at tech companies during the summer. Why not organize a summer program where they'd start startups instead? We wouldn't feel guilty for being in a sense fake investors, because they would in a similar sense be fake founders. So while we probably wouldn't make much money out of it, we'd at least get to practice being investors on them, and they for their part would probably have a more interesting summer than they would working at Microsoft.\n\nWe'd use the building I owned in Cambridge as our headquarters. We'd all have dinner there once a week \u2014 on tuesdays, since I was already cooking for the thursday diners on thursdays \u2014 and after dinner we'd bring in experts on startups to give talks.\n\nWe knew undergrads were deciding then about summer jobs, so in a matter of days we cooked up something we called the Summer Founders Program, and I posted an announcement on my site, inviting undergrads to apply. I had never imagined that writing essays would be a way to get \"deal flow,\" as investors call it, but it turned out to be the perfect source. [15] We got 225 applications for the Summer Founders Program, and we were surprised to find that a lot of them were from people who'd already graduated, or were about to that spring. Already this SFP thing was starting to feel more serious than we'd intended.\n\nWe invited about 20 of the 225 groups to interview in person, and from those we picked 8 to fund. They were an impressive group. 
That first batch included reddit, Justin Kan and Emmett Shear, who went on to found Twitch, Aaron Swartz, who had already helped write the RSS spec and would a few years later become a martyr for open access, and Sam Altman, who would later become the second president of YC. I don't think it was entirely luck that the first batch was so good. You had to be pretty bold to sign up for a weird thing like the Summer Founders Program instead of a summer job at a legit place like Microsoft or Goldman Sachs.\n\nThe deal for startups was based on a combination of the deal we did with Julian ($10k for 10%) and what Robert said MIT grad students got for the summer ($6k). We invested $6k per founder, which in the typical two-founder case was $12k, in return for 6%. That had to be fair, because it was twice as good as the deal we ourselves had taken. Plus that first summer, which was really hot, Jessica brought the founders free air conditioners. [16]\n\nFairly quickly I realized that we had stumbled upon the way to scale startup funding. Funding startups in batches was more convenient for us, because it meant we could do things for a lot of startups at once, but being part of a batch was better for the startups too. It solved one of the biggest problems faced by founders: the isolation. Now you not only had colleagues, but colleagues who understood the problems you were facing and could tell you how they were solving them.\n\nAs YC grew, we started to notice other advantages of scale. The alumni became a tight community, dedicated to helping one another, and especially the current batch, whose shoes they remembered being in. We also noticed that the startups were becoming one another's customers. We used to refer jokingly to the \"YC GDP,\" but as YC grows this becomes less and less of a joke. Now lots of startups get their initial set of customers almost entirely from among their batchmates.\n\nI had not originally intended YC to be a full-time job. I was going to do three things: hack, write essays, and work on YC. As YC grew, and I grew more excited about it, it started to take up a lot more than a third of my attention. But for the first few years I was still able to work on other things.\n\nIn the summer of 2006, Robert and I started working on a new version of Arc. This one was reasonably fast, because it was compiled into Scheme. To test this new Arc, I wrote Hacker News in it. It was originally meant to be a news aggregator for startup founders and was called Startup News, but after a few months I got tired of reading about nothing but startups. Plus it wasn't startup founders we wanted to reach. It was future startup founders. So I changed the name to Hacker News and the topic to whatever engaged one's intellectual curiosity.\n\nHN was no doubt good for YC, but it was also by far the biggest source of stress for me. If all I'd had to do was select and help founders, life would have been so easy. And that implies that HN was a mistake. Surely the biggest source of stress in one's work should at least be something close to the core of the work. Whereas I was like someone who was in pain while running a marathon not from the exertion of running, but because I had a blister from an ill-fitting shoe. When I was dealing with some urgent problem during YC, there was about a 60% chance it had to do with HN, and a 40% chance it had to do with everything else combined. [17]\n\nAs well as HN, I wrote all of YC's internal software in Arc.
But while I continued to work a good deal in Arc, I gradually stopped working on Arc, partly because I didn't have time to, and partly because it was a lot less attractive to mess around with the language now that we had all this infrastructure depending on it. So now my three projects were reduced to two: writing essays and working on YC.\n\nYC was different from other kinds of work I've done. Instead of deciding for myself what to work on, the problems came to me. Every 6 months there was a new batch of startups, and their problems, whatever they were, became our problems. It was very engaging work, because their problems were quite varied, and the good founders were very effective. If you were trying to learn the most you could about startups in the shortest possible time, you couldn't have picked a better way to do it.\n\nThere were parts of the job I didn't like. Disputes between cofounders, figuring out when people were lying to us, fighting with people who maltreated the startups, and so on. But I worked hard even at the parts I didn't like. I was haunted by something Kevin Hale once said about companies: \"No one works harder than the boss.\" He meant it both descriptively and prescriptively, and it was the second part that scared me. I wanted YC to be good, so if how hard I worked set the upper bound on how hard everyone else worked, I'd better work very hard.\n\nOne day in 2010, when he was visiting California for interviews, Robert Morris did something astonishing: he offered me unsolicited advice. I can only remember him doing that once before. One day at Viaweb, when I was bent over double from a kidney stone, he suggested that it would be a good idea for him to take me to the hospital. That was what it took for Rtm to offer unsolicited advice. So I remember his exact words very clearly. \"You know,\" he said, \"you should make sure Y Combinator isn't the last cool thing you do.\"\n\nAt the time I didn't understand what he meant, but gradually it dawned on me that he was saying I should quit. This seemed strange advice, because YC was doing great. But if there was one thing rarer than Rtm offering advice, it was Rtm being wrong. So this set me thinking. It was true that on my current trajectory, YC would be the last thing I did, because it was only taking up more of my attention. It had already eaten Arc, and was in the process of eating essays too. Either YC was my life's work or I'd have to leave eventually. And it wasn't, so I would.\n\nIn the summer of 2012 my mother had a stroke, and the cause turned out to be a blood clot caused by colon cancer. The stroke destroyed her balance, and she was put in a nursing home, but she really wanted to get out of it and back to her house, and my sister and I were determined to help her do it. I used to fly up to Oregon to visit her regularly, and I had a lot of time to think on those flights. On one of them I realized I was ready to hand YC over to someone else.\n\nI asked Jessica if she wanted to be president, but she didn't, so we decided we'd try to recruit Sam Altman. We talked to Robert and Trevor and we agreed to make it a complete changing of the guard. Up till that point YC had been controlled by the original LLC we four had started. But we wanted YC to last for a long time, and to do that it couldn't be controlled by the founders. So if Sam said yes, we'd let him reorganize YC. 
Robert and I would retire, and Jessica and Trevor would become ordinary partners.\n\nWhen we asked Sam if he wanted to be president of YC, initially he said no. He wanted to start a startup to make nuclear reactors. But I kept at it, and in October 2013 he finally agreed. We decided he'd take over starting with the winter 2014 batch. For the rest of 2013 I left running YC more and more to Sam, partly so he could learn the job, and partly because I was focused on my mother, whose cancer had returned.\n\nShe died on January 15, 2014. We knew this was coming, but it was still hard when it did.\n\nI kept working on YC till March, to help get that batch of startups through Demo Day, then I checked out pretty completely. (I still talk to alumni and to new startups working on things I'm interested in, but that only takes a few hours a week.)\n\nWhat should I do next? Rtm's advice hadn't included anything about that. I wanted to do something completely different, so I decided I'd paint. I wanted to see how good I could get if I really focused on it. So the day after I stopped working on YC, I started painting. I was rusty and it took a while to get back into shape, but it was at least completely engaging. [18]\n\nI spent most of the rest of 2014 painting. I'd never been able to work so uninterruptedly before, and I got to be better than I had been. Not good enough, but better. Then in November, right in the middle of a painting, I ran out of steam. Up till that point I'd always been curious to see how the painting I was working on would turn out, but suddenly finishing this one seemed like a chore. So I stopped working on it and cleaned my brushes and haven't painted since. So far anyway.\n\nI realize that sounds rather wimpy. But attention is a zero sum game. If you can choose what to work on, and you choose a project that's not the best one (or at least a good one) for you, then it's getting in the way of another project that is. And at 50 there was some opportunity cost to screwing around.\n\nI started writing essays again, and wrote a bunch of new ones over the next few months. I even wrote a couple that weren't about startups. Then in March 2015 I started working on Lisp again.\n\nThe distinctive thing about Lisp is that its core is a language defined by writing an interpreter in itself. It wasn't originally intended as a programming language in the ordinary sense. It was meant to be a formal model of computation, an alternative to the Turing machine. If you want to write an interpreter for a language in itself, what's the minimum set of predefined operators you need? The Lisp that John McCarthy invented, or more accurately discovered, is an answer to that question. [19]\n\nMcCarthy didn't realize this Lisp could even be used to program computers till his grad student Steve Russell suggested it. Russell translated McCarthy's interpreter into IBM 704 machine language, and from that point Lisp started also to be a programming language in the ordinary sense. But its origins as a model of computation gave it a power and elegance that other languages couldn't match. It was this that attracted me in college, though I didn't understand why at the time.\n\nMcCarthy's 1960 Lisp did nothing more than interpret Lisp expressions. It was missing a lot of things you'd want in a programming language. So these had to be added, and when they were, they weren't defined using McCarthy's original axiomatic approach. That wouldn't have been feasible at the time. 
McCarthy tested his interpreter by hand-simulating the execution of programs. But it was already getting close to the limit of interpreters you could test that way \u2014 indeed, there was a bug in it that McCarthy had overlooked. To test a more complicated interpreter, you'd have had to run it, and computers then weren't powerful enough.\n\nNow they are, though. Now you could continue using McCarthy's axiomatic approach till you'd defined a complete programming language. And as long as every change you made to McCarthy's Lisp was a discoveredness-preserving transformation, you could, in principle, end up with a complete language that had this quality. Harder to do than to talk about, of course, but if it was possible in principle, why not try? So I decided to take a shot at it. It took 4 years, from March 26, 2015 to October 12, 2019. It was fortunate that I had a precisely defined goal, or it would have been hard to keep at it for so long.\n\nI wrote this new Lisp, called Bel, in itself in Arc. That may sound like a contradiction, but it's an indication of the sort of trickery I had to engage in to make this work. By means of an egregious collection of hacks I managed to make something close enough to an interpreter written in itself that could actually run. Not fast, but fast enough to test.\n\nI had to ban myself from writing essays during most of this time, or I'd never have finished. In late 2015 I spent 3 months writing essays, and when I went back to working on Bel I could barely understand the code. Not so much because it was badly written as because the problem is so convoluted. When you're working on an interpreter written in itself, it's hard to keep track of what's happening at what level, and errors can be practically encrypted by the time you get them.\n\nSo I said no more essays till Bel was done. But I told few people about Bel while I was working on it. So for years it must have seemed that I was doing nothing, when in fact I was working harder than I'd ever worked on anything. Occasionally after wrestling for hours with some gruesome bug I'd check Twitter or HN and see someone asking \"Does Paul Graham still code?\"\n\nWorking on Bel was hard but satisfying. I worked on it so intensively that at any given time I had a decent chunk of the code in my head and could write more there. I remember taking the boys to the coast on a sunny day in 2015 and figuring out how to deal with some problem involving continuations while I watched them play in the tide pools. It felt like I was doing life right. I remember that because I was slightly dismayed at how novel it felt. The good news is that I had more moments like this over the next few years.\n\nIn the summer of 2016 we moved to England. We wanted our kids to see what it was like living in another country, and since I was a British citizen by birth, that seemed the obvious choice. We only meant to stay for a year, but we liked it so much that we still live there. So most of Bel was written in England.\n\nIn the fall of 2019, Bel was finally finished. Like McCarthy's original Lisp, it's a spec rather than an implementation, although like McCarthy's Lisp it's a spec expressed as code.\n\nNow that I could write essays again, I wrote a bunch about topics I'd had stacked up. I kept writing essays through 2020, but I also started to think about other things I could work on. How should I choose what to do? Well, how had I chosen what to work on in the past? 
I wrote an essay for myself to answer that question, and I was surprised how long and messy the answer turned out to be. If this surprised me, who'd lived it, then I thought perhaps it would be interesting to other people, and encouraging to those with similarly messy lives. So I wrote a more detailed version for others to read, and this is the last sentence of it.\n\n\n\n\n\n\n\n\n\nNotes\n\n[1] My experience skipped a step in the evolution of computers: time-sharing machines with interactive OSes. I went straight from batch processing to microcomputers, which made microcomputers seem all the more exciting.\n\n[2] Italian words for abstract concepts can nearly always be predicted from their English cognates (except for occasional traps like polluzione). It's the everyday words that differ. So if you string together a lot of abstract concepts with a few simple verbs, you can make a little Italian go a long way.\n\n[3] I lived at Piazza San Felice 4, so my walk to the Accademia went straight down the spine of old Florence: past the Pitti, across the bridge, past Orsanmichele, between the Duomo and the Baptistery, and then up Via Ricasoli to Piazza San Marco. I saw Florence at street level in every possible condition, from empty dark winter evenings to sweltering summer days when the streets were packed with tourists.\n\n[4] You can of course paint people like still lives if you want to, and they're willing. That sort of portrait is arguably the apex of still life painting, though the long sitting does tend to produce pained expressions in the sitters.\n\n[5] Interleaf was one of many companies that had smart people and built impressive technology, and yet got crushed by Moore's Law. In the 1990s the exponential growth in the power of commodity (i.e. Intel) processors rolled up high-end, special-purpose hardware and software companies like a bulldozer.\n\n[6] The signature style seekers at RISD weren't specifically mercenary. In the art world, money and coolness are tightly coupled. Anything expensive comes to be seen as cool, and anything seen as cool will soon become equally expensive.\n\n[7] Technically the apartment wasn't rent-controlled but rent-stabilized, but this is a refinement only New Yorkers would know or care about. The point is that it was really cheap, less than half market price.\n\n[8] Most software you can launch as soon as it's done. But when the software is an online store builder and you're hosting the stores, if you don't have any users yet, that fact will be painfully obvious. So before we could launch publicly we had to launch privately, in the sense of recruiting an initial set of users and making sure they had decent-looking stores.\n\n[9] We'd had a code editor in Viaweb for users to define their own page styles. They didn't know it, but they were editing Lisp expressions underneath. But this wasn't an app editor, because the code ran when the merchants' sites were generated, not when shoppers visited them.\n\n[10] This was the first instance of what is now a familiar experience, and so was what happened next, when I read the comments and found they were full of angry people. How could I claim that Lisp was better than other languages? Weren't they all Turing complete? People who see the responses to essays I write sometimes tell me how sorry they feel for me, but I'm not exaggerating when I reply that it has always been like this, since the very beginning. It comes with the territory. 
An essay must tell readers things they don't already know, and some people dislike being told such things.\n\n[11] People put plenty of stuff on the internet in the 90s of course, but putting something online is not the same as publishing it online. Publishing online means you treat the online version as the (or at least a) primary version.\n\n[12] There is a general lesson here that our experience with Y Combinator also teaches: Customs continue to constrain you long after the restrictions that caused them have disappeared. Customary VC practice had once, like the customs about publishing essays, been based on real constraints. Startups had once been much more expensive to start, and proportionally rare. Now they could be cheap and common, but the VCs' customs still reflected the old world, just as customs about writing essays still reflected the constraints of the print era.\n\nWhich in turn implies that people who are independent-minded (i.e. less influenced by custom) will have an advantage in fields affected by rapid change (where customs are more likely to be obsolete).\n\nHere's an interesting point, though: you can't always predict which fields will be affected by rapid change. Obviously software and venture capital will be, but who would have predicted that essay writing would be?\n\n[13] Y Combinator was not the original name. At first we were called Cambridge Seed. But we didn't want a regional name, in case someone copied us in Silicon Valley, so we renamed ourselves after one of the coolest tricks in the lambda calculus, the Y combinator.\n\nI picked orange as our color partly because it's the warmest, and partly because no VC used it. In 2005 all the VCs used staid colors like maroon, navy blue, and forest green, because they were trying to appeal to LPs, not founders. The YC logo itself is an inside joke: the Viaweb logo had been a white V on a red circle, so I made the YC logo a white Y on an orange square.\n\n[14] YC did become a fund for a couple years starting in 2009, because it was getting so big I could no longer afford to fund it personally. But after Heroku got bought we had enough money to go back to being self-funded.\n\n[15] I've never liked the term \"deal flow,\" because it implies that the number of new startups at any given time is fixed. This is not only false, but it's the purpose of YC to falsify it, by causing startups to be founded that would not otherwise have existed.\n\n[16] She reports that they were all different shapes and sizes, because there was a run on air conditioners and she had to get whatever she could, but that they were all heavier than she could carry now.\n\n[17] Another problem with HN was a bizarre edge case that occurs when you both write essays and run a forum. When you run a forum, you're assumed to see if not every conversation, at least every conversation involving you. And when you write essays, people post highly imaginative misinterpretations of them on forums. Individually these two phenomena are tedious but bearable, but the combination is disastrous. You actually have to respond to the misinterpretations, because the assumption that you're present in the conversation means that not responding to any sufficiently upvoted misinterpretation reads as a tacit admission that it's correct. But that in turn encourages more; anyone who wants to pick a fight with you senses that now is their chance.\n\n[18] The worst thing about leaving YC was not working with Jessica anymore. 
We'd been working on YC almost the whole time we'd known each other, and we'd neither tried nor wanted to separate it from our personal lives, so leaving was like pulling up a deeply rooted tree.\n\n[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.\n\nBut if so there's no reason to suppose that this is the limit of the language that might be known to them. Presumably aliens need numbers and errors and I/O too. So it seems likely there exists at least one path out of McCarthy's Lisp along which discoveredness is preserved.\n\n\n\nThanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel Gackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj Taggar for reading drafts of this.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\modules\\state_of_the_union.txt", + "filetype": ".txt", + "content": "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats, Republicans, and Independents. But most importantly as Americans. \n\nWith a duty to one another, to the American people, to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia\u2019s Vladimir Putin sought to shake the foundations of the free world, thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees, teachers turned soldiers defending their homeland. \n\nIn this struggle, as President Zelenskyy said in his speech to the European Parliament, \u201cLight will win over darkness.\u201d The Ukrainian Ambassador to the United States is here tonight. \n\nLet each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n\nPlease rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n\nThroughout our history we\u2019ve learned this lesson: when dictators do not pay a price for their aggression, they cause more chaos. \n\nThey keep moving. \n\nAnd the costs and the threats to America and the world keep rising. \n\nThat\u2019s why the NATO Alliance was created to secure peace and stability in Europe after World War 2. \n\nThe United States is a member along with 29 other nations. \n\nIt matters. American diplomacy matters. American resolve matters. \n\nPutin\u2019s latest attack on Ukraine was premeditated and unprovoked. \n\nHe rejected repeated efforts at diplomacy. \n\nHe thought the West and NATO wouldn\u2019t respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n\nWe prepared extensively and carefully. 
\n\nWe spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin. \n\nI spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n\nWe countered Russia\u2019s lies with truth. \n\nAnd now that he has acted, the free world is holding him accountable. \n\nAlong with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n\nWe are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n\nTogether with our allies \u2013 we are right now enforcing powerful economic sanctions. \n\nWe are cutting off Russia\u2019s largest banks from the international financial system. \n\nPreventing Russia\u2019s central bank from defending the Russian Ruble, making Putin\u2019s $630 Billion \u201cwar fund\u201d worthless. \n\nWe are choking off Russia\u2019s access to technology that will sap its economic strength and weaken its military for years to come. \n\nTonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime: no more. \n\nThe U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n\nWe are joining with our European allies to find and seize your yachts, your luxury apartments, your private jets. We are coming for your ill-begotten gains. \n\nAnd tonight I am announcing that we will join our allies in closing off American air space to all Russian flights \u2013 further isolating Russia \u2013 and adding an additional squeeze \u2013 on their economy. The Ruble has lost 30% of its value. \n\nThe Russian stock market has lost 40% of its value and trading remains suspended. Russia\u2019s economy is reeling and Putin alone is to blame. \n\nTogether with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n\nWe are giving more than $1 Billion in direct assistance to Ukraine. \n\nAnd we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n\nLet me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine. \n\nOur forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies \u2013 in the event that Putin decides to keep moving west. \n\nFor that purpose we\u2019ve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \n\nAs I have made crystal clear, the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power. \n\nAnd we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days, weeks, months, will be hard on them. \n\nPutin has unleashed violence and chaos. But while he may make gains on the battlefield \u2013 he will pay a continuing high price over the long run. \n\nAnd a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. 
\n\nTo all Americans, I will be honest with you, as I\u2019ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I\u2019m taking robust action to make sure the pain of our sanctions is targeted at Russia\u2019s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what\u2019s happening can seem alarming. \n\nBut I want you to know that we are going to be okay. \n\nWhen the history of this era is written Putin\u2019s war on Ukraine will have left Russia weaker and the rest of the world stronger. \n\nWhile it shouldn\u2019t have taken something so terrible for people around the world to see what\u2019s at stake now everyone sees it clearly. \n\nWe see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine. \n\nIn the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \n\nThis is a real test. It\u2019s going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \n\nTo our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \n\nPutin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people. \n\nHe will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n\nWe meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n\nThe pandemic has been punishing. \n\nAnd so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n\nI understand. \n\nI remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n\nThat\u2019s why one of the first things I did as President was fight to pass the American Rescue Plan. \n\nBecause people were hurting. We needed to act, and we did. \n\nFew pieces of legislation have done more in a critical moment in our history to lift us out of crisis. \n\nIt fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \n\nHelped put food on their table, keep a roof over their heads, and cut the cost of health insurance. \n\nAnd as my Dad used to say, it gave people a little breathing room. \n\nAnd unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people\u2014and left no one behind. \n\nAnd it worked. It created jobs. Lots of jobs. \n\nIn fact\u2014our economy created over 6.5 Million new jobs just last year, more jobs created in one year \nthan ever before in the history of America. 
\n\nOur economy grew at a rate of 5.7% last year, the strongest growth in nearly 40 years, the first step in bringing fundamental change to an economy that hasn\u2019t worked for the working people of this nation for too long. \n\nFor the past 40 years we were told that if we gave tax breaks to those at the very top, the benefits would trickle down to everyone else. \n\nBut that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n\nVice President Harris and I ran for office with a new economic vision for America. \n\nInvest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \nand the middle out, not from the top down. \n\nBecause we know that when the middle class grows, the poor have a ladder up and the wealthy do very well. \n\nAmerica used to have the best roads, bridges, and airports on Earth. \n\nNow our infrastructure is ranked 13th in the world. \n\nWe won\u2019t be able to compete for the jobs of the 21st Century if we don\u2019t fix that. \n\nThat\u2019s why it was so important to pass the Bipartisan Infrastructure Law\u2014the most sweeping investment to rebuild America in history. \n\nThis was a bipartisan effort, and I want to thank the members of both parties who worked to make it happen. \n\nWe\u2019re done talking about infrastructure weeks. \n\nWe\u2019re going to have an infrastructure decade. \n\nIt is going to transform America and put us on a path to win the economic competition of the 21st Century that we face with the rest of the world\u2014particularly with China. \n\nAs I\u2019ve told Xi Jinping, it is never a good bet to bet against the American people. \n\nWe\u2019ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n\nAnd we\u2019ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice. \n\nWe\u2019ll build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes\u2014so every child\u2014and every American\u2014has clean water to drink at home and at school, provide affordable high-speed internet for every American\u2014urban, suburban, rural, and tribal communities. \n\n4,000 projects have already been announced. \n\nAnd tonight, I\u2019m announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. \n\nWhen we use taxpayer dollars to rebuild America \u2013 we are going to Buy American: buy American products to support American jobs. \n\nThe federal government spends about $600 Billion a year to keep the country safe and secure. \n\nThere\u2019s been a law on the books for almost a century \nto make sure taxpayers\u2019 dollars support American jobs and businesses. \n\nEvery Administration says they\u2019ll do it, but we are actually doing it. \n\nWe will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n\nBut to compete for the best jobs of the future, we also need to level the playing field with China and other competitors. \n\nThat\u2019s why it is so important to pass the Bipartisan Innovation Act sitting in Congress that will make record investments in emerging technologies and American manufacturing. \n\nLet me give you one example of why it\u2019s so important to pass it. 
\n\nIf you travel 20 miles east of Columbus, Ohio, you\u2019ll find 1,000 empty acres of land. \n\nIt won\u2019t look like much, but if you stop and look closely, you\u2019ll see a \u201cField of dreams,\u201d the ground on which America\u2019s future will be built. \n\nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor \u201cmega site\u201d. \n\nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. \n\nSome of the most sophisticated manufacturing in the world to make computer chips the size of a fingertip that power the world and our everyday lives. \n\nSmartphones. The Internet. Technology we have yet to invent. \n\nBut that\u2019s just the beginning. \n\nIntel\u2019s CEO, Pat Gelsinger, who is here tonight, told me they are ready to increase their investment from \n$20 billion to $100 billion. \n\nThat would be one of the biggest investments in manufacturing in American history. \n\nAnd all they\u2019re waiting for is for you to pass this bill. \n\nSo let\u2019s not wait any longer. Send it to my desk. I\u2019ll sign it. \n\nAnd we will really take off. \n\nAnd Intel is not alone. \n\nThere\u2019s something happening in America. \n\nJust look around and you\u2019ll see an amazing story. \n\nThe rebirth of the pride that comes from stamping products \u201cMade In America.\u201d The revitalization of American manufacturing. \n\nCompanies are choosing to build new factories here, when just a few years ago, they would have built them overseas. \n\nThat\u2019s what is happening. Ford is investing $11 billion to build electric vehicles, creating 11,000 jobs across the country. \n\nGM is making the largest investment in its history\u2014$7 billion to build electric vehicles, creating 4,000 jobs in Michigan. \n\nAll told, we created 369,000 new manufacturing jobs in America just last year. \n\nPowered by people I\u2019ve met like JoJo Burgess, from generations of union steelworkers from Pittsburgh, who\u2019s here with us tonight. \n\nAs Ohio Senator Sherrod Brown says, \u201cIt\u2019s time to bury the label \u201cRust Belt.\u201d \n\nIt\u2019s time. \n\nBut with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n\nInflation is robbing them of the gains they might otherwise feel. \n\nI get it. That\u2019s why my top priority is getting prices under control. \n\nLook, our economy roared back faster than most predicted, but the pandemic meant that businesses had a hard time hiring enough workers to keep up production in their factories. \n\nThe pandemic also disrupted global supply chains. \n\nWhen factories close, it takes longer to make goods and get them from the warehouse to the store, and prices go up. \n\nLook at cars. \n\nLast year, there weren\u2019t enough semiconductors to make all the cars that people wanted to buy. \n\nAnd guess what, prices of automobiles went up. \n\nSo\u2014we have a choice. \n\nOne way to fight inflation is to drive down wages and make Americans poorer. \n\nI have a better plan to fight inflation. \n\nLower your costs, not your wages. \n\nMake more cars and semiconductors in America. \n\nMore infrastructure and innovation in America. \n\nMore goods moving faster and cheaper in America. \n\nMore jobs where you can earn a good living in America. \n\nAnd instead of relying on foreign supply chains, let\u2019s make it in America. 
\n\nEconomists call it \u201cincreasing the productive capacity of our economy.\u201d \n\nI call it building a better America. \n\nMy plan to fight inflation will lower your costs and lower the deficit. \n\n17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And here\u2019s the plan: \n\nFirst \u2013 cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis. \n\nHe and his Dad both have Type 1 diabetes, which means they need insulin every day. Insulin costs about $10 a vial to make. \n\nBut drug companies charge families like Joshua and his Dad up to 30 times more. I spoke with Joshua\u2019s mom. \n\nImagine what it\u2019s like to look at your child who needs insulin and have no idea how you\u2019re going to pay for it. \n\nWhat it does to your dignity, your ability to look your child in the eye, to be the parent you expect to be. \n\nJoshua is here with us tonight. Yesterday was his birthday. Happy birthday, buddy. \n\nFor Joshua, and for the 200,000 other young people with Type 1 diabetes, let\u2019s cap the cost of insulin at $35 a month so everyone can afford it. \n\nDrug companies will still do very well. And while we\u2019re at it let Medicare negotiate lower prices for prescription drugs, like the VA already does. \n\nLook, the American Rescue Plan is helping millions of families on Affordable Care Act plans save $2,400 a year on their health care premiums. Let\u2019s close the coverage gap and make those savings permanent. \n\nSecond \u2013 cut energy costs for families an average of $500 a year by combatting climate change. \n\nLet\u2019s provide investments and tax credits to weatherize your homes and businesses to be energy efficient and you get a tax credit; double America\u2019s clean energy production in solar, wind, and so much more; lower the price of electric vehicles, saving you another $80 a month because you\u2019ll never have to pay at the gas pump again. \n\nThird \u2013 cut the cost of child care. Many families pay up to $14,000 a year for child care per child. \n\nMiddle-class and working families shouldn\u2019t have to pay more than 7% of their income for care of young children. \n\nMy plan will cut the cost in half for most families and help parents, including millions of women, who left the workforce during the pandemic because they couldn\u2019t afford child care, to be able to get back to work. \n\nMy plan doesn\u2019t stop there. It also includes home and long-term care. More affordable housing. And Pre-K for every 3- and 4-year-old. \n\nAll of these will lower costs. \n\nAnd under my plan, nobody earning less than $400,000 a year will pay an additional penny in new taxes. Nobody. \n\nThe one thing all Americans agree on is that the tax system is not fair. We have to fix it. \n\nI\u2019m not looking to punish anyone. But let\u2019s make sure corporations and the wealthiest Americans start paying their fair share. \n\nJust last year, 55 Fortune 500 corporations earned $40 billion in profits and paid zero dollars in federal income tax. \n\nThat\u2019s simply not fair. That\u2019s why I\u2019ve proposed a 15% minimum tax rate for corporations. \n\nWe got more than 130 countries to agree on a global minimum tax rate so companies can\u2019t get out of paying their taxes at home by shipping jobs and factories overseas. 
\n\nThat\u2019s why I\u2019ve proposed closing loopholes so the very wealthy don\u2019t pay a lower tax rate than a teacher or a firefighter. \n\nSo that\u2019s my plan. It will grow the economy and lower costs for families. \n\nSo what are we waiting for? Let\u2019s get this done. And while you\u2019re at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n\nMy plan will not only lower costs to give families a fair shot, it will lower the deficit. \n\nThe previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. \n\nBut in my administration, the watchdogs have been welcomed back. \n\nWe\u2019re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \n\nAnd tonight, I\u2019m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \n\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \n\nLowering your costs also means demanding more competition. \n\nI\u2019m a capitalist, but capitalism without competition isn\u2019t capitalism. \n\nIt\u2019s exploitation\u2014and it drives up prices. \n\nWhen corporations don\u2019t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \n\nWe see it happening with ocean carriers moving goods in and out of America. \n\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits. \n\nTonight, I\u2019m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe\u2019ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet\u2019s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet\u2019s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill\u2014our First Lady who teaches full-time\u2014calls America\u2019s best-kept secret: community colleges. \n\nAnd let\u2019s pass the PRO Act when a majority of workers want to form a union\u2014they shouldn\u2019t be stopped. \n\nWhen we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we haven\u2019t done in a long time: build a better America. \n\nFor more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n\nAnd I know you\u2019re tired, frustrated, and exhausted. \n\nBut I also know this. \n\nBecause of the progress we\u2019ve made, because of your resilience and the tools we have, tonight I can say \nwe are moving forward safely, back to more normal routines. \n\nWe\u2019ve reached a new moment in the fight against COVID-19, with severe cases down to a level not seen since last July. 
\n\nJust a few days ago, the Centers for Disease Control and Prevention\u2014the CDC\u2014issued new mask guidelines. \n\nUnder these new guidelines, most Americans in most of the country can now be mask free. \n\nAnd based on the projections, more of the country will reach that point across the next couple of weeks. \n\nThanks to the progress we have made this past year, COVID-19 need no longer control our lives. \n\nI know some are talking about \u201cliving with COVID-19\u201d. Tonight \u2013 I say that we will never just accept living with COVID-19. \n\nWe will continue to combat the virus as we do other diseases. And because this is a virus that mutates and spreads, we will stay on guard. \n\nHere are four common sense steps as we move forward safely. \n\nFirst, stay protected with vaccines and treatments. We know how incredibly effective vaccines are. If you\u2019re vaccinated and boosted you have the highest degree of protection. \n\nWe will never give up on vaccinating more Americans. Now, I know parents with kids under 5 are eager to see a vaccine authorized for their children. \n\nThe scientists are working hard to get that done and we\u2019ll be ready with plenty of vaccines when they do. \n\nWe\u2019re also ready with anti-viral treatments. If you get COVID-19, the Pfizer pill reduces your chances of ending up in the hospital by 90%. \n\nWe\u2019ve ordered more of these pills than anyone in the world. And Pfizer is working overtime to get us 1 Million pills this month and more than double that next month. \n\nAnd we\u2019re launching the \u201cTest to Treat\u201d initiative so people can get tested at a pharmacy, and if they\u2019re positive, receive antiviral pills on the spot at no cost. \n\nIf you\u2019re immunocompromised or have some other vulnerability, we have treatments and free high-quality masks. \n\nWe\u2019re leaving no one behind or ignoring anyone\u2019s needs as we move forward. \n\nAnd on testing, we have made hundreds of millions of tests available for you to order for free. \n\nEven if you already ordered free tests tonight, I am announcing that you can order more from covidtests.gov starting next week. \n\nSecond \u2013 we must prepare for new variants. Over the past year, we\u2019ve gotten much better at detecting new variants. \n\nIf necessary, we\u2019ll be able to deploy new vaccines within 100 days instead of many more months or years. \n\nAnd, if Congress provides the funds we need, we\u2019ll have new stockpiles of tests, masks, and pills ready if needed. \n\nI cannot promise a new variant won\u2019t come. But I can promise you we\u2019ll do everything within our power to be ready if it does. \n\nThird \u2013 we can end the shutdown of schools and businesses. We have the tools we need. \n\nIt\u2019s time for Americans to get back to work and fill our great downtowns again. People working from home can feel safe to begin to return to the office. \n\nWe\u2019re doing that here in the federal government. The vast majority of federal workers will once again work in person. \n\nOur schools are open. Let\u2019s keep it that way. Our kids need to be in school. \n\nAnd with 75% of adult Americans fully vaccinated and hospitalizations down by 77%, most Americans can remove their masks, return to work, stay in the classroom, and move forward safely. \n\nWe achieved this because we provided free vaccines, treatments, tests, and masks. \n\nOf course, continuing this costs money. \n\nI will soon send Congress a request. 
\n\nThe vast majority of Americans have used these tools and may want to again, so I expect Congress to pass it quickly. \n\nFourth, we will continue vaccinating the world. \n\nWe\u2019ve sent 475 Million vaccine doses to 112 countries, more than any other nation. \n\nAnd we won\u2019t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet\u2019s use this moment to reset. Let\u2019s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. \n\nLet\u2019s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can\u2019t change how divided we\u2019ve been. But we can change how we move forward\u2014on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who\u2019d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n\nI\u2019ve worked on these issues a long time. \n\nI know what works: Investing in crime prevention and community police officers who\u2019ll walk the beat, who\u2019ll know the neighborhood, and who can restore trust and safety. \n\nSo let\u2019s not abandon our streets. Or choose between safety and equal justice. \n\nLet\u2019s come together to protect our communities, restore trust, and hold law enforcement accountable. \n\nThat\u2019s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers. \n\nThat\u2019s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption\u2014trusted messengers breaking the cycle of violence and trauma and giving young people hope. \n\nWe should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities. \n\nI ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe. \n\nAnd I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home\u2014they have no serial numbers and can\u2019t be traced. \n\nAnd I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon? \n\nBan assault weapons and high-capacity magazines. \n\nRepeal the liability shield that makes gun manufacturers the only industry in America that can\u2019t be sued. \n\nThese laws don\u2019t infringe on the Second Amendment. They save lives. \n\nThe most fundamental right in America is the right to vote \u2013 and to have it counted. And it\u2019s under assault. \n\nIn state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \n\nWe cannot let this happen. \n\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. 
And while you\u2019re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I\u2019d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer\u2014an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation\u2019s top legal minds, who will continue Justice Breyer\u2019s legacy of excellence. \n\nA former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she\u2019s been nominated, she\u2019s received a broad range of support\u2014from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we\u2019ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe\u2019ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe\u2019re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe\u2019re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders. \n\nWe can do all this while keeping lit the torch of liberty that has led generations of immigrants to this land\u2014my forefathers and so many of yours. \n\nProvide a pathway to citizenship for Dreamers, those on temporary status, farm workers, and essential workers. \n\nRevise our laws so businesses have the workers they need and families don\u2019t wait decades to reunite. \n\nIt\u2019s not only the right thing to do\u2014it\u2019s the economically smart thing to do. \n\nThat\u2019s why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n\nLet\u2019s get it done once and for all. \n\nAdvancing liberty and justice also requires protecting the rights of women. \n\nThe constitutional right affirmed in Roe v. Wade\u2014standing precedent for half a century\u2014is under attack as never before. \n\nIf we want to go forward\u2014not backward\u2014we must protect access to health care. Preserve a woman\u2019s right to choose. And let\u2019s continue to advance maternal health care in America. \n\nAnd for our LGBTQ+ Americans, let\u2019s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn\u2019t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we\u2019ll strengthen the Violence Against Women Act that I first wrote three decades ago. 
It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I\u2019m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic. \n\nThere is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery. \n\nGet rid of outdated rules that stop doctors from prescribing treatments. And stop the flow of illicit drugs by working with state and local law enforcement to go after traffickers. \n\nIf you\u2019re suffering from addiction, know you are not alone. I believe in recovery, and I celebrate the 23 million Americans in recovery. \n\nSecond, let\u2019s take on mental health. Especially among our children, whose lives and education have been turned upside down. \n\nThe American Rescue Plan gave schools money to hire teachers and help students make up for lost learning. \n\nI urge every parent to make sure your school does just that. And we can all play a part\u2014sign up to be a tutor or a mentor. \n\nChildren were also struggling before the pandemic. Bullying, violence, trauma, and the harms of social media. \n\nAs Frances Haugen, who is here with us tonight, has shown, we must hold social media platforms accountable for the national experiment they\u2019re conducting on our children for profit. \n\nIt\u2019s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children. \n\nAnd let\u2019s get all Americans the mental health services they need. More people they can turn to for help, and full parity between physical and mental health care. \n\nThird, support our veterans. \n\nVeterans are the best of us. \n\nI\u2019ve always believed that we have a sacred obligation to equip all those we send to war and care for them and their families when they come home. \n\nMy administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n\nOur troops in Iraq and Afghanistan faced many dangers. \n\nOne was stationed at bases and breathing in toxic smoke from \u201cburn pits\u201d that incinerated wastes of war\u2014medical and hazard material, jet fuel, and more. \n\nWhen they came home, many of the world\u2019s fittest and best trained warriors were never the same. \n\nHeadaches. Numbness. Dizziness. \n\nA cancer that would put them in a flag-draped coffin. \n\nI know. \n\nOne of those soldiers was my son Major Beau Biden. \n\nWe don\u2019t know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n\nBut I\u2019m committed to finding out everything we can. \n\nCommitted to military families like Danielle Robinson from Ohio. \n\nThe widow of Sergeant First Class Heath Robinson. \n\nHe was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n\nStationed near Baghdad, just yards from burn pits the size of football fields. \n\nHeath\u2019s widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n\nBut cancer from prolonged exposure to burn pits ravaged Heath\u2019s lungs and body. \n\nDanielle says Heath was a fighter to the very end. \n\nHe didn\u2019t know how to stop fighting, and neither did she. \n\nThrough her pain she found purpose to demand we do better. \n\nTonight, Danielle\u2014we are. 
\n\nThe VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n\nAnd tonight, I\u2019m announcing we\u2019re expanding eligibility to veterans suffering from nine respiratory cancers. \n\nI\u2019m also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n\nAnd fourth, let\u2019s end cancer as we know it. \n\nThis is personal to me and Jill, to Kamala, and to so many of you. \n\nCancer is the #2 cause of death in America\u2013second only to heart disease. \n\nLast month, I announced our plan to supercharge \nthe Cancer Moonshot that President Obama asked me to lead six years ago. \n\nOur goal is to cut the cancer death rate by at least 50% over the next 25 years, turn more cancers from death sentences into treatable diseases. \n\nMore support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt\u2019s based on DARPA\u2014the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose\u2014to drive breakthroughs in cancer, Alzheimer\u2019s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans\u2014tonight , we have gathered in a sacred space\u2014the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation. \n\nWe will meet the test. \n\nTo protect freedom and liberty, to expand fairness and opportunity. \n\nWe will save democracy. \n\nAs hard as these times have been, I am more optimistic about America today than I have been my whole life. \n\nBecause I see the future that is within our grasp. \n\nBecause I know there is simply nothing beyond our capacity. \n\nWe are the only nation on Earth that has always turned every crisis we have faced into an opportunity. \n\nThe only nation that can be defined by a single word: possibilities. \n\nSo on this night, in our 245th year as a nation, I have come to report on the State of the Union. \n\nAnd my report is this: the State of the Union is strong\u2014because you, the American people, are strong. \n\nWe are stronger today than we were a year ago. \n\nAnd we will be stronger a year from now than we are today. \n\nNow is our moment to meet and overcome the challenges of our time. \n\nAnd we will, as one people. \n\nOne America. \n\nThe United States of America. \n\nMay God bless you all. May God protect our troops." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\docs\\docs\\modules\\model_io\\prompts\\simple_template.txt", + "filetype": ".txt", + "content": "Tell me a {adjective} joke about {content}." 
+ }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\CONTRIBUTING.md", + "filetype": ".md", + "content": "# Contributing to langchain-cli\n\nUpdate CLI versions with `poe bump` to ensure that version commands display correctly.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\DOCS.md", + "filetype": ".md", + "content": "# `langchain`\n\n**Usage**:\n\n```console\n$ langchain [OPTIONS] COMMAND [ARGS]...\n```\n\n**Options**:\n\n* `--help`: Show this message and exit.\n* `-v, --version`: Print current CLI version.\n\n**Commands**:\n\n* `app`: Manage LangChain apps\n* `serve`: Start the LangServe app, whether it's a...\n* `template`: Develop installable templates.\n\n## `langchain app`\n\nManage LangChain apps\n\n**Usage**:\n\n```console\n$ langchain app [OPTIONS] COMMAND [ARGS]...\n```\n\n**Options**:\n\n* `--help`: Show this message and exit.\n\n**Commands**:\n\n* `add`: Adds the specified template to the current...\n* `new`: Create a new LangServe application.\n* `remove`: Removes the specified package from the...\n* `serve`: Starts the LangServe app.\n\n### `langchain app add`\n\nAdds the specified template to the current LangServe app.\n\ne.g.:\nlangchain app add extraction-openai-functions\nlangchain app add git+ssh://git@github.com/efriis/simple-pirate.git\n\n**Usage**:\n\n```console\n$ langchain app add [OPTIONS] [DEPENDENCIES]...\n```\n\n**Arguments**:\n\n* `[DEPENDENCIES]...`: The dependency to add\n\n**Options**:\n\n* `--api-path TEXT`: API paths to add\n* `--project-dir PATH`: The project directory\n* `--repo TEXT`: Install templates from a specific github repo instead\n* `--branch TEXT`: Install templates from a specific branch\n* `--help`: Show this message and exit.\n\n### `langchain app new`\n\nCreate a new LangServe application.\n\n**Usage**:\n\n```console\n$ langchain app new [OPTIONS] NAME\n```\n\n**Arguments**:\n\n* `NAME`: The name of the folder to create [required]\n\n**Options**:\n\n* `--package TEXT`: Packages to seed the project with\n* `--help`: Show this message and exit.\n\n### `langchain app remove`\n\nRemoves the specified package from the current LangServe app.\n\n**Usage**:\n\n```console\n$ langchain app remove [OPTIONS] API_PATHS...\n```\n\n**Arguments**:\n\n* `API_PATHS...`: The API paths to remove [required]\n\n**Options**:\n\n* `--help`: Show this message and exit.\n\n### `langchain app serve`\n\nStarts the LangServe app.\n\n**Usage**:\n\n```console\n$ langchain app serve [OPTIONS]\n```\n\n**Options**:\n\n* `--port INTEGER`: The port to run the server on\n* `--host TEXT`: The host to run the server on\n* `--app TEXT`: The app to run, e.g. 
`app.server:app`\n* `--help`: Show this message and exit.\n\n## `langchain serve`\n\nStart the LangServe app, whether it's a template or an app.\n\n**Usage**:\n\n```console\n$ langchain serve [OPTIONS]\n```\n\n**Options**:\n\n* `--port INTEGER`: The port to run the server on\n* `--host TEXT`: The host to run the server on\n* `--help`: Show this message and exit.\n\n## `langchain template`\n\nDevelop installable templates.\n\n**Usage**:\n\n```console\n$ langchain template [OPTIONS] COMMAND [ARGS]...\n```\n\n**Options**:\n\n* `--help`: Show this message and exit.\n\n**Commands**:\n\n* `new`: Creates a new template package.\n* `serve`: Starts a demo app for this template.\n\n### `langchain template new`\n\nCreates a new template package.\n\n**Usage**:\n\n```console\n$ langchain template new [OPTIONS] NAME\n```\n\n**Arguments**:\n\n* `NAME`: The name of the folder to create [required]\n\n**Options**:\n\n* `--with-poetry / --no-poetry`: Don't run poetry install [default: no-poetry]\n* `--help`: Show this message and exit.\n\n### `langchain template serve`\n\nStarts a demo app for this template.\n\n**Usage**:\n\n```console\n$ langchain template serve [OPTIONS]\n```\n\n**Options**:\n\n* `--port INTEGER`: The port to run the server on\n* `--host TEXT`: The host to run the server on\n* `--help`: Show this message and exit.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\README.md", + "filetype": ".md", + "content": "# langchain-cli\n\nThis package implements the official CLI for LangChain. Right now, it is most useful\nfor getting started with LangChain Templates!\n\n[CLI Docs](https://github.com/langchain-ai/langchain/blob/master/libs/cli/DOCS.md)\n\n[LangServe Templates Quickstart](https://github.com/langchain-ai/langchain/blob/master/templates/README.md)\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\langchain_cli\\integration_template\\README.md", + "filetype": ".md", + "content": "# __package_name__\n\nThis package contains the LangChain integration with __ModuleName__\n\n## Installation\n\n```bash\npip install -U __package_name__\n```\n\nAnd you should configure credentials by setting the following environment variables:\n\n* TODO: fill this out\n\n## Chat Models\n\n`Chat__ModuleName__` class exposes chat models from __ModuleName__.\n\n```python\nfrom __module_name__ import Chat__ModuleName__\n\nllm = Chat__ModuleName__()\nllm.invoke(\"Sing a ballad of LangChain.\")\n```\n\n## Embeddings\n\n`__ModuleName__Embeddings` class exposes embeddings from __ModuleName__.\n\n```python\nfrom __module_name__ import __ModuleName__Embeddings\n\nembeddings = __ModuleName__Embeddings()\nembeddings.embed_query(\"What is the meaning of life?\")\n```\n\n## LLMs\n`__ModuleName__LLM` class exposes LLMs from __ModuleName__.\n\n```python\nfrom __module_name__ import __ModuleName__LLM\n\nllm = __ModuleName__LLM()\nllm.invoke(\"The meaning of life is\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\langchain_cli\\package_template\\README.md", + "filetype": ".md", + "content": "# __package_name__\n\nTODO: What does this package do\n\n## Environment Setup\n\nTODO: What environment variables need to be set (if any)\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app 
--package __package_name__\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add __package_name__\n```\n\nAnd add the following code to your `server.py` file:\n```python\n__app_route_code__\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=<your-api-key>\nexport LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by running:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/__package_name__/playground](http://127.0.0.1:8000/__package_name__/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/__package_name__\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\langchain_cli\\project_template\\README.md", + "filetype": ".md", + "content": "# __app_name__\n\n## Installation\n\nInstall the LangChain CLI if you haven't yet:\n\n```bash\npip install -U langchain-cli\n```\n\n## Adding packages\n\n```bash\n# adding packages from \n# https://github.com/langchain-ai/langchain/tree/master/templates\nlangchain app add $PROJECT_NAME\n\n# adding custom GitHub repo packages\nlangchain app add --repo $OWNER/$REPO\n# or with whole git string (supports other git providers):\n# langchain app add git+https://github.com/hwchase17/chain-of-verification\n\n# with a custom api mount point (defaults to `/{package_name}`)\nlangchain app add $PROJECT_NAME --api_path=/my/custom/path/rag\n```\n\nNote: you remove packages by their API path.\n\n```bash\nlangchain app remove my/custom/path/rag\n```\n\n## Setup LangSmith (Optional)\nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=<your-api-key>\nexport LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to \"default\"\n```\n\n## Launch LangServe\n\n```bash\nlangchain serve\n```\n\n## Running in Docker\n\nThis project folder includes a Dockerfile that allows you to easily build and host your LangServe app.\n\n### Building the Image\n\nTo build the image, simply run:\n\n```shell\ndocker build .
-t my-langserve-app\n```\n\nIf you tag your image with something other than `my-langserve-app`,\nnote it for use in the next step.\n\n### Running the Image Locally\n\nTo run the image, you'll need to include any environment variables\nnecessary for your application.\n\nIn the example below, we inject the `OPENAI_API_KEY` environment\nvariable with the value set in your local environment\n(`$OPENAI_API_KEY`).\n\nWe also expose port 8080 with the `-p 8080:8080` option.\n\n```shell\ndocker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 8080:8080 my-langserve-app\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\cli\\langchain_cli\\project_template\\packages\\README.md", + "filetype": ".md", + "content": "" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\README.md", + "filetype": ".md", + "content": "# \ud83e\udd9c\ufe0f\ud83e\uddd1\u200d\ud83e\udd1d\u200d\ud83e\uddd1 LangChain Community\n\n[![Downloads](https://static.pepy.tech/badge/langchain_community/month)](https://pepy.tech/project/langchain_community)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n## Quick Install\n\n```bash\npip install langchain-community\n```\n\n## What is it?\n\nLangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application.\n\nFor full documentation, see the [API reference](https://api.python.langchain.com/en/stable/community_api_reference.html).\n\n![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png \"LangChain Framework Overview\")\n\n## \ud83d\udcd5 Releases & Versioning\n\n`langchain-community` is currently on version `0.0.x`.\n\nAll changes will be accompanied by a patch version increase.\n\n## \ud83d\udc81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/)."
+ }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\examples\\whatsapp_chat.txt", + "filetype": ".txt", + "content": "[05.05.23, 15:48:11] James: Hi here\n[11/8/21, 9:41:32 AM] User name: Message 123\n1/23/23, 3:19 AM - User 2: Bye!\n1/23/23, 3:22_AM - User 1: And let me know if anything changes\n[1/24/21, 12:41:03 PM] ~ User name 2: Of course!\n[2023/5/4, 16:13:23] ~ User 2: See you!\n7/19/22, 11:32\u202fPM - User 1: Hello\n7/20/22, 11:32\u202fam - User 2: Goodbye\n4/20/23, 9:42\u202fam - User 3: \n6/29/23, 12:16\u202fam - User 4: This message was deleted\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\integration_tests\\examples\\whatsapp_chat.txt", + "filetype": ".txt", + "content": "[05.05.23, 15:48:11] James: Hi here\n[11/8/21, 9:41:32 AM] User name: Message 123\n1/23/23, 3:19 AM - User 2: Bye!\n1/23/23, 3:22_AM - User 1: And let me know if anything changes\n[1/24/21, 12:41:03 PM] ~ User name 2: Of course!\n[2023/5/4, 16:13:23] ~ User 2: See you!\n7/19/22, 11:32\u202fPM - User 1: Hello\n7/20/22, 11:32\u202fam - User 2: Goodbye\n4/20/23, 9:42\u202fam - User 3: \n6/29/23, 12:16\u202fam - User 4: This message was deleted\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\integration_tests\\vectorstores\\fixtures\\sharks.txt", + "filetype": ".txt", + "content": "Sharks are a group of elasmobranch fish characterized by a cartilaginous skeleton, five to seven gill slits on the sides of the head, and pectoral fins that are not fused to the head. Modern sharks are classified within the clade Selachimorpha (or Selachii) and are the sister group to the Batoidea (rays and kin). Some sources extend the term \"shark\" as an informal category including extinct members of Chondrichthyes (cartilaginous fish) with a shark-like morphology, such as hybodonts and xenacanths. Shark-like chondrichthyans such as Cladoselache and Doliodus first appeared in the Devonian Period (419-359 Ma), though some fossilized chondrichthyan-like scales are as old as the Late Ordovician (458-444 Ma). The oldest modern sharks (selachians) are known from the Early Jurassic, about 200 Ma.\n\nSharks range in size from the small dwarf lanternshark (Etmopterus perryi), a deep sea species that is only 17 centimetres (6.7 in) in length, to the whale shark (Rhincodon typus), the largest fish in the world, which reaches approximately 12 metres (40 ft) in length. They are found in all seas and are common to depths up to 2,000 metres (6,600 ft). They generally do not live in freshwater, although there are a few known exceptions, such as the bull shark and the river shark, which can be found in both seawater and freshwater.[3] Sharks have a covering of dermal denticles that protects their skin from damage and parasites in addition to improving their fluid dynamics. They have numerous sets of replaceable teeth.\n\nSeveral species are apex predators, which are organisms that are at the top of their food chain. Select examples include the tiger shark, blue shark, great white shark, mako shark, thresher shark, and hammerhead shark.\n\nSharks are caught by humans for shark meat or shark fin soup. Many shark populations are threatened by human activities. Since 1970, shark populations have been reduced by 71%, mostly from overfishing." 
+ }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\chat_loaders\\data\\whatsapp_chat.txt", + "filetype": ".txt", + "content": "[8/15/23, 9:12:33 AM] Dr. Feather: \u200eMessages and calls are end-to-end encrypted. No one outside of this chat, not even WhatsApp, can read or listen to them.\n[8/15/23, 9:12:43 AM] Dr. Feather: I spotted a rare Hyacinth Macaw yesterday in the Amazon Rainforest. Such a magnificent creature!\n\u200e[8/15/23, 9:12:48 AM] Dr. Feather: \u200eimage omitted\n[8/15/23, 9:13:15 AM] Jungle Jane: That's stunning! Were you able to observe its behavior?\n\u200e[8/15/23, 9:13:23 AM] Dr. Feather: \u200eimage omitted\n[8/15/23, 9:14:02 AM] Dr. Feather: Yes, it seemed quite social with other macaws. They're known for their playful nature.\n[8/15/23, 9:14:15 AM] Jungle Jane: How's the research going on parrot communication?\n\u200e[8/15/23, 9:14:30 AM] Dr. Feather: \u200eimage omitted\n[8/15/23, 9:14:50 AM] Dr. Feather: It's progressing well. We're learning so much about how they use sound and color to communicate.\n[8/15/23, 9:15:10 AM] Jungle Jane: That's fascinating! Can't wait to read your paper on it.\n[8/15/23, 9:15:20 AM] Dr. Feather: Thank you! I'll send you a draft soon.\n[8/15/23, 9:25:16 PM] Jungle Jane: Looking forward to it! Keep up the great work.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\document_loaders\\sample_documents\\obsidian\\bad_frontmatter.md", + "filetype": ".md", + "content": "---\nanArray:\n one\n- two\n- three\ntags: 'onetag', 'twotag' ]\n---\n\nA document with frontmatter that isn't valid.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\document_loaders\\sample_documents\\obsidian\\frontmatter.md", + "filetype": ".md", + "content": "---\ntags: journal/entry, obsidian\n---\n\nNo other content than the frontmatter.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\document_loaders\\sample_documents\\obsidian\\no_frontmatter.md", + "filetype": ".md", + "content": "### Description\n#recipes #dessert #cookies \n\nA document with HR elements that might trip up a front matter parser:\n\n---\n\n### Ingredients\n\n- 3/4 cup (170g) **unsalted butter**, slightly softened to\u00a0room temperature.\n- 1 and 1/2 cups\u00a0(180g) **confectioners\u2019\u00a0sugar**\n\n---\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\document_loaders\\sample_documents\\obsidian\\no_metadata.md", + "filetype": ".md", + "content": "A markdown document with no additional metadata.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\document_loaders\\sample_documents\\obsidian\\tags_and_frontmatter.md", + "filetype": ".md", + "content": "---\naFloat: 13.12345\nanInt: 15\naBool: true\naString: string value\nanArray:\n- one\n- two\n- three\naDict:\n dictId1: '58417'\n dictId2: 1500\ntags: [ 'onetag', 'twotag' ]\n---\n\n# Tags\n\n ()#notatag\n#12345\n #read\nsomething #tagWithCases\n- #tag-with-dash\n#tag_with_underscore #tag/with/nesting\n\n# Dataview\n\nHere is some data in a [dataview1:: a value] line.\nHere is even more data in a (dataview2:: another value) line.\ndataview3:: more data\nnotdataview4: this is not a field\nnotdataview5: this is not a field\n\n# Text 
content\n\nhttps://example.com/blog/#not-a-tag\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\document_loaders\\sample_documents\\obsidian\\template_var_frontmatter.md", + "filetype": ".md", + "content": "---\naString: {{var}}\nanArray:\n- element\n- {{varElement}}\naDict:\n dictId1: 'val'\n dictId2: '{{varVal}}'\ntags: [ 'tag', '{{varTag}}' ]\n---\n\nFrontmatter contains template variables.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\examples\\example-non-utf8.txt", + "filetype": ".txt", + "content": "Error reading file" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\community\\tests\\unit_tests\\examples\\example-utf8.txt", + "filetype": ".txt", + "content": "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor\nincididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis\nnostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu\nfugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in\nculpa qui officia deserunt mollit anim id est laborum.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\core\\README.md", + "filetype": ".md", + "content": "# \ud83e\udd9c\ud83c\udf4e\ufe0f LangChain Core\n\n[![Downloads](https://static.pepy.tech/badge/langchain_core/month)](https://pepy.tech/project/langchain_core)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n## Quick Install\n\n```bash\npip install langchain-core\n```\n\n## What is it?\n\nLangChain Core contains the base abstractions that power the rest of the LangChain ecosystem.\n\nThese abstractions are designed to be as modular and simple as possible. Examples of these abstractions include those for language models, document loaders, embedding models, vectorstores, retrievers, and more.\n\nThe benefit of having these abstractions is that any provider can implement the required interface and then easily be used in the rest of the LangChain ecosystem.\n\nFor full documentation see the [API reference](https://api.python.langchain.com/en/stable/core_api_reference.html).\n\n## 1\ufe0f\u20e3 Core Interface: Runnables\n\nThe concept of a Runnable is central to LangChain Core \u2013 it is the interface that most LangChain Core components implement, giving them\n\n- a common invocation interface (invoke, batch, stream, etc.)\n- built-in utilities for retries, fallbacks, schemas and runtime configurability\n- easy deployment with [LangServe](https://github.com/langchain-ai/langserve)\n\nFor more check out the [runnable docs](https://python.langchain.com/docs/expression_language/interface). Examples of components that implement the interface include: LLMs, Chat Models, Prompts, Retrievers, Tools, Output Parsers.\n\nYou can use LangChain Core objects in two ways:\n\n1. **imperative**, ie. call them directly, eg. `model.invoke(...)`\n\n2. **declarative**, with LangChain Expression Language (LCEL)\n\n3. or a mix of both! eg. 
one of the steps in your LCEL sequence can be a custom function\n\n| Feature | Imperative | Declarative |\n| --------- | ------------------------------- | -------------- |\n| Syntax | All of Python | LCEL |\n| Tracing | \u2705 \u2013 Automatic | \u2705 \u2013 Automatic |\n| Parallel | \u2705 \u2013 with threads or coroutines | \u2705 \u2013 Automatic |\n| Streaming | \u2705 \u2013 by yielding | \u2705 \u2013 Automatic |\n| Async | \u2705 \u2013 by writing async functions | \u2705 \u2013 Automatic |\n\n## \u26a1\ufe0f What is LangChain Expression Language?\n\nLangChain Expression Language (LCEL) is a _declarative language_ for composing LangChain Core runnables into sequences (or DAGs), covering the most common patterns when building with LLMs.\n\nLangChain Core compiles LCEL sequences to an _optimized execution plan_, with automatic parallelization, streaming, tracing, and async support.\n\nFor more check out the [LCEL docs](https://python.langchain.com/docs/expression_language/).\n\n![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png \"LangChain Framework Overview\")\n\nFor more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows.\n\n## \ud83d\udcd5 Releases & Versioning\n\n`langchain-core` is currently on version `0.1.x`.\n\nAs `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception for this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so.\n\nMinor version increases will occur for:\n\n- Breaking changes for any public interfaces NOT in `langchain_core.beta`\n\nPatch version increases will occur for:\n\n- Bug fixes\n- New features\n- Any changes to private interfaces\n- Any changes to `langchain_core.beta`\n\n## \ud83d\udc81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).\n\n## \u26f0\ufe0f Why build on top of LangChain Core?\n\nThe whole LangChain ecosystem is built on top of LangChain Core, so you're in good company when building on top of it. 
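\n\nTo make that concrete, here is a minimal, hypothetical sketch of the imperative and declarative styles described above; it assumes only `langchain-core` is installed, and the prompt text is borrowed from the repository's `simple_template.txt` test fixture:\n\n```python\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.runnables import RunnableLambda\n\n# Declarative composition: a prompt (itself a Runnable) piped into a custom function.\nprompt = ChatPromptTemplate.from_template(\"Tell me a {adjective} joke about {content}.\")\n\ndef count_words(prompt_value):\n    # Any plain function can be a step in an LCEL sequence.\n    return len(prompt_value.to_string().split())\n\nchain = prompt | RunnableLambda(count_words)\n\n# Imperative call sites on the composed Runnable:\nchain.invoke({\"adjective\": \"funny\", \"content\": \"chickens\"})\nchain.batch([{\"adjective\": \"dry\", \"content\": \"parrots\"}])\n```\n\nEither way, the result is itself a Runnable, exposing the same `invoke`/`batch`/`stream` surface described above.\n\n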
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](../../docs/static/img/langchain_stack.png \"LangChain Framework Overview\")\n\nFor more advanced use cases, also check out [LangGraph](https://github.com/langchain-ai/langgraph), which is a graph-based runner for cyclic and recursive LLM workflows.\n\n## \ud83d\udcd5 Releases & Versioning\n\n`langchain-core` is currently on version `0.1.x`.\n\nAs `langchain-core` contains the base abstractions and runtime for the whole LangChain ecosystem, we will communicate any breaking changes with advance notice and version bumps. The exception to this is anything in `langchain_core.beta`. The reason for `langchain_core.beta` is that given the rate of change of the field, being able to move quickly is still a priority, and this module is our attempt to do so.\n\nMinor version increases will occur for:\n\n- Breaking changes for any public interfaces NOT in `langchain_core.beta`\n\nPatch version increases will occur for:\n\n- Bug fixes\n- New features\n- Any changes to private interfaces\n- Any changes to `langchain_core.beta`\n\n## \ud83d\udc81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).\n\n## \u26f0\ufe0f Why build on top of LangChain Core?\n\nThe whole LangChain ecosystem is built on top of LangChain Core, so you're in good company when building on top of it. Some of the benefits:\n\n- **Modularity**: LangChain Core is designed around abstractions that are independent of each other, and not tied to any specific model provider.\n- **Stability**: We are committed to a stable versioning scheme, and will communicate any breaking changes with advance notice and version bumps.\n- **Battle-tested**: LangChain Core components have the largest install base in the LLM ecosystem, and are used in production by many companies.\n- **Community**: LangChain Core is developed in the open, and we welcome contributions from the community.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\core\\tests\\unit_tests\\prompt_file.txt", + "filetype": ".txt", + "content": "Question: {question}\nAnswer:" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\core\\tests\\unit_tests\\data\\prompt_file.txt", + "filetype": ".txt", + "content": "Question: {question}\nAnswer:" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\core\\tests\\unit_tests\\examples\\example-non-utf8.txt", + "filetype": ".txt", + "content": "Error reading file" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\core\\tests\\unit_tests\\examples\\example-utf8.txt", + "filetype": ".txt", + "content": "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor\nincididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis\nnostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu\nfugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in\nculpa qui officia deserunt mollit anim id est laborum.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\core\\tests\\unit_tests\\examples\\simple_template.txt", + "filetype": ".txt", + "content": "Tell me a {adjective} joke about {content}." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\experimental\\README.md", + "filetype": ".md", + "content": "# \ud83e\udd9c\ufe0f\ud83e\uddea LangChain Experimental\n\nThis package holds experimental LangChain code, intended for research and experimental\nuses.\n\n> [!WARNING]\n> Portions of the code in this package may be dangerous if not properly deployed\n> in a sandboxed environment. Please be wary of deploying experimental code\n> to production unless you've taken appropriate precautions and\n> have already discussed it with your security team.\n\nSome of the code here may be marked with security notices. However,\ngiven the exploratory and experimental nature of the code in this package,\nthe lack of a security notice on a piece of code does not mean that\nthe code in question does not require additional security considerations\nin order to be safe to use."
+ }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\experimental\\langchain_experimental\\cpal\\README.md", + "filetype": ".md", + "content": "# Causal program-aided language (CPAL) chain\n\n\nsee https://github.com/langchain-ai/langchain/pull/6255\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\README.md", + "filetype": ".md", + "content": "# \ud83e\udd9c\ufe0f\ud83d\udd17 LangChain\n\n\u26a1 Building applications with LLMs through composability \u26a1\n\n[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)\n[![lint](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/lint.yml)\n[![test](https://github.com/langchain-ai/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/test.yml)\n[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)\n[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)\n[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)\n[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)\n[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=social)](https://star-history.com/#langchain-ai/langchain)\n[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)\n[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/issues)\n\n\nLooking for the JS/TS version? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).\n\nTo help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com). \n[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications. \nFill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.\n\n## Quick Install\n\n`pip install langchain`\nor\n`pip install langsmith && conda install langchain -c conda-forge`\n\n## \ud83e\udd14 What is this?\n\nLarge language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. However, using these LLMs in isolation is often insufficient for creating a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge.\n\nThis library aims to assist in the development of those types of applications. 
Common examples of these applications include:\n\n**\u2753 Question Answering over specific documents**\n\n- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)\n- End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)\n\n**\ud83d\udcac Chatbots**\n\n- [Documentation](https://python.langchain.com/docs/use_cases/chatbots/)\n- End-to-end Example: [Chat-LangChain](https://github.com/langchain-ai/chat-langchain)\n\n**\ud83e\udd16 Agents**\n\n- [Documentation](https://python.langchain.com/docs/modules/agents/)\n- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)\n\n## \ud83d\udcd6 Documentation\n\nPlease see [here](https://python.langchain.com) for full documentation on:\n\n- Getting started (installation, setting up the environment, simple examples)\n- How-To examples (demos, integrations, helper functions)\n- Reference (full API docs)\n- Resources (high-level explanation of core concepts)\n\n## \ud83d\ude80 What can this help with?\n\nThere are six main areas that LangChain is designed to help with.\nThese are, in increasing order of complexity:\n\n**\ud83d\udcc3 LLMs and Prompts:**\n\nThis includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.\n\n**\ud83d\udd17 Chains:**\n\nChains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.\n\n**\ud83d\udcda Data Augmented Generation:**\n\nData Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.\n\n**\ud83e\udd16 Agents:**\n\nAgents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.\n\n**\ud83e\udde0 Memory:**\n\nMemory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.\n\n**\ud83e\uddd0 Evaluation:**\n\n[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. 
LangChain provides some prompts/chains for assisting in this.\n\nFor more information on these concepts, please see our [full documentation](https://python.langchain.com).\n\n## \ud83d\udc81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\langchain\\chains\\llm_summarization_checker\\prompts\\are_all_true_prompt.txt", + "filetype": ".txt", + "content": "Below are some assertions that have been fact checked and are labeled as true or false.\n\nIf all of the assertions are true, return \"True\". If any of the assertions are false, return \"False\".\n\nHere are some examples:\n===\n\nChecked Assertions: \"\"\"\n- The sky is red: False\n- Water is made of lava: False\n- The sun is a star: True\n\"\"\"\nResult: False\n\n===\n\nChecked Assertions: \"\"\"\n- The sky is blue: True\n- Water is wet: True\n- The sun is a star: True\n\"\"\"\nResult: True\n\n===\n\nChecked Assertions: \"\"\"\n- The sky is blue - True\n- Water is made of lava- False\n- The sun is a star - True\n\"\"\"\nResult: False\n\n===\n\nChecked Assertions:\"\"\"\n{checked_assertions}\n\"\"\"\nResult:" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\langchain\\chains\\llm_summarization_checker\\prompts\\check_facts.txt", + "filetype": ".txt", + "content": "You are an expert fact checker. You have been hired by a major news organization to fact check a very important story.\n\nHere is a bullet point list of facts:\n\"\"\"\n{assertions}\n\"\"\"\n\nFor each fact, determine whether it is true or false about the subject. If you are unable to determine whether the fact is true or false, output \"Undetermined\".\nIf the fact is false, explain why.\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\langchain\\chains\\llm_summarization_checker\\prompts\\create_facts.txt", + "filetype": ".txt", + "content": "Given some text, extract a list of facts from the text.\n\nFormat your output as a bulleted list.\n\nText:\n\"\"\"\n{summary}\n\"\"\"\n\nFacts:" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\langchain\\chains\\llm_summarization_checker\\prompts\\revise_summary.txt", + "filetype": ".txt", + "content": "Below are some assertions that have been fact checked and are labeled as true or false. 
If the answer is false, a suggestion is given for a correction.\n\nChecked Assertions:\n\"\"\"\n{checked_assertions}\n\"\"\"\n\nOriginal Summary:\n\"\"\"\n{summary}\n\"\"\"\n\nUsing these checked assertions, rewrite the original summary to be completely true.\n\nThe output should have the same structure and formatting as the original summary.\n\nSummary:" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\tests\\README.md", + "filetype": ".md", + "content": "# Langchain Tests\n\n[This guide has moved to the docs](https://python.langchain.com/docs/contributing/testing)\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\tests\\integration_tests\\examples\\whatsapp_chat.txt", + "filetype": ".txt", + "content": "[05.05.23, 15:48:11] James: Hi here\n[11/8/21, 9:41:32 AM] User name: Message 123\n1/23/23, 3:19 AM - User 2: Bye!\n1/23/23, 3:22_AM - User 1: And let me know if anything changes\n[1/24/21, 12:41:03 PM] ~ User name 2: Of course!\n[2023/5/4, 16:13:23] ~ User 2: See you!\n7/19/22, 11:32\u202fPM - User 1: Hello\n7/20/22, 11:32\u202fam - User 2: Goodbye\n4/20/23, 9:42\u202fam - User 3: \n6/29/23, 12:16\u202fam - User 4: This message was deleted\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\tests\\unit_tests\\data\\prompt_file.txt", + "filetype": ".txt", + "content": "Question: {question}\nAnswer:" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\tests\\unit_tests\\examples\\example-non-utf8.txt", + "filetype": ".txt", + "content": "Error reading file" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\langchain\\tests\\unit_tests\\examples\\example-utf8.txt", + "filetype": ".txt", + "content": "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor\nincididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis\nnostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu\nfugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in\nculpa qui officia deserunt mollit anim id est laborum.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\ai21\\README.md", + "filetype": ".md", + "content": "# langchain-ai21\n\nThis package contains the LangChain integrations for [AI21](https://docs.ai21.com/) through their [AI21](https://pypi.org/project/ai21/) SDK.\n\n## Installation and Setup\n\n- Install the AI21 partner package\n```bash\npip install langchain-ai21\n```\n- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)\n\n\n## Chat Models\n\nThis package contains the `ChatAI21` class, which is the recommended way to interface with AI21 Chat models.\n\nTo use, install the requirements, and configure your environment.\n\n```bash\nexport AI21_API_KEY=your-api-key\n```\n\nThen initialize:\n\n```python\nfrom langchain_core.messages import HumanMessage\nfrom langchain_ai21.chat_models import ChatAI21\n\nchat = ChatAI21(model=\"j2-ultra\")\nmessages = [HumanMessage(content=\"Hello from AI21\")]\nchat.invoke(messages)\n```\n\n## LLMs\nYou can use AI21's generative AI models as LangChain LLMs:\n\n```python\nfrom langchain.prompts import PromptTemplate\nfrom langchain_ai21 import AI21LLM\n\nllm = AI21LLM(model=\"j2-ultra\")\n\ntemplate = \"\"\"Question: {question}\n\nAnswer: Let's think step by step.\"\"\"\nprompt = PromptTemplate.from_template(template)\n\nchain = prompt | llm\n\nquestion = \"Which scientist discovered relativity?\"\nprint(chain.invoke({\"question\": question}))\n```\n\n## Embeddings\n\nYou can use AI21's embedding models as follows:\n\n### Query\n\n```python\nfrom langchain_ai21 import AI21Embeddings\n\nembeddings = AI21Embeddings()\nembeddings.embed_query(\"Hello! This is some query\")\n```\n\n### Document\n\n```python\nfrom langchain_ai21 import AI21Embeddings\n\nembeddings = AI21Embeddings()\nembeddings.embed_documents([\"Hello! This is document 1\", \"And this is document 2!\"])\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\airbyte\\README.md", + "filetype": ".md", + "content": "# langchain-airbyte\n\nThis package contains the LangChain integration with Airbyte.\n\n## Installation\n\n```bash\npip install -U langchain-airbyte\n```\n\nThe integration package doesn't have any global environment variables that need to be\nset, but some integrations (e.g. `source-github`) may need credentials passed in.\n\n## Document Loaders\n\nThe `AirbyteLoader` class exposes a single document loader for Airbyte sources.\n\n```python\nfrom langchain_airbyte import AirbyteLoader\n\nloader = AirbyteLoader(\n source=\"source-faker\",\n stream=\"users\",\n config={\"count\": 100},\n)\ndocs = loader.load()\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\anthropic\\README.md", + "filetype": ".md", + "content": "# langchain-anthropic\n\nThis package contains the LangChain integration for Anthropic's generative models.\n\n## Installation\n\n`pip install -U langchain-anthropic`\n\n## Chat Models\n\n| API Model Name | Model Family |\n| ------------------ | -------------- |\n| claude-instant-1.2 | Claude Instant |\n| claude-2.1 | Claude |\n| claude-2.0 | Claude |\n\nTo use, you should have an Anthropic API key configured. 
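One way to supply the key from an interactive session is with `getpass` (a minimal sketch; the `ANTHROPIC_API_KEY` variable name follows the usual LangChain convention):\n\n```python\nimport getpass\nimport os\n\n# Prompt for the Anthropic API key if it is not already set in the environment\nif \"ANTHROPIC_API_KEY\" not in os.environ:\n    os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass(\"Anthropic API key: \")\n```\n\n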
Initialize the model as:\n\n```python\nfrom langchain_anthropic import ChatAnthropicMessages\nfrom langchain_core.messages import AIMessage, HumanMessage\n\nmodel = ChatAnthropicMessages(model=\"claude-2.1\", temperature=0, max_tokens=1024)\n```\n\n### Define the input message\n\n`message = HumanMessage(content=\"What is the capital of France?\")`\n\n### Generate a response using the model\n\n`response = model.invoke([message])`\n\nFor a more detailed walkthrough see [here](https://python.langchain.com/docs/integrations/chat/anthropic).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\astradb\\README.md", + "filetype": ".md", + "content": "This package has moved!\n\nhttps://github.com/langchain-ai/langchain-datastax/tree/main/libs/astradb" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\elasticsearch\\README.md", + "filetype": ".md", + "content": "# langchain-elasticsearch\n\nThis package contains the LangChain integration with Elasticsearch.\n\n## Installation\n\n```bash\npip install -U langchain-elasticsearch\n```\n\nTODO document how to get id and key\n\n## Usage\n\nThe `ElasticsearchStore` class exposes the connection to the Elasticsearch vector store.\n\n```python\nfrom langchain_elasticsearch import ElasticsearchStore\n\nembeddings = ... # use a LangChain Embeddings class\n\nvectorstore = ElasticsearchStore(\n es_cloud_id=\"your-cloud-id\",\n es_api_key=\"your-api-key\",\n index_name=\"your-index-name\",\n embedding=embeddings,\n)\n```\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\exa\\README.md", + "filetype": ".md", + "content": "# langchain-exa\n\nThis package contains the LangChain integrations for Exa Cloud generative models.\n\n## Installation\n\n```bash\npip install -U langchain-exa\n```\n\n## Exa Search Retriever\n\nYou can retrieve search results as follows:\n\n```python\nfrom langchain_exa import ExaSearchRetriever\n\nexa_api_key = \"YOUR API KEY\"\n\n# Create a new instance of the ExaSearchRetriever\nexa = ExaSearchRetriever(exa_api_key=exa_api_key)\n\n# Search for a query and save the results\nresults = exa.get_relevant_documents(query=\"What is the capital of France?\")\n\n# Print the results\nprint(results)\n```\n\n## Exa Search Results\n\nYou can run the ExaSearchResults module as follows:\n\n```python\nfrom langchain_exa import ExaSearchResults\n\n# Initialize the ExaSearchResults tool\nsearch_tool = ExaSearchResults(exa_api_key=\"YOUR API KEY\")\n\n# Perform a search query\nsearch_results = search_tool._run(\n query=\"When was the last time the New York Knicks won the NBA Championship?\",\n num_results=5,\n text_contents_options=True,\n highlights=True\n)\n\nprint(\"Search Results:\", search_results)\n```\n\n## Exa Find Similar Results\n\nYou can run the ExaFindSimilarResults module as follows:\n\n```python\nfrom langchain_exa import ExaFindSimilarResults\n\n# Initialize the ExaFindSimilarResults tool\nfind_similar_tool = ExaFindSimilarResults(exa_api_key=\"YOUR API KEY\")\n\n# Find similar results based on a URL\nsimilar_results = find_similar_tool._run(\n url=\"http://espn.com\",\n num_results=5,\n text_contents_options=True,\n highlights=True\n)\n\nprint(\"Similar Results:\", similar_results)\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\fireworks\\README.md", + "filetype": ".md", + "content": "# LangChain-Fireworks\n\nThis is the partner package for connecting Fireworks.ai 
and LangChain. Fireworks strives to provide good support for LangChain use cases, so if you run into any issues please let us know. You can reach out to us [in our Discord channel](https://discord.com/channels/1137072072808472616/).\n\n\n## Installation\n\nTo use the `langchain-fireworks` package, follow these installation steps:\n\n```bash\npip install langchain-fireworks\n```\n\n\n\n## Basic usage\n\n### Setting up\n\n1. Sign in to [Fireworks AI](http://fireworks.ai/) to obtain an API key to access the models, and make sure it is set as the `FIREWORKS_API_KEY` environment variable.\n\n Once you've signed in and obtained an API key, follow these steps to set the `FIREWORKS_API_KEY` environment variable:\n - **Linux/macOS:** Open your terminal and execute the following command:\n ```bash\n export FIREWORKS_API_KEY='your_api_key'\n ```\n **Note:** To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.\n\n - **Windows:** For Command Prompt, use:\n ```cmd\n set FIREWORKS_API_KEY=your_api_key\n ```\n\n2. Set up your model using a model id. If the model is not set, the default model is `fireworks-llama-v2-7b-chat`. See the full, most up-to-date model list on [fireworks.ai](https://fireworks.ai/models).\n\n```python\nfrom langchain_fireworks import Fireworks\n\n# Initialize a Fireworks model\nllm = Fireworks(\n model=\"accounts/fireworks/models/mixtral-8x7b-instruct\",\n base_url=\"https://api.fireworks.ai/inference/v1/completions\",\n)\n```\n\n\n### Calling the Model Directly\n\nYou can call the model directly with string prompts to get completions.\n\n```python\n# Single prompt\noutput = llm.invoke(\"Who's the best quarterback in the NFL?\")\nprint(output)\n```\n\n```python\n# Calling multiple prompts\noutput = llm.generate(\n [\n \"Who's the best cricket player in 2016?\",\n \"Who's the best basketball player in the league?\",\n ]\n)\nprint(output.generations)\n```\n\n\n\n\n\n## Advanced usage\n### Tool use: LangChain Agent + Fireworks function calling model\nPlease check out how to teach the Fireworks function-calling model to use a [calculator here](https://github.com/fw-ai/cookbook/blob/main/examples/function_calling/fireworks_langchain_tool_usage.ipynb). \n\nFireworks focuses on delivering the best experience for fast model inference as well as tool use. 
You can check out [our blog](https://fireworks.ai/blog/firefunction-v1-gpt-4-level-function-calling) for more details on how it compares to GPT-4; the punchline is that it is on par with GPT-4 for function-calling use cases, but much faster and cheaper.\n\n### RAG: LangChain agent + Fireworks function calling model + MongoDB + Nomic AI embeddings\nPlease check out the [cookbook here](https://github.com/fw-ai/cookbook/blob/main/examples/rag/mongodb_agent.ipynb) for an end-to-end flow." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\google-genai\\README.md", + "filetype": ".md", + "content": "This package has moved!\n\nhttps://github.com/langchain-ai/langchain-google/tree/main/libs/genai" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\google-vertexai\\README.md", + "filetype": ".md", + "content": "This package has moved!\n\nhttps://github.com/langchain-ai/langchain-google/tree/main/libs/vertexai" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\groq\\README.md", + "filetype": ".md", + "content": "# langchain-groq\n\n## Welcome to Groq! \ud83d\ude80\n\nAt Groq, we've developed the world's first Language Processing Unit\u2122, or LPU. The Groq LPU has a deterministic, single core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload.\n\nBeyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:\n\n* Achieve uncompromised low latency and performance for real-time AI and HPC inferences \ud83d\udd25\n* Know the exact performance and compute time for any given workload \ud83d\udd2e\n* Take advantage of our cutting-edge technology to stay ahead of the competition \ud83d\udcaa\n\nWant more Groq? Check out our [website](https://groq.com) for more resources and join our [Discord community](https://discord.gg/JvNsBDKeCG) to connect with our developers!\n\n\n## Installation and Setup\nInstall the integration package:\n\n```bash\npip install langchain-groq\n```\n\nRequest an [API key](https://wow.groq.com) and set it as an environment variable:\n\n```bash\nexport GROQ_API_KEY=gsk_...\n```\n\n## Chat Model\nSee a [usage example](https://python.langchain.com/docs/integrations/chat/groq).\n\n
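For instance, a minimal invocation sketch (assuming `GROQ_API_KEY` is set in the environment; the model name here is illustrative):\n\n```python\nfrom langchain_groq import ChatGroq\n\n# The client reads GROQ_API_KEY from the environment\nllm = ChatGroq(model_name=\"mixtral-8x7b-32768\")\nprint(llm.invoke(\"Explain the LPU in one sentence.\").content)\n```\n\n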
## Development\n\nTo develop the `langchain-groq` package, you'll need to follow these instructions:\n\n### Install dev dependencies\n\n```bash\npoetry install --with test,test_integration,lint,codespell\n```\n\n### Build the package\n\n```bash\npoetry build\n```\n\n### Run unit tests\n\nUnit tests live in `tests/unit_tests` and SHOULD NOT require an internet connection or a valid API KEY. Run unit tests with\n\n```bash\nmake tests\n```\n\n### Run integration tests\n\nIntegration tests live in `tests/integration_tests` and require a connection to the Groq API and a valid API KEY.\n\n```bash\nmake integration_tests\n```\n\n### Lint & Format\n\nRun additional tests and linters to ensure your code is up to standard.\n\n```bash\nmake lint spell_check check_imports\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\ibm\\README.md", + "filetype": ".md", + "content": "# langchain-ibm\n\nThis package provides the integration between LangChain and IBM watsonx.ai through the `ibm-watsonx-ai` SDK.\n\n## Installation\n\nTo use the `langchain-ibm` package, follow these installation steps:\n\n```bash\npip install langchain-ibm\n```\n\n## Usage\n\n### Setting up\n\nTo use IBM's models, you must have an IBM Cloud user API key. Here's how to obtain and set up your API key:\n\n1. **Obtain an API Key:** For more details on how to create and manage an API key, refer to IBM's [documentation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui).\n2. **Set the API Key as an Environment Variable:** For security reasons, it's recommended to not hard-code your API key directly in your scripts. Instead, set it up as an environment variable. You can use the following code to prompt for the API key and set it as an environment variable:\n\n```python\nimport os\nfrom getpass import getpass\n\nwatsonx_api_key = getpass()\nos.environ[\"WATSONX_APIKEY\"] = watsonx_api_key\n```\n\nAlternatively, you can set the environment variable in your terminal.\n\n- **Linux/macOS:** Open your terminal and execute the following command:\n ```bash\n export WATSONX_APIKEY='your_ibm_api_key'\n ```\n To make this environment variable persistent across terminal sessions, add the above line to your `~/.bashrc`, `~/.bash_profile`, or `~/.zshrc` file.\n\n- **Windows:** For Command Prompt, use:\n ```cmd\n set WATSONX_APIKEY=your_ibm_api_key\n ```\n\n### Loading the model\n\nYou might need to adjust model parameters for different models or tasks. For more details on the parameters, refer to IBM's [documentation](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#metanames.GenTextParamsMetaNames).\n\n```python\nparameters = {\n \"decoding_method\": \"sample\",\n \"max_new_tokens\": 100,\n \"min_new_tokens\": 1,\n \"temperature\": 0.5,\n \"top_k\": 50,\n \"top_p\": 1,\n}\n```\n\nInitialize the WatsonxLLM class with the previously set parameters.\n\n```python\nfrom langchain_ibm import WatsonxLLM\n\nwatsonx_llm = WatsonxLLM(\n model_id=\"PASTE THE CHOSEN MODEL_ID HERE\",\n url=\"PASTE YOUR URL HERE\",\n project_id=\"PASTE YOUR PROJECT_ID HERE\",\n params=parameters,\n)\n```\n\n**Note:**\n- You must provide a `project_id` or `space_id`. For more information refer to IBM's [documentation](https://www.ibm.com/docs/en/watsonx-as-a-service?topic=projects).\n- Depending on the region of your provisioned service instance, use one of the URLs described [here](https://ibm.github.io/watsonx-ai-python-sdk/setup_cloud.html#authentication).\n- You need to specify the model you want to use for inferencing through `model_id`. You can find the list of available models [here](https://ibm.github.io/watsonx-ai-python-sdk/fm_model.html#ibm_watsonx_ai.foundation_models.utils.enums.ModelTypes).\n\n\nAlternatively, you can use Cloud Pak for Data credentials. 
For more details, refer to IBM's [documentation](https://ibm.github.io/watsonx-ai-python-sdk/setup_cpd.html).\n\n```python\nwatsonx_llm = WatsonxLLM(\n model_id=\"ibm/granite-13b-instruct-v2\",\n url=\"PASTE YOUR URL HERE\",\n username=\"PASTE YOUR USERNAME HERE\",\n password=\"PASTE YOUR PASSWORD HERE\",\n instance_id=\"openshift\",\n version=\"4.8\",\n project_id=\"PASTE YOUR PROJECT_ID HERE\",\n params=parameters,\n)\n```\n\n### Create a Chain\n\nCreate a `PromptTemplate` object, which will be responsible for creating a random question.\n\n```python\nfrom langchain.prompts import PromptTemplate\n\ntemplate = \"Generate a random question about {topic}: Question: \"\nprompt = PromptTemplate.from_template(template)\n```\n\nProvide a topic and run the LLMChain.\n\n```python\nfrom langchain.chains import LLMChain\n\nllm_chain = LLMChain(prompt=prompt, llm=watsonx_llm)\nresponse = llm_chain.invoke(\"dog\")\nprint(response)\n```\n\n### Calling the Model Directly\nTo obtain completions, you can call the model directly using a string prompt.\n\n```python\n# Calling a single prompt\n\nresponse = watsonx_llm.invoke(\"Who is man's best friend?\")\nprint(response)\n```\n\n```python\n# Calling multiple prompts\n\nresponse = watsonx_llm.generate(\n [\n \"The fastest dog in the world?\",\n \"Describe your chosen dog breed\",\n ]\n)\nprint(response)\n```\n\n### Streaming the Model output\n\nYou can stream the model output.\n\n```python\nfor chunk in watsonx_llm.stream(\n \"Describe your favorite breed of dog and why it is your favorite.\"\n):\n print(chunk, end=\"\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\mistralai\\README.md", + "filetype": ".md", + "content": "# langchain-mistralai\n\nThis package contains the LangChain integrations for [MistralAI](https://docs.mistral.ai) through their [mistralai](https://pypi.org/project/mistralai/) SDK.\n\n## Installation\n\n```bash\npip install -U langchain-mistralai\n```\n\n## Chat Models\n\nThis package contains the `ChatMistralAI` class, which is the recommended way to interface with MistralAI models.\n\nTo use, install the requirements, and configure your environment.\n\n```bash\nexport MISTRAL_API_KEY=your-api-key\n```\n\nThen initialize:\n\n```python\nfrom langchain_core.messages import HumanMessage\nfrom langchain_mistralai.chat_models import ChatMistralAI\n\nchat = ChatMistralAI(model=\"mistral-small\")\nmessages = [HumanMessage(content=\"say a brief hello\")]\nchat.invoke(messages)\n```\n\n`ChatMistralAI` also supports async and streaming functionality:\n\n```python\n# For async...\nawait chat.ainvoke(messages)\n\n# For streaming...\nfor chunk in chat.stream(messages):\n print(chunk.content, end=\"\", flush=True)\n```\n\n## Embeddings\n\nWith `MistralAIEmbeddings` (importable from `langchain_mistralai`), you can directly use the default model 'mistral-embed', or set a different one if available.\n\n### Choose model\n\n`embedding = MistralAIEmbeddings()`\n\n`embedding.model = 'mistral-embed'`\n\n### Simple query\n\n`res_query = embedding.embed_query(\"The test information\")`\n\n### Documents\n\n`res_document = embedding.embed_documents([\"test1\", \"another test\"])`" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\mongodb\\README.md", + "filetype": ".md", + "content": "# langchain-mongodb\n\n## Installation\n```bash\npip install -U langchain-mongodb\n```\n\n## Usage\n- See [integrations doc](../../../docs/docs/integrations/vectorstores/mongodb.ipynb) for more in-depth usage instructions.\n- See [Getting Started with the LangChain Integration](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/#get-started-with-the-langchain-integration) for a walkthrough on using your first LangChain implementation with MongoDB Atlas.\n\n## Using MongoDBAtlasVectorSearch\n```python\nimport os\n\nfrom langchain_mongodb import MongoDBAtlasVectorSearch\nfrom langchain_openai import OpenAIEmbeddings\nfrom pymongo import MongoClient\n\n# Pull MongoDB Atlas URI from environment variables\nMONGODB_ATLAS_CLUSTER_URI = os.environ.get(\"MONGODB_ATLAS_CLUSTER_URI\")\n\nDB_NAME = \"langchain_db\"\nCOLLECTION_NAME = \"test\"\nATLAS_VECTOR_SEARCH_INDEX_NAME = \"index_name\"\n\n# Create the vector search via `from_connection_string`\nvector_search = MongoDBAtlasVectorSearch.from_connection_string(\n MONGODB_ATLAS_CLUSTER_URI,\n DB_NAME + \".\" + COLLECTION_NAME,\n OpenAIEmbeddings(disallowed_special=()),\n index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n)\n\n# Initialize the MongoDB python client, then create the vector search via instantiation\nclient = MongoClient(MONGODB_ATLAS_CLUSTER_URI)\nMONGODB_COLLECTION = client[DB_NAME][COLLECTION_NAME]\n\nvector_search_2 = MongoDBAtlasVectorSearch(\n collection=MONGODB_COLLECTION,\n embedding=OpenAIEmbeddings(disallowed_special=()),\n index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n)\n```\n\n
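Once constructed, the store can be queried like any other LangChain vector store. A minimal usage sketch (the query text and `k` are illustrative, and the collection is assumed to already contain embedded documents):\n\n```python\n# Return the 3 documents most similar to the query\nresults = vector_search.similarity_search(\"What is LangChain?\", k=3)\nfor doc in results:\n    print(doc.page_content)\n```\n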
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\nomic\\README.md", + "filetype": ".md", + "content": "# langchain-nomic\n\nThis package contains the LangChain integration with Nomic.\n\n## Installation\n\n```bash\npip install -U langchain-nomic\n```\n\nAnd you should configure credentials by setting the following environment variables:\n\n* `NOMIC_API_KEY`: your Nomic API key\n\n## Embeddings\n\nThe `NomicEmbeddings` class exposes embeddings from Nomic.\n\n```python\nfrom langchain_nomic import NomicEmbeddings\n\nembeddings = NomicEmbeddings()\nembeddings.embed_query(\"What is the meaning of life?\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\nvidia-ai-endpoints\\README.md", + "filetype": ".md", + "content": "# langchain-nvidia-ai-endpoints\n\nThe `langchain-nvidia-ai-endpoints` package contains LangChain integrations for chat models and embeddings powered by the [NVIDIA AI Foundation Model](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) playground environment. \n\n> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to hosted endpoints for generative AI models like Llama-2, SteerLM, Mistral, etc. Using the API, you can query live endpoints available on the [NVIDIA GPU Cloud (NGC)](https://catalog.ngc.nvidia.com/ai-foundation-models) to get quick results from a DGX-hosted cloud compute environment. All models are source-accessible and can be deployed on your own compute cluster.\n\nBelow is an example of how to use some common functionality surrounding text-generative and embedding models.\n\n## Installation\n\n\n```python\n%pip install -U --quiet langchain-nvidia-ai-endpoints\n```\n\n## Setup\n\n**To get started:**\n1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.\n2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.\n3. Select the `API` option and click `Generate Key`.\n4. Save the generated key as `NVIDIA_API_KEY`. 
From there, you should have access to the endpoints.\n\n\n```python\nimport getpass\nimport os\n\nif not os.environ.get(\"NVIDIA_API_KEY\", \"\").startswith(\"nvapi-\"):\n nvidia_api_key = getpass.getpass(\"Enter your NVIDIA AIPLAY API key: \")\n assert nvidia_api_key.startswith(\"nvapi-\"), f\"{nvidia_api_key[:5]}... is not a valid key\"\n os.environ[\"NVIDIA_API_KEY\"] = nvidia_api_key\n```\n\n\n```python\n## Core LC Chat Interface\nfrom langchain_nvidia_ai_endpoints import ChatNVIDIA\n\nllm = ChatNVIDIA(model=\"mixtral_8x7b\")\nresult = llm.invoke(\"Write a ballad about LangChain.\")\nprint(result.content)\n```\n\n\n## Stream, Batch, and Async\n\nThese models natively support streaming, and as is the case with all LangChain LLMs they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.\n\n\n```python\nprint(llm.batch([\"What's 2*3?\", \"What's 2*6?\"]))\n# Or via the async API\n# await llm.abatch([\"What's 2*3?\", \"What's 2*6?\"])\n```\n\n\n```python\nfor chunk in llm.stream(\"How far can a seagull fly in one day?\"):\n # Show the token separations\n print(chunk.content, end=\"|\")\n```\n\n\n```python\nasync for chunk in llm.astream(\"How long does it take for monarch butterflies to migrate?\"):\n print(chunk.content, end=\"|\")\n```\n\n## Supported models\n\nQuerying `available_models` will still give you all of the other models offered by your API credentials.\n\nThe `playground_` prefix is optional.\n\n\n```python\nlist(llm.available_models)\n\n\n# ['playground_llama2_13b',\n# 'playground_llama2_code_13b',\n# 'playground_clip',\n# 'playground_fuyu_8b',\n# 'playground_mistral_7b',\n# 'playground_nvolveqa_40k',\n# 'playground_yi_34b',\n# 'playground_nemotron_steerlm_8b',\n# 'playground_nv_llama2_rlhf_70b',\n# 'playground_llama2_code_34b',\n# 'playground_mixtral_8x7b',\n# 'playground_neva_22b',\n# 'playground_steerlm_llama_70b',\n# 'playground_nemotron_qa_8b',\n# 'playground_sdxl']\n```\n\n\n## Model types\n\nAll of these models above are supported and can be accessed via `ChatNVIDIA`. \n\nSome model types support unique prompting techniques and chat messages. We will review a few important ones below.\n\n\n**To find out more about a specific model, please navigate to the API section of an AI Foundation Model [as linked here](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ai-foundation/models/codellama-13b/api).**\n\n### General Chat\n\nModels such as `llama2_13b` and `mixtral_8x7b` are good all-around models that you can use with any LangChain chat messages. Example below.\n\n\n```python\nfrom langchain_nvidia_ai_endpoints import ChatNVIDIA\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\n\nprompt = ChatPromptTemplate.from_messages(\n [\n (\"system\", \"You are a helpful AI assistant named Fred.\"),\n (\"user\", \"{input}\")\n ]\n)\nchain = (\n prompt\n | ChatNVIDIA(model=\"llama2_13b\")\n | StrOutputParser()\n)\n\nfor txt in chain.stream({\"input\": \"What's your name?\"}):\n print(txt, end=\"\")\n```\n\n\n### Code Generation\n\nThese models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is `llama2_code_13b`.\n\n\n```python\nprompt = ChatPromptTemplate.from_messages(\n [\n (\"system\", \"You are an expert coding AI. 
Respond only in valid python; no narration whatsoever.\"),\n (\"user\", \"{input}\")\n ]\n)\nchain = (\n prompt\n | ChatNVIDIA(model=\"llama2_code_13b\")\n | StrOutputParser()\n)\n\nfor txt in chain.stream({\"input\": \"How do I solve this fizz buzz problem?\"}):\n print(txt, end=\"\")\n```\n\n## Steering LLMs\n\n> [SteerLM-optimized models](https://developer.nvidia.com/blog/announcing-steerlm-a-simple-and-practical-technique-to-customize-llms-during-inference/) support \"dynamic steering\" of model outputs at inference time.\n\nThis lets you \"control\" the complexity, verbosity, and creativity of the model via integer labels on a scale from 0 to 9. Under the hood, these are passed as a special type of assistant message to the model.\n\nThe \"steer\" models support this type of input, such as `steerlm_llama_70b`.\n\n\n```python\nfrom langchain_nvidia_ai_endpoints import ChatNVIDIA\n\nllm = ChatNVIDIA(model=\"steerlm_llama_70b\")\n# Try making it uncreative and not verbose\ncomplex_result = llm.invoke(\n \"What's a PB&J?\",\n labels={\"creativity\": 0, \"complexity\": 3, \"verbosity\": 0}\n)\nprint(\"Un-creative\\n\")\nprint(complex_result.content)\n\n# Try making it very creative and verbose\nprint(\"\\n\\nCreative\\n\")\ncreative_result = llm.invoke(\n \"What's a PB&J?\",\n labels={\"creativity\": 9, \"complexity\": 3, \"verbosity\": 9}\n)\nprint(creative_result.content)\n```\n\n\n#### Use within LCEL\n\nThe labels are passed as invocation params. You can attach these to the LLM using the `bind` method to include them within a declarative, functional chain. Below is an example.\n\n\n```python\nfrom langchain_nvidia_ai_endpoints import ChatNVIDIA\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\n\nprompt = ChatPromptTemplate.from_messages(\n [\n (\"system\", \"You are a helpful AI assistant named Fred.\"),\n (\"user\", \"{input}\")\n ]\n)\nchain = (\n prompt\n | ChatNVIDIA(model=\"steerlm_llama_70b\").bind(labels={\"creativity\": 9, \"complexity\": 0, \"verbosity\": 9})\n | StrOutputParser()\n)\n\nfor txt in chain.stream({\"input\": \"Why is a PB&J?\"}):\n print(txt, end=\"\")\n```\n\n## Multimodal\n\nNVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over.\n\nThese models also accept `labels`, similar to the Steering LLMs above. In addition to `creativity`, `complexity`, and `verbosity`, these models support a `quality` toggle.\n\nAn example model supporting multimodal inputs is `playground_neva_22b`.\n\nThese models accept LangChain's standard image formats. Below are examples.\n\n\n```python\nimport requests\n\nimage_url = \"https://picsum.photos/seed/kitten/300/200\"\nimage_content = requests.get(image_url).content\n```\n\nInitialize the model like so:\n\n```python\nfrom langchain_nvidia_ai_endpoints import ChatNVIDIA\n\nllm = ChatNVIDIA(model=\"playground_neva_22b\")\n```\n\n#### Passing an image as a URL\n\n\n```python\nfrom langchain_core.messages import HumanMessage\n\nllm.invoke(\n [\n HumanMessage(content=[\n {\"type\": \"text\", \"text\": \"Describe this image:\"},\n {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n ])\n ])\n```\n\n\n```python\n### You can specify the labels for steering here as well. 
You can try setting a low verbosity, for instance\n\nfrom langchain_core.messages import HumanMessage\n\nllm.invoke(\n [\n HumanMessage(content=[\n {\"type\": \"text\", \"text\": \"Describe this image:\"},\n {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n ])\n ],\n labels={\n \"creativity\": 0,\n \"quality\": 9,\n \"complexity\": 0,\n \"verbosity\": 0\n }\n)\n```\n\n\n\n#### Passing an image as a base64 encoded string\n\n\n```python\nimport base64\nb64_string = base64.b64encode(image_content).decode('utf-8')\nllm.invoke(\n [\n HumanMessage(content=[\n {\"type\": \"text\", \"text\": \"Describe this image:\"},\n {\"type\": \"image_url\", \"image_url\": {\"url\": f\"data:image/png;base64,{b64_string}\"}},\n ])\n ])\n```\n\n#### Directly within the string\n\nThe NVIDIA API uniquely accepts images as base64 images inlined within HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly.\n\n\n```python\nbase64_with_mime_type = f\"data:image/png;base64,{b64_string}\"\nllm.invoke(\n f'What\\'s in this image?\\n<img src=\"{base64_with_mime_type}\" />'\n)\n```\n\n\n\n## RAG: Context models\n\nNVIDIA also has Q&A models that support a special \"context\" chat message containing retrieved context (such as documents within a RAG chain). This is useful to avoid prompt-injecting the model.\n\n**Note:** Only \"user\" (human) and \"context\" chat messages are supported for these models, not the system or AI messages used in conversational flows.\n\nThe `_qa_` models like `nemotron_qa_8b` support this.\n\n\n```python\nfrom langchain_nvidia_ai_endpoints import ChatNVIDIA\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.output_parsers import StrOutputParser\nfrom langchain_core.messages import ChatMessage\nprompt = ChatPromptTemplate.from_messages(\n [\n ChatMessage(role=\"context\", content=\"Parrots and Cats have signed the peace accord.\"),\n (\"user\", \"{input}\")\n ]\n)\nllm = ChatNVIDIA(model=\"nemotron_qa_8b\")\nchain = (\n prompt\n | llm\n | StrOutputParser()\n)\nchain.invoke({\"input\": \"What was signed?\"})\n```\n\n## Embeddings\n\nYou can also connect to embeddings models through this package. 
Below is an example:\n\n```python\nfrom langchain_nvidia_ai_endpoints import NVIDIAEmbeddings\n\nembedder = NVIDIAEmbeddings(model=\"nvolveqa_40k\")\nembedder.embed_query(\"What's the temperature today?\")\nembedder.embed_documents([\n \"The temperature is 42 degrees.\",\n \"Class is dismissed at 9 PM.\"\n])\n```\n\nBy default, the embedding model will use the \"passage\" type for documents and \"query\" type for queries, but you can fix this on the instance.\n\n```python\nquery_embedder = NVIDIAEmbeddings(model=\"nvolveqa_40k\", model_type=\"query\")\ndoc_embedder = NVIDIAEmbeddings(model=\"nvolveqa_40k\", model_type=\"passage\")\n```\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\nvidia-trt\\README.md", + "filetype": ".md", + "content": "# langchain-nvidia-trt\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\openai\\README.md", + "filetype": ".md", + "content": "# langchain-openai\n\nThis package contains the LangChain integrations for OpenAI through their `openai` SDK.\n\n## Installation and Setup\n\n- Install the LangChain partner package\n```bash\npip install langchain-openai\n```\n- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)\n\n\n## LLM\n\nSee a [usage example](http://python.langchain.com/docs/integrations/llms/openai).\n\n```python\nfrom langchain_openai import OpenAI\n```\n\nIf you are using a model hosted on `Azure`, you should use a different wrapper for that:\n```python\nfrom langchain_openai import AzureOpenAI\n```\nFor a more detailed walkthrough of the `Azure` wrapper, see [here](http://python.langchain.com/docs/integrations/llms/azure_openai)\n\n\n## Chat model\n\nSee a [usage example](http://python.langchain.com/docs/integrations/chat/openai).\n\n```python\nfrom langchain_openai import ChatOpenAI\n```\n\nIf you are using a model hosted on `Azure`, you should use a different wrapper for that:\n```python\nfrom langchain_openai import AzureChatOpenAI\n```\nFor a more detailed walkthrough of the `Azure` wrapper, see [here](http://python.langchain.com/docs/integrations/chat/azure_chat_openai)\n\n\n## Text Embedding Model\n\nSee a [usage example](http://python.langchain.com/docs/integrations/text_embedding/openai)\n\n```python\nfrom langchain_openai import OpenAIEmbeddings\n```\n\nIf you are using a model hosted on `Azure`, you should use a different wrapper for that:\n```python\nfrom langchain_openai import AzureOpenAIEmbeddings\n```\nFor a more detailed walkthrough of the `Azure` wrapper, see [here](https://python.langchain.com/docs/integrations/text_embedding/azureopenai)" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\pinecone\\README.md", + "filetype": ".md", + "content": "# langchain-pinecone\n\nThis package contains the LangChain integration with Pinecone.\n\n## Installation\n\n```bash\npip install -U langchain-pinecone\n```\n\nAnd you should configure credentials by setting the following environment variables:\n\n- `PINECONE_API_KEY`\n- `PINECONE_INDEX_NAME`\n\n## Usage\n\nThe `PineconeVectorStore` class exposes the connection to the Pinecone vector store.\n\n```python\nfrom langchain_pinecone import PineconeVectorStore\n\nembeddings = ... # use a LangChain Embeddings class\n\nvectorstore = PineconeVectorStore(embedding=embeddings)\n```\n\n
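A short usage sketch (assumes the index referenced by the `PINECONE_INDEX_NAME` environment variable already exists; the texts and query are illustrative):\n\n```python\n# Embed and upsert a few texts, then query them back\nvectorstore.add_texts([\"LangChain integrates with Pinecone.\", \"Pinecone is a vector database.\"])\nprint(vectorstore.similarity_search(\"What is Pinecone?\", k=1))\n```\n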
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\robocorp\\README.md", + "filetype": ".md", + "content": "# langchain-robocorp\n\nThis package contains the LangChain integrations for [Robocorp](https://github.com/robocorp/robocorp).\n\n## Installation\n\n```bash\npip install -U langchain-robocorp\n```\n\n## Action Server Toolkit\n\nSee [ActionServerToolkit](https://python.langchain.com/docs/integrations/toolkits/robocorp) for detailed documentation.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\partners\\together\\README.md", + "filetype": ".md", + "content": "# langchain-together\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\libs\\text-splitters\\README.md", + "filetype": ".md", + "content": "# \ud83e\udd9c\u2702\ufe0f LangChain Text Splitters\n\n[![Downloads](https://static.pepy.tech/badge/langchain_text_splitters/month)](https://pepy.tech/project/langchain_text_splitters)\n[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)\n\n## Quick Install\n\n```bash\npip install langchain-text-splitters\n```\n\n## What is it?\n\nLangChain Text Splitters contains utilities for splitting a wide variety of text documents into chunks.\n\nFor full documentation see the [API reference](https://api.python.langchain.com/en/stable/text_splitters_api_reference.html)\nand the [Text Splitters](https://python.langchain.com/docs/modules/data_connection/document_transformers/) module in the main docs.\n\n
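For example, a minimal sketch with the recursive character splitter (the chunk sizes and sample text are illustrative):\n\n```python\nfrom langchain_text_splitters import RecursiveCharacterTextSplitter\n\n# Prefer paragraph and sentence boundaries while capping chunk length\nsplitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)\nchunks = splitter.split_text(\n    \"LangChain Text Splitters breaks long documents into smaller chunks \"\n    \"that fit inside model context windows and embed more precisely.\"\n)\nprint(chunks)\n```\n\n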
## \ud83d\udcd5 Releases & Versioning\n\n`langchain-text-splitters` is currently on version `0.0.x`.\n\nMinor version increases will occur for:\n\n- Breaking changes for any public interfaces NOT marked `beta`\n\nPatch version increases will occur for:\n\n- Bug fixes\n- New features\n- Any changes to private interfaces\n- Any changes to `beta` features\n\n## \ud83d\udc81 Contributing\n\nAs an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.\n\nFor detailed information on how to contribute, see the [Contributing Guide](https://python.langchain.com/docs/contributing/).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\README.md", + "filetype": ".md", + "content": "# LangChain Templates\n\nLangChain Templates are the easiest and fastest way to build a production-ready LLM application.\nThese templates serve as a set of reference architectures for a wide variety of popular LLM use cases.\nThey are all in a standard format which makes it easy to deploy them with [LangServe](https://github.com/langchain-ai/langserve).\n\n\ud83d\udea9 We will be releasing a hosted version of LangServe for one-click deployments of LangChain applications. [Sign up here](https://airtable.com/app0hN6sd93QcKubv/shrAjst60xXa6quV2) to get on the waitlist.\n\n## Quick Start\n\nTo use, first install the LangChain CLI.\n\n```shell\npip install -U langchain-cli\n```\n\nNext, create a new LangChain project:\n\n```shell\nlangchain app new my-app\n```\n\nThis will create a new directory called `my-app` with two folders:\n\n- `app`: This is where LangServe code will live\n- `packages`: This is where your chains or agents will live\n\nTo pull in an existing template as a package, you first need to go into your new project:\n\n```shell\ncd my-app\n```\n\nAnd you can then add a template as a project.\nIn this getting started guide, we will add a simple `pirate-speak` project.\nAll this project does is convert user input into pirate speak.\n\n```shell\nlangchain app add pirate-speak\n```\n\nThis will pull in the specified template into `packages/pirate-speak`.\n\nYou will then be prompted if you want to install it. \nThis is the equivalent of running `pip install -e packages/pirate-speak`.\nYou should generally accept this (or run that same command afterwards).\nWe install it with `-e` so that if you modify the template at all (which you likely will) the changes are updated.\n\nAfter that, it will ask you if you want to generate route code for this project.\nThis is code you need to add to your app to start using this chain.\nIf we accept, we will see the following code generated:\n\n```python\nfrom pirate_speak.chain import chain as pirate_speak_chain\n\nadd_routes(app, pirate_speak_chain, path=\"/pirate-speak\")\n```\n\nYou can now edit the template you pulled down.\nYou can change the code files in `packages/pirate-speak` to use a different model, different prompt, different logic.\nNote that the above code snippet always expects the final chain to be importable as `from pirate_speak.chain import chain`,\nso you should either keep the structure of the package similar enough to respect that or be prepared to update that code snippet.\n\nOnce you have done as much of that as you want, you then need to modify `app/server.py` in order to have LangServe use this project.\nSpecifically, you should add the above code snippet to `app/server.py` so that file looks like:\n\n```python\nfrom fastapi import FastAPI\nfrom langserve import add_routes\nfrom pirate_speak.chain import chain as pirate_speak_chain\n\napp = FastAPI()\n\nadd_routes(app, pirate_speak_chain, path=\"/pirate-speak\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nFor this particular application, we will use OpenAI as the LLM, so we need to export our OpenAI API key:\n\n```shell\nexport OPENAI_API_KEY=sk-...\n```\n\nYou can then spin up production-ready endpoints, along with a playground, by running:\n\n```shell\nlangchain serve\n```\n\nThis now gives a fully deployed LangServe application.\nFor example, you get a playground out-of-the-box at [http://127.0.0.1:8000/pirate-speak/playground/](http://127.0.0.1:8000/pirate-speak/playground/):\n\n![Screenshot of the LangServe Playground interface with input and output fields demonstrating pirate speak conversion.](docs/playground.png \"LangServe Playground Interface\")\n\nAccess API documentation at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\n\n![Screenshot of the API documentation interface showing available endpoints for the pirate-speak application.](docs/docs.png \"API Documentation Interface\")\n\nUse the LangServe python or js SDK to interact with the API as if it were a regular [Runnable](https://python.langchain.com/docs/expression_language/).\n\n```python\nfrom langserve import RemoteRunnable\n\napi = RemoteRunnable(\"http://127.0.0.1:8000/pirate-speak\")\napi.invoke({\"text\": \"hi\"})\n```\n\nThat's it for the quick start!\nYou have successfully downloaded your first template and deployed it with LangServe.\n\n\n## Additional Resources\n\n### [Index of Templates](docs/INDEX.md)\n\nExplore the many templates available to use - from advanced RAG to agents.\n\n### [Contributing](docs/CONTRIBUTING.md)\n\nWant to contribute your own template? It's pretty easy! These instructions walk through how to do that.\n\n### [Launching LangServe from a Package](docs/LAUNCHING_PACKAGE.md)\n\nYou can also launch LangServe from a package directly (without having to create a new project).\nThese instructions cover how to do that.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\anthropic-iterative-search\\README.md", + "filetype": ".md", + "content": "\n# anthropic-iterative-search\n\nThis template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.\n\nIt is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).\n\n## Environment Setup\n\nSet the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package anthropic-iterative-search\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add anthropic-iterative-search\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom anthropic_iterative_search import chain as anthropic_iterative_search_chain\n\nadd_routes(app, anthropic_iterative_search_chain, path=\"/anthropic-iterative-search\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\n\nThat's it for the quick start!\nYou have successfully downloaded your first template and deployed it with LangServe.\n\n\n## Additional Resources\n\n### [Index of Templates](docs/INDEX.md)\n\nExplore the many templates available to use - from advanced RAG to agents.\n\n### [Contributing](docs/CONTRIBUTING.md)\n\nWant to contribute your own template? It's pretty easy! These instructions walk through how to do that.\n\n### [Launching LangServe from a Package](docs/LAUNCHING_PACKAGE.md)\n\nYou can also launch LangServe from a package directly (without having to create a new project).\nThese instructions cover how to do that.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\anthropic-iterative-search\\README.md", + "filetype": ".md", + "content": "\n# anthropic-iterative-search\n\nThis template will create a virtual research assistant with the ability to search Wikipedia to find answers to your questions.\n\nIt is heavily inspired by [this notebook](https://github.com/anthropics/anthropic-cookbook/blob/main/long_context/wikipedia-search-cookbook.ipynb).\n\n## Environment Setup\n\nSet the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package anthropic-iterative-search\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add anthropic-iterative-search\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom anthropic_iterative_search import chain as anthropic_iterative_search_chain\n\nadd_routes(app, anthropic_iterative_search_chain, path=\"/anthropic-iterative-search\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/anthropic-iterative-search/playground](http://127.0.0.1:8000/anthropic-iterative-search/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/anthropic-iterative-search\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\basic-critique-revise\\README.md", + "filetype": ".md", + "content": "# basic-critique-revise\n\nIteratively generate schema candidates and revise them based on errors.\n\n## Environment Setup\n\nThis template uses OpenAI function calling, so you will need to set the `OPENAI_API_KEY` environment variable in order to use this template.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package basic-critique-revise\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add basic-critique-revise\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom basic_critique_revise import chain as basic_critique_revise_chain\n\nadd_routes(app, basic_critique_revise_chain, path=\"/basic-critique-revise\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/basic-critique-revise/playground](http://127.0.0.1:8000/basic-critique-revise/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/basic-critique-revise\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\bedrock-jcvd\\README.md", + "filetype": ".md", + "content": "# Bedrock JCVD \ud83d\udd7a\ud83e\udd4b\n\n## Overview\n\nA LangChain template that uses [Anthropic's Claude on Amazon Bedrock](https://aws.amazon.com/bedrock/claude/) to behave like JCVD.\n\n> I am the Fred Astaire of Chatbots! 
\ud83d\udd7a\n\n![Animated GIF of Jean-Claude Van Damme dancing.](https://media.tenor.com/CVp9l7g3axwAAAAj/jean-claude-van-damme-jcvd.gif \"Jean-Claude Van Damme Dancing\")\n\n## Environment Setup\n\n### AWS Credentials\n\nThis template uses [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), the AWS SDK for Python, to call [Amazon Bedrock](https://aws.amazon.com/bedrock/). You **must** configure both AWS credentials *and* an AWS Region in order to make requests. \n\n> For information on how to do this, see the [AWS Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) (Developer Guide > Credentials).\n\n### Foundation Models\n\nBy default, this template uses [Anthropic's Claude v2](https://aws.amazon.com/about-aws/whats-new/2023/08/claude-2-foundation-model-anthropic-amazon-bedrock/) (`anthropic.claude-v2`).\n\n> To request access to a specific model, check out the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html) (Model access).\n\nTo use a different model, set the environment variable `BEDROCK_JCVD_MODEL_ID`. A list of base models is available in the [Amazon Bedrock User Guide](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html) (Use the API > API operations > Run inference > Base Model IDs).\n\n> The full list of available models (including base and [custom models](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html)) is available in the [Amazon Bedrock Console](https://docs.aws.amazon.com/bedrock/latest/userguide/using-console.html) under **Foundation Models** or by calling [`aws bedrock list-foundation-models`](https://docs.aws.amazon.com/cli/latest/reference/bedrock/list-foundation-models.html).
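\n\nFor illustration, this is roughly how a Bedrock chat model can be pointed at the configured model ID. This is a sketch using LangChain's `BedrockChat` wrapper, not the template's exact code; it assumes your AWS credentials and region are already configured for Boto3:\n\n```python\nimport os\n\nfrom langchain.chat_models import BedrockChat\n\n# Fall back to Claude v2 when BEDROCK_JCVD_MODEL_ID is not set.\nmodel_id = os.environ.get(\"BEDROCK_JCVD_MODEL_ID\", \"anthropic.claude-v2\")\nllm = BedrockChat(model_id=model_id)\n```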
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package bedrock-jcvd\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add bedrock-jcvd\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom bedrock_jcvd import chain as bedrock_jcvd_chain\n\nadd_routes(app, bedrock_jcvd_chain, path=\"/bedrock-jcvd\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).\n\nWe can also access the playground at [http://127.0.0.1:8000/bedrock-jcvd/playground](http://127.0.0.1:8000/bedrock-jcvd/playground)\n\n![Screenshot of the LangServe Playground interface with an example input and output demonstrating a Jean-Claude Van Damme voice imitation.](jcvd_langserve.png \"JCVD Playground\")" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\cassandra-entomology-rag\\README.md", + "filetype": ".md", + "content": "\n# cassandra-entomology-rag\n\nThis template will perform RAG using Apache Cassandra\u00ae or Astra DB through CQL (the `Cassandra` vector store class).\n\n## Environment Setup\n\nFor the setup, you will require:\n- an [Astra](https://astra.datastax.com) Vector Database. You must have a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure), specifically the string starting with `AstraCS:...`.\n- your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier).\n- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access))\n\nYou may also use a regular Cassandra cluster. In this case, provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.\n\nThe connection parameters and secrets must be provided through environment variables. Refer to `.env.template` for the required variables.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package cassandra-entomology-rag\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add cassandra-entomology-rag\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom cassandra_entomology_rag import chain as cassandra_entomology_rag_chain\n\nadd_routes(app, cassandra_entomology_rag_chain, path=\"/cassandra-entomology-rag\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/cassandra-entomology-rag/playground](http://127.0.0.1:8000/cassandra-entomology-rag/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/cassandra-entomology-rag\")\n```\n\n## Reference\n\nStand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_cassandra_entomology_rag).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\cassandra-entomology-rag\\sources.txt", + "filetype": ".txt", + "content": "# source: https://www.thoughtco.com/a-guide-to-the-twenty-nine-insect-orders-1968419\n\nOrder Thysanura: The silverfish and firebrats are found in the order Thysanura. They are wingless insects often found in people's attics, and have a lifespan of several years. There are about 600 species worldwide.\nOrder Diplura: Diplurans are the most primitive insect species, with no eyes or wings. They have the unusual ability among insects to regenerate body parts. There are over 400 members of the order Diplura in the world.\nOrder Protura: Another very primitive group, the proturans have no eyes, no antennae, and no wings. They are uncommon, with perhaps fewer than 100 species known.\nOrder Collembola: The order Collembola includes the springtails, primitive insects without wings. There are approximately 2,000 species of Collembola worldwide.\nOrder Ephemeroptera: The mayflies of order Ephemeroptera are short-lived, and undergo incomplete metamorphosis. The larvae are aquatic, feeding on algae and other plant life. Entomologists have described about 2,100 species worldwide.\nOrder Odonata: The order Odonata includes dragonflies and damselflies, which undergo incomplete metamorphosis. They are predators of other insects, even in their immature stage. There are about 5,000 species in the order Odonata.\nOrder Plecoptera: The stoneflies of order Plecoptera are aquatic and undergo incomplete metamorphosis. The nymphs live under rocks in well-flowing streams. Adults are usually seen on the ground along stream and river banks. There are roughly 3,000 species in this group.\nOrder Grylloblatodea: Sometimes referred to as \"living fossils,\" the insects of the order Grylloblatodea have changed little from their ancient ancestors. This order is the smallest of all the insect orders, with perhaps only 25 known species living today. Grylloblatodea live at elevations above 1500 ft., and are commonly named ice bugs or rock crawlers.\nOrder Orthoptera: These are familiar insects (grasshoppers, locusts, katydids, and crickets) and one of the largest orders of herbivorous insects. Many species in the order Orthoptera can produce and detect sounds. Approximately 20,000 species exist in this group.\nOrder Phasmida: The order Phasmida are masters of camouflage, the stick and leaf insects. They undergo incomplete metamorphosis and feed on leaves. 
There are some 3,000 insects in this group, but only a small fraction of this number are leaf insects. Stick insects are the longest insects in the world.\nOrder Dermaptera: This order contains the earwigs, an easily recognized insect that often has pincers at the end of the abdomen. Many earwigs are scavengers, eating both plant and animal matter. The order Dermaptera includes fewer than 2,000 species.\nOrder Embiidina: The order Embioptera is another ancient order with few species, perhaps only 200 worldwide. The web spinners have silk glands in their front legs and weave nests under leaf litter and in tunnels where they live. Webspinners live in tropical or subtropical climates.\nOrder Dictyoptera: The order Dictyoptera includes roaches and mantids. Both groups have long, segmented antennae and leathery forewings held tightly against their backs. They undergo incomplete metamorphosis. Worldwide, there are approximately 6,000 species in this order, most living in tropical regions.\nOrder Isoptera: Termites feed on wood and are important decomposers in forest ecosystems. They also feed on wood products and are thought of as pests for the destruction they cause to man-made structures. There are between 2,000 and 3,000 species in this order.\nOrder Zoraptera: Little is known about the angel insects, which belong to the order Zoraptera. Though they are grouped with winged insects, many are actually wingless. Members of this group are blind, small, and often found in decaying wood. There are only about 30 described species worldwide.\nOrder Psocoptera: Bark lice forage on algae, lichen, and fungus in moist, dark places. Booklice frequent human dwellings, where they feed on book paste and grains. They undergo incomplete metamorphosis. Entomologists have named about 3,200 species in the order Psocoptera.\nOrder Mallophaga: Biting lice are ectoparasites that feed on birds and some mammals. There are an estimated 3,000 species in the order Mallophaga, all of which undergo incomplete metamorphosis.\nOrder Siphunculata: The order Siphunculata are the sucking lice, which feed on the fresh blood of mammals. Their mouthparts are adapted for sucking or siphoning blood. There are only about 500 species of sucking lice.\nOrder Hemiptera: Most people use the term \"bugs\" to mean insects; an entomologist uses the term to refer to the order Hemiptera. The Hemiptera are the true bugs, and include cicadas, aphids, spittlebugs, and others. This is a large group of over 70,000 species worldwide.\nOrder Thysanoptera: The thrips of order Thysanoptera are small insects that feed on plant tissue. Many are considered agricultural pests for this reason. Some thrips prey on other small insects as well. This order contains about 5,000 species.\nOrder Neuroptera: Commonly called the order of lacewings, this group actually includes a variety of other insects, too: dobsonflies, owlflies, mantidflies, antlions, snakeflies, and alderflies. Insects in the order Neuroptera undergo complete metamorphosis. Worldwide, there are over 5,500 species in this group.\nOrder Mecoptera: This order includes the scorpionflies, which live in moist, wooded habitats. Scorpionflies are omnivorous in both their larval and adult forms. The larvae are caterpillar-like. There are fewer than 500 described species in the order Mecoptera.\nOrder Siphonaptera: Pet lovers fear insects in the order Siphonaptera - the fleas. Fleas are blood-sucking ectoparasites that feed on mammals, and rarely, birds. 
There are well over 2,000 species of fleas in the world.\nOrder Coleoptera: This group, the beetles and weevils, is the largest order in the insect world, with over 300,000 distinct species known. The order Coleoptera includes well-known families: june beetles, lady beetles, click beetles, and fireflies. All have hardened forewings that fold over the abdomen to protect the delicate hindwings used for flight.\nOrder Strepsiptera: Insects in this group are parasites of other insects, particularly bees, grasshoppers, and the true bugs. The immature Strepsiptera lies in wait on a flower and quickly burrows into any host insect that comes along. Strepsiptera undergo complete metamorphosis and pupate within the host insect's body.\nOrder Diptera: Diptera is one of the largest orders, with nearly 100,000 insects named to the order. These are the true flies, mosquitoes, and gnats. Insects in this group have modified hindwings which are used for balance during flight. The forewings function as the propellers for flying.\nOrder Lepidoptera: The butterflies and moths of the order Lepidoptera comprise the second largest group in the class Insecta. These well-known insects have scaly wings with interesting colors and patterns. You can often identify an insect in this order just by the wing shape and color.\nOrder Trichoptera: Caddisflies are nocturnal as adults and aquatic when immature. The caddisfly adults have silky hairs on their wings and body, which is key to identifying a Trichoptera member. The larvae spin traps for prey with silk. They also make cases from the silk and other materials that they carry and use for protection.\nOrder Hymenoptera: The order Hymenoptera includes many of the most common insects - ants, bees, and wasps. The larvae of some wasps cause trees to form galls, which then provide food for the immature wasps. Other wasps are parasitic, living in caterpillars, beetles, or even aphids. This is the third-largest insect order with just over 100,000 species.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\cassandra-synonym-caching\\README.md", + "filetype": ".md", + "content": "\n# cassandra-synonym-caching\n\nThis template provides a simple chain showcasing the usage of LLM Caching backed by Apache Cassandra\u00ae or Astra DB through CQL.\n
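\nAs a sketch of the idea (illustrative, not the template's exact `chain.py`), LLM caching in LangChain is enabled by assigning a cache to `langchain.llm_cache`; here it is backed by a Cassandra table, assuming a reachable cluster and an existing keyspace named `demo_keyspace` (parameter names may differ slightly across versions):\n\n```python\nimport langchain\nfrom cassandra.cluster import Cluster\nfrom langchain.cache import CassandraCache\nfrom langchain.llms import OpenAI\n\n# Connect to a local Cassandra cluster (adjust contact points as needed).\nsession = Cluster([\"127.0.0.1\"]).connect()\nlangchain.llm_cache = CassandraCache(session=session, keyspace=\"demo_keyspace\")\n\nllm = OpenAI()\nllm.predict(\"Give me a synonym for 'happy'\")  # first call hits the API\nllm.predict(\"Give me a synonym for 'happy'\")  # identical call is served from the cache\n```\n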
\n## Environment Setup\n\nTo set up your environment, you will need the following:\n\n- an [Astra](https://astra.datastax.com) Vector Database (free tier is fine!). **You need a [Database Administrator token](https://awesome-astra.github.io/docs/pages/astra/create-token/#c-procedure)**, in particular the string starting with `AstraCS:...`;\n- likewise, have your [Database ID](https://awesome-astra.github.io/docs/pages/astra/faq/#where-should-i-find-a-database-identifier) ready; you will have to enter it below;\n- an **OpenAI API Key**. (More info [here](https://cassio.org/start_here/#llm-access); note that, out of the box, this demo supports only OpenAI unless you modify the code.)\n\n_Note:_ you can alternatively use a regular Cassandra cluster: to do so, make sure you provide the `USE_CASSANDRA_CLUSTER` entry as shown in `.env.template` and the subsequent environment variables to specify how to connect to it.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package cassandra-synonym-caching\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add cassandra-synonym-caching\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom cassandra_synonym_caching import chain as cassandra_synonym_caching_chain\n\nadd_routes(app, cassandra_synonym_caching_chain, path=\"/cassandra-synonym-caching\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/cassandra-synonym-caching/playground](http://127.0.0.1:8000/cassandra-synonym-caching/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/cassandra-synonym-caching\")\n```\n\n## Reference\n\nStand-alone LangServe template repo: [here](https://github.com/hemidactylus/langserve_cassandra_synonym_caching).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\chain-of-note-wiki\\README.md", + "filetype": ".md", + "content": "# Chain-of-Note (Wikipedia)\n\nImplements Chain-of-Note, as described in [the paper by Yu, et al.](https://arxiv.org/pdf/2311.09210.pdf). Uses Wikipedia for retrieval.\n\nCheck out the prompt being used [here](https://smith.langchain.com/hub/bagatur/chain-of-note-wiki).\n\n## Environment Setup\n\nUses the Anthropic claude-2 chat model. Set your Anthropic API key:\n```bash\nexport ANTHROPIC_API_KEY=\"...\"\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package chain-of-note-wiki\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add chain-of-note-wiki\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom chain_of_note_wiki import chain as chain_of_note_wiki_chain\n\nadd_routes(app, chain_of_note_wiki_chain, path=\"/chain-of-note-wiki\")\n```\n\n(Optional) Let's now configure LangSmith. 
\nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/chain-of-note-wiki/playground](http://127.0.0.1:8000/chain-of-note-wiki/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/chain-of-note-wiki\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\chat-bot-feedback\\README.md", + "filetype": ".md", + "content": "# Chat Bot Feedback Template\n\nThis template shows how to evaluate your chat bot without explicit user feedback. It defines a simple chat bot in [chain.py](https://github.com/langchain-ai/langchain/blob/master/templates/chat-bot-feedback/chat_bot_feedback/chain.py) and a custom evaluator that scores bot response effectiveness based on the subsequent user response. You can apply this run evaluator to your own chat bot by calling `with_config` on the chat bot before serving. You can also directly deploy your chat app using this template.\n\n[Chat bots](https://python.langchain.com/docs/use_cases/chatbots) are one of the most common interfaces for deploying LLMs. The quality of chat bots varies, making continuous development important. But users are reluctant to leave explicit feedback through mechanisms like thumbs-up or thumbs-down buttons. Furthermore, traditional analytics such as \"session length\" or \"conversation length\" often lack clarity. However, multi-turn conversations with a chat bot can provide a wealth of information, which we can transform into metrics for fine-tuning, evaluation, and product analytics.\n\nTaking [Chat Langchain](https://chat.langchain.com/) as a case study, only about 0.04% of all queries receive explicit feedback. Yet, approximately 70% of the queries are follow-ups to previous questions. A significant portion of these follow-up queries contain useful information we can use to infer the quality of the previous AI response. \n\n\nThis template helps solve this \"feedback scarcity\" problem. 
Below is an example invocation of this chat bot:\n\n[![Screenshot of a chat bot interaction where the AI responds in a pirate accent to a user asking where their keys are.](./static/chat_interaction.png \"Chat Bot Interaction Example\")](https://smith.langchain.com/public/3378daea-133c-4fe8-b4da-0a3044c5dbe8/r?runtab=1)\n\nWhen the user responds to this ([link](https://smith.langchain.com/public/a7e2df54-4194-455d-9978-cecd8be0df1e/r)), the response evaluator is invoked, resulting in the following evaluation run:\n\n[![Screenshot of an evaluator run showing the AI's response effectiveness score based on the user's follow-up message expressing frustration.](./static/evaluator.png \"Chat Bot Evaluator Run\")](https://smith.langchain.com/public/534184ee-db8f-4831-a386-3f578145114c/r)\n\nAs shown, the evaluator sees that the user is increasingly frustrated, indicating that the prior response was not effective.\n\n## LangSmith Feedback\n\n[LangSmith](https://smith.langchain.com/) is a platform for building production-grade LLM applications. Beyond its debugging and offline evaluation features, LangSmith helps you capture both user and model-assisted feedback to refine your LLM application. This template uses an LLM to generate feedback for your application, which you can use to continuously improve your service. For more examples on collecting feedback using LangSmith, consult the [documentation](https://docs.smith.langchain.com/cookbook/feedback-examples).\n\n## Evaluator Implementation\n\nThe user feedback is inferred by a custom `RunEvaluator`. This evaluator is called using the `EvaluatorCallbackHandler`, which runs it in a separate thread to avoid interfering with the chat bot's runtime. You can use this custom evaluator on any compatible chat bot by calling the following function on your LangChain object:\n\n```python\nmy_chain.with_config(\n    callbacks=[\n        EvaluatorCallbackHandler(\n            evaluators=[\n                ResponseEffectivenessEvaluator(evaluate_response_effectiveness)\n            ]\n        )\n    ],\n)\n```\n\nThe evaluator instructs an LLM, specifically `gpt-3.5-turbo`, to evaluate the AI's most recent chat message based on the user's followup response. It generates a score and accompanying reasoning that is converted to feedback in LangSmith, applied to the value provided as the `last_run_id`.\n\nThe prompt used within the LLM [is available on the hub](https://smith.langchain.com/hub/wfh/response-effectiveness). Feel free to customize it with things like additional app context (such as the goal of the app or the types of questions it should respond to) or \"symptoms\" you'd like the LLM to focus on. This evaluator also utilizes OpenAI's function-calling API to ensure a more consistent, structured output for the grade.\n
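\nFor orientation, a run evaluator of this shape can be defined by subclassing `RunEvaluator` from the LangSmith SDK. The sketch below is illustrative only (the template's real evaluator prompts `gpt-3.5-turbo` for the grade; the heuristic here is a stand-in):\n\n```python\nfrom typing import Optional\n\nfrom langsmith.evaluation import EvaluationResult, RunEvaluator\nfrom langsmith.schemas import Example, Run\n\n\nclass SimpleResponseEffectivenessEvaluator(RunEvaluator):\n    \"\"\"Illustrative evaluator: scores the previous AI reply from the user's follow-up.\"\"\"\n\n    def evaluate_run(\n        self, run: Run, example: Optional[Example] = None\n    ) -> EvaluationResult:\n        followup = str((run.inputs or {}).get(\"text\", \"\"))\n        # Placeholder heuristic: an all-caps follow-up suggests frustration.\n        score = 0.0 if followup.isupper() else 1.0\n        return EvaluationResult(\n            key=\"response_effectiveness\",\n            score=score,\n            comment=\"Stand-in for the LLM-generated reasoning.\",\n        )\n```\n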
\n## Environment Variables\n\nEnsure that `OPENAI_API_KEY` is set to use OpenAI models. Also, configure LangSmith by setting your `LANGSMITH_API_KEY`.\n\n```bash\nexport OPENAI_API_KEY=sk-...\nexport LANGSMITH_API_KEY=...\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_PROJECT=my-project # Set to the project you want to save to\n```\n\n## Usage\n\nIf deploying via `LangServe`, we recommend configuring the server to return callback events as well. This will ensure the backend traces are included in whatever traces you generate using the `RemoteRunnable`.\n\n```python\nfrom chat_bot_feedback.chain import chain\n\nadd_routes(app, chain, path=\"/chat-bot-feedback\", include_callback_events=True)\n```\n\nWith the server running, you can use the following code snippet to stream the chat bot responses for a two-turn conversation.\n\n```python\nfrom functools import partial\nfrom typing import Optional, Callable, List\nfrom langserve import RemoteRunnable\nfrom langchain.callbacks.manager import tracing_v2_enabled\nfrom langchain_core.messages import BaseMessage, AIMessage, HumanMessage\n\n# Update with the URL provided by your LangServe server\nchain = RemoteRunnable(\"http://127.0.0.1:8031/chat-bot-feedback\")\n\ndef stream_content(\n    text: str,\n    chat_history: Optional[List[BaseMessage]] = None,\n    last_run_id: Optional[str] = None,\n    on_chunk: Optional[Callable] = None,\n):\n    results = []\n    with tracing_v2_enabled() as cb:\n        for chunk in chain.stream(\n            {\"text\": text, \"chat_history\": chat_history, \"last_run_id\": last_run_id},\n        ):\n            on_chunk(chunk)\n            results.append(chunk)\n        last_run_id = cb.latest_run.id if cb.latest_run else None\n    return last_run_id, \"\".join(results)\n\nchat_history = []\ntext = \"Where are my keys?\"\nlast_run_id, response_message = stream_content(text, on_chunk=partial(print, end=\"\"))\nprint()\nchat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])\ntext = \"I CAN'T FIND THEM ANYWHERE\"  # The previous response will likely receive a low score,\n# as the user's frustration appears to be escalating.\nlast_run_id, response_message = stream_content(\n    text,\n    chat_history=chat_history,\n    last_run_id=str(last_run_id),\n    on_chunk=partial(print, end=\"\"),\n)\nprint()\nchat_history.extend([HumanMessage(content=text), AIMessage(content=response_message)])\n```\n\nThis uses the `tracing_v2_enabled` callback manager to get the run ID of the call, which we provide in subsequent calls in the same chat thread, so the evaluator can assign feedback to the appropriate trace.\n\n\n## Conclusion\n\nThis template provides a simple chat bot definition you can directly deploy using LangServe. It defines a custom evaluator to log evaluation feedback for the bot without any explicit user ratings. This is an effective way to augment your analytics and to better select data points for fine-tuning and evaluation."
+ }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\cohere-librarian\\README.md", + "filetype": ".md", + "content": "\n# cohere-librarian\n\nThis template turns Cohere into a librarian.\n\nIt demonstrates the use of a router to switch between chains that can handle different things: a vector database with Cohere embeddings; a chat bot that has a prompt with some information about the library; and finally a RAG chatbot that has access to the internet.\n\nFor a fuller demo of the book recommendation, consider replacing books_with_blurbs.csv with a larger sample from the following dataset: https://www.kaggle.com/datasets/jdobrow/57000-books-with-metadata-and-blurbs/.\n\n## Environment Setup\n\nSet the `COHERE_API_KEY` environment variable to access the Cohere models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package cohere-librarian\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add cohere-librarian\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom cohere_librarian.chain import chain as cohere_librarian_chain\n\nadd_routes(app, cohere_librarian_chain, path=\"/cohere-librarian\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://localhost:8000/docs](http://localhost:8000/docs)\nWe can access the playground at [http://localhost:8000/cohere-librarian/playground](http://localhost:8000/cohere-librarian/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/cohere-librarian\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\csv-agent\\README.md", + "filetype": ".md", + "content": "\n# csv-agent\n\nThis template uses a [csv agent](https://python.langchain.com/docs/integrations/toolkits/csv) with tools (Python REPL) and memory (vectorstore) for interaction (question-answering) with text data.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nTo set up the environment, the `ingest.py` script should be run to handle the ingestion into a vectorstore.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package csv-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add csv-agent\n```\n\nAnd add the following code to your `server.py` 
file:\n```python\nfrom csv_agent.agent import agent_executor as csv_agent_chain\n\nadd_routes(app, csv_agent_chain, path=\"/csv-agent\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/csv-agent/playground](http://127.0.0.1:8000/csv-agent/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/csv-agent\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\docs\\CONTRIBUTING.md", + "filetype": ".md", + "content": "# Contributing\n\nThanks for taking the time to contribute a new template!\nWe've tried to make this process as simple and painless as possible.\nIf you need any help at all, please reach out!\n\nTo contribute a new template, first fork this repository.\nThen clone that fork and pull it down locally.\nSet up an appropriate dev environment, and make sure you are in this `templates` directory.\n\nMake sure you have `langchain-cli` installed.\n\n```shell\npip install -U langchain-cli\n```\n\nYou can then run the following command to create a new skeleton of a package.\nBy convention, package names should use `-` delimiters (not `_`).\n\n```shell\nlangchain template new $PROJECT_NAME\n```\n\nYou can then edit the contents of the package as you desire.\nNote that by default we expect the main chain to be exposed as `chain` in the `__init__.py` file of the package.\nYou can change this (either the name or the location), but if you do so it is important to update the `tool.langserve`\npart of `pyproject.toml`.\nFor example, if you update the main chain exposed to be called `agent_executor`, then that section should look like:\n\n```text\n[tool.langserve]\nexport_module = \"...\"\nexport_attr = \"agent_executor\"\n```\n\nMake sure to add any requirements of the package to `pyproject.toml` (and to remove any that are not used).\n\nPlease update the `README.md` file to give some background on your package and how to set it up.\n\nIf you want to change the license of your template for whatever reason, you may! Note that by default it is MIT licensed.\n\nIf you want to test out your package at any point in time, you can spin up a LangServe instance directly from the package.\nSee instructions [here](LAUNCHING_PACKAGE.md) on how to best do that.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\docs\\INDEX.md", + "filetype": ".md", + "content": "# Templates\n\nHighlighting a few different categories of templates.\n\n## \u2b50 Popular\n\nThese are some of the more popular templates to get started with.\n\n- [Retrieval Augmented Generation Chatbot](../rag-conversation): Build a chatbot over your data. 
Defaults to OpenAI and PineconeVectorStore.\n- [Extraction with OpenAI Functions](../extraction-openai-functions): Extract structured data from unstructured data. Uses OpenAI function calling.\n- [Local Retrieval Augmented Generation](../rag-chroma-private): Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.\n- [OpenAI Functions Agent](../openai-functions-agent): Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.\n- [XML Agent](../xml-agent): Build a chatbot that can take actions. Uses Anthropic and You.com.\n\n\n## \ud83d\udce5 Advanced Retrieval\n\nThese templates cover advanced retrieval techniques, which can be used for chat and QA over databases or documents.\n\n- [Reranking](../rag-pinecone-rerank): This retrieval technique uses Cohere's reranking endpoint to rerank documents from an initial retrieval step.\n- [Anthropic Iterative Search](../anthropic-iterative-search): This retrieval technique uses iterative prompting to determine what to retrieve and whether the retrieved documents are good enough.\n- **Parent Document Retrieval** using [Neo4j](../neo4j-parent) or [MongoDB](../mongo-parent-document-retrieval): This retrieval technique stores embeddings for smaller chunks, but then returns larger chunks to pass to the model for generation.\n- [Semi-Structured RAG](../rag-semi-structured): The template shows how to do retrieval over semi-structured data (e.g. data that involves both text and tables).\n- [Temporal RAG](../rag-timescale-hybrid-search-time): The template shows how to do hybrid search over data with a time-based component using [Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral).\n\n## \ud83d\udd0d Advanced Retrieval - Query Transformation\n\nA selection of advanced retrieval methods that involve transforming the original user query, which can improve retrieval quality.\n\n- [Hypothetical Document Embeddings](../hyde): A retrieval technique that generates a hypothetical document for a given query, and then uses the embedding of that document to do semantic search. [Paper](https://arxiv.org/abs/2212.10496).\n- [Rewrite-Retrieve-Read](../rewrite-retrieve-read): A retrieval technique that rewrites a given query before passing it to a search engine. [Paper](https://arxiv.org/abs/2305.14283).\n- [Step-back QA Prompting](../stepback-qa-prompting): A retrieval technique that generates a \"step-back\" question and then retrieves documents relevant to both that question and the original question. [Paper](https://arxiv.org/abs//2310.06117).\n- [RAG-Fusion](../rag-fusion): A retrieval technique that generates multiple queries and then reranks the retrieved documents using reciprocal rank fusion. [Article](https://towardsdatascience.com/forget-rag-the-future-is-rag-fusion-1147298d8ad1).\n- [Multi-Query Retriever](../rag-pinecone-multi-query): This retrieval technique uses an LLM to generate multiple queries and then fetches documents for all queries.\n\n\n## \ud83e\udde0 Advanced Retrieval - Query Construction\n\nA selection of advanced retrieval methods that involve constructing a query in a separate DSL from natural language, which enable natural language chat over various structured databases.\n\n- [Elastic Query Generator](../elastic-query-generator): Generate Elasticsearch queries from natural language.\n- [Neo4j Cypher Generation](../neo4j-cypher): Generate Cypher statements from natural language. 
Available with a [\"full text\" option](../neo4j-cypher-ft) as well.\n- [Supabase Self Query](../self-query-supabase): Parse a natural language query into a semantic query as well as a metadata filter for Supabase.\n\n## \ud83e\udd99 OSS Models\n\nThese templates use OSS models, which enable privacy for sensitive data.\n\n- [Local Retrieval Augmented Generation](../rag-chroma-private): Build a chatbot over your data. Uses only local tooling: Ollama, GPT4all, Chroma.\n- [SQL Question Answering (Replicate)](../sql-llama2): Question answering over a SQL database, using Llama2 hosted on [Replicate](https://replicate.com/).\n- [SQL Question Answering (LlamaCpp)](../sql-llamacpp): Question answering over a SQL database, using Llama2 through [LlamaCpp](https://github.com/ggerganov/llama.cpp).\n- [SQL Question Answering (Ollama)](../sql-ollama): Question answering over a SQL database, using Llama2 through [Ollama](https://github.com/jmorganca/ollama).\n\n## \u26cf\ufe0f Extraction\n\nThese templates extract data in a structured format based upon a user-specified schema.\n\n- [Extraction Using OpenAI Functions](../extraction-openai-functions): Extract information from text using OpenAI Function Calling.\n- [Extraction Using Anthropic Functions](../extraction-anthropic-functions): Extract information from text using a LangChain wrapper around the Anthropic endpoints intended to simulate function calling.\n- [Extract BioTech Plate Data](../plate-chain): Extract microplate data from messy Excel spreadsheets into a more normalized format.\n\n## \u26cf\ufe0fSummarization and tagging\n\nThese templates summarize or categorize documents and text. \n\n- [Summarization using Anthropic](../summarize-anthropic): Uses Anthropic's Claude2 to summarize long documents.\n\n## \ud83e\udd16 Agents\n\nThese templates build chatbots that can take actions, helping to automate tasks.\n\n- [OpenAI Functions Agent](../openai-functions-agent): Build a chatbot that can take actions. Uses OpenAI function calling and Tavily.\n- [XML Agent](../xml-agent): Build a chatbot that can take actions. 
Uses Anthropic and You.com.\n\n## :rotating_light: Safety and evaluation\n\nThese templates enable moderation or evaluation of LLM outputs.\n\n- [Guardrails Output Parser](../guardrails-output-parser): Use guardrails-ai to validate LLM output.\n- [Chatbot Feedback](../chat-bot-feedback): Use LangSmith to evaluate chatbot responses.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\docs\\LAUNCHING_PACKAGE.md", + "filetype": ".md", + "content": "# Launching LangServe from a Package\n\nYou can also launch LangServe directly from a package, without having to pull it into a project.\nThis can be useful when you are developing a package and want to test it quickly.\nThe downside of this is that it gives you a little less control over how the LangServe APIs are configured,\nwhich is why for proper projects we recommend creating a full project.\n\nIn order to do this, first change your working directory to the package itself.\nFor example, if you are currently in this `templates` directory, you can go into the `pirate-speak` package with:\n\n```shell\ncd pirate-speak\n```\n\nInside this package there is a `pyproject.toml` file.\nThis file contains a `tool.langserve` section that contains information on how this package should be used.\nFor example, in `pirate-speak` we see:\n\n```text\n[tool.langserve]\nexport_module = \"pirate_speak.chain\"\nexport_attr = \"chain\"\n```\n\nThis information can be used to launch a LangServe instance automatically.\nIn order to do this, first make sure the CLI is installed:\n\n```shell\npip install -U langchain-cli\n```\n\nYou can then run:\n\n```shell\nlangchain template serve\n```\n\nThis will spin up endpoints, documentation, and playground for this chain.\nFor example, you can access the playground at [http://127.0.0.1:8000/playground/](http://127.0.0.1:8000/playground/)\n\n![Screenshot of the LangServe Playground web interface with input and output fields.](playground.png \"LangServe Playground Interface\")\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\elastic-query-generator\\README.md", + "filetype": ".md", + "content": "\n# elastic-query-generator\n\nThis template allows interacting with Elasticsearch analytics databases in natural language using LLMs. \n\nIt builds search queries via the Elasticsearch DSL API (filters and aggregations). \n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n### Installing Elasticsearch\n\nThere are a number of ways to run Elasticsearch. However, one recommended way is through Elastic Cloud.\n\nCreate a free trial account on [Elastic Cloud](https://cloud.elastic.co/registration?utm_source=langchain&utm_content=langserve).\n\nOnce you have a deployment, update the connection string.\n\nThe password and connection details (the Elasticsearch URL) can be found on the deployment console.\n\nNote that the Elasticsearch client must have permissions for index listing, mapping description, and search queries.\n\n### Populating with data\n\nIf you want to populate the DB with some example info, you can run `python ingest.py`.\n\nThis will create a `customers` index. In this package, we specify the indexes to generate queries against; here we specify `[\"customers\"]`. This is specific to how your own Elastic indexes are set up.
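\n\nTo make the flow concrete, the chain turns a natural-language question into an Elasticsearch DSL body. The shape below is purely illustrative (the field names are hypothetical; actual output depends on your index mappings and the model):\n\n```python\n# Hypothetical example of a generated query body for\n# \"How many customers are named Alice?\" (not actual chain output):\ngenerated_query = {\n    \"query\": {\"match\": {\"firstname\": \"Alice\"}},\n    \"aggs\": {\"customer_count\": {\"value_count\": {\"field\": \"firstname.keyword\"}}},\n}\n```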
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package elastic-query-generator\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add elastic-query-generator\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom elastic_query_generator.chain import chain as elastic_query_generator_chain\n\nadd_routes(app, elastic_query_generator_chain, path=\"/elastic-query-generator\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/elastic-query-generator/playground](http://127.0.0.1:8000/elastic-query-generator/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/elastic-query-generator\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\extraction-anthropic-functions\\README.md", + "filetype": ".md", + "content": "\n# extraction-anthropic-functions\n\nThis template enables [Anthropic function calling](https://python.langchain.com/docs/integrations/chat/anthropic_functions). \n\nThis can be used for various tasks, such as extraction or tagging.\n\nThe function output schema can be set in `chain.py`. \n\n## Environment Setup\n\nSet the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package extraction-anthropic-functions\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add extraction-anthropic-functions\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom extraction_anthropic_functions import chain as extraction_anthropic_functions_chain\n\nadd_routes(app, extraction_anthropic_functions_chain, path=\"/extraction-anthropic-functions\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/extraction-anthropic-functions/playground](http://127.0.0.1:8000/extraction-anthropic-functions/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/extraction-anthropic-functions\")\n```\n\nBy default, the package extracts the title and author of papers, following the schema specified in `chain.py`, and uses `Claude2`. \n\n---\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\extraction-openai-functions\\README.md", + "filetype": ".md", + "content": "\n# extraction-openai-functions\n\nThis template uses [OpenAI function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) for extraction of structured output from unstructured input text.\n\nThe extraction output schema can be set in `chain.py`. \n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package extraction-openai-functions\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add extraction-openai-functions\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom extraction_openai_functions import chain as extraction_openai_functions_chain\n\nadd_routes(app, extraction_openai_functions_chain, path=\"/extraction-openai-functions\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/extraction-openai-functions/playground](http://127.0.0.1:8000/extraction-openai-functions/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/extraction-openai-functions\")\n```\n\nBy default, this package is set to extract the title and author of papers, as specified in the `chain.py` file. 
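\n\nFor reference, an OpenAI function-calling extraction schema has roughly the following shape (an illustrative sketch, not the template's exact `chain.py`):\n\n```python\n# Hypothetical schema: extract every paper mentioned in the input text.\nschema = {\n    \"name\": \"information_extraction\",\n    \"description\": \"Extracts the relevant papers from the passage.\",\n    \"parameters\": {\n        \"type\": \"object\",\n        \"properties\": {\n            \"papers\": {\n                \"type\": \"array\",\n                \"items\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"title\": {\"type\": \"string\"},\n                        \"author\": {\"type\": \"string\"},\n                    },\n                },\n            }\n        },\n        \"required\": [\"papers\"],\n    },\n}\n```\n\nChanging the fields in such a schema is how you adapt the template to extract other structures.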
\n\nAn OpenAI LLM is used for the function calling by default.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\gemini-functions-agent\\README.md", + "filetype": ".md", + "content": "\n# gemini-functions-agent\n\nThis template creates an agent that uses Google Gemini function calling to communicate its decisions on what actions to take. \n\nThis example creates an agent that can optionally look up information on the internet using Tavily's search engine.\n\n[See an example LangSmith trace here](https://smith.langchain.com/public/0ebf1bd6-b048-4019-b4de-25efe8d3d18c/r)\n\n## Environment Setup\n\nThe following environment variables need to be set:\n\nSet the `TAVILY_API_KEY` environment variable to access Tavily.\n\nSet the `GOOGLE_API_KEY` environment variable to access the Google Gemini APIs.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package gemini-functions-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add gemini-functions-agent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom gemini_functions_agent import agent_executor as gemini_functions_agent_chain\n\nadd_routes(app, gemini_functions_agent_chain, path=\"/gemini-functions-agent\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/gemini-functions-agent/playground](http://127.0.0.1:8000/gemini-functions-agent/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/gemini-functions-agent\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\guardrails-output-parser\\README.md", + "filetype": ".md", + "content": "\n# guardrails-output-parser\n\nThis template uses [guardrails-ai](https://github.com/guardrails-ai/guardrails) to validate LLM output. 
\n\nThe `GuardrailsOutputParser` is set in `chain.py`.\n\nThe default example protects against profanity.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package guardrails-output-parser\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add guardrails-output-parser\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom guardrails_output_parser.chain import chain as guardrails_output_parser_chain\n\nadd_routes(app, guardrails_output_parser_chain, path=\"/guardrails-output-parser\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor, and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/guardrails-output-parser/playground](http://127.0.0.1:8000/guardrails-output-parser/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/guardrails-output-parser\")\n```\n\nIf Guardrails does not find any profanity, then the translated output is returned as is. If Guardrails does find profanity, then an empty string is returned.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\hybrid-search-weaviate\\README.md", + "filetype": ".md", + "content": "# Hybrid Search in Weaviate\n\nThis template shows you how to use the hybrid search feature in Weaviate. Hybrid search combines multiple search algorithms to improve the accuracy and relevance of search results. \n\nWeaviate uses both sparse and dense vectors to represent the meaning and context of search queries and documents. The results use a combination of `bm25` and vector search ranking to return the top results. 
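\n\nUnder the hood, hybrid search in LangChain is exposed through the `WeaviateHybridSearchRetriever`. Below is a minimal sketch, assuming a `WEAVIATE_URL`/`WEAVIATE_API_KEY` style setup and an illustrative index name; see `chain.py` for the template's actual configuration:\n\n```python\nimport os\n\nimport weaviate\nfrom langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever\n\n# Connect to a hosted Weaviate instance (environment variable names are illustrative).\nclient = weaviate.Client(\n    url=os.environ[\"WEAVIATE_URL\"],\n    auth_client_secret=weaviate.AuthApiKey(api_key=os.environ[\"WEAVIATE_API_KEY\"]),\n)\n\nretriever = WeaviateHybridSearchRetriever(\n    client=client,\n    index_name=\"LangChain\",  # illustrative index name\n    text_key=\"text\",\n)\ndocs = retriever.get_relevant_documents(\"how does hybrid search rank results?\")\n```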
\n\n## Configurations\nConnect to your hosted Weaviate vector store by setting a few environment variables that are read in `chain.py`:\n\n* `WEAVIATE_ENVIRONMENT`\n* `WEAVIATE_API_KEY`\n\nYou will also need to set your `OPENAI_API_KEY` to use the OpenAI models.\n\n## Get Started \nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package hybrid-search-weaviate\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add hybrid-search-weaviate\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom hybrid_search_weaviate import chain as hybrid_search_weaviate_chain\n\nadd_routes(app, hybrid_search_weaviate_chain, path=\"/hybrid-search-weaviate\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/hybrid-search-weaviate/playground](http://127.0.0.1:8000/hybrid-search-weaviate/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/hybrid-search-weaviate\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\hyde\\README.md", + "filetype": ".md", + "content": "\n# hyde\n\nThis template uses HyDE with RAG. \n\nHyDE (Hypothetical Document Embeddings) is a retrieval method that enhances retrieval by generating a hypothetical document for an incoming query. \n\nThe document is then embedded, and that embedding is utilized to look up real documents that are similar to the hypothetical document. \n\nThe underlying concept is that the hypothetical document may be closer to the relevant real documents in the embedding space than the query itself is. \n\nFor a more detailed description, see the paper [here](https://arxiv.org/abs/2212.10496).\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package hyde\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add hyde\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom hyde.chain import chain as hyde_chain\n\nadd_routes(app, hyde_chain, path=\"/hyde\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. 
\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/hyde/playground](http://127.0.0.1:8000/hyde/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/hyde\")\n```\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\llama2-functions\\README.md", + "filetype": ".md", + "content": "\n# llama2-functions\n\nThis template performs extraction of structured data from unstructured data using a [LLaMA2 model that supports a specified JSON output schema](https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md). \n\nThe extraction schema can be set in `chain.py`.\n\n## Environment Setup\n\nThis will use a [LLaMA2-13b model hosted by Replicate](https://replicate.com/andreasjansson/llama-2-13b-chat-gguf/versions).\n\nEnsure that `REPLICATE_API_TOKEN` is set in your environment.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package llama2-functions\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add llama2-functions\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom llama2_functions import chain as llama2_functions_chain\n\nadd_routes(app, llama2_functions_chain, path=\"/llama2-functions\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/llama2-functions/playground](http://127.0.0.1:8000/llama2-functions/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/llama2-functions\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\mongo-parent-document-retrieval\\README.md", + "filetype": ".md", + "content": "# mongo-parent-document-retrieval\n\nThis template performs RAG using MongoDB and OpenAI.\nIt does a more advanced form of RAG called Parent-Document Retrieval.\n\nIn this form of retrieval, a large document is first split into medium-sized chunks.\nFrom there, those medium-sized chunks are split into small chunks.\nEmbeddings are created for the small chunks.\nWhen a query comes in, an embedding is created for that query and compared to the small chunks.\nBut rather than passing the small chunks directly to the LLM for generation, the medium-sized chunks\nfrom which the smaller chunks came are passed.\nThis enables finer-grained search while passing larger context to the LLM (which can be useful during generation).\n\n## Environment Setup\n\nYou should export two environment variables, one being your MongoDB URI, the other being your OpenAI API key.\nIf you do not have a MongoDB URI, see the `MongoDB Setup` section at the bottom for instructions on how to get one.\n\n```shell\nexport MONGO_URI=...\nexport OPENAI_API_KEY=...\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package mongo-parent-document-retrieval\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add mongo-parent-document-retrieval\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom mongo_parent_document_retrieval import chain as mongo_parent_document_retrieval_chain\n\nadd_routes(app, mongo_parent_document_retrieval_chain, path=\"/mongo-parent-document-retrieval\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you DO NOT already have a Mongo Search Index you want to connect to, see the `MongoDB Setup` section below before proceeding.\nNote that because Parent Document Retrieval uses a different indexing strategy, it's likely you will want to run this new setup.\n\nIf you DO have a MongoDB Search index you want to connect to, edit the connection details in `mongo_parent_document_retrieval/chain.py`.\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/mongo-parent-document-retrieval/playground](http://127.0.0.1:8000/mongo-parent-document-retrieval/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/mongo-parent-document-retrieval\")\n```\n\nFor additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).\n\n\n## MongoDB Setup\n\nUse this step if you need to set up your MongoDB account and ingest data.\nWe will first follow the standard MongoDB Atlas setup instructions [here](https://www.mongodb.com/docs/atlas/getting-started/).\n\n1. Create an account (if not already done)\n2. Create a new project (if not already done)\n3. Locate your MongoDB URI.\n\nThis can be done by going to the deployment overview page and connecting to your database.\n\n![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png \"MongoDB Atlas Connect Button\")\n\nWe then look at the drivers available.\n\n![Screenshot showing the MongoDB Atlas drivers section for connecting to the database.](_images/driver.png \"MongoDB Atlas Drivers Section\")\n\nAmong these, we will see our URI listed.\n\n![Screenshot displaying the MongoDB Atlas URI in the connection instructions.](_images/uri.png \"MongoDB Atlas URI Display\")\n\nLet's then set that as an environment variable locally:\n\n```shell\nexport MONGO_URI=...\n```\n\n4. Let's also set an environment variable for OpenAI (which we will use as an LLM):\n\n```shell\nexport OPENAI_API_KEY=...\n```\n\n5. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:\n\n```shell\npython ingest.py\n```\n\nNote that you can (and should!) change this to ingest data of your choice.\n\n6. 
We now need to set up a vector index on our data.\n\nWe can first connect to the cluster where our database lives\n\n![cluster.png](_images%2Fcluster.png)\n\nWe can then navigate to where all our collections are listed\n\n![collections.png](_images%2Fcollections.png)\n\nWe can then find the collection we want and look at the search indexes for that collection\n\n![search-indexes.png](_images%2Fsearch-indexes.png)\n\nThat should likely be empty, and we want to create a new one:\n\n![create.png](_images%2Fcreate.png)\n\nWe will use the JSON editor to create it\n\n![json_editor.png](_images%2Fjson_editor.png)\n\nAnd we will paste the following JSON in:\n\n```text\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"doc_level\": [\n {\n \"type\": \"token\"\n }\n ],\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n }\n }\n}\n```\n![json.png](_images%2Fjson.png)\n\nFrom there, hit \"Next\" and then \"Create Search Index\". It will take a little bit but you should then have an index over your data!\n\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-advanced-rag\\dune.txt", + "filetype": ".txt", + "content": "Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It is one of the world's best-selling science fiction novels.Dune is set in the distant future in a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or \"spice\", a drug that extends life and enhances mental abilities. Melange is also necessary for space navigation, which requires a kind of multidimensional awareness and foresight that only the drug provides. As melange can only be produced on Arrakis, control of the planet is a coveted and dangerous undertaking. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice.\nHerbert wrote five sequels: Dune Messiah, Children of Dune, God Emperor of Dune, Heretics of Dune, and Chapterhouse: Dune. Following Herbert's death in 1986, his son Brian Herbert and author Kevin J. Anderson continued the series in over a dozen additional novels since 1999.\nAdaptations of the novel to cinema have been notoriously difficult and complicated. In the 1970s, cult filmmaker Alejandro Jodorowsky attempted to make a film based on the novel. After three years of development, the project was canceled due to a constantly growing budget. In 1984, a film adaptation directed by David Lynch was released to mostly negative responses from critics and failure at the box office, although it later developed a cult following. The book was also adapted into the 2000 Sci-Fi Channel miniseries Frank Herbert's Dune and its 2003 sequel Frank Herbert's Children of Dune (the latter of which combines the events of Dune Messiah and Children of Dune). A second film adaptation directed by Denis Villeneuve was released on October 21, 2021, to positive reviews. 
It grossed $401 million worldwide and went on to be nominated for ten Academy Awards, winning six. Villeneuve's film covers roughly the first half of the original novel; a sequel, which will cover the remaining story, will be released in March 2024.\nThe series has also been used as the basis for several board, role-playing, and video games.\nSince 2009, the names of planets from the Dune novels have been adopted for the real-life nomenclature of plains and other features on Saturn's moon Titan.\n\n\n== Origins ==\nAfter his novel The Dragon in the Sea was published in 1957, Herbert traveled to Florence, Oregon, at the north end of the Oregon Dunes. Here, the United States Department of Agriculture was attempting to use poverty grasses to stabilize the sand dunes. Herbert claimed in a letter to his literary agent, Lurton Blassingame, that the moving dunes could \"swallow whole cities, lakes, rivers, highways.\" Herbert's article on the dunes, \"They Stopped the Moving Sands\", was never completed (and only published decades later in The Road to Dune), but its research sparked Herbert's interest in ecology and deserts.\nHerbert further drew inspiration from Native American mentors like \"Indian Henry\" (as Herbert referred to the man to his son; likely a Henry Martin of the Hoh tribe) and Howard Hansen. Both Martin and Hansen grew up on the Quileute reservation near Herbert's hometown. According to historian Daniel Immerwahr, Hansen regularly shared his writing with Herbert. \"White men are eating the earth,\" Hansen told Herbert in 1958, after sharing a piece on the effect of logging on the Quileute reservation. \"They're gonna turn this whole planet into a wasteland, just like North Africa.\" The world could become a \"big dune,\" Herbert responded in agreement.\nHerbert was also interested in the idea of the superhero mystique and messiahs. He believed that feudalism was a natural condition humans fell into, where some led and others gave up the responsibility of making decisions and just followed orders. He found that desert environments have historically given birth to several major religions with messianic impulses. He decided to join his interests together so he could play religious and ecological ideas against each other. In addition, he was influenced by the story of T. E. Lawrence and the \"messianic overtones\" in Lawrence's involvement in the Arab Revolt during World War I. In an early version of Dune, the hero was actually very similar to Lawrence of Arabia, but Herbert decided the plot was too straightforward and added more layers to his story.\nHerbert drew heavy inspiration also from Lesley Blanch's The Sabres of Paradise (1960), a narrative history recounting a mid-19th century conflict in the Caucasus between rugged Islamized Caucasian tribes and the expansive Russian Empire. Language used on both sides of that conflict became terms in Herbert's world\u2014chakobsa, a Caucasian hunting language, becomes a battle language of humans spread across the galaxy; kanly, a word for blood feud in the 19th century Caucasus, represents a feud between Dune's noble Houses; sietch and tabir are both words for camp borrowed from Ukrainian Cossacks (of the Pontic\u2013Caspian steppe).\nHerbert also borrowed some lines which Blanch stated were Caucasian proverbs. \"To kill with the point lacked artistry\", used by Blanch to describe the Caucasus peoples' love of swordsmanship, becomes in Dune \"Killing with the tip lacks artistry\", a piece of advice given to a young Paul during his training. 
\"Polish comes from the city, wisdom from the hills\", a Caucasian aphorism, turns into a desert expression: \"Polish comes from the cities, wisdom from the desert\".\n\nAnother significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account of meeting Herbert in the 1980s:Frank went on to tell me that much of the premise of Dune\u2014the magic spice (spores) that allowed the bending of space (tripping), the giant sand worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by the tales of Maria Sabina and the sacred mushroom cults of Mexico)\u2014came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms.Herbert spent the next five years researching, writing, and revising. He published a three-part serial Dune World in the monthly Analog, from December 1963 to February 1964. The serial was accompanied by several illustrations that were not published again. After an interval of a year, he published the much slower-paced five-part The Prophet of Dune in the January\u2013May 1965 issues. The first serial became \"Book 1: Dune\" in the final published Dune novel, and the second serial was divided into \"Book Two: Muad'dib\" and \"Book Three: The Prophet\". The serialized version was expanded, reworked, and submitted to more than twenty publishers, each of whom rejected it. The novel, Dune, was finally accepted and published in August 1965 by Chilton Books, a printing house better known for publishing auto repair manuals. Sterling Lanier, an editor at Chilton, had seen Herbert's manuscript and had urged his company to take a risk in publishing the book. However, the first printing, priced at $5.95 (equivalent to $55.25 in 2022), did not sell well and was poorly received by critics as being atypical of science fiction at the time. Chilton considered the publication of Dune a write-off and Lanier was fired. Over the course of time, the book gained critical acclaim, and its popularity spread by word-of-mouth to allow Herbert to start working full time on developing the sequels to Dune, elements of which were already written alongside Dune.At first Herbert considered using Mars as setting for his novel, but eventually decided to use a fictional planet instead. His son Brian said that \"Readers would have too many preconceived ideas about that planet, due to the number of stories that had been written about it.\"Herbert dedicated his work \"to the people whose labors go beyond ideas into the realm of 'real materials'\u2014to the dry-land ecologists, wherever they may be, in whatever time they work, this effort at prediction is dedicated in humility and admiration.\"\n\n\n== Plot ==\nDuke Leto Atreides of House Atreides, ruler of the ocean planet Caladan, is assigned by the Padishah Emperor Shaddam IV to serve as fief ruler of the planet Arrakis. Although Arrakis is a harsh and inhospitable desert planet, it is of enormous importance because it is the only planetary source of melange, or the \"spice\", a unique and incredibly valuable substance that extends human youth, vitality and lifespan. It is also through the consumption of spice that Spacing Guild Navigators are able to effect safe interstellar travel. 
Shaddam, jealous of Duke Leto Atreides's rising popularity in the Landsraad, sees House Atreides as a potential future rival and threat, so conspires with House Harkonnen, the former stewards of Arrakis and the longstanding enemies of House Atreides, to destroy Leto and his family after their arrival. Leto is aware his assignment is a trap of some kind, but is compelled to obey the Emperor's orders anyway.\nLeto's concubine Lady Jessica is an acolyte of the Bene Gesserit, an exclusively female group that pursues mysterious political aims and wields seemingly superhuman physical and mental abilities, such as the ability to control their bodies down to the cellular level, and also decide the sex of their children. Though Jessica was instructed by the Bene Gesserit to bear a daughter as part of their breeding program, out of love for Leto she bore a son, Paul. From a young age, Paul has been trained in warfare by Leto's aides, the elite soldiers Duncan Idaho and Gurney Halleck. Thufir Hawat, the Duke's Mentat (human computers, able to store vast amounts of data and perform advanced calculations on demand), has instructed Paul in the ways of political intrigue. Jessica has also trained her son in Bene Gesserit disciplines.\nPaul's prophetic dreams interest Jessica's superior, the Reverend Mother Gaius Helen Mohiam, who subjects Paul to the deadly gom jabbar test. Holding a poisonous needle to his neck ready to strike should he be unable to resist the impulse to withdraw his hand from the nerve induction box, she tests Paul's self-control to overcome the extreme psychological pain he is being subjected to through the box.\nLeto, Jessica, and Paul travel with their household to occupy Arrakeen, the capital on Arrakis formerly held by House Harkonnen. Leto learns of the dangers involved in harvesting the spice, which is protected by giant sandworms, and seeks to negotiate with the planet's native Fremen people, seeing them as a valuable ally rather than foes. Soon after the Atreides's arrival, Harkonnen forces attack, joined by the Emperor's ferocious Sardaukar troops in disguise. Leto is betrayed by his personal physician, the Suk doctor Wellington Yueh, who delivers a drugged Leto to the Baron Vladimir Harkonnen and his twisted Mentat, Piter De Vries. Yueh, however, arranges for Jessica and Paul to escape into the desert, where they are presumed dead by the Harkonnens. Yueh replaces one of Leto's teeth with a poison gas capsule, hoping Leto can kill the Baron during their encounter. The Baron narrowly avoids the gas due to his shield, which kills Leto, De Vries, and the others in the room. The Baron forces Hawat to take over De Vries's position by dosing him with a long-lasting, fatal poison and threatening to withhold the regular antidote doses unless he obeys. While he follows the Baron's orders, Hawat works secretly to undermine the Harkonnens.\nHaving fled into the desert, Paul is exposed to high concentrations of spice and has visions through which he realizes he has significant powers (as a result of the Bene Gesserit breeding scheme). He foresees potential futures in which he lives among the planet's native Fremen before leading them on a Holy Jihad across the known universe.\nIt is revealed Jessica is the daughter of Baron Harkonnen, a secret kept from her by the Bene Gesserit. After being captured by Fremen, Paul and Jessica are accepted into the Fremen community of Sietch Tabr, and teach the Fremen the Bene Gesserit fighting technique known as the \"weirding way\". 
Paul proves his manhood by killing a Fremen named Jamis in a ritualistic crysknife fight and chooses the Fremen name Muad'Dib, while Jessica opts to undergo a ritual to become a Reverend Mother by drinking the poisonous Water of Life. Pregnant with Leto's daughter, she inadvertently causes the unborn child, Alia, to become infused with the same powers in the womb. Paul takes a Fremen lover, Chani, and has a son with her, Leto II.\nTwo years pass and Paul's powerful prescience manifests, which confirms for the Fremen that he is their prophesied messiah, a legend planted by the Bene Gesserit's Missionaria Protectiva. Paul embraces his father's belief that the Fremen could be a powerful fighting force to take back Arrakis, but also sees that if he does not control them, their jihad could consume the entire universe. Word of the new Fremen leader reaches both Baron Harkonnen and the Emperor as spice production falls due to their increasingly destructive raids. The Baron encourages his brutish nephew Glossu Rabban to rule with an iron fist, hoping the contrast with his shrewder nephew Feyd-Rautha will make the latter popular among the people of Arrakis when he eventually replaces Rabban. The Emperor, suspecting the Baron of trying to create troops more powerful than the Sardaukar to seize power, sends spies to monitor activity on Arrakis. Hawat uses the opportunity to sow seeds of doubt in the Baron about the Emperor's true plans, putting further strain on their alliance.\nGurney, having survived the Harkonnen coup becomes a smuggler, reuniting with Paul and Jessica after a Fremen raid on his harvester. Believing Jessica to be the traitor, Gurney threatens to kill her, but is stopped by Paul. Paul did not foresee Gurney's attack, and concludes he must increase his prescience by drinking the Water of Life, which is traditionally fatal to males. Paul falls into unconsciousness for three weeks after drinking the poison, but when he wakes, he has clairvoyance across time and space: he is the Kwisatz Haderach, the ultimate goal of the Bene Gesserit breeding program.\nPaul senses the Emperor and Baron are amassing fleets around Arrakis to quell the Fremen rebellion, and prepares the Fremen for a major offensive against the Harkonnen troops. The Emperor arrives with the Baron on Arrakis. The Emperor's troops seize a Fremen outpost, killing many including young Leto II, while Alia is captured and taken to the Emperor. Under cover of an electric storm, which shorts out the Emperor's troops' defensive shields, Paul and the Fremen, riding giant sandworms, assault the capital while Alia assassinates the Baron and escapes. The Fremen quickly defeat both the Harkonnen and Sardaukar troops.\nPaul faces the Emperor, threatening to destroy spice production forever unless Shaddam abdicates the throne. Feyd-Rautha attempts to stop Paul by challenging him to a ritualistic knife fight, during which he attempts to cheat and kill Paul with a poison spur in his belt. Paul gains the upper hand and kills him. The Emperor reluctantly cedes the throne to Paul and promises his daughter Princess Irulan's hand in marriage. 
As Paul takes control of the Empire, he realizes that while he has achieved his goal, he is no longer able to stop the Fremen jihad, as their belief in him is too powerful to restrain.\n\n\n== Characters ==\nHouse Atreides\nPaul Atreides, the Duke's son, and main character of the novel\nDuke Leto Atreides, head of House Atreides\nLady Jessica, Bene Gesserit and concubine of the Duke, mother of Paul and Alia\nAlia Atreides, Paul's younger sister\nThufir Hawat, Mentat and Master of Assassins to House Atreides\nGurney Halleck, staunchly loyal troubadour warrior of the Atreides\nDuncan Idaho, Swordmaster for House Atreides, graduate of the Ginaz School\nWellington Yueh, Suk doctor for the Atreides who is secretly working for House Harkonnen\nHouse Harkonnen\nBaron Vladimir Harkonnen, head of House Harkonnen\nPiter De Vries, twisted Mentat\nFeyd-Rautha, nephew and heir-presumptive of the Baron\nGlossu \"Beast\" Rabban, also called Rabban Harkonnen, older nephew of the Baron\nIakin Nefud, Captain of the Guard\nHouse Corrino\nShaddam IV, Padishah Emperor of the Known Universe (the Imperium)\nPrincess Irulan, Shaddam's eldest daughter and heir, also a historian\nCount Fenring, the Emperor's closest friend, advisor, and \"errand boy\"\nBene Gesserit\nReverend Mother Gaius Helen Mohiam, Proctor Superior of the Bene Gesserit school and the Emperor's Truthsayer\nLady Margot Fenring, Bene Gesserit wife of Count Fenring\nFremen\nThe Fremen, native inhabitants of Arrakis\nStilgar, Fremen leader of Sietch Tabr\nChani, Paul's Fremen concubine and a Sayyadina (female acolyte) of Sietch Tabr\nDr. Liet-Kynes, the Imperial Planetologist on Arrakis and father of Chani, as well as a revered figure among the Fremen\nThe Shadout Mapes, head housekeeper of imperial residence on Arrakis\nJamis, Fremen killed by Paul in ritual duel\nHarah, wife of Jamis and later servant to Paul who helps raise Alia among the Fremen\nReverend Mother Ramallo, religious leader of Sietch Tabr\nSmugglers\nEsmar Tuek, a powerful smuggler and the father of Staban Tuek\nStaban Tuek, the son of Esmar Tuek and a powerful smuggler who befriends and takes in Gurney Halleck and his surviving men after the attack on the Atreides\n\n\n== Themes and influences ==\nThe Dune series is a landmark of science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. For example, a key pre-history event to the novel's present is the \"Butlerian Jihad\", in which all robots and computers were destroyed, eliminating these common science fiction elements from the novel so as to allow focus on humanity. Dune considers the way humans and their institutions might change over time. Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of \"the human condition and its moral dilemmas\", and said:\n\nA lot of people refer to Dune as science fiction. I never do. I consider it an epic adventure in the classic storytelling tradition, a story of myth and legend not unlike the Morte d'Arthur or any messiah story. It just happens to be set in the future ... The story is actually more relevant today than when Herbert wrote it. In the 1960s, there were just these two colossal superpowers duking it out. 
Today we're living in a more feudal, corporatized world more akin to Herbert's universe of separate families, power centers and business interests, all interrelated and kept together by the one commodity necessary to all.\nBut Dune has also been called a mix of soft and hard science fiction since \"the attention to ecology is hard, the anthropology and the psychic abilities are soft.\" Hard elements include the ecology of Arrakis, suspensor technology, weapon systems, and ornithopters, while soft elements include issues relating to religion, physical and mental training, cultures, politics, and psychology.\nHerbert said Paul's messiah figure was inspired by the Arthurian legend, and that the scarcity of water on Arrakis was a metaphor for oil, as well as air and water itself, and for the shortages of resources caused by overpopulation. Novelist Brian Herbert, his son and biographer, wrote:\n\nDune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange, the geriatric spice that represents, among other things, the finite resource of oil. The planet Arrakis features immense, ferocious worms that are like dragons of lore, with \"great teeth\" and a \"bellows breath of cinnamon.\" This resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs, at the edge of the sea. The desert of Frank Herbert's classic novel is a vast ocean of sand, with giant worms diving into the depths, the mysterious and unrevealed domain of Shai-hulud. Dune tops are like the crests of waves, and there are powerful sandstorms out there, creating extreme danger. On Arrakis, life is said to emanate from the Maker (Shai-hulud) in the desert-sea; similarly all life on Earth is believed to have evolved from our oceans. Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know \u2026 and the book characters of his imagination are not so different from people familiar to us.\nEach chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set tone and provide exposition, context and other details intended to enhance understanding of Herbert's complex fictional universe and themes. They act as foreshadowing and invite the reader to keep reading to close the gap between what the epigraph says and what is happening in the main narrative. The epigraphs also give the reader the feeling that the world they are reading about is epically distanced, since Irulan writes about an idealized image of Paul as if he had already passed into memory. Brian Herbert wrote: \"Dad told me that you could follow any of the novel's layers as you read it, and then start the book all over again, focusing on an entirely different layer. 
At the end of the book, he intentionally left loose ends and said he did this to send the readers spinning out of the story with bits and pieces of it still clinging to them, so that they would want to go back and read it again.\"\n\n\n=== Middle-Eastern and Islamic references ===\nDue to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' \"Islamic undertones\" and themes, a Middle-Eastern influence on Herbert's works has been noted repeatedly. In his descriptions of the Fremen culture and language, Herbert uses both authentic Arabic words and Arabic-sounding words. For example, one of the names for the sandworm, Shai-hulud, is derived from Arabic: \u0634\u064a\u0621 \u062e\u0644\u0648\u062f, romanized: \u0161ay\u02be \u1e2bul\u016bd, lit.\u2009'immortal thing' or Arabic: \u0634\u064a\u062e \u062e\u0644\u0648\u062f, romanized: \u0161ay\u1e2b \u1e2bul\u016bd, lit.\u2009'old man of eternity'. The title of the Fremen housekeeper, the Shadout Mapes, is borrowed from the Arabic: \u0634\u0627\u062f\u0648\u0641\u200e, romanized: \u0161\u0101d\u016bf, the Egyptian term for a device used to raise water. In particular, words related to the messianic religion of the Fremen, first implanted by the Bene Gesserit, are taken from Arabic, including Muad'Dib (from Arabic: \u0645\u0624\u062f\u0628, romanized: mu\u02beaddib, lit.\u2009'educator'), Usul (from Arabic: \u0623\u0635\u0648\u0644, romanized: \u02beu\u1e63\u016bl, lit.\u2009'fundamental principles'), Shari-a (from Arabic: \u0634\u0631\u064a\u0639\u0629, romanized: \u0161ar\u012b\u02bfa, lit.\u2009'sharia; path'), Shaitan (from Arabic: \u0634\u064a\u0637\u0627\u0646, romanized: \u0161ay\u1e6d\u0101n, lit.\u2009'Shaitan; devil; fiend'), and jinn (from Arabic: \u062c\u0646, romanized: \u01e7inn, lit.\u2009'jinn; spirit; demon; mythical being'). It is likely Herbert relied on second-hand resources such as phrasebooks and desert adventure stories to find these Arabic words and phrases for the Fremen. They are meaningful and carefully chosen, and help create an \"imagined desert culture that resonates with exotic sounds, enigmas, and pseudo-Islamic references\" and has a distinctly Bedouin aesthetic.\nAs a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides bears many similarities to the historical T. E. Lawrence. The 1962 biopic Lawrence of Arabia has also been identified as a potential influence. The Sabres of Paradise (1960) has also been identified as a potential influence upon Dune, with its depiction of Imam Shamil and the Islamic culture of the Caucasus inspiring some of the themes, characters, events and terminology of Dune.\nThe environment of the desert planet Arrakis was primarily inspired by the environments of the Middle East. Similarly, Arrakis as a bioregion is presented as a particular kind of political site. Herbert has made it resemble a desertified petrostate area. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted at by Herbert's reference to Khaldun's book Kit\u0101b al-\u02bfibar (\"The Book of Lessons\"). 
The fictionalized version of the \"Kitab al-ibar\" in Dune is a combination of a Fremen religious manual and a desert survival book.\n\n\n==== Additional language and historic influences ====\nIn addition to Arabic, Dune derives words and names from a variety of other languages, including Hebrew, Navajo, Latin, Dutch (\"Landsraad\"), Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, Sanskrit (\"prana bindu\", \"prajna\"), Russian, Turkish, Finnish, and Old English. Bene Gesserit is simply the Latin for \"It will have been well fought\", also carrying the sense of \"It will have been well managed\", which stands as a statement of the order's goal and as a pledge of faithfulness to that goal. Critics tend to miss the literal meaning of the phrase, some positing that the term is derived from the Latin meaning \"it will have been well borne\", which interpretation is not well supported by their doctrine in the story.\nThrough the inspiration from The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. Frank Herbert stated that bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was \"aristocratic bureaucracy\", which he saw as analogous to the Soviet Union.\n\n\n=== Environmentalism and ecology ===\nDune has been called the \"first planetary ecology novel on a grand scale\". Herbert hoped it would be seen as an \"environmental awareness handbook\" and said the title was meant to \"echo the sound of 'doom'\". It was reviewed in the best-selling countercultural Whole Earth Catalog in 1968 as a \"rich re-readable fantasy with clear portrayal of the fierce environment it takes to cohere a community\".\nAfter the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of Arrakis life, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex\u2014almost living\u2014thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day.\nWhile the genre of climate fiction was popularized in the 2010s in response to real global climate change, Dune as well as other early science fiction works from authors like J. G. Ballard (The Drowned World) and Kim Stanley Robinson (the Mars trilogy) have retroactively been considered pioneering examples of the genre.\n\n\n=== Declining empires ===\nThe Imperium in Dune contains features of various empires in Europe and the Near East, including the Roman Empire, Holy Roman Empire, and Ottoman Empire. Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. 
In \"The Articulation of Imperial Decadence and Decline in Epic Science Fiction\" (2007), DiTommaso outlines similarities between the two works by highlighting the excesses of the Emperor on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as \"parasites\". The Baron Harkonnen is similarly corrupt and materially indulgent. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. The Emperor's Sardaukar fighters are little match for the Fremen of Dune not only because of the Sardaukar's overconfidence and the fact that Jessica and Paul have trained the Fremen in their battle tactics, but because of the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others.The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Frank Herbert as depicting \"war as a collective orgasm\" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare), themes that would reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army.\n\n\n=== Gender dynamics ===\nGender dynamics are complex in Dune. Within the Fremen sietch communities, women have almost full equality. They carry weapons and travel in raiding parties with men, fighting when necessary alongside the men. They can take positions of leadership as a Sayyadina or as a Reverend Mother (if she can survive the ritual of ingesting the Water of Life.) Both of these sietch religious leaders are routinely consulted by the all-male Council and can have a decisive voice in all matters of sietch life, security and internal politics. They are also protected by the entire community. Due to the high mortality rate among their men, women outnumber men in most sietches. Polygamy is common, and sexual relationships are voluntary and consensual; as Stilgar says to Jessica, \"women among us are not taken against their will.\" \nIn contrast, the Imperial aristocracy leaves young women of noble birth very little agency. Frequently trained by the Bene Gesserit, they are raised to eventually marry other aristocrats. Marriages between Major and Minor Houses are political tools to forge alliances or heal old feuds; women are given very little say in the matter. Many such marriages are quietly maneuvered by the Bene Gesserit to produce offspring with some genetic characteristics needed by the sisterhood's human-breeding program. In addition, such highly-placed sisters were in a position to subtly influence their husbands' actions in ways that could move the politics of the Imperium toward Bene Gesserit goals. \nThe gom jabbar test of humanity is administered by the female Bene Gesserit order but rarely to males. The Bene Gesserit have seemingly mastered the unconscious and can play on the unconscious weaknesses of others using the Voice, yet their breeding program seeks after a male Kwisatz Haderach. 
Their plan is to produce a male who can \"possess complete racial memory, both male and female,\" and look into the black hole in the collective unconscious that they fear. A central theme of the book is the connection, in Jessica's son, of this female aspect with his male aspect. This aligns with concepts in Jungian psychology, which features conscious/unconscious and taking/giving roles associated with males and females, as well as the idea of the collective unconscious. Paul's approach to power consistently requires his upbringing under the matriarchal Bene Gesserit, who operate as a long-dominating shadow government behind all of the great houses and their marriages or divisions. He is trained by Jessica in the Bene Gesserit Way, which includes prana-bindu training in nerve and muscle control and precise perception. Paul also receives Mentat training, thus helping prepare him to be a type of androgynous Kwisatz Haderach, a male Reverend Mother.\nIn a Bene Gesserit test early in the book, it is implied that people are generally \"inhuman\" in that they irrationally place desire over self-interest and reason. This applies Herbert's philosophy that humans are not created equal, while equal justice and equal opportunity are higher ideals than mental, physical, or moral equality.\n\n\n=== Heroism ===\nI am showing you the superhero syndrome and your own participation in it.\nThroughout Paul's rise to superhuman status, he follows a plotline common to many stories describing the birth of a hero. He has unfortunate circumstances forced onto him. After a long period of hardship and exile, he confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. Eventually, Paul Atreides gains a level of omniscience which allows him to take over the planet and the galaxy, and causes the Fremen of Arrakis to worship him like a god. Author Frank Herbert said in 1979, \"The bottom line of the Dune trilogy is: beware of heroes. Much better [to] rely on your own judgment, and your own mistakes.\" He wrote in 1985, \"Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question.\"\nJuan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of \"painful and slow personal progress.\" And unlike other superheroes of the 1960s\u2014who are the exception among ordinary people in their respective worlds\u2014Herbert's characters grow their powers through \"the application of mystical philosophies and techniques.\" For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Ginaz swordsmen and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators).\n\n\n=== Zen and religion ===\n\nEarly in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who \"gave a crucial boost to his thinking\". Zen teachings ultimately had \"a profound and continuing influence on [Herbert's] work\". 
Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism. The Fremen are referred to as Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited. In \"Dune Genesis\", Frank Herbert wrote:\n\nWhat especially pleases me is to see the interwoven themes, the fugue like relationships of images that exactly replay the way Dune took shape. As in an Escher lithograph, I involved myself with recurrent themes that turn into paradox. The central paradox concerns the human vision of time. What about Paul's gift of prescience - the Presbyterian fixation? For the Delphic Oracle to perform, it must tangle itself in a web of predestination. Yet predestination negates surprises and, in fact, sets up a mathematically enclosed universe whose limits are always inconsistent, always encountering the unprovable. It's like a koan, a Zen mind breaker. It's like the Cretan Epimenides saying, \"All Cretans are liars.\"\nBrian Herbert called the Dune universe \"a spiritual melting pot\", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which \"religious beliefs have combined into interesting forms\" represents the author's solution to eliminating arguments between religions, each of which claimed to have \"the one and only revelation.\"" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-advanced-rag\\README.md", + "filetype": ".md", + "content": "# neo4j-advanced-rag\n\nThis template allows you to balance precise embeddings and context retention by implementing advanced retrieval strategies.\n\n## Strategies\n\n1. **Typical RAG**:\n - Traditional method where the exact data indexed is the data retrieved.\n2. **Parent retriever**:\n - Instead of indexing entire documents, data is divided into smaller chunks, referred to as Parent and Child documents.\n - Child documents are indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.\n3. **Hypothetical Questions**:\n - Documents are processed to determine potential questions they might answer.\n - These questions are then indexed for better representation of specific concepts, while parent documents are retrieved to ensure context retention.\n4. **Summaries**:\n - Instead of indexing the entire document, a summary of the document is created and indexed.\n - Similarly, the parent document is retrieved in a RAG application.\n\n## Environment Setup\n\nYou need to define the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Populating with data\n\nIf you want to populate the DB with some example data, you can run `python ingest.py`.\nThe script processes and stores sections of the text from the file `dune.txt` into a Neo4j graph database.\nFirst, the text is divided into larger chunks (\"parents\") and then further subdivided into smaller chunks (\"children\"), where both parent and child chunks overlap slightly to maintain context, as sketched below.\nAfter storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis.\nFor every parent node, hypothetical questions and summaries are generated, embedded, and added to the database. 
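\n\nAs a rough illustration of the parent/child split described above, the chunking might look like the following sketch (the splitter choice and chunk sizes here are assumptions for illustration; see `ingest.py` for the actual values):\n\n```python\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\n# Assumed chunk sizes; the real values live in ingest.py\nparent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)\nchild_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=40)\n\nwith open(\"dune.txt\") as f:\n    text = f.read()\n\nfor parent in parent_splitter.split_text(text):\n    # each child chunk is embedded and stored with a reference to its parent\n    children = child_splitter.split_text(parent)\n```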
\nAdditionally, a vector index for each retrieval strategy is created for efficient querying of these embeddings.\n\n*Note that ingestion can take a minute or two due to the speed at which LLMs generate hypothetical questions and summaries.*\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-advanced-rag\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-advanced-rag\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_advanced_rag import chain as neo4j_advanced_chain\n\nadd_routes(app, neo4j_advanced_chain, path=\"/neo4j-advanced-rag\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-advanced-rag/playground](http://127.0.0.1:8000/neo4j-advanced-rag/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-advanced-rag\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-cypher\\README.md", + "filetype": ".md", + "content": "\n# neo4j_cypher\n\nThis template allows you to interact with a Neo4j graph database in natural language, using an OpenAI LLM. 
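\n\nIn short, you ask a question in plain English and get an answer grounded in the graph; a minimal sketch, assuming the server is already running (see Usage below) and that the chain takes a `question` input key, which is an assumption of this example:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-cypher\")\n\n# The chain might generate Cypher roughly along these lines (hypothetical schema):\n#   MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {title: \"Top Gun\"}) RETURN a.name\n# run it against Neo4j, and phrase the returned rows as a natural language answer.\nrunnable.invoke({\"question\": \"Who played in Top Gun?\"})\n```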
\n\nIt transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.\n\n[![Diagram showing the workflow of a user asking a question, which is processed by a Cypher generating chain, resulting in a Cypher query to the Neo4j Knowledge Graph, and then an answer generating chain that provides a generated answer based on the information from the graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher/static/workflow.png \"Neo4j Cypher Workflow Diagram\")](https://medium.com/neo4j/langchain-cypher-search-tips-tricks-f7c9e9abca4d)\n\n## Environment Setup\n\nDefine the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Neo4j database setup\n\nThere are a number of ways to set up a Neo4j database.\n\n### Neo4j Aura\n\nNeo4j AuraDB is a fully managed cloud graph database service.\nCreate a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).\nWhen you initiate a free database instance, you'll receive credentials to access the database.\n\n## Populating with data\n\nIf you want to populate the DB with some example data, you can run `python ingest.py`.\nThis script will populate the database with sample movie data.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-cypher\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-cypher\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_cypher import chain as neo4j_cypher_chain\n\nadd_routes(app, neo4j_cypher_chain, path=\"/neo4j-cypher\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-cypher/playground](http://127.0.0.1:8000/neo4j-cypher/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-cypher\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-cypher-ft\\README.md", + "filetype": ".md", + "content": "\n# neo4j-cypher-ft\n\nThis template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM. 
\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-cypher\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-cypher\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_cypher import chain as neo4j_cypher_chain\n\nadd_routes(app, neo4j_cypher_chain, path=\"/neo4j-cypher\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-cypher/playground](http://127.0.0.1:8000/neo4j-cypher/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-cypher\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-cypher-ft\\README.md", + "filetype": ".md", + "content": "\n# neo4j-cypher-ft\n\nThis template allows you to interact with a Neo4j graph database using natural language, leveraging OpenAI's LLM. \n\nIts main function is to convert natural language questions into Cypher queries (the language used to query Neo4j databases), execute these queries, and provide natural language responses based on the query's results. \n\nThe package utilizes a full-text index for efficient mapping of text values to database entries, thereby enhancing the generation of accurate Cypher statements. \n\nIn the provided example, the full-text index is used to map names of people and movies from the user's query to corresponding database entries.\n\n![Workflow diagram showing the process from a user asking a question to generating an answer using the Neo4j knowledge graph and full-text index.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-ft/static/workflow.png \"Neo4j Cypher Workflow Diagram\")\n\n## Environment Setup\n\nThe following environment variables need to be set:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\nAdditionally, if you wish to populate the DB with some example data, you can run `python ingest.py`.\nThis script will populate the database with sample movie data and create a full-text index named `entity`, which is used to map people and movies from user input to database values for precise Cypher statement generation.\n
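\nIf you want to sanity-check what the `entity` full-text index resolves a given string to, you can call Neo4j's full-text search procedure directly. A sketch using the official `neo4j` Python driver (the exact labels and properties depend on what `ingest.py` created):\n\n```python\nimport os\n\nfrom neo4j import GraphDatabase\n\ndriver = GraphDatabase.driver(\n    os.environ[\"NEO4J_URI\"],\n    auth=(os.environ[\"NEO4J_USERNAME\"], os.environ[\"NEO4J_PASSWORD\"]),\n)\n\nwith driver.session() as session:\n    # Look up the index entries that best match a user-supplied value.\n    result = session.run(\n        \"CALL db.index.fulltext.queryNodes('entity', $value) \"\n        \"YIELD node, score RETURN node, score LIMIT 5\",\n        value=\"Tom Cruise\",\n    )\n    for record in result:\n        print(record[\"node\"], record[\"score\"])\n\ndriver.close()\n```\n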
\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-cypher-ft\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-cypher-ft\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_cypher_ft import chain as neo4j_cypher_ft_chain\n\nadd_routes(app, neo4j_cypher_ft_chain, path=\"/neo4j-cypher-ft\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-cypher-ft/playground](http://127.0.0.1:8000/neo4j-cypher-ft/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-cypher-ft\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-cypher-memory\\README.md", + "filetype": ".md", + "content": "\n# neo4j-cypher-memory\n\nThis template allows you to have conversations with a Neo4j graph database in natural language, using an OpenAI LLM.\nIt transforms a natural language question into a Cypher query (used to fetch data from Neo4j databases), executes the query, and provides a natural language response based on the query results.\nAdditionally, it features a conversational memory module that stores the dialogue history in the Neo4j graph database.\nThe conversation memory is uniquely maintained for each user session, ensuring personalized interactions.\nTo facilitate this, please supply both the `user_id` and `session_id` when using the conversation chain.\n\n![Workflow diagram illustrating the process of a user asking a question, generating a Cypher query, retrieving conversational history, executing the query on a Neo4j database, generating an answer, and storing conversational memory.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-cypher-memory/static/workflow.png \"Neo4j Cypher Memory Workflow Diagram\")\n\n## Environment Setup\n\nDefine the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Neo4j database setup\n\nThere are a number of ways to set up a Neo4j database.\n\n### Neo4j Aura\n\nNeo4j AuraDB is a fully managed cloud graph database service.\nCreate a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).\nWhen you initiate a free database instance, you'll receive credentials to access the database.\n\n## Populating with data\n\nIf you want to populate the DB with some example data, you can run `python ingest.py`.\nThis script will populate the database with sample movie data.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-cypher-memory\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-cypher-memory\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_cypher_memory import chain as neo4j_cypher_memory_chain\n\nadd_routes(app, neo4j_cypher_memory_chain, path=\"/neo4j-cypher-memory\")\n```\n\n(Optional) Let's now configure LangSmith. 
\nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-cypher-memory/playground](http://127.0.0.1:8000/neo4j-cypher-memory/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-cypher-memory\")\n```\n
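\nBecause the dialogue history is stored per user session, supply both identifiers on every call. A minimal sketch (the `user_id` and `session_id` keys come from the template description above; the \"question\" key is an assumption, so check the playground's input schema):\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-cypher-memory\")\n\n# The template keeps a separate conversation memory for each user/session pair.\nanswer = runnable.invoke({\n    \"question\": \"Which movies did Tom Hanks star in?\",\n    \"user_id\": \"user_123\",\n    \"session_id\": \"session_1\",\n})\nprint(answer)\n```\n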
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-generation\\README.md", + "filetype": ".md", + "content": "\n# neo4j-generation\n\nThis template pairs LLM-based knowledge graph extraction with Neo4j AuraDB, a fully managed cloud graph database.\n\nYou can create a free instance on [Neo4j Aura](https://neo4j.com/cloud/platform/aura-graph-database?utm_source=langchain&utm_content=langserve).\n\nWhen you initiate a free database instance, you'll receive credentials to access the database.\n\nThis template is flexible and allows users to guide the extraction process by specifying a list of node labels and relationship types.\n\nFor more details on the functionality and capabilities of this package, please refer to [this blog post](https://blog.langchain.dev/constructing-knowledge-graphs-from-text-using-openai-functions/).\n\n## Environment Setup\n\nYou need to set the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-generation\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-generation\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_generation.chain import chain as neo4j_generation_chain\n\nadd_routes(app, neo4j_generation_chain, path=\"/neo4j-generation\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-generation/playground](http://127.0.0.1:8000/neo4j-generation/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-generation\")\n```\n
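\nSince the extraction can be guided with node labels and relationship types, those can be passed alongside the input text. The key names below (\"text\", \"allowed_nodes\", \"allowed_relationships\") are illustrative assumptions, so verify them against the playground's input schema:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-generation\")\n\n# Hypothetical input keys; constrain extraction to the listed labels and types.\nrunnable.invoke({\n    \"text\": \"Marie Curie won two Nobel Prizes.\",\n    \"allowed_nodes\": [\"Person\", \"Award\"],\n    \"allowed_relationships\": [\"WON\"],\n})\n```\n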
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-parent\\dune.txt", + "filetype": ".txt", + "content": "Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It is one of the world's best-selling science fiction novels.\nDune is set in the distant future in a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or \"spice\", a drug that extends life and enhances mental abilities. Melange is also necessary for space navigation, which requires a kind of multidimensional awareness and foresight that only the drug provides. As melange can only be produced on Arrakis, control of the planet is a coveted and dangerous undertaking. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice.\nHerbert wrote five sequels: Dune Messiah, Children of Dune, God Emperor of Dune, Heretics of Dune, and Chapterhouse: Dune. Following Herbert's death in 1986, his son Brian Herbert and author Kevin J. Anderson have continued the series in over a dozen additional novels since 1999.\nAdaptations of the novel to cinema have been notoriously difficult and complicated. In the 1970s, cult filmmaker Alejandro Jodorowsky attempted to make a film based on the novel. After three years of development, the project was canceled due to a constantly growing budget. In 1984, a film adaptation directed by David Lynch was released to mostly negative responses from critics and failure at the box office, although it later developed a cult following. The book was also adapted into the 2000 Sci-Fi Channel miniseries Frank Herbert's Dune and its 2003 sequel Frank Herbert's Children of Dune (the latter of which combines the events of Dune Messiah and Children of Dune). A second film adaptation directed by Denis Villeneuve was released on October 21, 2021, to positive reviews. It grossed $401 million worldwide and went on to be nominated for ten Academy Awards, winning six. Villeneuve's film covers roughly the first half of the original novel; a sequel, which will cover the remaining story, will be released in March 2024.\nThe series has also been used as the basis for several board, role-playing, and video games.\nSince 2009, the names of planets from the Dune novels have been adopted for the real-life nomenclature of plains and other features on Saturn's moon Titan.\n\n\n== Origins ==\nAfter his novel The Dragon in the Sea was published in 1957, Herbert traveled to Florence, Oregon, at the north end of the Oregon Dunes. Here, the United States Department of Agriculture was attempting to use poverty grasses to stabilize the sand dunes. Herbert claimed in a letter to his literary agent, Lurton Blassingame, that the moving dunes could \"swallow whole cities, lakes, rivers, highways.\" Herbert's article on the dunes, \"They Stopped the Moving Sands\", was never completed (and only published decades later in The Road to Dune), but its research sparked Herbert's interest in ecology and deserts.\nHerbert further drew inspiration from Native American mentors like \"Indian Henry\" (as Herbert referred to the man to his son; likely a Henry Martin of the Hoh tribe) and Howard Hansen. Both Martin and Hansen grew up on the Quileute reservation near Herbert's hometown. According to historian Daniel Immerwahr, Hansen regularly shared his writing with Herbert. \"White men are eating the earth,\" Hansen told Herbert in 1958, after sharing a piece on the effect of logging on the Quileute reservation. \"They're gonna turn this whole planet into a wasteland, just like North Africa.\" The world could become a \"big dune,\" Herbert responded in agreement.\nHerbert was also interested in the idea of the superhero mystique and messiahs. He believed that feudalism was a natural condition humans fell into, where some led and others gave up the responsibility of making decisions and just followed orders. He found that desert environments have historically given birth to several major religions with messianic impulses. He decided to join his interests together so he could play religious and ecological ideas against each other. In addition, he was influenced by the story of T. E. Lawrence and the \"messianic overtones\" in Lawrence's involvement in the Arab Revolt during World War I. In an early version of Dune, the hero was actually very similar to Lawrence of Arabia, but Herbert decided the plot was too straightforward and added more layers to his story.\nHerbert drew heavy inspiration also from Lesley Blanch's The Sabres of Paradise (1960), a narrative history recounting a mid-19th century conflict in the Caucasus between rugged Islamized Caucasian tribes and the expansive Russian Empire. Language used on both sides of that conflict becomes terms in Herbert's world\u2014chakobsa, a Caucasian hunting language, becomes a battle language of humans spread across the galaxy; kanly, a word for blood feud in the 19th century Caucasus, represents a feud between Dune's noble Houses; sietch and tabir are both words for camp borrowed from Ukrainian Cossacks (of the Pontic\u2013Caspian steppe).\nHerbert also borrowed some lines which Blanch stated were Caucasian proverbs. \"To kill with the point lacked artistry\", used by Blanch to describe the Caucasus peoples' love of swordsmanship, becomes in Dune \"Killing with the tip lacks artistry\", a piece of advice given to a young Paul during his training. 
\"Polish comes from the city, wisdom from the hills\", a Caucasian aphorism, turns into a desert expression: \"Polish comes from the cities, wisdom from the desert\".\n\nAnother significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account of meeting Herbert in the 1980s:Frank went on to tell me that much of the premise of Dune\u2014the magic spice (spores) that allowed the bending of space (tripping), the giant sand worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by the tales of Maria Sabina and the sacred mushroom cults of Mexico)\u2014came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms.Herbert spent the next five years researching, writing, and revising. He published a three-part serial Dune World in the monthly Analog, from December 1963 to February 1964. The serial was accompanied by several illustrations that were not published again. After an interval of a year, he published the much slower-paced five-part The Prophet of Dune in the January\u2013May 1965 issues. The first serial became \"Book 1: Dune\" in the final published Dune novel, and the second serial was divided into \"Book Two: Muad'dib\" and \"Book Three: The Prophet\". The serialized version was expanded, reworked, and submitted to more than twenty publishers, each of whom rejected it. The novel, Dune, was finally accepted and published in August 1965 by Chilton Books, a printing house better known for publishing auto repair manuals. Sterling Lanier, an editor at Chilton, had seen Herbert's manuscript and had urged his company to take a risk in publishing the book. However, the first printing, priced at $5.95 (equivalent to $55.25 in 2022), did not sell well and was poorly received by critics as being atypical of science fiction at the time. Chilton considered the publication of Dune a write-off and Lanier was fired. Over the course of time, the book gained critical acclaim, and its popularity spread by word-of-mouth to allow Herbert to start working full time on developing the sequels to Dune, elements of which were already written alongside Dune.At first Herbert considered using Mars as setting for his novel, but eventually decided to use a fictional planet instead. His son Brian said that \"Readers would have too many preconceived ideas about that planet, due to the number of stories that had been written about it.\"Herbert dedicated his work \"to the people whose labors go beyond ideas into the realm of 'real materials'\u2014to the dry-land ecologists, wherever they may be, in whatever time they work, this effort at prediction is dedicated in humility and admiration.\"\n\n\n== Plot ==\nDuke Leto Atreides of House Atreides, ruler of the ocean planet Caladan, is assigned by the Padishah Emperor Shaddam IV to serve as fief ruler of the planet Arrakis. Although Arrakis is a harsh and inhospitable desert planet, it is of enormous importance because it is the only planetary source of melange, or the \"spice\", a unique and incredibly valuable substance that extends human youth, vitality and lifespan. It is also through the consumption of spice that Spacing Guild Navigators are able to effect safe interstellar travel. 
Shaddam, jealous of Duke Leto Atreides's rising popularity in the Landsraad, sees House Atreides as a potential future rival and threat, so conspires with House Harkonnen, the former stewards of Arrakis and the longstanding enemies of House Atreides, to destroy Leto and his family after their arrival. Leto is aware his assignment is a trap of some kind, but is compelled to obey the Emperor's orders anyway.\nLeto's concubine Lady Jessica is an acolyte of the Bene Gesserit, an exclusively female group that pursues mysterious political aims and wields seemingly superhuman physical and mental abilities, such as the ability to control their bodies down to the cellular level, and also decide the sex of their children. Though Jessica was instructed by the Bene Gesserit to bear a daughter as part of their breeding program, out of love for Leto she bore a son, Paul. From a young age, Paul has been trained in warfare by Leto's aides, the elite soldiers Duncan Idaho and Gurney Halleck. Thufir Hawat, the Duke's Mentat (a human computer, able to store vast amounts of data and perform advanced calculations on demand), has instructed Paul in the ways of political intrigue. Jessica has also trained her son in Bene Gesserit disciplines.\nPaul's prophetic dreams interest Jessica's superior, the Reverend Mother Gaius Helen Mohiam, who subjects Paul to the deadly gom jabbar test. Holding a poisonous needle to his neck, ready to strike should he be unable to resist the impulse to withdraw his hand from the nerve induction box, she tests Paul's self-control to overcome the extreme psychological pain he is being subjected to through the box.\nLeto, Jessica, and Paul travel with their household to occupy Arrakeen, the capital on Arrakis formerly held by House Harkonnen. Leto learns of the dangers involved in harvesting the spice, which is protected by giant sandworms, and seeks to negotiate with the planet's native Fremen people, seeing them as a valuable ally rather than foes. Soon after the Atreides's arrival, Harkonnen forces attack, joined by the Emperor's ferocious Sardaukar troops in disguise. Leto is betrayed by his personal physician, the Suk doctor Wellington Yueh, who delivers a drugged Leto to the Baron Vladimir Harkonnen and his twisted Mentat, Piter De Vries. Yueh, however, arranges for Jessica and Paul to escape into the desert, where they are presumed dead by the Harkonnens. Yueh replaces one of Leto's teeth with a poison gas capsule, hoping Leto can kill the Baron during their encounter. The Baron narrowly avoids the gas due to his shield, which kills Leto, De Vries, and the others in the room. The Baron forces Hawat to take over De Vries's position by dosing him with a long-lasting, fatal poison and threatening to withhold the regular antidote doses unless he obeys. While he follows the Baron's orders, Hawat works secretly to undermine the Harkonnens.\nHaving fled into the desert, Paul is exposed to high concentrations of spice and has visions through which he realizes he has significant powers (as a result of the Bene Gesserit breeding scheme). He foresees potential futures in which he lives among the planet's native Fremen before leading them on a Holy Jihad across the known universe.\nIt is revealed Jessica is the daughter of Baron Harkonnen, a secret kept from her by the Bene Gesserit. After being captured by Fremen, Paul and Jessica are accepted into the Fremen community of Sietch Tabr, and teach the Fremen the Bene Gesserit fighting technique known as the \"weirding way\". 
Paul proves his manhood by killing a Fremen named Jamis in a ritualistic crysknife fight and chooses the Fremen name Muad'Dib, while Jessica opts to undergo a ritual to become a Reverend Mother by drinking the poisonous Water of Life. Pregnant with Leto's daughter, she inadvertently causes the unborn child, Alia, to become infused with the same powers in the womb. Paul takes a Fremen lover, Chani, and has a son with her, Leto II.\nTwo years pass and Paul's powerful prescience manifests, which confirms for the Fremen that he is their prophesied messiah, a legend planted by the Bene Gesserit's Missionaria Protectiva. Paul embraces his father's belief that the Fremen could be a powerful fighting force to take back Arrakis, but also sees that if he does not control them, their jihad could consume the entire universe. Word of the new Fremen leader reaches both Baron Harkonnen and the Emperor as spice production falls due to their increasingly destructive raids. The Baron encourages his brutish nephew Glossu Rabban to rule with an iron fist, hoping the contrast with his shrewder nephew Feyd-Rautha will make the latter popular among the people of Arrakis when he eventually replaces Rabban. The Emperor, suspecting the Baron of trying to create troops more powerful than the Sardaukar to seize power, sends spies to monitor activity on Arrakis. Hawat uses the opportunity to sow seeds of doubt in the Baron about the Emperor's true plans, putting further strain on their alliance.\nGurney, having survived the Harkonnen coup, becomes a smuggler, reuniting with Paul and Jessica after a Fremen raid on his harvester. Believing Jessica to be the traitor, Gurney threatens to kill her, but is stopped by Paul. Paul did not foresee Gurney's attack, and concludes he must increase his prescience by drinking the Water of Life, which is traditionally fatal to males. Paul falls into unconsciousness for three weeks after drinking the poison, but when he wakes, he has clairvoyance across time and space: he is the Kwisatz Haderach, the ultimate goal of the Bene Gesserit breeding program.\nPaul senses the Emperor and Baron are amassing fleets around Arrakis to quell the Fremen rebellion, and prepares the Fremen for a major offensive against the Harkonnen troops. The Emperor arrives with the Baron on Arrakis. The Emperor's troops seize a Fremen outpost, killing many including young Leto II, while Alia is captured and taken to the Emperor. Under cover of an electric storm, which shorts out the Emperor's troops' defensive shields, Paul and the Fremen, riding giant sandworms, assault the capital while Alia assassinates the Baron and escapes. The Fremen quickly defeat both the Harkonnen and Sardaukar troops.\nPaul faces the Emperor, threatening to destroy spice production forever unless Shaddam abdicates the throne. Feyd-Rautha attempts to stop Paul by challenging him to a ritualistic knife fight, during which he attempts to cheat and kill Paul with a poison spur in his belt. Paul gains the upper hand and kills him. The Emperor reluctantly cedes the throne to Paul and promises his daughter Princess Irulan's hand in marriage. 
As Paul takes control of the Empire, he realizes that while he has achieved his goal, he is no longer able to stop the Fremen jihad, as their belief in him is too powerful to restrain.\n\n\n== Characters ==\nHouse Atreides\nPaul Atreides, the Duke's son, and main character of the novel\nDuke Leto Atreides, head of House Atreides\nLady Jessica, Bene Gesserit and concubine of the Duke, mother of Paul and Alia\nAlia Atreides, Paul's younger sister\nThufir Hawat, Mentat and Master of Assassins to House Atreides\nGurney Halleck, staunchly loyal troubadour warrior of the Atreides\nDuncan Idaho, Swordmaster for House Atreides, graduate of the Ginaz School\nWellington Yueh, Suk doctor for the Atreides who is secretly working for House Harkonnen\nHouse Harkonnen\nBaron Vladimir Harkonnen, head of House Harkonnen\nPiter De Vries, twisted Mentat\nFeyd-Rautha, nephew and heir-presumptive of the Baron\nGlossu \"Beast\" Rabban, also called Rabban Harkonnen, older nephew of the Baron\nIakin Nefud, Captain of the Guard\nHouse Corrino\nShaddam IV, Padishah Emperor of the Known Universe (the Imperium)\nPrincess Irulan, Shaddam's eldest daughter and heir, also a historian\nCount Fenring, the Emperor's closest friend, advisor, and \"errand boy\"\nBene Gesserit\nReverend Mother Gaius Helen Mohiam, Proctor Superior of the Bene Gesserit school and the Emperor's Truthsayer\nLady Margot Fenring, Bene Gesserit wife of Count Fenring\nFremen\nThe Fremen, native inhabitants of Arrakis\nStilgar, Fremen leader of Sietch Tabr\nChani, Paul's Fremen concubine and a Sayyadina (female acolyte) of Sietch Tabr\nDr. Liet-Kynes, the Imperial Planetologist on Arrakis and father of Chani, as well as a revered figure among the Fremen\nThe Shadout Mapes, head housekeeper of imperial residence on Arrakis\nJamis, Fremen killed by Paul in ritual duel\nHarah, wife of Jamis and later servant to Paul who helps raise Alia among the Fremen\nReverend Mother Ramallo, religious leader of Sietch Tabr\nSmugglers\nEsmar Tuek, a powerful smuggler and the father of Staban Tuek\nStaban Tuek, the son of Esmar Tuek and a powerful smuggler who befriends and takes in Gurney Halleck and his surviving men after the attack on the Atreides\n\n\n== Themes and influences ==\nThe Dune series is a landmark of science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. For example, a key event in the novel's pre-history is the \"Butlerian Jihad\", in which all robots and computers were destroyed, eliminating these elements, common in science fiction, from the novel so as to allow a focus on humanity. Dune considers the way humans and their institutions might change over time. Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of \"the human condition and its moral dilemmas\", and said:\n\nA lot of people refer to Dune as science fiction. I never do. I consider it an epic adventure in the classic storytelling tradition, a story of myth and legend not unlike the Morte d'Arthur or any messiah story. It just happens to be set in the future ... The story is actually more relevant today than when Herbert wrote it. In the 1960s, there were just these two colossal superpowers duking it out. 
Today we're living in a more feudal, corporatized world more akin to Herbert's universe of separate families, power centers and business interests, all interrelated and kept together by the one commodity necessary to all.\nBut Dune has also been called a mix of soft and hard science fiction since \"the attention to ecology is hard, the anthropology and the psychic abilities are soft.\" Hard elements include the ecology of Arrakis, suspensor technology, weapon systems, and ornithopters, while soft elements include issues relating to religion, physical and mental training, cultures, politics, and psychology.\nHerbert said Paul's messiah figure was inspired by the Arthurian legend, and that the scarcity of water on Arrakis was a metaphor for oil, as well as air and water itself, and for the shortages of resources caused by overpopulation. Novelist Brian Herbert, his son and biographer, wrote:\n\nDune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange, the geriatric spice that represents, among other things, the finite resource of oil. The planet Arrakis features immense, ferocious worms that are like dragons of lore, with \"great teeth\" and a \"bellows breath of cinnamon.\" This resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs, at the edge of the sea. The desert of Frank Herbert's classic novel is a vast ocean of sand, with giant worms diving into the depths, the mysterious and unrevealed domain of Shai-hulud. Dune tops are like the crests of waves, and there are powerful sandstorms out there, creating extreme danger. On Arrakis, life is said to emanate from the Maker (Shai-hulud) in the desert-sea; similarly all life on Earth is believed to have evolved from our oceans. Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know \u2026 and the book characters of his imagination are not so different from people familiar to us.\nEach chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set tone and provide exposition, context and other details intended to enhance understanding of Herbert's complex fictional universe and themes. They act as foreshadowing and invite the reader to keep reading to close the gap between what the epigraph says and what is happening in the main narrative. The epigraphs also give the reader the feeling that the world they are reading about is epically distanced, since Irulan writes about an idealized image of Paul as if he had already passed into memory. Brian Herbert wrote: \"Dad told me that you could follow any of the novel's layers as you read it, and then start the book all over again, focusing on an entirely different layer. 
At the end of the book, he intentionally left loose ends and said he did this to send the readers spinning out of the story with bits and pieces of it still clinging to them, so that they would want to go back and read it again.\"\n\n\n=== Middle-Eastern and Islamic references ===\nDue to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' \"Islamic undertones\" and themes, a Middle-Eastern influence on Herbert's works has been noted repeatedly. In his descriptions of the Fremen culture and language, Herbert uses both authentic Arabic words and Arabic-sounding words. For example, one of the names for the sandworm, Shai-hulud, is derived from Arabic: \u0634\u064a\u0621 \u062e\u0644\u0648\u062f, romanized: \u0161ay\u02be \u1e2bul\u016bd, lit.\u2009'immortal thing' or Arabic: \u0634\u064a\u062e \u062e\u0644\u0648\u062f, romanized: \u0161ay\u1e2b \u1e2bul\u016bd, lit.\u2009'old man of eternity'. The title of the Fremen housekeeper, the Shadout Mapes, is borrowed from the Arabic: \u0634\u0627\u062f\u0648\u0641\u200e, romanized: \u0161\u0101d\u016bf, the Egyptian term for a device used to raise water. In particular, words related to the messianic religion of the Fremen, first implanted by the Bene Gesserit, are taken from Arabic, including Muad'Dib (from Arabic: \u0645\u0624\u062f\u0628, romanized: mu\u02beaddib, lit.\u2009'educator'), Usul (from Arabic: \u0623\u0635\u0648\u0644, romanized: \u02beu\u1e63\u016bl, lit.\u2009'fundamental principles'), Shari-a (from Arabic: \u0634\u0631\u064a\u0639\u0629, romanized: \u0161ar\u012b\u02bfa, lit.\u2009'sharia; path'), Shaitan (from Arabic: \u0634\u064a\u0637\u0627\u0646, romanized: \u0161ay\u1e6d\u0101n, lit.\u2009'Shaitan; devil; fiend'), and jinn (from Arabic: \u062c\u0646, romanized: \u01e7inn, lit.\u2009'jinn; spirit; demon; mythical being'). It is likely Herbert relied on second-hand resources such as phrasebooks and desert adventure stories to find these Arabic words and phrases for the Fremen. They are meaningful and carefully chosen, and help create an \"imagined desert culture that resonates with exotic sounds, enigmas, and pseudo-Islamic references\" and has a distinctly Bedouin aesthetic.\nAs a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides bears many similarities to the historical T. E. Lawrence. The 1962 biopic Lawrence of Arabia has also been identified as a potential influence. The Sabres of Paradise (1960) has likewise been identified as a potential influence upon Dune, with its depiction of Imam Shamil and the Islamic culture of the Caucasus inspiring some of the themes, characters, events and terminology of Dune.\nThe environment of the desert planet Arrakis was primarily inspired by the environments of the Middle East. Similarly, Arrakis as a bioregion is presented as a particular kind of political site. Herbert has made it resemble a desertified petrostate area. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted at by Herbert's reference to Khaldun's book Kit\u0101b al-\u02bfibar (\"The Book of Lessons\"). 
The fictionalized version of the \"Kitab al-ibar\" in Dune is a combination of a Fremen religious manual and a desert survival book.\n\n\n==== Additional language and historic influences ====\nIn addition to Arabic, Dune derives words and names from a variety of other languages, including Hebrew, Navajo, Latin, Dutch (\"Landsraad\"), Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, Sanskrit (\"prana bindu\", \"prajna\"), Russian, Turkish, Finnish, and Old English. Bene Gesserit is simply the Latin for \"It will have been well fought\", also carrying the sense of \"It will have been well managed\", which stands as a statement of the order's goal and as a pledge of faithfulness to that goal. Critics tend to miss the literal meaning of the phrase, some positing that the term is derived from the Latin meaning \"it will have been well borne\", an interpretation that is not well supported by their doctrine in the story.\nThrough the inspiration from The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. Frank Herbert stated that bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was \"aristocratic bureaucracy\" which he saw as analogous to the Soviet Union.\n\n\n=== Environmentalism and ecology ===\nDune has been called the \"first planetary ecology novel on a grand scale\". Herbert hoped it would be seen as an \"environmental awareness handbook\" and said the title was meant to \"echo the sound of 'doom'\". It was reviewed in the best-selling countercultural Whole Earth Catalog in 1968 as a \"rich re-readable fantasy with clear portrayal of the fierce environment it takes to cohere a community\".\nAfter the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of Arrakis life, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex\u2014almost living\u2014thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day.\nWhile the genre of climate fiction was popularized in the 2010s in response to real global climate change, Dune as well as other early science fiction works from authors like J. G. Ballard (The Drowned World) and Kim Stanley Robinson (the Mars trilogy) have retroactively been considered pioneering examples of the genre.\n\n\n=== Declining empires ===\nThe Imperium in Dune contains features of various empires in Europe and the Near East, including the Roman Empire, Holy Roman Empire, and Ottoman Empire. Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. 
In \"The Articulation of Imperial Decadence and Decline in Epic Science Fiction\" (2007), DiTommaso outlines similarities between the two works by highlighting the excesses of the Emperor on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as \"parasites\". The Baron Harkonnen is similarly corrupt and materially indulgent. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. The Emperor's Sardaukar fighters are little match for the Fremen of Dune not only because of the Sardaukar's overconfidence and the fact that Jessica and Paul have trained the Fremen in their battle tactics, but because of the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others.The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Frank Herbert as depicting \"war as a collective orgasm\" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare), themes that would reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army.\n\n\n=== Gender dynamics ===\nGender dynamics are complex in Dune. Within the Fremen sietch communities, women have almost full equality. They carry weapons and travel in raiding parties with men, fighting when necessary alongside the men. They can take positions of leadership as a Sayyadina or as a Reverend Mother (if she can survive the ritual of ingesting the Water of Life.) Both of these sietch religious leaders are routinely consulted by the all-male Council and can have a decisive voice in all matters of sietch life, security and internal politics. They are also protected by the entire community. Due to the high mortality rate among their men, women outnumber men in most sietches. Polygamy is common, and sexual relationships are voluntary and consensual; as Stilgar says to Jessica, \"women among us are not taken against their will.\" \nIn contrast, the Imperial aristocracy leaves young women of noble birth very little agency. Frequently trained by the Bene Gesserit, they are raised to eventually marry other aristocrats. Marriages between Major and Minor Houses are political tools to forge alliances or heal old feuds; women are given very little say in the matter. Many such marriages are quietly maneuvered by the Bene Gesserit to produce offspring with some genetic characteristics needed by the sisterhood's human-breeding program. In addition, such highly-placed sisters were in a position to subtly influence their husbands' actions in ways that could move the politics of the Imperium toward Bene Gesserit goals. \nThe gom jabbar test of humanity is administered by the female Bene Gesserit order but rarely to males. The Bene Gesserit have seemingly mastered the unconscious and can play on the unconscious weaknesses of others using the Voice, yet their breeding program seeks after a male Kwisatz Haderach. 
Their plan is to produce a male who can \"possess complete racial memory, both male and female,\" and look into the black hole in the collective unconscious that they fear. A central theme of the book is the connection, in Jessica's son, of this female aspect with his male aspect. This aligns with concepts in Jungian psychology, which features conscious/unconscious and taking/giving roles associated with males and females, as well as the idea of the collective unconscious. Paul's approach to power consistently requires his upbringing under the matriarchal Bene Gesserit, who operate as a long-dominating shadow government behind all of the great houses and their marriages or divisions. He is trained by Jessica in the Bene Gesserit Way, which includes prana-bindu training in nerve and muscle control and precise perception. Paul also receives Mentat training, thus helping prepare him to be a type of androgynous Kwisatz Haderach, a male Reverend Mother.\nIn a Bene Gesserit test early in the book, it is implied that people are generally \"inhuman\" in that they irrationally place desire over self-interest and reason. This applies Herbert's philosophy that humans are not created equal, while equal justice and equal opportunity are higher ideals than mental, physical, or moral equality.\n\n\n=== Heroism ===\nI am showing you the superhero syndrome and your own participation in it.\nThroughout Paul's rise to superhuman status, he follows a plotline common to many stories describing the birth of a hero. He has unfortunate circumstances forced onto him. After a long period of hardship and exile, he confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. Eventually, Paul Atreides gains a level of omniscience which allows him to take over the planet and the galaxy, and causes the Fremen of Arrakis to worship him like a god. Author Frank Herbert said in 1979, \"The bottom line of the Dune trilogy is: beware of heroes. Much better [to] rely on your own judgment, and your own mistakes.\" He wrote in 1985, \"Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question.\"\nJuan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of \"painful and slow personal progress.\" And unlike other superheroes of the 1960s\u2014who are the exception among ordinary people in their respective worlds\u2014Herbert's characters grow their powers through \"the application of mystical philosophies and techniques.\" For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Ginaz swordsmen and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators).\n\n\n=== Zen and religion ===\n\nEarly in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who \"gave a crucial boost to his thinking\". Zen teachings ultimately had \"a profound and continuing influence on [Herbert's] work\". 
Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism. The Fremen are referred to as Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited. In \"Dune Genesis\", Frank Herbert wrote:\n\nWhat especially pleases me is to see the interwoven themes, the fugue-like relationships of images that exactly replay the way Dune took shape. As in an Escher lithograph, I involved myself with recurrent themes that turn into paradox. The central paradox concerns the human vision of time. What about Paul's gift of prescience - the Presbyterian fixation? For the Delphic Oracle to perform, it must tangle itself in a web of predestination. Yet predestination negates surprises and, in fact, sets up a mathematically enclosed universe whose limits are always inconsistent, always encountering the unprovable. It's like a koan, a Zen mind breaker. It's like the Cretan Epimenides saying, \"All Cretans are liars.\"\nBrian Herbert called the Dune universe \"a spiritual melting pot\", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which \"religious beliefs have combined into interesting forms\" represents the author's solution to eliminating arguments between religions, each of which claimed to have \"the one and only revelation.\"\n\n\n=== Asimov's Foundation ===\nTim O'Reilly suggests that Herbert also wrote Dune as a counterpoint to Isaac Asimov's Foundation series. In his monograph on Frank Herbert, O'Reilly wrote that \"Dune is clearly a commentary on the Foundation trilogy. Herbert has taken a look at the same imaginative situation that provoked Asimov's classic\u2014the decay of a galactic empire\u2014and restated it in a way that draws on different assumptions and suggests radically different conclusions. The twist he has introduced into Dune is that the Mule, not the Foundation, is his hero.\" According to O'Reilly, Herbert bases the Bene Gesserit on the scientific shamans of the Foundation, though they use biological rather than statistical science. In contrast to the Foundation series and its praise of science and rationality, Dune proposes that the unconscious and unexpected are actually what are needed for humanity.\nBoth Herbert and Asimov explore the implications of prescience (i.e., visions of the future) both psychologically and socially. The Foundation series deploys a broadly determinist approach to prescient vision rooted in mathematical reasoning on a macroscopic social level. Dune, by contrast, invents a biologically rooted power of prescience that becomes determinist when the user actively relies on it to navigate past an undefined threshold of detail. Herbert\u2019s eugenically produced and spice-enhanced prescience is also personalized to individual actors whose roles in later books constrain each other's visions, rendering the future more or less mutable as time progresses. 
In what might be a comment on Foundation, Herbert's most powerfully prescient being in God Emperor of Dune laments the boredom engendered by prescience, and values surprises, especially regarding one's death, as a psychological necessity.\nHowever, both works contain a similar theme of the restoration of civilization and seem to make the fundamental assumption that \"political maneuvering, the need to control material resources, and friendship or mating bonds will be fundamentally the same in the future as they are now.\"\n\n\n== Critical reception ==\nDune tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and won the inaugural Nebula Award for Best Novel. Reviews of the novel have been largely positive, and Dune is considered by some critics to be the best science fiction book ever written. The novel has been translated into dozens of languages, and has sold almost 20 million copies. Dune has been regularly cited as one of the world's best-selling science fiction novels.\nArthur C. Clarke described Dune as \"unique\" and wrote, \"I know nothing comparable to it except The Lord of the Rings.\" Robert A. Heinlein described the novel as \"powerful, convincing, and most ingenious.\" It was described as \"one of the monuments of modern science fiction\" by the Chicago Tribune, and P. Schuyler Miller called Dune \"one of the landmarks of modern science fiction ... an amazing feat of creation.\" The Washington Post described it as \"a portrayal of an alien society more complete and deeply detailed than any other author in the field has managed ... a story absorbing equally for its action and philosophical vistas ... An astonishing science fiction phenomenon.\" Algis Budrys praised Dune for the vividness of its imagined setting, saying \"The time lives. It breathes, it speaks, and Herbert has smelt it in his nostrils\". He found that the novel, however, \"turns flat and tails off at the end. ... [T]ruly effective villains simply simper and melt; fierce men and cunning statesmen and seeresses all bend before this new Messiah\". Budrys faulted in particular Herbert's decision to kill Paul's infant son offstage, with no apparent emotional impact, saying \"you cannot be so busy saving a world that you cannot hear an infant shriek\". After criticizing unrealistic science fiction, Carl Sagan in 1978 listed Dune as among stories \"that are so tautly constructed, so rich in the accommodating details of an unfamiliar society that they sweep me along before I have even a chance to be critical\".\nThe Louisville Times wrote, \"Herbert's creation of this universe, with its intricate development and analysis of ecology, religion, politics, and philosophy, remains one of the supreme and seminal achievements in science fiction.\" Writing for The New Yorker, Jon Michaud praised Herbert's \"clever authorial decision\" to exclude robots and computers (\"two staples of the genre\") from his fictional universe, but suggested that this may be one explanation why Dune lacks \"true fandom among science-fiction fans\" to the extent that it \"has not penetrated popular culture in the way that The Lord of the Rings and Star Wars have\". Tamara I. Hladik wrote that the story \"crafts a universe where lesser novels promulgate excuses for sequels. 
All its rich elements are in balance and plausible\u2014not the patchwork confederacy of made-up languages, contrived customs, and meaningless histories that are the hallmark of so many other, lesser novels.\"\nOn November 5, 2019, BBC News listed Dune on its list of the 100 most influential novels.\nJ. R. R. Tolkien refused to review Dune, on the grounds that he disliked it \"with some intensity\" and thus felt it would be unfair to Herbert, another working author, if he gave an honest review of the book.\n\n\n== First edition prints and manuscripts ==\nThe first edition of Dune is one of the most valuable in science fiction book collecting. Copies have been sold for more than $10,000 at auction. The Chilton first edition of the novel is 9+1\u20444 inches (235 mm) tall, with bluish green boards and a price of $5.95 on the dust jacket, and notes Toronto as the Canadian publisher on the copyright page. Up to this point, Chilton had been publishing only automobile repair manuals.\nCalifornia State University, Fullerton's Pollak Library has several of Herbert's draft manuscripts of Dune and other works, with the author's notes, in their Frank Herbert Archives.\n\n\n== Sequels and prequels ==\n\nAfter Dune proved to be a critical and financial success for Herbert, he was able to devote himself full time to writing additional novels in the series. He had already drafted parts of the second and third while writing Dune. The series included Dune Messiah (1969), Children of Dune (1976), God Emperor of Dune (1981), Heretics of Dune (1984), and Chapterhouse: Dune (1985), each sequentially continuing the narrative from Dune. Herbert died on February 11, 1986.\nHerbert's son, Brian Herbert, had found several thousand pages of notes left by his father that outlined ideas for other narratives related to Dune. Brian Herbert enlisted author Kevin J. Anderson to help build out prequel novels to the events of Dune. Brian Herbert's and Anderson's Dune prequels first started publication in 1999, and have led to additional stories that take place between those of Frank Herbert's books. The notes for what would have been Dune 7 also enabled them to publish Hunters of Dune (2006) and Sandworms of Dune (2007), sequels to Frank Herbert's final novel Chapterhouse: Dune, which complete the chronological progression of his original series, and wrap up storylines that began in Heretics of Dune.\n\n\n== Adaptations ==\n\nDune has been considered an \"unfilmable\" and \"uncontainable\" work to adapt from novel to film or other visual medium. As described by Wired, \"It has four appendices and a glossary of its own gibberish, and its action takes place on two planets, one of which is a desert overrun by worms the size of airport runways. Lots of important people die or try to kill each other, and they're all tethered to about eight entangled subplots.\" There have been several attempts to achieve this difficult conversion with various degrees of success.\n\n\n=== Early stalled attempts ===\nIn 1971, the production company Apjac International (APJ) (headed by Arthur P. Jacobs) optioned the rights to film Dune. As Jacobs was busy with other projects, such as the sequel to Planet of the Apes, Dune was delayed for another year. Jacobs' first choice for director was David Lean, but he turned down the offer. Charles Jarrott was also considered to direct. Work was also under way on a script while the hunt for a director continued. 
Initially, the first treatment had been handled by Robert Greenhut, the producer who had lobbied Jacobs to make the movie in the first place, but subsequently Rospo Pallenberg was approached to write the script, with shooting scheduled to begin in 1974. However, Jacobs died in 1973.\nIn December 1974, a French consortium led by Jean-Paul Gibon purchased the film rights from APJ, with Alejandro Jodorowsky set to direct. In 1975, Jodorowsky planned to film the story as a 14-hour feature, set to star his own son Brontis Jodorowsky in the lead role of Paul Atreides, Salvador Dal\u00ed as Shaddam IV, Padishah Emperor, Amanda Lear as Princess Irulan, Orson Welles as Baron Vladimir Harkonnen, Gloria Swanson as Reverend Mother Gaius Helen Mohiam, David Carradine as Duke Leto Atreides, Geraldine Chaplin as Lady Jessica, Alain Delon as Duncan Idaho, Herv\u00e9 Villechaize as Gurney Halleck, Udo Kier as Piter De Vries, and Mick Jagger as Feyd-Rautha. It was at first proposed to score the film with original music by Karlheinz Stockhausen, Henry Cow, and Magma; later on, the soundtrack was to be provided by Pink Floyd. Jodorowsky set up a pre-production unit in Paris consisting of Chris Foss, a British artist who designed covers for science fiction periodicals, Jean Giraud (Moebius), a French illustrator who created and also wrote and drew for Metal Hurlant magazine, and H. R. Giger. Moebius began designing creatures and characters for the film, while Foss was brought in to design the film's space ships and hardware. Giger began designing the Harkonnen Castle based on Moebius's storyboards. Dan O'Bannon was to head the special effects department.\nDal\u00ed was cast as the Emperor. Dal\u00ed later demanded to be paid $100,000 per hour; Jodorowsky agreed, but tailored Dal\u00ed's part to be filmed in one hour, drafting plans for other scenes of the emperor to use a mechanical mannequin as substitute for Dal\u00ed. According to Giger, Dal\u00ed was \"later invited to leave the film because of his pro-Franco statements\". Just as the storyboards, designs, and script were finished, the financial backing dried up. Frank Herbert traveled to Europe in 1976 to find that $2 million of the $9.5 million budget had already been spent in pre-production, and that Jodorowsky's script would result in a 14-hour movie (\"It was the size of a phone book\", Herbert later recalled). Jodorowsky took creative liberties with the source material, but Herbert said that he and Jodorowsky had an amicable relationship. Jodorowsky said in 1985 that he found the Dune story mythical and had intended to recreate it rather than adapt the novel; though he had an \"enthusiastic admiration\" for Herbert, Jodorowsky said he had done everything possible to distance the author and his input from the project. Although Jodorowsky was embittered by the experience, he said the Dune project changed his life, and some of the ideas were used in his and Moebius's The Incal. O'Bannon entered a psychiatric hospital after the production failed, then worked on 13 scripts, the last of which became Alien. A 2013 documentary, Jodorowsky's Dune, was made about Jodorowsky's failed attempt at an adaptation.\nIn 1976, Dino De Laurentiis acquired the rights from Gibon's consortium. De Laurentiis commissioned Herbert to write a new screenplay in 1978; the script Herbert turned in was 175 pages long, the equivalent of nearly three hours of screen time. De Laurentiis then hired director Ridley Scott in 1979, with Rudy Wurlitzer writing the screenplay and H. R. 
Giger retained from the Jodorowsky production; Scott and Giger had also just worked together on the film Alien, after O'Bannon recommended the artist. Scott intended to split the novel into two movies. He worked on three drafts of the script, using The Battle of Algiers as a point of reference, before moving on to direct another science fiction film, Blade Runner (1982). As he recalls, the pre-production process was slow, and finishing the project would have been even more time-intensive:\n\nBut after seven months I dropped out of Dune, by then Rudy Wurlitzer had come up with a first-draft script which I felt was a decent distillation of Frank Herbert's. But I also realised Dune was going to take a lot more work\u2014at least two and a half years' worth. And I didn't have the heart to attack that because my older brother Frank unexpectedly died of cancer while I was prepping the De Laurentiis picture. Frankly, that freaked me out. So I went to Dino and told him the Dune script was his.\n\u2014From Ridley Scott: The Making of his Movies by Paul M. Sammon\n\n\n=== 1984 film by David Lynch ===\n\nIn 1981, the nine-year film rights were set to expire. De Laurentiis re-negotiated the rights from the author, adding to them the rights to the Dune sequels (written and unwritten). After seeing The Elephant Man, De Laurentiis' daughter Raffaella decided that David Lynch should direct the movie. Around that time Lynch received several other directing offers, including Return of the Jedi. He agreed to direct Dune and write the screenplay even though he had not read the book, was not familiar with the story, and had never been interested in science fiction. Lynch worked on the script for six months with Eric Bergren and Christopher De Vore. The team produced two drafts of the script before splitting over creative differences. Lynch would subsequently work on five more drafts. Production was troubled by problems at the Mexican studio that hampered the film's timeline. Lynch ended up producing a nearly three-hour-long film, but at the demand of Universal Pictures, the film's distributor, he cut it back to about two hours, hastily filming additional scenes to make up for some of the cut footage. This first film of Dune, directed by Lynch, was released in 1984, nearly 20 years after the book's publication. Though Herbert said the book's depth and symbolism seemed to intimidate many filmmakers, he was pleased with the film, saying that \"They've got it. It begins as Dune does. And I hear my dialogue all the way through. There are some interpretations and liberties, but you're gonna come out knowing you've seen Dune.\" Reviews of the film were negative, saying that it was incomprehensible to those unfamiliar with the book, and that fans would be disappointed by the way it strayed from the book's plot. Upon release for television and other forms of home media, Universal opted to reintroduce much of the footage that Lynch had cut, creating a version over three hours long with extensive monologue exposition. Lynch was extremely displeased with this move and demanded that Universal replace his name on these cuts with the pseudonym \"Alan Smithee\"; he has generally distanced himself from the film since.\n\n\n=== 2000 miniseries by John Harrison ===\n\nIn 2000, John Harrison adapted the novel into Frank Herbert's Dune, a miniseries which premiered on the American Sci-Fi Channel. 
As of 2004, the miniseries was one of the three highest-rated programs broadcast on the Sci-Fi Channel.\n\n\n=== Further film attempts ===\nIn 2008, Paramount Pictures announced that they would produce a new film based on the book, with Peter Berg attached to direct. Producer Kevin Misher, who spent a year securing the rights from the Herbert estate, was to be joined by Richard Rubinstein and John Harrison (of both Sci-Fi Channel miniseries) as well as Sarah Aubrey and Mike Messina. The producers stated that they were going for a \"faithful adaptation\" of the novel, and considered \"its theme of finite ecological resources particularly timely.\" Science fiction author Kevin J. Anderson and Frank Herbert's son Brian Herbert, who had together written multiple Dune sequels and prequels since 1999, were attached to the project as technical advisors. In October 2009, Berg dropped out of the project, later saying that it \"for a variety of reasons wasn't the right thing\" for him. Subsequently, with a script draft by Joshua Zetumer, Paramount reportedly sought a new director who could do the film for under $175 million. In 2010, Pierre Morel was signed on to direct, with screenwriter Chase Palmer incorporating Morel's vision of the project into Zetumer's original draft. By November 2010, Morel left the project. Paramount finally dropped plans for a remake in March 2011.\n\n\n=== Films by Denis Villeneuve ===\n\nIn November 2016, Legendary Entertainment acquired the film and TV rights for Dune. Variety reported in December 2016 that Denis Villeneuve was in negotiations to direct the project, which was confirmed in February 2017. In April 2017, Legendary announced that Eric Roth would write the screenplay. Villeneuve explained in March 2018 that his adaptation will be split into two films, with the first installment scheduled to begin production in 2019. Casting includes Timoth\u00e9e Chalamet as Paul Atreides, Dave Bautista as Rabban, Stellan Skarsg\u00e5rd as Baron Harkonnen, Rebecca Ferguson as Lady Jessica, Charlotte Rampling as Reverend Mother Mohiam, Oscar Isaac as Duke Leto Atreides, Zendaya as Chani, Javier Bardem as Stilgar, Josh Brolin as Gurney Halleck, Jason Momoa as Duncan Idaho, David Dastmalchian as Piter De Vries, Chang Chen as Dr. Yueh, and Stephen Henderson as Thufir Hawat. Warner Bros. Pictures distributed the film, which had its initial premiere on September 3, 2021, at the Venice Film Festival, and wide release in both theaters and streaming on HBO Max on October 21, 2021, as part of Warner Bros.'s approach to handling the impact of the COVID-19 pandemic on the film industry. The film received \"generally favorable reviews\" on Metacritic. It has gone on to win multiple awards and was named by the National Board of Review as one of the 10 best films of 2021, as well as the American Film Institute in their annual top 10 list. The film went on to be nominated for ten Academy Awards, winning six, the most wins of the night for any film in contention.A sequel, Dune: Part Two, was scheduled for release on November 3, 2023, but will now instead be released on March 15th 2024 amid the 2023 SAG-AFTRA strike.\n\n\n=== Audiobooks ===\nIn 1993, Recorded Books Inc. released a 20-disc audiobook narrated by George Guidall. 
In 2007, Audio Renaissance released an audio book narrated by Simon Vance with some parts performed by Scott Brick, Orlagh Cassidy, Euan Morton, and other performers.\n\n\n== Cultural influence ==\nDune has been widely influential, inspiring numerous novels, music, films, television, games, and comic books. It is considered one of the greatest and most influential science fiction novels of all time, with numerous modern science fiction works such as Star Wars owing their existence to Dune. Dune has also been referenced in numerous other works of popular culture, including Star Trek, Chronicles of Riddick, The Kingkiller Chronicle and Futurama. Dune was cited as a source of inspiration for Hayao Miyazaki's anime film Nausica\u00e4 of the Valley of the Wind (1984) for its post-apocalyptic world.Dune was parodied in 1984's National Lampoon's Doon by Ellis Weiner, which William F. Touponce called \"something of a tribute to Herbert's success on college campuses\", noting that \"the only other book to have been so honored is Tolkien's The Lord of the Rings,\" which was parodied by The Harvard Lampoon in 1969.\n\n\n=== Music ===\nIn 1978, French electronic musician Richard Pinhas released the nine-track Dune-inspired album Chronolyse, which includes the seven-part Variations sur le th\u00e8me des Bene Gesserit.\nIn 1979, German electronic music pioneer Klaus Schulze released an LP titled Dune featuring motifs and lyrics inspired by the novel.\nA similar musical project, Visions of Dune, was released also in 1979 by Zed (a pseudonym of French electronic musician Bernard Sjazner).\nHeavy metal band Iron Maiden wrote the song \"To Tame a Land\" based on the Dune story. It appears as the closing track to their 1983 album Piece of Mind. The original working title of the song was \"Dune\"; however, the band was denied permission to use it, with Frank Herbert's agents stating \"Frank Herbert doesn't like rock bands, particularly heavy rock bands, and especially bands like Iron Maiden\".\nDune inspired the German happy hardcore band Dune, who have released several albums with space travel-themed songs.\nThe progressive hardcore band Shai Hulud took their name from Dune.\n\"Traveller in Time\", from the 1991 Blind Guardian album Tales from the Twilight World, is based mostly on Paul Atreides' visions of future and past.\nThe title of the 1993 Fear Factory album Fear is The Mindkiller is a quote from the \"litany against fear\".\nThe song \"Near Fantastica\", from the Matthew Good album Avalanche, makes reference to the \"litany against fear\", repeating \"can't feel fear, fear's the mind killer\" through a section of the song.\nIn the Fatboy Slim song \"Weapon of Choice\", the line \"If you walk without rhythm/You won't attract the worm\" is a near quotation from the sections of novel in which Stilgar teaches Paul to ride sandworms.\nDune also inspired the 1999 album The 2nd Moon by the German death metal band Golem, which is a concept album about the series.\nDune influenced Thirty Seconds to Mars on their self-titled debut album.\nThe Youngblood Brass Band's song \"Is an Elegy\" on Center:Level:Roar references \"Muad'Dib\", \"Arrakis\" and other elements from the novel.\nThe debut album of Canadian musician Grimes, called Geidi Primes, is a concept album based on Dune.\nJapanese singer Kenshi Yonezu, released a song titled \"Dune\", also known as \"Sand Planet\". 
The song was released in 2017 and was created using the voice synthesizer Hatsune Miku for her 10th anniversary.\n\"Fear is the Mind Killer\", a song released in 2018 by Zheani (an Australian rapper), uses a quote from Dune.\n\"Litany Against Fear\" is a spoken track released in 2018 on Zheani's album Eight. She recites an extract from Dune.\nSleep's 2018 album The Sciences features a song, \"Giza Butler\", that references several aspects of Dune.\nTool's 2019 album Fear Inoculum has a song entitled \"Litanie contre la peur (Litany against fear)\".\n\"Rare to Wake\", from Shannon Lay's album Geist (2019), is inspired by Dune.\nHeavy metal band Diamond Head based the song \"The Sleeper\" and its prelude, both from the album The Coffin Train, on the series.\n\n\n=== Games ===\n\nThere have been a number of games based on the book, starting with the strategy\u2013adventure game Dune (1992). The most important game adaptation is Dune II (1992), which established the conventions of modern real-time strategy games and is considered to be among the most influential video games of all time. The online game Lost Souls includes Dune-derived elements, including sandworms and melange\u2014addiction to which can produce psychic talents. The 2016 game Enter the Gungeon features the spice melange as a random item which gives the player progressively stronger abilities and penalties with repeated uses, mirroring the long-term effects melange has on users. Rick Priestley cites Dune as a major influence on his 1987 wargame, Warhammer 40,000. In 2023, Funcom announced Dune: Awakening, an upcoming massively multiplayer online game set in the universe of Dune.\n\n\n=== Space exploration ===\nThe Apollo 15 astronauts named a small crater on Earth's Moon after the novel during the 1971 mission, and the name was formally adopted by the International Astronomical Union in 1973. Since 2009, the names of planets from the Dune novels have been adopted for the real-world nomenclature of plains and other features on Saturn's moon Titan, like Arrakis Planitia.\n\n\n== See also ==\nSoft science fiction \u2013 Sub-genre of science fiction emphasizing \"soft\" sciences or human emotions\nHydraulic empire \u2013 Government by control of access to water\n\n\n== References ==\n\n\n== Further reading ==\nClute, John; Nicholls, Peter (1995). The Encyclopedia of Science Fiction. New York: St. Martin's Press. p. 1386. ISBN 978-0-312-13486-0.\nClute, John; Nicholls, Peter (1995). The Multimedia Encyclopedia of Science Fiction (CD-ROM). Danbury, CT: Grolier. ISBN 978-0-7172-3999-3.\nHuddleston, Tom. The Worlds of Dune: The Places and Cultures That Inspired Frank Herbert. Minneapolis: Quarto Publishing Group UK, 2023.\nJakubowski, Maxim; Edwards, Malcolm (1983). The Complete Book of Science Fiction and Fantasy Lists. St Albans, Herts, UK: Granada Publishing Ltd. p. 350. ISBN 978-0-586-05678-3.\nKennedy, Kara. Frank Herbert's Dune: A Critical Companion. Cham, Switzerland: Palgrave Macmillan, 2022.\nKennedy, Kara. Women's Agency in the Dune Universe: Tracing Women's Liberation through Science Fiction. Cham, Switzerland: Palgrave Macmillan, 2020.\nNardi, Dominic J. & N. Trevor Brierly, eds. Discovering Dune: Essays on Frank Herbert's Epic Saga. Jefferson, NC: McFarland & Co., 2022.\nNicholas, Jeffery, ed. Dune and Philosophy: Weirding Way of Mentat. Chicago: Open Court, 2011.\nNicholls, Peter (1979). The Encyclopedia of Science Fiction. St Albans, Herts, UK: Granada Publishing Ltd. p. 672. 
ISBN 978-0-586-05380-5.\nO\u2019Reilly, Timothy. Frank Herbert. New York: Frederick Ungar, 1981.\nPringle, David (1990). The Ultimate Guide to Science Fiction. London: Grafton Books Ltd. p. 407. ISBN 978-0-246-13635-0.\nTuck, Donald H. (1974). The Encyclopedia of Science Fiction and Fantasy. Chicago: Advent. p. 136. ISBN 978-0-911682-20-5.\nWilliams, Kevin C. The Wisdom of the Sand: Philosophy and Frank Herbert's Dune. New York: Hampton Press, 2013.\n\n\n== External links ==\n\nOfficial website for Dune and its sequels\nDune title listing at the Internet Speculative Fiction Database\nTurner, Paul (October 1973). \"Vertex Interviews Frank Herbert\" (Interview). Vol. 1, no. 4. Archived from the original on May 19, 2009.\nSpark Notes: Dune, detailed study guide\nDuneQuotes.com \u2013 Collection of quotes from the Dune series\nDune by Frank Herbert, reviewed by Ted Gioia (Conceptual Fiction)\n\"Frank Herbert Biography and Bibliography at LitWeb.net\". www.litweb.net. Archived from the original on April 2, 2009. Retrieved January 2, 2009.\nWorks of Frank Herbert at Curlie\nTimberg, Scott (April 18, 2010). \"Frank Herbert's Dune holds timely \u2013 and timeless \u2013 appeal\". Los Angeles Times. Archived from the original on December 3, 2013. Retrieved November 27, 2013.\nWalton, Jo (January 12, 2011). \"In league with the future: Frank Herbert's Dune (Review)\". Tor.com. Retrieved November 27, 2013.\nLeonard, Andrew (June 4, 2015). \"To Save California, Read Dune\". Nautilus. Archived from the original on November 4, 2017. Retrieved June 15, 2015.\nDune by Frank Herbert \u2013 Foreshadowing & Dedication at Fact Behind Fiction\nFrank Herbert by Tim O'Reilly\nDuneScholar.com \u2013 Collection of scholarly essays" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-parent\\README.md", + "filetype": ".md", + "content": "\n# neo4j-parent\n\nThis template allows you to balance precise embeddings and context retention by splitting documents into smaller chunks and retrieving their original or larger text information. 
\n\nUsing a Neo4j vector index, the package queries child nodes using vector similarity search and retrieves the corresponding parent's text by defining an appropriate `retrieval_query` parameter.\n\n## Environment Setup\n\nYou need to define the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Populating with data\n\nIf you want to populate the DB with some example data, you can run `python ingest.py`.\nThe script processes and stores sections of the text from the file `dune.txt` in a Neo4j graph database.\nFirst, the text is divided into larger chunks (\"parents\") and then further subdivided into smaller chunks (\"children\"), where both parent and child chunks overlap slightly to maintain context.\nAfter storing these chunks in the database, embeddings for the child nodes are computed using OpenAI's embeddings and stored back in the graph for future retrieval or analysis.\nAdditionally, a vector index named `retrieval` is created for efficient querying of these embeddings.
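\nTo illustrate the parent-child retrieval idea described above, here is a minimal sketch of how such an index can be wired up with a custom `retrieval_query`. It is not the template's actual code: the `HAS_CHILD` relationship and the `text` property are illustrative assumptions, so check `ingest.py` and the package's chain definition for the real schema:\n\n```python\nimport os\n\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import Neo4jVector\n\n# Similarity search runs over the child-chunk embeddings; the query then\n# hops from each matched child to its parent node and returns the parent's\n# larger text, deduplicating parents that share several matching children.\n# HAS_CHILD and the text property are assumed names, not the real schema.\nretrieval_query = \"\"\"\nMATCH (node)<-[:HAS_CHILD]-(parent)\nWITH parent, max(score) AS score\nRETURN parent.text AS text, score, {} AS metadata\n\"\"\"\n\nvectorstore = Neo4jVector.from_existing_index(\n    OpenAIEmbeddings(),\n    url=os.environ[\"NEO4J_URI\"],\n    username=os.environ[\"NEO4J_USERNAME\"],\n    password=os.environ[\"NEO4J_PASSWORD\"],\n    index_name=\"retrieval\",\n    retrieval_query=retrieval_query,\n)\nretriever = vectorstore.as_retriever()\n```\n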
\n\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-parent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-parent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_parent import chain as neo4j_parent_chain\n\nadd_routes(app, neo4j_parent_chain, path=\"/neo4j-parent\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by running:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app, with the server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-parent/playground](http://127.0.0.1:8000/neo4j-parent/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-parent\")\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-semantic-layer\\README.md", "filetype": ".md", "content": "# neo4j-semantic-layer\n\nThis template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.\nThe semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.\nLearn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49).\n\n![Diagram illustrating the workflow of the Neo4j semantic layer with an agent interacting with tools like Information, Recommendation, and Memory, connected to a knowledge graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-layer/static/workflow.png \"Neo4j Semantic Layer Workflow Diagram\")\n\n## Tools\n\nThe agent utilizes several tools to interact with the Neo4j graph database effectively:\n\n1. **Information tool**:\n - Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.\n2. **Recommendation Tool**:\n - Provides movie recommendations based upon user preferences and input.\n3. 
**Memory Tool**:\n - Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.\n\n## Environment Setup\n\nYou need to define the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Populating with data\n\nIf you want to populate the DB with an example movie dataset, you can run `python ingest.py`.\nThe script imports information about movies and their ratings by users.\nAdditionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-semantic-layer\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-semantic-layer\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_semantic_layer import agent_executor as neo4j_semantic_agent\n\nadd_routes(app, neo4j_semantic_agent, path=\"/neo4j-semantic-layer\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by running:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app, with the server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-semantic-layer/playground](http://127.0.0.1:8000/neo4j-semantic-layer/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-semantic-layer\")\n```\n
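\nFor example, the served agent can then be invoked remotely. This is a hedged sketch: it assumes the agent executor accepts an `input` key, which is common for LangChain agent executors but should be verified against the template's schema in the playground:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nagent = RemoteRunnable(\"http://localhost:8000/neo4j-semantic-layer\")\n\n# Hypothetical input payload; the exact schema is shown in the playground UI.\nresponse = agent.invoke({\"input\": \"What movies should I watch if I liked The Matrix?\"})\nprint(response)\n```\n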
" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-semantic-ollama\\README.md", "filetype": ".md", "content": "# neo4j-semantic-ollama\n\nThis template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent.\nThe semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.\nLearn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49).\n\n![Diagram illustrating the workflow of the Neo4j semantic layer with an agent interacting with tools like Information, Recommendation, and Memory, connected to a knowledge graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-ollama/static/workflow.png \"Neo4j Semantic Layer Workflow Diagram\")\n\n## Tools\n\nThe agent utilizes several tools to interact with the Neo4j graph database effectively:\n\n1. **Information tool**:\n - Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.\n2. **Recommendation Tool**:\n - Provides movie recommendations based upon user preferences and input.\n3. **Memory Tool**:\n - Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.\n4. **Smalltalk Tool**:\n - Allows the agent to deal with smalltalk.\n\n## Environment Setup\n\nBefore using this template, you need to set up Ollama and a Neo4j database.\n\n1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.\n\n2. Download your LLM of interest:\n\n * This package uses `mixtral`: `ollama pull mixtral`\n * You can choose from many LLMs [here](https://ollama.ai/library)\n\nYou need to define the following environment variables:\n\n```\nOLLAMA_BASE_URL=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Populating with data\n\nIf you want to populate the DB with an example movie dataset, you can run `python ingest.py`.\nThe script imports information about movies and their ratings by users.\nAdditionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-semantic-ollama\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-semantic-ollama\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_semantic_layer import agent_executor as neo4j_semantic_agent\n\nadd_routes(app, neo4j_semantic_agent, path=\"/neo4j-semantic-ollama\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by running:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app, with the server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-semantic-ollama/playground](http://127.0.0.1:8000/neo4j-semantic-ollama/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-semantic-ollama\")\n```\n
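\nAs a quick sanity check that the local Ollama server and the `mixtral` model are reachable (useful when debugging the template), here is a minimal sketch. It assumes the same `OLLAMA_BASE_URL` variable the template reads and Ollama's default port; it is not part of the template itself, and the `ChatOllama` import path may differ across langchain versions:\n\n```python\nimport os\n\nfrom langchain.chat_models import ChatOllama\n\n# Point at the local Ollama server; http://localhost:11434 is Ollama's default.\nllm = ChatOllama(\n    base_url=os.environ.get(\"OLLAMA_BASE_URL\", \"http://localhost:11434\"),\n    model=\"mixtral\",\n)\nprint(llm.invoke(\"Reply with one word: ready\"))\n```\n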
" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-vector-memory\\dune.txt", "filetype": ".txt", "content": "Dune is a 1965 epic science fiction novel by American author Frank Herbert, originally published as two separate serials in Analog magazine. It tied with Roger Zelazny's This Immortal for the Hugo Award in 1966, and it won the inaugural Nebula Award for Best Novel. It is the first installment of the Dune Chronicles. It is one of the world's best-selling science fiction novels. Dune is set in the distant future in a feudal interstellar society in which various noble houses control planetary fiefs. It tells the story of young Paul Atreides, whose family accepts the stewardship of the planet Arrakis. While the planet is an inhospitable and sparsely populated desert wasteland, it is the only source of melange, or \"spice\", a drug that extends life and enhances mental abilities. Melange is also necessary for space navigation, which requires a kind of multidimensional awareness and foresight that only the drug provides. As melange can only be produced on Arrakis, control of the planet is a coveted and dangerous undertaking. The story explores the multilayered interactions of politics, religion, ecology, technology, and human emotion, as the factions of the empire confront each other in a struggle for the control of Arrakis and its spice.\nHerbert wrote five sequels: Dune Messiah, Children of Dune, God Emperor of Dune, Heretics of Dune, and Chapterhouse: Dune. Following Herbert's death in 1986, his son Brian Herbert and author Kevin J. Anderson have continued the series in over a dozen additional novels since 1999.\nAdaptations of the novel to cinema have been notoriously difficult and complicated. In the 1970s, cult filmmaker Alejandro Jodorowsky attempted to make a film based on the novel. After three years of development, the project was canceled due to a constantly growing budget. In 1984, a film adaptation directed by David Lynch was released to mostly negative responses from critics and failure at the box office, although it later developed a cult following. The book was also adapted into the 2000 Sci-Fi Channel miniseries Frank Herbert's Dune and its 2003 sequel Frank Herbert's Children of Dune (the latter of which combines the events of Dune Messiah and Children of Dune). A second film adaptation directed by Denis Villeneuve was released on October 21, 2021, to positive reviews. It grossed $401 million worldwide and went on to be nominated for ten Academy Awards, winning six. Villeneuve's film covers roughly the first half of the original novel; a sequel, which will cover the remaining story, will be released in March 2024.\nThe series has also been used as the basis for several board, role-playing, and video games.\nSince 2009, the names of planets from the Dune novels have been adopted for the real-life nomenclature of plains and other features on Saturn's moon Titan.\n\n\n== Origins ==\nAfter his novel The Dragon in the Sea was published in 1957, Herbert traveled to Florence, Oregon, at the north end of the Oregon Dunes. Here, the United States Department of Agriculture was attempting to use poverty grasses to stabilize the sand dunes. Herbert claimed in a letter to his literary agent, Lurton Blassingame, that the moving dunes could \"swallow whole cities, lakes, rivers, highways.\" Herbert's article on the dunes, \"They Stopped the Moving Sands\", was never completed (and only published decades later in The Road to Dune), but its research sparked Herbert's interest in ecology and deserts. Herbert further drew inspiration from Native American mentors like \"Indian Henry\" (as Herbert referred to the man to his son; likely a Henry Martin of the Hoh tribe) and Howard Hansen. 
Both Martin and Hansen grew up on the Quileute reservation near Herbert's hometown. According to historian Daniel Immerwahr, Hansen regularly shared his writing with Herbert. \"White men are eating the earth,\" Hansen told Herbert in 1958, after sharing a piece on the effect of logging on the Quileute reservation. \"They're gonna turn this whole planet into a wasteland, just like North Africa.\" The world could become a \"big dune,\" Herbert responded in agreement.Herbert was also interested in the idea of the superhero mystique and messiahs. He believed that feudalism was a natural condition humans fell into, where some led and others gave up the responsibility of making decisions and just followed orders. He found that desert environments have historically given birth to several major religions with messianic impulses. He decided to join his interests together so he could play religious and ecological ideas against each other. In addition, he was influenced by the story of T. E. Lawrence and the \"messianic overtones\" in Lawrence's involvement in the Arab Revolt during World War I. In an early version of Dune, the hero was actually very similar to Lawrence of Arabia, but Herbert decided the plot was too straightforward and added more layers to his story.Herbert drew heavy inspiration also from Lesley Blanch's The Sabres of Paradise (1960), a narrative history recounting a mid-19th century conflict in the Caucasus between rugged Islamized caucasian tribes and the expansive Russian Empire. Language used on both sides of that conflict become terms in Herbert's world\u2014chakobsa, a Caucasian hunting language, becomes a battle language of humans spread across the galaxy; kanly, a word for blood feud in the 19th century Caucasus, represents a feud between Dune's noble Houses; sietch and tabir are both words for camp borrowed from Ukrainian Cossacks (of the Pontic\u2013Caspian steppe).Herbert also borrowed some lines which Blanch stated were Caucasian proverbs. \"To kill with the point lacked artistry\", used by Blanch to describe the Caucasus peoples' love of swordsmanship, becomes in Dune \"Killing with the tip lacks artistry\", a piece of advice given to a young Paul during his training. \"Polish comes from the city, wisdom from the hills\", a Caucasian aphorism, turns into a desert expression: \"Polish comes from the cities, wisdom from the desert\".\n\nAnother significant source of inspiration for Dune was Herbert's experiences with psilocybin and his hobby of cultivating mushrooms, according to mycologist Paul Stamets's account of meeting Herbert in the 1980s:Frank went on to tell me that much of the premise of Dune\u2014the magic spice (spores) that allowed the bending of space (tripping), the giant sand worms (maggots digesting mushrooms), the eyes of the Fremen (the cerulean blue of Psilocybe mushrooms), the mysticism of the female spiritual warriors, the Bene Gesserits (influenced by the tales of Maria Sabina and the sacred mushroom cults of Mexico)\u2014came from his perception of the fungal life cycle, and his imagination was stimulated through his experiences with the use of magic mushrooms.Herbert spent the next five years researching, writing, and revising. He published a three-part serial Dune World in the monthly Analog, from December 1963 to February 1964. The serial was accompanied by several illustrations that were not published again. After an interval of a year, he published the much slower-paced five-part The Prophet of Dune in the January\u2013May 1965 issues. 
The first serial became \"Book 1: Dune\" in the final published Dune novel, and the second serial was divided into \"Book Two: Muad'dib\" and \"Book Three: The Prophet\". The serialized version was expanded, reworked, and submitted to more than twenty publishers, each of whom rejected it. The novel, Dune, was finally accepted and published in August 1965 by Chilton Books, a printing house better known for publishing auto repair manuals. Sterling Lanier, an editor at Chilton, had seen Herbert's manuscript and had urged his company to take a risk in publishing the book. However, the first printing, priced at $5.95 (equivalent to $55.25 in 2022), did not sell well and was poorly received by critics as being atypical of science fiction at the time. Chilton considered the publication of Dune a write-off and Lanier was fired. Over the course of time, the book gained critical acclaim, and its popularity spread by word-of-mouth to allow Herbert to start working full time on developing the sequels to Dune, elements of which were already written alongside Dune.At first Herbert considered using Mars as setting for his novel, but eventually decided to use a fictional planet instead. His son Brian said that \"Readers would have too many preconceived ideas about that planet, due to the number of stories that had been written about it.\"Herbert dedicated his work \"to the people whose labors go beyond ideas into the realm of 'real materials'\u2014to the dry-land ecologists, wherever they may be, in whatever time they work, this effort at prediction is dedicated in humility and admiration.\"\n\n\n== Plot ==\nDuke Leto Atreides of House Atreides, ruler of the ocean planet Caladan, is assigned by the Padishah Emperor Shaddam IV to serve as fief ruler of the planet Arrakis. Although Arrakis is a harsh and inhospitable desert planet, it is of enormous importance because it is the only planetary source of melange, or the \"spice\", a unique and incredibly valuable substance that extends human youth, vitality and lifespan. It is also through the consumption of spice that Spacing Guild Navigators are able to effect safe interstellar travel. Shaddam, jealous of Duke Leto Atreides's rising popularity in the Landsraad, sees House Atreides as a potential future rival and threat, so conspires with House Harkonnen, the former stewards of Arrakis and the longstanding enemies of House Atreides, to destroy Leto and his family after their arrival. Leto is aware his assignment is a trap of some kind, but is compelled to obey the Emperor's orders anyway.\nLeto's concubine Lady Jessica is an acolyte of the Bene Gesserit, an exclusively female group that pursues mysterious political aims and wields seemingly superhuman physical and mental abilities, such as the ability to control their bodies down to the cellular level, and also decide the sex of their children. Though Jessica was instructed by the Bene Gesserit to bear a daughter as part of their breeding program, out of love for Leto she bore a son, Paul. From a young age, Paul has been trained in warfare by Leto's aides, the elite soldiers Duncan Idaho and Gurney Halleck. Thufir Hawat, the Duke's Mentat (human computers, able to store vast amounts of data and perform advanced calculations on demand), has instructed Paul in the ways of political intrigue. Jessica has also trained her son in Bene Gesserit disciplines.\nPaul's prophetic dreams interest Jessica's superior, the Reverend Mother Gaius Helen Mohiam, who subjects Paul to the deadly gom jabbar test. 
Holding a poisonous needle to his neck ready to strike should he be unable to resist the impulse to withdraw his hand from the nerve induction box, she tests Paul's self-control to overcome the extreme psychological pain he is being subjected to through the box.\nLeto, Jessica, and Paul travel with their household to occupy Arrakeen, the capital on Arrakis formerly held by House Harkonnen. Leto learns of the dangers involved in harvesting the spice, which is protected by giant sandworms, and seeks to negotiate with the planet's native Fremen people, seeing them as a valuable ally rather than foes. Soon after the Atreides's arrival, Harkonnen forces attack, joined by the Emperor's ferocious Sardaukar troops in disguise. Leto is betrayed by his personal physician, the Suk doctor Wellington Yueh, who delivers a drugged Leto to the Baron Vladimir Harkonnen and his twisted Mentat, Piter De Vries. Yueh, however, arranges for Jessica and Paul to escape into the desert, where they are presumed dead by the Harkonnens. Yueh replaces one of Leto's teeth with a poison gas capsule, hoping Leto can kill the Baron during their encounter. The Baron narrowly avoids the gas due to his shield, which kills Leto, De Vries, and the others in the room. The Baron forces Hawat to take over De Vries's position by dosing him with a long-lasting, fatal poison and threatening to withhold the regular antidote doses unless he obeys. While he follows the Baron's orders, Hawat works secretly to undermine the Harkonnens.\nHaving fled into the desert, Paul is exposed to high concentrations of spice and has visions through which he realizes he has significant powers (as a result of the Bene Gesserit breeding scheme). He foresees potential futures in which he lives among the planet's native Fremen before leading them on a Holy Jihad across the known universe.\nIt is revealed Jessica is the daughter of Baron Harkonnen, a secret kept from her by the Bene Gesserit. After being captured by Fremen, Paul and Jessica are accepted into the Fremen community of Sietch Tabr, and teach the Fremen the Bene Gesserit fighting technique known as the \"weirding way\". Paul proves his manhood by killing a Fremen named Jamis in a ritualistic crysknife fight and chooses the Fremen name Muad'Dib, while Jessica opts to undergo a ritual to become a Reverend Mother by drinking the poisonous Water of Life. Pregnant with Leto's daughter, she inadvertently causes the unborn child, Alia, to become infused with the same powers in the womb. Paul takes a Fremen lover, Chani, and has a son with her, Leto II.\nTwo years pass and Paul's powerful prescience manifests, which confirms for the Fremen that he is their prophesied messiah, a legend planted by the Bene Gesserit's Missionaria Protectiva. Paul embraces his father's belief that the Fremen could be a powerful fighting force to take back Arrakis, but also sees that if he does not control them, their jihad could consume the entire universe. Word of the new Fremen leader reaches both Baron Harkonnen and the Emperor as spice production falls due to their increasingly destructive raids. The Baron encourages his brutish nephew Glossu Rabban to rule with an iron fist, hoping the contrast with his shrewder nephew Feyd-Rautha will make the latter popular among the people of Arrakis when he eventually replaces Rabban. The Emperor, suspecting the Baron of trying to create troops more powerful than the Sardaukar to seize power, sends spies to monitor activity on Arrakis. 
Hawat uses the opportunity to sow seeds of doubt in the Baron about the Emperor's true plans, putting further strain on their alliance.\nGurney, having survived the Harkonnen coup becomes a smuggler, reuniting with Paul and Jessica after a Fremen raid on his harvester. Believing Jessica to be the traitor, Gurney threatens to kill her, but is stopped by Paul. Paul did not foresee Gurney's attack, and concludes he must increase his prescience by drinking the Water of Life, which is traditionally fatal to males. Paul falls into unconsciousness for three weeks after drinking the poison, but when he wakes, he has clairvoyance across time and space: he is the Kwisatz Haderach, the ultimate goal of the Bene Gesserit breeding program.\nPaul senses the Emperor and Baron are amassing fleets around Arrakis to quell the Fremen rebellion, and prepares the Fremen for a major offensive against the Harkonnen troops. The Emperor arrives with the Baron on Arrakis. The Emperor's troops seize a Fremen outpost, killing many including young Leto II, while Alia is captured and taken to the Emperor. Under cover of an electric storm, which shorts out the Emperor's troops' defensive shields, Paul and the Fremen, riding giant sandworms, assault the capital while Alia assassinates the Baron and escapes. The Fremen quickly defeat both the Harkonnen and Sardaukar troops.\nPaul faces the Emperor, threatening to destroy spice production forever unless Shaddam abdicates the throne. Feyd-Rautha attempts to stop Paul by challenging him to a ritualistic knife fight, during which he attempts to cheat and kill Paul with a poison spur in his belt. Paul gains the upper hand and kills him. The Emperor reluctantly cedes the throne to Paul and promises his daughter Princess Irulan's hand in marriage. As Paul takes control of the Empire, he realizes that while he has achieved his goal, he is no longer able to stop the Fremen jihad, as their belief in him is too powerful to restrain.\n\n\n== Characters ==\nHouse AtreidesPaul Atreides, the Duke's son, and main character of the novel\nDuke Leto Atreides, head of House Atreides\nLady Jessica, Bene Gesserit and concubine of the Duke, mother of Paul and Alia\nAlia Atreides, Paul's younger sister\nThufir Hawat, Mentat and Master of Assassins to House Atreides\nGurney Halleck, staunchly loyal troubadour warrior of the Atreides\nDuncan Idaho, Swordmaster for House Atreides, graduate of the Ginaz School\nWellington Yueh, Suk doctor for the Atreides who is secretly working for House HarkonnenHouse HarkonnenBaron Vladimir Harkonnen, head of House Harkonnen\nPiter De Vries, twisted Mentat\nFeyd-Rautha, nephew and heir-presumptive of the Baron\nGlossu \"Beast\" Rabban, also called Rabban Harkonnen, older nephew of the Baron\nIakin Nefud, Captain of the GuardHouse CorrinoShaddam IV, Padishah Emperor of the Known Universe (the Imperium)\nPrincess Irulan, Shaddam's eldest daughter and heir, also a historian\nCount Fenring, the Emperor's closest friend, advisor, and \"errand boy\"Bene GesseritReverend Mother Gaius Helen Mohiam, Proctor Superior of the Bene Gesserit school and the Emperor's Truthsayer\nLady Margot Fenring, Bene Gesserit wife of Count FenringFremenThe Fremen, native inhabitants of Arrakis\nStilgar, Fremen leader of Sietch Tabr\nChani, Paul's Fremen concubine and a Sayyadina (female acolyte) of Sietch Tabr\nDr. 
Liet-Kynes, the Imperial Planetologist on Arrakis and father of Chani, as well as a revered figure among the Fremen\nThe Shadout Mapes, head housekeeper of imperial residence on Arrakis\nJamis, Fremen killed by Paul in ritual duel\nHarah, wife of Jamis and later servant to Paul who helps raise Alia among the Fremen\nReverend Mother Ramallo, religious leader of Sietch TabrSmugglersEsmar Tuek, a powerful smuggler and the father of Staban Tuek\nStaban Tuek, the son of Esmar Tuek and a powerful smuggler who befriends and takes in Gurney Halleck and his surviving men after the attack on the Atreides\n\n\n== Themes and influences ==\nThe Dune series is a landmark of science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. For example, a key pre-history event to the novel's present is the \"Butlerian Jihad\", in which all robots and computers were destroyed, eliminating these common elements to science fiction from the novel as to allow focus on humanity. Dune considers the way humans and their institutions might change over time. Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of \"the human condition and its moral dilemmas\", and said:\n\nA lot of people refer to Dune as science fiction. I never do. I consider it an epic adventure in the classic storytelling tradition, a story of myth and legend not unlike the Morte d'Arthur or any messiah story. It just happens to be set in the future ... The story is actually more relevant today than when Herbert wrote it. In the 1960s, there were just these two colossal superpowers duking it out. Today we're living in a more feudal, corporatized world more akin to Herbert's universe of separate families, power centers and business interests, all interrelated and kept together by the one commodity necessary to all.\nBut Dune has also been called a mix of soft and hard science fiction since \"the attention to ecology is hard, the anthropology and the psychic abilities are soft.\" Hard elements include the ecology of Arrakis, suspensor technology, weapon systems, and ornithopters, while soft elements include issues relating to religion, physical and mental training, cultures, politics, and psychology.Herbert said Paul's messiah figure was inspired by the Arthurian legend, and that the scarcity of water on Arrakis was a metaphor for oil, as well as air and water itself, and for the shortages of resources caused by overpopulation. Novelist Brian Herbert, his son and biographer, wrote:\n\nDune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange, the geriatric spice that represents, among other things, the finite resource of oil. The planet Arrakis features immense, ferocious worms that are like dragons of lore, with \"great teeth\" and a \"bellows breath of cinnamon.\" This resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs, at the edge of the sea. The desert of Frank Herbert's classic novel is a vast ocean of sand, with giant worms diving into the depths, the mysterious and unrevealed domain of Shai-hulud. Dune tops are like the crests of waves, and there are powerful sandstorms out there, creating extreme danger. 
On Arrakis, life is said to emanate from the Maker (Shai-hulud) in the desert-sea; similarly all life on Earth is believed to have evolved from our oceans. Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know \u2026 and the book characters of his imagination are not so different from people familiar to us.\nEach chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set tone and provide exposition, context and other details intended to enhance understanding of Herbert's complex fictional universe and themes. They act as foreshadowing and invite the reader to keep reading to close the gap between what the epigraph says and what is happening in the main narrative. The epigraphs also give the reader the feeling that the world they are reading about is epically distanced, since Irulan writes about an idealized image of Paul as if he had already passed into memory. Brian Herbert wrote: \"Dad told me that you could follow any of the novel's layers as you read it, and then start the book all over again, focusing on an entirely different layer. At the end of the book, he intentionally left loose ends and said he did this to send the readers spinning out of the story with bits and pieces of it still clinging to them, so that they would want to go back and read it again.\"\n\n\n=== Middle-Eastern and Islamic references ===\nDue to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' \"Islamic undertones\" and themes, a Middle-Eastern influence on Herbert's works has been noted repeatedly. In his descriptions of the Fremen culture and language, Herbert uses both authentic Arabic words and Arabic-sounding words. For example, one of the names for the sandworm, Shai-hulud, is derived from Arabic: \u0634\u064a\u0621 \u062e\u0644\u0648\u062f, romanized: \u0161ay\u02be \u1e2bul\u016bd, lit.\u2009'immortal thing' or Arabic: \u0634\u064a\u062e \u062e\u0644\u0648\u062f, romanized: \u0161ay\u1e2b \u1e2bul\u016bd, lit.\u2009'old man of eternity'. The title of the Fremen housekeeper, the Shadout Mapes, is borrowed from the Arabic: \u0634\u0627\u062f\u0648\u0641\u200e, romanized: \u0161\u0101d\u016bf, the Egyptian term for a device used to raise water. In particular, words related to the messianic religion of the Fremen, first implanted by the Bene Gesserit, are taken from Arabic, including Muad'Dib (from Arabic: \u0645\u0624\u062f\u0628, romanized: mu\u02beaddib, lit.\u2009'educator'), Usul (from Arabic: \u0623\u0635\u0648\u0644, romanized: \u02beu\u1e63\u016bl, lit.\u2009'fundamental principles'), Shari-a (from Arabic: \u0634\u0631\u064a\u0639\u0629, romanized: \u0161ar\u012b\u02bfa, lit.\u2009'sharia; path'), Shaitan (from Arabic: \u0634\u064a\u0637\u0627\u0646, romanized: \u0161ay\u1e6d\u0101n, lit.\u2009'Shaitan; devil; fiend', and jinn (from Arabic: \u062c\u0646, romanized: \u01e7inn, lit.\u2009'jinn; spirit; demon; mythical being'). It is likely Herbert relied on second-hand resources such as phrasebooks and desert adventure stories to find these Arabic words and phrases for the Fremen. 
They are meaningful and carefully chosen, and help create an \"imagined desert culture that resonates with exotic sounds, enigmas, and pseudo-Islamic references\" and has a distinctly Bedouin aesthetic.As a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides bears many similarities to the historical T. E. Lawrence. His 1962 biopic Lawrence of Arabia has also been identified as a potential influence. The Sabres of Paradise (1960) has also been identified as a potential influence upon Dune, with its depiction of Imam Shamil and the Islamic culture of the Caucasus inspiring some of the themes, characters, events and terminology of Dune.The environment of the desert planet Arrakis was primarily inspired by the environments of the Middle East. Similarly Arrakis as a bioregion is presented as a particular kind of political site. Herbert has made it resemble a desertified petrostate area. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted at by Herbert's reference to Khaldun's book Kit\u0101b al-\u02bfibar (\"The Book of Lessons\"). The fictionalized version of the \"Kitab al-ibar\" in Dune is a combination of a Fremen religious manual and a desert survival book.\n\n\n==== Additional language and historic influences ====\nIn addition to Arabic, Dune derives words and names from a variety of other languages, including Hebrew, Navajo, Latin, Dutch (\"Landsraad\"), Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, Sanskrit (\"prana bindu\", \"prajna\"), Russian, Turkish, Finnish, and Old English. Bene Gesserit is simply the Latin for \"It will have been well fought\", also carrying the sense of \"It will have been well managed\", which stands as a statement of the order's goal and as a pledge of faithfulness to that goal. Critics tend to miss the literal meaning of the phrase, some positing that the term is derived from the Latin meaning \"it will have been well borne\", which interpretation is not well supported by their doctrine in the story.Through the inspiration from The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. Frank Herbert stated that bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was \"aristocratic bureaucracy\" which he saw as analogous to the Soviet Union.\n\n\n=== Environmentalism and ecology ===\nDune has been called the \"first planetary ecology novel on a grand scale\". Herbert hoped it would be seen as an \"environmental awareness handbook\" and said the title was meant to \"echo the sound of 'doom'\". It was reviewed in the best selling countercultural Whole Earth Catalog in 1968 as a \"rich re-readable fantasy with clear portrayal of the fierce environment it takes to cohere a community\".After the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of Arrakis life, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. 
Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex\u2014almost living\u2014thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day.While the genre of climate fiction was popularized in the 2010s in response to real global climate change, Dune as well as other early science fiction works from authors like J. G. Ballard (The Drowned World) and Kim Stanley Robinson (the Mars trilogy) have retroactively been considered pioneering examples of the genre.\n\n\n=== Declining empires ===\nThe Imperium in Dune contains features of various empires in Europe and the Near East, including the Roman Empire, Holy Roman Empire, and Ottoman Empire. Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. In \"The Articulation of Imperial Decadence and Decline in Epic Science Fiction\" (2007), DiTommaso outlines similarities between the two works by highlighting the excesses of the Emperor on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as \"parasites\". The Baron Harkonnen is similarly corrupt and materially indulgent. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. The Emperor's Sardaukar fighters are little match for the Fremen of Dune not only because of the Sardaukar's overconfidence and the fact that Jessica and Paul have trained the Fremen in their battle tactics, but because of the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others.The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Frank Herbert as depicting \"war as a collective orgasm\" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare), themes that would reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army.\n\n\n=== Gender dynamics ===\nGender dynamics are complex in Dune. Within the Fremen sietch communities, women have almost full equality. They carry weapons and travel in raiding parties with men, fighting when necessary alongside the men. They can take positions of leadership as a Sayyadina or as a Reverend Mother (if she can survive the ritual of ingesting the Water of Life.) Both of these sietch religious leaders are routinely consulted by the all-male Council and can have a decisive voice in all matters of sietch life, security and internal politics. They are also protected by the entire community. Due to the high mortality rate among their men, women outnumber men in most sietches. 
Polygamy is common, and sexual relationships are voluntary and consensual; as Stilgar says to Jessica, \"women among us are not taken against their will.\" \nIn contrast, the Imperial aristocracy leaves young women of noble birth very little agency. Frequently trained by the Bene Gesserit, they are raised to eventually marry other aristocrats. Marriages between Major and Minor Houses are political tools to forge alliances or heal old feuds; women are given very little say in the matter. Many such marriages are quietly maneuvered by the Bene Gesserit to produce offspring with some genetic characteristics needed by the sisterhood's human-breeding program. In addition, such highly placed sisters are in a position to subtly influence their husbands' actions in ways that could move the politics of the Imperium toward Bene Gesserit goals. \nThe gom jabbar test of humanity is administered by the female Bene Gesserit order but rarely to males. The Bene Gesserit have seemingly mastered the unconscious and can play on the unconscious weaknesses of others using the Voice, yet their breeding program seeks a male Kwisatz Haderach. Their plan is to produce a male who can \"possess complete racial memory, both male and female,\" and look into the black hole in the collective unconscious that they fear. A central theme of the book is the connection, in Jessica's son, of this female aspect with his male aspect. This aligns with concepts in Jungian psychology, which features conscious/unconscious and taking/giving roles associated with males and females, as well as the idea of the collective unconscious. Paul's approach to power consistently requires his upbringing under the matriarchal Bene Gesserit, who operate as a long-dominating shadow government behind all of the great houses and their marriages or divisions. He is trained by Jessica in the Bene Gesserit Way, which includes prana-bindu training in nerve and muscle control and precise perception. Paul also receives Mentat training, thus helping prepare him to be a type of androgynous Kwisatz Haderach, a male Reverend Mother. In a Bene Gesserit test early in the book, it is implied that people are generally \"inhuman\" in that they irrationally place desire over self-interest and reason. This reflects Herbert's philosophy that humans are not created equal, while equal justice and equal opportunity are higher ideals than mental, physical, or moral equality.\n\n\n=== Heroism ===\nI am showing you the superhero syndrome and your own participation in it.\nThroughout Paul's rise to superhuman status, he follows a plotline common to many stories describing the birth of a hero. He has unfortunate circumstances forced onto him. After a long period of hardship and exile, he confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. Eventually, Paul Atreides gains a level of omniscience which allows him to take over the planet and the galaxy, and causes the Fremen of Arrakis to worship him like a god. Author Frank Herbert said in 1979, \"The bottom line of the Dune trilogy is: beware of heroes. 
Much better [to] rely on your own judgment, and your own mistakes.\" He wrote in 1985, \"Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question.\" Juan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of \"painful and slow personal progress.\" And unlike other superheroes of the 1960s\u2014who are the exception among ordinary people in their respective worlds\u2014Herbert's characters grow their powers through \"the application of mystical philosophies and techniques.\" For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Ginaz swordsmen and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators).\n\n\n=== Zen and religion ===\n\nEarly in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who \"gave a crucial boost to his thinking\". Zen teachings ultimately had \"a profound and continuing influence on [Herbert's] work\". Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism. The Fremen are referred to as Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited. In \"Dune Genesis\", Frank Herbert wrote:\n\nWhat especially pleases me is to see the interwoven themes, the fugue-like relationships of images that exactly replay the way Dune took shape. As in an Escher lithograph, I involved myself with recurrent themes that turn into paradox. The central paradox concerns the human vision of time. What about Paul's gift of prescience\u2014the Presbyterian fixation? For the Delphic Oracle to perform, it must tangle itself in a web of predestination. Yet predestination negates surprises and, in fact, sets up a mathematically enclosed universe whose limits are always inconsistent, always encountering the unprovable. It's like a koan, a Zen mind breaker. It's like the Cretan Epimenides saying, \"All Cretans are liars.\"\nBrian Herbert called the Dune universe \"a spiritual melting pot\", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which \"religious beliefs have combined into interesting forms\" represents the author's solution to eliminating arguments between religions, each of which claimed to have \"the one and only revelation.\"\n\n\n=== Asimov's Foundation ===\nTim O'Reilly suggests that Herbert also wrote Dune as a counterpoint to Isaac Asimov's Foundation series. In his monograph on Frank Herbert, O'Reilly wrote that \"Dune is clearly a commentary on the Foundation trilogy. Herbert has taken a look at the same imaginative situation that provoked Asimov's classic\u2014the decay of a galactic empire\u2014and restated it in a way that draws on different assumptions and suggests radically different conclusions. 
The twist he has introduced into Dune is that the Mule, not the Foundation, is his hero.\" According to O'Reilly, Herbert bases the Bene Gesserit on the scientific shamans of the Foundation, though they use biological rather than statistical science. In contrast to the Foundation series and its praise of science and rationality, Dune proposes that the unconscious and the unexpected are actually what humanity needs. Both Herbert and Asimov explore the implications of prescience (i.e., visions of the future) both psychologically and socially. The Foundation series deploys a broadly determinist approach to prescient vision rooted in mathematical reasoning on a macroscopic social level. Dune, by contrast, invents a biologically rooted power of prescience that becomes determinist when the user actively relies on it to navigate past an undefined threshold of detail. Herbert's eugenically produced and spice-enhanced prescience is also personalized to individual actors whose roles in later books constrain each other's visions, rendering the future more or less mutable as time progresses. In what might be a comment on Foundation, Herbert's most powerfully prescient being in God Emperor of Dune laments the boredom engendered by prescience, and values surprises, especially regarding one's death, as a psychological necessity. However, both works contain a similar theme of the restoration of civilization and seem to make the fundamental assumption that \"political maneuvering, the need to control material resources, and friendship or mating bonds will be fundamentally the same in the future as they are now.\"\n\n\n== Critical reception ==\nDune tied with Roger Zelazny's This Immortal for the Hugo Award in 1966 and won the inaugural Nebula Award for Best Novel. Reviews of the novel have been largely positive, and Dune is considered by some critics to be the best science fiction book ever written. The novel has been translated into dozens of languages, and has sold almost 20 million copies. Dune has been regularly cited as one of the world's best-selling science fiction novels. Arthur C. Clarke described Dune as \"unique\" and wrote, \"I know nothing comparable to it except The Lord of the Rings.\" Robert A. Heinlein described the novel as \"powerful, convincing, and most ingenious.\" It was described as \"one of the monuments of modern science fiction\" by the Chicago Tribune, and P. Schuyler Miller called Dune \"one of the landmarks of modern science fiction ... an amazing feat of creation.\" The Washington Post described it as \"a portrayal of an alien society more complete and deeply detailed than any other author in the field has managed ... a story absorbing equally for its action and philosophical vistas ... An astonishing science fiction phenomenon.\" Algis Budrys praised Dune for the vividness of its imagined setting, saying \"The time lives. It breathes, it speaks, and Herbert has smelt it in his nostrils\". He found that the novel, however, \"turns flat and tails off at the end. ... [T]ruly effective villains simply simper and melt; fierce men and cunning statesmen and seeresses all bend before this new Messiah\". Budrys faulted in particular Herbert's decision to kill Paul's infant son offstage, with no apparent emotional impact, saying \"you cannot be so busy saving a world that you cannot hear an infant shriek\". 
After criticizing unrealistic science fiction, Carl Sagan in 1978 listed Dune as among stories \"that are so tautly constructed, so rich in the accommodating details of an unfamiliar society that they sweep me along before I have even a chance to be critical\". The Louisville Times wrote, \"Herbert's creation of this universe, with its intricate development and analysis of ecology, religion, politics, and philosophy, remains one of the supreme and seminal achievements in science fiction.\" Writing for The New Yorker, Jon Michaud praised Herbert's \"clever authorial decision\" to exclude robots and computers (\"two staples of the genre\") from his fictional universe, but suggested that this may be one explanation why Dune lacks \"true fandom among science-fiction fans\" to the extent that it \"has not penetrated popular culture in the way that The Lord of the Rings and Star Wars have\". Tamara I. Hladik wrote that the story \"crafts a universe where lesser novels promulgate excuses for sequels. All its rich elements are in balance and plausible\u2014not the patchwork confederacy of made-up languages, contrived customs, and meaningless histories that are the hallmark of so many other, lesser novels.\" On November 5, 2019, BBC News included Dune on its list of the 100 most influential novels. J. R. R. Tolkien refused to review Dune, on the grounds that he disliked it \"with some intensity\" and thus felt it would be unfair to Herbert, another working author, if he gave an honest review of the book.\n\n\n== First edition prints and manuscripts ==\nThe first edition of Dune is one of the most valuable in science fiction book collecting. Copies have been sold for more than $10,000 at auction. The Chilton first edition of the novel is 9 1\u20444 inches (235 mm) tall, with bluish green boards and a price of $5.95 on the dust jacket, and notes Toronto as the Canadian publisher on the copyright page. Up to this point, Chilton had been publishing only automobile repair manuals. California State University, Fullerton's Pollak Library has several of Herbert's draft manuscripts of Dune and other works, with the author's notes, in their Frank Herbert Archives.\n\n\n== Sequels and prequels ==\n\nAfter Dune proved to be a critical and financial success for Herbert, he was able to devote himself full time to writing additional novels in the series. He had already drafted parts of the second and third while writing Dune. The series included Dune Messiah (1969), Children of Dune (1976), God Emperor of Dune (1981), Heretics of Dune (1984), and Chapterhouse: Dune (1985), each sequentially continuing the narrative of Dune. Herbert died on February 11, 1986. Herbert's son, Brian Herbert, had found several thousand pages of notes left by his father that outlined ideas for other narratives related to Dune. Brian Herbert enlisted author Kevin J. Anderson to help build out prequel novels to the events of Dune. Brian Herbert and Anderson's Dune prequels began publication in 1999, and have led to additional stories that take place between those of Frank Herbert's books. 
The notes for what would have been Dune 7 also enabled them to publish Hunters of Dune (2006) and Sandworms of Dune (2007), sequels to Frank Herbert's final novel Chapterhouse: Dune, which complete the chronological progression of his original series, and wrap up storylines that began in Heretics of Dune.\n\n\n== Adaptations ==\n\nDune has been considered an \"unfilmable\" and \"uncontainable\" work to adapt from novel to film or other visual media. As Wired described it, \"It has four appendices and a glossary of its own gibberish, and its action takes place on two planets, one of which is a desert overrun by worms the size of airport runways. Lots of important people die or try to kill each other, and they're all tethered to about eight entangled subplots.\" There have been several attempts to achieve this difficult conversion with varying degrees of success.\n\n\n=== Early stalled attempts ===\nIn 1971, the production company Apjac International (APJ) (headed by Arthur P. Jacobs) optioned the rights to film Dune. As Jacobs was busy with other projects, such as the sequel to Planet of the Apes, Dune was delayed for another year. Jacobs' first choice for director was David Lean, but Lean turned down the offer. Charles Jarrott was also considered to direct. Work was also under way on a script while the hunt for a director continued. The first treatment had been handled by Robert Greenhut, the producer who had lobbied Jacobs to make the movie in the first place, but subsequently Rospo Pallenberg was approached to write the script, with shooting scheduled to begin in 1974. However, Jacobs died in 1973.\nIn December 1974, a French consortium led by Jean-Paul Gibon purchased the film rights from APJ, with Alejandro Jodorowsky set to direct. In 1975, Jodorowsky planned to film the story as a 14-hour feature, set to star his own son Brontis Jodorowsky in the lead role of Paul Atreides, Salvador Dal\u00ed as Shaddam IV, Padishah Emperor, Amanda Lear as Princess Irulan, Orson Welles as Baron Vladimir Harkonnen, Gloria Swanson as Reverend Mother Gaius Helen Mohiam, David Carradine as Duke Leto Atreides, Geraldine Chaplin as Lady Jessica, Alain Delon as Duncan Idaho, Herv\u00e9 Villechaize as Gurney Halleck, Udo Kier as Piter De Vries, and Mick Jagger as Feyd-Rautha. It was at first proposed to score the film with original music by Karlheinz Stockhausen, Henry Cow, and Magma; later on, the soundtrack was to be provided by Pink Floyd. Jodorowsky set up a pre-production unit in Paris consisting of Chris Foss, a British artist who designed covers for science fiction periodicals, Jean Giraud (Moebius), a French illustrator who created and also wrote and drew for Metal Hurlant magazine, and H. R. Giger. Moebius began designing creatures and characters for the film, while Foss was brought in to design the film's space ships and hardware. Giger began designing the Harkonnen Castle based on Moebius's storyboards. Dan O'Bannon was to head the special effects department. Dal\u00ed was cast as the Emperor. Dal\u00ed later demanded to be paid $100,000 per hour; Jodorowsky agreed, but tailored Dal\u00ed's part to be filmed in one hour, drafting plans for other scenes of the emperor to use a mechanical mannequin as substitute for Dal\u00ed. According to Giger, Dal\u00ed was \"later invited to leave the film because of his pro-Franco statements\". Just as the storyboards, designs, and script were finished, the financial backing dried up. 
Frank Herbert traveled to Europe in 1976 to find that $2 million of the $9.5 million budget had already been spent in pre-production, and that Jodorowsky's script would result in a 14-hour movie (\"It was the size of a phone book\", Herbert later recalled). Jodorowsky took creative liberties with the source material, but Herbert said that he and Jodorowsky had an amicable relationship. Jodorowsky said in 1985 that he found the Dune story mythical and had intended to recreate it rather than adapt the novel; though he had an \"enthusiastic admiration\" for Herbert, Jodorowsky said he had done everything possible to distance the author and his input from the project. Although Jodorowsky was embittered by the experience, he said the Dune project changed his life, and some of the ideas were used in his and Moebius's The Incal. O'Bannon entered a psychiatric hospital after the production failed, then worked on 13 scripts, the last of which became Alien. A 2013 documentary, Jodorowsky's Dune, was made about Jodorowsky's failed attempt at an adaptation.\nIn 1976, Dino De Laurentiis acquired the rights from Gibon's consortium. De Laurentiis commissioned Herbert to write a new screenplay in 1978; the script Herbert turned in was 175 pages long, the equivalent of nearly three hours of screen time. De Laurentiis then hired director Ridley Scott in 1979, with Rudy Wurlitzer writing the screenplay and H. R. Giger retained from the Jodorowsky production; Scott and Giger had also just worked together on the film Alien, after O'Bannon recommended the artist. Scott intended to split the novel into two movies. He worked on three drafts of the script, using The Battle of Algiers as a point of reference, before moving on to direct another science fiction film, Blade Runner (1982). As he recalls, the pre-production process was slow, and finishing the project would have been even more time-intensive:\n\nBut after seven months I dropped out of Dune, by then Rudy Wurlitzer had come up with a first-draft script which I felt was a decent distillation of Frank Herbert's. But I also realised Dune was going to take a lot more work\u2014at least two and a half years' worth. And I didn't have the heart to attack that because my older brother Frank unexpectedly died of cancer while I was prepping the De Laurentiis picture. Frankly, that freaked me out. So I went to Dino and told him the Dune script was his.\n\u2014From Ridley Scott: The Making of his Movies by Paul M. Sammon\n\n\n=== 1984 film by David Lynch ===\n\nIn 1981, the nine-year film rights were set to expire. De Laurentiis re-negotiated the rights from the author, adding to them the rights to the Dune sequels (written and unwritten). After seeing The Elephant Man, De Laurentiis' daughter Raffaella decided that David Lynch should direct the movie. Around that time, Lynch received several other directing offers, including Return of the Jedi. He agreed to direct Dune and write the screenplay even though he had not read the book, was not familiar with the story, and had never been interested in science fiction. Lynch worked on the script for six months with Eric Bergren and Christopher De Vore. The team produced two drafts of the script before splitting over creative differences. Lynch would subsequently work on five more drafts. Production was troubled by problems at the Mexican studio, which hampered the film's timeline. 
Lynch ended up producing a nearly three-hour long film, but at the demand of Universal Pictures, the film's distributor, he cut it back to about two hours, hastily filming additional scenes to make up for some of the cut footage. This first film of Dune, directed by Lynch, was released in 1984, nearly 20 years after the book's publication. Though Herbert said the book's depth and symbolism seemed to intimidate many filmmakers, he was pleased with the film, saying that \"They've got it. It begins as Dune does. And I hear my dialogue all the way through. There are some interpretations and liberties, but you're gonna come out knowing you've seen Dune.\" Reviews of the film were negative; critics said that it was incomprehensible to those unfamiliar with the book and that fans would be disappointed by the way it strayed from the book's plot. Upon release for television and other forms of home media, Universal opted to reintroduce much of the footage that Lynch had cut, creating an over-three-hour-long version with extensive monologue exposition. Lynch was extremely displeased with this move; he demanded that Universal credit these cuts to the pseudonym \"Alan Smithee\", and he has generally distanced himself from the film since.\n\n\n=== 2000 miniseries by John Harrison ===\n\nIn 2000, John Harrison adapted the novel into Frank Herbert's Dune, a miniseries which premiered on the American Sci-Fi Channel. As of 2004, the miniseries was one of the three highest-rated programs broadcast on the Sci-Fi Channel.\n\n\n=== Further film attempts ===\nIn 2008, Paramount Pictures announced that they would produce a new film based on the book, with Peter Berg attached to direct. Producer Kevin Misher, who spent a year securing the rights from the Herbert estate, was to be joined by Richard Rubinstein and John Harrison (of both Sci-Fi Channel miniseries) as well as Sarah Aubrey and Mike Messina. The producers stated that they were going for a \"faithful adaptation\" of the novel, and considered \"its theme of finite ecological resources particularly timely.\" Science fiction author Kevin J. Anderson and Frank Herbert's son Brian Herbert, who had together written multiple Dune sequels and prequels since 1999, were attached to the project as technical advisors. In October 2009, Berg dropped out of the project, later saying that it \"for a variety of reasons wasn't the right thing\" for him. Subsequently, with a script draft by Joshua Zetumer, Paramount reportedly sought a new director who could do the film for under $175 million. In 2010, Pierre Morel was signed on to direct, with screenwriter Chase Palmer incorporating Morel's vision of the project into Zetumer's original draft. By November 2010, Morel left the project. Paramount finally dropped plans for a remake in March 2011.\n\n\n=== Films by Denis Villeneuve ===\n\nIn November 2016, Legendary Entertainment acquired the film and TV rights for Dune. Variety reported in December 2016 that Denis Villeneuve was in negotiations to direct the project, which was confirmed in February 2017. In April 2017, Legendary announced that Eric Roth would write the screenplay. Villeneuve explained in March 2018 that his adaptation would be split into two films, with the first installment scheduled to begin production in 2019. 
Casting includes Timoth\u00e9e Chalamet as Paul Atreides, Dave Bautista as Rabban, Stellan Skarsg\u00e5rd as Baron Harkonnen, Rebecca Ferguson as Lady Jessica, Charlotte Rampling as Reverend Mother Mohiam, Oscar Isaac as Duke Leto Atreides, Zendaya as Chani, Javier Bardem as Stilgar, Josh Brolin as Gurney Halleck, Jason Momoa as Duncan Idaho, David Dastmalchian as Piter De Vries, Chang Chen as Dr. Yueh, and Stephen Henderson as Thufir Hawat. Warner Bros. Pictures distributed the film, which had its initial premiere on September 3, 2021, at the Venice Film Festival, and wide release in both theaters and streaming on HBO Max on October 21, 2021, as part of Warner Bros.'s approach to handling the impact of the COVID-19 pandemic on the film industry. The film received \"generally favorable reviews\" on Metacritic. It went on to win multiple awards and was named one of the 10 best films of 2021 by both the National Board of Review and the American Film Institute in their annual top-10 lists. The film was nominated for ten Academy Awards, winning six, the most wins of the night for any film in contention. A sequel, Dune: Part Two, was scheduled for release on November 3, 2023, but was rescheduled to March 15, 2024, amid the 2023 SAG-AFTRA strike.\n\n\n=== Audiobooks ===\nIn 1993, Recorded Books Inc. released a 20-disc audiobook narrated by George Guidall. In 2007, Audio Renaissance released an audiobook narrated by Simon Vance with some parts performed by Scott Brick, Orlagh Cassidy, Euan Morton, and other performers.\n\n\n== Cultural influence ==\nDune has been widely influential, inspiring numerous novels, music, films, television, games, and comic books. It is considered one of the greatest and most influential science fiction novels of all time, with numerous modern science fiction works such as Star Wars owing their existence to Dune. Dune has also been referenced in numerous other works of popular culture, including Star Trek, Chronicles of Riddick, The Kingkiller Chronicle and Futurama. Dune was cited as a source of inspiration for Hayao Miyazaki's anime film Nausica\u00e4 of the Valley of the Wind (1984) for its post-apocalyptic world. Dune was parodied in 1984's National Lampoon's Doon by Ellis Weiner, which William F. Touponce called \"something of a tribute to Herbert's success on college campuses\", noting that \"the only other book to have been so honored is Tolkien's The Lord of the Rings,\" which was parodied by The Harvard Lampoon in 1969.\n\n\n=== Music ===\nIn 1978, French electronic musician Richard Pinhas released the nine-track Dune-inspired album Chronolyse, which includes the seven-part Variations sur le th\u00e8me des Bene Gesserit.\nIn 1979, German electronic music pioneer Klaus Schulze released an LP titled Dune featuring motifs and lyrics inspired by the novel.\nA similar musical project, Visions of Dune, was also released in 1979 by Zed (a pseudonym of French electronic musician Bernard Sjazner).\nHeavy metal band Iron Maiden wrote the song \"To Tame a Land\" based on the Dune story. It appears as the closing track to their 1983 album Piece of Mind. 
The original working title of the song was \"Dune\"; however, the band was denied permission to use it, with Frank Herbert's agents stating \"Frank Herbert doesn't like rock bands, particularly heavy rock bands, and especially bands like Iron Maiden\".\nDune inspired the German happy hardcore band Dune, who have released several albums with space travel-themed songs.\nThe progressive hardcore band Shai Hulud took their name from Dune.\n\"Traveller in Time\", from the 1991 Blind Guardian album Tales from the Twilight World, is based mostly on Paul Atreides' visions of future and past.\nThe title of the 1993 Fear Factory album Fear Is the Mindkiller is a quote from the \"litany against fear\".\nThe song \"Near Fantastica\", from the Matthew Good album Avalanche, makes reference to the \"litany against fear\", repeating \"can't feel fear, fear's the mind killer\" through a section of the song.\nIn the Fatboy Slim song \"Weapon of Choice\", the line \"If you walk without rhythm/You won't attract the worm\" is a near quotation from the sections of the novel in which Stilgar teaches Paul to ride sandworms.\nDune also inspired the 1999 album The 2nd Moon by the German death metal band Golem, which is a concept album about the series.\nDune influenced Thirty Seconds to Mars on their self-titled debut album.\nThe Youngblood Brass Band's song \"Is an Elegy\" on Center:Level:Roar references \"Muad'Dib\", \"Arrakis\" and other elements from the novel.\nThe debut album of Canadian musician Grimes, called Geidi Primes, is a concept album based on Dune.\nJapanese singer Kenshi Yonezu released a song titled \"Dune\", also known as \"Sand Planet\". The song was released in 2017 and was created using the voice synthesizer Hatsune Miku for the software's 10th anniversary.\n\"Fear is the Mind Killer\", a song released in 2018 by Zheani (an Australian rapper), uses a quote from Dune.\n\"Litany Against Fear\" is a spoken track on Zheani's 2018 album Eight, in which she recites an extract from Dune.\nSleep's 2018 album The Sciences features a song, \"Giza Butler\", that references several aspects of Dune.\nTool's 2019 album Fear Inoculum has a song entitled \"Litanie contre la peur (Litany against fear)\".\n\"Rare to Wake\", from Shannon Lay's album Geist (2019), is inspired by Dune.\nHeavy metal band Diamond Head based the song \"The Sleeper\" and its prelude, both from the album The Coffin Train, on the series.\n\n\n=== Games ===\n\nThere have been a number of games based on the book, starting with the strategy\u2013adventure game Dune (1992). The most important game adaptation is Dune II (1992), which established the conventions of modern real-time strategy games and is considered to be among the most influential video games of all time. The online game Lost Souls includes Dune-derived elements, including sandworms and melange\u2014addiction to which can produce psychic talents. The 2016 game Enter the Gungeon features the spice melange as a random item which gives the player progressively stronger abilities and penalties with repeated uses, mirroring the long-term effects melange has on users. Rick Priestley cites Dune as a major influence on his 1987 wargame, Warhammer 40,000. In 2023, Funcom announced Dune: Awakening, an upcoming massively multiplayer online game set in the universe of Dune.\n\n\n=== Space exploration ===\nThe Apollo 15 astronauts named a small crater on Earth's Moon after the novel during the 1971 mission, and the name was formally adopted by the International Astronomical Union in 1973. 
Since 2009, the names of planets from the Dune novels have been adopted for the real-world nomenclature of plains and other features on Saturn's moon Titan, like Arrakis Planitia.\n\n\n== See also ==\nSoft science fiction \u2013 Sub-genre of science fiction emphasizing \"soft\" sciences or human emotions\nHydraulic empire \u2013 Government by control of access to water\n\n\n== References ==\n\n\n== Further reading ==\nClute, John; Nicholls, Peter (1995). The Encyclopedia of Science Fiction. New York: St. Martin's Press. p. 1386. ISBN 978-0-312-13486-0.\nClute, John; Nicholls, Peter (1995). The Multimedia Encyclopedia of Science Fiction (CD-ROM). Danbury, CT: Grolier. ISBN 978-0-7172-3999-3.\nHuddleston, Tom. The Worlds of Dune: The Places and Cultures That Inspired Frank Herbert. Minneapolis: Quarto Publishing Group UK, 2023.\nJakubowski, Maxim; Edwards, Malcolm (1983). The Complete Book of Science Fiction and Fantasy Lists. St Albans, Herts, UK: Granada Publishing Ltd. p. 350. ISBN 978-0-586-05678-3.\nKennedy, Kara. Frank Herbert's Dune: A Critical Companion. Cham, Switzerland: Palgrave Macmillan, 2022.\nKennedy, Kara. Women's Agency in the Dune Universe: Tracing Women's Liberation through Science Fiction. Cham, Switzerland: Palgrave Macmillan, 2020.\nNardi, Dominic J. & N. Trevor Brierly, eds. Discovering Dune: Essays on Frank Herbert's Epic Saga. Jefferson, NC: McFarland & Co., 2022.\nNicholas, Jeffery, ed. Dune and Philosophy: Weirding Way of Mentat. Chicago: Open Court, 2011.\nNicholls, Peter (1979). The Encyclopedia of Science Fiction. St Albans, Herts, UK: Granada Publishing Ltd. p. 672. ISBN 978-0-586-05380-5.\nO\u2019Reilly, Timothy. Frank Herbert. New York: Frederick Ungar, 1981.\nPringle, David (1990). The Ultimate Guide to Science Fiction. London: Grafton Books Ltd. p. 407. ISBN 978-0-246-13635-0.\nTuck, Donald H. (1974). The Encyclopedia of Science Fiction and Fantasy. Chicago: Advent. p. 136. ISBN 978-0-911682-20-5.\nWilliams, Kevin C. The Wisdom of the Sand: Philosophy and Frank Herbert's Dune. New York: Hampton Press, 2013.\n\n\n== External links ==\n\nOfficial website for Dune and its sequels\nDune title listing at the Internet Speculative Fiction Database\nTurner, Paul (October 1973). \"Vertex Interviews Frank Herbert\" (Interview). Vol. 1, no. 4. Archived from the original on May 19, 2009.\nSpark Notes: Dune, detailed study guide\nDuneQuotes.com \u2013 Collection of quotes from the Dune series\nDune by Frank Herbert, reviewed by Ted Gioia (Conceptual Fiction)\n\"Frank Herbert Biography and Bibliography at LitWeb.net\". www.litweb.net. Archived from the original on April 2, 2009. Retrieved January 2, 2009.\nWorks of Frank Herbert at Curlie\nTimberg, Scott (April 18, 2010). \"Frank Herbert's Dune holds timely \u2013 and timeless \u2013 appeal\". Los Angeles Times. Archived from the original on December 3, 2013. Retrieved November 27, 2013.\nWalton, Jo (January 12, 2011). \"In league with the future: Frank Herbert's Dune (Review)\". Tor.com. Retrieved November 27, 2013.\nLeonard, Andrew (June 4, 2015). \"To Save California, Read Dune\". Nautilus. Archived from the original on November 4, 2017. 
Retrieved June 15, 2015.\nDune by Frank Herbert \u2013 Foreshadowing & Dedication at Fact Behind Fiction\nFrank Herbert by Tim O'Reilly\nDuneScholar.com \u2013 Collection of scholarly essays" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\neo4j-vector-memory\\README.md", "filetype": ".md", "content": "\n# neo4j-vector-memory\n\nThis template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store.\nAdditionally, it uses the graph capabilities of the Neo4j database to store and retrieve the dialogue history of a specific user's session.\nHaving the dialogue history stored as a graph not only allows for seamless conversational flows but also gives you the ability to analyze user behavior and text chunk retrieval through graph analytics.\n\n\n## Environment Setup\n\nYou need to define the following environment variables:\n\n```\nOPENAI_API_KEY=\nNEO4J_URI=\nNEO4J_USERNAME=\nNEO4J_PASSWORD=\n```\n\n## Populating with data\n\nIf you want to populate the DB with some example data, you can run `python ingest.py`.\nThe script processes and stores sections of the text from the file `dune.txt` into a Neo4j graph database.\nAdditionally, a vector index named `dune` is created for efficient querying of these embeddings.\n
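\nOnce the data is ingested, you can sanity-check the index from Python before serving the template. This is a minimal sketch (it assumes the environment variables above are set and that the `dune` index was created by `ingest.py`; the query text is just an example):\n\n```python\nfrom langchain_community.embeddings import OpenAIEmbeddings\nfrom langchain_community.vectorstores import Neo4jVector\n\n# Connect to the existing `dune` index instead of re-ingesting\nvector_index = Neo4jVector.from_existing_index(\n    OpenAIEmbeddings(),\n    index_name=\"dune\",\n)\nprint(vector_index.similarity_search(\"Who are the Fremen?\", k=2))\n```\n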
\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package neo4j-vector-memory\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add neo4j-vector-memory\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom neo4j_vector_memory import chain as neo4j_vector_memory_chain\n\nadd_routes(app, neo4j_vector_memory_chain, path=\"/neo4j-vector-memory\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/neo4j-vector-memory/playground](http://127.0.0.1:8000/neo4j-vector-memory/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/neo4j-vector-memory\")\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\nvidia-rag-canonical\\README.md", "filetype": ".md", "content": "\n# nvidia-rag-canonical\n\nThis template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).\n\n## Environment Setup\n\nYou should export your NVIDIA API Key as an environment variable.\nIf you do not have an NVIDIA API Key, you can create one by following these steps:\n1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.\n2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.\n3. Select the `API` option and click `Generate Key`.\n4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.\n\n```shell\nexport NVIDIA_API_KEY=...\n```\n\nFor instructions on hosting the Milvus Vector Store, refer to the section at the bottom.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo use the NVIDIA models, install the LangChain NVIDIA AI Endpoints package:\n```shell\npip install -U langchain_nvidia_aiplay\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package nvidia-rag-canonical\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add nvidia-rag-canonical\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom nvidia_rag_canonical import chain as nvidia_rag_canonical_chain\n\nadd_routes(app, nvidia_rag_canonical_chain, path=\"/nvidia-rag-canonical\")\n```\n\nIf you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:\n```python\nfrom nvidia_rag_canonical import ingest as nvidia_rag_ingest\n\nadd_routes(app, nvidia_rag_ingest, path=\"/nvidia-rag-ingest\")\n```\nNote that for files ingested by the ingestion API, the server will need to be restarted for the newly ingested files to be accessible by the retriever.\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you DO NOT already have a Milvus Vector Store you want to connect to, see the `Milvus Setup` section below before proceeding.\n\nIf you DO have a Milvus Vector Store you want to connect to, edit the connection details in `nvidia_rag_canonical/chain.py`.\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/nvidia-rag-canonical/playground](http://127.0.0.1:8000/nvidia-rag-canonical/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/nvidia-rag-canonical\")\n```\n\n\n## Milvus Setup\n\nUse this step if you need to create a Milvus Vector Store and ingest data.\nWe will first follow the standard Milvus setup instructions [here](https://milvus.io/docs/install_standalone-docker.md).\n\n1. Download the Docker Compose YAML file.\n ```shell\n wget https://github.com/milvus-io/milvus/releases/download/v2.3.3/milvus-standalone-docker-compose.yml -O docker-compose.yml\n ```\n2. Start the Milvus Vector Store container\n ```shell\n sudo docker compose up -d\n ```\n3. 
Install the PyMilvus package to interact with the Milvus container.\n ```shell\n pip install pymilvus\n ```\n4. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:\n\n ```shell\n python ingest.py\n ```\n\n Note that you can (and should!) change this to ingest data of your choice.\n
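\nAfter ingesting, a quick connectivity check can confirm that the store is reachable and that your collection was created. This is a small sketch, assuming the default `localhost:19530` address used by the Docker Compose setup above:\n\n```python\nfrom pymilvus import connections, utility\n\n# Connect to the standalone Milvus instance started via docker compose\nconnections.connect(host=\"localhost\", port=\"19530\")\nprint(utility.get_server_version())\nprint(utility.list_collections())  # the collection created by ingest.py should be listed\n```\n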
" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\openai-functions-agent\\README.md", "filetype": ".md", "content": "\n# openai-functions-agent\n\nThis template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take. \n\nThis example creates an agent that can optionally look up information on the internet using Tavily's search engine.\n\n## Environment Setup\n\nThe following environment variables need to be set:\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nSet the `TAVILY_API_KEY` environment variable to access Tavily.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package openai-functions-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add openai-functions-agent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom openai_functions_agent import agent_executor as openai_functions_agent_chain\n\nadd_routes(app, openai_functions_agent_chain, path=\"/openai-functions-agent\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/openai-functions-agent\")\n```" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\openai-functions-agent-gmail\\README.md", "filetype": ".md", "content": "# OpenAI Functions Agent - Gmail\n\nEver struggled to reach inbox zero?\n\nUsing this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search engine so it can search for relevant information about any topics or people in the email thread before writing, ensuring the drafts include all the relevant information needed to sound well-informed.\n\n![Animated GIF showing the interface of the Gmail Agent Playground with a cursor interacting with the input field.](./static/gmail-agent-playground.gif \"Gmail Agent Playground Interface\")\n\n## The details\n\nThis assistant uses OpenAI's [function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) support to reliably select and invoke the tools you've provided.\n\nThis template also imports directly from [langchain-core](https://pypi.org/project/langchain-core/) and [`langchain-community`](https://pypi.org/project/langchain-community/) where appropriate. We have restructured LangChain to let you select the specific integrations needed for your use case. While you can still import from `langchain` (we are making this transition backwards-compatible), we have separated the homes of most of the classes to reflect ownership and to make your dependency lists lighter. Most of the integrations you need can be found in the `langchain-community` package, and if you are just using the core expression language APIs, you can even build solely on `langchain-core`.\n\n## Environment Setup\n\nThe following environment variables need to be set:\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nSet the `TAVILY_API_KEY` environment variable to access Tavily search.\n\nCreate a [`credentials.json`](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application) file containing your OAuth client ID from Gmail. To customize authentication, see the [Customize Auth](#customize-auth) section below.\n\n_*Note:* The first time you run this app, it will force you to go through a user authentication flow._\n\n(Optional): Set `GMAIL_AGENT_ENABLE_SEND` to `true` (or modify the `agent.py` file in this template) to give it access to the \"Send\" tool. This will give your assistant permissions to send emails on your behalf without your explicit review, which is not recommended.\n\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package openai-functions-agent-gmail\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add openai-functions-agent-gmail\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom openai_functions_agent import agent_executor as openai_functions_agent_chain\n\nadd_routes(app, openai_functions_agent_chain, path=\"/openai-functions-agent-gmail\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/openai-functions-agent-gmail/playground](http://127.0.0.1:8000/openai-functions-agent-gmail/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/openai-functions-agent-gmail\")\n```\n\n## Customize Auth\n\n```python\nfrom langchain_community.agent_toolkits import GmailToolkit\nfrom langchain_community.tools.gmail.utils import build_resource_service, get_gmail_credentials\n\n# Can review scopes here https://developers.google.com/gmail/api/auth/scopes\n# For instance, readonly scope is 'https://www.googleapis.com/auth/gmail.readonly'\ncredentials = get_gmail_credentials(\n    token_file=\"token.json\",\n    scopes=[\"https://mail.google.com/\"],\n    client_secrets_file=\"credentials.json\",\n)\napi_resource = build_resource_service(credentials=credentials)\ntoolkit = GmailToolkit(api_resource=api_resource)\n```" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\openai-functions-tool-retrieval-agent\\README.md", "filetype": ".md", "content": "# openai-functions-tool-retrieval-agent\n\nThe novel idea introduced in this template is using retrieval to select the set of tools with which to answer an agent query. This is useful when you have many tools to select from. You cannot put the descriptions of all the tools in the prompt (because of context length issues), so instead you dynamically select the N tools you do want to consider using at run time.\n\nIn this template we will create a somewhat contrived example. We will have one legitimate tool (search) and then 99 fake tools which are just nonsense. We will then add a step in the prompt template that takes the user input and retrieves the tools relevant to the query.\n\nThis template is based on [this Agent How-To](https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval).\n
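\nAt its core, the retrieval step is a similarity search over tool descriptions. The following is a minimal, self-contained sketch of the idea with dummy tools (illustrative names only, not the template's own code; it assumes `faiss-cpu` is installed and `OPENAI_API_KEY` is set):\n\n```python\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.schema import Document\nfrom langchain.tools import Tool\nfrom langchain.vectorstores import FAISS\n\n# Stand-ins for the one real tool plus many nonsense tools\ntools = [Tool(name=f\"tool-{i}\", func=lambda q: q, description=f\"a tool for task {i}\") for i in range(100)]\n\n# Index the tool descriptions so the relevant tools can be fetched per query\ndocs = [Document(page_content=t.description, metadata={\"index\": i}) for i, t in enumerate(tools)]\nretriever = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever(search_kwargs={\"k\": 5})\n\ndef get_tools(query: str) -> list:\n    \"\"\"Return only the tools whose descriptions best match the query.\"\"\"\n    return [tools[d.metadata[\"index\"]] for d in retriever.get_relevant_documents(query)]\n```\n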
\n## Environment Setup\n\nThe following environment variables need to be set:\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nSet the `TAVILY_API_KEY` environment variable to access Tavily.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package openai-functions-tool-retrieval-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add openai-functions-tool-retrieval-agent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom openai_functions_tool_retrieval_agent import chain as openai_functions_tool_retrieval_agent_chain\n\nadd_routes(app, openai_functions_tool_retrieval_agent_chain, path=\"/openai-functions-tool-retrieval-agent\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground](http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/openai-functions-tool-retrieval-agent\")\n```" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\pii-protected-chatbot\\README.md", "filetype": ".md", "content": "# pii-protected-chatbot\n\nThis template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.\n
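\nThe core pattern is a guard step that screens input before any LLM call. Below is a minimal sketch of that idea using two illustrative regexes (hypothetical patterns; the template's actual detection logic lives in `pii_protected_chatbot/chain.py`):\n\n```python\nimport re\n\nfrom langchain.schema.runnable import RunnableBranch, RunnableLambda\n\nPII_PATTERNS = [\n    re.compile(r\"[\\w.+-]+@[\\w-]+\\.[\\w.]+\"),  # email addresses\n    re.compile(r\"\\b\\d{3}[-.\\s]?\\d{3}[-.\\s]?\\d{4}\\b\"),  # US-style phone numbers\n]\n\ndef _contains_pii(inputs: dict) -> bool:\n    return any(p.search(inputs[\"text\"]) for p in PII_PATTERNS)\n\n# Route flagged input to a canned refusal instead of the LLM chain\nguard = RunnableBranch(\n    (_contains_pii, RunnableLambda(lambda _: \"Sorry, I can't process messages containing PII.\")),\n    RunnableLambda(lambda inputs: inputs[\"text\"]),  # safe input continues on to the LLM\n)\n\nprint(guard.invoke({\"text\": \"My email is jane@example.com\"}))\n```\n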
\n## Environment Setup\n\nThe following environment variables need to be set:\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package pii-protected-chatbot\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add pii-protected-chatbot\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom pii_protected_chatbot.chain import chain as pii_protected_chatbot\n\nadd_routes(app, pii_protected_chatbot, path=\"/pii-protected-chatbot\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/pii-protected-chatbot/playground](http://127.0.0.1:8000/pii-protected-chatbot/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/pii-protected-chatbot\")\n```" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\pirate-speak\\README.md", "filetype": ".md", "content": "\n# pirate-speak\n\nThis template converts user input into pirate speak.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package pirate-speak\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add pirate-speak\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom pirate_speak.chain import chain as pirate_speak_chain\n\nadd_routes(app, pirate_speak_chain, path=\"/pirate-speak\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/pirate-speak/playground](http://127.0.0.1:8000/pirate-speak/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/pirate-speak\")\n```\n" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\pirate-speak-configurable\\README.md", "filetype": ".md", "content": "# pirate-speak-configurable\n\nThis template converts user input into pirate speak. It shows how you can use\n`configurable_alternatives` in the Runnable, allowing you to select\nOpenAI, Anthropic, or Cohere as your LLM provider in the playground (or via the API).\n
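\nUnder the hood, `configurable_alternatives` registers each provider under a key that can be selected per request. A rough sketch of the mechanism (the template's own chain lives in the `pirate_speak_configurable` package; this standalone example assumes API keys for all three providers are set):\n\n```python\nfrom langchain.chat_models import ChatAnthropic, ChatCohere, ChatOpenAI\nfrom langchain_core.runnables import ConfigurableField\n\n# Register the alternatives; \"openai\" is used unless overridden\nllm = ChatOpenAI().configurable_alternatives(\n    ConfigurableField(id=\"llm_provider\"),\n    default_key=\"openai\",\n    anthropic=ChatAnthropic(),\n    cohere=ChatCohere(),\n)\n\n# Pick the provider at invocation time without rebuilding the chain\nprint(llm.with_config(configurable={\"llm_provider\": \"anthropic\"}).invoke(\"Ahoy there!\"))\n```\n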
\n## Environment Setup\n\nSet the following environment variables to access all 3 configurable alternative\nmodel providers:\n\n- `OPENAI_API_KEY`\n- `ANTHROPIC_API_KEY`\n- `COHERE_API_KEY`\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package pirate-speak-configurable\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add pirate-speak-configurable\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom pirate_speak_configurable import chain as pirate_speak_configurable_chain\n\nadd_routes(app, pirate_speak_configurable_chain, path=\"/pirate-speak-configurable\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/pirate-speak-configurable/playground](http://127.0.0.1:8000/pirate-speak-configurable/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/pirate-speak-configurable\")\n```" }, { "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\plate-chain\\README.md", "filetype": ".md", "content": "\n# plate-chain\n\nThis template enables parsing of data from laboratory plates. \n\nIn the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format. 
\n\nIt can parse the resulting data into a standardized format (e.g., JSON) for further processing.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo utilize plate-chain, you must have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nCreating a new LangChain project and installing plate-chain as the only package can be done with:\n\n```shell\nlangchain app new my-app --package plate-chain\n```\n\nIf you wish to add this to an existing project, simply run:\n\n```shell\nlangchain app add plate-chain\n```\n\nThen add the following code to your `server.py` file:\n\n```python\nfrom plate_chain import chain as plate_chain\n\nadd_routes(app, plate_chain, path=\"/plate-chain\")\n```\n\n(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you're in this directory, you can start a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis starts the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nAll templates can be viewed at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nAccess the playground at [http://127.0.0.1:8000/plate-chain/playground](http://127.0.0.1:8000/plate-chain/playground) \n\nYou can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/plate-chain\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\propositional-retrieval\\README.md", + "filetype": ".md", + "content": "# propositional-retrieval\n\nThis template demonstrates the multi-vector indexing strategy proposed by Chen et al.'s [Dense X Retrieval: What Retrieval Granularity Should We Use?](https://arxiv.org/abs/2312.06648). The prompt, which you can [try out on the hub](https://smith.langchain.com/hub/wfh/proposal-indexing), directs an LLM to generate de-contextualized \"propositions\" which can be vectorized to increase the retrieval accuracy. You can see the full definition in `proposal_chain.py`.\n\n![Diagram illustrating the multi-vector indexing strategy for information retrieval, showing the process from Wikipedia data through a Proposition-izer to FactoidWiki, and the retrieval of information units for a QA model.](https://github.com/langchain-ai/langchain/raw/master/templates/propositional-retrieval/_images/retriever_diagram.png \"Retriever Diagram\")\n\n## Storage\n\nFor this demo, we index a simple academic paper using the RecursiveUrlLoader, and store all retriever information locally (using Chroma and a bytestore stored on the local filesystem). You can modify the storage layer in `storage.py`.
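\n\nConceptually, the indexing pattern looks something like this minimal sketch (illustrative names only; the template's real wiring lives in `storage.py` and `proposal_chain.py`):\n\n```python\nfrom langchain.retrievers.multi_vector import MultiVectorRetriever\nfrom langchain.storage import InMemoryStore\nfrom langchain_community.embeddings import OpenAIEmbeddings\nfrom langchain_community.vectorstores import Chroma\n\n# Propositions are embedded and indexed; full documents live in a docstore\nvectorstore = Chroma(collection_name=\"propositions\", embedding_function=OpenAIEmbeddings())\nretriever = MultiVectorRetriever(\n    vectorstore=vectorstore,  # searches the LLM-generated propositions\n    docstore=InMemoryStore(),  # holds the parent documents (a filesystem-backed store in the demo)\n    id_key=\"doc_id\",  # metadata key linking each proposition to its parent\n)\n```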
\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access `gpt-3.5` and the OpenAI Embeddings classes.\n\n## Indexing\n\nCreate the index by running the following:\n\n```shell\npoetry install\npoetry run python propositional_retrieval/ingest.py\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package propositional-retrieval\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add propositional-retrieval\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom propositional_retrieval import chain\n\nadd_routes(app, chain, path=\"/propositional-retrieval\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/propositional-retrieval/playground](http://127.0.0.1:8000/propositional-retrieval/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/propositional-retrieval\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\python-lint\\README.md", + "filetype": ".md", + "content": "# python-lint\n\nThis agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses `black`, `ruff`, and `mypy` to ensure the code meets standard quality checks.
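\n\nFor intuition, the feedback loop can be driven with something like this sketch (a hypothetical helper, not the template's actual internals):\n\n```python\nimport subprocess\nimport tempfile\n\ndef run_checks(code: str) -> dict[str, str]:\n    \"\"\"Run black/ruff/mypy on generated code and collect their reports.\"\"\"\n    with tempfile.NamedTemporaryFile(\"w\", suffix=\".py\", delete=False) as f:\n        f.write(code)\n        path = f.name\n    reports = {}\n    for cmd in ([\"black\", \"--check\", path], [\"ruff\", \"check\", path], [\"mypy\", path]):\n        proc = subprocess.run(cmd, capture_output=True, text=True)\n        reports[cmd[0]] = proc.stdout + proc.stderr  # fed back to the agent on failure\n    return reports\n```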
\n\nThis streamlines the coding process by integrating and responding to these checks, resulting in reliable and consistent code output.\n\nIt cannot actually execute the code it writes, as code execution may introduce additional dependencies and potential security vulnerabilities.\nThis makes the agent both a secure and efficient solution for code generation tasks.\n\nYou can use it to generate Python code directly, or network it with planning and execution agents.\n\n## Environment Setup\n\n- Install `black`, `ruff`, and `mypy`: `pip install -U black ruff mypy`\n- Set the `OPENAI_API_KEY` environment variable.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package python-lint\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add python-lint\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom python_lint import agent_executor as python_lint_agent\n\nadd_routes(app, python_lint_agent, path=\"/python-lint\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/python-lint/playground](http://127.0.0.1:8000/python-lint/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/python-lint\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-astradb\\README.md", + "filetype": ".md", + "content": "\n# rag-astradb\n\nThis template will perform RAG using Astra DB (`AstraDB` vector store class).\n\n## Environment Setup\n\nAn [Astra DB](https://astra.datastax.com) database is required; free tier is fine.\n\n- You need the database **API endpoint** (such as `https://0123...-us-east1.apps.astra.datastax.com`) ...\n- ... and a **token** (`AstraCS:...`).\n\nAlso, an **OpenAI API Key** is required. _Note that out-of-the-box this demo supports OpenAI only, unless you tinker with the code._\n\nProvide the connection parameters and secrets through environment variables. Please refer to `.env.template` for the variable names.
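\n\nAs a quick orientation, connecting to the vector store looks roughly like this (a sketch with assumed environment variable names and a made-up collection name; check `.env.template` for the real ones):\n\n```python\nimport os\n\nfrom langchain_community.embeddings import OpenAIEmbeddings\nfrom langchain_community.vectorstores import AstraDB\n\nvstore = AstraDB(\n    embedding=OpenAIEmbeddings(),\n    collection_name=\"entomology_demo\",  # hypothetical collection name\n    api_endpoint=os.environ[\"ASTRA_DB_API_ENDPOINT\"],\n    token=os.environ[\"ASTRA_DB_APPLICATION_TOKEN\"],\n)\nretriever = vstore.as_retriever()\n```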
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-astradb\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-astradb\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom astradb_entomology_rag import chain as astradb_entomology_rag_chain\n\nadd_routes(app, astradb_entomology_rag_chain, path=\"/rag-astradb\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-astradb/playground](http://127.0.0.1:8000/rag-astradb/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-astradb\")\n```\n\n## Reference\n\nStand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_astradb_entomology_rag).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-astradb\\sources.txt", + "filetype": ".txt", + "content": "# source: https://www.thoughtco.com/a-guide-to-the-twenty-nine-insect-orders-1968419\n\nOrder Thysanura: The silverfish and firebrats are found in the order Thysanura. They are wingless insects often found in people's attics, and have a lifespan of several years. There are about 600 species worldwide.\nOrder Diplura: Diplurans are the most primitive insect species, with no eyes or wings. They have the unusual ability among insects to regenerate body parts. There are over 400 members of the order Diplura in the world.\nOrder Protura: Another very primitive group, the proturans have no eyes, no antennae, and no wings. They are uncommon, with perhaps less than 100 species known.\nOrder Collembola: The order Collembola includes the springtails, primitive insects without wings. There are approximately 2,000 species of Collembola worldwide.\nOrder Ephemeroptera: The mayflies of order Ephemeroptera are short-lived, and undergo incomplete metamorphosis. The larvae are aquatic, feeding on algae and other plant life. Entomologists have described about 2,100 species worldwide.\nOrder Odonata: The order Odonata includes dragonflies and damselflies, which undergo incomplete metamorphosis. They are predators of other insects, even in their immature stage. There are about 5,000 species in the order Odonata.\nOrder Plecoptera: The stoneflies of order Plecoptera are aquatic and undergo incomplete metamorphosis. The nymphs live under rocks in well flowing streams. 
Adults are usually seen on the ground along stream and river banks. There are roughly 3,000 species in this group.\nOrder Grylloblatodea: Sometimes referred to as \"living fossils,\" the insects of the order Grylloblatodea have changed little from their ancient ancestors. This order is the smallest of all the insect orders, with perhaps only 25 known species living today. Grylloblatodea live at elevations above 1500 ft., and are commonly named ice bugs or rock crawlers.\nOrder Orthoptera: These are familiar insects (grasshoppers, locusts, katydids, and crickets) and one of the largest orders of herbivorous insects. Many species in the order Orthoptera can produce and detect sounds. Approximately 20,000 species exist in this group.\nOrder Phasmida: The order Phasmida are masters of camouflage, the stick and leaf insects. They undergo incomplete metamorphosis and feed on leaves. There are some 3,000 insects in this group, but only a small fraction of this number is leaf insects. Stick insects are the longest insects in the world.\nOrder Dermaptera: This order contains the earwigs, an easily recognized insect that often has pincers at the end of the abdomen. Many earwigs are scavengers, eating both plant and animal matter. The order Dermaptera includes less than 2,000 species.\nOrder Embiidina: The order Embioptera is another ancient order with few species, perhaps only 200 worldwide. The web spinners have silk glands in their front legs and weave nests under leaf litter and in tunnels where they live. Webspinners live in tropical or subtropical climates.\nOrder Dictyoptera: The order Dictyoptera includes roaches and mantids. Both groups have long, segmented antennae and leathery forewings held tightly against their backs. They undergo incomplete metamorphosis. Worldwide, there are approximately 6,000 species in this order, most living in tropical regions.\nOrder Isoptera: Termites feed on wood and are important decomposers in forest ecosystems. They also feed on wood products and are thought of as pests for the destruction they cause to man-made structures. There are between 2,000 and 3,000 species in this order.\nOrder Zoraptera: Little is known about the angel insects, which belong to the order Zoraptera. Though they are grouped with winged insects, many are actually wingless. Members of this group are blind, small, and often found in decaying wood. There are only about 30 described species worldwide.\nOrder Psocoptera: Bark lice forage on algae, lichen, and fungus in moist, dark places. Booklice frequent human dwellings, where they feed on book paste and grains. They undergo incomplete metamorphosis. Entomologists have named about 3,200 species in the order Psocoptera.\nOrder Mallophaga: Biting lice are ectoparasites that feed on birds and some mammals. There are an estimated 3,000 species in the order Mallophaga, all of which undergo incomplete metamorphosis.\nOrder Siphunculata: The order Siphunculata are the sucking lice, which feed on the fresh blood of mammals. Their mouthparts are adapted for sucking or siphoning blood. There are only about 500 species of sucking lice.\nOrder Hemiptera: Most people use the term \"bugs\" to mean insects; an entomologist uses the term to refer to the order Hemiptera. The Hemiptera are the true bugs, and include cicadas, aphids, spittlebugs, and others. This is a large group of over 70,000 species worldwide.\nOrder Thysanoptera: The thrips of order Thysanoptera are small insects that feed on plant tissue. 
Many are considered agricultural pests for this reason. Some thrips prey on other small insects as well. This order contains about 5,000 species.\nOrder Neuroptera: Commonly called the order of lacewings, this group actually includes a variety of other insects, too: dobsonflies, owlflies, mantidflies, antlions, snakeflies, and alderflies. Insects in the order Neuroptera undergo complete metamorphosis. Worldwide, there are over 5,500 species in this group.\nOrder Mecoptera: This order includes the scorpionflies, which live in moist, wooded habitats. Scorpionflies are omnivorous in both their larval and adult forms. The larvae are caterpillar-like. There are less than 500 described species in the order Mecoptera.\nOrder Siphonaptera: Pet lovers fear insects in the order Siphonaptera - the fleas. Fleas are blood-sucking ectoparasites that feed on mammals, and rarely, birds. There are well over 2,000 species of fleas in the world.\nOrder Coleoptera: This group, the beetles and weevils, is the largest order in the insect world, with over 300,000 distinct species known. The order Coleoptera includes well-known families: june beetles, lady beetles, click beetles, and fireflies. All have hardened forewings that fold over the abdomen to protect the delicate hindwings used for flight.\nOrder Strepsiptera: Insects in this group are parasites of other insects, particularly bees, grasshoppers, and the true bugs. The immature Strepsiptera lies in wait on a flower and quickly burrows into any host insect that comes along. Strepsiptera undergo complete metamorphosis and pupate within the host insect's body.\nOrder Diptera: Diptera is one of the largest orders, with nearly 100,000 insects named to the order. These are the true flies, mosquitoes, and gnats. Insects in this group have modified hindwings which are used for balance during flight. The forewings function as the propellers for flying.\nOrder Lepidoptera: The butterflies and moths of the order Lepidoptera comprise the second largest group in the class Insecta. These well-known insects have scaly wings with interesting colors and patterns. You can often identify an insect in this order just by the wing shape and color.\nOrder Trichoptera: Caddisflies are nocturnal as adults and aquatic when immature. The caddisfly adults have silky hairs on their wings and body, which is key to identifying a Trichoptera member. The larvae spin traps for prey with silk. They also make cases from the silk and other materials that they carry and use for protection.\nOrder Hymenoptera: The order Hymenoptera includes many of the most common insects - ants, bees, and wasps. The larvae of some wasps cause trees to form galls, which then provide food for the immature wasps. Other wasps are parasitic, living in caterpillars, beetles, or even aphids. 
This is the third-largest insect order with just over 100,000 species.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-aws-bedrock\\README.md", + "filetype": ".md", + "content": "\n# rag-aws-bedrock\n\nThis template is designed to connect with the AWS Bedrock service, a managed service that offers a set of foundation models.\n\nIt primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.\n\nFor additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).\n\n## Environment Setup\n\nBefore you can use this package, ensure that you have configured `boto3` to work with your AWS account. \n\nFor details on how to set up and configure `boto3`, visit [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).\n\nIn addition, you need to install the `faiss-cpu` package to work with the FAISS vector store:\n\n```bash\npip install faiss-cpu\n```\n\nYou should also set the following environment variables to reflect your AWS profile and region (if you're not using the `default` AWS profile and `us-east-1` region):\n\n* `AWS_DEFAULT_REGION`\n* `AWS_PROFILE`\n\n## Usage\n\nFirst, install the LangChain CLI:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package:\n\n```shell\nlangchain app new my-app --package rag-aws-bedrock\n```\n\nTo add this package to an existing project:\n\n```shell\nlangchain app add rag-aws-bedrock\n```\n\nThen add the following code to your `server.py` file:\n```python\nfrom rag_aws_bedrock import chain as rag_aws_bedrock_chain\n\nadd_routes(app, rag_aws_bedrock_chain, path=\"/rag-aws-bedrock\")\n```\n\n(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)\n\nYou can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) and access the playground at [http://127.0.0.1:8000/rag-aws-bedrock/playground](http://127.0.0.1:8000/rag-aws-bedrock/playground). \n\nYou can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-aws-bedrock\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-aws-kendra\\README.md", + "filetype": ".md", + "content": "# rag-aws-kendra\n\nThis template is an application that utilizes Amazon Kendra, a machine learning powered search service, and Anthropic Claude for text generation. The application retrieves documents using a Retrieval chain to answer questions from your documents. \n\nIt uses the `boto3` library to connect with the Bedrock service. 
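\n\nAt its core, the retrieval step pairs a Kendra retriever with a Bedrock-hosted Claude model, roughly like this sketch (hypothetical wiring and an assumed model id; `chain.py` defines the real chain):\n\n```python\nimport os\n\nfrom langchain.chains import RetrievalQA\nfrom langchain_community.llms import Bedrock\nfrom langchain_community.retrievers import AmazonKendraRetriever\n\n# Kendra handles document search; Bedrock serves Claude for generation\nretriever = AmazonKendraRetriever(index_id=os.environ[\"KENDRA_INDEX_ID\"])\nllm = Bedrock(model_id=\"anthropic.claude-v2\")  # assumed model id\nqa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)\n```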
\n\nFor more context on building RAG applications with Amazon Kendra, check [this page](https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/).\n\n## Environment Setup\n\nPlease ensure you set up and configure `boto3` to work with your AWS account. \n\nYou can follow the guide [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).\n\nYou should also have a Kendra Index set up before using this template. \n\nYou can use [this Cloudformation template](https://github.com/aws-samples/amazon-kendra-langchain-extensions/blob/main/kendra_retriever_samples/kendra-docs-index.yaml) to create a sample index. \n\nThis includes sample data containing AWS online documentation for Amazon Kendra, Amazon Lex, and Amazon SageMaker. Alternatively, you can use your own Amazon Kendra index if you have indexed your own dataset. \n\nThe following environment variables need to be set:\n\n* `AWS_DEFAULT_REGION` - This should reflect the correct AWS region. Default is `us-east-1`.\n* `AWS_PROFILE` - This should reflect your AWS profile. Default is `default`.\n* `KENDRA_INDEX_ID` - This should have the Index ID of the Kendra index. Note that the Index ID is a 36-character alphanumeric value that can be found on the index detail page.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-aws-kendra\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-aws-kendra\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_aws_kendra.chain import chain as rag_aws_kendra_chain\n\nadd_routes(app, rag_aws_kendra_chain, path=\"/rag-aws-kendra\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-aws-kendra/playground](http://127.0.0.1:8000/rag-aws-kendra/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-aws-kendra\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-chroma\\README.md", + "filetype": ".md", + "content": "\n# rag-chroma\n\nThis template performs RAG using Chroma and OpenAI.\n\nThe vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-chroma\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-chroma\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_chroma import chain as rag_chroma_chain\n\nadd_routes(app, rag_chroma_chain, path=\"/rag-chroma\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-chroma/playground](http://127.0.0.1:8000/rag-chroma/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-chroma\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-chroma-multi-modal\\README.md", + "filetype": ".md", + "content": "\n# rag-chroma-multi-modal\n\nMulti-modal LLMs enable visual assistants that can perform question-answering about images. 
\n\nThis template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.\n\nIt uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.\n\nGiven a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.\n\n![Diagram illustrating the workflow of a multi-modal LLM visual assistant using OpenCLIP embeddings and GPT-4V for question-answering based on slide deck images.](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200 \"Workflow Diagram for Multi-modal LLM Visual Assistant\")\n\n## Input\n\nSupply a slide deck as a PDF in the `/docs` directory. \n\nBy default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.\n\nExample questions to ask can be:\n```\nHow many customers does Datadog have?\nWhat is Datadog platform % Y/Y growth in FY20, FY21, and FY22?\n```\n\nTo create an index of the slide deck, run:\n```shell\npoetry install\npython ingest.py\n```\n\n## Storage\n\nThis template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.\n\nYou can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).\n\nThe first time you run the app, it will automatically download the multimodal embedding model.\n\nBy default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.\n\nYou can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:\n```python\nvectorstore_mmembd = Chroma(\n    collection_name=\"multi-modal-rag\",\n    persist_directory=str(re_vectorstore_path),\n    embedding_function=OpenCLIPEmbeddings(\n        model_name=\"ViT-H-14\", checkpoint=\"laion2b_s32b_b79k\"\n    ),\n)\n```\n\n## LLM\n\nThe app will retrieve images based on similarity between the text input and the image, which are both mapped to multi-modal embedding space. It will then pass the images to GPT-4V.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access OpenAI GPT-4V.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-chroma-multi-modal\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-chroma-multi-modal\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain\n\nadd_routes(app, rag_chroma_multi_modal_chain, path=\"/rag-chroma-multi-modal\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal/playground](http://127.0.0.1:8000/rag-chroma-multi-modal/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-chroma-multi-modal\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-chroma-multi-modal-multi-vector\\README.md", + "filetype": ".md", + "content": "\n# rag-chroma-multi-modal-multi-vector\n\nMulti-modal LLMs enable visual assistants that can perform question-answering about images. \n\nThis template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.\n\nIt uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.\n\nGiven a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.\n\n![Diagram illustrating the multi-modal LLM process with a slide deck, captioning, storage, question input, and answer synthesis with year-over-year growth percentages.](https://github.com/langchain-ai/langchain/assets/122662504/5277ef6b-d637-43c7-8dc1-9b1567470503 \"Multi-modal LLM Process Diagram\")\n\n## Input\n\nSupply a slide deck as a PDF in the `/docs` directory. 
\n\nBy default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.\n\nExample questions to ask can be:\n```\nHow many customers does Datadog have?\nWhat is Datadog platform % Y/Y growth in FY20, FY21, and FY22?\n```\n\nTo create an index of the slide deck, run:\n```shell\npoetry install\npython ingest.py\n```\n\n## Storage\n\nHere is the process the template will use to create an index of the slides (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):\n\n* Extract the slides as a collection of images\n* Use GPT-4V to summarize each image\n* Embed the image summaries using text embeddings with a link to the original images\n* Retrieve relevant images based on similarity between the image summary and the user input question\n* Pass those images to GPT-4V for answer synthesis\n\nBy default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.\n\nFor production, it may be desirable to use a remote option such as Redis.\n\nYou can set the `local_file_store` flag in `chain.py` and `ingest.py` to switch between the two options.\n\nFor Redis, the template will use [UpstashRedisByteStore](https://python.langchain.com/docs/integrations/stores/upstash_redis).\n\nWe will use Upstash to store the images, which offers Redis with a REST API.\n\nSimply log in [here](https://upstash.com/) and create a database.\n\nThis will give you a REST API with:\n\n* `UPSTASH_URL`\n* `UPSTASH_TOKEN`\n\nSet `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database.\n\nWe will use Chroma to store and index the image summaries, which will be created locally in the template directory.\n\n## LLM\n\nThe app will retrieve images based on similarity between the text input and the image summary, and pass the images to GPT-4V.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access OpenAI GPT-4V.\n\nSet `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database if you use `UpstashRedisByteStore`.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-chroma-multi-modal-multi-vector\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-chroma-multi-modal-multi-vector\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_chroma_multi_modal_multi_vector import chain as rag_chroma_multi_modal_chain_mv\n\nadd_routes(app, rag_chroma_multi_modal_chain_mv, path=\"/rag-chroma-multi-modal-multi-vector\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground](http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-chroma-multi-modal-multi-vector\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-chroma-private\\README.md", + "filetype": ".md", + "content": "\n# rag-chroma-private\n\nThis template performs RAG with no reliance on external APIs. \n\nIt utilizes Ollama as the LLM, GPT4All for embeddings, and Chroma for the vectorstore.\n\nThe vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering. \n\n## Environment Setup\n\nTo set up the environment, you need to download Ollama. \n\nFollow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama). \n\nYou can choose the desired LLM with Ollama. \n\nThis template uses `llama2:7b-chat`, which can be accessed using `ollama pull llama2:7b-chat`.\n\nThere are many other options available [here](https://ollama.ai/library).\n\nThis package also uses [GPT4All](https://python.langchain.com/docs/integrations/text_embedding/gpt4all) embeddings. 
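\n\nPut together, the fully local stack looks roughly like this sketch (illustrative only; `chain.py` holds the template's actual definitions):\n\n```python\nfrom langchain_community.chat_models import ChatOllama\nfrom langchain_community.embeddings import GPT4AllEmbeddings\nfrom langchain_community.vectorstores import Chroma\n\n# Everything runs locally: Ollama serves the LLM, GPT4All embeds, Chroma stores\nllm = ChatOllama(model=\"llama2:7b-chat\")\nvectorstore = Chroma(collection_name=\"rag-private\", embedding_function=GPT4AllEmbeddings())\nretriever = vectorstore.as_retriever()\n```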
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-chroma-private\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-chroma-private\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_chroma_private import chain as rag_chroma_private_chain\n\nadd_routes(app, rag_chroma_private_chain, path=\"/rag-chroma-private\")\n```\n\n(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-chroma-private/playground](http://127.0.0.1:8000/rag-chroma-private/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-chroma-private\")\n```\n\nThe package will create and add documents to the vector database in `chain.py`. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-codellama-fireworks\\README.md", + "filetype": ".md", + "content": "\n# rag-codellama-fireworks\n\nThis template performs RAG on a codebase. \n\nIt uses codellama-34b hosted by Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a).\n\n## Environment Setup\n\nSet the `FIREWORKS_API_KEY` environment variable to access the Fireworks models.\n\nYou can obtain it from [here](https://app.fireworks.ai/login?callbackURL=https://app.fireworks.ai).\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-codellama-fireworks\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-codellama-fireworks\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_codellama_fireworks import chain as rag_codellama_fireworks_chain\n\nadd_routes(app, rag_codellama_fireworks_chain, path=\"/rag-codellama-fireworks\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-codellama-fireworks/playground](http://127.0.0.1:8000/rag-codellama-fireworks/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-codellama-fireworks\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-conversation\\README.md", + "filetype": ".md", + "content": "\n# rag-conversation\n\nThis template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases. \n\nIt passes both a conversation history and retrieved documents into an LLM for synthesis.\n\n## Environment Setup\n\nThis template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set. \n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-conversation\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-conversation\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_conversation import chain as rag_conversation_chain\n\nadd_routes(app, rag_conversation_chain, path=\"/rag-conversation\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-conversation/playground](http://127.0.0.1:8000/rag-conversation/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-conversation\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-conversation-zep\\README.md", + "filetype": ".md", + "content": "# rag-conversation-zep\n\nThis template demonstrates building a RAG conversation app using Zep. \n\nIncluded in this template:\n- Populating a [Zep Document Collection](https://docs.getzep.com/sdk/documents/) with a set of documents (a Collection is analogous to an index in other Vector Databases).\n- Using Zep's [integrated embedding](https://docs.getzep.com/deployment/embeddings/) functionality to embed the documents as vectors.\n- Configuring a LangChain [ZepVectorStore Retriever](https://docs.getzep.com/sdk/documents/) to retrieve documents using Zep's built-in, hardware-accelerated [Maximal Marginal Relevance](https://docs.getzep.com/sdk/search_query/) (MMR) re-ranking.\n- Prompts, a simple chat history data structure, and other components required to build a RAG conversation app.\n- The RAG conversation chain.\n\n## About [Zep - Fast, scalable building blocks for LLM Apps](https://www.getzep.com/)\nZep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.\n\nKey Features:\n\n- Fast! Zep\u2019s async extractors operate independently of your chat loop, ensuring a snappy user experience.\n- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.\n- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.\n- Hybrid search over memories and metadata, with messages automatically embedded on creation.\n- Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.\n- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.\n- Python and JavaScript SDKs.\n\nZep project: https://github.com/getzep/zep | Docs: https://docs.getzep.com/\n\n## Environment Setup\n\nSet up a Zep service by following the [Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).\n\n## Ingesting Documents into a Zep Collection\n\nRun `python ingest.py` to ingest the test documents into a Zep Collection. Review the file to modify the Collection name and document source.
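\n\nA minimal sketch of the retriever configuration (assumed collection name and server URL; see `ingest.py` and the chain code for the template's real values):\n\n```python\nfrom langchain_community.vectorstores import ZepVectorStore\n\nvectorstore = ZepVectorStore(\n    collection_name=\"demo_collection\",  # hypothetical collection\n    api_url=\"http://localhost:8000\",  # your Zep server\n)\n# Zep performs the MMR re-ranking server-side, with hardware acceleration\nretriever = vectorstore.as_retriever(search_type=\"mmr\")\n```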
\n\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-conversation-zep\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-conversation-zep\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_conversation_zep import chain as rag_conversation_zep_chain\n\nadd_routes(app, rag_conversation_zep_chain, path=\"/rag-conversation-zep\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-conversation-zep/playground](http://127.0.0.1:8000/rag-conversation-zep/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-conversation-zep\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-elasticsearch\\README.md", + "filetype": ".md", + "content": "\n# rag-elasticsearch\n\nThis template performs RAG using [ElasticSearch](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).\n\nIt relies on sentence transformer `MiniLM-L6-v2` for embedding passages and questions.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nTo connect to your Elasticsearch instance, use the following environment variables:\n\n```bash\nexport ELASTIC_CLOUD_ID=\nexport ELASTIC_USERNAME=\nexport ELASTIC_PASSWORD=\n```\n\nFor local development with Docker, use:\n\n```bash\nexport ES_URL=\"http://localhost:9200\"\n```\n\nAnd run an Elasticsearch instance in Docker with:\n```bash\ndocker run -p 9200:9200 -e \"discovery.type=single-node\" -e \"xpack.security.enabled=false\" -e \"xpack.security.http.ssl.enabled=false\" docker.elastic.co/elasticsearch/elasticsearch:8.9.0\n```
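\n\nThe connection can be sketched like this (illustrative index name; the template's own code does the real setup):\n\n```python\nimport os\n\nfrom langchain_community.embeddings import HuggingFaceEmbeddings\nfrom langchain_community.vectorstores import ElasticsearchStore\n\nvectorstore = ElasticsearchStore(\n    es_url=os.environ.get(\"ES_URL\", \"http://localhost:9200\"),\n    index_name=\"workplace-docs\",  # assumed index name\n    embedding=HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\"),\n)\nretriever = vectorstore.as_retriever()\n```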
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-elasticsearch\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-elasticsearch\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_elasticsearch import chain as rag_elasticsearch_chain\n\nadd_routes(app, rag_elasticsearch_chain, path=\"/rag-elasticsearch\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-elasticsearch\")\n```\n\nFor loading the fictional workplace documents, run the following command from the root of this repository:\n\n```bash\npython ingest.py\n```\n\nHowever, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders). \n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-fusion\\README.md", + "filetype": ".md", + "content": "\n# rag-fusion\n\nThis template enables RAG fusion using a re-implementation of the project found [here](https://github.com/Raudaschl/rag-fusion). \n\nIt performs multiple query generation and Reciprocal Rank Fusion to re-rank search results.
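\n\nThe fusion step itself is simple; an illustrative version (not the template's exact code) looks like:\n\n```python\ndef reciprocal_rank_fusion(result_lists, k=60):\n    \"\"\"Fuse several ranked lists of doc ids: score(d) = sum(1 / (k + rank)).\"\"\"\n    scores = {}\n    for results in result_lists:\n        for rank, doc in enumerate(results):\n            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)\n    # Highest fused score first\n    return sorted(scores, key=scores.get, reverse=True)\n```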
\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-fusion\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-fusion\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_fusion.chain import chain as rag_fusion_chain\n\nadd_routes(app, rag_fusion_chain, path=\"/rag-fusion\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-fusion/playground](http://127.0.0.1:8000/rag-fusion/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-fusion\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-gemini-multi-modal\\README.md", + "filetype": ".md", + "content": "\n# rag-gemini-multi-modal\n\nMulti-modal LLMs enable visual assistants that can perform question-answering about images. \n\nThis template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.\n\nIt uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.\n\nGiven a question, relevant slides are retrieved and passed to [Google Gemini](https://deepmind.google/technologies/gemini/#introduction) for answer synthesis.\n\n![Diagram illustrating the process of a visual assistant using multi-modal LLM, from slide deck images to OpenCLIP embedding, retrieval, and synthesis with Google Gemini, resulting in an answer.](https://github.com/langchain-ai/langchain/assets/122662504/b9e69bef-d687-4ecf-a599-937e559d5184 \"Workflow Diagram for Visual Assistant Using Multi-modal LLM\")\n\n## Input\n\nSupply a slide deck as a PDF in the `/docs` directory. 
\n\nBy default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.\n\nExample questions to ask can be:\n```\nHow many customers does Datadog have?\nWhat is Datadog platform % Y/Y growth in FY20, FY21, and FY22?\n```\n\nTo create an index of the slide deck, run:\n```shell\npoetry install\npython ingest.py\n```\n\n## Storage\n\nThis template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.\n\nYou can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).\n\nThe first time you run the app, it will automatically download the multimodal embedding model.\n\nBy default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.\n\nYou can choose alternative `OpenCLIPEmbeddings` models in `rag_gemini_multi_modal/ingest.py`:\n```python\nvectorstore_mmembd = Chroma(\n    collection_name=\"multi-modal-rag\",\n    persist_directory=str(re_vectorstore_path),\n    embedding_function=OpenCLIPEmbeddings(\n        model_name=\"ViT-H-14\", checkpoint=\"laion2b_s32b_b79k\"\n    ),\n)\n```\n\n## LLM\n\nThe app will retrieve images using multi-modal embeddings, and pass them to Google Gemini.\n\n## Environment Setup\n\nSet your `GOOGLE_API_KEY` environment variable in order to access Gemini.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-gemini-multi-modal\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-gemini-multi-modal\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_gemini_multi_modal import chain as rag_gemini_multi_modal_chain\n\nadd_routes(app, rag_gemini_multi_modal_chain, path=\"/rag-gemini-multi-modal\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-gemini-multi-modal/playground](http://127.0.0.1:8000/rag-gemini-multi-modal/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-gemini-multi-modal\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-google-cloud-sensitive-data-protection\\README.md", + "filetype": ".md", + "content": "# rag-google-cloud-sensitive-data-protection\n\n
This template is an application that utilizes Google Sensitive Data Protection, a service for detecting and redacting\nsensitive data in text, and PaLM 2 for Chat (chat-bison), although you can use any model. The application uses a Retrieval chain to answer questions based on your documents.\n\nFor more context on using Sensitive Data Protection,\ncheck [here](https://cloud.google.com/dlp/docs/sensitive-data-protection-overview).\n\n## Environment Setup\n\nBefore using this template, please ensure that you enable the [DLP API](https://console.cloud.google.com/marketplace/product/google/dlp.googleapis.com)\nand [Vertex AI API](https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com) in your Google Cloud\nproject.\n\nFor some common environment troubleshooting steps related to Google Cloud, see the bottom\nof this readme.\n\nSet the following environment variables:\n\n* `GOOGLE_CLOUD_PROJECT_ID` - Your Google Cloud project ID.\n* `MODEL_TYPE` - The model type for Vertex AI Search (e.g. `chat-bison`)\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-google-cloud-sensitive-data-protection\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-google-cloud-sensitive-data-protection\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom rag_google_cloud_sensitive_data_protection.chain import chain as rag_google_cloud_sensitive_data_protection_chain\n\nadd_routes(app, rag_google_cloud_sensitive_data_protection_chain, path=\"/rag-google-cloud-sensitive-data-protection\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground\nat [http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground](http://127.0.0.1:8000/rag-google-cloud-sensitive-data-protection/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-google-cloud-sensitive-data-protection\")\n```\n\n# Troubleshooting Google Cloud\n\nYou can set your `gcloud` credentials with their CLI using `gcloud auth application-default login`\n\nYou can set your `gcloud` project with the following commands:\n```bash\ngcloud config set project <your-project-id>\ngcloud auth application-default set-quota-project <your-project-id>\nexport GOOGLE_CLOUD_PROJECT_ID=<your-project-id>\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-google-cloud-vertexai-search\\README.md", + "filetype": ".md", + "content": "# rag-google-cloud-vertexai-search\n\nThis template is an application that utilizes Google 
Vertex AI Search, a machine learning powered search service, and\nPaLM 2 for Chat (chat-bison). The application uses a Retrieval chain to answer questions based on your documents.\n\nFor more context on building RAG applications with Vertex AI Search,\ncheck [here](https://cloud.google.com/generative-ai-app-builder/docs/enterprise-search-introduction).\n\n## Environment Setup\n\nBefore using this template, please ensure that you are authenticated with Vertex AI Search. See the authentication\nguide: [here](https://cloud.google.com/generative-ai-app-builder/docs/authentication).\n\nYou will also need to create:\n\n- A search application [here](https://cloud.google.com/generative-ai-app-builder/docs/create-engine-es)\n- A data store [here](https://cloud.google.com/generative-ai-app-builder/docs/create-data-store-es)\n\nA suitable dataset to test this template with is the Alphabet Earnings Reports, which you can\nfind [here](https://abc.xyz/investor/). The data is also available\nat `gs://cloud-samples-data/gen-app-builder/search/alphabet-investor-pdfs`.\n\nSet the following environment variables:\n\n* `GOOGLE_CLOUD_PROJECT_ID` - Your Google Cloud project ID.\n* `DATA_STORE_ID` - The ID of the data store in Vertex AI Search, which is a 36-character alphanumeric value found on\n the data store details page.\n* `MODEL_TYPE` - The model type for Vertex AI Search.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-google-cloud-vertexai-search\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-google-cloud-vertexai-search\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom rag_google_cloud_vertexai_search.chain import chain as rag_google_cloud_vertexai_search_chain\n\nadd_routes(app, rag_google_cloud_vertexai_search_chain, path=\"/rag-google-cloud-vertexai-search\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground\nat [http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground](http://127.0.0.1:8000/rag-google-cloud-vertexai-search/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-google-cloud-vertexai-search\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-gpt-crawler\\README.md", + "filetype": ".md", + "content": "\n# rag-gpt-crawler\n\nGPT-crawler will crawl websites to produce files for use in custom GPTs or other apps (RAG).\n\nThis template uses 
[gpt-crawler](https://github.com/BuilderIO/gpt-crawler) to build a RAG app\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Crawling\n\nRun GPT-crawler to extract content from a set of URLs, using the config file in the GPT-crawler repo.\n\nHere is an example config for the LangChain use-case docs:\n\n```\nexport const config: Config = {\n url: \"https://python.langchain.com/docs/use_cases/\",\n match: \"https://python.langchain.com/docs/use_cases/**\",\n selector: \".docMainContainer_gTbr\",\n maxPagesToCrawl: 10,\n outputFileName: \"output.json\",\n};\n```\n\nThen, run this as described in the [gpt-crawler](https://github.com/BuilderIO/gpt-crawler) README:\n\n```\nnpm start\n```\n\nAnd copy the `output.json` file into the folder containing this README.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-gpt-crawler\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-gpt-crawler\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_gpt_crawler import chain as rag_gpt_crawler\n\nadd_routes(app, rag_gpt_crawler, path=\"/rag-gpt-crawler\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-gpt-crawler/playground](http://127.0.0.1:8000/rag-gpt-crawler/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-gpt-crawler\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-lancedb\\README.md", + "filetype": ".md", + "content": "# rag-lancedb\n\nThis template performs RAG using LanceDB and OpenAI.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-lancedb\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-lancedb\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_lancedb import chain as rag_lancedb_chain\n\nadd_routes(app, rag_lancedb_chain, path=\"/rag-lancedb\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. 
\nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-lancedb/playground](http://127.0.0.1:8000/rag-lancedb/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-lancedb\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-matching-engine\\README.md", + "filetype": ".md", + "content": "\n# rag-matching-engine\n\nThis template performs RAG using Google Cloud Platform's Vertex AI with the matching engine.\n\nIt will utilize a previously created index to retrieve relevant documents or contexts based on user-provided questions. \n\n## Environment Setup\n\nAn index should be created before running the code. \n\nThe process to create this index can be found [here](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/use-cases/document-qa/question_answering_documents_langchain_matching_engine.ipynb).\n\nEnvironment variables for Vertex should be set:\n```\nPROJECT_ID\nME_REGION\nGCS_BUCKET\nME_INDEX_ID\nME_ENDPOINT_ID\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-matching-engine\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-matching-engine\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_matching_engine import chain as rag_matching_engine_chain\n\nadd_routes(app, rag_matching_engine_chain, path=\"/rag-matching-engine\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-matching-engine/playground](http://127.0.0.1:8000/rag-matching-engine/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-matching-engine\")\n```\n\nFor more details on how to connect to the template, refer to the Jupyter notebook `rag_matching_engine`." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-momento-vector-index\\README.md", + "filetype": ".md", + "content": "# rag-momento-vector-index\n\nThis template performs RAG using Momento Vector Index (MVI) and OpenAI.\n\n> MVI: the most productive, easiest to use, serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or be concerned about scaling. MVI is a service that scales automatically to meet your needs. Combine with other Momento services such as Momento Cache to cache prompts and as a session store or Momento Topics as a pub/sub system to broadcast events to your application.\n\nTo sign up and access MVI, visit the [Momento Console](https://console.gomomento.com/).\n\n## Environment Setup\n\nThis template uses Momento Vector Index as a vectorstore and requires that `MOMENTO_API_KEY` and `MOMENTO_INDEX_NAME` are set.\n\nGo to the [console](https://console.gomomento.com/) to get an API key.\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-momento-vector-index\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-momento-vector-index\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom rag_momento_vector_index import chain as rag_momento_vector_index_chain\n\nadd_routes(app, rag_momento_vector_index_chain, path=\"/rag-momento-vector-index\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground 
at [http://127.0.0.1:8000/rag-momento-vector-index/playground](http://127.0.0.1:8000/rag-momento-vector-index/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-momento-vector-index\")\n```\n\n## Indexing Data\n\nWe have included a sample module to index data. That is available at `rag_momento_vector_index/ingest.py`. You will see a commented-out line in `chain.py` that invokes this. Uncomment to use.\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-mongo\\README.md", + "filetype": ".md", + "content": "\n# rag-mongo\n\nThis template performs RAG using MongoDB and OpenAI.\n\n## Environment Setup\n\nYou should export two environment variables, one being your MongoDB URI, the other being your OpenAI API key.\nIf you do not have a MongoDB URI, see the `MongoDB Setup` section at the bottom for instructions on how to do so.\n\n```shell\nexport MONGO_URI=...\nexport OPENAI_API_KEY=...\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-mongo\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-mongo\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_mongo import chain as rag_mongo_chain\n\nadd_routes(app, rag_mongo_chain, path=\"/rag-mongo\")\n```\n\nIf you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:\n```python\nfrom rag_mongo import ingest as rag_mongo_ingest\n\nadd_routes(app, rag_mongo_ingest, path=\"/rag-mongo-ingest\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you DO NOT already have a Mongo Search Index you want to connect to, see the `MongoDB Setup` section below before proceeding.\n\nIf you DO have a MongoDB Search index you want to connect to, edit the connection details in `rag_mongo/chain.py`.\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-mongo/playground](http://127.0.0.1:8000/rag-mongo/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-mongo\")\n```\n\nFor additional context, please refer to [this notebook](https://colab.research.google.com/drive/1cr2HBAHyBmwKUerJq2if0JaNhy-hIq7I#scrollTo=TZp7_CBfxTOB).\n\n\n## MongoDB Setup\n\nUse this step if you need to set up your MongoDB account and ingest data.\nWe will first follow the standard MongoDB Atlas setup instructions [here](https://www.mongodb.com/docs/atlas/getting-started/).\n\n1. Create an account (if not already done)\n2. Create a new project (if not already done)\n3. Locate your MongoDB URI.\n\nThis can be done by going to the deployment overview page and connecting to your database.\n\n![Screenshot highlighting the 'Connect' button in MongoDB Atlas.](_images/connect.png \"MongoDB Atlas Connect Button\")\n\nWe then look at the drivers available.\n\n![Screenshot showing the MongoDB Atlas drivers section for connecting to the database.](_images/driver.png \"MongoDB Atlas Drivers Section\")\n\nAmong these we will see our URI listed.\n\n![Screenshot displaying an example of a MongoDB URI in the connection instructions.](_images/uri.png \"MongoDB URI Example\")\n\nLet's then set that as an environment variable locally:\n\n```shell\nexport MONGO_URI=...\n```\n\n4. Let's also set an environment variable for OpenAI (which we will use as an LLM).\n\n```shell\nexport OPENAI_API_KEY=...\n```\n\n5. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:\n\n```shell\npython ingest.py\n```\n\nNote that you can (and should!) change this to ingest data of your choice.\n\n
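For orientation, here is a rough sketch of the kind of ingestion `ingest.py` performs; the file, database, and collection names below are hypothetical, so consult the actual script:\n\n```python\nimport os\n\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom pymongo import MongoClient\n\n# Hypothetical database/collection names; the template's ingest.py is the source of truth.\ncollection = MongoClient(os.environ[\"MONGO_URI\"])[\"my_db\"][\"my_collection\"]\n\n# Split a local text file into chunks and embed them into Atlas.\nchunks = RecursiveCharacterTextSplitter(chunk_size=1000).split_text(open(\"my_doc.txt\").read())\nMongoDBAtlasVectorSearch.from_texts(\n    chunks,\n    OpenAIEmbeddings(),  # 1536-dimensional embeddings, matching the index in step 6\n    collection=collection,\n    index_name=\"default\",  # must match the search index created in the next step\n)\n```\n\n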
6. We now need to set up a vector index on our data.\n\nWe can first connect to the cluster where our database lives.\n\n![Screenshot of the MongoDB Atlas interface showing the cluster overview with a 'Connect' button.](_images/cluster.png \"MongoDB Atlas Cluster Overview\")\n\nWe can then navigate to where all our collections are listed.\n\n![Screenshot of the MongoDB Atlas interface showing the collections overview within a database.](_images/collections.png \"MongoDB Atlas Collections Overview\")\n\nWe can then find the collection we want and look at the search indexes for that collection.\n\n![Screenshot showing the search indexes section in MongoDB Atlas for a specific collection.](_images/search-indexes.png \"MongoDB Atlas Search Indexes\")\n\nThat will likely be empty, and we want to create a new one:\n\n![Screenshot highlighting the 'Create Index' button in MongoDB Atlas.](_images/create.png \"MongoDB Atlas Create Index Button\")\n\nWe will use the JSON editor to create it.\n\n![Screenshot showing the JSON Editor option for creating a search index in MongoDB Atlas.](_images/json_editor.png \"MongoDB Atlas JSON Editor Option\")\n\nAnd we will paste the following JSON in:\n\n```text\n {\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n }\n }\n }\n```\n![Screenshot of the JSON configuration for a search index in MongoDB Atlas.](_images/json.png \"MongoDB Atlas Search Index JSON Configuration\")\n\nFrom there, hit \"Next\" and then \"Create Search Index\". It will take a little bit but you should then have an index over your data!" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-multi-index-fusion\\README.md", + "filetype": ".md", + "content": "# RAG with Multiple Indexes (Fusion)\n\nA QA application that queries multiple domain-specific retrievers and selects the most relevant documents from across all retrieved results.\n\n## Environment Setup\n\nThis application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai) (for SEC filings).\n\nYou will need to create a free Kay AI account and [get your API key here](https://www.kay.ai).\nThen set the environment variable:\n\n```bash\nexport KAY_API_KEY=\"\"\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-multi-index-fusion\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-multi-index-fusion\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_multi_index_fusion import chain as rag_multi_index_fusion_chain\n\nadd_routes(app, rag_multi_index_fusion_chain, path=\"/rag-multi-index-fusion\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-multi-index-fusion/playground](http://127.0.0.1:8000/rag-multi-index-fusion/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-multi-index-fusion\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-multi-index-router\\README.md", + "filetype": ".md", + "content": "# RAG with Multiple Indexes (Routing)\n\nA QA application that routes between different domain-specific retrievers given a user question.\n\n## Environment Setup\n\nThis application queries PubMed, ArXiv, Wikipedia, and [Kay AI](https://www.kay.ai) (for SEC filings).\n\nYou will need to create a free Kay AI account and [get your API key here](https://www.kay.ai). \nThen set the environment variable:\n\n```bash\nexport KAY_API_KEY=\"\"\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-multi-index-router\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-multi-index-router\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_multi_index_router import chain as rag_multi_index_router_chain\n\nadd_routes(app, rag_multi_index_router_chain, path=\"/rag-multi-index-router\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-multi-index-router/playground](http://127.0.0.1:8000/rag-multi-index-router/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-multi-index-router\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-multi-modal-local\\README.md", + "filetype": ".md", + "content": "\n# rag-multi-modal-local\n\nVisual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language. 
\n \nWith the release of open source multi-modal LLMs, it's possible to build this kind of application for yourself, over your own private photo collection.\n\nThis template demonstrates how to perform private visual search and question-answering over a collection of your photos.\n\nIt uses OpenCLIP embeddings to embed all of the photos and stores them in Chroma.\n \nGiven a question, relevant photos are retrieved and passed to an open source multi-modal LLM of your choice for answer synthesis.\n \n![Diagram illustrating the visual search process with OpenCLIP embeddings and multi-modal LLM for question-answering, featuring example food pictures and a matcha soft serve answer trace.](https://github.com/langchain-ai/langchain/assets/122662504/da543b21-052c-4c43-939e-d4f882a45d75 \"Visual Search Process Diagram\")\n\n## Input\n\nSupply a set of photos in the `/docs` directory. \n\nBy default, this template has a toy collection of 3 food pictures.\n\nExample questions to ask can be:\n```\nWhat kind of soft serve did I have?\n```\n\nIn practice, a larger corpus of images can be tested.\n\nTo create an index of the images, run:\n```\npoetry install\npython ingest.py\n```\n\n## Storage\n\nThis template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.\n\nYou can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).\n\nThe first time you run the app, it will automatically download the multimodal embedding model.\n\nBy default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.\n\nYou can choose alternative `OpenCLIPEmbeddings` models in `rag_multi_modal_local/ingest.py`:\n```\nvectorstore_mmembd = Chroma(\n collection_name=\"multi-modal-rag\",\n persist_directory=str(re_vectorstore_path),\n embedding_function=OpenCLIPEmbeddings(\n model_name=\"ViT-H-14\", checkpoint=\"laion2b_s32b_b79k\"\n ),\n)\n```\n\n## LLM\n\nThis template will use [Ollama](https://python.langchain.com/docs/integrations/chat/ollama#multi-modal).\n\nDownload the latest version of Ollama: https://ollama.ai/\n\nPull an open source multi-modal LLM: e.g., https://ollama.ai/library/bakllava\n\n```\nollama pull bakllava\n```\n\nThe app is by default configured for `bakllava`, but you can change this in `chain.py` and `ingest.py` for different downloaded models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-multi-modal-local\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-multi-modal-local\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_multi_modal_local import chain as rag_multi_modal_local_chain\n\nadd_routes(app, rag_multi_modal_local_chain, path=\"/rag-multi-modal-local\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-multi-modal-local/playground](http://127.0.0.1:8000/rag-multi-modal-local/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-multi-modal-local\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-multi-modal-mv-local\\README.md", + "filetype": ".md", + "content": "\n# rag-multi-modal-mv-local\n\nVisual search is a familiar application to many with iPhones or Android devices. It allows users to search photos using natural language. \n \nWith the release of open source multi-modal LLMs, it's possible to build this kind of application for yourself, over your own private photo collection.\n\nThis template demonstrates how to perform private visual search and question-answering over a collection of your photos.\n\nIt uses an open source multi-modal LLM of your choice to create image summaries for each photo, embeds the summaries, and stores them in Chroma.\n \nGiven a question, relevant photos are retrieved and passed to the multi-modal LLM for answer synthesis.\n\n![Diagram illustrating the visual search process with food pictures, captioning, a database, a question input, and the synthesis of an answer using a multi-modal LLM.](https://github.com/langchain-ai/langchain/assets/122662504/cd9b3d82-9b06-4a39-8490-7482466baf43 \"Visual Search Process Diagram\")\n\n## Input\n\nSupply a set of photos in the `/docs` directory. 
\n\nBy default, this template has a toy collection of 3 food pictures.\n\nThe app will look up and summarize photos based upon provided keywords or questions:\n```\nWhat kind of ice cream did I have?\n```\n\nIn practice, a larger corpus of images can be tested.\n\nTo create an index of the images, run:\n```\npoetry install\npython ingest.py\n```\n\n## Storage\n\nHere is the process the template will use to create an index of the images (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):\n\n* Given a set of images\n* It uses a local multi-modal LLM ([bakllava](https://ollama.ai/library/bakllava)) to summarize each image\n* Embeds the image summaries with a link to the original images\n* Given a user question, it will retrieve relevant image(s) based on similarity between the image summary and user input (using Ollama embeddings)\n* It will pass those images to bakllava for answer synthesis\n\nBy default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.\n\n## LLM and Embedding Models\n\nWe will use [Ollama](https://python.langchain.com/docs/integrations/chat/ollama#multi-modal) for generating image summaries, embeddings, and the final image QA.\n\nDownload the latest version of Ollama: https://ollama.ai/\n\nPull an open source multi-modal LLM: e.g., https://ollama.ai/library/bakllava\n\nPull an open source embedding model: e.g., https://ollama.ai/library/llama2:7b\n\n```\nollama pull bakllava\nollama pull llama2:7b\n```\n\nThe app is by default configured for `bakllava`, but you can change this in `chain.py` and `ingest.py` for different downloaded models.\n\nThe app will retrieve images based on similarity between the text input and the image summary, and pass the images to `bakllava`.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-multi-modal-mv-local\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-multi-modal-mv-local\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_multi_modal_mv_local import chain as rag_multi_modal_mv_local_chain\n\nadd_routes(app, rag_multi_modal_mv_local_chain, path=\"/rag-multi-modal-mv-local\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-multi-modal-mv-local/playground](http://127.0.0.1:8000/rag-multi-modal-mv-local/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-multi-modal-mv-local\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-ollama-multi-query\\README.md", + "filetype": ".md", + "content": "\n# rag-ollama-multi-query\n\nThis template performs RAG using Ollama and OpenAI with a multi-query retriever. \n\nThe multi-query retriever is an example of query transformation, generating multiple queries from different perspectives based on the user's input query. \n\nFor each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.\n\nWe use a private, local LLM for the narrow task of query generation to avoid excessive calls to a larger LLM API.\n\nSee an example trace for Ollama LLM performing the query expansion [here](https://smith.langchain.com/public/8017d04d-2045-4089-b47f-f2d66393a999/r).\n\nBut we use OpenAI for the more challenging task of answer synthesis (full trace example [here](https://smith.langchain.com/public/ec75793b-645b-498d-b855-e8d85e1f6738/r)).\n
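\nAs a rough sketch of the pattern (the toy data and variable names here are illustrative, not the template's actual code):\n\n```python\nfrom langchain.chat_models import ChatOllama\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.retrievers.multi_query import MultiQueryRetriever\nfrom langchain.vectorstores import Chroma\n\n# Toy index purely for illustration.\nvectorstore = Chroma.from_texts(\n    [\"Harrison worked at Kensho.\", \"Bears like to eat honey.\"],\n    embedding=OpenAIEmbeddings(),\n)\n\n# The local Ollama model generates the query variants; each variant is run\n# against the retriever and the unique union of documents is returned.\nretriever = MultiQueryRetriever.from_llm(\n    retriever=vectorstore.as_retriever(),\n    llm=ChatOllama(model=\"zephyr\"),\n)\ndocs = retriever.get_relevant_documents(\"Where did Harrison work?\")\n```\n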
\n## Environment Setup\n\nTo set up the environment, you need to download Ollama. \n\nFollow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama). \n\nYou can choose the desired LLM with Ollama. \n\nThis template uses `zephyr`, which can be accessed using `ollama pull zephyr`.\n\nThere are many other options available [here](https://ollama.ai/library).\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first install the LangChain CLI:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this package, do:\n\n```shell\nlangchain app new my-app --package rag-ollama-multi-query\n```\n\nTo add this package to an existing project, run:\n\n```shell\nlangchain app add rag-ollama-multi-query\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom rag_ollama_multi_query import chain as rag_ollama_multi_query_chain\n\nadd_routes(app, rag_ollama_multi_query_chain, path=\"/rag-ollama-multi-query\")\n```\n\n(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)\n\nYou can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nYou can access the playground at [http://127.0.0.1:8000/rag-ollama-multi-query/playground](http://127.0.0.1:8000/rag-ollama-multi-query/playground)\n\nTo access the template from code, use:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-ollama-multi-query\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-opensearch\\dummy_data.txt", + "filetype": ".txt", + "content": "[INFO] Initializing machine learning training job. Model: Convolutional Neural Network Dataset: MNIST Hyperparameters: ; - Learning Rate: 0.001; - Batch Size: 64\n[INFO] Loading training data. Training data loaded successfully. Number of training samples: 60,000\n[INFO] Loading validation data. Validation data loaded successfully. Number of validation samples: 10,000\n[INFO] Training started. Epoch 1/10; - Loss: 0.532; - Accuracy: 0.812 Epoch 2/10; - Loss: 0.398; - Accuracy: 0.874 Epoch 3/10; - Loss: 0.325; - Accuracy: 0.901 ... (training progress) Training completed.\n[INFO] Validation started. Validation loss: 0.287 Validation accuracy: 0.915 Model performance meets validation criteria. Saving the model.\n[INFO] Testing the trained model. Test loss: 0.298 Test accuracy: 0.910\n[INFO] Deploying the trained model to production. Model deployment successful. API endpoint: http://your-api-endpoint/predict\n[INFO] Monitoring system initialized. Monitoring metrics:; - CPU Usage: 25%; - Memory Usage: 40%; - GPU Usage: 80%\n[ALERT] High GPU Usage Detected! Scaling resources to handle increased load.\n[INFO] Machine learning training job completed successfully. Total training time: 3 hours and 45 minutes.\n[INFO] Cleaning up resources. Job artifacts removed. Training environment closed.\n[INFO] Image processing web server started. Listening on port 8080.\n[INFO] Received image processing request from client at IP address 192.168.1.100. Preprocessing image: resizing to 800x600 pixels. Image preprocessing completed successfully.\n[INFO] Applying filters to enhance image details. Filters applied: sharpening, contrast adjustment. Image enhancement completed.\n[INFO] Generating thumbnail for the processed image. Thumbnail generated successfully.\n[INFO] Uploading processed image to the user's gallery. Image successfully added to the gallery. Image ID: 123456.\n[INFO] Sending notification to the user: Image processing complete. Notification sent successfully.\n[ERROR] Failed to process image due to corrupted file format. Informing the client about the issue. Client notified about the image processing failure.\n[INFO] Image processing web server shutting down. Cleaning up resources. Server shutdown complete."
+ }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-opensearch\\README.md", + "filetype": ".md", + "content": "# rag-opensearch\n\nThis template performs RAG using [OpenSearch](https://python.langchain.com/docs/integrations/vectorstores/opensearch).\n\n## Environment Setup\n\nSet the following environment variables:\n\n- `OPENAI_API_KEY` - To access OpenAI Embeddings and Models.\n\nAnd optionally set the OpenSearch ones if not using defaults:\n\n- `OPENSEARCH_URL` - URL of the hosted OpenSearch Instance\n- `OPENSEARCH_USERNAME` - User name for the OpenSearch instance\n- `OPENSEARCH_PASSWORD` - Password for the OpenSearch instance\n- `OPENSEARCH_INDEX_NAME` - Name of the index \n\nTo run the default OpenSearch instance in Docker, you can use the command:\n```shell\ndocker run -p 9200:9200 -p 9600:9600 -e \"discovery.type=single-node\" --name opensearch-node -d opensearchproject/opensearch:latest\n```\n\nNote: To load a dummy index named `langchain-test` with dummy documents, run `python dummy_index_setup.py` in the package.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-opensearch\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-opensearch\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_opensearch import chain as rag_opensearch_chain\n\nadd_routes(app, rag_opensearch_chain, path=\"/rag-opensearch\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-opensearch/playground](http://127.0.0.1:8000/rag-opensearch/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-opensearch\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-pinecone\\README.md", + "filetype": ".md", + "content": "\n# rag-pinecone\n\nThis template performs RAG using Pinecone and OpenAI.\n\n## Environment Setup\n\nThis template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set. 
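\n\nAs an illustration only (this check is not part of the template), you can fail fast when these variables are missing:\n\n```python\nimport os\n\n# Pre-flight check: raise early instead of failing inside the chain at runtime.\nfor var in (\"PINECONE_API_KEY\", \"PINECONE_ENVIRONMENT\", \"PINECONE_INDEX\"):\n    if not os.environ.get(var):\n        raise EnvironmentError(f\"Missing required environment variable: {var}\")\n```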
\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-pinecone\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-pinecone\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_pinecone import chain as rag_pinecone_chain\n\nadd_routes(app, rag_pinecone_chain, path=\"/rag-pinecone\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-pinecone/playground](http://127.0.0.1:8000/rag-pinecone/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-pinecone\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-pinecone-multi-query\\README.md", + "filetype": ".md", + "content": "\n# rag-pinecone-multi-query\n\nThis template performs RAG using Pinecone and OpenAI with a multi-query retriever. \n\nIt uses an LLM to generate multiple queries from different perspectives based on the user's input query. \n\nFor each query, it retrieves a set of relevant documents and takes the unique union across all queries for answer synthesis.\n\n## Environment Setup\n\nThis template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set. \n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first install the LangChain CLI:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this package, do:\n\n```shell\nlangchain app new my-app --package rag-pinecone-multi-query\n```\n\nTo add this package to an existing project, run:\n\n```shell\nlangchain app add rag-pinecone-multi-query\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom rag_pinecone_multi_query import chain as rag_pinecone_multi_query_chain\n\nadd_routes(app, rag_pinecone_multi_query_chain, path=\"/rag-pinecone-multi-query\")\n```\n\n(Optional) Now, let's configure LangSmith. LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
If you don't have access, you can skip this section\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)\n\nYou can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nYou can access the playground at [http://127.0.0.1:8000/rag-pinecone-multi-query/playground](http://127.0.0.1:8000/rag-pinecone-multi-query/playground)\n\nTo access the template from code, use:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-pinecone-multi-query\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-pinecone-rerank\\README.md", + "filetype": ".md", + "content": "\n# rag-pinecone-rerank\n\nThis template performs RAG using Pinecone and OpenAI along with [Cohere to perform re-ranking](https://txt.cohere.com/rerank/) on returned documents. \n\nRe-ranking provides a way to rank retrieved documents using specified filters or criteria.\n\n## Environment Setup\n\nThis template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set. \n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nSet the `COHERE_API_KEY` environment variable to access the Cohere ReRank.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-pinecone-rerank\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-pinecone-rerank\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_pinecone_rerank import chain as rag_pinecone_rerank_chain\n\nadd_routes(app, rag_pinecone_rerank_chain, path=\"/rag-pinecone-rerank\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). 
\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-pinecone-rerank/playground](http://127.0.0.1:8000/rag-pinecone-rerank/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-pinecone-rerank\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-redis\\README.md", + "filetype": ".md", + "content": "\n# rag-redis\n\nThis template performs RAG using Redis (vector database) and OpenAI (LLM) on financial 10k filings docs for Nike.\n\nIt relies on the sentence transformer `all-MiniLM-L6-v2` for embedding chunks of the PDF and user questions.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the [OpenAI](https://platform.openai.com) models:\n\n```bash\nexport OPENAI_API_KEY=<your-openai-api-key>\n```\n\nSet the following [Redis](https://redis.com/try-free) environment variables:\n\n```bash\nexport REDIS_HOST=<your-redis-host>\nexport REDIS_PORT=<your-redis-port>\nexport REDIS_USER=<your-redis-user>\nexport REDIS_PASSWORD=<your-redis-password>\n```\n\n## Supported Settings\n\nWe use a variety of environment variables to configure this application:\n\n| Environment Variable | Description | Default Value |\n|----------------------|-----------------------------------|---------------|\n| `DEBUG` | Enable or disable Langchain debugging logs | True |\n| `REDIS_HOST` | Hostname for the Redis server | \"localhost\" |\n| `REDIS_PORT` | Port for the Redis server | 6379 |\n| `REDIS_USER` | User for the Redis server | \"\" |\n| `REDIS_PASSWORD` | Password for the Redis server | \"\" |\n| `REDIS_URL` | Full URL for connecting to Redis | `None`, constructed from user, password, host, and port if not provided |\n| `INDEX_NAME` | Name of the vector index | \"rag-redis\" |\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI and Pydantic installed in a Python virtual environment:\n\n```shell\npip install -U langchain-cli pydantic==1.10.13\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-redis\n```\n\nIf you want to add this to an existing project, you can just run:\n```shell\nlangchain app add rag-redis\n```\n\nAnd add the following code snippet to your `app/server.py` file:\n```python\nfrom rag_redis.chain import chain as rag_redis_chain\n\nadd_routes(app, rag_redis_chain, path=\"/rag-redis\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will 
start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-redis/playground](http://127.0.0.1:8000/rag-redis/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-redis\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-self-query\\README.md", + "filetype": ".md", + "content": "# rag-self-query\n\nThis template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the [docs for more on how this works](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query).\n
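\nFor intuition, here is a minimal, self-contained sketch of the technique (toy data with an in-memory Chroma store; the template itself wires this up against Elasticsearch, and the query parser requires the `lark` package):\n\n```python\nfrom langchain.chains.query_constructor.base import AttributeInfo\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.retrievers.self_query.base import SelfQueryRetriever\nfrom langchain.schema import Document\nfrom langchain.vectorstores import Chroma\n\ndocs = [\n    Document(page_content=\"A movie about dinosaurs\", metadata={\"year\": 1993}),\n    Document(page_content=\"A movie about a robot\", metadata={\"year\": 2008}),\n]\nvectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())\n\nretriever = SelfQueryRetriever.from_llm(\n    ChatOpenAI(temperature=0),\n    vectorstore,\n    \"Brief summary of a movie\",\n    [AttributeInfo(name=\"year\", description=\"The year the movie was released\", type=\"integer\")],\n)\n\n# The LLM rewrites this into a semantic query plus a structured `year < 2000` filter.\nretriever.get_relevant_documents(\"dinosaur movies released before 2000\")\n```\n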
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-self-query\\README.md", + "filetype": ".md", + "content": "# rag-self-query\n\nThis template performs RAG using the self-query retrieval technique. The main idea is to let an LLM convert unstructured queries into structured queries. See the [docs for more on how this works](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query).\n\n## Environment Setup\n\nIn this template we'll use OpenAI models and an Elasticsearch vector store, but the approach generalizes to all LLMs/ChatModels and [a number of vector stores](https://python.langchain.com/docs/integrations/retrievers/self_query/).\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nTo connect to your Elasticsearch instance, use the following environment variables:\n\n```bash\nexport ELASTIC_CLOUD_ID=\nexport ELASTIC_USERNAME=\nexport ELASTIC_PASSWORD=\n```\n\nFor local development with Docker, use:\n\n```bash\nexport ES_URL=\"http://localhost:9200\"\ndocker run -p 9200:9200 -e \"discovery.type=single-node\" -e \"xpack.security.enabled=false\" -e \"xpack.security.http.ssl.enabled=false\" docker.elastic.co/elasticsearch/elasticsearch:8.9.0\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-self-query\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-self-query\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_self_query import chain\n\nadd_routes(app, chain, path=\"/rag-elasticsearch\")\n```\n\nTo populate the vector store with the sample data, from the root of the directory run:\n```bash\npython ingest.py\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-elasticsearch\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-semi-structured\\README.md", + "filetype": ".md", + "content": "# rag-semi-structured\n\nThis template performs RAG on semi-structured data, such as a PDF with text and tables.\n\nSee [this cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb) as a reference.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nThis uses [Unstructured](https://unstructured-io.github.io/unstructured/) for PDF parsing, which requires some system-level package installations.\n\nOn Mac, you can install the necessary packages with the following:\n\n```shell\nbrew install tesseract poppler\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-semi-structured\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-semi-structured\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_semi_structured import chain as rag_semi_structured_chain\n\nadd_routes(app, rag_semi_structured_chain, path=\"/rag-semi-structured\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-semi-structured/playground](http://127.0.0.1:8000/rag-semi-structured/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-semi-structured\")\n```\n\nFor more details on how to connect to the template, refer to the Jupyter notebook `rag_semi_structured`.
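\n\nIf you want tokens as they arrive instead of a single response, `RemoteRunnable` also exposes the streaming interface. This is a minimal sketch — the plain-string input and string chunks are assumptions, so check the playground for the exact schema:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-semi-structured\")\n\n# Stream the answer chunk by chunk (hypothetical question about the parsed PDF)\nfor chunk in runnable.stream(\"Summarize the tables in the document\"):\n    print(chunk, end=\"\", flush=True)\n```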
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-singlestoredb\\README.md", + "filetype": ".md", + "content": "\n# rag-singlestoredb\n\nThis template performs RAG using SingleStoreDB and OpenAI.\n\n## Environment Setup\n\nThis template uses SingleStoreDB as a vectorstore and requires that `SINGLESTOREDB_URL` is set. It should take the form `admin:password@svc-xxx.svc.singlestore.com:port/db_name`.\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-singlestoredb\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-singlestoredb\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_singlestoredb import chain as rag_singlestoredb_chain\n\nadd_routes(app, rag_singlestoredb_chain, path=\"/rag-singlestoredb\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-singlestoredb/playground](http://127.0.0.1:8000/rag-singlestoredb/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-singlestoredb\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-supabase\\README.md", + "filetype": ".md", + "content": "\n# rag_supabase\n\nThis template performs RAG with Supabase.\n\n[Supabase](https://supabase.com/docs) is an open-source Firebase alternative. It is built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL), a free and open-source relational database management system (RDBMS), and uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nTo get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.\n\nTo find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).\n\n- `SUPABASE_URL` corresponds to the Project URL\n- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key\n\n```shell\nexport SUPABASE_URL=\nexport SUPABASE_SERVICE_KEY=\nexport OPENAI_API_KEY=\n```\n\n## Setup Supabase Database\n\nUse these steps to set up your Supabase database if you haven't already.\n\n1. 
Head over to https://database.new to provision your Supabase database.\n2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and set up your database as a vector store:\n\n ```sql\n -- Enable the pgvector extension to work with embedding vectors\n create extension if not exists vector;\n\n -- Create a table to store your documents\n create table\n documents (\n id uuid primary key,\n content text, -- corresponds to Document.pageContent\n metadata jsonb, -- corresponds to Document.metadata\n embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed\n );\n\n -- Create a function to search for documents\n create function match_documents (\n query_embedding vector (1536),\n filter jsonb default '{}'\n ) returns table (\n id uuid,\n content text,\n metadata jsonb,\n similarity float\n ) language plpgsql as $$\n #variable_conflict use_column\n begin\n return query\n select\n id,\n content,\n metadata,\n 1 - (documents.embedding <=> query_embedding) as similarity\n from documents\n where metadata @> filter\n order by documents.embedding <=> query_embedding;\n end;\n $$;\n ```\n\n## Setup Environment Variables\n\nSince we are using [`SupabaseVectorStore`](https://python.langchain.com/docs/integrations/vectorstores/supabase) and [`OpenAIEmbeddings`](https://python.langchain.com/docs/integrations/text_embedding/openai), we need to load their API keys.
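\n\nTo index documents into the `documents` table created above, a minimal sketch might look like the following (the sample document and client setup are illustrative, not part of the template):\n\n```python\nimport os\n\nfrom langchain_community.embeddings import OpenAIEmbeddings\nfrom langchain_community.vectorstores import SupabaseVectorStore\nfrom langchain_core.documents import Document\nfrom supabase.client import create_client\n\nsupabase = create_client(os.environ[\"SUPABASE_URL\"], os.environ[\"SUPABASE_SERVICE_KEY\"])\n\n# Illustrative document; replace with your own data\ndocs = [Document(page_content=\"LangChain templates make it easy to ship RAG apps.\")]\n\n# Embeds the documents and inserts them into the `documents` table;\n# the match_documents function is used later at query time\nSupabaseVectorStore.from_documents(\n    docs,\n    OpenAIEmbeddings(),\n    client=supabase,\n    table_name=\"documents\",\n    query_name=\"match_documents\",\n)\n```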
\n\n## Usage\n\nFirst, install the LangChain CLI:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-supabase\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-supabase\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom rag_supabase.chain import chain as rag_supabase_chain\n\nadd_routes(app, rag_supabase_chain, path=\"/rag-supabase\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-supabase/playground](http://127.0.0.1:8000/rag-supabase/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-supabase\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-timescale-conversation\\README.md", + "filetype": ".md", + "content": "\n# rag-timescale-conversation\n\nThis template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use cases.\n\nIt passes both a conversation history and retrieved documents into an LLM for synthesis.\n\n## Environment Setup\n\nThis template uses Timescale Vector as a vectorstore and requires that `TIMESCALES_SERVICE_URL` is set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.\n\nTo load the sample dataset, set `LOAD_SAMPLE_DATA=1`. 
To load your own dataset, see the section below.\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-timescale-conversation\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-timescale-conversation\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_timescale_conversation import chain as rag_timescale_conversation_chain\n\nadd_routes(app, rag_timescale_conversation_chain, path=\"/rag-timescale-conversation\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-timescale-conversation/playground](http://127.0.0.1:8000/rag-timescale-conversation/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-timescale-conversation\")\n```\n\nSee the `rag_conversation.ipynb` notebook for example usage.\n\n## Loading your own dataset\n\nTo load your own dataset you will have to create a `load_dataset` function. You can see an example in the `load_ts_git_dataset` function defined in the `load_sample_dataset.py` file. You can then run this as a standalone function (e.g. in a bash script, as sketched below) or add it to `chain.py` (but then you should run it just once).
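\n\nA minimal sketch of such a standalone loader — the input file, metadata layout, and collection name here are all hypothetical, and it assumes the `TimescaleVector` integration from `langchain_community`:\n\n```python\nimport json\nimport os\n\nfrom langchain_community.embeddings import OpenAIEmbeddings\nfrom langchain_community.vectorstores.timescalevector import TimescaleVector\nfrom langchain_core.documents import Document\n\n\ndef load_dataset():\n    # Hypothetical input file; adapt to however your records are stored\n    with open(\"my_docs.json\") as f:\n        records = json.load(f)\n    docs = [Document(page_content=r[\"text\"], metadata=r.get(\"metadata\", {})) for r in records]\n    TimescaleVector.from_documents(\n        embedding=OpenAIEmbeddings(),\n        documents=docs,\n        collection_name=\"my_collection\",  # hypothetical collection name\n        service_url=os.environ[\"TIMESCALES_SERVICE_URL\"],\n    )\n\n\nif __name__ == \"__main__\":\n    load_dataset()\n```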
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-timescale-hybrid-search-time\\README.md", + "filetype": ".md", + "content": "# RAG with Timescale Vector using hybrid search\n\nThis template shows how to use timescale-vector with the self-query retriever to perform hybrid search on similarity and time.\nThis is useful any time your data has a strong time-based component. Some examples of such data are:\n- News articles (politics, business, etc.)\n- Blog posts, documentation or other published material (public or private)\n- Social media posts\n- Changelogs of any kind\n- Messages\n\nSuch items are often searched by both similarity and time. For example: \"Show me all news about Toyota trucks from 2022.\"\n\n[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) provides superior performance when searching for embeddings within a particular timeframe by leveraging automatic table partitioning to isolate data for particular time-ranges.\n\nLangChain's self-query retriever allows deducing time-ranges (as well as other search criteria) from the text of user queries.\n\n## What is Timescale Vector?\n\n**[Timescale Vector](https://www.timescale.com/ai?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) is PostgreSQL++ for AI applications.**\n\nTimescale Vector enables you to efficiently store and query billions of vector embeddings in `PostgreSQL`.\n- Enhances `pgvector` with faster and more accurate similarity search on 1B+ vectors via a DiskANN-inspired indexing algorithm.\n- Enables fast time-based vector search via automatic time-based partitioning and indexing.\n- Provides a familiar SQL interface for querying vector embeddings and relational data.\n\nTimescale Vector is cloud PostgreSQL for AI that scales with you from POC to production:\n- Simplifies operations by enabling you to store relational metadata, vector embeddings, and time-series data in a single database.\n- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade features like streaming backups and replication, high availability, and row-level security.\n- Enables a worry-free experience with enterprise-grade security and compliance.\n\n### How to access Timescale Vector\n\nTimescale Vector is available on [Timescale](https://www.timescale.com/products?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral), the cloud PostgreSQL platform. (There is no self-hosted version at this time.)\n\n- LangChain users get a 90-day free trial for Timescale Vector.\n- To get started, [sign up](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) for Timescale, create a new database, and follow this notebook!\n- See the [installation instructions](https://github.com/timescale/python-vector) for more details on using Timescale Vector in Python.\n\n## Environment Setup\n\nThis template uses Timescale Vector as a vectorstore and requires that `TIMESCALES_SERVICE_URL` is set. Sign up for a 90-day trial [here](https://console.cloud.timescale.com/signup?utm_campaign=vectorlaunch&utm_source=langchain&utm_medium=referral) if you don't yet have an account.\n\nTo load the sample dataset, set `LOAD_SAMPLE_DATA=1`. 
To load your own dataset, see the section below.\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-timescale-hybrid-search-time\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-timescale-hybrid-search-time\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_timescale_hybrid_search.chain import chain as rag_timescale_hybrid_search_chain\n\nadd_routes(app, rag_timescale_hybrid_search_chain, path=\"/rag-timescale-hybrid-search\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-timescale-hybrid-search/playground](http://127.0.0.1:8000/rag-timescale-hybrid-search/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-timescale-hybrid-search\")\n```\n\n## Loading your own dataset\n\nTo load your own dataset you will have to modify the code in the `DATASET SPECIFIC CODE` section of `chain.py`.\nThis code defines the name of the collection, how to load the data, and the human-language description of both the\ncontents of the collection and all of the metadata. The human-language descriptions are used by the self-query retriever\nto help the LLM convert the question into filters on the metadata when searching the data in Timescale Vector." + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-vectara\\README.md", + "filetype": ".md", + "content": "\n# rag-vectara\n\nThis template performs RAG with Vectara.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nAlso, ensure the following environment variables are set:\n* `VECTARA_CUSTOMER_ID`\n* `VECTARA_CORPUS_ID`\n* `VECTARA_API_KEY`\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-vectara\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-vectara\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_vectara import chain as rag_vectara_chain\n\nadd_routes(app, rag_vectara_chain, path=\"/rag-vectara\")\n```\n\n(Optional) Let's now configure LangSmith.
\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"vectara-demo\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-vectara/playground](http://127.0.0.1:8000/rag-vectara/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-vectara\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-vectara-multiquery\\README.md", + "filetype": ".md", + "content": "\n# rag-vectara-multiquery\n\nThis template performs multiquery RAG with Vectara.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nAlso, ensure the following environment variables are set:\n* `VECTARA_CUSTOMER_ID`\n* `VECTARA_CORPUS_ID`\n* `VECTARA_API_KEY`\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-vectara-multiquery\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-vectara-multiquery\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_vectara_multiquery import chain as rag_vectara_multiquery_chain\n\nadd_routes(app, rag_vectara_multiquery_chain, path=\"/rag-vectara-multiquery\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"vectara-demo\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-vectara-multiquery/playground](http://127.0.0.1:8000/rag-vectara-multiquery/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-vectara-multiquery\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rag-weaviate\\README.md", + "filetype": ".md", + "content": "\n# rag-weaviate\n\nThis template performs RAG with Weaviate.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nAlso, ensure the following environment variables are set:\n* `WEAVIATE_ENVIRONMENT`\n* `WEAVIATE_API_KEY`\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rag-weaviate\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rag-weaviate\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rag_weaviate import chain as rag_weaviate_chain\n\nadd_routes(app, rag_weaviate_chain, path=\"/rag-weaviate\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rag-weaviate/playground](http://127.0.0.1:8000/rag-weaviate/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rag-weaviate\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\research-assistant\\README.md", + "filetype": ".md", + "content": "# research-assistant\n\nThis template implements a version of\n[GPT Researcher](https://github.com/assafelovic/gpt-researcher) that you can use\nas a starting point for a research agent.\n\n## Environment Setup\n\nThe default template relies on ChatOpenAI and DuckDuckGo, so you will need the\nfollowing environment variable:\n\n- `OPENAI_API_KEY`\n\nAnd to use the Tavily LLM-optimized search engine, you will need:\n\n- `TAVILY_API_KEY`\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package research-assistant\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add research-assistant\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom research_assistant import chain as research_assistant_chain\n\nadd_routes(app, research_assistant_chain, path=\"/research-assistant\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/research-assistant/playground](http://127.0.0.1:8000/research-assistant/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/research-assistant\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\retrieval-agent\\README.md", + "filetype": ".md", + "content": "# retrieval-agent\n\nThis package uses Azure OpenAI to do retrieval using an agent architecture.\nBy default, this does retrieval over Arxiv.\n\n## Environment Setup\n\nSince we are using Azure OpenAI, we will need to set the following environment variables:\n\n```shell\nexport AZURE_OPENAI_ENDPOINT=...\nexport AZURE_OPENAI_API_VERSION=...\nexport AZURE_OPENAI_API_KEY=...\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package retrieval-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add retrieval-agent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom retrieval_agent import chain as retrieval_agent_chain\n\nadd_routes(app, retrieval_agent_chain, path=\"/retrieval-agent\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/retrieval-agent/playground](http://127.0.0.1:8000/retrieval-agent/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/retrieval-agent\")\n```
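\n\nThe server also exposes plain REST endpoints, so you can call the template without a Python client. A sketch, assuming the agent follows the usual `{\"input\": ...}` schema — check [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) for the exact one:\n\n```python\nimport requests\n\n# LangServe wraps the chain input under a top-level \"input\" key on /invoke;\n# the inner \"input\" key is an assumption about this agent's own schema\nresp = requests.post(\n    \"http://localhost:8000/retrieval-agent/invoke\",\n    json={\"input\": {\"input\": \"Summarize recent arXiv papers on RLHF\"}},\n)\nprint(resp.json())\n```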
" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\retrieval-agent-fireworks\\README.md", + "filetype": ".md", + "content": "# retrieval-agent-fireworks\n\nThis package uses open source models hosted on FireworksAI to do retrieval using an agent architecture. By default, this does retrieval over Arxiv.\n\nWe will use `Mixtral8x7b-instruct-v0.1`, which [this blog post](https://huggingface.co/blog/open-source-llms-as-agents) shows can yield reasonable results with function calling even though it is not fine-tuned for this task.\n\n## Environment Setup\n\nThere are various great ways to run OSS models. We will use FireworksAI as an easy way to run the models. See [here](https://python.langchain.com/docs/integrations/providers/fireworks) for more information.\n\nSet the `FIREWORKS_API_KEY` environment variable to access Fireworks.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package retrieval-agent-fireworks\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add retrieval-agent-fireworks\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom retrieval_agent_fireworks import chain as retrieval_agent_fireworks_chain\n\nadd_routes(app, retrieval_agent_fireworks_chain, path=\"/retrieval-agent-fireworks\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/retrieval-agent-fireworks/playground](http://127.0.0.1:8000/retrieval-agent-fireworks/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/retrieval-agent-fireworks\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\rewrite-retrieve-read\\README.md", + "filetype": ".md", + "content": "\n# rewrite_retrieve_read\n\nThis template implements a method for query transformation (re-writing) from the paper [Query Rewriting for Retrieval-Augmented Large Language Models](https://arxiv.org/pdf/2305.14283.pdf) to optimize for RAG.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package rewrite_retrieve_read\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add rewrite_retrieve_read\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom rewrite_retrieve_read.chain import chain as rewrite_retrieve_read_chain\n\nadd_routes(app, rewrite_retrieve_read_chain, path=\"/rewrite-retrieve-read\")\n```\n\n(Optional) Let's now configure LangSmith.
\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/rewrite-retrieve-read/playground](http://127.0.0.1:8000/rewrite-retrieve-read/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/rewrite-retrieve-read\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\robocorp-action-server\\README.md", + "filetype": ".md", + "content": "# LangChain - Robocorp Action Server\n\nThis template enables using [Robocorp Action Server](https://github.com/robocorp/robocorp) served actions as tools for an Agent.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package robocorp-action-server\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add robocorp-action-server\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom robocorp_action_server import agent_executor as action_server_chain\n\nadd_routes(app, action_server_chain, path=\"/robocorp-action-server\")\n```\n\n### Running the Action Server\n\nTo run the Action Server, you need to have the Robocorp Action Server installed:\n\n```bash\npip install -U robocorp-action-server\n```\n\nThen you can run the Action Server with:\n\n```bash\naction-server new\ncd ./your-project-name\naction-server start\n```\n\n### Configure LangSmith (Optional)\n\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\n### Start LangServe instance\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/robocorp-action-server/playground](http://127.0.0.1:8000/robocorp-action-server/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/robocorp-action-server\")\n```
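\n\nThe exact input schema is defined by the agent in the package; a minimal sketch, assuming the common `input`/`output` convention for agent executors (the sample request is hypothetical):\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/robocorp-action-server\")\n\n# Assumes the agent executor follows the usual {\"input\": ...} -> {\"output\": ...} convention\nresult = runnable.invoke({\"input\": \"Run the example action from my new project\"})\nprint(result)\n```\n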
"filetype": ".md", + "content": "\n# self-query-qdrant\n\nThis template performs [self-querying](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/) \nusing Qdrant and OpenAI. By default, it uses an artificial dataset of 10 documents, but you can replace it with your own dataset.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nSet the `QDRANT_URL` to the URL of your Qdrant instance. If you use [Qdrant Cloud](https://cloud.qdrant.io)\nyou have to set the `QDRANT_API_KEY` environment variable as well. If you do not set any of them,\nthe template will try to connect a local Qdrant instance at `http://localhost:6333`.\n\n```shell\nexport QDRANT_URL=\nexport QDRANT_API_KEY=\n\nexport OPENAI_API_KEY=\n```\n\n## Usage\n\nTo use this package, install the LangChain CLI first:\n\n```shell\npip install -U \"langchain-cli[serve]\"\n```\n\nCreate a new LangChain project and install this package as the only one:\n\n```shell\nlangchain app new my-app --package self-query-qdrant\n```\n\nTo add this to an existing project, run:\n\n```shell\nlangchain app add self-query-qdrant\n```\n\n### Defaults\n\nBefore you launch the server, you need to create a Qdrant collection and index the documents.\nIt can be done by running the following command:\n\n```python\nfrom self_query_qdrant.chain import initialize\n\ninitialize()\n```\n\nAdd the following code to your `app/server.py` file:\n\n```python\nfrom self_query_qdrant.chain import chain\n\nadd_routes(app, chain, path=\"/self-query-qdrant\")\n```\n\nThe default dataset consists 10 documents about dishes, along with their price and restaurant information.\nYou can find the documents in the `packages/self-query-qdrant/self_query_qdrant/defaults.py` file.\nHere is one of the documents:\n\n```python\nfrom langchain_core.documents import Document\n\nDocument(\n page_content=\"Spaghetti with meatballs and tomato sauce\",\n metadata={\n \"price\": 12.99,\n \"restaurant\": {\n \"name\": \"Olive Garden\",\n \"location\": [\"New York\", \"Chicago\", \"Los Angeles\"],\n },\n },\n)\n```\n\nThe self-querying allows performing semantic search over the documents, with some additional filtering\nbased on the metadata. 
\n\n### Customization\n\nAll the examples above assume that you want to launch the template with just the defaults.\nIf you want to customize the template, you can do it by passing the parameters to the `create_chain` function\nin the `app/server.py` file:\n\n```python\nfrom langchain_community.llms import Cohere\nfrom langchain_community.embeddings import HuggingFaceEmbeddings\nfrom langchain.chains.query_constructor.schema import AttributeInfo\n\nfrom self_query_qdrant.chain import create_chain\n\nchain = create_chain(\n llm=Cohere(),\n embeddings=HuggingFaceEmbeddings(),\n document_contents=\"Descriptions of cats, along with their names and breeds.\",\n metadata_field_info=[\n AttributeInfo(name=\"name\", description=\"Name of the cat\", type=\"string\"),\n AttributeInfo(name=\"breed\", description=\"Cat's breed\", type=\"string\"),\n ],\n collection_name=\"cats\",\n)\n```\n\nThe same goes for the `initialize` function that creates a Qdrant collection and indexes the documents:\n\n```python\nfrom langchain_core.documents import Document\nfrom langchain_community.embeddings import HuggingFaceEmbeddings\n\nfrom self_query_qdrant.chain import initialize\n\ninitialize(\n embeddings=HuggingFaceEmbeddings(),\n collection_name=\"cats\",\n documents=[\n Document(\n page_content=\"A mean lazy old cat who destroys furniture and eats lasagna\",\n metadata={\"name\": \"Garfield\", \"breed\": \"Tabby\"},\n ),\n ...\n ]\n)\n```\n\nThe template is flexible and can easily be used with different sets of documents.\n\n### LangSmith\n\n(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications. If you don't have access, skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\n### Local Server\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nYou can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nAccess the playground at [http://127.0.0.1:8000/self-query-qdrant/playground](http://127.0.0.1:8000/self-query-qdrant/playground)\n\nAccess the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/self-query-qdrant\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\self-query-supabase\\README.md", + "filetype": ".md", + "content": "\n# self-query-supabase\n\nThis template allows natural-language structured querying of Supabase.\n\n[Supabase](https://supabase.com/docs) is an open-source alternative to Firebase, built on top of [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL).\n\nIt uses [pgvector](https://github.com/pgvector/pgvector) to store embeddings within your tables.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nTo get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.\n\nTo find your `SUPABASE_URL` and `SUPABASE_SERVICE_KEY`, head to your Supabase project's [API settings](https://supabase.com/dashboard/project/_/settings/api).
\n\n- `SUPABASE_URL` corresponds to the Project URL\n- `SUPABASE_SERVICE_KEY` corresponds to the `service_role` API key\n\n```shell\nexport SUPABASE_URL=\nexport SUPABASE_SERVICE_KEY=\nexport OPENAI_API_KEY=\n```\n\n## Setup Supabase Database\n\nUse these steps to set up your Supabase database if you haven't already.\n\n1. Head over to https://database.new to provision your Supabase database.\n2. In the studio, jump to the [SQL editor](https://supabase.com/dashboard/project/_/sql/new) and run the following script to enable `pgvector` and set up your database as a vector store:\n\n ```sql\n -- Enable the pgvector extension to work with embedding vectors\n create extension if not exists vector;\n\n -- Create a table to store your documents\n create table\n documents (\n id uuid primary key,\n content text, -- corresponds to Document.pageContent\n metadata jsonb, -- corresponds to Document.metadata\n embedding vector (1536) -- 1536 works for OpenAI embeddings, change as needed\n );\n\n -- Create a function to search for documents\n create function match_documents (\n query_embedding vector (1536),\n filter jsonb default '{}'\n ) returns table (\n id uuid,\n content text,\n metadata jsonb,\n similarity float\n ) language plpgsql as $$\n #variable_conflict use_column\n begin\n return query\n select\n id,\n content,\n metadata,\n 1 - (documents.embedding <=> query_embedding) as similarity\n from documents\n where metadata @> filter\n order by documents.embedding <=> query_embedding;\n end;\n $$;\n ```\n\n## Usage\n\nTo use this package, install the LangChain CLI first:\n\n```shell\npip install -U langchain-cli\n```\n\nCreate a new LangChain project and install this package as the only one:\n\n```shell\nlangchain app new my-app --package self-query-supabase\n```\n\nTo add this to an existing project, run:\n\n```shell\nlangchain app add self-query-supabase\n```\n\nAdd the following code to your `server.py` file:\n```python\nfrom self_query_supabase.chain import chain as self_query_supabase_chain\n\nadd_routes(app, self_query_supabase_chain, path=\"/self-query-supabase\")\n```\n\n(Optional) If you have access to LangSmith, configure it to help trace, monitor and debug LangChain applications.
If you don't have access, skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nYou can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nAccess the playground at [http://127.0.0.1:8000/self-query-supabase/playground](http://127.0.0.1:8000/self-query-supabase/playground)\n\nAccess the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/self-query-supabase\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\shopping-assistant\\README.md", + "filetype": ".md", + "content": "# shopping-assistant\n\nThis template creates a shopping assistant that helps users find products that they are looking for.\n\nThis template will use `Ionic` to search for products.\n\n## Environment Setup\n\nThis template will use `OpenAI` by default.\nBe sure that `OPENAI_API_KEY` is set in your environment.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package shopping-assistant\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add shopping-assistant\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom shopping_assistant.agent import agent_executor as shopping_assistant_chain\n\nadd_routes(app, shopping_assistant_chain, path=\"/shopping-assistant\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/shopping-assistant/playground](http://127.0.0.1:8000/shopping-assistant/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/shopping-assistant\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\skeleton-of-thought\\README.md", + "filetype": ".md", + "content": "# skeleton-of-thought\n\nImplements \"Skeleton of Thought\" from [this paper](https://sites.google.com/view/sot-llm).\n\nThis technique makes it possible to generate longer outputs more quickly by first generating a skeleton and then expanding each point of the outline.\n\n## 
Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\nTo get your `OPENAI_API_KEY`, navigate to [API keys](https://platform.openai.com/account/api-keys) on your OpenAI account and create a new secret key.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package skeleton-of-thought\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add skeleton-of-thought\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom skeleton_of_thought import chain as skeleton_of_thought_chain\n\nadd_routes(app, skeleton_of_thought_chain, path=\"/skeleton-of-thought\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/skeleton-of-thought/playground](http://127.0.0.1:8000/skeleton-of-thought/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/skeleton-of-thought\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\solo-performance-prompting-agent\\README.md", + "filetype": ".md", + "content": "# solo-performance-prompting-agent\n\nThis template creates an agent that transforms a single LLM into a cognitive synergist by engaging in multi-turn self-collaboration with multiple personas.\nA cognitive synergist refers to an intelligent agent that collaborates with multiple minds, combining their individual strengths and knowledge, to enhance problem-solving and overall performance in complex tasks. By dynamically identifying and simulating different personas based on task inputs, solo performance prompting (SPP) unleashes the potential of cognitive synergy in LLMs.\n\nThis template will use the `DuckDuckGo` search API.\n\n## Environment Setup\n\nThis template will use `OpenAI` by default. 
\nBe sure that `OPENAI_API_KEY` is set in your environment.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package solo-performance-prompting-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add solo-performance-prompting-agent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom solo_performance_prompting_agent.agent import agent_executor as solo_performance_prompting_agent_chain\n\nadd_routes(app, solo_performance_prompting_agent_chain, path=\"/solo-performance-prompting-agent\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/solo-performance-prompting-agent/playground](http://127.0.0.1:8000/solo-performance-prompting-agent/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/solo-performance-prompting-agent\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\sql-llama2\\README.md", + "filetype": ".md", + "content": "\n# sql-llama2\n\nThis template enables a user to interact with a SQL database using natural language.\n\nIt uses LLaMA2-13b hosted by [Replicate](https://python.langchain.com/docs/integrations/llms/replicate), but can be adapted to any API that supports LLaMA2, including [Fireworks](https://python.langchain.com/docs/integrations/chat/fireworks).\n\nThe template includes an example database of 2023 NBA rosters.\n\nFor more information on how to build this database, see [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).\n\n## Environment Setup\n\nEnsure the `REPLICATE_API_TOKEN` is set in your environment.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package sql-llama2\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add sql-llama2\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom sql_llama2 import chain as sql_llama2_chain\n\nadd_routes(app, sql_llama2_chain, path=\"/sql-llama2\")\n```\n\n(Optional) Let's now configure LangSmith.\nLangSmith will help us trace, monitor and debug LangChain applications.\nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
\nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at\n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/sql-llama2/playground](http://127.0.0.1:8000/sql-llama2/playground)\n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/sql-llama2\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\sql-llamacpp\\README.md", + "filetype": ".md", + "content": "\n# sql-llamacpp\n\nThis template enables a user to interact with a SQL database using natural language.\n\nIt uses [Mistral-7b](https://mistral.ai/news/announcing-mistral-7b/) via [llama.cpp](https://github.com/ggerganov/llama.cpp) to run inference locally on a Mac laptop.\n\n## Environment Setup\n\nTo set up the environment, use the following steps:\n\n```shell\nwget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh\nbash Miniforge3-MacOSX-arm64.sh\nconda create -n llama python=3.9.16\nconda activate llama\nCMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir\n```\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package sql-llamacpp\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add sql-llamacpp\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom sql_llamacpp import chain as sql_llamacpp_chain\n\nadd_routes(app, sql_llamacpp_chain, path=\"/sql-llamacpp\")\n```\n\nThe package will download the Mistral-7b model from [here](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF). You can select other files and specify their download path (browse [here](https://huggingface.co/TheBloke)).\n\nThis package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).\n\n(Optional) Configure LangSmith for tracing, monitoring and debugging LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package sql-llamacpp\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add sql-llamacpp\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom sql_llamacpp import chain as sql_llamacpp_chain\n\nadd_routes(app, sql_llamacpp_chain, path=\"/sql-llamacpp\")\n```\n\nThe package will download the Mistral-7b model from [here](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF). You can select other files and specify their download path (browse [here](https://huggingface.co/TheBloke)).\n\nThis package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).\n\n(Optional) Configure LangSmith for tracing, monitoring and debugging LangChain applications. LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). If you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nYou can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nYou can access the playground at [http://127.0.0.1:8000/sql-llamacpp/playground](http://127.0.0.1:8000/sql-llamacpp/playground) \n\nYou can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/sql-llamacpp\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\sql-ollama\\README.md", + "filetype": ".md", + "content": "# sql-ollama\n\nThis template enables a user to interact with a SQL database using natural language. \n\nIt uses [Zephyr-7b](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) via [Ollama](https://ollama.ai/library/zephyr) to run inference locally on a Mac laptop.\n\n## Environment Setup\n\nBefore using this template, you need to set up Ollama and a SQL database.\n\n1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.\n\n2. Download your LLM of interest (a quick smoke test is sketched after this list):\n\n * This package uses `zephyr`: `ollama pull zephyr`\n * You can choose from many LLMs [here](https://ollama.ai/library)\n\n3. This package includes an example DB of 2023 NBA rosters. You can see instructions to build this DB [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb).
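\n\nTo confirm that Ollama is serving the model before wiring up the template (a minimal sketch, assuming Ollama is running locally on its default port with `zephyr` pulled):\n\n```python\nfrom langchain.chat_models import ChatOllama\n\n# talks to the local Ollama server (http://localhost:11434 by default)\nllm = ChatOllama(model=\"zephyr\")\nprint(llm.predict(\"Say hello in one short sentence.\"))\n```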
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package sql-ollama\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add sql-ollama\n```\n\nAnd add the following code to your `server.py` file:\n\n```python\nfrom sql_ollama import chain as sql_ollama_chain\n\nadd_routes(app, sql_ollama_chain, path=\"/sql-ollama\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/sql-ollama/playground](http://127.0.0.1:8000/sql-ollama/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/sql-ollama\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\sql-pgvector\\README.md", + "filetype": ".md", + "content": "# sql-pgvector\n\nThis template enables a user to use `pgvector` to combine PostgreSQL with semantic search / RAG. \n\nIt uses the [PGVector](https://github.com/pgvector/pgvector) extension as shown in the [RAG empowered SQL cookbook](https://github.com/langchain-ai/langchain/blob/master/cookbook/retrieval_in_sql.ipynb).\n\n## Environment Setup\n\nIf you are using `ChatOpenAI` as your LLM, make sure the `OPENAI_API_KEY` is set in your environment. You can change both the LLM and embeddings model inside `chain.py`.\n\nYou can also configure the following environment variables\nfor use by the template (defaults are in parentheses):\n\n- `POSTGRES_USER` (postgres)\n- `POSTGRES_PASSWORD` (test)\n- `POSTGRES_DB` (vectordb)\n- `POSTGRES_HOST` (localhost)\n- `POSTGRES_PORT` (5432)\n\nIf you don't have a Postgres instance, you can run one locally in docker:\n\n```bash\ndocker run \\\n --name some-postgres \\\n -e POSTGRES_PASSWORD=test \\\n -e POSTGRES_USER=postgres \\\n -e POSTGRES_DB=vectordb \\\n -p 5432:5432 \\\n postgres:16\n```\n\nAnd to start again later, use the `--name` defined above:\n```bash\ndocker start some-postgres\n```\n\n### PostgreSQL Database setup\n\nApart from having the `pgvector` extension enabled, you will need to do some setup before being able to run semantic search within your SQL queries.\n\nIn order to run RAG over your PostgreSQL database you will need to generate the embeddings for the specific columns you want. \n\nThis process is covered in the [RAG empowered SQL cookbook](cookbook/retrieval_in_sql.ipynb), but the overall approach consists of the following steps, sketched below:\n1. Querying for unique values in the column\n2. Generating embeddings for those values\n3. Storing the embeddings in a separate column or in an auxiliary table
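\n\nConcretely, the three steps might look like this (a minimal sketch, assuming a hypothetical `tracks` table with a `name` column, an `embedding vector(1536)` column added via `pgvector`, and `psycopg2` installed):\n\n```python\nimport psycopg2\nfrom langchain.embeddings import OpenAIEmbeddings\n\n# connection values match the defaults listed above\nconn = psycopg2.connect(dbname=\"vectordb\", user=\"postgres\", password=\"test\", host=\"localhost\", port=5432)\nembeddings = OpenAIEmbeddings()\n\nwith conn, conn.cursor() as cur:\n    # 1. query for unique values in the column\n    cur.execute(\"SELECT DISTINCT name FROM tracks\")\n    for (name,) in cur.fetchall():\n        # 2. generate an embedding for each value\n        vector = embeddings.embed_query(name)\n        # 3. store it next to the value (pgvector accepts a bracketed text literal)\n        cur.execute(\"UPDATE tracks SET embedding = %s::vector WHERE name = %s\", (str(vector), name))\n```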
\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package sql-pgvector\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add sql-pgvector\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom sql_pgvector import chain as sql_pgvector_chain\n\nadd_routes(app, sql_pgvector_chain, path=\"/sql-pgvector\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/sql-pgvector/playground](http://127.0.0.1:8000/sql-pgvector/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/sql-pgvector\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\sql-research-assistant\\README.md", + "filetype": ".md", + "content": "# sql-research-assistant\n\nThis package does research over a SQL database.\n\n## Usage\n\nThis package relies on multiple models, which have the following dependencies:\n\n- OpenAI: set the `OPENAI_API_KEY` environment variable\n- Ollama: [install and run Ollama](https://python.langchain.com/docs/integrations/chat/ollama)\n- llama2 (on Ollama): `ollama pull llama2` (otherwise you will get 404 errors from Ollama)\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package sql-research-assistant\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add sql-research-assistant\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom sql_research_assistant import chain as sql_research_assistant_chain\n\nadd_routes(app, sql_research_assistant_chain, path=\"/sql-research-assistant\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/sql-research-assistant/playground](http://127.0.0.1:8000/sql-research-assistant/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/sql-research-assistant\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\stepback-qa-prompting\\README.md", + "filetype": ".md", + "content": "# stepback-qa-prompting\n\nThis template replicates the \"Step-Back\" prompting technique, which improves performance on complex questions by first asking a \"step back\" question. \n\nThis technique can be combined with regular question-answering applications by doing retrieval on both the original and the step-back question, as sketched below.
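\n\nThe two-question retrieval looks roughly like this (a minimal sketch of the idea, not the template's actual implementation; `retriever` stands in for whatever retriever your application uses):\n\n```python\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.prompts import ChatPromptTemplate\nfrom langchain.schema.output_parser import StrOutputParser\n\n# chain that rewrites a question into its more generic step-back form\nstep_back = (\n    ChatPromptTemplate.from_template(\"Rephrase this as a more generic, step-back question: {question}\")\n    | ChatOpenAI()\n    | StrOutputParser()\n)\n\nquestion = \"When did the inventor of the telephone die?\"\n# retrieve on both the original and the step-back question\n# (retriever is assumed to be defined elsewhere: any LangChain retriever works)\ndocs = retriever.get_relevant_documents(question)\ndocs += retriever.get_relevant_documents(step_back.invoke({\"question\": question}))\n```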
\n\nRead more about the technique in the paper [here](https://arxiv.org/abs/2310.06117) and in an excellent blog post by Cobus Greyling [here](https://cobusgreyling.medium.com/a-new-prompt-engineering-technique-has-been-introduced-called-step-back-prompting-b00e8954cacb).\n\nIn this template, we modify the prompts slightly to work better with chat models.\n\n## Environment Setup\n\nSet the `OPENAI_API_KEY` environment variable to access the OpenAI models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package stepback-qa-prompting\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add stepback-qa-prompting\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom stepback_qa_prompting.chain import chain as stepback_qa_prompting_chain\n\nadd_routes(app, stepback_qa_prompting_chain, path=\"/stepback-qa-prompting\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/stepback-qa-prompting/playground](http://127.0.0.1:8000/stepback-qa-prompting/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/stepback-qa-prompting\")\n```" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\summarize-anthropic\\README.md", + "filetype": ".md", + "content": "\n# summarize-anthropic\n\nThis template uses Anthropic's `Claude2` to summarize long documents. \n\nIt leverages a large context window of 100k tokens, allowing for summarization of documents over 100 pages. \n\nYou can see the summarization prompt in `chain.py`.
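\n\nTo get a feel for what the large context window enables, here is a standalone sketch of summarizing a long document with Claude2 directly (illustrative only; the template's own prompt lives in `chain.py`, and `report.txt` is a placeholder for your document):\n\n```python\nfrom langchain.chat_models import ChatAnthropic\n\nllm = ChatAnthropic(model=\"claude-2\", max_tokens_to_sample=1024)\n\n# Claude2's ~100k-token window lets you pass a very long document in one prompt\nlong_text = open(\"report.txt\").read()\nprint(llm.predict(\"Summarize the following document: \" + long_text))\n```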
\n\n## Environment Setup\n\nSet the `ANTHROPIC_API_KEY` environment variable to access the Anthropic models.\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package summarize-anthropic\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add summarize-anthropic\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom summarize_anthropic import chain as summarize_anthropic_chain\n\nadd_routes(app, summarize_anthropic_chain, path=\"/summarize-anthropic\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section.\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/summarize-anthropic/playground](http://127.0.0.1:8000/summarize-anthropic/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/summarize-anthropic\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\vertexai-chuck-norris\\README.md", + "filetype": ".md", + "content": "\n# vertexai-chuck-norris\n\nThis template makes jokes about Chuck Norris using Vertex AI PaLM2. \n\n## Environment Setup\n\nFirst, make sure you have a Google Cloud project with\nan active billing account, and have the [gcloud CLI installed](https://cloud.google.com/sdk/docs/install).\n\nConfigure [application default credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc):\n\n```shell\ngcloud auth application-default login\n```\n\nTo set a default Google Cloud project to use, run this command and set [the project ID](https://support.google.com/googleapi/answer/7014113?hl=en) of the project you want to use:\n```shell\ngcloud config set project [PROJECT-ID]\n```\n\nEnable the [Vertex AI API](https://console.cloud.google.com/apis/library/aiplatform.googleapis.com) for the project:\n```shell\ngcloud services enable aiplatform.googleapis.com\n```
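\n\nYou can verify that the credentials and API are set up correctly before installing the template (a minimal sketch; `text-bison` is the PaLM2 text model used by the Vertex AI integration):\n\n```python\nfrom langchain.llms import VertexAI\n\n# uses the application default credentials configured above\nllm = VertexAI(model_name=\"text-bison\")\nprint(llm.predict(\"Tell me a short Chuck Norris joke.\"))\n```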
\nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server is running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/vertexai-chuck-norris/playground](http://127.0.0.1:8000/vertexai-chuck-norris/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/vertexai-chuck-norris\")\n```\n" + }, + { + "filename": "C:\\Users\\wesla\\CodePilotAI\\repositories\\langchain\\templates\\xml-agent\\README.md", + "filetype": ".md", + "content": "\n# xml-agent\n\nThis package creates an agent that uses XML syntax to communicate its decisions of what actions to take. It uses Anthropic's Claude models for writing XML syntax and can optionally look up things on the internet using DuckDuckGo.\n\n## Environment Setup\n\nTwo environment variables need to be set:\n\n- `ANTHROPIC_API_KEY`: Required for using Anthropic\n\n## Usage\n\nTo use this package, you should first have the LangChain CLI installed:\n\n```shell\npip install -U langchain-cli\n```\n\nTo create a new LangChain project and install this as the only package, you can do:\n\n```shell\nlangchain app new my-app --package xml-agent\n```\n\nIf you want to add this to an existing project, you can just run:\n\n```shell\nlangchain app add xml-agent\n```\n\nAnd add the following code to your `server.py` file:\n```python\nfrom xml_agent import agent_executor as xml_agent_chain\n\nadd_routes(app, xml_agent_chain, path=\"/xml-agent\")\n```\n\n(Optional) Let's now configure LangSmith. \nLangSmith will help us trace, monitor and debug LangChain applications. \nLangSmith is currently in private beta, you can sign up [here](https://smith.langchain.com/). \nIf you don't have access, you can skip this section\n\n\n```shell\nexport LANGCHAIN_TRACING_V2=true\nexport LANGCHAIN_API_KEY=\nexport LANGCHAIN_PROJECT= # if not specified, defaults to \"default\"\n```\n\nIf you are inside this directory, then you can spin up a LangServe instance directly by:\n\n```shell\nlangchain serve\n```\n\nThis will start the FastAPI app with a server is running locally at \n[http://localhost:8000](http://localhost:8000)\n\nWe can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)\nWe can access the playground at [http://127.0.0.1:8000/xml-agent/playground](http://127.0.0.1:8000/xml-agent/playground) \n\nWe can access the template from code with:\n\n```python\nfrom langserve.client import RemoteRunnable\n\nrunnable = RemoteRunnable(\"http://localhost:8000/xml-agent\")\n```\n" + } +] \ No newline at end of file