add langchain docs
This view is limited to 50 files because it contains too many changes.
- docs/langchain/.devcontainer/README.md +44 -0
- docs/langchain/.devcontainer/devcontainer.json +36 -0
- docs/langchain/.devcontainer/docker-compose.yaml +32 -0
- docs/langchain/.gitattributes +3 -0
- docs/langchain/.github/CODE_OF_CONDUCT.md +132 -0
- docs/langchain/.github/CONTRIBUTING.md +6 -0
- docs/langchain/.github/DISCUSSION_TEMPLATE/ideas.yml +38 -0
- docs/langchain/.github/DISCUSSION_TEMPLATE/q-a.yml +122 -0
- docs/langchain/.github/ISSUE_TEMPLATE/bug-report.yml +120 -0
- docs/langchain/.github/ISSUE_TEMPLATE/config.yml +15 -0
- docs/langchain/.github/ISSUE_TEMPLATE/documentation.yml +58 -0
- docs/langchain/.github/ISSUE_TEMPLATE/privileged.yml +25 -0
- docs/langchain/.github/PULL_REQUEST_TEMPLATE.md +29 -0
- docs/langchain/.github/actions/people/Dockerfile +7 -0
- docs/langchain/.github/actions/people/action.yml +11 -0
- docs/langchain/.github/actions/people/app/main.py +643 -0
- docs/langchain/.github/actions/poetry_setup/action.yml +93 -0
- docs/langchain/.github/scripts/check_diff.py +94 -0
- docs/langchain/.github/scripts/get_min_versions.py +79 -0
- docs/langchain/.github/tools/git-restore-mtime +606 -0
- docs/langchain/.github/workflows/.codespell-exclude +7 -0
- docs/langchain/.github/workflows/_compile_integration_test.yml +57 -0
- docs/langchain/.github/workflows/_dependencies.yml +117 -0
- docs/langchain/.github/workflows/_integration_test.yml +95 -0
- docs/langchain/.github/workflows/_lint.yml +128 -0
- docs/langchain/.github/workflows/_release.yml +371 -0
- docs/langchain/.github/workflows/_release_docker.yml +62 -0
- docs/langchain/.github/workflows/_test.yml +70 -0
- docs/langchain/.github/workflows/_test_doc_imports.yml +50 -0
- docs/langchain/.github/workflows/_test_release.yml +100 -0
- docs/langchain/.github/workflows/check-broken-links.yml +24 -0
- docs/langchain/.github/workflows/check_diffs.yml +160 -0
- docs/langchain/.github/workflows/codespell.yml +37 -0
- docs/langchain/.github/workflows/extract_ignored_words_list.py +10 -0
- docs/langchain/.github/workflows/langchain_release_docker.yml +14 -0
- docs/langchain/.github/workflows/people.yml +36 -0
- docs/langchain/.github/workflows/scheduled_test.yml +139 -0
- docs/langchain/.gitignore +182 -0
- docs/langchain/.readthedocs.yaml +29 -0
- docs/langchain/CITATION.cff +8 -0
- docs/langchain/LICENSE +21 -0
- docs/langchain/MIGRATE.md +70 -0
- docs/langchain/Makefile +75 -0
- docs/langchain/README.md +137 -0
- docs/langchain/SECURITY.md +61 -0
- docs/langchain/cookbook/Gemma_LangChain.ipynb +932 -0
- docs/langchain/cookbook/LLaMA2_sql_chat.ipynb +398 -0
- docs/langchain/cookbook/Multi_modal_RAG.ipynb +0 -0
- docs/langchain/cookbook/Multi_modal_RAG_google.ipynb +0 -0
- docs/langchain/cookbook/RAPTOR.ipynb +0 -0
docs/langchain/.devcontainer/README.md
ADDED
@@ -0,0 +1,44 @@
# Dev container

This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.

You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).

## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)

You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.

For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)

Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>

```
Then you will have a local cloned repo where you can contribute and then create pull requests.

If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:

1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).

2. Open a locally cloned copy of the code:

   - Fork and clone this repository to your local filesystem.
   - Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
   - Select the cloned copy of this folder, wait for the container to start, and try things out!

You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).

## Tips and tricks

* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
docs/langchain/.devcontainer/devcontainer.json
ADDED
@@ -0,0 +1,36 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
    // Name for the dev container
    "name": "langchain",

    // Point to a Docker Compose file
    "dockerComposeFile": "./docker-compose.yaml",

    // Required when using Docker Compose. The name of the service to connect to once running
    "service": "langchain",

    // The optional 'workspaceFolder' property is the path VS Code should open by default when
    // connected. This is typically a file mount in .devcontainer/docker-compose.yml
    "workspaceFolder": "/workspaces/langchain",

    // Prevent the container from shutting down
    "overrideCommand": true

    // Features to add to the dev container. More info: https://containers.dev/features
    // "features": {
    //   "ghcr.io/devcontainers-contrib/features/poetry:2": {}
    // }

    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    // "forwardPorts": [],

    // Uncomment the next line to run commands after the container is created.
    // "postCreateCommand": "cat /etc/os-release",

    // Configure tool-specific properties.
    // "customizations": {},

    // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
    // "remoteUser": "root"
}
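Aside: the file above is JSONC (JSON with `//` comments), so Python's standard `json` module won't parse it directly. Here is a minimal sketch, not part of this diff, of reading it from a script by stripping the comments first; the file path is hypothetical, and the regex is an assumption that only handles whole-line `//` comments (which is all this particular file uses).

```python
# Sketch: parse a JSONC devcontainer file by removing full-line // comments.
import json
import re
from pathlib import Path

path = Path("docs/langchain/.devcontainer/devcontainer.json")  # hypothetical location
text = path.read_text()
# Blank out lines that are only a // comment; json.loads tolerates the leftover whitespace.
uncommented = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
config = json.loads(uncommented)

print(config["name"])             # -> langchain
print(config["service"])          # -> langchain
print(config["workspaceFolder"])  # -> /workspaces/langchain
```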
docs/langchain/.devcontainer/docker-compose.yaml
ADDED
@@ -0,0 +1,32 @@
version: '3'
services:
  langchain:
    build:
      dockerfile: libs/langchain/dev.Dockerfile
      context: ..
    volumes:
      # Update this to wherever you want VS Code to mount the folder of your project
      - ..:/workspaces/langchain:cached
    networks:
      - langchain-network
    # environment:
    #   MONGO_ROOT_USERNAME: root
    #   MONGO_ROOT_PASSWORD: example123
    # depends_on:
    #   - mongo
  # mongo:
  #   image: mongo
  #   restart: unless-stopped
  #   environment:
  #     MONGO_INITDB_ROOT_USERNAME: root
  #     MONGO_INITDB_ROOT_PASSWORD: example123
  #   ports:
  #     - "27017:27017"
  #   networks:
  #     - langchain-network

networks:
  langchain-network:
    driver: bridge
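Aside: if it helps to see the compose structure programmatically, here is a minimal sketch, not part of this diff, that loads the file above and prints each service's build settings and mounts. It assumes PyYAML is installed and that the script runs from the repository root, so the path below is an assumption about your checkout layout.

```python
# Sketch: inspect the dev-container compose file with PyYAML.
from pathlib import Path

import yaml

compose_path = Path("docs/langchain/.devcontainer/docker-compose.yaml")  # hypothetical location
config = yaml.safe_load(compose_path.read_text())

for name, service in config["services"].items():
    # 'build' and 'volumes' are optional Compose keys, so use .get() defaults.
    build = service.get("build", {})
    print(f"service: {name}")
    print(f"  dockerfile: {build.get('dockerfile')}")
    print(f"  context:    {build.get('context')}")
    for volume in service.get("volumes", []):
        print(f"  mount:      {volume}")
```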
docs/langchain/.gitattributes
ADDED
@@ -0,0 +1,3 @@
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf
docs/langchain/.github/CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,132 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@langchain.dev.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
docs/langchain/.github/CONTRIBUTING.md
ADDED
@@ -0,0 +1,6 @@
# Contributing to LangChain

Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
docs/langchain/.github/DISCUSSION_TEMPLATE/ideas.yml
ADDED
@@ -0,0 +1,38 @@
labels: [idea]
body:
  - type: checkboxes
    id: checks
    attributes:
      label: Checked
      description: Please confirm and check all the following options.
      options:
        - label: I searched existing ideas and did not find a similar one
          required: true
        - label: I added a very descriptive title
          required: true
        - label: I've clearly described the feature request and motivation for it
          required: true
  - type: textarea
    id: feature-request
    validations:
      required: true
    attributes:
      label: Feature request
      description: |
        A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
  - type: textarea
    id: motivation
    validations:
      required: true
    attributes:
      label: Motivation
      description: |
        Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
  - type: textarea
    id: proposal
    validations:
      required: false
    attributes:
      label: Proposal (If applicable)
      description: |
        If you would like to propose a solution, please describe it here.
docs/langchain/.github/DISCUSSION_TEMPLATE/q-a.yml
ADDED
@@ -0,0 +1,122 @@
labels: [Question]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in LangChain 🦜️🔗!

        Please follow these instructions, fill every question, and do every step. 🙏

        We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
        this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.

        By asking questions in a structured way (following this) it will be much easier for us to help you.

        There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎

        As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.

        That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓

        Relevant links to check before opening a question to see if your question has already been answered, fixed or
        if there's another way to solve your problem:

        [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        [API Reference](https://api.python.langchain.com/en/stable/),
        [GitHub search](https://github.com/langchain-ai/langchain),
        [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
        [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        [LangChain ChatBot](https://chat.langchain.com/)
  - type: checkboxes
    id: checks
    attributes:
      label: Checked other resources
      description: Please confirm and check all the following options.
      options:
        - label: I added a very descriptive title to this question.
          required: true
        - label: I searched the LangChain documentation with the integrated search.
          required: true
        - label: I used the GitHub search to find a similar question and didn't find it.
          required: true
  - type: checkboxes
    id: help
    attributes:
      label: Commit to Help
      description: |
        After submitting this, I commit to one of:

        * Read open questions until I find 2 where I can help someone and add a comment to help there.
        * I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
        * Once my question is answered, I will mark the answer as "accepted".
      options:
        - label: I commit to help with one of those options 👆
          required: true
  - type: textarea
    id: example
    attributes:
      label: Example Code
      description: |
        Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.

        If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.

        **Important!**

        * Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
        * INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
        * Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
        * Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.

      placeholder: |
        from langchain_core.runnables import RunnableLambda

        def bad_code(inputs) -> int:
            raise NotImplementedError('For demo purpose')

        chain = RunnableLambda(bad_code)
        chain.invoke('Hello!')
      render: python
    validations:
      required: true
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        What is the problem, question, or error?

        Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
      placeholder: |
        * I'm trying to use the `langchain` library to do X.
        * I expect to see Y.
        * Instead, it does Z.
    validations:
      required: true
  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: |
        Please share your system info with us.

        "pip freeze | grep langchain"
        platform (windows / linux / mac)
        python version

        OR if you're on a recent version of langchain-core you can paste the output of:

        python -m langchain_core.sys_info
      placeholder: |
        "pip freeze | grep langchain"
        platform
        python version

        Alternatively, if you're on a recent version of langchain-core you can paste the output of:

        python -m langchain_core.sys_info

        These will only surface LangChain packages, don't forget to include any other relevant
        packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
    validations:
      required: true
docs/langchain/.github/ISSUE_TEMPLATE/bug-report.yml
ADDED
@@ -0,0 +1,120 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
body:
  - type: markdown
    attributes:
      value: >
        Thank you for taking the time to file a bug report.

        Use this to report bugs in LangChain.

        If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
        to ask for help with your issue.

        Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
        if there's another way to solve your problem:

        [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        [API Reference](https://api.python.langchain.com/en/stable/),
        [GitHub search](https://github.com/langchain-ai/langchain),
        [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
        [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        [LangChain ChatBot](https://chat.langchain.com/)
  - type: checkboxes
    id: checks
    attributes:
      label: Checked other resources
      description: Please confirm and check all the following options.
      options:
        - label: I added a very descriptive title to this issue.
          required: true
        - label: I searched the LangChain documentation with the integrated search.
          required: true
        - label: I used the GitHub search to find a similar question and didn't find it.
          required: true
        - label: I am sure that this is a bug in LangChain rather than my code.
          required: true
        - label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
          required: true
  - type: textarea
    id: reproduction
    validations:
      required: true
    attributes:
      label: Example Code
      description: |
        Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.

        If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.

        **Important!**

        * Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
        * INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
        * Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
        * Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.

      placeholder: |
        The following code:

        ```python
        from langchain_core.runnables import RunnableLambda

        def bad_code(inputs) -> int:
            raise NotImplementedError('For demo purpose')

        chain = RunnableLambda(bad_code)
        chain.invoke('Hello!')
        ```
  - type: textarea
    id: error
    validations:
      required: false
    attributes:
      label: Error Message and Stack Trace (if applicable)
      description: |
        If you are reporting an error, please include the full error message and stack trace.
      placeholder: |
        Exception + full stack trace
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        What is the problem, question, or error?

        Write a short description telling what you are doing, what you expect to happen, and what is currently happening.
      placeholder: |
        * I'm trying to use the `langchain` library to do X.
        * I expect to see Y.
        * Instead, it does Z.
    validations:
      required: true
  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: |
        Please share your system info with us.

        "pip freeze | grep langchain"
        platform (windows / linux / mac)
        python version

        OR if you're on a recent version of langchain-core you can paste the output of:

        python -m langchain_core.sys_info
      placeholder: |
        "pip freeze | grep langchain"
        platform
        python version

        Alternatively, if you're on a recent version of langchain-core you can paste the output of:

        python -m langchain_core.sys_info

        These will only surface LangChain packages, don't forget to include any other relevant
        packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
    validations:
      required: true
docs/langchain/.github/ISSUE_TEMPLATE/config.yml
ADDED
@@ -0,0 +1,15 @@
blank_issues_enabled: false
version: 2.1
contact_links:
  - name: 🤔 Question or Problem
    about: Ask a question or ask about a problem in GitHub Discussions.
    url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
  - name: Discord
    url: https://discord.gg/6adMQxSpJS
    about: General community discussions
  - name: Feature Request
    url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
    about: Suggest a feature or an idea
  - name: Show and tell
    about: Show what you built with LangChain
    url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell
docs/langchain/.github/ISSUE_TEMPLATE/documentation.yml
ADDED
@@ -0,0 +1,58 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]

body:
  - type: markdown
    attributes:
      value: >
        Thank you for taking the time to report an issue in the documentation.

        Only report issues with documentation here, explain if there are
        any missing topics or if you found a mistake in the documentation.

        Do **NOT** use this to ask usage questions or to report issues with your code.

        If you have usage questions or need help solving some problem,
        please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).

        If you're in the wrong place, here are some helpful links to find a better
        place to ask your question:

        [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
        [API Reference](https://api.python.langchain.com/en/stable/),
        [GitHub search](https://github.com/langchain-ai/langchain),
        [LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
        [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
        [LangChain ChatBot](https://chat.langchain.com/)
  - type: input
    id: url
    attributes:
      label: URL
      description: URL to documentation
    validations:
      required: false
  - type: checkboxes
    id: checks
    attributes:
      label: Checklist
      description: Please confirm and check all the following options.
      options:
        - label: I added a very descriptive title to this issue.
          required: true
        - label: I included a link to the documentation page I am referring to (if applicable).
          required: true
  - type: textarea
    attributes:
      label: "Issue with current documentation:"
      description: >
        Please make sure to leave a reference to the document/code you're
        referring to. Feel free to include names of classes, functions, methods
        or concepts you'd like to see documented more.
  - type: textarea
    attributes:
      label: "Idea or request for content:"
      description: >
        Please describe as clearly as possible what topics you think are missing
        from the current documentation.
docs/langchain/.github/ISSUE_TEMPLATE/privileged.yml
ADDED
@@ -0,0 +1,25 @@
name: 🔒 Privileged
description: You are a LangChain maintainer, or were asked directly by a maintainer to create an issue here. If not, check the other options.
body:
  - type: markdown
    attributes:
      value: |
        Thanks for your interest in LangChain! 🚀

        If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.

        You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
        or are a regular contributor to LangChain with previous merged pull requests.
  - type: checkboxes
    id: privileged
    attributes:
      label: Privileged issue
      description: Confirm that you are allowed to create an issue here.
      options:
        - label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
          required: true
  - type: textarea
    id: content
    attributes:
      label: Issue Content
      description: Add the content of the issue here.
docs/langchain/.github/PULL_REQUEST_TEMPLATE.md
ADDED
@@ -0,0 +1,29 @@
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
  - Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace with
  - **Description:** a description of the change
  - **Issue:** the issue # it fixes, if applicable
  - **Dependencies:** any dependencies required for this change
  - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please include
  1. a test for the integration, preferably unit tests that do not rely on network access,
  2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.

If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
docs/langchain/.github/actions/people/Dockerfile
ADDED
@@ -0,0 +1,7 @@
FROM python:3.9

RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.1,<6.0.0"

COPY ./app /app

CMD ["python", "/app/main.py"]
docs/langchain/.github/actions/people/action.yml
ADDED
@@ -0,0 +1,11 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
  token:
    description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
docs/langchain/.github/actions/people/app/main.py
ADDED
@@ -0,0 +1,643 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/app/main.py

import logging
import subprocess
import sys
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Container, Dict, List, Set, Union

import httpx
import yaml
from github import Github
from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings

github_graphql_url = "https://api.github.com/graphql"
questions_category_id = "DIC_kwDOIPDwls4CS6Ve"

# discussions_query = """
# query Q($after: String, $category_id: ID) {
#   repository(name: "langchain", owner: "langchain-ai") {
#     discussions(first: 100, after: $after, categoryId: $category_id) {
#       edges {
#         cursor
#         node {
#           number
#           author {
#             login
#             avatarUrl
#             url
#           }
#           title
#           createdAt
#           comments(first: 100) {
#             nodes {
#               createdAt
#               author {
#                 login
#                 avatarUrl
#                 url
#               }
#               isAnswer
#               replies(first: 10) {
#                 nodes {
#                   createdAt
#                   author {
#                     login
#                     avatarUrl
#                     url
#                   }
#                 }
#               }
#             }
#           }
#         }
#       }
#     }
#   }
# }
# """

# issues_query = """
# query Q($after: String) {
#   repository(name: "langchain", owner: "langchain-ai") {
#     issues(first: 100, after: $after) {
#       edges {
#         cursor
#         node {
#           number
#           author {
#             login
#             avatarUrl
#             url
#           }
#           title
#           createdAt
#           state
#           comments(first: 100) {
#             nodes {
#               createdAt
#               author {
#                 login
#                 avatarUrl
#                 url
#               }
#             }
#           }
#         }
#       }
#     }
#   }
# }
# """

prs_query = """
query Q($after: String) {
  repository(name: "langchain", owner: "langchain-ai") {
    pullRequests(first: 100, after: $after, states: MERGED) {
      edges {
        cursor
        node {
          changedFiles
          additions
          deletions
          number
          labels(first: 100) {
            nodes {
              name
            }
          }
          author {
            login
            avatarUrl
            url
            ... on User {
              twitterUsername
            }
          }
          title
          createdAt
          state
          reviews(first:100) {
            nodes {
              author {
                login
                avatarUrl
                url
                ... on User {
                  twitterUsername
                }
              }
              state
            }
          }
        }
      }
    }
  }
}
"""


class Author(BaseModel):
    login: str
    avatarUrl: str
    url: str
    twitterUsername: Union[str, None] = None


# Issues and Discussions


class CommentsNode(BaseModel):
    createdAt: datetime
    author: Union[Author, None] = None


class Replies(BaseModel):
    nodes: List[CommentsNode]


class DiscussionsCommentsNode(CommentsNode):
    replies: Replies


class Comments(BaseModel):
    nodes: List[CommentsNode]


class DiscussionsComments(BaseModel):
    nodes: List[DiscussionsCommentsNode]


class IssuesNode(BaseModel):
    number: int
    author: Union[Author, None] = None
    title: str
    createdAt: datetime
    state: str
    comments: Comments


class DiscussionsNode(BaseModel):
    number: int
    author: Union[Author, None] = None
    title: str
    createdAt: datetime
    comments: DiscussionsComments


class IssuesEdge(BaseModel):
    cursor: str
    node: IssuesNode


class DiscussionsEdge(BaseModel):
    cursor: str
    node: DiscussionsNode


class Issues(BaseModel):
    edges: List[IssuesEdge]


class Discussions(BaseModel):
    edges: List[DiscussionsEdge]


class IssuesRepository(BaseModel):
    issues: Issues


class DiscussionsRepository(BaseModel):
    discussions: Discussions


class IssuesResponseData(BaseModel):
    repository: IssuesRepository


class DiscussionsResponseData(BaseModel):
    repository: DiscussionsRepository


class IssuesResponse(BaseModel):
    data: IssuesResponseData


class DiscussionsResponse(BaseModel):
    data: DiscussionsResponseData


# PRs


class LabelNode(BaseModel):
    name: str


class Labels(BaseModel):
    nodes: List[LabelNode]


class ReviewNode(BaseModel):
    author: Union[Author, None] = None
    state: str


class Reviews(BaseModel):
    nodes: List[ReviewNode]


class PullRequestNode(BaseModel):
    number: int
    labels: Labels
    author: Union[Author, None] = None
    changedFiles: int
    additions: int
    deletions: int
    title: str
    createdAt: datetime
    state: str
    reviews: Reviews
    # comments: Comments


class PullRequestEdge(BaseModel):
    cursor: str
    node: PullRequestNode


class PullRequests(BaseModel):
    edges: List[PullRequestEdge]


class PRsRepository(BaseModel):
    pullRequests: PullRequests


class PRsResponseData(BaseModel):
    repository: PRsRepository


class PRsResponse(BaseModel):
    data: PRsResponseData


class Settings(BaseSettings):
    input_token: SecretStr
    github_repository: str
    httpx_timeout: int = 30


def get_graphql_response(
    *,
    settings: Settings,
    query: str,
    after: Union[str, None] = None,
    category_id: Union[str, None] = None,
) -> Dict[str, Any]:
    headers = {"Authorization": f"token {settings.input_token.get_secret_value()}"}
    # category_id is only used by one query, but GraphQL allows unused variables, so
    # keep it here for simplicity
    variables = {"after": after, "category_id": category_id}
    response = httpx.post(
        github_graphql_url,
        headers=headers,
        timeout=settings.httpx_timeout,
        json={"query": query, "variables": variables, "operationName": "Q"},
    )
    if response.status_code != 200:
        logging.error(
            f"Response was not 200, after: {after}, category_id: {category_id}"
        )
        logging.error(response.text)
        raise RuntimeError(response.text)
    data = response.json()
    if "errors" in data:
        logging.error(f"Errors in response, after: {after}, category_id: {category_id}")
        logging.error(data["errors"])
        logging.error(response.text)
        raise RuntimeError(response.text)
    return data


# def get_graphql_issue_edges(*, settings: Settings, after: Union[str, None] = None):
#     data = get_graphql_response(settings=settings, query=issues_query, after=after)
#     graphql_response = IssuesResponse.model_validate(data)
#     return graphql_response.data.repository.issues.edges


# def get_graphql_question_discussion_edges(
#     *,
#     settings: Settings,
#     after: Union[str, None] = None,
# ):
#     data = get_graphql_response(
#         settings=settings,
#         query=discussions_query,
#         after=after,
#         category_id=questions_category_id,
#     )
#     graphql_response = DiscussionsResponse.model_validate(data)
#     return graphql_response.data.repository.discussions.edges


def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
    if after is None:
        print("Querying PRs...")
    else:
        print(f"Querying PRs with cursor {after}...")
    data = get_graphql_response(
        settings=settings,
        query=prs_query,
        after=after
    )
    graphql_response = PRsResponse.model_validate(data)
    return graphql_response.data.repository.pullRequests.edges


# def get_issues_experts(settings: Settings):
#     issue_nodes: List[IssuesNode] = []
#     issue_edges = get_graphql_issue_edges(settings=settings)

#     while issue_edges:
#         for edge in issue_edges:
#             issue_nodes.append(edge.node)
#         last_edge = issue_edges[-1]
#         issue_edges = get_graphql_issue_edges(settings=settings, after=last_edge.cursor)

#     commentors = Counter()
#     last_month_commentors = Counter()
#     authors: Dict[str, Author] = {}

#     now = datetime.now(tz=timezone.utc)
#     one_month_ago = now - timedelta(days=30)

#     for issue in issue_nodes:
#         issue_author_name = None
#         if issue.author:
#             authors[issue.author.login] = issue.author
#             issue_author_name = issue.author.login
#         issue_commentors = set()
#         for comment in issue.comments.nodes:
#             if comment.author:
#                 authors[comment.author.login] = comment.author
#                 if comment.author.login != issue_author_name:
#                     issue_commentors.add(comment.author.login)
#         for author_name in issue_commentors:
#             commentors[author_name] += 1
#             if issue.createdAt > one_month_ago:
#                 last_month_commentors[author_name] += 1

#     return commentors, last_month_commentors, authors


# def get_discussions_experts(settings: Settings):
#     discussion_nodes: List[DiscussionsNode] = []
#     discussion_edges = get_graphql_question_discussion_edges(settings=settings)

#     while discussion_edges:
#         for discussion_edge in discussion_edges:
#             discussion_nodes.append(discussion_edge.node)
#         last_edge = discussion_edges[-1]
#         discussion_edges = get_graphql_question_discussion_edges(
#             settings=settings, after=last_edge.cursor
#         )

#     commentors = Counter()
#     last_month_commentors = Counter()
#     authors: Dict[str, Author] = {}

#     now = datetime.now(tz=timezone.utc)
#     one_month_ago = now - timedelta(days=30)

#     for discussion in discussion_nodes:
#         discussion_author_name = None
#         if discussion.author:
#             authors[discussion.author.login] = discussion.author
#             discussion_author_name = discussion.author.login
#         discussion_commentors = set()
#         for comment in discussion.comments.nodes:
#             if comment.author:
#                 authors[comment.author.login] = comment.author
#                 if comment.author.login != discussion_author_name:
#                     discussion_commentors.add(comment.author.login)
#             for reply in comment.replies.nodes:
#                 if reply.author:
#                     authors[reply.author.login] = reply.author
#                     if reply.author.login != discussion_author_name:
#                         discussion_commentors.add(reply.author.login)
#         for author_name in discussion_commentors:
#             commentors[author_name] += 1
#             if discussion.createdAt > one_month_ago:
#                 last_month_commentors[author_name] += 1
#     return commentors, last_month_commentors, authors


# def get_experts(settings: Settings):
#     (
#         discussions_commentors,
#         discussions_last_month_commentors,
#         discussions_authors,
#     ) = get_discussions_experts(settings=settings)
#     commentors = discussions_commentors
#     last_month_commentors = discussions_last_month_commentors
#     authors = {**discussions_authors}
#     return commentors, last_month_commentors, authors


def _logistic(x, k):
    return x / (x + k)


def get_contributors(settings: Settings):
    pr_nodes: List[PullRequestNode] = []
    pr_edges = get_graphql_pr_edges(settings=settings)

    while pr_edges:
        for edge in pr_edges:
            pr_nodes.append(edge.node)
        last_edge = pr_edges[-1]
        pr_edges = get_graphql_pr_edges(settings=settings, after=last_edge.cursor)

    contributors = Counter()
    contributor_scores = Counter()
    recent_contributor_scores = Counter()
    reviewers = Counter()
    authors: Dict[str, Author] = {}

    for pr in pr_nodes:
        pr_reviewers: Set[str] = set()
        for review in pr.reviews.nodes:
            if review.author:
                authors[review.author.login] = review.author
                pr_reviewers.add(review.author.login)
        for reviewer in pr_reviewers:
            reviewers[reviewer] += 1
        if pr.author:
            authors[pr.author.login] = pr.author
            contributors[pr.author.login] += 1
            files_changed = pr.changedFiles
            lines_changed = pr.additions + pr.deletions
            score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
            contributor_scores[pr.author.login] += score
            three_months_ago = (datetime.now(timezone.utc) - timedelta(days=3*30))
            if pr.createdAt > three_months_ago:
                recent_contributor_scores[pr.author.login] += score
    return contributors, contributor_scores, recent_contributor_scores, reviewers, authors


def get_top_users(
    *,
    counter: Counter,
    min_count: int,
    authors: Dict[str, Author],
    skip_users: Container[str],
):
    users = []
    for commentor, count in counter.most_common():
        if commentor in skip_users:
            continue
        if count >= min_count:
            author = authors[commentor]
            users.append(
                {
                    "login": commentor,
                    "count": count,
                    "avatarUrl": author.avatarUrl,
                    "twitterUsername": author.twitterUsername,
                    "url": author.url,
                }
            )
    return users


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    settings = Settings()
    logging.info(f"Using config: {settings.model_dump_json()}")
    g = Github(settings.input_token.get_secret_value())
    repo = g.get_repo(settings.github_repository)
    # question_commentors, question_last_month_commentors, question_authors = get_experts(
    #     settings=settings
    # )
    contributors, contributor_scores, recent_contributor_scores, reviewers, pr_authors = get_contributors(
        settings=settings
    )
    # authors = {**question_authors, **pr_authors}
    authors = {**pr_authors}
    maintainers_logins = {
        "hwchase17",
        "agola11",
        "baskaryan",
        "hinthornw",
        "nfcampos",
        "efriis",
        "eyurtsev",
        "rlancemartin",
        "ccurme",
        "vbarda",
    }
    hidden_logins = {
        "dev2049",
        "vowelparrot",
        "obi1kenobi",
        "langchain-infra",
        "jacoblee93",
        "dqbd",
        "bracesproul",
        "akira",
    }
    bot_names = {"dosubot", "github-actions", "CodiumAI-Agent"}
    maintainers = []
    for login in maintainers_logins:
        user = authors[login]
        maintainers.append(
            {
                "login": login,
                "count": contributors[login],  # + question_commentors[login],
                "avatarUrl": user.avatarUrl,
                "twitterUsername": user.twitterUsername,
                "url": user.url,
|
565 |
+
}
|
566 |
+
)
|
567 |
+
|
568 |
+
# min_count_expert = 10
|
569 |
+
# min_count_last_month = 3
|
570 |
+
min_score_contributor = 1
|
571 |
+
min_count_reviewer = 5
|
572 |
+
skip_users = maintainers_logins | bot_names | hidden_logins
|
573 |
+
# experts = get_top_users(
|
574 |
+
# counter=question_commentors,
|
575 |
+
# min_count=min_count_expert,
|
576 |
+
# authors=authors,
|
577 |
+
# skip_users=skip_users,
|
578 |
+
# )
|
579 |
+
# last_month_active = get_top_users(
|
580 |
+
# counter=question_last_month_commentors,
|
581 |
+
# min_count=min_count_last_month,
|
582 |
+
# authors=authors,
|
583 |
+
# skip_users=skip_users,
|
584 |
+
# )
|
585 |
+
top_recent_contributors = get_top_users(
|
586 |
+
counter=recent_contributor_scores,
|
587 |
+
min_count=min_score_contributor,
|
588 |
+
authors=authors,
|
589 |
+
skip_users=skip_users,
|
590 |
+
)
|
591 |
+
top_contributors = get_top_users(
|
592 |
+
counter=contributor_scores,
|
593 |
+
min_count=min_score_contributor,
|
594 |
+
authors=authors,
|
595 |
+
skip_users=skip_users,
|
596 |
+
)
|
597 |
+
top_reviewers = get_top_users(
|
598 |
+
counter=reviewers,
|
599 |
+
min_count=min_count_reviewer,
|
600 |
+
authors=authors,
|
601 |
+
skip_users=skip_users,
|
602 |
+
)
|
603 |
+
|
604 |
+
people = {
|
605 |
+
"maintainers": maintainers,
|
606 |
+
# "experts": experts,
|
607 |
+
# "last_month_active": last_month_active,
|
608 |
+
"top_recent_contributors": top_recent_contributors,
|
609 |
+
"top_contributors": top_contributors,
|
610 |
+
"top_reviewers": top_reviewers,
|
611 |
+
}
|
612 |
+
people_path = Path("./docs/data/people.yml")
|
613 |
+
people_old_content = people_path.read_text(encoding="utf-8")
|
614 |
+
new_people_content = yaml.dump(
|
615 |
+
people, sort_keys=False, width=200, allow_unicode=True
|
616 |
+
)
|
617 |
+
if (
|
618 |
+
people_old_content == new_people_content
|
619 |
+
):
|
620 |
+
logging.info("The LangChain People data hasn't changed, finishing.")
|
621 |
+
sys.exit(0)
|
622 |
+
people_path.write_text(new_people_content, encoding="utf-8")
|
623 |
+
logging.info("Setting up GitHub Actions git user")
|
624 |
+
subprocess.run(["git", "config", "user.name", "github-actions"], check=True)
|
625 |
+
subprocess.run(
|
626 |
+
["git", "config", "user.email", "github-actions@github.com"], check=True
|
627 |
+
)
|
628 |
+
branch_name = "langchain/langchain-people"
|
629 |
+
logging.info(f"Creating a new branch {branch_name}")
|
630 |
+
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
|
631 |
+
logging.info("Adding updated file")
|
632 |
+
subprocess.run(
|
633 |
+
["git", "add", str(people_path)], check=True
|
634 |
+
)
|
635 |
+
logging.info("Committing updated file")
|
636 |
+
message = "👥 Update LangChain people data"
|
637 |
+
result = subprocess.run(["git", "commit", "-m", message], check=True)
|
638 |
+
logging.info("Pushing branch")
|
639 |
+
subprocess.run(["git", "push", "origin", branch_name, "-f"], check=True)
|
640 |
+
logging.info("Creating PR")
|
641 |
+
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
|
642 |
+
logging.info(f"Created PR: {pr.number}")
|
643 |
+
logging.info("Finished")
|
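The `_logistic(x, k) = x / (x + k)` helper above bounds each scoring term below 1, so a contributor's total grows with many PRs rather than with one giant one. A minimal sketch (not part of the commit; the sample PR sizes are made up) of how the score saturates:

```python
# Standalone sketch of the PR scoring used in get_contributors() above.
def _logistic(x, k):
    return x / (x + k)

# Hypothetical PRs as (files changed, lines changed) pairs.
for files_changed, lines_changed in [(1, 10), (5, 80), (20, 500), (200, 10_000)]:
    score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
    print(f"{files_changed:>4} files, {lines_changed:>6} lines -> score {score:.2f}")
# Each term approaches 1, so a single PR scores at most ~2 no matter its size.
```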
docs/langchain/.github/actions/poetry_setup/action.yml
ADDED
@@ -0,0 +1,93 @@
# An action for setting up poetry install with caching.
# Using a custom action since the default action does not
# take poetry install groups into account.
# Action code from:
# https://github.com/actions/setup-python/issues/505#issuecomment-1273013236
name: poetry-install-with-caching
description: Poetry install with support for caching of dependency groups.

inputs:
  python-version:
    description: Python version, supporting MAJOR.MINOR only
    required: true

  poetry-version:
    description: Poetry version
    required: true

  cache-key:
    description: Cache key to use for manual handling of caching
    required: true

  working-directory:
    description: Directory whose poetry.lock file should be cached
    required: true

runs:
  using: composite
  steps:
    - uses: actions/setup-python@v5
      name: Setup python ${{ inputs.python-version }}
      id: setup-python
      with:
        python-version: ${{ inputs.python-version }}

    - uses: actions/cache@v4
      id: cache-bin-poetry
      name: Cache Poetry binary - Python ${{ inputs.python-version }}
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "1"
      with:
        path: |
          /opt/pipx/venvs/poetry
        # This step caches the poetry installation, so make sure it's keyed on the poetry version as well.
        key: bin-poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-${{ inputs.poetry-version }}

    - name: Refresh shell hashtable and fixup softlinks
      if: steps.cache-bin-poetry.outputs.cache-hit == 'true'
      shell: bash
      env:
        POETRY_VERSION: ${{ inputs.poetry-version }}
        PYTHON_VERSION: ${{ inputs.python-version }}
      run: |
        set -eux

        # Refresh the shell hashtable, to ensure correct `which` output.
        hash -r

        # `actions/cache@v3` doesn't always seem able to correctly unpack softlinks.
        # Delete and recreate the softlinks pipx expects to have.
        rm /opt/pipx/venvs/poetry/bin/python
        cd /opt/pipx/venvs/poetry/bin
        ln -s "$(which "python$PYTHON_VERSION")" python
        chmod +x python
        cd /opt/pipx_bin/
        ln -s /opt/pipx/venvs/poetry/bin/poetry poetry
        chmod +x poetry

        # Ensure everything got set up correctly.
        /opt/pipx/venvs/poetry/bin/python --version
        /opt/pipx_bin/poetry --version

    - name: Install poetry
      if: steps.cache-bin-poetry.outputs.cache-hit != 'true'
      shell: bash
      env:
        POETRY_VERSION: ${{ inputs.poetry-version }}
        PYTHON_VERSION: ${{ inputs.python-version }}
      # Install poetry using the python version installed by setup-python step.
      run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose

    - name: Restore pip and poetry cached dependencies
      uses: actions/cache@v4
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
        WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
      with:
        path: |
          ~/.cache/pip
          ~/.cache/pypoetry/virtualenvs
          ~/.cache/pypoetry/cache
          ~/.cache/pypoetry/artifacts
          ${{ env.WORKDIR }}/.venv
        key: py-deps-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles(format('{0}/**/poetry.lock', env.WORKDIR)) }}
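For reference, the final dependency cache key combines the runner platform, both tool versions, the caller's `cache-key` input, and a hash over every `poetry.lock` under the working directory. A rough Python model of that layout (the helper below is illustrative only; GitHub's `hashFiles` uses its own SHA-256 scheme, so this approximates rather than reproduces the actions API):

```python
# Illustrative reconstruction of the dependency cache key assembled above.
import hashlib
from pathlib import Path

def dependency_cache_key(os_: str, arch: str, py: str, poetry: str,
                         cache_key: str, workdir: str) -> str:
    # Hash all poetry.lock files under the working directory (approximates hashFiles()).
    digest = hashlib.sha256()
    for lock in sorted(Path(workdir).rglob("poetry.lock")):
        digest.update(lock.read_bytes())
    return f"py-deps-{os_}-{arch}-py-{py}-poetry-{poetry}-{cache_key}-{digest.hexdigest()}"

print(dependency_cache_key("Linux", "X64", "3.11", "1.7.1", "core", "libs/core"))
```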
docs/langchain/.github/scripts/check_diff.py
ADDED
@@ -0,0 +1,94 @@
import json
import sys
import os
from typing import Dict

LANGCHAIN_DIRS = [
    "libs/core",
    "libs/text-splitters",
    "libs/langchain",
    "libs/community",
    "libs/experimental",
]

if __name__ == "__main__":
    files = sys.argv[1:]

    dirs_to_run: Dict[str, set] = {
        "lint": set(),
        "test": set(),
        "extended-test": set(),
    }
    docs_edited = False

    if len(files) == 300:
        # max diff length is 300 files - there are likely files missing
        raise ValueError("Max diff reached. Please manually run CI on changed libs.")

    for file in files:
        if any(
            file.startswith(dir_)
            for dir_ in (
                ".github/workflows",
                ".github/tools",
                ".github/actions",
                ".github/scripts/check_diff.py",
            )
        ):
            # add all LANGCHAIN_DIRS for infra changes
            dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
            dirs_to_run["lint"].add(".")

        if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
            # add that dir and all dirs after in LANGCHAIN_DIRS
            # for extended testing
            found = False
            for dir_ in LANGCHAIN_DIRS:
                if file.startswith(dir_):
                    found = True
                if found:
                    dirs_to_run["extended-test"].add(dir_)
        elif file.startswith("libs/standard-tests"):
            # TODO: update to include all packages that rely on standard-tests (all partner packages)
            # note: won't run on external repo partners
            dirs_to_run["lint"].add("libs/standard-tests")
            dirs_to_run["test"].add("libs/partners/mistralai")
            dirs_to_run["test"].add("libs/partners/openai")
            dirs_to_run["test"].add("libs/partners/anthropic")
            dirs_to_run["test"].add("libs/partners/ai21")
            dirs_to_run["test"].add("libs/partners/fireworks")
            dirs_to_run["test"].add("libs/partners/groq")

        elif file.startswith("libs/cli"):
            # todo: add cli makefile
            pass
        elif file.startswith("libs/partners"):
            partner_dir = file.split("/")[2]
            if os.path.isdir(f"libs/partners/{partner_dir}") and [
                filename
                for filename in os.listdir(f"libs/partners/{partner_dir}")
                if not filename.startswith(".")
            ] != ["README.md"]:
                dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
            # Skip if the directory was deleted or is just a tombstone readme
        elif file.startswith("libs/"):
            raise ValueError(
                f"Unknown lib: {file}. check_diff.py likely needs "
                "an update for this new library!"
            )
        elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
            if file.startswith("docs/"):
                docs_edited = True
            dirs_to_run["lint"].add(".")

    outputs = {
        "dirs-to-lint": list(
            dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
        ),
        "dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
        "dirs-to-extended-test": list(dirs_to_run["extended-test"]),
        "docs-edited": "true" if docs_edited else "",
    }
    for key, value in outputs.items():
        json_output = json.dumps(value)
        print(f"{key}={json_output}")
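The key rule in `check_diff.py` is the cascade over `LANGCHAIN_DIRS`: because the libraries are listed upstream-first, a change to one library re-runs extended tests for it and every library after it. A standalone sketch of just that rule (the changed file path is made up):

```python
# The cascade rule from check_diff.py in isolation.
LANGCHAIN_DIRS = [
    "libs/core",
    "libs/text-splitters",
    "libs/langchain",
    "libs/community",
    "libs/experimental",
]

def dirs_to_extended_test(changed_file: str) -> list:
    found = False
    dirs = []
    for dir_ in LANGCHAIN_DIRS:
        if changed_file.startswith(dir_):
            found = True  # this library changed...
        if found:
            dirs.append(dir_)  # ...so test it and everything downstream of it
    return dirs

print(dirs_to_extended_test("libs/langchain/langchain/agents/agent.py"))
# ['libs/langchain', 'libs/community', 'libs/experimental']
```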
docs/langchain/.github/scripts/get_min_versions.py
ADDED
@@ -0,0 +1,79 @@
import sys

import tomllib
from packaging.version import parse as parse_version
import re

MIN_VERSION_LIBS = [
    "langchain-core",
    "langchain-community",
    "langchain",
    "langchain-text-splitters",
]


def get_min_version(version: str) -> str:
    # base regex for x.x.x with cases for rc/post/etc
    # valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
    vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
    # case ^x.x.x
    _match = re.match(f"^\\^({vstring})$", version)
    if _match:
        return _match.group(1)

    # case >=x.x.x,<y.y.y
    _match = re.match(f"^>=({vstring}),<({vstring})$", version)
    if _match:
        _min = _match.group(1)
        _max = _match.group(2)
        assert parse_version(_min) < parse_version(_max)
        return _min

    # case x.x.x
    _match = re.match(f"^({vstring})$", version)
    if _match:
        return _match.group(1)

    raise ValueError(f"Unrecognized version format: {version}")


def get_min_version_from_toml(toml_path: str):
    # Parse the TOML file
    with open(toml_path, "rb") as file:
        toml_data = tomllib.load(file)

    # Get the dependencies from tool.poetry.dependencies
    dependencies = toml_data["tool"]["poetry"]["dependencies"]

    # Initialize a dictionary to store the minimum versions
    min_versions = {}

    # Iterate over the libs in MIN_VERSION_LIBS
    for lib in MIN_VERSION_LIBS:
        # Check if the lib is present in the dependencies
        if lib in dependencies:
            # Get the version string
            version_string = dependencies[lib]

            if isinstance(version_string, dict):
                version_string = version_string["version"]

            # Use get_min_version to extract the minimum supported version from version_string
            min_version = get_min_version(version_string)

            # Store the minimum version in the min_versions dictionary
            min_versions[lib] = min_version

    return min_versions


if __name__ == "__main__":
    # Get the TOML file path from the command line argument
    toml_file = sys.argv[1]

    # Call the function to get the minimum versions
    min_versions = get_min_version_from_toml(toml_file)

    print(
        " ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
    )
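The three regex branches in `get_min_version` correspond to the three constraint shapes Poetry dependency specs use: caret, bounded range, and exact pin. Expected behavior, as a sketch (importing the script as a module named `get_min_versions` is an assumption about how you would load it, not something this commit sets up):

```python
from get_min_versions import get_min_version  # hypothetical import path

assert get_min_version("^0.1.33") == "0.1.33"        # caret constraint
assert get_min_version(">=0.1.0,<0.2.0") == "0.1.0"  # bounded range
assert get_min_version("0.2.0rc1") == "0.2.0rc1"     # exact pin (pre-release OK)
# Anything else (e.g. "~0.1.0") raises ValueError("Unrecognized version format: ...")
```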
docs/langchain/.github/tools/git-restore-mtime
ADDED
@@ -0,0 +1,606 @@
#!/usr/bin/env python3
#
# git-restore-mtime - Change mtime of files based on commit date of last change
#
# Copyright (C) 2012 Rodrigo Silva (MestreLion) <linux@rodrigosilva.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. See <http://www.gnu.org/licenses/gpl.html>
#
# Source: https://github.com/MestreLion/git-tools
# Version: July 13, 2023 (commit hash 5f832e72453e035fccae9d63a5056918d64476a2)
"""
Change the modification time (mtime) of files in work tree, based on the
date of the most recent commit that modified the file, including renames.

Ignores untracked files and uncommitted deletions, additions and renames, and
by default modifications too.
---
Useful prior to generating release tarballs, so each file is archived with a
date that is similar to the date when the file was actually last modified,
assuming the actual modification date and its commit date are close.
"""

# TODO:
# - Add -z on git whatchanged/ls-files, so we don't deal with filename decoding
# - When Python is bumped to 3.7, use text instead of universal_newlines on subprocess
# - Update "Statistics for some large projects" with modern hardware and repositories.
# - Create a README.md for git-restore-mtime alone. It deserves extensive documentation
#   - Move Statistics there
# - See git-extras as a good example on project structure and documentation

# FIXME:
# - When current dir is outside the worktree, e.g. using --work-tree, `git ls-files`
#   assume any relative pathspecs are to worktree root, not the current dir. As such,
#   relative pathspecs may not work.
# - Renames are tricky:
#   - R100 should not change mtime, but original name is not on filelist. Should
#     track renames until a valid (A, M) mtime found and then set on current name.
#   - Should set mtime for both current and original directories.
#   - Check mode changes with unchanged blobs?
# - Check file (A, D) for the directory mtime is not sufficient:
#   - Renames also change dir mtime, unless rename was on a parent dir
#   - If most recent change of all files in a dir was a Modification (M),
#     dir might not be touched at all.
#   - Dirs containing only subdirectories but no direct files will also
#     not be touched. They're files' [grand]parent dir, but never their dirname().
#   - Some solutions:
#     - After files done, perform some dir processing for missing dirs, finding latest
#       file (A, D, R)
#     - Simple approach: dir mtime is the most recent child (dir or file) mtime
#     - Use a virtual concept of "created at most at" to fill missing info, bubble up
#       to parents and grandparents
#   - When handling [grand]parent dirs, stay inside <pathspec>
# - Better handling of merge commits. `-m` is plain *wrong*. `-c/--cc` is perfect, but
#   painfully slow. First pass without merge commits is not accurate. Maybe add a new
#   `--accurate` mode for `--cc`?

if __name__ != "__main__":
    raise ImportError("{} should not be used as a module.".format(__name__))

import argparse
import datetime
import logging
import os.path
import shlex
import signal
import subprocess
import sys
import time

__version__ = "2022.12+dev"

# Update symlinks only if the platform supports not following them
UPDATE_SYMLINKS = bool(os.utime in getattr(os, 'supports_follow_symlinks', []))

# Call os.path.normpath() only if not in a POSIX platform (Windows)
NORMALIZE_PATHS = (os.path.sep != '/')

# How many files to process in each batch when re-trying merge commits
STEPMISSING = 100

# (Extra) keywords for the os.utime() call performed by touch()
UTIME_KWS = {} if not UPDATE_SYMLINKS else {'follow_symlinks': False}


# Command-line interface ######################################################

def parse_args():
    parser = argparse.ArgumentParser(
        description=__doc__.split('\n---')[0])

    group = parser.add_mutually_exclusive_group()
    group.add_argument('--quiet', '-q', dest='loglevel',
        action="store_const", const=logging.WARNING, default=logging.INFO,
        help="Suppress informative messages and summary statistics.")
    group.add_argument('--verbose', '-v', action="count", help="""
        Print additional information for each processed file.
        Specify twice to further increase verbosity.
        """)

    parser.add_argument('--cwd', '-C', metavar="DIRECTORY", help="""
        Run as if %(prog)s was started in directory %(metavar)s.
        This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
        See 'man 1 git' or 'git --help' for more information.
        """)

    parser.add_argument('--git-dir', dest='gitdir', metavar="GITDIR", help="""
        Path to the git repository, by default auto-discovered by searching
        the current directory and its parents for a .git/ subdirectory.
        """)

    parser.add_argument('--work-tree', dest='workdir', metavar="WORKTREE", help="""
        Path to the work tree root, by default the parent of GITDIR if it's
        automatically discovered, or the current directory if GITDIR is set.
        """)

    parser.add_argument('--force', '-f', default=False, action="store_true", help="""
        Force updating files with uncommitted modifications.
        Untracked files and uncommitted deletions, renames and additions are
        always ignored.
        """)

    parser.add_argument('--merge', '-m', default=False, action="store_true", help="""
        Include merge commits.
        Leads to more recent times and more files per commit, thus with the same
        time, which may or may not be what you want.
        Including merge commits may lead to fewer commits being evaluated as files
        are found sooner, which can improve performance, sometimes substantially.
        But as merge commits are usually huge, processing them may also take longer.
        By default, merge commits are only used for files missing from regular commits.
        """)

    parser.add_argument('--first-parent', default=False, action="store_true", help="""
        Consider only the first parent, the "main branch", when evaluating merge commits.
        Only effective when merge commits are processed, either when --merge is
        used or when finding missing files after the first regular log search.
        See --skip-missing.
        """)

    parser.add_argument('--skip-missing', '-s', dest="missing", default=True,
        action="store_false", help="""
        Do not try to find missing files.
        If merge commits were not evaluated with --merge and some files were
        not found in regular commits, by default %(prog)s searches for these
        files again in the merge commits.
        This option disables this retry, so files found only in merge commits
        will not have their timestamp updated.
        """)

    parser.add_argument('--no-directories', '-D', dest='dirs', default=True,
        action="store_false", help="""
        Do not update directory timestamps.
        By default, use the time of its most recently created, renamed or deleted file.
        Note that just modifying a file will NOT update its directory time.
        """)

    parser.add_argument('--test', '-t', default=False, action="store_true",
        help="Test run: do not actually update any file timestamp.")

    parser.add_argument('--commit-time', '-c', dest='commit_time', default=False,
        action='store_true', help="Use commit time instead of author time.")

    parser.add_argument('--oldest-time', '-o', dest='reverse_order', default=False,
        action='store_true', help="""
        Update times based on the oldest, instead of the most recent commit of a file.
        This reverses the order in which the git log is processed to emulate a
        file "creation" date. Note this will be inaccurate for files deleted and
        re-created at later dates.
        """)

    parser.add_argument('--skip-older-than', metavar='SECONDS', type=int, help="""
        Ignore files that are currently older than %(metavar)s.
        Useful in workflows that assume such files already have a correct timestamp,
        as it may improve performance by processing fewer files.
        """)

    parser.add_argument('--skip-older-than-commit', '-N', default=False,
        action='store_true', help="""
        Ignore files older than the timestamp it would be updated to.
        Such files may be considered "original", likely in the author's repository.
        """)

    parser.add_argument('--unique-times', default=False, action="store_true", help="""
        Set the microseconds to a unique value per commit.
        Allows telling apart changes that would otherwise have identical timestamps,
        as git's time accuracy is in seconds.
        """)

    parser.add_argument('pathspec', nargs='*', metavar='PATHSPEC', help="""
        Only modify paths matching %(metavar)s, relative to current directory.
        By default, update all but untracked files and submodules.
        """)

    parser.add_argument('--version', '-V', action='version',
        version='%(prog)s version {version}'.format(version=get_version()))

    args_ = parser.parse_args()
    if args_.verbose:
        args_.loglevel = max(logging.TRACE, logging.DEBUG // args_.verbose)
    args_.debug = args_.loglevel <= logging.DEBUG
    return args_


def get_version(version=__version__):
    if not version.endswith('+dev'):
        return version
    try:
        cwd = os.path.dirname(os.path.realpath(__file__))
        return Git(cwd=cwd, errors=False).describe().lstrip('v')
    except Git.Error:
        return '-'.join((version, "unknown"))


# Helper functions ############################################################

def setup_logging():
    """Add TRACE logging level and corresponding method, return the root logger"""
    logging.TRACE = TRACE = logging.DEBUG // 2
    logging.Logger.trace = lambda _, m, *a, **k: _.log(TRACE, m, *a, **k)
    return logging.getLogger()


def normalize(path):
    r"""Normalize paths from git, handling non-ASCII characters.

    Git stores paths as UTF-8 normalization form C.
    If path contains non-ASCII or non-printable characters, git outputs the UTF-8
    in octal-escaped notation, escaping double-quotes and backslashes, and then
    double-quoting the whole path.
    https://git-scm.com/docs/git-config#Documentation/git-config.txt-corequotePath

    This function reverts this encoding, so:
    normalize(r'"Back\\slash_double\"quote_a\303\247a\303\255"') =>
    r'Back\slash_double"quote_açaí')

    Paths with invalid UTF-8 encoding, such as single 0x80-0xFF bytes (e.g, from
    Latin1/Windows-1251 encoding) are decoded using surrogate escape, the same
    method used by Python for filesystem paths. So 0xE6 ("æ" in Latin1, r'\\346'
    from Git) is decoded as "\udce6". See https://peps.python.org/pep-0383/ and
    https://vstinner.github.io/painful-history-python-filesystem-encoding.html

    Also see notes on `windows/non-ascii-paths.txt` about path encodings on
    non-UTF-8 platforms and filesystems.
    """
    if path and path[0] == '"':
        # Python 2: path = path[1:-1].decode("string-escape")
        # Python 3: https://stackoverflow.com/a/46650050/624066
        path = (path[1:-1]  # Remove enclosing double quotes
                .encode('latin1')  # Convert to bytes, required by 'unicode-escape'
                .decode('unicode-escape')  # Perform the actual octal-escaping decode
                .encode('latin1')  # 1:1 mapping to bytes, UTF-8 encoded
                .decode('utf8', 'surrogateescape'))  # Decode from UTF-8
    if NORMALIZE_PATHS:
        # Make sure the slash matches the OS; for Windows we need a backslash
        path = os.path.normpath(path)
    return path


def dummy(*_args, **_kwargs):
    """No-op function used in dry-run tests"""


def touch(path, mtime):
    """The actual mtime update"""
    os.utime(path, (mtime, mtime), **UTIME_KWS)


def touch_ns(path, mtime_ns):
    """The actual mtime update, using nanoseconds for unique timestamps"""
    os.utime(path, None, ns=(mtime_ns, mtime_ns), **UTIME_KWS)


def isodate(secs: int):
    # time.localtime() accepts floats, but discards fractional part
    return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(secs))


def isodate_ns(ns: int):
    # for integers fromtimestamp() is equivalent and ~16% slower than isodate()
    return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=' ')


def get_mtime_ns(secs: int, idx: int):
    # Time resolution for filesystems and functions:
    # ext-4 and other POSIX filesystems: 1 nanosecond
    # NTFS (Windows default): 100 nanoseconds
    # datetime.datetime() (due to 64-bit float epoch): 1 microsecond
    us = idx % 1000000  # 10**6
    return 1000 * (1000000 * secs + us)


def get_mtime_path(path):
    return os.path.getmtime(path)


# Git class and parse_log(), the heart of the script ##########################

class Git:
    def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
        self.gitcmd = ['git']
        self.errors = errors
        self._proc = None
        if workdir: self.gitcmd.extend(('--work-tree', workdir))
        if gitdir: self.gitcmd.extend(('--git-dir', gitdir))
        if cwd: self.gitcmd.extend(('-C', cwd))
        self.workdir, self.gitdir = self._get_repo_dirs()

    def ls_files(self, paths: list = None):
        return (normalize(_) for _ in self._run('ls-files --full-name', paths))

    def ls_dirty(self, force=False):
        return (normalize(_[3:].split(' -> ', 1)[-1])
                for _ in self._run('status --porcelain')
                if _[:2] != '??' and (not force or (_[0] in ('R', 'A')
                                                    or _[1] == 'D')))

    def log(self, merge=False, first_parent=False, commit_time=False,
            reverse_order=False, paths: list = None):
        cmd = 'whatchanged --pretty={}'.format('%ct' if commit_time else '%at')
        if merge: cmd += ' -m'
        if first_parent: cmd += ' --first-parent'
        if reverse_order: cmd += ' --reverse'
        return self._run(cmd, paths)

    def describe(self):
        return self._run('describe --tags', check=True)[0]

    def terminate(self):
        if self._proc is None:
            return
        try:
            self._proc.terminate()
        except OSError:
            # Avoid errors on OpenBSD
            pass

    def _get_repo_dirs(self):
        return (os.path.normpath(_) for _ in
                self._run('rev-parse --show-toplevel --absolute-git-dir', check=True))

    def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
        cmdlist = self.gitcmd + shlex.split(cmdstr)
        if paths:
            cmdlist.append('--')
            cmdlist.extend(paths)
        popen_args = dict(universal_newlines=True, encoding='utf8')
        if not self.errors:
            popen_args['stderr'] = subprocess.DEVNULL
        log.trace("Executing: %s", ' '.join(cmdlist))
        if not output:
            return subprocess.call(cmdlist, **popen_args)
        if check:
            try:
                stdout: str = subprocess.check_output(cmdlist, **popen_args)
                return stdout.splitlines()
            except subprocess.CalledProcessError as e:
                raise self.Error(e.returncode, e.cmd, e.output, e.stderr)
        self._proc = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, **popen_args)
        return (_.rstrip() for _ in self._proc.stdout)

    def __del__(self):
        self.terminate()

    class Error(subprocess.CalledProcessError):
        """Error from git executable"""


def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
    mtime = 0
    datestr = isodate(0)
    for line in git.log(
        merge,
        args.first_parent,
        args.commit_time,
        args.reverse_order,
        filterlist
    ):
        stats['loglines'] += 1

        # Blank line between Date and list of files
        if not line:
            continue

        # Date line
        if line[0] != ':':  # Faster than `not line.startswith(':')`
            stats['commits'] += 1
            mtime = int(line)
            if args.unique_times:
                mtime = get_mtime_ns(mtime, stats['commits'])
            if args.debug:
                datestr = isodate(mtime)
            continue

        # File line: three tokens if it describes a renaming, otherwise two
        tokens = line.split('\t')

        # Possible statuses:
        # M: Modified (content changed)
        # A: Added (created)
        # D: Deleted
        # T: Type changed: to/from regular file, symlinks, submodules
        # R099: Renamed (moved), with % of unchanged content. 100 = pure rename
        # Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
        status = tokens[0].split(' ')[-1]
        file = tokens[-1]

        # Handles non-ASCII chars and OS path separator
        file = normalize(file)

        def do_file():
            if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
                stats['skip'] += 1
                return
            if args.debug:
                log.debug("%d\t%d\t%d\t%s\t%s",
                          stats['loglines'], stats['commits'], stats['files'],
                          datestr, file)
            try:
                touch(os.path.join(git.workdir, file), mtime)
                stats['touches'] += 1
            except Exception as e:
                log.error("ERROR: %s: %s", e, file)
                stats['errors'] += 1

        def do_dir():
            if args.debug:
                log.debug("%d\t%d\t-\t%s\t%s",
                          stats['loglines'], stats['commits'],
                          datestr, "{}/".format(dirname or '.'))
            try:
                touch(os.path.join(git.workdir, dirname), mtime)
                stats['dirtouches'] += 1
            except Exception as e:
                log.error("ERROR: %s: %s", e, dirname)
                stats['direrrors'] += 1

        if file in filelist:
            stats['files'] -= 1
            filelist.remove(file)
            do_file()

        if args.dirs and status in ('A', 'D'):
            dirname = os.path.dirname(file)
            if dirname in dirlist:
                dirlist.remove(dirname)
                do_dir()

        # All files done?
        if not stats['files']:
            git.terminate()
            return


# Main Logic ##################################################################

def main():
    start = time.time()  # yes, Wall time. CPU time is not realistic for users.
    stats = {_: 0 for _ in ('loglines', 'commits', 'touches', 'skip', 'errors',
                            'dirtouches', 'direrrors')}

    logging.basicConfig(level=args.loglevel, format='%(message)s')
    log.trace("Arguments: %s", args)

    # First things first: Where and Who are we?
    if args.cwd:
        log.debug("Changing directory: %s", args.cwd)
        try:
            os.chdir(args.cwd)
        except OSError as e:
            log.critical(e)
            return e.errno
    # Using both os.chdir() and `git -C` is redundant, but might prevent side effects
    # `git -C` alone could be enough if we make sure that:
    # - all paths, including args.pathspec, are processed by git: ls-files, rev-parse
    # - touch() / os.utime() path argument is always prepended with git.workdir
    try:
        git = Git(workdir=args.workdir, gitdir=args.gitdir, cwd=args.cwd)
    except Git.Error as e:
        # Not in a git repository, and git already informed user on stderr. So we just...
        return e.returncode

    # Get the files managed by git and build file list to be processed
    if UPDATE_SYMLINKS and not args.skip_older_than:
        filelist = set(git.ls_files(args.pathspec))
    else:
        filelist = set()
        for path in git.ls_files(args.pathspec):
            fullpath = os.path.join(git.workdir, path)

            # Symlink (to file, to dir or broken - git handles the same way)
            if not UPDATE_SYMLINKS and os.path.islink(fullpath):
                log.warning("WARNING: Skipping symlink, no OS support for updates: %s",
                            path)
                continue

            # skip files which are older than given threshold
            if (args.skip_older_than
                    and start - get_mtime_path(fullpath) > args.skip_older_than):
                continue

            # Always add files relative to worktree root
            filelist.add(path)

    # If --force, silently ignore uncommitted deletions (not in the filesystem)
    # and renames / additions (will not be found in log anyway)
    if args.force:
        filelist -= set(git.ls_dirty(force=True))
    # Otherwise, ignore any dirty files
    else:
        dirty = set(git.ls_dirty())
        if dirty:
            log.warning("WARNING: Modified files in the working directory were ignored."
                        "\nTo include such files, commit your changes or use --force.")
            filelist -= dirty

    # Build dir list to be processed
    dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()

    stats['totalfiles'] = stats['files'] = len(filelist)
    log.info("{0:,} files to be processed in work dir".format(stats['totalfiles']))

    if not filelist:
        # Nothing to do. Exit silently and without errors, just like git does
        return

    # Process the log until all files are 'touched'
    log.debug("Line #\tLog #\tF.Left\tModification Time\tFile Name")
    parse_log(filelist, dirlist, stats, git, args.merge, args.pathspec)

    # Missing files
    if filelist:
        # Try to find them in merge logs, if not done already
        # (usually HUGE, thus MUCH slower!)
        if args.missing and not args.merge:
            filterlist = list(filelist)
            missing = len(filterlist)
            log.info("{0:,} files not found in log, trying merge commits".format(missing))
            for i in range(0, missing, STEPMISSING):
                parse_log(filelist, dirlist, stats, git,
                          merge=True, filterlist=filterlist[i:i + STEPMISSING])

        # Still missing some?
        for file in filelist:
            log.warning("WARNING: not found in the log: %s", file)

    # Final statistics
    # Suggestion: use git-log --before=mtime to brag about skipped log entries
    def log_info(msg, *a, width=13):
        ifmt = '{:%d,}' % (width,)  # not using 'n' for consistency with ffmt
        ffmt = '{:%d,.2f}' % (width,)
        # %-formatting lacks a thousand separator, must pre-render with .format()
        log.info(msg.replace('%d', ifmt).replace('%f', ffmt).format(*a))

    log_info(
        "Statistics:\n"
        "%f seconds\n"
        "%d log lines processed\n"
        "%d commits evaluated",
        time.time() - start, stats['loglines'], stats['commits'])

    if args.dirs:
        if stats['direrrors']: log_info("%d directory update errors", stats['direrrors'])
        log_info("%d directories updated", stats['dirtouches'])

    if stats['touches'] != stats['totalfiles']:
        log_info("%d files", stats['totalfiles'])
    if stats['skip']: log_info("%d files skipped", stats['skip'])
    if stats['files']: log_info("%d files missing", stats['files'])
    if stats['errors']: log_info("%d file update errors", stats['errors'])

    log_info("%d files updated", stats['touches'])

    if args.test:
        log.info("TEST RUN - No files modified!")


# Keep only essential, global assignments here. Any other logic must be in main()
log = setup_logging()
args = parse_args()

# Set the actual touch() and other functions based on command-line arguments
if args.unique_times:
    touch = touch_ns
    isodate = isodate_ns

# Make sure this is always set last to ensure --test behaves as intended
if args.test:
    touch = dummy

# UI done, it's showtime!
try:
    sys.exit(main())
except KeyboardInterrupt:
    log.info("\nAborting")
    signal.signal(signal.SIGINT, signal.SIG_DFL)
    os.kill(os.getpid(), signal.SIGINT)
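The script is invoked as a plain executable; its options match the `parse_args()` definitions above. A minimal sketch of a dry run (the repo-relative path mirrors where this commit vendors the script; running it from CI this way is an assumption):

```python
# Dry-run the vendored script; drop "--test" to actually rewrite mtimes.
import subprocess

subprocess.run(
    ["python", ".github/tools/git-restore-mtime", "--test", "--verbose"],
    check=True,
)
```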
docs/langchain/.github/workflows/.codespell-exclude
ADDED
@@ -0,0 +1,7 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()
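This file holds literal lines the spell checker should skip. One plausible way it gets consumed (an assumption about the CI wiring, not confirmed by this commit) is codespell's `--exclude-file` flag, which ignores any line matching one in the file verbatim:

```python
# Hypothetical invocation; requires codespell to be installed.
import subprocess

subprocess.run(
    ["codespell", "--exclude-file", ".github/workflows/.codespell-exclude", "libs/"],
    check=False,  # codespell exits non-zero when it finds misspellings
)
```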
docs/langchain/.github/workflows/_compile_integration_test.yml
ADDED
@@ -0,0 +1,57 @@
name: compile-integration-test

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"

env:
  POETRY_VERSION: "1.7.1"

jobs:
  build:
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: compile-integration

      - name: Install integration dependencies
        shell: bash
        run: poetry install --with=test_integration,test

      - name: Check integration tests compile
        shell: bash
        run: poetry run pytest -m compile tests/integration_tests

      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'
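The `pytest -m compile` step only collects tests carrying the `compile` marker, which lets CI prove the integration-test modules import cleanly without live credentials. A sketch of what such a test file could look like (hypothetical module, not part of this commit):

```python
import pytest

@pytest.mark.compile
def test_placeholder() -> None:
    """Collected by `pytest -m compile`; passing means the module imported fine."""
```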
docs/langchain/.github/workflows/_dependencies.yml
ADDED
@@ -0,0 +1,117 @@
name: dependencies

on:
  workflow_call:
    inputs:
      working-directory:
        required: true
        type: string
        description: "From which folder this pipeline executes"
      langchain-location:
        required: false
        type: string
        description: "Relative path to the langchain library folder"

env:
  POETRY_VERSION: "1.7.1"

jobs:
  build:
    defaults:
      run:
        working-directory: ${{ inputs.working-directory }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: dependency checks ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: ${{ env.POETRY_VERSION }}
          working-directory: ${{ inputs.working-directory }}
          cache-key: pydantic-cross-compat

      - name: Install dependencies
        shell: bash
        run: poetry install

      - name: Check imports with base dependencies
        shell: bash
        run: poetry run make check_imports

      - name: Install test dependencies
        shell: bash
        run: poetry install --with test

      - name: Install langchain editable
        working-directory: ${{ inputs.working-directory }}
        if: ${{ inputs.langchain-location }}
        env:
          LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
        run: |
          poetry run pip install -e "$LANGCHAIN_LOCATION"

      - name: Install the opposite major version of pydantic
        # If normal tests use pydantic v1, here we'll use v2, and vice versa.
        shell: bash
        # airbyte currently doesn't support pydantic v2
        if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
        run: |
          # Determine the major part of pydantic version
          REGULAR_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)

          if [[ "$REGULAR_VERSION" == "1" ]]; then
            PYDANTIC_DEP=">=2.1,<3"
            TEST_WITH_VERSION="2"
          elif [[ "$REGULAR_VERSION" == "2" ]]; then
            PYDANTIC_DEP="<2"
            TEST_WITH_VERSION="1"
          else
            echo "Unexpected pydantic major version '$REGULAR_VERSION', cannot determine which version to use for cross-compatibility test."
            exit 1
          fi

          # Install via `pip` instead of `poetry add` to avoid changing lockfile,
          # which would prevent caching from working: the cache would get saved
          # to a different key than where it gets loaded from.
          poetry run pip install "pydantic${PYDANTIC_DEP}"

          # Ensure that the correct pydantic is installed now.
          echo "Checking pydantic version... Expecting ${TEST_WITH_VERSION}"

          # Determine the major part of pydantic version
          CURRENT_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)

          # Check that the major part of pydantic version is as expected, if not
          # raise an error
          if [[ "$CURRENT_VERSION" != "$TEST_WITH_VERSION" ]]; then
            echo "Error: expected pydantic version ${TEST_WITH_VERSION} to have been installed, but found: ${CURRENT_VERSION}"
            exit 1
          fi
          echo "Found pydantic version ${CURRENT_VERSION}, as expected"
      - name: Run pydantic compatibility tests
        # airbyte currently doesn't support pydantic v2
        if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
        shell: bash
        run: make test

      - name: Ensure the tests did not create any additional files
        shell: bash
        run: |
          set -eu

          STATUS="$(git status)"
          echo "$STATUS"

          # grep will exit non-zero if the target message isn't found,
          # and `set -e` above will cause the step to fail.
          echo "$STATUS" | grep 'nothing to commit, working tree clean'
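The version probe that drives the v1/v2 switch above is a one-liner shelled out to Python; here it is in isolation (a sketch, assuming pydantic is installed in the active environment):

```python
# The major-version probe behind REGULAR_VERSION / CURRENT_VERSION above.
import pydantic

major = pydantic.__version__.split(".")[0]
print(major)  # "1" or "2" decides which opposite major version gets installed
```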
docs/langchain/.github/workflows/_integration_test.yml
ADDED
@@ -0,0 +1,95 @@
+name: Integration tests
+
+on:
+  workflow_dispatch:
+    inputs:
+      working-directory:
+        required: true
+        type: string
+
+env:
+  POETRY_VERSION: "1.7.1"
+
+jobs:
+  build:
+    environment: Scheduled testing
+    defaults:
+      run:
+        working-directory: ${{ inputs.working-directory }}
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version:
+          - "3.8"
+          - "3.11"
+    name: Python ${{ matrix.python-version }}
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: core
+
+      - name: Install dependencies
+        shell: bash
+        run: poetry install --with test,test_integration
+
+      - name: Install deps outside pyproject
+        if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
+        shell: bash
+        run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
+
+      - name: 'Authenticate to Google Cloud'
+        id: 'auth'
+        uses: google-github-actions/auth@v2
+        with:
+          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
+
+      - name: Run integration tests
+        shell: bash
+        env:
+          AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
+          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
+          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
+          NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
+          GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
+          GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
+          EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
+          NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
+          WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
+          WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
+          PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
+          PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
+          ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
+          ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
+          ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
+          ES_URL: ${{ secrets.ES_URL }}
+          ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
+          ES_API_KEY: ${{ secrets.ES_API_KEY }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # for airbyte
+          MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
+          VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
+          COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
+          UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
+        run: |
+          make integration_tests
+
+      - name: Ensure the tests did not create any additional files
+        shell: bash
+        run: |
+          set -eu
+
+          STATUS="$(git status)"
+          echo "$STATUS"
+
+          # grep will exit non-zero if the target message isn't found,
+          # and `set -e` above will cause the step to fail.
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
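Several workflows in this set end with the same guard: run `git status`, echo it, and grep for `nothing to commit, working tree clean`, so that any file a test leaves behind fails the step (via `set -e`). A rough Python equivalent of that guard, offered only as a sketch of the mechanism, not part of the repo:

```python
# Sketch: fail if the working tree is not clean, mirroring the shell guard above.
import subprocess

status = subprocess.run(
    ["git", "status"], capture_output=True, text=True, check=True
).stdout
print(status)
if "nothing to commit, working tree clean" not in status:
    raise SystemExit("Tests created or modified files in the working tree")
```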
docs/langchain/.github/workflows/_lint.yml
ADDED
@@ -0,0 +1,128 @@
+name: lint
+
+on:
+  workflow_call:
+    inputs:
+      working-directory:
+        required: true
+        type: string
+        description: "From which folder this pipeline executes"
+      langchain-location:
+        required: false
+        type: string
+        description: "Relative path to the langchain library folder"
+
+env:
+  POETRY_VERSION: "1.7.1"
+  WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
+
+  # This env var allows us to get inline annotations when ruff has complaints.
+  RUFF_OUTPUT_FORMAT: github
+
+jobs:
+  build:
+    name: "make lint #${{ matrix.python-version }}"
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        # Only lint on the min and max supported Python versions.
+        # It's extremely unlikely that there's a lint issue on any version in between
+        # that doesn't show up on the min or max versions.
+        #
+        # GitHub rate-limits how many jobs can be running at any one time.
+        # Starting new jobs is also relatively slow,
+        # so linting on fewer versions makes CI faster.
+        python-version:
+          - "3.8"
+          - "3.11"
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: lint-with-extras
+
+      - name: Check Poetry File
+        shell: bash
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          poetry check
+
+      - name: Check lock file
+        shell: bash
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          poetry lock --check
+
+      - name: Install dependencies
+        # Also installs dev/lint/test/typing dependencies, to ensure we have
+        # type hints for as many of our libraries as possible.
+        # This helps catch errors that require dependencies to be spotted, for example:
+        # https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
+        #
+        # If you change this configuration, make sure to change the `cache-key`
+        # in the `poetry_setup` action above to stop using the old cache.
+        # It doesn't matter how you change it, any change will cause a cache-bust.
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          poetry install --with lint,typing
+
+      - name: Install langchain editable
+        working-directory: ${{ inputs.working-directory }}
+        if: ${{ inputs.langchain-location }}
+        env:
+          LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
+        run: |
+          poetry run pip install -e "$LANGCHAIN_LOCATION"
+
+      - name: Get .mypy_cache to speed up mypy
+        uses: actions/cache@v4
+        env:
+          SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
+        with:
+          path: |
+            ${{ env.WORKDIR }}/.mypy_cache
+          key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
+
+
+      - name: Analysing the code with our lint
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          make lint_package
+
+      - name: Install unit test dependencies
+        # Also installs dev/lint/test/typing dependencies, to ensure we have
+        # type hints for as many of our libraries as possible.
+        # This helps catch errors that require dependencies to be spotted, for example:
+        # https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
+        #
+        # If you change this configuration, make sure to change the `cache-key`
+        # in the `poetry_setup` action above to stop using the old cache.
+        # It doesn't matter how you change it, any change will cause a cache-bust.
+        if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          poetry install --with test
+      - name: Install unit+integration test dependencies
+        if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          poetry install --with test,test_integration
+
+      - name: Get .mypy_cache_test to speed up mypy
+        uses: actions/cache@v4
+        env:
+          SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
+        with:
+          path: |
+            ${{ env.WORKDIR }}/.mypy_cache_test
+          key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
+
+      - name: Analysing the code with our lint
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          make lint_tests
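`RUFF_OUTPUT_FORMAT: github` makes ruff print each finding as a GitHub Actions workflow command, which the Actions UI renders as an inline annotation on the PR diff. The command format itself is easy to reproduce; a small sketch (the file path, position, and message below are made up for illustration):

```python
# Sketch: emit a GitHub Actions annotation line, the shape ruff's
# `github` output format produces. Values below are hypothetical.
def github_annotation(level: str, file: str, line: int, col: int, message: str) -> str:
    return f"::{level} file={file},line={line},col={col}::{message}"

print(github_annotation("error", "libs/core/foo.py", 12, 5,
                        "F401 `os` imported but unused"))
```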
docs/langchain/.github/workflows/_release.yml
ADDED
@@ -0,0 +1,371 @@
+name: release
+run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
+on:
+  workflow_call:
+    inputs:
+      working-directory:
+        required: true
+        type: string
+        description: "From which folder this pipeline executes"
+  workflow_dispatch:
+    inputs:
+      working-directory:
+        required: true
+        type: string
+        default: 'libs/langchain'
+      dangerous-nonmaster-release:
+        required: false
+        type: boolean
+        default: false
+        description: "Release from a non-master branch (danger!)"
+
+env:
+  PYTHON_VERSION: "3.11"
+  POETRY_VERSION: "1.7.1"
+
+jobs:
+  build:
+    if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
+    environment: Scheduled testing
+    runs-on: ubuntu-latest
+
+    outputs:
+      pkg-name: ${{ steps.check-version.outputs.pkg-name }}
+      version: ${{ steps.check-version.outputs.version }}
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: release
+
+      # We want to keep this build stage *separate* from the release stage,
+      # so that there's no sharing of permissions between them.
+      # The release stage has trusted publishing and GitHub repo contents write access,
+      # and we want to keep the scope of that access limited just to the release job.
+      # Otherwise, a malicious `build` step (e.g. via a compromised dependency)
+      # could get access to our GitHub or PyPI credentials.
+      #
+      # Per the trusted publishing GitHub Action:
+      # > It is strongly advised to separate jobs for building [...]
+      # > from the publish job.
+      # https://github.com/pypa/gh-action-pypi-publish#non-goals
+      - name: Build project for distribution
+        run: poetry build
+        working-directory: ${{ inputs.working-directory }}
+
+      - name: Upload build
+        uses: actions/upload-artifact@v4
+        with:
+          name: dist
+          path: ${{ inputs.working-directory }}/dist/
+
+      - name: Check Version
+        id: check-version
+        shell: bash
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
+          echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
+  release-notes:
+    needs:
+      - build
+    runs-on: ubuntu-latest
+    outputs:
+      release-body: ${{ steps.generate-release-body.outputs.release-body }}
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          repository: langchain-ai/langchain
+          path: langchain
+          sparse-checkout: |  # this only grabs files for relevant dir
+            ${{ inputs.working-directory }}
+          ref: master  # this scopes to just master branch
+          fetch-depth: 0  # this fetches entire commit history
+      - name: Check Tags
+        id: check-tags
+        shell: bash
+        working-directory: langchain/${{ inputs.working-directory }}
+        env:
+          PKG_NAME: ${{ needs.build.outputs.pkg-name }}
+          VERSION: ${{ needs.build.outputs.version }}
+        run: |
+          REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
+          echo $REGEX
+          # Note: the `|| true` must be grouped with grep before piping into
+          # `head`; otherwise `head -1` would read from `true`, not from grep,
+          # and a successful grep would return every matching tag.
+          PREV_TAG=$( (git tag --sort=-creatordate | grep -P "$REGEX" || true) | head -1 )
+          TAG="${PKG_NAME}==${VERSION}"
+          if [ "$TAG" == "$PREV_TAG" ]; then
+            echo "No new version to release"
+            exit 1
+          fi
+          echo tag="$TAG" >> $GITHUB_OUTPUT
+          echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
+      - name: Generate release body
+        id: generate-release-body
+        working-directory: langchain
+        env:
+          WORKING_DIR: ${{ inputs.working-directory }}
+          PKG_NAME: ${{ needs.build.outputs.pkg-name }}
+          TAG: ${{ steps.check-tags.outputs.tag }}
+          PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
+        run: |
+          PREAMBLE="Changes since $PREV_TAG"
+          # if PREV_TAG is empty, then we are releasing the first version
+          if [ -z "$PREV_TAG" ]; then
+            PREAMBLE="Initial release"
+            PREV_TAG=$(git rev-list --max-parents=0 HEAD)
+          fi
+          {
+            echo 'release-body<<EOF'
+            echo "# Release $TAG"
+            echo $PREAMBLE
+            echo
+            git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
+            echo EOF
+          } >> "$GITHUB_OUTPUT"
+
+  test-pypi-publish:
+    needs:
+      - build
+      - release-notes
+    uses:
+      ./.github/workflows/_test_release.yml
+    with:
+      working-directory: ${{ inputs.working-directory }}
+      dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
+    secrets: inherit
+
+  pre-release-checks:
+    needs:
+      - build
+      - release-notes
+      - test-pypi-publish
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      # We explicitly *don't* set up caching here. This ensures our tests are
+      # maximally sensitive to catching breakage.
+      #
+      # For example, here's a way that caching can cause a falsely-passing test:
+      # - Make the langchain package manifest no longer list a dependency package
+      #   as a requirement. This means it won't be installed by `pip install`,
+      #   and attempting to use it would cause a crash.
+      # - That dependency used to be required, so it may have been cached.
+      #   When restoring the venv packages from cache, that dependency gets included.
+      # - Tests pass, because the dependency is present even though it wasn't specified.
+      # - The package is published, and it breaks on the missing dependency when
+      #   used in the real world.
+
+      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+
+      - name: Import published package
+        shell: bash
+        working-directory: ${{ inputs.working-directory }}
+        env:
+          PKG_NAME: ${{ needs.build.outputs.pkg-name }}
+          VERSION: ${{ needs.build.outputs.version }}
+        # Here we use:
+        # - The default regular PyPI index as the *primary* index, meaning
+        #   that it takes priority (https://pypi.org/simple)
+        # - The test PyPI index as an extra index, so that any dependencies that
+        #   are not found on test PyPI can be resolved and installed anyway.
+        #   (https://test.pypi.org/simple). This will include the PKG_NAME==VERSION
+        #   package because VERSION will not have been uploaded to regular PyPI yet.
+        # - attempt install again after 5 seconds if it fails because there is
+        #   sometimes a delay in availability on test pypi
+        run: |
+          poetry run pip install \
+            --extra-index-url https://test.pypi.org/simple/ \
+            "$PKG_NAME==$VERSION" || \
+          ( \
+            sleep 5 && \
+            poetry run pip install \
+              --extra-index-url https://test.pypi.org/simple/ \
+              "$PKG_NAME==$VERSION" \
+          )
+
+          # Replace all dashes in the package name with underscores,
+          # since that's how Python imports packages with dashes in the name.
+          IMPORT_NAME="$(echo "$PKG_NAME" | sed s/-/_/g)"
+
+          poetry run python -c "import $IMPORT_NAME; print(dir($IMPORT_NAME))"
+
+      - name: Import test dependencies
+        run: poetry install --with test,test_integration
+        working-directory: ${{ inputs.working-directory }}
+
+      # Overwrite the local version of the package with the test PyPI version.
+      - name: Import published package (again)
+        working-directory: ${{ inputs.working-directory }}
+        shell: bash
+        env:
+          PKG_NAME: ${{ needs.build.outputs.pkg-name }}
+          VERSION: ${{ needs.build.outputs.version }}
+        run: |
+          poetry run pip install \
+            --extra-index-url https://test.pypi.org/simple/ \
+            "$PKG_NAME==$VERSION"
+
+      - name: Run unit tests
+        run: make tests
+        working-directory: ${{ inputs.working-directory }}
+
+      - name: Get minimum versions
+        working-directory: ${{ inputs.working-directory }}
+        id: min-version
+        run: |
+          poetry run pip install packaging
+          min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
+          echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
+          echo "min-versions=$min_versions"
+
+      - name: Run unit tests with minimum dependency versions
+        if: ${{ steps.min-version.outputs.min-versions != '' }}
+        env:
+          MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
+        run: |
+          poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
+          make tests
+        working-directory: ${{ inputs.working-directory }}
+
+      - name: 'Authenticate to Google Cloud'
+        id: 'auth'
+        uses: google-github-actions/auth@v2
+        with:
+          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
+
+      - name: Run integration tests
+        if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
+        env:
+          AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
+          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
+          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
+          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
+          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
+          AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
+          AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
+          AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
+          NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
+          GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
+          GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
+          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
+          EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
+          NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
+          WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
+          WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
+          PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
+          PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
+          ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
+          ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
+          ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
+          ES_URL: ${{ secrets.ES_URL }}
+          ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
+          ES_API_KEY: ${{ secrets.ES_API_KEY }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}  # for airbyte
+          MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
+          VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
+          UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
+          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
+        run: make integration_tests
+        working-directory: ${{ inputs.working-directory }}
+
+  publish:
+    needs:
+      - build
+      - release-notes
+      - test-pypi-publish
+      - pre-release-checks
+    runs-on: ubuntu-latest
+    permissions:
+      # This permission is used for trusted publishing:
+      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
+      #
+      # Trusted publishing has to also be configured on PyPI for each package:
+      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
+      id-token: write
+
+    defaults:
+      run:
+        working-directory: ${{ inputs.working-directory }}
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: release
+
+      - uses: actions/download-artifact@v4
+        with:
+          name: dist
+          path: ${{ inputs.working-directory }}/dist/
+
+      - name: Publish package distributions to PyPI
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          packages-dir: ${{ inputs.working-directory }}/dist/
+          verbose: true
+          print-hash: true
+
+  mark-release:
+    needs:
+      - build
+      - release-notes
+      - test-pypi-publish
+      - pre-release-checks
+      - publish
+    runs-on: ubuntu-latest
+    permissions:
+      # This permission is needed by `ncipollo/release-action` to
+      # create the GitHub release.
+      contents: write
+
+    defaults:
+      run:
+        working-directory: ${{ inputs.working-directory }}
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: release
+
+      - uses: actions/download-artifact@v4
+        with:
+          name: dist
+          path: ${{ inputs.working-directory }}/dist/
+
+      - name: Create Tag
+        uses: ncipollo/release-action@v1
+        with:
+          artifacts: "dist/*"
+          token: ${{ secrets.GITHUB_TOKEN }}
+          generateReleaseNotes: false
+          tag: ${{ needs.build.outputs.pkg-name }}==${{ needs.build.outputs.version }}
+          body: ${{ needs.release-notes.outputs.release-body }}
+          commit: ${{ github.sha }}
+          makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core' }}
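The `release-notes` job builds the release body from commit subjects that touched the package directory since the previous tag for that package. A standalone Python sketch of that step, assuming it runs inside a full-history checkout; the tag names and directory below are illustrative values, not ones taken from this diff:

```python
# Sketch: reproduce the release-body generation from the workflow above.
# `prev_tag`, `tag`, and `working_dir` are illustrative values.
import subprocess

prev_tag = "langchain-core==0.1.0"   # hypothetical previous release tag
tag = "langchain-core==0.1.1"        # hypothetical new release tag
working_dir = "libs/core"

subjects = subprocess.run(
    ["git", "log", "--format=%s", f"{prev_tag}..HEAD", "--", working_dir],
    capture_output=True, text=True, check=True,
).stdout
body = f"# Release {tag}\nChanges since {prev_tag}\n\n{subjects}"
print(body)
```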
docs/langchain/.github/workflows/_release_docker.yml
ADDED
@@ -0,0 +1,62 @@
+name: release_docker
+
+on:
+  workflow_call:
+    inputs:
+      dockerfile:
+        required: true
+        type: string
+        description: "Path to the Dockerfile to build"
+      image:
+        required: true
+        type: string
+        description: "Name of the image to build"
+
+env:
+  TEST_TAG: ${{ inputs.image }}:test
+  LATEST_TAG: ${{ inputs.image }}:latest
+
+jobs:
+  docker:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+      - name: Get git tag
+        uses: actions-ecosystem/action-get-latest-tag@v1
+        id: get-latest-tag
+      - name: Set docker tag
+        env:
+          VERSION: ${{ steps.get-latest-tag.outputs.tag }}
+        run: |
+          echo "VERSION_TAG=${{ inputs.image }}:${VERSION#v}" >> $GITHUB_ENV
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      - name: Login to Docker Hub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Build for Test
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          file: ${{ inputs.dockerfile }}
+          load: true
+          tags: ${{ env.TEST_TAG }}
+      - name: Test
+        run: |
+          docker run --rm ${{ env.TEST_TAG }} python -c "import langchain"
+      - name: Build and Push to Docker Hub
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          file: ${{ inputs.dockerfile }}
+          # We can only build for the intersection of platforms supported by
+          # QEMU and base python image, for now build only for
+          # linux/amd64 and linux/arm64
+          platforms: linux/amd64,linux/arm64
+          tags: ${{ env.LATEST_TAG }},${{ env.VERSION_TAG }}
+          push: true
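The `${VERSION#v}` expansion in the "Set docker tag" step strips a single leading `v` from the latest git tag before composing the Docker tag, so `v0.1.0` becomes `langchain/langchain:0.1.0`. The same transformation in Python (values here are examples, not taken from a real tag):

```python
# Sketch of the `${VERSION#v}` tag derivation used above; example values.
version = "v0.1.0"               # example git tag
image = "langchain/langchain"

version_tag = f"{image}:{version.removeprefix('v')}"
print(version_tag)  # -> langchain/langchain:0.1.0
```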
docs/langchain/.github/workflows/_test.yml
ADDED
@@ -0,0 +1,70 @@
+name: test
+
+on:
+  workflow_call:
+    inputs:
+      working-directory:
+        required: true
+        type: string
+        description: "From which folder this pipeline executes"
+      langchain-location:
+        required: false
+        type: string
+        description: "Relative path to the langchain library folder"
+
+env:
+  POETRY_VERSION: "1.7.1"
+
+jobs:
+  build:
+    defaults:
+      run:
+        working-directory: ${{ inputs.working-directory }}
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version:
+          - "3.8"
+          - "3.9"
+          - "3.10"
+          - "3.11"
+    name: "make test #${{ matrix.python-version }}"
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: core
+
+      - name: Install dependencies
+        shell: bash
+        run: poetry install --with test
+
+      - name: Install langchain editable
+        working-directory: ${{ inputs.working-directory }}
+        if: ${{ inputs.langchain-location }}
+        env:
+          LANGCHAIN_LOCATION: ${{ inputs.langchain-location }}
+        run: |
+          poetry run pip install -e "$LANGCHAIN_LOCATION"
+
+      - name: Run core tests
+        shell: bash
+        run: |
+          make test
+
+      - name: Ensure the tests did not create any additional files
+        shell: bash
+        run: |
+          set -eu
+
+          STATUS="$(git status)"
+          echo "$STATUS"
+
+          # grep will exit non-zero if the target message isn't found,
+          # and `set -e` above will cause the step to fail.
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
docs/langchain/.github/workflows/_test_doc_imports.yml
ADDED
@@ -0,0 +1,50 @@
+name: test_doc_imports
+
+on:
+  workflow_call:
+
+env:
+  POETRY_VERSION: "1.7.1"
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        python-version:
+          - "3.11"
+    name: "check doc imports #${{ matrix.python-version }}"
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          cache-key: core
+
+      - name: Install dependencies
+        shell: bash
+        run: poetry install --with test
+
+      - name: Install langchain editable
+        run: |
+          poetry run pip install -e libs/core libs/langchain libs/community libs/experimental
+
+      - name: Check doc imports
+        shell: bash
+        run: |
+          poetry run python docs/scripts/check_imports.py
+
+      - name: Ensure the test did not create any additional files
+        shell: bash
+        run: |
+          set -eu
+
+          STATUS="$(git status)"
+          echo "$STATUS"
+
+          # grep will exit non-zero if the target message isn't found,
+          # and `set -e` above will cause the step to fail.
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
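`docs/scripts/check_imports.py` itself is not shown in this diff; conceptually, the step needs to verify that import statements appearing in the docs resolve against the editable installs made above. A hedged sketch of one way such a check could work — the file discovery, regex, and behavior here are assumptions for illustration, not the actual script:

```python
# Hypothetical sketch of a doc-import check; NOT the actual
# docs/scripts/check_imports.py, whose contents are not in this diff.
import importlib
import re
from pathlib import Path

IMPORT_RE = re.compile(r"^(?:from|import)\s+(langchain[\w.]*)", re.MULTILINE)

failures = []
for path in Path("docs").rglob("*.md"):
    for module in IMPORT_RE.findall(path.read_text()):
        try:
            importlib.import_module(module)
        except ImportError as exc:
            failures.append(f"{path}: {module}: {exc}")

if failures:
    raise SystemExit("\n".join(failures))
```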
docs/langchain/.github/workflows/_test_release.yml
ADDED
@@ -0,0 +1,100 @@
+name: test-release
+
+on:
+  workflow_call:
+    inputs:
+      working-directory:
+        required: true
+        type: string
+        description: "From which folder this pipeline executes"
+      dangerous-nonmaster-release:
+        required: false
+        type: boolean
+        default: false
+        description: "Release from a non-master branch (danger!)"
+
+env:
+  POETRY_VERSION: "1.7.1"
+  PYTHON_VERSION: "3.10"
+
+jobs:
+  build:
+    if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
+    runs-on: ubuntu-latest
+
+    outputs:
+      pkg-name: ${{ steps.check-version.outputs.pkg-name }}
+      version: ${{ steps.check-version.outputs.version }}
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ env.PYTHON_VERSION }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ inputs.working-directory }}
+          cache-key: release
+
+      # We want to keep this build stage *separate* from the release stage,
+      # so that there's no sharing of permissions between them.
+      # The release stage has trusted publishing and GitHub repo contents write access,
+      # and we want to keep the scope of that access limited just to the release job.
+      # Otherwise, a malicious `build` step (e.g. via a compromised dependency)
+      # could get access to our GitHub or PyPI credentials.
+      #
+      # Per the trusted publishing GitHub Action:
+      # > It is strongly advised to separate jobs for building [...]
+      # > from the publish job.
+      # https://github.com/pypa/gh-action-pypi-publish#non-goals
+      - name: Build project for distribution
+        run: poetry build
+        working-directory: ${{ inputs.working-directory }}
+
+      - name: Upload build
+        uses: actions/upload-artifact@v4
+        with:
+          name: test-dist
+          path: ${{ inputs.working-directory }}/dist/
+
+      - name: Check Version
+        id: check-version
+        shell: bash
+        working-directory: ${{ inputs.working-directory }}
+        run: |
+          echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
+          echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
+
+  publish:
+    needs:
+      - build
+    runs-on: ubuntu-latest
+    permissions:
+      # This permission is used for trusted publishing:
+      # https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
+      #
+      # Trusted publishing has to also be configured on PyPI for each package:
+      # https://docs.pypi.org/trusted-publishers/adding-a-publisher/
+      id-token: write
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - uses: actions/download-artifact@v4
+        with:
+          name: test-dist
+          path: ${{ inputs.working-directory }}/dist/
+
+      - name: Publish to test PyPI
+        uses: pypa/gh-action-pypi-publish@release/v1
+        with:
+          packages-dir: ${{ inputs.working-directory }}/dist/
+          verbose: true
+          print-hash: true
+          repository-url: https://test.pypi.org/legacy/
+
+          # We overwrite any existing distributions with the same name and version.
+          # This is *only for CI use* and is *extremely dangerous* otherwise!
+          # https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
+          skip-existing: true
docs/langchain/.github/workflows/check-broken-links.yml
ADDED
@@ -0,0 +1,24 @@
+name: Check Broken Links
+
+on:
+  workflow_dispatch:
+  schedule:
+    - cron: '0 13 * * *'
+
+jobs:
+  check-links:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Use Node.js 18.x
+        uses: actions/setup-node@v3
+        with:
+          node-version: 18.x
+          cache: "yarn"
+          cache-dependency-path: ./docs/yarn.lock
+      - name: Install dependencies
+        run: yarn install --immutable --mode=skip-build
+        working-directory: ./docs
+      - name: Check broken links
+        run: yarn check-broken-links
+        working-directory: ./docs
docs/langchain/.github/workflows/check_diffs.yml
ADDED
@@ -0,0 +1,160 @@
+---
+name: CI
+
+on:
+  push:
+    branches: [master]
+  pull_request:
+
+# If another push to the same PR or branch happens while this workflow is still running,
+# cancel the earlier run in favor of the next run.
+#
+# There's no point in testing an outdated version of the code. GitHub only allows
+# a limited number of job runners to be active at the same time, so it's better to cancel
+# pointless jobs early so that more useful jobs can run sooner.
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+env:
+  POETRY_VERSION: "1.7.1"
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+        with:
+          python-version: '3.10'
+      - id: files
+        uses: Ana06/get-changed-files@v2.2.0
+      - id: set-matrix
+        run: |
+          python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
+    outputs:
+      dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
+      dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
+      dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
+      docs-edited: ${{ steps.set-matrix.outputs.docs-edited }}
+  lint:
+    name: cd ${{ matrix.working-directory }}
+    needs: [ build ]
+    if: ${{ needs.build.outputs.dirs-to-lint != '[]' }}
+    strategy:
+      matrix:
+        working-directory: ${{ fromJson(needs.build.outputs.dirs-to-lint) }}
+    uses: ./.github/workflows/_lint.yml
+    with:
+      working-directory: ${{ matrix.working-directory }}
+    secrets: inherit
+
+  test:
+    name: cd ${{ matrix.working-directory }}
+    needs: [ build ]
+    if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
+    strategy:
+      matrix:
+        working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
+    uses: ./.github/workflows/_test.yml
+    with:
+      working-directory: ${{ matrix.working-directory }}
+    secrets: inherit
+
+  test-doc-imports:
+    needs: [ build ]
+    if: ${{ needs.build.outputs.dirs-to-test != '[]' || needs.build.outputs.docs-edited }}
+    uses: ./.github/workflows/_test_doc_imports.yml
+    secrets: inherit
+
+  compile-integration-tests:
+    name: cd ${{ matrix.working-directory }}
+    needs: [ build ]
+    if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
+    strategy:
+      matrix:
+        working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
+    uses: ./.github/workflows/_compile_integration_test.yml
+    with:
+      working-directory: ${{ matrix.working-directory }}
+    secrets: inherit
+
+  dependencies:
+    name: cd ${{ matrix.working-directory }}
+    needs: [ build ]
+    if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
+    strategy:
+      matrix:
+        working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
+    uses: ./.github/workflows/_dependencies.yml
+    with:
+      working-directory: ${{ matrix.working-directory }}
+    secrets: inherit
+
+  extended-tests:
+    name: "cd ${{ matrix.working-directory }} / make extended_tests #${{ matrix.python-version }}"
+    needs: [ build ]
+    if: ${{ needs.build.outputs.dirs-to-extended-test != '[]' }}
+    strategy:
+      matrix:
+        # note different variable for extended test dirs
+        working-directory: ${{ fromJson(needs.build.outputs.dirs-to-extended-test) }}
+        python-version:
+          - "3.8"
+          - "3.9"
+          - "3.10"
+          - "3.11"
+    runs-on: ubuntu-latest
+    defaults:
+      run:
+        working-directory: ${{ matrix.working-directory }}
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
+        uses: "./.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: ${{ matrix.working-directory }}
+          cache-key: extended
+
+      - name: Install dependencies
+        shell: bash
+        run: |
+          echo "Running extended tests, installing dependencies with poetry..."
+          poetry install --with test
+          poetry run pip install uv
+          poetry run uv pip install -r extended_testing_deps.txt
+
+      - name: Run extended tests
+        run: make extended_tests
+
+      - name: Ensure the tests did not create any additional files
+        shell: bash
+        run: |
+          set -eu
+
+          STATUS="$(git status)"
+          echo "$STATUS"
+
+          # grep will exit non-zero if the target message isn't found,
+          # and `set -e` above will cause the step to fail.
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
+  ci_success:
+    name: "CI Success"
+    needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests, test-doc-imports]
+    if: |
+      always()
+    runs-on: ubuntu-latest
+    env:
+      JOBS_JSON: ${{ toJSON(needs) }}
+      RESULTS_JSON: ${{ toJSON(needs.*.result) }}
+      EXIT_CODE: ${{ !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1' }}
+    steps:
+      - name: "CI Success"
+        run: |
+          echo $JOBS_JSON
+          echo $RESULTS_JSON
+          echo "Exiting with $EXIT_CODE"
+          exit $EXIT_CODE
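`.github/scripts/check_diff.py` (added elsewhere in this diff) turns the changed-file list into JSON arrays of directories, which the `lint`/`test` jobs consume via `fromJson` as job matrices. A simplified sketch of that mapping; the routing rule below is an assumption about how such a script typically works, not its exact logic:

```python
# Simplified sketch of mapping changed files to a JSON job matrix.
# The routing rule below is an assumption, not check_diff.py's exact logic.
import json
import sys

changed_files = sys.argv[1:]
dirs = set()
for f in changed_files:
    parts = f.split("/")
    if parts[0] == "libs" and len(parts) >= 2:
        dirs.add("/".join(parts[:2]))  # e.g. libs/core, libs/langchain

print(f"dirs-to-test={json.dumps(sorted(dirs))}")  # the step appends this to $GITHUB_OUTPUT
```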
docs/langchain/.github/workflows/codespell.yml
ADDED
@@ -0,0 +1,37 @@
+---
+name: CI / cd . / make spell_check
+
+on:
+  push:
+    branches: [master, v0.1]
+  pull_request:
+    branches: [master, v0.1]
+
+permissions:
+  contents: read
+
+jobs:
+  codespell:
+    name: (Check for spelling errors)
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+
+      - name: Install Dependencies
+        run: |
+          pip install toml
+
+      - name: Extract Ignore Words List
+        run: |
+          # Use a Python script to extract the ignore words list from pyproject.toml
+          python .github/workflows/extract_ignored_words_list.py
+        id: extract_ignore_words
+
+      # - name: Codespell
+      #   uses: codespell-project/actions-codespell@v2
+      #   with:
+      #     skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
+      #     ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
+      #     exclude_file: ./.github/workflows/codespell-exclude
docs/langchain/.github/workflows/extract_ignored_words_list.py
ADDED
@@ -0,0 +1,10 @@
+import os
+
+import toml
+
+pyproject_toml = toml.load("pyproject.toml")
+
+# Extract the ignore words list (adjust the key as per your TOML structure)
+ignore_words_list = (
+    pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
+)
+
+# The `::set-output` workflow command is deprecated and disabled on current
+# runners; write the step output to the GITHUB_OUTPUT file instead.
+with open(os.environ["GITHUB_OUTPUT"], "a") as f:
+    f.write(f"ignore_words_list={ignore_words_list}\n")
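For the extraction above to find anything, `pyproject.toml` needs a `[tool.codespell]` table with an `ignore-words-list` key. A quick sketch exercising the same nested lookup against an inline example document (the word list shown is made up):

```python
# Sketch: the same nested lookup as extract_ignored_words_list.py,
# run against an inline example document (the word list is made up).
import toml

example = toml.loads(
    """
[tool.codespell]
ignore-words-list = "langchain,somewordd"
"""
)
ignore = example.get("tool", {}).get("codespell", {}).get("ignore-words-list")
print(ignore)  # -> "langchain,somewordd"
```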
docs/langchain/.github/workflows/langchain_release_docker.yml
ADDED
@@ -0,0 +1,14 @@
+---
+name: docker/langchain/langchain Release
+
+on:
+  workflow_dispatch:  # Allows triggering the workflow manually in the GitHub UI
+  workflow_call:  # Allows triggering from another workflow
+
+jobs:
+  release:
+    uses: ./.github/workflows/_release_docker.yml
+    with:
+      dockerfile: docker/Dockerfile.base
+      image: langchain/langchain
+    secrets: inherit
docs/langchain/.github/workflows/people.yml
ADDED
@@ -0,0 +1,36 @@
+name: LangChain People
+
+on:
+  schedule:
+    - cron: "0 14 1 * *"
+  push:
+    branches: [jacob/people]
+  workflow_dispatch:
+    inputs:
+      debug_enabled:
+        description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
+        required: false
+        default: 'false'
+
+jobs:
+  langchain-people:
+    if: github.repository_owner == 'langchain-ai'
+    runs-on: ubuntu-latest
+    steps:
+      - name: Dump GitHub context
+        env:
+          GITHUB_CONTEXT: ${{ toJson(github) }}
+        run: echo "$GITHUB_CONTEXT"
+      - uses: actions/checkout@v4
+      # Ref: https://github.com/actions/runner/issues/2033
+      - name: Fix git safe.directory in container
+        run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
+      # Allow debugging with tmate
+      - name: Setup tmate session
+        uses: mxschmitt/action-tmate@v3
+        if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
+        with:
+          limit-access-to-actor: true
+      - uses: ./.github/actions/people
+        with:
+          token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}
docs/langchain/.github/workflows/scheduled_test.yml
ADDED
@@ -0,0 +1,139 @@
+name: Scheduled tests
+
+on:
+  workflow_dispatch:  # Allows triggering the workflow manually in the GitHub UI
+  schedule:
+    - cron: '0 13 * * *'
+
+env:
+  POETRY_VERSION: "1.7.1"
+
+jobs:
+  build:
+    name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version:
+          - "3.8"
+          - "3.11"
+        working-directory:
+          - "libs/partners/openai"
+          - "libs/partners/anthropic"
+          - "libs/partners/ai21"
+          - "libs/partners/fireworks"
+          - "libs/partners/groq"
+          - "libs/partners/mistralai"
+          - "libs/partners/together"
+          - "libs/partners/cohere"
+          - "libs/partners/google-vertexai"
+          - "libs/partners/google-genai"
+          - "libs/partners/aws"
+          - "libs/partners/nvidia-ai-endpoints"
+
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          path: langchain
+      - uses: actions/checkout@v4
+        with:
+          repository: langchain-ai/langchain-google
+          path: langchain-google
+      - uses: actions/checkout@v4
+        with:
+          repository: langchain-ai/langchain-nvidia
+          path: langchain-nvidia
+      - uses: actions/checkout@v4
+        with:
+          repository: langchain-ai/langchain-cohere
+          path: langchain-cohere
+      - uses: actions/checkout@v4
+        with:
+          repository: langchain-ai/langchain-aws
+          path: langchain-aws
+
+      - name: Move libs
+        run: |
+          rm -rf \
+            langchain/libs/partners/google-genai \
+            langchain/libs/partners/google-vertexai \
+            langchain/libs/partners/nvidia-ai-endpoints \
+            langchain/libs/partners/cohere
+          mv langchain-google/libs/genai langchain/libs/partners/google-genai
+          mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
+          mv langchain-nvidia/libs/ai-endpoints langchain/libs/partners/nvidia-ai-endpoints
+          mv langchain-cohere/libs/cohere langchain/libs/partners/cohere
+          mv langchain-aws/libs/aws langchain/libs/partners/aws
+
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: "./langchain/.github/actions/poetry_setup"
+        with:
+          python-version: ${{ matrix.python-version }}
+          poetry-version: ${{ env.POETRY_VERSION }}
+          working-directory: langchain/${{ matrix.working-directory }}
+          cache-key: scheduled
+
+      - name: 'Authenticate to Google Cloud'
+        id: 'auth'
+        uses: google-github-actions/auth@v2
+        with:
+          credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
+
+      - name: Configure AWS Credentials
+        uses: aws-actions/configure-aws-credentials@v4
+        with:
+          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+          aws-region: ${{ secrets.AWS_REGION }}
+
+      - name: Install dependencies
+        run: |
+          echo "Running scheduled tests, installing dependencies with poetry..."
+          cd langchain/${{ matrix.working-directory }}
+          poetry install --with=test_integration,test
+
+      - name: Run integration tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
+          AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
+          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
+          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
+          AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
+          AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
+          AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
+          AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
+          FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
+          GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
+          MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
+          TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
+          COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
+          NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
+          GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
+          GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
+          GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
+        run: |
+          cd langchain/${{ matrix.working-directory }}
+          make integration_tests
+
+      - name: Remove external libraries
+        run: |
+          rm -rf \
+            langchain/libs/partners/google-genai \
+            langchain/libs/partners/google-vertexai \
+            langchain/libs/partners/nvidia-ai-endpoints \
+            langchain/libs/partners/cohere \
+            langchain/libs/partners/aws
+
+      - name: Ensure the tests did not create any additional files
+        working-directory: langchain
+        run: |
+          set -eu
+
+          STATUS="$(git status)"
+          echo "$STATUS"
+
+          # grep will exit non-zero if the target message isn't found,
+          # and `set -e` above will cause the step to fail.
+          echo "$STATUS" | grep 'nothing to commit, working tree clean'
docs/langchain/.gitignore
ADDED
@@ -0,0 +1,182 @@
+.vs/
+.vscode/
+.idea/
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+pip-wheel-metadata/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# Google GitHub Actions credentials files created by:
+# https://github.com/google-github-actions/auth
+#
+# That action recommends adding this gitignore to prevent accidentally committing keys.
+gha-creds-*.json
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+docs/docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+notebooks/
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.envrc
+.venv*
+venv*
+env/
+ENV/
+env.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.mypy_cache_test/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# macOS display setting files
+.DS_Store
+
+# Wandb directory
+wandb/
+
+# asdf tool versions
+.tool-versions
+/.ruff_cache/
+
+*.pkl
+*.bin
+
+# integration test artifacts
+data_map*
+\[('_type', 'fake'), ('stop', None)]
+
+# Replit files
+*replit*
+
+node_modules
+docs/.yarn/
+docs/node_modules/
+docs/.docusaurus/
+docs/.cache-loader/
+docs/_dist
+docs/api_reference/*api_reference.rst
+docs/api_reference/_build
+docs/api_reference/*/
+!docs/api_reference/_static/
+!docs/api_reference/templates/
+!docs/api_reference/themes/
+docs/docs/build
+docs/docs/node_modules
+docs/docs/yarn.lock
+_dist
+docs/docs/templates
+
+prof
+virtualenv/
docs/langchain/.readthedocs.yaml
ADDED
@@ -0,0 +1,29 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

formats:
  - pdf

# Set the version of Python and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  commands:
    - mkdir -p $READTHEDOCS_OUTPUT
    - cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
  configuration: docs/api_reference/conf.py

# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
#   - pdf

# Optionally declare the Python requirements required to build your docs
python:
  install:
    - requirements: docs/api_reference/requirements.txt
docs/langchain/CITATION.cff
ADDED
@@ -0,0 +1,8 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Chase"
    given-names: "Harrison"
title: "LangChain"
date-released: 2022-10-17
url: "https://github.com/langchain-ai/langchain"
docs/langchain/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
docs/langchain/MIGRATE.md
ADDED
@@ -0,0 +1,70 @@
# Migrating

## 🚨 Breaking Changes for select chains (SQLDatabase) on 7/28/23

In an effort to make `langchain` leaner and safer, we are moving select chains to `langchain_experimental`.
This migration has already started, but we are remaining backwards compatible until 7/28.
On that date, we will remove functionality from `langchain`.
Read more about the motivation and the progress [here](https://github.com/langchain-ai/langchain/discussions/8043).

### Migrating to `langchain_experimental`

We are moving any experimental components of LangChain, or components with vulnerability issues, into `langchain_experimental`.
This guide covers how to migrate.

### Installation

Previously:

`pip install -U langchain`

Now (only if you want to access things in experimental):

`pip install -U langchain langchain_experimental`

### Things in `langchain.experimental`

Previously:

`from langchain.experimental import ...`

Now:

`from langchain_experimental import ...`

### PALChain

Previously:

`from langchain.chains import PALChain`

Now:

`from langchain_experimental.pal_chain import PALChain`

### SQLDatabaseChain

Previously:

`from langchain.chains import SQLDatabaseChain`

Now:

`from langchain_experimental.sql import SQLDatabaseChain`

Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb):

`from langchain.chains import create_sql_query_chain`
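As a rough illustration of that second option (a minimal sketch, not part of the migration guide itself), `create_sql_query_chain` only *generates* SQL from a question, which is why it stays in `langchain`. The SQLite URI and the `ChatOpenAI` model below are placeholder choices:

```python
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

# Hypothetical local SQLite database; substitute your own connection URI.
db = SQLDatabase.from_uri("sqlite:///Chinook.db")

# The chain generates a SQL query from a natural-language question;
# it does not execute it against the database.
chain = create_sql_query_chain(ChatOpenAI(temperature=0), db)
query = chain.invoke({"question": "How many employees are there?"})
print(query)
```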
### `load_prompt` for Python files

Note: this only applies if you want to load Python files as prompts.
If you want to load JSON/YAML files, no change is needed.

Previously:

`from langchain.prompts import load_prompt`

Now:

`from langchain_experimental.prompts import load_prompt`
docs/langchain/Makefile
ADDED
@@ -0,0 +1,75 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff

## help: Show this help info.
help: Makefile
	@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
	@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'

## all: Default target, shows help.
all: help

## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean

######################
# DOCUMENTATION
######################

## docs_build: Build the documentation.
docs_build:
	cd docs && make build

## docs_clean: Clean the documentation build artifacts.
docs_clean:
	cd docs && make clean

## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
	poetry run linkchecker _dist/docs/ --ignore-url node_modules

## api_docs_build: Build the API Reference documentation.
api_docs_build:
	poetry run python docs/api_reference/create_api_rst.py
	cd docs/api_reference && poetry run make html

API_PKG ?= text-splitters

api_docs_quick_preview:
	poetry run pip install "pydantic<2"
	poetry run python docs/api_reference/create_api_rst.py $(API_PKG)
	cd docs/api_reference && poetry run make html
	open docs/api_reference/_build/html/$(shell echo $(API_PKG) | sed 's/-/_/g')_api_reference.html

## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
	find ./docs/api_reference -name '*_api_reference.rst' -delete
	git clean -fdX ./docs/api_reference


## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
	poetry run linkchecker docs/api_reference/_build/html/index.html

## spell_check: Run codespell on the project.
spell_check:
	poetry run codespell --toml pyproject.toml

## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
	poetry run codespell --toml pyproject.toml -w

######################
# LINTING AND FORMATTING
######################

## lint: Run linting on the project.
lint lint_package lint_tests:
	poetry run ruff check docs templates cookbook
	poetry run ruff format docs templates cookbook --diff
	poetry run ruff check --select I docs templates cookbook
	git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0

## format: Format the project files.
format format_diff:
	poetry run ruff format docs templates cookbook
	poetry run ruff check --select I --fix docs templates cookbook
docs/langchain/README.md
ADDED
@@ -0,0 +1,137 @@
# 🦜️🔗 LangChain

⚡ Build context-aware reasoning applications ⚡

[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain?style=flat-square)](https://libraries.io/github/langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)

Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).

To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.

## Quick Install

With pip:
```bash
pip install langchain
```

With conda:
```bash
conda install langchain -c conda-forge
```
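To sanity-check the install (a minimal sketch, not from the upstream README; it assumes Python ≥ 3.8 so that `importlib.metadata` is available), you can print the installed version and confirm a core import resolves:

```python
# Minimal install check: the import fails here if the install is broken,
# since `langchain` pulls in `langchain-core` as a dependency.
from importlib.metadata import version

import langchain  # noqa: F401
from langchain_core.prompts import ChatPromptTemplate  # noqa: F401

print(version("langchain"))
```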
## 🤔 What is LangChain?

**LangChain** is a framework for developing applications powered by large language models (LLMs).

For these applications, LangChain simplifies the entire application lifecycle:

- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel) and [components](https://python.langchain.com/v0.2/docs/concepts/#components). Integrate with hundreds of [third-party providers](https://python.langchain.com/v0.2/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/v0.2/docs/langserve/).

### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third-party integrations.
  - Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://langchain-ai.github.io/langgraph/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.

### Productionization
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework, and seamlessly integrates with LangChain.

### Deployment
- **[LangServe](https://python.langchain.com/v0.2/docs/langserve/)**: A library for deploying LangChain chains as REST APIs.

![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")

## 🧱 What can you build with LangChain?

**❓ Question answering with RAG**

- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)

**🧱 Extracting structured output**

- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/extraction/)
- End-to-end Example: [LangChain Extract](https://github.com/langchain-ai/langchain-extract/)

**🤖 Chatbots**

- [Documentation](https://python.langchain.com/v0.2/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)

And much more! Head to the [Tutorials](https://python.langchain.com/v0.2/docs/tutorials/) section of the docs for the full list.

## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
1. **Components**: composable building blocks, tools, and integrations for working with language models. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not.
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks.

Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.

## LangChain Expression Language (LCEL)

LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains. A minimal example follows the links below.

- **[Overview](https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/v0.2/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
- **[Primitives](https://python.langchain.com/v0.2/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
- **[Cheatsheet](https://python.langchain.com/v0.2/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
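A concrete sketch of composing a prompt, model, and output parser with the `|` operator (my addition, not upstream content; it assumes `langchain-openai` is installed and `OPENAI_API_KEY` is set, and any chat model integration could be substituted):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Each stage is a Runnable; `|` pipes the output of one stage into the next.
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

# The composed chain exposes the same Runnable interface as its parts.
print(chain.invoke({"topic": "bears"}))
```

Because every stage implements the Runnable interface, the same chain also supports `stream` and `batch` without code changes, which is what the "prototype to production" claim above refers to.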
## Components

Components fall into the following **modules**:

**📃 Model I/O**

This includes [prompt management](https://python.langchain.com/v0.2/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/v0.2/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/v0.2/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/v0.2/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/v0.2/docs/concepts/#output-parsers).
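A minimal sketch of those Model I/O pieces working together (an illustration, not upstream content; the OpenAI model choice is an assumption):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt management: a reusable template with named variables.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a translator from {input_language} to {output_language}."),
        ("human", "{text}"),
    ]
)

# Generic chat-model interface: every provider integration exposes .invoke().
model = ChatOpenAI(model="gpt-3.5-turbo")

messages = prompt.invoke(
    {"input_language": "English", "output_language": "French", "text": "I love programming."}
)
print(model.invoke(messages).content)
```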
**📚 Retrieval**

Retrieval Augmented Generation involves [loading data](https://python.langchain.com/v0.2/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/v0.2/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/v0.2/docs/concepts/#retrievers) it for use in the generation step.
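For illustration, a hedged end-to-end sketch of those three steps (load, prepare, retrieve); it assumes `faiss-cpu` and `langchain-openai` are installed, and `state_of_the_union.txt` is a placeholder local file:

```python
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load raw documents from a source (here: a hypothetical local text file).
docs = TextLoader("state_of_the_union.txt").load()

# 2. Prepare: split into overlapping chunks sized for the embedding model.
splits = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# 3. Index and retrieve: embed the chunks, then search over them.
retriever = FAISS.from_documents(splits, OpenAIEmbeddings()).as_retriever()
for doc in retriever.invoke("What was said about the economy?")[:2]:
    print(doc.page_content[:200])
```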
**🤖 Agents**

Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which actions to take, then take that action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/v0.2/docs/concepts/#agents) along with the [LangGraph](https://github.com/langchain-ai/langgraph) extension for building custom agents.
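A rough sketch of that decide-act-observe loop using LangGraph's prebuilt ReAct agent (my example, not upstream content; `create_react_agent` and the message format may differ across `langgraph` versions, and the `multiply` tool is invented for illustration):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The agent decides when to call the tool, observes the result, and repeats
# until it can answer; the model choice here is a placeholder.
agent = create_react_agent(ChatOpenAI(model="gpt-3.5-turbo"), [multiply])
result = agent.invoke({"messages": [("human", "What is 6 times 7?")]})
print(result["messages"][-1].content)
```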
## 📖 Documentation

Please see [here](https://python.langchain.com) for full documentation, which includes:

- [Introduction](https://python.langchain.com/v0.2/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/use_cases/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/v0.2/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/v0.2/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://api.python.langchain.com): Thorough documentation of every class and method.

## 🌐 Ecosystem

- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/v0.2/docs/templates/): Example applications hosted with LangServe.


## 💁 Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether in the form of a new feature, improved infrastructure, or better documentation.

For detailed information on how to contribute, see [here](https://python.langchain.com/v0.2/docs/contributing/).

## 🌟 Contributors

[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)
docs/langchain/SECURITY.md
ADDED
@@ -0,0 +1,61 @@
# Security Policy

## Reporting OSS Vulnerabilities

LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide a bounty program for our open source projects.

Please report security vulnerabilities associated with the LangChain open source projects by visiting the following link:

[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)

Before reporting a vulnerability, please review:

1) The In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) The LangChain [security guidelines](https://python.langchain.com/docs/security), to understand what we consider a security vulnerability versus a developer responsibility.

### In-Scope Targets

The following packages and repositories are eligible for bug bounties:

- langchain-core
- langchain (see exceptions)
- langchain-community (see exceptions)
- langgraph
- langserve

### Out-of-Scope Targets

All out-of-scope targets defined by huntr, as well as:

- **langchain-experimental**: This repository is for experimental code and is not eligible for bug bounties; bug reports filed against it will be marked as *interesting* or *waste of time* and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug bounties. This includes the following directories:
  - langchain/tools
  - langchain-community/tools
  - Please review our [security guidelines](https://python.langchain.com/docs/security) for more details; in general, tools interact with the real world, and developers are expected to understand the security implications of their code and are responsible for the security of their tools.
- Code documented with security notices. This will be decided on a case-by-case basis, but such code will likely not be eligible for a bounty, as it is already documented with guidelines that developers should follow to make their application secure.
- Any LangSmith-related repositories or APIs (see below).

## Reporting LangSmith Vulnerabilities

Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.

- LangSmith site: https://smith.langchain.com
- SDK client: https://github.com/langchain-ai/langsmith-sdk

### Other Security Concerns

For any other security concerns, please contact us at `security@langchain.dev`.
docs/langchain/cookbook/Gemma_LangChain.ipynb
ADDED
@@ -0,0 +1,932 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"metadata": {
|
6 |
+
"id": "BYejgj8Zf-LG",
|
7 |
+
"tags": []
|
8 |
+
},
|
9 |
+
"source": [
|
10 |
+
"## Getting started with LangChain and Gemma, running locally or in the Cloud"
|
11 |
+
]
|
12 |
+
},
|
13 |
+
{
|
14 |
+
"cell_type": "markdown",
|
15 |
+
"metadata": {
|
16 |
+
"id": "2IxjMb9-jIJ8"
|
17 |
+
},
|
18 |
+
"source": [
|
19 |
+
"### Installing dependencies"
|
20 |
+
]
|
21 |
+
},
|
22 |
+
{
|
23 |
+
"cell_type": "code",
|
24 |
+
"execution_count": 1,
|
25 |
+
"metadata": {
|
26 |
+
"colab": {
|
27 |
+
"base_uri": "https://localhost:8080/"
|
28 |
+
},
|
29 |
+
"executionInfo": {
|
30 |
+
"elapsed": 9436,
|
31 |
+
"status": "ok",
|
32 |
+
"timestamp": 1708975187360,
|
33 |
+
"user": {
|
34 |
+
"displayName": "",
|
35 |
+
"userId": ""
|
36 |
+
},
|
37 |
+
"user_tz": -60
|
38 |
+
},
|
39 |
+
"id": "XZaTsXfcheTF",
|
40 |
+
"outputId": "eb21d603-d824-46c5-f99f-087fb2f618b1",
|
41 |
+
"tags": []
|
42 |
+
},
|
43 |
+
"outputs": [],
|
44 |
+
"source": [
|
45 |
+
"!pip install --upgrade langchain langchain-google-vertexai"
|
46 |
+
]
|
47 |
+
},
|
48 |
+
{
|
49 |
+
"cell_type": "markdown",
|
50 |
+
"metadata": {
|
51 |
+
"id": "IXmAujvC3Kwp"
|
52 |
+
},
|
53 |
+
"source": [
|
54 |
+
"### Running the model"
|
55 |
+
]
|
56 |
+
},
|
57 |
+
{
|
58 |
+
"cell_type": "markdown",
|
59 |
+
"metadata": {
|
60 |
+
"id": "CI8Elyc5gBQF"
|
61 |
+
},
|
62 |
+
"source": [
|
63 |
+
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint it ready, you need to copy its number."
|
64 |
+
]
|
65 |
+
},
|
66 |
+
{
|
67 |
+
"cell_type": "code",
|
68 |
+
"execution_count": 1,
|
69 |
+
"metadata": {
|
70 |
+
"id": "gv1j8FrVftsC"
|
71 |
+
},
|
72 |
+
"outputs": [],
|
73 |
+
"source": [
|
74 |
+
"# @title Basic parameters\n",
|
75 |
+
"project: str = \"PUT_YOUR_PROJECT_ID_HERE\" # @param {type:\"string\"}\n",
|
76 |
+
"endpoint_id: str = \"PUT_YOUR_ENDPOINT_ID_HERE\" # @param {type:\"string\"}\n",
|
77 |
+
"location: str = \"PUT_YOUR_ENDPOINT_LOCAtION_HERE\" # @param {type:\"string\"}"
|
78 |
+
]
|
79 |
+
},
|
80 |
+
{
|
81 |
+
"cell_type": "code",
|
82 |
+
"execution_count": 3,
|
83 |
+
"metadata": {
|
84 |
+
"executionInfo": {
|
85 |
+
"elapsed": 3,
|
86 |
+
"status": "ok",
|
87 |
+
"timestamp": 1708975440503,
|
88 |
+
"user": {
|
89 |
+
"displayName": "",
|
90 |
+
"userId": ""
|
91 |
+
},
|
92 |
+
"user_tz": -60
|
93 |
+
},
|
94 |
+
"id": "bhIHsFGYjtFt",
|
95 |
+
"tags": []
|
96 |
+
},
|
97 |
+
"outputs": [
|
98 |
+
{
|
99 |
+
"name": "stderr",
|
100 |
+
"output_type": "stream",
|
101 |
+
"text": [
|
102 |
+
"2024-02-27 17:15:10.457149: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
|
103 |
+
"2024-02-27 17:15:10.508925: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
104 |
+
"2024-02-27 17:15:10.508957: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
105 |
+
"2024-02-27 17:15:10.510289: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
106 |
+
"2024-02-27 17:15:10.518898: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
|
107 |
+
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
|
108 |
+
]
|
109 |
+
}
|
110 |
+
],
|
111 |
+
"source": [
|
112 |
+
"from langchain_google_vertexai import (\n",
|
113 |
+
" GemmaChatVertexAIModelGarden,\n",
|
114 |
+
" GemmaVertexAIModelGarden,\n",
|
115 |
+
")"
|
116 |
+
]
|
117 |
+
},
|
118 |
+
{
|
119 |
+
"cell_type": "code",
|
120 |
+
"execution_count": 4,
|
121 |
+
"metadata": {
|
122 |
+
"executionInfo": {
|
123 |
+
"elapsed": 351,
|
124 |
+
"status": "ok",
|
125 |
+
"timestamp": 1708975440852,
|
126 |
+
"user": {
|
127 |
+
"displayName": "",
|
128 |
+
"userId": ""
|
129 |
+
},
|
130 |
+
"user_tz": -60
|
131 |
+
},
|
132 |
+
"id": "WJv-UVWwh0lk",
|
133 |
+
"tags": []
|
134 |
+
},
|
135 |
+
"outputs": [],
|
136 |
+
"source": [
|
137 |
+
"llm = GemmaVertexAIModelGarden(\n",
|
138 |
+
" endpoint_id=endpoint_id,\n",
|
139 |
+
" project=project,\n",
|
140 |
+
" location=location,\n",
|
141 |
+
")"
|
142 |
+
]
|
143 |
+
},
|
144 |
+
{
|
145 |
+
"cell_type": "code",
|
146 |
+
"execution_count": 5,
|
147 |
+
"metadata": {
|
148 |
+
"colab": {
|
149 |
+
"base_uri": "https://localhost:8080/"
|
150 |
+
},
|
151 |
+
"executionInfo": {
|
152 |
+
"elapsed": 714,
|
153 |
+
"status": "ok",
|
154 |
+
"timestamp": 1708975441564,
|
155 |
+
"user": {
|
156 |
+
"displayName": "",
|
157 |
+
"userId": ""
|
158 |
+
},
|
159 |
+
"user_tz": -60
|
160 |
+
},
|
161 |
+
"id": "6kM7cEFdiN9h",
|
162 |
+
"outputId": "fb420c56-5614-4745-cda8-0ee450a3e539",
|
163 |
+
"tags": []
|
164 |
+
},
|
165 |
+
"outputs": [
|
166 |
+
{
|
167 |
+
"name": "stdout",
|
168 |
+
"output_type": "stream",
|
169 |
+
"text": [
|
170 |
+
"Prompt:\n",
|
171 |
+
"What is the meaning of life?\n",
|
172 |
+
"Output:\n",
|
173 |
+
" Who am I? Why do I exist? These are questions I have struggled with\n"
|
174 |
+
]
|
175 |
+
}
|
176 |
+
],
|
177 |
+
"source": [
|
178 |
+
"output = llm.invoke(\"What is the meaning of life?\")\n",
|
179 |
+
"print(output)"
|
180 |
+
]
|
181 |
+
},
|
182 |
+
{
|
183 |
+
"cell_type": "markdown",
|
184 |
+
"metadata": {
|
185 |
+
"id": "zzep9nfmuUcO"
|
186 |
+
},
|
187 |
+
"source": [
|
188 |
+
"We can also use Gemma as a multi-turn chat model:"
|
189 |
+
]
|
190 |
+
},
|
191 |
+
{
|
192 |
+
"cell_type": "code",
|
193 |
+
"execution_count": 7,
|
194 |
+
"metadata": {
|
195 |
+
"colab": {
|
196 |
+
"base_uri": "https://localhost:8080/"
|
197 |
+
},
|
198 |
+
"executionInfo": {
|
199 |
+
"elapsed": 964,
|
200 |
+
"status": "ok",
|
201 |
+
"timestamp": 1708976298189,
|
202 |
+
"user": {
|
203 |
+
"displayName": "",
|
204 |
+
"userId": ""
|
205 |
+
},
|
206 |
+
"user_tz": -60
|
207 |
+
},
|
208 |
+
"id": "8tPHoM5XiZOl",
|
209 |
+
"outputId": "7b8fb652-9aed-47b0-c096-aa1abfc3a2a9",
|
210 |
+
"tags": []
|
211 |
+
},
|
212 |
+
"outputs": [
|
213 |
+
{
|
214 |
+
"name": "stdout",
|
215 |
+
"output_type": "stream",
|
216 |
+
"text": [
|
217 |
+
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of'\n",
|
218 |
+
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nPrompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of<end_of_turn>\\n<start_of_turn>user\\nHow much is 3+3?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nOutput:\\n3-years old.<end_of_turn>\\n\\n<'\n"
|
219 |
+
]
|
220 |
+
}
|
221 |
+
],
|
222 |
+
"source": [
|
223 |
+
"from langchain_core.messages import HumanMessage\n",
|
224 |
+
"\n",
|
225 |
+
"llm = GemmaChatVertexAIModelGarden(\n",
|
226 |
+
" endpoint_id=endpoint_id,\n",
|
227 |
+
" project=project,\n",
|
228 |
+
" location=location,\n",
|
229 |
+
")\n",
|
230 |
+
"\n",
|
231 |
+
"message1 = HumanMessage(content=\"How much is 2+2?\")\n",
|
232 |
+
"answer1 = llm.invoke([message1])\n",
|
233 |
+
"print(answer1)\n",
|
234 |
+
"\n",
|
235 |
+
"message2 = HumanMessage(content=\"How much is 3+3?\")\n",
|
236 |
+
"answer2 = llm.invoke([message1, answer1, message2])\n",
|
237 |
+
"\n",
|
238 |
+
"print(answer2)"
|
239 |
+
]
|
240 |
+
},
|
241 |
+
{
|
242 |
+
"cell_type": "markdown",
|
243 |
+
"metadata": {},
|
244 |
+
"source": [
|
245 |
+
"You can post-process response to avoid repetitions:"
|
246 |
+
]
|
247 |
+
},
|
248 |
+
{
|
249 |
+
"cell_type": "code",
|
250 |
+
"execution_count": 8,
|
251 |
+
"metadata": {
|
252 |
+
"tags": []
|
253 |
+
},
|
254 |
+
"outputs": [
|
255 |
+
{
|
256 |
+
"name": "stdout",
|
257 |
+
"output_type": "stream",
|
258 |
+
"text": [
|
259 |
+
"content='Output:\\n<<humming>>: 2+2 = 4.\\n<end'\n",
|
260 |
+
"content='Output:\\nOutput:\\n<<humming>>: 3+3 = 6.'\n"
|
261 |
+
]
|
262 |
+
}
|
263 |
+
],
|
264 |
+
"source": [
|
265 |
+
"answer1 = llm.invoke([message1], parse_response=True)\n",
|
266 |
+
"print(answer1)\n",
|
267 |
+
"\n",
|
268 |
+
"answer2 = llm.invoke([message1, answer1, message2], parse_response=True)\n",
|
269 |
+
"\n",
|
270 |
+
"print(answer2)"
|
271 |
+
]
|
272 |
+
},
|
273 |
+
{
|
274 |
+
"cell_type": "markdown",
|
275 |
+
"metadata": {
|
276 |
+
"id": "VEfjqo7fjARR"
|
277 |
+
},
|
278 |
+
"source": [
|
279 |
+
"## Running Gemma locally from Kaggle"
|
280 |
+
]
|
281 |
+
},
|
282 |
+
{
|
283 |
+
"cell_type": "markdown",
|
284 |
+
"metadata": {
|
285 |
+
"id": "gVW8QDzHu7TA"
|
286 |
+
},
|
287 |
+
"source": [
|
288 |
+
"In order to run Gemma locally, you can download it from Kaggle first. In order to do this, you'll need to login into the Kaggle platform, create a API key and download a `kaggle.json` Read more about Kaggle auth [here](https://www.kaggle.com/docs/api)."
|
289 |
+
]
|
290 |
+
},
|
291 |
+
{
|
292 |
+
"cell_type": "markdown",
|
293 |
+
"metadata": {
|
294 |
+
"id": "S1EsXQ3XvZkQ"
|
295 |
+
},
|
296 |
+
"source": [
|
297 |
+
"### Installation"
|
298 |
+
]
|
299 |
+
},
|
300 |
+
{
|
301 |
+
"cell_type": "code",
|
302 |
+
"execution_count": 7,
|
303 |
+
"metadata": {
|
304 |
+
"executionInfo": {
|
305 |
+
"elapsed": 335,
|
306 |
+
"status": "ok",
|
307 |
+
"timestamp": 1708976305471,
|
308 |
+
"user": {
|
309 |
+
"displayName": "",
|
310 |
+
"userId": ""
|
311 |
+
},
|
312 |
+
"user_tz": -60
|
313 |
+
},
|
314 |
+
"id": "p8SMwpKRvbef",
|
315 |
+
"tags": []
|
316 |
+
},
|
317 |
+
"outputs": [
|
318 |
+
{
|
319 |
+
"name": "stderr",
|
320 |
+
"output_type": "stream",
|
321 |
+
"text": [
|
322 |
+
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
|
323 |
+
" pid, fd = os.forkpty()\n"
|
324 |
+
]
|
325 |
+
}
|
326 |
+
],
|
327 |
+
"source": [
|
328 |
+
"!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/kaggle.json"
|
329 |
+
]
|
330 |
+
},
|
331 |
+
{
|
332 |
+
"cell_type": "code",
|
333 |
+
"execution_count": 11,
|
334 |
+
"metadata": {
|
335 |
+
"executionInfo": {
|
336 |
+
"elapsed": 7802,
|
337 |
+
"status": "ok",
|
338 |
+
"timestamp": 1708976363010,
|
339 |
+
"user": {
|
340 |
+
"displayName": "",
|
341 |
+
"userId": ""
|
342 |
+
},
|
343 |
+
"user_tz": -60
|
344 |
+
},
|
345 |
+
"id": "Yr679aePv9Fq",
|
346 |
+
"tags": []
|
347 |
+
},
|
348 |
+
"outputs": [
|
349 |
+
{
|
350 |
+
"name": "stderr",
|
351 |
+
"output_type": "stream",
|
352 |
+
"text": [
|
353 |
+
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
|
354 |
+
" pid, fd = os.forkpty()\n"
|
355 |
+
]
|
356 |
+
},
|
357 |
+
{
|
358 |
+
"name": "stdout",
|
359 |
+
"output_type": "stream",
|
360 |
+
"text": [
|
361 |
+
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
|
362 |
+
"tensorstore 0.1.54 requires ml-dtypes>=0.3.1, but you have ml-dtypes 0.2.0 which is incompatible.\u001b[0m\u001b[31m\n",
|
363 |
+
"\u001b[0m"
|
364 |
+
]
|
365 |
+
}
|
366 |
+
],
|
367 |
+
"source": [
|
368 |
+
"!pip install keras>=3 keras_nlp"
|
369 |
+
]
|
370 |
+
},
|
371 |
+
{
|
372 |
+
"cell_type": "markdown",
|
373 |
+
"metadata": {
|
374 |
+
"id": "E9zn8nYpv3QZ"
|
375 |
+
},
|
376 |
+
"source": [
|
377 |
+
"### Usage"
|
378 |
+
]
|
379 |
+
},
|
380 |
+
{
|
381 |
+
"cell_type": "code",
|
382 |
+
"execution_count": 1,
|
383 |
+
"metadata": {
|
384 |
+
"executionInfo": {
|
385 |
+
"elapsed": 8536,
|
386 |
+
"status": "ok",
|
387 |
+
"timestamp": 1708976601206,
|
388 |
+
"user": {
|
389 |
+
"displayName": "",
|
390 |
+
"userId": ""
|
391 |
+
},
|
392 |
+
"user_tz": -60
|
393 |
+
},
|
394 |
+
"id": "0LFRmY8TjCkI",
|
395 |
+
"tags": []
|
396 |
+
},
|
397 |
+
"outputs": [
|
398 |
+
{
|
399 |
+
"name": "stderr",
|
400 |
+
"output_type": "stream",
|
401 |
+
"text": [
|
402 |
+
"2024-02-27 16:38:40.797559: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
|
403 |
+
"2024-02-27 16:38:40.848444: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
404 |
+
"2024-02-27 16:38:40.848478: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
405 |
+
"2024-02-27 16:38:40.849728: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
406 |
+
"2024-02-27 16:38:40.857936: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
|
407 |
+
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
|
408 |
+
]
|
409 |
+
}
|
410 |
+
],
|
411 |
+
"source": [
|
412 |
+
"from langchain_google_vertexai import GemmaLocalKaggle"
|
413 |
+
]
|
414 |
+
},
|
415 |
+
{
|
416 |
+
"cell_type": "markdown",
|
417 |
+
"metadata": {
|
418 |
+
"id": "v-o7oXVavdMQ"
|
419 |
+
},
|
420 |
+
"source": [
|
421 |
+
"You can specify the keras backend (by default it's `tensorflow`, but you can change it be `jax` or `torch`)."
|
422 |
+
]
|
423 |
+
},
|
424 |
+
{
|
425 |
+
"cell_type": "code",
|
426 |
+
"execution_count": 2,
|
427 |
+
"metadata": {
|
428 |
+
"executionInfo": {
|
429 |
+
"elapsed": 9,
|
430 |
+
"status": "ok",
|
431 |
+
"timestamp": 1708976601206,
|
432 |
+
"user": {
|
433 |
+
"displayName": "",
|
434 |
+
"userId": ""
|
435 |
+
},
|
436 |
+
"user_tz": -60
|
437 |
+
},
|
438 |
+
"id": "vvTUH8DNj5SF",
|
439 |
+
"tags": []
|
440 |
+
},
|
441 |
+
"outputs": [],
|
442 |
+
"source": [
|
443 |
+
"# @title Basic parameters\n",
|
444 |
+
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
|
445 |
+
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
|
446 |
+
]
|
447 |
+
},
|
448 |
+
{
|
449 |
+
"cell_type": "code",
|
450 |
+
"execution_count": 3,
|
451 |
+
"metadata": {
|
452 |
+
"executionInfo": {
|
453 |
+
"elapsed": 40836,
|
454 |
+
"status": "ok",
|
455 |
+
"timestamp": 1708976761257,
|
456 |
+
"user": {
|
457 |
+
"displayName": "",
|
458 |
+
"userId": ""
|
459 |
+
},
|
460 |
+
"user_tz": -60
|
461 |
+
},
|
462 |
+
"id": "YOmrqxo5kHXK",
|
463 |
+
"tags": []
|
464 |
+
},
|
465 |
+
"outputs": [
|
466 |
+
{
|
467 |
+
"name": "stderr",
|
468 |
+
"output_type": "stream",
|
469 |
+
"text": [
|
470 |
+
"2024-02-27 16:23:14.661164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
|
471 |
+
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
|
472 |
+
]
|
473 |
+
}
|
474 |
+
],
|
475 |
+
"source": [
|
476 |
+
"llm = GemmaLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
|
477 |
+
]
|
478 |
+
},
|
479 |
+
{
|
480 |
+
"cell_type": "code",
|
481 |
+
"execution_count": 7,
|
482 |
+
"metadata": {
|
483 |
+
"id": "Zu6yPDUgkQtQ",
|
484 |
+
"tags": []
|
485 |
+
},
|
486 |
+
"outputs": [
|
487 |
+
{
|
488 |
+
"name": "stderr",
|
489 |
+
"output_type": "stream",
|
490 |
+
"text": [
|
491 |
+
"W0000 00:00:1709051129.518076 774855 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
|
492 |
+
]
|
493 |
+
},
|
494 |
+
{
|
495 |
+
"name": "stdout",
|
496 |
+
"output_type": "stream",
|
497 |
+
"text": [
|
498 |
+
"What is the meaning of life?\n",
|
499 |
+
"\n",
|
500 |
+
"The question is one of the most important questions in the world.\n",
|
501 |
+
"\n",
|
502 |
+
"It’s the question that has\n"
|
503 |
+
]
|
504 |
+
}
|
505 |
+
],
|
506 |
+
"source": [
|
507 |
+
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=30)\n",
|
508 |
+
"print(output)"
|
509 |
+
]
|
510 |
+
},
|
511 |
+
{
|
512 |
+
"cell_type": "markdown",
|
513 |
+
"metadata": {},
|
514 |
+
"source": [
|
515 |
+
"### ChatModel"
|
516 |
+
]
|
517 |
+
},
|
518 |
+
{
|
519 |
+
"cell_type": "markdown",
|
520 |
+
"metadata": {
|
521 |
+
"id": "MSctpRE4u43N"
|
522 |
+
},
|
523 |
+
"source": [
|
524 |
+
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
|
525 |
+
]
|
526 |
+
},
|
527 |
+
{
|
528 |
+
"cell_type": "code",
|
529 |
+
"execution_count": 1,
|
530 |
+
"metadata": {
|
531 |
+
"tags": []
|
532 |
+
},
|
533 |
+
"outputs": [
|
534 |
+
{
|
535 |
+
"name": "stderr",
|
536 |
+
"output_type": "stream",
|
537 |
+
"text": [
|
538 |
+
"2024-02-27 16:58:22.331067: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
|
539 |
+
"2024-02-27 16:58:22.382948: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
540 |
+
"2024-02-27 16:58:22.382978: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
541 |
+
"2024-02-27 16:58:22.384312: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
542 |
+
"2024-02-27 16:58:22.392767: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
|
543 |
+
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
|
544 |
+
]
|
545 |
+
}
|
546 |
+
],
|
547 |
+
"source": [
|
548 |
+
"from langchain_google_vertexai import GemmaChatLocalKaggle"
|
549 |
+
]
|
550 |
+
},
|
551 |
+
{
|
552 |
+
"cell_type": "code",
|
553 |
+
"execution_count": 2,
|
554 |
+
"metadata": {
|
555 |
+
"tags": []
|
556 |
+
},
|
557 |
+
"outputs": [],
|
558 |
+
"source": [
|
559 |
+
"# @title Basic parameters\n",
|
560 |
+
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
|
561 |
+
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
|
562 |
+
]
|
563 |
+
},
|
564 |
+
{
|
565 |
+
"cell_type": "code",
|
566 |
+
"execution_count": 3,
|
567 |
+
"metadata": {
|
568 |
+
"tags": []
|
569 |
+
},
|
570 |
+
"outputs": [
|
571 |
+
{
|
572 |
+
"name": "stderr",
|
573 |
+
"output_type": "stream",
|
574 |
+
"text": [
|
575 |
+
"2024-02-27 16:58:29.001922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
|
576 |
+
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
|
577 |
+
]
|
578 |
+
}
|
579 |
+
],
|
580 |
+
"source": [
|
581 |
+
"llm = GemmaChatLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
|
582 |
+
]
|
583 |
+
},
|
584 |
+
{
|
585 |
+
"cell_type": "code",
|
586 |
+
"execution_count": 4,
|
587 |
+
"metadata": {
|
588 |
+
"executionInfo": {
|
589 |
+
"elapsed": 3,
|
590 |
+
"status": "aborted",
|
591 |
+
"timestamp": 1708976382957,
|
592 |
+
"user": {
|
593 |
+
"displayName": "",
|
594 |
+
"userId": ""
|
595 |
+
},
|
596 |
+
"user_tz": -60
|
597 |
+
},
|
598 |
+
"id": "JrJmvZqwwLqj"
|
599 |
+
},
|
600 |
+
"outputs": [
|
601 |
+
{
|
602 |
+
"name": "stderr",
|
603 |
+
"output_type": "stream",
|
604 |
+
"text": [
|
605 |
+
"2024-02-27 16:58:49.848412: I external/local_xla/xla/service/service.cc:168] XLA service 0x55adc0cf2c10 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
|
606 |
+
"2024-02-27 16:58:49.848458: I external/local_xla/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA L4, Compute Capability 8.9\n",
|
607 |
+
"2024-02-27 16:58:50.116614: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n",
|
608 |
+
"2024-02-27 16:58:54.389324: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:454] Loaded cuDNN version 8900\n",
|
609 |
+
"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
|
610 |
+
"I0000 00:00:1709053145.225207 784891 device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n",
|
611 |
+
"W0000 00:00:1709053145.284227 784891 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
|
612 |
+
]
|
613 |
+
},
|
614 |
+
{
|
615 |
+
"name": "stdout",
|
616 |
+
"output_type": "stream",
|
617 |
+
"text": [
|
618 |
+
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.\"\n"
|
619 |
+
]
|
620 |
+
}
|
621 |
+
],
|
622 |
+
"source": [
|
623 |
+
"from langchain_core.messages import HumanMessage\n",
|
624 |
+
"\n",
|
625 |
+
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
|
626 |
+
"answer1 = llm.invoke([message1], max_tokens=30)\n",
|
627 |
+
"print(answer1)"
|
628 |
+
]
|
629 |
+
},
|
630 |
+
{
|
631 |
+
"cell_type": "code",
|
632 |
+
"execution_count": 5,
|
633 |
+
"metadata": {
|
634 |
+
"tags": []
|
635 |
+
},
|
636 |
+
"outputs": [
|
637 |
+
{
|
638 |
+
"name": "stdout",
|
639 |
+
"output_type": "stream",
|
640 |
+
"text": [
|
641 |
+
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\"\n"
|
642 |
+
]
|
643 |
+
}
|
644 |
+
],
|
645 |
+
"source": [
|
646 |
+
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
|
647 |
+
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60)\n",
|
648 |
+
"\n",
|
649 |
+
"print(answer2)"
|
650 |
+
]
|
651 |
+
},
|
652 |
+
{
|
653 |
+
"cell_type": "markdown",
|
654 |
+
"metadata": {},
|
655 |
+
"source": [
|
656 |
+
"You can post-process the response if you want to avoid multi-turn statements:"
|
657 |
+
]
|
658 |
+
},
|
659 |
+
{
|
660 |
+
"cell_type": "code",
|
661 |
+
"execution_count": 7,
|
662 |
+
"metadata": {
|
663 |
+
"tags": []
|
664 |
+
},
|
665 |
+
"outputs": [
|
666 |
+
{
|
667 |
+
"name": "stdout",
|
668 |
+
"output_type": "stream",
|
669 |
+
"text": [
|
670 |
+
"content=\"I'm a model.\\n Tampoco\\nI'm a model.\"\n",
|
671 |
+
"content='I can help you with your modeling.\\n Tampoco\\nI can'\n"
|
672 |
+
]
|
673 |
+
}
|
674 |
+
],
|
675 |
+
"source": [
|
676 |
+
"answer1 = llm.invoke([message1], max_tokens=30, parse_response=True)\n",
|
677 |
+
"print(answer1)\n",
|
678 |
+
"\n",
|
679 |
+
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60, parse_response=True)\n",
|
680 |
+
"print(answer2)"
|
681 |
+
]
|
682 |
+
},
|
683 |
+
{
|
684 |
+
"cell_type": "markdown",
|
685 |
+
"metadata": {
|
686 |
+
"id": "EiZnztso7hyF"
|
687 |
+
},
|
688 |
+
"source": [
|
689 |
+
"## Running Gemma locally from HuggingFace"
|
690 |
+
]
|
691 |
+
},
|
692 |
+
{
|
693 |
+
"cell_type": "code",
|
694 |
+
"execution_count": 1,
|
695 |
+
"metadata": {
|
696 |
+
"id": "qqAqsz5R7nKf",
|
697 |
+
"tags": []
|
698 |
+
},
|
699 |
+
"outputs": [
|
700 |
+
{
|
701 |
+
"name": "stderr",
|
702 |
+
"output_type": "stream",
|
703 |
+
"text": [
|
704 |
+
"2024-02-27 17:02:21.832409: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
|
705 |
+
"2024-02-27 17:02:21.883625: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
706 |
+
"2024-02-27 17:02:21.883656: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
707 |
+
"2024-02-27 17:02:21.884987: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
708 |
+
"2024-02-27 17:02:21.893340: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
|
709 |
+
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
|
710 |
+
]
|
711 |
+
}
|
712 |
+
],
|
713 |
+
"source": [
|
714 |
+
"from langchain_google_vertexai import GemmaChatLocalHF, GemmaLocalHF"
|
715 |
+
]
|
716 |
+
},
|
717 |
+
{
|
718 |
+
"cell_type": "code",
|
719 |
+
"execution_count": 2,
|
720 |
+
"metadata": {
|
721 |
+
"id": "tsyntzI08cOr",
|
722 |
+
"tags": []
|
723 |
+
},
|
724 |
+
"outputs": [],
|
725 |
+
"source": [
|
726 |
+
"# @title Basic parameters\n",
|
727 |
+
"hf_access_token: str = \"PUT_YOUR_TOKEN_HERE\" # @param {type:\"string\"}\n",
|
728 |
+
"model_name: str = \"google/gemma-2b\" # @param {type:\"string\"}"
|
729 |
+
]
|
730 |
+
},
|
731 |
+
{
|
732 |
+
"cell_type": "code",
|
733 |
+
"execution_count": 4,
|
734 |
+
"metadata": {
|
735 |
+
"id": "JWrqEkOo8sm9",
|
736 |
+
"tags": []
|
737 |
+
},
|
738 |
+
"outputs": [
|
739 |
+
{
|
740 |
+
"data": {
|
741 |
+
"application/vnd.jupyter.widget-view+json": {
|
742 |
+
"model_id": "a0d6de5542254ed1b6d3ba65465e050e",
|
743 |
+
"version_major": 2,
|
744 |
+
"version_minor": 0
|
745 |
+
},
|
746 |
+
"text/plain": [
|
747 |
+
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
|
748 |
+
]
|
749 |
+
},
|
750 |
+
"metadata": {},
|
751 |
+
"output_type": "display_data"
|
752 |
+
}
|
753 |
+
],
|
754 |
+
"source": [
|
755 |
+
"llm = GemmaLocalHF(model_name=\"google/gemma-2b\", hf_access_token=hf_access_token)"
|
756 |
+
]
|
757 |
+
},
|
758 |
+
{
|
759 |
+
"cell_type": "code",
|
760 |
+
"execution_count": 6,
|
761 |
+
"metadata": {
|
762 |
+
"id": "VX96Jf4Y84k-",
|
763 |
+
"tags": []
|
764 |
+
},
|
765 |
+
"outputs": [
|
766 |
+
{
|
767 |
+
"name": "stdout",
|
768 |
+
"output_type": "stream",
|
769 |
+
"text": [
|
770 |
+
"What is the meaning of life?\n",
|
771 |
+
"\n",
|
772 |
+
"The question is one of the most important questions in the world.\n",
|
773 |
+
"\n",
|
774 |
+
"It’s the question that has been asked by philosophers, theologians, and scientists for centuries.\n",
|
775 |
+
"\n",
|
776 |
+
"And it’s the question that\n"
|
777 |
+
]
|
778 |
+
}
|
779 |
+
],
|
780 |
+
"source": [
|
781 |
+
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=50)\n",
|
782 |
+
"print(output)"
|
783 |
+
]
|
784 |
+
},
|
785 |
+
{
|
786 |
+
"cell_type": "markdown",
|
787 |
+
"metadata": {},
|
788 |
+
"source": [
789 + "Same as above, using Gemma locally as a multi-turn chat model. You might need to restart the notebook and free your GPU memory to avoid OOM errors:"
790 + ]
791 + },
792 + {
793 + "cell_type": "code",
794 + "execution_count": 3,
795 + "metadata": {
796 + "id": "9x-jmEBg9Mk1"
797 + },
798 + "outputs": [
799 + {
800 + "data": {
801 + "application/vnd.jupyter.widget-view+json": {
802 + "model_id": "c9a0b8e161d74a6faca83b1be96dee27",
803 + "version_major": 2,
804 + "version_minor": 0
805 + },
806 + "text/plain": [
807 + "Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
808 + ]
809 + },
810 + "metadata": {},
811 + "output_type": "display_data"
812 + }
813 + ],
814 + "source": [
815 + "llm = GemmaChatLocalHF(model_name=model_name, hf_access_token=hf_access_token)"
816 + ]
817 + },
818 + {
819 + "cell_type": "code",
820 + "execution_count": 4,
821 + "metadata": {
822 + "id": "qv_OSaMm9PVy"
823 + },
824 + "outputs": [
825 + {
826 + "name": "stdout",
827 + "output_type": "stream",
828 + "text": [
829 + "content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean\"\n"
830 + ]
831 + }
832 + ],
833 + "source": [
834 + "from langchain_core.messages import HumanMessage\n",
835 + "\n",
836 + "message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
837 + "answer1 = llm.invoke([message1], max_tokens=60)\n",
838 + "print(answer1)"
839 + ]
840 + },
841 + {
842 + "cell_type": "code",
843 + "execution_count": 8,
844 + "metadata": {
845 + "tags": []
846 + },
847 + "outputs": [
848 + {
849 + "name": "stdout",
850 + "output_type": "stream",
851 + "text": [
852 + "content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\\nI can help you with anything.\\n<\"\n"
853 + ]
854 + }
855 + ],
856 + "source": [
857 + "message2 = HumanMessage(content=\"What can you help me with?\")\n",
858 + "answer2 = llm.invoke([message1, answer1, message2], max_tokens=140)\n",
859 + "\n",
860 + "print(answer2)"
861 + ]
862 + },
863 + {
864 + "cell_type": "markdown",
865 + "metadata": {},
866 + "source": [
867 + "And the same with post-processing:"
868 + ]
869 + },
870 + {
871 + "cell_type": "code",
872 + "execution_count": 11,
873 + "metadata": {
874 + "tags": []
875 + },
876 + "outputs": [
877 + {
878 + "name": "stdout",
879 + "output_type": "stream",
880 + "text": [
881 + "content=\"I'm a model.\\n<end_of_turn>\\n\"\n",
882 + "content='I can help you with anything.\\n<end_of_turn>\\n<end_of_turn>\\n'\n"
883 + ]
884 + }
885 + ],
886 + "source": [
887 + "answer1 = llm.invoke([message1], max_tokens=60, parse_response=True)\n",
888 + "print(answer1)\n",
889 + "\n",
890 + "answer2 = llm.invoke([message1, answer1, message2], max_tokens=120, parse_response=True)\n",
891 + "print(answer2)"
892 + ]
893 + },
894 + {
895 + "cell_type": "code",
896 + "execution_count": null,
897 + "metadata": {},
898 + "outputs": [],
899 + "source": []
900 + }
901 + ],
902 + "metadata": {
903 + "colab": {
904 + "provenance": []
905 + },
906 + "environment": {
907 + "kernel": "python3",
908 + "name": ".m116",
909 + "type": "gcloud",
910 + "uri": "gcr.io/deeplearning-platform-release/:m116"
911 + },
912 + "kernelspec": {
913 + "display_name": "Python 3",
914 + "language": "python",
915 + "name": "python3"
916 + },
917 + "language_info": {
918 + "codemirror_mode": {
919 + "name": "ipython",
920 + "version": 3
921 + },
922 + "file_extension": ".py",
923 + "mimetype": "text/x-python",
924 + "name": "python",
925 + "nbconvert_exporter": "python",
926 + "pygments_lexer": "ipython3",
927 + "version": "3.10.13"
928 + }
929 + },
930 + "nbformat": 4,
931 + "nbformat_minor": 4
932 + }
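A note on the `parse_response=True` flag used in the Gemma cells above: the notebook does not show the parser itself, so the following is a rough, purely illustrative sketch of what the post-processing amounts to, not the `langchain_google_vertexai` implementation. The idea is to keep only the last model turn of the transcript and cut at the first end-of-turn marker; the function name `parse_gemma_response` is hypothetical.

```python
# Illustrative sketch only (assumption): keep the text of the last model turn
# and drop the echoed prompt plus any trailing <end_of_turn> markers.
def parse_gemma_response(raw: str) -> str:
    """Extract the final model turn from a raw Gemma chat transcript."""
    marker = "<start_of_turn>model"
    # Take everything after the last model-turn marker...
    tail = raw.rsplit(marker, 1)[-1]
    # ...and cut at the first end-of-turn token, if present.
    return tail.split("<end_of_turn>", 1)[0].strip()


raw = (
    "<start_of_turn>user\nHi! Who are you?<end_of_turn>\n"
    "<start_of_turn>model\nI'm a model.\n<end_of_turn>\n"
)
print(parse_gemma_response(raw))  # -> I'm a model.
```

The recorded outputs above show that the library's actual parser is more permissive (stray `<end_of_turn>` tokens can survive), so treat this only as a mental model of the cleanup step.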
docs/langchain/cookbook/LLaMA2_sql_chat.ipynb
ADDED
@@ -0,0 +1,398 @@
1 + {
2 + "cells": [
3 + {
4 + "attachments": {},
5 + "cell_type": "markdown",
6 + "id": "fc935871-7640-41c6-b798-58514d860fe0",
7 + "metadata": {},
8 + "source": [
9 + "## LLaMA2 chat with SQL\n",
10 + "\n",
11 + "Open source, local LLMs are great to consider for any application that demands data privacy.\n",
12 + "\n",
13 + "SQL is one good example. \n",
14 + "\n",
15 + "This cookbook shows how to perform text-to-SQL using various versions of LLaMA2 run locally.\n",
16 + "\n",
17 + "## Packages"
18 + ]
19 + },
20 + {
21 + "cell_type": "code",
22 + "execution_count": null,
23 + "id": "81adcf8b-395a-4f02-8749-ac976942b446",
24 + "metadata": {},
25 + "outputs": [],
26 + "source": [
27 + "! pip install langchain replicate"
28 + ]
29 + },
30 + {
31 + "cell_type": "markdown",
32 + "id": "8e13ed66-300b-4a23-b8ac-44df68ee4733",
33 + "metadata": {},
34 + "source": [
35 + "## LLM\n",
36 + "\n",
37 + "There are a few ways to access LLaMA2.\n",
38 + "\n",
39 + "To run locally, we use Ollama.ai. \n",
40 + "\n",
41 + "See [here](/docs/integrations/chat/ollama) for details on installation and setup.\n",
42 + "\n",
43 + "Also, see [here](/docs/guides/development/local_llms) for our full guide on local LLMs.\n",
44 + " \n",
45 + "To use an external API, which is not private, we can use Replicate."
46 + ]
47 + },
48 + {
49 + "cell_type": "code",
50 + "execution_count": 1,
51 + "id": "6a75a5c6-34ee-4ab9-a664-d9b432d812ee",
52 + "metadata": {},
53 + "outputs": [
54 + {
55 + "name": "stderr",
56 + "output_type": "stream",
57 + "text": [
58 + "Init param `input` is deprecated, please use `model_kwargs` instead.\n"
59 + ]
60 + }
61 + ],
62 + "source": [
63 + "# Local\n",
64 + "from langchain_community.chat_models import ChatOllama\n",
65 + "\n",
66 + "llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
67 + "llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",
68 + "\n",
69 + "# API\n",
70 + "from langchain_community.llms import Replicate\n",
71 + "\n",
72 + "# REPLICATE_API_TOKEN = getpass()\n",
73 + "# os.environ[\"REPLICATE_API_TOKEN\"] = REPLICATE_API_TOKEN\n",
74 + "replicate_id = \"meta/llama-2-13b-chat:f4e2de70d66816a838a89eeeb621910adffb0dd0baba3976c96980970978018d\"\n",
75 + "llama2_chat_replicate = Replicate(\n",
76 + " model=replicate_id, input={\"temperature\": 0.01, \"max_length\": 500, \"top_p\": 1}\n",
77 + ")"
78 + ]
79 + },
80 + {
81 + "cell_type": "code",
82 + "execution_count": 2,
83 + "id": "ce96f7ea-b3d5-44e1-9fa5-a79e04a9e1fb",
84 + "metadata": {},
85 + "outputs": [],
86 + "source": [
87 + "# Simply set the LLM we want to use\n",
88 + "llm = llama2_chat"
89 + ]
90 + },
91 + {
92 + "cell_type": "markdown",
93 + "id": "80222165-f353-4e35-a123-5f70fd70c6c8",
94 + "metadata": {},
95 + "source": [
96 + "## DB\n",
97 + "\n",
98 + "Connect to a SQLite DB.\n",
99 + "\n",
100 + "To create this particular DB, you can use the code and follow the steps shown [here](https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/StructuredLlama.ipynb)."
101 + ]
102 + },
103 + {
104 + "cell_type": "code",
105 + "execution_count": 3,
106 + "id": "025bdd82-3bb1-4948-bc7c-c3ccd94fd05c",
107 + "metadata": {},
108 + "outputs": [],
109 + "source": [
110 + "from langchain_community.utilities import SQLDatabase\n",
111 + "\n",
112 + "db = SQLDatabase.from_uri(\"sqlite:///nba_roster.db\", sample_rows_in_table_info=0)\n",
113 + "\n",
114 + "\n",
115 + "def get_schema(_):\n",
116 + " return db.get_table_info()\n",
117 + "\n",
118 + "\n",
119 + "def run_query(query):\n",
120 + " return db.run(query)"
121 + ]
122 + },
123 + {
124 + "cell_type": "markdown",
125 + "id": "654b3577-baa2-4e12-a393-f40e5db49ac7",
126 + "metadata": {},
127 + "source": [
128 + "## Query a SQL Database \n",
129 + "\n",
130 + "Follow the runnables workflow [here](https://python.langchain.com/docs/expression_language/cookbook/sql_db)."
131 + ]
132 + },
133 + {
134 + "cell_type": "code",
135 + "execution_count": 4,
136 + "id": "5a4933ea-d9c0-4b0a-8177-ba4490c6532b",
137 + "metadata": {},
138 + "outputs": [
139 + {
140 + "data": {
141 + "text/plain": [
142 + "' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
143 + ]
144 + },
145 + "execution_count": 4,
146 + "metadata": {},
147 + "output_type": "execute_result"
148 + }
149 + ],
150 + "source": [
151 + "# Prompt\n",
152 + "from langchain_core.prompts import ChatPromptTemplate\n",
153 + "\n",
154 + "# Update the template based on the type of SQL Database like MySQL, Microsoft SQL Server and so on\n",
155 + "template = \"\"\"Based on the table schema below, write a SQL query that would answer the user's question:\n",
156 + "{schema}\n",
157 + "\n",
158 + "Question: {question}\n",
159 + "SQL Query:\"\"\"\n",
160 + "prompt = ChatPromptTemplate.from_messages(\n",
161 + " [\n",
162 + " (\"system\", \"Given an input question, convert it to a SQL query. No pre-amble.\"),\n",
163 + " (\"human\", template),\n",
164 + " ]\n",
165 + ")\n",
166 + "\n",
167 + "# Chain to query\n",
168 + "from langchain_core.output_parsers import StrOutputParser\n",
169 + "from langchain_core.runnables import RunnablePassthrough\n",
170 + "\n",
171 + "sql_response = (\n",
172 + " RunnablePassthrough.assign(schema=get_schema)\n",
173 + " | prompt\n",
174 + " | llm.bind(stop=[\"\\nSQLResult:\"])\n",
175 + " | StrOutputParser()\n",
176 + ")\n",
177 + "\n",
178 + "sql_response.invoke({\"question\": \"What team is Klay Thompson on?\"})"
179 + ]
180 + },
181 + {
182 + "cell_type": "markdown",
183 + "id": "a0e9e2c8-9b88-4853-ac86-001bc6cc6695",
184 + "metadata": {},
185 + "source": [
186 + "We can review the results:\n",
187 + "\n",
188 + "* [LangSmith trace](https://smith.langchain.com/public/afa56a06-b4e2-469a-a60f-c1746e75e42b/r) LLaMA2-13 Replicate API\n",
189 + "* [LangSmith trace](https://smith.langchain.com/public/2d4ecc72-6b8f-4523-8f0b-ea95c6b54a1d/r) LLaMA2-13 local \n"
190 + ]
191 + },
192 + {
193 + "cell_type": "code",
194 + "execution_count": 15,
195 + "id": "2a2825e3-c1b6-4f7d-b9c9-d9835de323bb",
196 + "metadata": {},
197 + "outputs": [
198 + {
199 + "data": {
200 + "text/plain": [
201 + "AIMessage(content=' Based on the table schema and SQL query, there are 30 unique teams in the NBA.')"
202 + ]
203 + },
204 + "execution_count": 15,
205 + "metadata": {},
206 + "output_type": "execute_result"
207 + }
208 + ],
209 + "source": [
210 + "# Chain to answer\n",
211 + "template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
212 + "{schema}\n",
213 + "\n",
214 + "Question: {question}\n",
215 + "SQL Query: {query}\n",
216 + "SQL Response: {response}\"\"\"\n",
217 + "prompt_response = ChatPromptTemplate.from_messages(\n",
218 + " [\n",
219 + " (\n",
220 + " \"system\",\n",
221 + " \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
222 + " ),\n",
223 + " (\"human\", template),\n",
224 + " ]\n",
225 + ")\n",
226 + "\n",
227 + "full_chain = (\n",
228 + " RunnablePassthrough.assign(query=sql_response)\n",
229 + " | RunnablePassthrough.assign(\n",
230 + " schema=get_schema,\n",
231 + " response=lambda x: db.run(x[\"query\"]),\n",
232 + " )\n",
233 + " | prompt_response\n",
234 + " | llm\n",
235 + ")\n",
236 + "\n",
237 + "full_chain.invoke({\"question\": \"How many unique teams are there?\"})"
238 + ]
239 + },
240 + {
241 + "cell_type": "markdown",
242 + "id": "ec17b3ee-6618-4681-b6df-089bbb5ffcd7",
243 + "metadata": {},
244 + "source": [
245 + "We can review the results:\n",
246 + "\n",
247 + "* [LangSmith trace](https://smith.langchain.com/public/10420721-746a-4806-8ecf-d6dc6399d739/r) LLaMA2-13 Replicate API\n",
248 + "* [LangSmith trace](https://smith.langchain.com/public/5265ebab-0a22-4f37-936b-3300f2dfa1c1/r) LLaMA2-13 local "
249 + ]
250 + },
251 + {
252 + "cell_type": "markdown",
253 + "id": "1e85381b-1edc-4bb3-a7bd-2ab23f81e54d",
254 + "metadata": {},
255 + "source": [
256 + "## Chat with a SQL DB \n",
257 + "\n",
258 + "Next, we can add memory."
259 + ]
260 + },
261 + {
262 + "cell_type": "code",
263 + "execution_count": 7,
264 + "id": "022868f2-128e-42f5-8d90-d3bb2f11d994",
265 + "metadata": {},
266 + "outputs": [
267 + {
268 + "data": {
269 + "text/plain": [
270 + "' SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = \\'Klay Thompson\\';'"
271 + ]
272 + },
273 + "execution_count": 7,
274 + "metadata": {},
275 + "output_type": "execute_result"
276 + }
277 + ],
278 + "source": [
279 + "# Prompt\n",
280 + "from langchain.memory import ConversationBufferMemory\n",
281 + "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
282 + "\n",
283 + "template = \"\"\"Given an input question, convert it to a SQL query. No pre-amble. Based on the table schema below, write a SQL query that would answer the user's question:\n",
284 + "{schema}\n",
285 + "\"\"\"\n",
286 + "prompt = ChatPromptTemplate.from_messages(\n",
287 + " [\n",
288 + " (\"system\", template),\n",
289 + " MessagesPlaceholder(variable_name=\"history\"),\n",
290 + " (\"human\", \"{question}\"),\n",
291 + " ]\n",
292 + ")\n",
293 + "\n",
294 + "memory = ConversationBufferMemory(return_messages=True)\n",
295 + "\n",
296 + "# Chain to query with memory\n",
297 + "from langchain_core.runnables import RunnableLambda\n",
298 + "\n",
299 + "sql_chain = (\n",
300 + " RunnablePassthrough.assign(\n",
301 + " schema=get_schema,\n",
302 + " history=RunnableLambda(lambda x: memory.load_memory_variables(x)[\"history\"]),\n",
303 + " )\n",
304 + " | prompt\n",
305 + " | llm.bind(stop=[\"\\nSQLResult:\"])\n",
306 + " | StrOutputParser()\n",
307 + ")\n",
308 + "\n",
309 + "\n",
310 + "def save(input_output):\n",
311 + " output = {\"output\": input_output.pop(\"output\")}\n",
312 + " memory.save_context(input_output, output)\n",
313 + " return output[\"output\"]\n",
314 + "\n",
315 + "\n",
316 + "sql_response_memory = RunnablePassthrough.assign(output=sql_chain) | save\n",
317 + "sql_response_memory.invoke({\"question\": \"What team is Klay Thompson on?\"})"
318 + ]
319 + },
320 + {
321 + "cell_type": "code",
322 + "execution_count": 21,
323 + "id": "800a7a3b-f411-478b-af51-2310cd6e0425",
324 + "metadata": {},
325 + "outputs": [
326 + {
327 + "data": {
328 + "text/plain": [
329 + "AIMessage(content=' Sure! Here\\'s the natural language response based on the given input:\\n\\n\"Klay Thompson\\'s salary is $43,219,440.\"')"
330 + ]
331 + },
332 + "execution_count": 21,
333 + "metadata": {},
334 + "output_type": "execute_result"
335 + }
336 + ],
337 + "source": [
338 + "# Chain to answer\n",
339 + "template = \"\"\"Based on the table schema below, question, sql query, and sql response, write a natural language response:\n",
340 + "{schema}\n",
341 + "\n",
342 + "Question: {question}\n",
343 + "SQL Query: {query}\n",
344 + "SQL Response: {response}\"\"\"\n",
345 + "prompt_response = ChatPromptTemplate.from_messages(\n",
346 + " [\n",
347 + " (\n",
348 + " \"system\",\n",
349 + " \"Given an input question and SQL response, convert it to a natural language answer. No pre-amble.\",\n",
350 + " ),\n",
351 + " (\"human\", template),\n",
352 + " ]\n",
353 + ")\n",
354 + "\n",
355 + "full_chain = (\n",
356 + " RunnablePassthrough.assign(query=sql_response_memory)\n",
357 + " | RunnablePassthrough.assign(\n",
358 + " schema=get_schema,\n",
359 + " response=lambda x: db.run(x[\"query\"]),\n",
360 + " )\n",
361 + " | prompt_response\n",
362 + " | llm\n",
363 + ")\n",
364 + "\n",
365 + "full_chain.invoke({\"question\": \"What is his salary?\"})"
366 + ]
367 + },
368 + {
369 + "cell_type": "markdown",
370 + "id": "b77fee61-f4da-4bb1-8285-14101e505518",
371 + "metadata": {},
372 + "source": [
373 + "Here is the [trace](https://smith.langchain.com/public/54794d18-2337-4ce2-8b9f-3d8a2df89e51/r)."
374 + ]
375 + }
376 + ],
377 + "metadata": {
378 + "kernelspec": {
379 + "display_name": "Python 3 (ipykernel)",
380 + "language": "python",
381 + "name": "python3"
382 + },
383 + "language_info": {
384 + "codemirror_mode": {
385 + "name": "ipython",
386 + "version": 3
387 + },
388 + "file_extension": ".py",
389 + "mimetype": "text/x-python",
390 + "name": "python",
391 + "nbconvert_exporter": "python",
392 + "pygments_lexer": "ipython3",
393 + "version": "3.9.16"
394 + }
395 + },
396 + "nbformat": 4,
397 + "nbformat_minor": 5
398 + }
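The notebook above uses the same two-stage text-to-SQL pattern in both variants: first prompt the model with the schema and question to produce a SQL string, then execute that SQL and prompt again to phrase the rows as an answer. The following is a minimal, dependency-free sketch of that flow; `fake_llm` is a hypothetical stand-in for `llama2_chat` or the Replicate endpoint, and the in-memory table is a toy assumption, so treat it as an illustration of the pattern rather than the cookbook's code.

```python
# Two-stage text-to-SQL sketch: (1) LLM turns schema + question into SQL,
# (2) the SQL is executed, (3) the LLM phrases the rows in natural language.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nba_roster (NAME TEXT, Team TEXT)")
conn.execute("INSERT INTO nba_roster VALUES ('Klay Thompson', 'Golden State Warriors')")


def get_schema() -> str:
    # Return the CREATE TABLE statements, as SQLDatabase.get_table_info() would.
    rows = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'")
    return "\n".join(r[0] for r in rows)


def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for the model: canned SQL for the first stage,
    # a canned sentence for the second.
    if prompt.endswith("SQL Query:"):
        return "SELECT \"Team\" FROM nba_roster WHERE \"NAME\" = 'Klay Thompson';"
    return "Klay Thompson plays for the Golden State Warriors."


question = "What team is Klay Thompson on?"
sql = fake_llm(f"Schema:\n{get_schema()}\n\nQuestion: {question}\nSQL Query:")
result = conn.execute(sql).fetchall()
answer = fake_llm(f"Question: {question}\nSQL Query: {sql}\nSQL Response: {result}\nAnswer:")
print(answer)
```

The `stop=["\nSQLResult:"]` binding in the real chain plays the role of the canned return here: it keeps the first stage from running past the query and inventing a result before the database has actually answered.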
docs/langchain/cookbook/Multi_modal_RAG.ipynb
ADDED
The diff for this file is too large to render.
See raw diff
docs/langchain/cookbook/Multi_modal_RAG_google.ipynb
ADDED
The diff for this file is too large to render.
See raw diff
docs/langchain/cookbook/RAPTOR.ipynb
ADDED
The diff for this file is too large to render.
See raw diff