XuBailing committed
Commit 18e9b7e • 1 Parent(s): 70b1501

Delete README_en.md

Files changed (1): README_en.md +0 −247
# ChatGLM Application with Local Knowledge Implementation

## Introduction

[![Telegram](https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white "langchain-chatglm")](https://t.me/+RjliQ3jnJ1YyN2E9)

🌍 [_中文文档_](README.md)

🤖️ This is a ChatGLM application based on local knowledge, implemented with [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) and [langchain](https://github.com/hwchase17/langchain).

💡 Inspired by [document.ai](https://github.com/GanymedeNil/document.ai) and [Alex Zhangji](https://github.com/AlexZhangji)'s [ChatGLM-6B Pull Request](https://github.com/THUDM/ChatGLM-6B/pull/216), this project builds a local-knowledge question-answering application on top of open-source models.

✅ The embedding model used in this project is [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese/tree/main), and the LLM is [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B). Relying on these **open-source** models, the project supports **offline private deployment**.

⛓️ The implementation principle is illustrated in the figure below: load files -> read text -> split text -> vectorize the text chunks -> vectorize the question -> match the top-k chunk vectors most similar to the question vector -> add the matched text to the `prompt` as context, together with the question -> submit to the `LLM` to generate an answer.

![Implementation schematic diagram](img/langchain+chatglm.png)
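The retrieval flow described above can be sketched in plain Python. This is only a toy illustration, not the project's actual code: `embed()` here is a stand-in bag-of-characters function, whereas the project uses a real sentence-embedding model (text2vec-large-chinese) and a vector store; all function names below are this example's own.

```python
from __future__ import annotations
import math

def embed(text: str) -> list[float]:
    # Toy "embedding": count of each letter a-z. The real project would
    # call a sentence-embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Return the k chunks whose vectors are most similar to the query vector."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Assemble matched chunks plus the question into the prompt for the LLM."""
    context = "\n".join(top_k(query, chunks))
    return f"Answer based on the context below.\nContext:\n{context}\nQuestion: {query}"
```

The string returned by `build_prompt` corresponds to the final step of the diagram: it is what gets submitted to the LLM.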

🚩 This project does not involve fine-tuning or training; however, either could be employed to further improve its effectiveness.

📓 [ModelWhale online notebook](https://www.heywhale.com/mw/project/643977aa446c45f4592a1e59)

## Changelog

**[2023/04/15]**

1. Refactored the project structure so that the command-line demo [cli_demo.py](cli_demo.py) and the Web UI demo [webui.py](webui.py) live in the root directory.
2. Improved the Web UI: it now loads the model specified by the default option in [configs/model_config.py](configs/model_config.py) on startup, and error messages have been added.
3. Updated the FAQ.

**[2023/04/12]**

1. Replaced the sample files in the Web UI to avoid files being unreadable on Ubuntu due to encoding problems;
2. Replaced the prompt template in `knowledge_based_chatglm.py` to prevent confusion in ChatGLM's output, which could arise from a prompt template mixing Chinese and English text.

**[2023/04/11]**

1. Added Web UI V0.1 (thanks to [@liangtongt](https://github.com/liangtongt));
2. Added a Frequently Asked Questions section to `README.md` (thanks to [@calcitem](https://github.com/calcitem) and [@bolongliu](https://github.com/bolongliu));
3. Improved automatic detection of `cuda`, `mps`, and `cpu` availability when choosing the device for the LLM and the embedding model;
4. Added a `filepath` check in `knowledge_based_chatglm.py`. In addition to a single file, it now accepts a folder path as input: each file in the folder is traversed, and a command-line message reports whether each file loaded successfully.

**[2023/04/09]**

1. Replaced the previously used `ChatVectorDBChain` with `RetrievalQA` from `langchain`, which greatly reduces failures caused by running out of video memory after two or three questions;
2. Added settings for the `EMBEDDING_MODEL`, `VECTOR_SEARCH_TOP_K`, `LLM_MODEL`, `LLM_HISTORY_LEN`, and `REPLY_WITH_SOURCE` parameters in `knowledge_based_chatglm.py`;
3. Added `chatglm-6b-int4` and `chatglm-6b-int4-qe`, which require less GPU memory, as LLM model options;
4. Corrected code errors in `README.md` (thanks to [@calcitem](https://github.com/calcitem)).

**[2023/04/07]**

1. Resolved an issue where loading the ChatGLM model doubled video memory usage (thanks to [@suc16](https://github.com/suc16) and [@myml](https://github.com/myml));
2. Added a mechanism for clearing video memory;
3. Added `nghuyong/ernie-3.0-nano-zh` and `nghuyong/ernie-3.0-base-zh` as embedding model options; they consume less video memory than `GanymedeNil/text2vec-large-chinese` (thanks to [@lastrei](https://github.com/lastrei)).

## How to Use

### Hardware Requirements

- ChatGLM-6B model hardware requirements

| **Quantization Level** | **Minimum GPU Memory** (inference) | **Minimum GPU Memory** (efficient parameter fine-tuning) |
| ---------------------- | ---------------------------------- | -------------------------------------------------------- |
| FP16 (no quantization) | 13 GB                              | 14 GB                                                     |
| INT8                   | 8 GB                               | 9 GB                                                      |
| INT4                   | 6 GB                               | 7 GB                                                      |
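As a rule of thumb, the figures in the table are roughly the size of the raw weights plus a few GB of inference overhead (activations, KV cache, framework buffers). A small sketch of that arithmetic, assuming a parameter count of about 6.2 billion for ChatGLM-6B (an approximate figure):

```python
def approx_weight_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate size of the model weights alone, in GiB.

    Activations, KV cache and framework overhead come on top, which is
    why the table's minimums are a few GB above these numbers.
    """
    return n_params * bits_per_param / 8 / 2**30

# ChatGLM-6B has roughly 6.2 billion parameters (approximate figure).
for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: ~{approx_weight_gib(6.2e9, bits):.1f} GiB of weights")
```

This is why INT4 quantization fits on consumer GPUs: the weights shrink by 4x relative to FP16, while the remaining overhead stays roughly constant.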

- Embedding model hardware requirements

The default embedding model [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese/tree/main) occupies around 3 GB of video memory; it can also be configured to run on the CPU.

### Software Requirements

This repository has been tested with Python 3.8 and CUDA 11.7.

### 1. Setting up the environment

* Environment check

```shell
# First, make sure your machine has Python 3.8 or higher installed
$ python --version
Python 3.8.13

# If your version is lower, create a new environment with conda
$ conda create -p /your_path/env_name python=3.8

# Activate the environment
$ source activate /your_path/env_name

# Deactivate the environment
$ source deactivate

# Remove the environment
$ conda env remove -p /your_path/env_name
```

* Project dependencies

```shell
# Clone the repository
$ git clone https://github.com/imClumsyPanda/langchain-ChatGLM.git

# Install dependencies
$ pip install -r requirements.txt
```

Note: when using `langchain.document_loaders.UnstructuredFileLoader` to load unstructured files, you may need to install additional dependency packages depending on the file type. Please refer to the [langchain documentation](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html).

### 2. Run Scripts to Experience Web UI or Command Line Interaction

Execute the [webui.py](webui.py) script to experience **web interaction** <img src="https://img.shields.io/badge/Version-0.1-brightgreen">

```shell
$ python webui.py
```

Or execute the [api.py](api.py) script to deploy a web API.

```shell
$ python api.py
```

Note: before executing, check that the `$HOME/.cache/huggingface/` folder has at least 15 GB of free space, since the model weights are downloaded there.
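The free-space check can be automated with the standard library. A minimal sketch, assuming the default Hugging Face cache location (adjust the path if you have relocated the cache):

```python
import os
import shutil

def free_gib(path: str) -> float:
    """Free disk space, in GiB, on the filesystem containing `path`."""
    # Walk up to the nearest existing ancestor so the check also works
    # before the cache directory has been created.
    while not os.path.exists(path):
        path = os.path.dirname(path) or "/"
    return shutil.disk_usage(path).free / 2**30

cache = os.path.join(os.path.expanduser("~"), ".cache", "huggingface")
if free_gib(cache) < 15:
    print(f"Warning: less than 15 GiB free under {cache}")
```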

Alternatively, once `api.py` is running, execute the following commands to start the VUE front end:

```shell
$ cd views

$ pnpm i

$ npm run dev
```

VUE interface screenshots:

![](img/vue_0521_0.png)

![](img/vue_0521_1.png)

![](img/vue_0521_2.png)

Web UI interface screenshots:

![img.png](img/webui_0521_0.png)

![](img/webui_0510_1.png)

![](img/webui_0510_2.png)

The Web UI supports the following features:

1. Automatically reads the `LLM` and `embedding` model enumerations in `configs/model_config.py`, allowing you to select a model and reload it by clicking `重新加载模型` (reload model).
2. The length of retained dialogue history can be adjusted manually according to the available video memory.
3. Adds a file upload function: select an uploaded file from the drop-down box, click `加载文件` (load file) to load it, and switch the loaded file at any time.

Alternatively, execute the [cli_demo.py](cli_demo.py) script to experience **command-line interaction**:

```shell
$ python cli_demo.py
```

### FAQ

Q1: What file formats does this project support?

A1: Currently, this project has been tested with txt, docx, and md file formats. For more file formats, please refer to the [langchain documentation](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/unstructured_file.html). Note that documents containing special characters are known to cause problems when loading.
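The tested formats from A1 can be used to pre-filter a folder before loading, mirroring the folder-traversal behavior mentioned in the changelog. A small illustrative sketch; the function name and exact extension set are this example's assumptions, not the project's API:

```python
from pathlib import Path

# Extensions the project has been tested with (per the FAQ above).
SUPPORTED = {".txt", ".md", ".docx"}

def loadable_files(folder: str) -> list[Path]:
    """Recursively collect files a knowledge-base load would accept."""
    return sorted(
        p for p in Path(folder).rglob("*")
        if p.is_file() and p.suffix.lower() in SUPPORTED
    )
```

Files in other formats are simply skipped rather than producing a loader error part-way through a folder import.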

Q2: How can I resolve the `detectron2` dependency issue when reading specific file formats?

A2: As this package can be problematic to install and is only required for some file formats, it is not included in `requirements.txt`. You can install it with the following command:

```shell
$ pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"
```

Q3: How can I resolve the `Resource punkt not found.` error?

A3: Download https://github.com/nltk/nltk_data/raw/gh-pages/packages/tokenizers/punkt.zip, unzip the `packages/tokenizers` folder from it, and place it in the `nltk_data/tokenizers` storage path.

The `nltk_data` storage path can be found via `nltk.data.path`.

Q4: How can I resolve the `Resource averaged_perceptron_tagger not found.` error?

A4: Download https://github.com/nltk/nltk_data/blob/gh-pages/packages/taggers/averaged_perceptron_tagger.zip, extract it, and place it in the `nltk_data/taggers` storage path.

The `nltk_data` storage path can be found via `nltk.data.path`.

Q5: Can this project run in Google Colab?

A5: You can try running the chatglm-6b-int4 model in Google Colab. Note that if you want to run the Web UI in Colab, you need to set the `share` parameter in `demo.queue(concurrency_count=3).launch(server_name='0.0.0.0', share=False, inbrowser=False)` to `True`.

Q6: How can I resolve errors when installing packages with pip in Anaconda?

A6: This issue is related to the system environment. For more details, please refer to [Issues with installing packages using pip in Anaconda](docs/Issue-with-Installing-Packages-Using-pip-in-Anaconda.md).

## DEMO

Using the question "What are the limitations of ChatGLM-6B and how can they be improved?" as an example.

Without using langchain to access local documents:

> ChatGLM-6B is a chatbot based on GPT-3, and its limitations are mainly reflected in the following aspects:
>
> 1. Language model accuracy: Although GPT-3 is a large language model with relatively high accuracy, it still has some limitations. For instance, it may not be able to accurately understand certain dialects, accents, or complex sentence structures.
> 2. Completeness of the knowledge graph: GPT-3 can generate some knowledge graphs, but its completeness and accuracy may not be sufficient. For example, it may struggle to accurately understand specific domain expertise and terminology.
> 3. Coherence of dialogue generation: While GPT-3 can generate somewhat coherent dialogues, they may not always be natural. For instance, it may generate false, incoherent dialogue or misinterpret the user's intent.
>
> To improve ChatGLM-6B, consider the following aspects:
>
> 1. Enhance the accuracy of the language model: The accuracy of the language model can be improved by increasing the training data, using better language model architectures, and optimizing the model training process.
> 2. Strengthen the integrity of the knowledge graph: The integrity of the knowledge graph can be enhanced by increasing its dimensions, employing better knowledge graph construction methods, and optimizing the knowledge graph's matching algorithm.
> 3. Boost the coherence of dialogue generation: The coherence of dialogue generation can be improved by augmenting the context information of the dialogue generation model, utilizing better context information extraction methods, and optimizing the dialogue generation model's algorithm.
> 4. Incorporate more human feedback: Human feedback can help ChatGLM-6B better understand users' intentions and needs, thereby improving the quality and accuracy of dialogue generation. More human feedback can be introduced by involving more human agents and using human feedback collectors.

ChatGLM's answer after using langchain to access the README.md file of the ChatGLM-6B project:

> ChatGLM-6B is a deep learning-based natural language processing model that excels at answering questions. However, it also has some limitations, as follows:
>
> 1. Limited language understanding: ChatGLM-6B has been primarily trained on Chinese natural language, and its understanding of other languages may be limited.
> 2. Insufficient knowledge base: The training dataset of ChatGLM-6B contains only a Chinese corpus, so it may not be able to answer non-Chinese questions or queries in specific domains.
> 3. Limited data volume: ChatGLM-6B's training dataset has only a few million records, which may hinder its ability to answer very specific or complex questions.
>
> To improve ChatGLM-6B, consider the following aspects:
>
> 1. Expand language knowledge: Learn natural language processing techniques in other languages to broaden the model's language understanding capabilities.
> 2. Broaden the knowledge base: Collect more Chinese corpora or use datasets in other languages to expand the model's knowledge base.
> 3. Increase data volume: Use larger datasets to train ChatGLM-6B, which can improve the model's performance.
> 4. Introduce more evaluation metrics: Incorporate additional evaluation metrics to assess the model's performance, which can help identify the shortcomings and limitations of ChatGLM-6B.
> 5. Enhance the model architecture: Improve ChatGLM-6B's model architecture to boost its performance and capabilities. For example, employ larger neural networks or refined convolutional neural network structures.

## Roadmap

- [x] Implement LangChain + ChatGLM-6B for local knowledge application
- [x] Unstructured file access based on langchain
  - [x] .md
  - [x] .pdf
  - [x] .docx
  - [x] .txt
- [ ] Add support for more LLM models
  - [x] THUDM/chatglm-6b
  - [x] THUDM/chatglm-6b-int4
  - [x] THUDM/chatglm-6b-int4-qe
- [ ] Add Web UI DEMO
  - [x] Implement Web UI DEMO using Gradio
    - [x] Add output and error messages
    - [x] Citation callout
    - [ ] Knowledge base management
      - [x] QA based on selected knowledge base
      - [x] Add files/folder to knowledge base
  - [ ] Implement Web UI DEMO using Streamlit
- [ ] Add support for API deployment
  - [x] Use fastapi to implement API
  - [ ] Implement Web UI DEMO for API calls