dartpain committed on
Commit 8850ce7
1 Parent(s): 7e91c05

Update README.md

Files changed (1)
  1. README.md +165 -43
README.md CHANGED
@@ -1,79 +1,201 @@
  ---
  license: apache-2.0
  pipeline_tag: text-generation
- tags:
- - llm-foundry
- - docsgpt
  ---

- DocsGPT-7B is a decoder-style transformer that is fine-tuned specifically for providing answers based on documentation given in context. It is built on top of the MosaicPretrainedTransformer (MPT) architecture, fine-tuned from the MPT-7B model developed by MosaicML. The model inherits the powerful language-understanding capabilities of MPT-7B and has been specialized for documentation-oriented question answering.

- ## Model Description

- Architecture: Decoder-style Transformer

- Training data: Fine-tuned on approximately 1,000 high-quality examples of documentation-answering workflows.

- Base model: Fine-tuned version of [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), which is pretrained from scratch on 1T tokens of English text and code.

- License: Apache 2.0

- ## Features

- * Attention with Linear Biases (ALiBi): Inherited from the MPT family, this feature removes hard context-length limits by replacing positional embeddings, allowing for efficient and effective processing of lengthy documents. In the future we plan to finish training on our larger dataset and to increase the number of context tokens.
- * Optimized for Documentation: Specifically fine-tuned for providing answers that are based on documentation provided in context, making it particularly useful for developers and technical support teams.
- * Easy to Serve: Can be efficiently served using standard Hugging Face pipelines or NVIDIA's FasterTransformer.

- ## How to Use

- ```python
- import transformers
- model = transformers.AutoModelForCausalLM.from_pretrained(
-     'Arc53/DocsGPT-7B',
-     trust_remote_code=True
- )
  ```

- This model uses the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

- ```python
- from transformers import AutoTokenizer
- tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
  ```
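
- A minimal end-to-end sketch combining the model and tokenizer above (the prompt text and generation parameters are illustrative assumptions, not a recommended recipe):

- ```python
- inputs = tokenizer("How do I install DocsGPT?", return_tensors="pt")
- # Generate an answer; sampling settings here are illustrative assumptions
- outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```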

- ## Warning

- This is an early version; fine-tuning with 1k examples is just a proof of concept. We plan to fine-tune on at least 100k more examples.

- ## Documentation

- * [Base model documentation](https://github.com/mosaicml/llm-foundry/)
- * Our community [Discord](https://discord.gg/n5BX8dh8rU)
- * [DocsGPT](https://github.com/arc53/DocsGPT) project

- ## Training Configuration
- Training took 3 hours on 4x A100 GPUs on Google Cloud.

- ## Training data
- It is based on all the feedback we have received from [here](https://docsgpt.arc53.com/); there is a thumbs-up or thumbs-down button next to each response.
- In this version we used 1k responses.

- ## Disclaimer

- The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

- ## Limitations

- Please be aware this is a relatively small LLM and it is prone to biases and hallucinations.

- Our live [demo](https://docsgpt.arc53.com/) uses a mixture of models.

- ## Model License

- Apache-2.0

  ---
  license: apache-2.0
+ language:
+ - en
+ library_name: transformers
  pipeline_tag: text-generation
  ---

+ This model is fine-tuned on top of llama-2-7b.

+ DocsGPT is optimized for documentation: it is specifically fine-tuned for providing answers that are based on documentation provided in context, making it particularly useful for developers and technical support teams.

+ We used 50k high-quality examples to fine-tune it over 1.5 days on an A10G GPU, using the LoRA fine-tuning process (a sketch of this kind of setup is shown below).

+ It has an Apache-2.0 license, so you can use it for commercial purposes too.

+ # How to run it
+ ```python
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "Arc53/docsgpt-7b"
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     torch_dtype=torch.bfloat16,
+     trust_remote_code=True,
+     device_map="auto",
+ )
+ sequences = pipeline(
+     "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
+     max_length=200,
+     do_sample=True,
+     top_k=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+ ```

+ Benchmarks are still a work in progress.

+ To prepare your prompts, make sure you keep this format (a helper that builds it is sketched after the block):

+ ```
+ ### Instruction
+ (where the question goes)
+ ### Context
+ (your document retrieval + system instructions)
+ ### Answer
+ ```
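
+ As a sketch, a small helper that assembles a prompt in this format (the `build_prompt` name and example strings are illustrative, not part of the model's API):

+ ```python
+ def build_prompt(question: str, context: str) -> str:
+     # Assemble a DocsGPT-style prompt: Instruction, Context, then an open Answer section
+     return (
+         "### Instruction\n"
+         f"{question}\n\n"
+         "### Context\n"
+         f"{context}\n\n"
+         "### Answer\n"
+     )
+
+ prompt = build_prompt(
+     "Create a mock request to /api/answer in python",
+     "(your document retrieval + system instructions)",
+ )
+ ```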

+ Here is an example comparing it to meta-llama/Llama-2-7b.

+ Prompt:
  ```
+ ### Instruction
+ Create a mock request to /api/answer in python
+
+ ### Context
+ You are DocsGPT, a friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
+ Use the following pieces of context to help answer the user's question. If it's not relevant to the question, provide friendly responses.
+ You have access to chat history, and can use it to help answer the question.
+ When using code examples, use the following format:
+ `` ` `` (language)
+ (code)
+ `` ` ``
+
+ ----------------
+
+
+ /api/answer
+ It's a POST request that sends a JSON in the body with 4 values. Here is a JavaScript fetch example.
+ It will receive an answer for a user-provided question.
+
+ `` ` ``
+ // answer (POST http://127.0.0.1:5000/api/answer)
+ fetch("http://127.0.0.1:5000/api/answer", {
+   "method": "POST",
+   "headers": {
+     "Content-Type": "application/json; charset=utf-8"
+   },
+   "body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
+     "active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
+ })
+ .then((res) => res.text())
+ .then(console.log.bind(console))
+ `` ` ``
+
+ In response you will get a JSON document like this one:
+
+ `` ` ``
+ {
+   "answer": " Hi there! How can I help you?\\n",
+   "query": "Hi",
+   "result": " Hi there! How can I help you?\\nSOURCES:"
+ }
+ `` ` ``

+
+
+ /api/docs_check
+ It will make sure documentation is loaded on the server (just run it every time the user switches between libraries/documentations).
+ It's a POST request that sends a JSON in the body with 1 value. Here is a JavaScript fetch example.
+
+ `` ` ``
+ // docs_check (POST http://127.0.0.1:5000/api/docs_check)
+ fetch("http://127.0.0.1:5000/api/docs_check", {
+   "method": "POST",
+   "headers": {
+     "Content-Type": "application/json; charset=utf-8"
+   },
+   "body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
+ })
+ .then((res) => res.text())
+ .then(console.log.bind(console))
+ `` ` ``
+
+ In response you will get a JSON document like this one:
+ `` ` ``
+ {
+   "status": "exists"
+ }
+ `` ` ``
+
+ Issues and Pull requests
+
+ We value contributions to our issues in the form of discussion or suggestions; we recommend that you check out existing issues and our Roadmap.
+
+ If you want to contribute by writing code, there are a few things that you should know before doing it:
+ We have a frontend (React, Vite) and a backend (Python).
+
+ ### Answer
+ ```
+
+
+ docsgpt-7b:
+ ```
+ To create a mock request to `/api/answer` in Python, you can use the `requests` library. Here's an example:
+
+ `` ` ``python
+ import requests
+
+ # Set up the URL and headers
+ url = "http://127.0.0.1:5000/api/answer"
+ headers = {
+     "Content-Type": "application/json; charset=utf-8"
+ }
+
+ # Set up the JSON data to send in the request body
+ data = {
+     "question": "Hi",
+     "history": null,
+     "api_key": "OPENAI_API_KEY",
+     "embeddings_key": "OPENAI_API_KEY",
+     "active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"
+ }
+
+ # Send the request and get the response
+ response = requests.post(url, headers=headers, json=data)
+
+ # Print the response
+ print(response.text)
+ `` ` ``
+
+ This will send a POST request to `/api/answer` with the specified `question`, `history`, `api_key`, `embeddings_key`, and `active_docs` parameters. The response will be printed to the console.
+
+ Note that you will need to replace `127.0.0.1:5000` with the actual URL and port of your server. Also, make sure you have the necessary dependencies installed for `requests`.
+ ```
+
+ The original meta-llama/Llama-2-7b:
+ ```
+ ### DocsGPT
+
+
+ ### DocsGPT
+
+
+ ### DocsGPT
+
+
+ ### DocsGPT
+
+
+ ```
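+
+ Putting it together: a hedged end-to-end sketch that feeds a prompt in the format above through the pipeline from the "How to run it" section (`build_prompt` is the illustrative helper sketched earlier; generation parameters are assumptions):
+
+ ```python
+ prompt = build_prompt(
+     "Create a mock request to /api/answer in python",
+     "(retrieved documentation goes here)",
+ )
+ sequences = pipeline(
+     prompt,
+     max_length=1024,
+     do_sample=False,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ print(sequences[0]["generated_text"])
+ ```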