# tinyChat: Instruction-Based LLM, <1% the size of GPT-3

Introducing tinyChat, an instruction-based Large Language Model (LLM) that is less than 1% the size of GPT-3.5. tinyChat is an open-source model released under the Apache 2.0 license and based on Google's Flan-T5-Large, a 770M-parameter model. Fine-tuned on the databricks-dolly-15k dataset, tinyChat produces improved outputs on a range of tasks compared to the base Flan-T5 model. Although not as performant as larger models, tinyChat can perform a variety of NLP tasks, such as summarization, question answering, and sentiment analysis, using instruction prompts.

tinyChat is available on the Hugging Face model hub, and the code repository is on [GitHub](https://github.com/Leadmatic/tinyChat).
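
As a quick illustration, instruction prompts for the task types above might look like the following (these prompts are hypothetical examples, not taken from the repository):

```python
# Hypothetical instruction prompts for the tasks mentioned above;
# replace the <...> placeholders with your own text.
summarize = "Summarize the following passage in two sentences: <passage>"
qa = "Answer the question using the passage. Passage: <passage> Question: <question>"
sentiment = "Classify the sentiment of this review as positive or negative: <review>"
```
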
## Dataset

[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) is an open-source dataset of instruction-following records generated by thousands of Databricks employees across several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
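
To inspect the data, it can be loaded directly with the `datasets` library (a minimal sketch; field names follow the dataset card):

```python
from datasets import load_dataset

# Pull the instruction-tuning data from the Hugging Face Hub.
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Each record pairs an instruction (with optional context) with a reference
# response and one of the behavioral category labels listed above.
print(dolly[0]["instruction"])
print(dolly[0]["category"])
```
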
## Benchmark

The following results are from the EleutherAI LM Evaluation Harness (a sketch of how the harness can be invoked appears after the table). They indicate that tinyChat is not as performant as the other models listed: it scores slightly better than Flan-T5-Large on openbookqa while performing worse on the other datasets. However, tinyChat produces better outputs than its base model when given creative prompts; see the [blog post](https://leadmaticv3.webflow.io/blog/tinychat) for examples. This contrast highlights the limitations of such benchmarks for evaluating generative models.

| model                     | openbookqa | arc_easy | winogrande | hellaswag | arc_challenge | piqa     | boolq    |
|---------------------------|------------|----------|------------|-----------|---------------|----------|----------|
| cerebras/Cerebras-GPT-13B | 0.36       | 0.598906 | 0.607735   | 0.593109  | 0.325939      | 0.749728 | 0.611621 |
| EleutherAI/gpt-j-6B       | 0.382      | 0.621633 | 0.651144   | 0.662617  | 0.363481      | 0.761153 | 0.655963 |
| dolly-v1-6b (1 epoch)     | 0.428      | 0.608586 | 0.633781   | 0.650568  | 0.377133      | 0.761697 | 0.69633  |
| dolly-v1-6b (10 epochs)   | 0.41       | 0.62963  | 0.643252   | 0.676758  | 0.384812      | 0.773667 | 0.687768 |
| EleutherAI/gpt-neox-20b   | 0.402      | 0.683923 | 0.656669   | 0.7142    | 0.408703      | 0.784004 | 0.695413 |
| google/flan-t5-large      | 0.3120     | 0.5724   | 0.5991     | 0.4871    | 0.3072        | 0.7220   | 0.8645   |
| leadmatic/tinyChat        | 0.3320     | 0.4811   | 0.5825     | 0.4519    | 0.2961        | 0.7073   | 0.8358   |
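
A minimal sketch of running the harness programmatically on the same task suite (illustrative only; the model-type string and exact API have changed across harness releases):

```python
from lm_eval import evaluator

# "hf-seq2seq" is the model type older harness releases use for T5-style
# models; newer releases use "hf" with automatic architecture detection.
results = evaluator.simple_evaluate(
    model="hf-seq2seq",
    model_args="pretrained=google/flan-t5-large",
    tasks=["openbookqa", "arc_easy", "winogrande", "hellaswag",
           "arc_challenge", "piqa", "boolq"],
)
print(results["results"])
```
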
## Limitations

tinyChat is prone to hallucination and exhibits model bias. It is under active development and is currently intended for research purposes only.
## Running the Code

The snippet below loads the tinyChat adapter on top of Flan-T5-Large and generates a response to an instruction. The setup lines follow the standard PEFT loading pattern and are assumed here; adjust the identifiers if the model card differs.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the Flan-T5-Large base model and attach the tinyChat PEFT adapter
# (setup assumed; identifiers follow the model names used above).
peft_model_id = "leadmatic/tinyChat"
base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Tokenize an instruction, sample a response, and decode it.
inputs = tokenizer("""[INSERT INSTRUCTION HERE]""", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=300, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```