Text Generation
PEFT
Safetensors
Eval Results
dfurman committed
Commit 29771c5
1 Parent(s): ee78cab

Update README.md

Files changed (1)
  1. README.md +16 -14
README.md CHANGED
@@ -21,7 +21,7 @@ Falcon-40B-Chat-v0.1 is a chatbot model for dialogue generation. It was built by
 
 ## Model Summary
 
-- **Model Type:** Decoder-only
+- **Model Type:** Causal language model (clm)
 - **Language(s):** English
 - **Base Model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) (License: [Apache 2.0](https://huggingface.co/tiiuae/falcon-40b#license))
 - **Dataset:** [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (License: [Apache 2.0](https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/LICENSE))
@@ -29,10 +29,24 @@ Falcon-40B-Chat-v0.1 is a chatbot model for dialogue generation. It was built by
 
 The model was fine-tuned in 4-bit precision using `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 10 hours and was executed on a workstation with a single A100-SXM NVIDIA GPU with 37 GB of available memory. See attached [Colab Notebook](https://huggingface.co/dfurman/Falcon-40B-Chat-v0.1/blob/main/finetune_falcon40b_oasst1_with_bnb_peft.ipynb) for the code and hyperparams used to train the model.
 
-### Model Date
+## Model Date
 
 May 30, 2023
 
+## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__falcon-40b-openassistant-peft)
+
+| Metric | Value |
+|-----------------------|---------------------------|
+| Avg. | 51.17 |
+| ARC (25-shot) | 62.63 |
+| HellaSwag (10-shot) | 85.59 |
+| MMLU (5-shot) | 57.77 |
+| TruthfulQA (0-shot) | 51.02 |
+| Winogrande (5-shot) | 81.45 |
+| GSM8K (5-shot) | 13.34 |
+| DROP (3-shot) | 6.36 |
+
 ## Quick Start
 
 To prompt the chat model, use the following format:
@@ -210,16 +224,4 @@ See attached [Colab Notebook](https://huggingface.co/dfurman/Falcon-40B-Chat-v0.
 - `accelerate`: 0.19.0
 - `bitsandbytes`: 0.39.0
 - `einops`: 0.6.1
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__falcon-40b-openassistant-peft)
 
-| Metric | Value |
-|-----------------------|---------------------------|
-| Avg. | 51.17 |
-| ARC (25-shot) | 62.63 |
-| HellaSwag (10-shot) | 85.59 |
-| MMLU (5-shot) | 57.77 |
-| TruthfulQA (0-shot) | 51.02 |
-| Winogrande (5-shot) | 81.45 |
-| GSM8K (5-shot) | 13.34 |
-| DROP (3-shot) | 6.36 |
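
For inference, the card's Quick Start (truncated in this diff) gives the exact prompt format. A hedged sketch of loading the `dfurman/Falcon-40B-Chat-v0.1` adapter on top of the 4-bit base model might look like the following; the prompt is left as a placeholder because the chat format is not shown in this diff.

```python
# Illustrative inference sketch (assumed usage; consult the card's Quick Start
# for the exact prompt format and generation settings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "tiiuae/falcon-40b"
adapter_id = "dfurman/Falcon-40B-Chat-v0.1"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the fine-tuned LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "..."  # fill in using the chat format from the Quick Start section
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```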