avnishkr committed
Commit 57afa4d • 1 Parent(s): 0fc8a70

Update README.md

Files changed (1):
1. README.md (+22, -0)
README.md CHANGED
@@ -24,6 +24,28 @@ Dataset Size: 87278
 Training Steps: 500
 
 
+
+# 🚀 Falcon-7b-chat-oasst1
+
+Falcon-7b-chat-oasst1 is a chatbot-style model for dialogue generation. It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset. This repo includes only the LoRA adapters from that fine-tuning run, produced with 🤗's [peft](https://github.com/huggingface/peft) package.
+
+## Model Summary
+
+- **Model Type:** Causal decoder-only
+- **Language(s):** English
+- **Base Model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) (License: [Apache 2.0](https://huggingface.co/tiiuae/falcon-7b#license))
+- **Dataset:** [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (License: [Apache 2.0](https://huggingface.co/datasets/OpenAssistant/oasst1/blob/main/LICENSE))
+- **License(s):** Apache 2.0, inherited from the base model and the dataset
+
+## Model Details
+
+The model was fine-tuned in 8-bit precision using 🤗 `peft` adapters together with `transformers` and `bitsandbytes`. Training used Low-Rank Adaptation ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 6.25 hours on a workstation with a single NVIDIA A100-SXM GPU with 37 GB of available memory. See the attached [Colab Notebook](https://huggingface.co/dfurman/falcon-7b-chat-oasst1/blob/main/finetune_falcon7b_oasst1_with_bnb_peft.ipynb) for the code and hyperparameters used to train the model.
+
+### Model Date
+
+May 30, 2023
+
+
 ## Training procedure
 
 
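
Since the committed README notes that this repo ships only the LoRA adapters rather than merged weights, the adapters must be paired with the Falcon-7B base model at load time. Below is a minimal sketch of doing that with `peft` and `transformers`; the adapter repo id and the prompt format are illustrative assumptions, not taken from the commit.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "tiiuae/falcon-7b"
adapter_repo_id = "avnishkr/falcon-7b-chat-oasst1"  # hypothetical repo id for illustration

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the frozen base model in 8-bit to keep memory usage low.
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    load_in_8bit=True,
    device_map="auto",
    trust_remote_code=True,  # Falcon shipped custom modeling code at release
)

# Attach the LoRA adapters on top of the base weights.
model = PeftModel.from_pretrained(model, adapter_repo_id)
model.eval()

prompt = "<human>: What is a llama?\n<bot>:"  # prompt format is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```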
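
The "Model Details" section above describes 8-bit LoRA fine-tuning with `peft`, `transformers`, and `bitsandbytes`. The sketch below shows the general shape of such a setup; the rank, alpha, dropout, and target-module values are illustrative placeholders, not the hyperparameters from the linked Colab notebook. Note that newer `peft` versions expose `prepare_model_for_kbit_training` (earlier versions used `prepare_model_for_int8_training`).

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM

# Load the base model with its weights quantized to 8-bit.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    load_in_8bit=True,
    device_map="auto",
    trust_remote_code=True,
)

# Cast norms/embeddings to higher precision and enable gradient
# checkpointing so training on quantized weights stays stable.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                # adapter rank (illustrative)
    lora_alpha=32,                       # scaling factor (illustrative)
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Wrap the base model; only the small adapter matrices are trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Because only the low-rank adapter matrices receive gradients, this setup fits comfortably on a single GPU, which is consistent with the single-A100 run described in the README.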