avnishkr committed
Commit
43487a9
1 Parent(s): 9d1c404

Update README.md

Files changed (1)
  1. README.md +14 -2
README.md CHANGED
@@ -22,10 +22,22 @@ Falcon-7b-QueAns is a chatbot-like model for Question and Answering. It was buil

- **Model Type:** Causal decoder-only
- **Language(s):** English
- - **Base Model:** [Falcon-7B] (License: [Apache 2.0])
- - **Dataset:** [SQuAD](https://huggingface.co/datasets/squad) (License: [cc-by-4.0])
+ - **Base Model:** Falcon-7B (License: Apache 2.0)
+ - **Dataset:** [SQuAD](https://huggingface.co/datasets/squad) (License: cc-by-4.0)
- **License(s):** Apache 2.0 inherited from "Base Model" and "Dataset"

+
+ ## Why use Falcon-7B?
+
+ * **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+ * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery attention ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
+ * **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
+
+ ⚠️ **This is a raw, pretrained model, which should be further fine-tuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
+
+ 🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
+
+
## Model Details

The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 4 hours on a workstation with a single NVIDIA T4 GPU with 15 GB of available memory. See the attached [Colab Notebook] used to train the model.
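The Model Details paragraph above describes a QLoRA fine-tune. Below is a minimal sketch of that setup, not the attached notebook: the `tiiuae/falcon-7b` checkpoint and the LoRA hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions, not values taken from this commit.

```python
# Minimal QLoRA-style setup: load the base model in 4-bit with bitsandbytes,
# then attach LoRA adapters with peft. Hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "tiiuae/falcon-7b"  # assumed base checkpoint

# 4-bit NF4 quantization; float16 compute fits a 15 GB T4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Low Rank Adapters; "query_key_value" is the fused attention projection
# in the Falcon architecture.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Training can then proceed with a standard `transformers` trainer over tokenized SQuAD question/context/answer examples; the roughly four-hour figure quoted above refers to that fine-tuning run.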