---
library_name: peft
datasets:
- squad
- tiiuae/falcon-refinedweb
language:
- en
tags:
- llms
- falcon-7b
- open source llms
- fine tuning llms
- QLoRA
- PEFT
- LoRA
---

# 🚀 Falcon-7b-QueAns

Falcon-7b-QueAns is a chatbot-like model for Question and Answering. It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [SQuAD](https://huggingface.co/datasets/squad) dataset. This repo only includes the QLoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package. 
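
As a rough usage sketch (not part of the original card), the QLoRA adapter can be loaded on top of the quantized base model with `peft`. The adapter repo id and the prompt template below are placeholders/assumptions; substitute this repository's actual id and the template from the training notebook:

```python
# Hedged usage sketch: load Falcon-7B in 4-bit and attach the QLoRA adapter.
# "your-username/falcon-7b-queans" is a placeholder for this repo's actual id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "tiiuae/falcon-7b"
adapter_id = "your-username/falcon-7b-queans"  # placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon ships custom modeling code
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = (
    "Context: The Eiffel Tower is located in Paris, France.\n"
    "Question: Where is the Eiffel Tower located?\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```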

## Model Summary

- **Model Type:** Causal decoder-only
- **Language(s):** English
- **Base Model:** Falcon-7B (License: Apache 2.0)
- **Dataset:** [SQuAD](https://huggingface.co/datasets/squad) (License: cc-by-4.0)
- **License(s):** Apache 2.0 (inherited from the base model); the SQuAD dataset is separately licensed under cc-by-4.0


## Why use Falcon-7B?

* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)). 
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.

⚠️ **This model is fine-tuned specifically for question answering.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct). 

🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!


## Model Details

The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 4 hours and was executed on a workstation with a single NVIDIA T4 GPU with 15 GB of available memory. See the attached [Colab Notebook] used to train the model. 
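
For illustration, a typical QLoRA setup with `peft` and `bitsandbytes` looks like the sketch below; the LoRA hyperparameters (rank, alpha, dropout) are assumptions, not values reported by this card:

```python
# Illustrative QLoRA setup; r / lora_alpha / lora_dropout are assumed values.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # 4-bit NormalFloat, as in the QLoRA paper
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)  # cast norms, prep for k-bit training

lora_config = LoraConfig(
    r=16,                                # assumed rank
    lora_alpha=32,                       # assumed scaling factor
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,                   # assumed
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights train
```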

### Model Date

July 06, 2023


This is the open-source Falcon-7B large language model fine-tuned on the SQuAD dataset for question answering. The QLoRA technique made fine-tuning feasible on a consumer-grade GPU, and training used `trl`'s `SFTTrainer` (see the sketch below).

- Dataset used: SQuAD
- Dataset size: 87,278 examples
- Training steps: 500
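
A hedged sketch of what the `SFTTrainer` run may have looked like; the prompt template, batch size, sequence length, and learning rate are assumptions, while `max_steps=500` matches the figure above:

```python
# Illustrative SFT run on SQuAD with trl; hyperparameters are assumptions
# except max_steps, which matches the 500 training steps stated above.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("squad", split="train")

def format_examples(batch):
    # Batched formatting function, as trl's SFTTrainer expects: one prompt
    # string per example, with the gold answer appended as the target.
    texts = []
    for context, question, answers in zip(
        batch["context"], batch["question"], batch["answers"]
    ):
        answer = answers["text"][0] if answers["text"] else ""
        texts.append(f"Context: {context}\nQuestion: {question}\nAnswer: {answer}")
    return texts

training_args = TrainingArguments(
    output_dir="falcon-7b-queans",
    per_device_train_batch_size=4,   # assumed
    gradient_accumulation_steps=4,   # assumed
    max_steps=500,                   # matches the card
    learning_rate=2e-4,              # assumed
    fp16=True,
    logging_steps=25,
)

trainer = SFTTrainer(
    model=model,                 # the peft-wrapped model from the sketch above
    tokenizer=tokenizer,
    train_dataset=dataset,
    formatting_func=format_examples,
    max_seq_length=512,          # assumed
    args=training_args,
)
trainer.train()
```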




## Training procedure


The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
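
For reference, the auto-generated list above maps one-to-one onto a `transformers` `BitsAndBytesConfig`:

```python
# Direct transcription of the logged quantization flags above.
import torch
from transformers import BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```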

### Framework versions

- PEFT 0.4.0.dev0
