---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- falcon
- falcon-7b
- prompt answering
- peft
pipeline_tag: text-generation
base_model: tiiuae/falcon-7b
---
## Model Card for Sandiago21/falcon-7b-prompt-answering
This repository contains a Falcon-7B model further fine-tuned on conversations and question-answering prompts.
**I used falcon-7b (https://huggingface.co/tiiuae/falcon-7b) as the base model, so this model carries the same license as the Falcon-7B model (Apache-2.0).**
## Model Details
Anyone can load the model and experiment with prompts using the pre-existing Jupyter Notebook in the **notebooks** folder. The notebook contains example code for loading the model and prompting it, as well as example prompts to get you started.
### Model Description
The tiiuae/falcon-7b model was fine-tuned on conversations and question-answering prompts.
- **Developed by:** [More Information Needed]
- **Shared by:** [More Information Needed]
- **Model type:** Causal LM
- **Language(s) (NLP):** English, multilingual
- **License:** Apache-2.0
- **Finetuned from model:** tiiuae/falcon-7b
## Model Sources
- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]
## Uses
The model can be used for prompt answering.
### Direct Use
The model can be used directly for prompt answering.
### Downstream Use
The model can be used for text generation and prompt answering in downstream applications.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## Usage
### Creating the prompt
The model was trained on prompts of the following format:
```python
def generate_prompt(prompt: str) -> str:
    return f"""
<human>: {prompt}
<assistant>:
""".strip()
```
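For example, calling the helper on a question produces the exact format the model saw during training:
```python
print(generate_prompt("What is the capital city of Greece?"))
>>> <human>: What is the capital city of Greece?
>>> <assistant>:
```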
## How to Get Started with the Model
Use the code below to get started with the model.
1. You can git clone the repo, which for simplicity and completeness also contains the artifacts of the base model, and run the following code snippet to load the model:
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "."

config = PeftConfig.from_pretrained(MODEL_NAME)

compute_dtype = getattr(torch, "float16")

# 4-bit NF4 quantization with double quantization, to fit the model on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)

# Load the quantized base model, then attach the fine-tuned PEFT adapters
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = PeftModel.from_pretrained(model, MODEL_NAME)

generation_config = model.generation_config
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.max_new_tokens = 32
generation_config.use_cache = False
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

model.eval()
if torch.__version__ >= "2":
    model = torch.compile(model)
```
### Example of Usage
```python
prompt = "What is the capital city of Greece and with which countries does Greece border?"
prompt = generate_prompt(prompt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)
with torch.no_grad():
outputs = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
)
response = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
print(response)
>>> The capital city of Greece is Athens and it borders Albania, Bulgaria, Macedonia, and Turkey.
```
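Note that `response` contains the prompt as well as the answer. If you only want the generated answer, you can split on the `<assistant>:` marker from the prompt template (a small convenience helper, not part of the repository):
```python
def extract_answer(response: str) -> str:
    # The model's reply is everything after the "<assistant>:" marker
    return response.split("<assistant>:")[-1].strip()

print(extract_answer(response))
>>> The capital city of Greece is Athens and it borders Albania, Bulgaria, Macedonia, and Turkey.
```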
2. You can also load the model directly from the Hugging Face Hub using the following code snippet:
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_NAME = "Sandiago21/falcon-7b-prompt-answering"
BASE_MODEL = "tiiuae/falcon-7b"

compute_dtype = getattr(torch, "float16")

# 4-bit NF4 quantization with double quantization, to fit the model on a single GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=True,
)

# Load the quantized base model, then attach the fine-tuned PEFT adapters
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = PeftModel.from_pretrained(model, MODEL_NAME)

generation_config = model.generation_config
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.max_new_tokens = 32
generation_config.use_cache = False
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id

model.eval()
if torch.__version__ >= "2":
    model = torch.compile(model)
```
### Example of Usage
```python
prompt = "What is the capital city of Greece and with which countries does Greece border?"
prompt = generate_prompt(prompt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)
with torch.no_grad():
outputs = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
)
response = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
print(response)
>>> The capital city of Greece is Athens and it borders Albania, Bulgaria, Macedonia, and Turkey.
```
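If you prefer to see tokens as they are generated instead of waiting for the full sequence, recent `transformers` versions (4.28+) provide a `TextStreamer` that can be passed to `generate`; a minimal sketch:
```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        streamer=streamer,
    )
```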
## Training Details
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
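
The exact training script is not part of this repository, but the hyperparameters above map onto a standard PEFT + `transformers.Trainer` setup roughly as sketched below. The LoRA settings (`r`, `lora_alpha`, `target_modules`, `lora_dropout`) and the `train_dataset` variable are illustrative assumptions, not values recovered from this repo:
```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import Trainer, TrainingArguments

# Illustrative LoRA settings -- the actual values used are not documented here
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # a typical choice for Falcon models
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)

# Mirrors the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size 8
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=2,
    fp16=True,  # Native AMP mixed precision
    seed=42,
)

# train_dataset is assumed to be an already-tokenized dataset of prompts
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```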
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1
### Training Data
The tiiuae/falcon-7b model was fine-tuned on conversations and question-answering data.
### Training Procedure
The tiiuae/falcon-7b model was further trained and fine-tuned on question-answering and prompt data for 1 epoch (approximately 10 hours of training on a single GPU).
## Model Architecture and Objective
The model is based on the tiiuae/falcon-7b model, with PEFT adapters fine-tuned on top of it on conversations and question-answering data.
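Since the adapters are stored in the PEFT format, the adapter configuration (adapter type, base model, adapter hyperparameters) can be inspected directly from the checkpoint:
```python
from peft import PeftConfig

config = PeftConfig.from_pretrained("Sandiago21/falcon-7b-prompt-answering")
print(config)  # shows the adapter type, base model, and adapter hyperparameters
```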