Commit 5b4e6cf • Update README.md
Parent(s): c527da5
README.md CHANGED
Previous version (parent commit c527da5):

@@ -1,13 +1,12 @@
---
library_name: transformers
-tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
-
-

## Model Details

@@ -17,13 +16,13 @@ tags: []

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

-- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

@@ -70,6 +69,13 @@ Users (both direct and downstream) should be made aware of the risks, biases and

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

@@ -78,6 +84,7 @@ Use the code below to get started with the model.

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

@@ -94,6 +101,65 @@ Use the code below to get started with the model.

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
Updated version (commit 5b4e6cf):

@@ -1,13 +1,12 @@
---
library_name: transformers
+license: apache-2.0
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->
+Finetuned Llama3-8B-Instruct model on https://huggingface.co/datasets/isaacchung/hotpotqa-dev-raft-subset.

## Model Details

@@ -17,13 +16,13 @@ tags: []

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

+- **Developed by:** [Isaac Chung](https://huggingface.co/isaacchung)
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
+- **Language(s) (NLP):** [English]
+- **License:** [Apache 2.0]
+- **Finetuned from model [optional]:** [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

### Model Sources [optional]

@@ -70,6 +69,13 @@ Users (both direct and downstream) should be made aware of the risks, biases and

## How to Get Started with the Model

Use the code below to get started with the model.
+```python
+# Load model directly
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+tokenizer = AutoTokenizer.from_pretrained("isaacchung/llama3-8B-hotpotqa")
+model = AutoModelForCausalLM.from_pretrained("isaacchung/llama3-8B-hotpotqa")
+```

[More Information Needed]

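The added snippet only loads the weights and tokenizer. A minimal way to actually query the finetuned model, assuming the standard Llama 3 chat template, might look like the sketch below; the question and generation settings are illustrative and not part of the commit, and since the dataset appears to be a RAFT-style subset, real prompts would typically also include retrieved context passages.

```python
# Illustrative usage sketch (not from the commit): ask the finetuned model a HotpotQA-style question.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "isaacchung/llama3-8B-hotpotqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Which magazine was started first, Arthur's Magazine or First for Women?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```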
@@ -78,6 +84,7 @@ Use the code below to get started with the model.

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+https://huggingface.co/datasets/isaacchung/hotpotqa-dev-raft-subset

[More Information Needed]

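The training data section only links the dataset. To inspect what the model was trained on, the subset can be pulled with the 🤗 datasets library; the split name below is an assumption, so check the dataset card for the actual splits and columns.

```python
# Illustrative: download and inspect the RAFT-style HotpotQA subset used for finetuning.
from datasets import load_dataset

ds = load_dataset("isaacchung/hotpotqa-dev-raft-subset")
print(ds)              # prints the available splits and column names
print(ds["train"][0])  # assumes a "train" split; adjust to whatever the dataset card lists
```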
@@ -94,6 +101,65 @@ Use the code below to get started with the model.

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

+Model loaded:
+```python
+model = AutoModelForCausalLM.from_pretrained(
+    model_id,
+    device_map="auto",
+    attn_implementation="flash_attention_2",
+    torch_dtype=torch.bfloat16,
+    quantization_config=bnb_config
+)
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+tokenizer.padding_side = 'right'  # to prevent warnings
+```
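The loading snippet above passes a `bnb_config` that is not defined anywhere in this commit. Given the QLoRA-style hyperparameters referenced below, it is presumably a 4-bit BitsAndBytesConfig along the following lines; this is an assumption, not the author's recorded configuration.

```python
# Assumed QLoRA-style 4-bit quantization config; the actual bnb_config is not shown in the commit.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```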
+
+
+Training params:
+```python
+# LoRA config based on QLoRA paper & Sebastian Raschka experiment
+peft_config = LoraConfig(
+    lora_alpha=128,
+    lora_dropout=0.05,
+    r=256,
+    bias="none",
+    target_modules="all-linear",
+    task_type="CAUSAL_LM",
+)
+
+args = TrainingArguments(
+    num_train_epochs=3,              # number of training epochs
+    per_device_train_batch_size=3,   # batch size per device during training
+    gradient_accumulation_steps=2,   # number of steps before performing a backward/update pass
+    gradient_checkpointing=True,     # use gradient checkpointing to save memory
+    optim="adamw_torch_fused",       # use fused adamw optimizer
+    logging_steps=10,                # log every 10 steps
+    save_strategy="epoch",           # save checkpoint every epoch
+    learning_rate=2e-4,              # learning rate, based on QLoRA paper
+    bf16=True,                       # use bfloat16 precision
+    tf32=True,                       # use tf32 precision
+    max_grad_norm=0.3,               # max gradient norm based on QLoRA paper
+    warmup_ratio=0.03,               # warmup ratio based on QLoRA paper
+    lr_scheduler_type="constant",    # use constant learning rate scheduler
+)
+
+max_seq_length = 3072  # max sequence length for model and packing of the dataset
+
+trainer = SFTTrainer(
+    model=model,
+    args=args,
+    train_dataset=dataset,
+    peft_config=peft_config,
+    max_seq_length=max_seq_length,
+    tokenizer=tokenizer,
+    packing=True,
+    dataset_kwargs={
+        "add_special_tokens": False,    # We template with special tokens
+        "append_concat_token": False,   # No need to add additional separator token
+    }
+)
+```
+
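The diff ends with the trainer setup and does not show how training was run or how the LoRA adapter became the standalone checkpoint loaded in the getting-started section. A plausible completion, assumed rather than taken from the commit, is to run the trainer, save the adapter, and merge it back into the base weights:

```python
# Assumed continuation (not in the commit): train, save the adapter, then merge it into the base model.
trainer.train()
trainer.save_model()  # writes the LoRA adapter to args.output_dir

import torch
from peft import AutoPeftModelForCausalLM

peft_model = AutoPeftModelForCausalLM.from_pretrained(args.output_dir, torch_dtype=torch.bfloat16)
merged = peft_model.merge_and_unload()
merged.save_pretrained("llama3-8B-hotpotqa", safe_serialization=True)
tokenizer.save_pretrained("llama3-8B-hotpotqa")
```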
#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->