---
license: llama2
library_name: peft
tags:
- llama-2
datasets:
- ehartford/dolphin
inference: false
pipeline_tag: text-generation
base_model: meta-llama/Llama-2-70b-hf
model-index:
- name: llama-2-70b-dolphin-peft
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.62
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/llama-2-70b-dolphin-peft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.82
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/llama-2-70b-dolphin-peft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.18
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/llama-2-70b-dolphin-peft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 57.43
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/llama-2-70b-dolphin-peft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/llama-2-70b-dolphin-peft
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.37
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=dfurman/llama-2-70b-dolphin-peft
      name: Open LLM Leaderboard
---
<div align="center">
<img src="./assets/llama.png" width="150px">
</div>
# Llama-2-70B-Instruct-v0.1
This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the first 25k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single H100 (80 GB PCIe) for roughly 17 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
*Note*: This model was ranked **6th** on 🤗's Open LLM Leaderboard in Aug 2023.
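This card does not list the adapter hyperparameters. As a rough illustration only, a QLoRA setup with `peft` and 4-bit quantization typically looks something like the sketch below; the rank, alpha, dropout, and target modules here are representative assumptions, not the settings actually used for this model.
```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the frozen base model in 4-bit NF4 (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Attach small trainable low-rank adapters to the attention projections
# (illustrative values; the actual run's hyperparameters are not published here)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights require gradients
```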
## Helpful links
* Model license: Llama 2 Community License Agreement
* Basic usage: [notebook](assets/basic_inference_llama_2_dolphin.ipynb)
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__llama-2-70b-dolphin-peft).
| Metric |Value|
|---------------------------------|----:|
|Avg. |65.72|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |86.82|
|MMLU (5-Shot) |69.18|
|TruthfulQA (0-shot) |57.43|
|Winogrande (5-shot) |83.90|
|GSM8k (5-shot) |27.37|
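The average is the unweighted mean of the six benchmark scores: (69.62 + 86.82 + 69.18 + 57.43 + 83.90 + 27.37) / 6 = 65.72.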
## Example prompts and responses
Example 1:
**User**:
> You are a helpful assistant. Write me a numbered list of things to do in New York City.\n
**Llama-2-70B-Instruct-v0.1-peft**:
> 1. Visit the Statue of Liberty: Take a ferry to Liberty Island and see the iconic symbol of freedom and democracy.
> 2. Explore Central Park: Enjoy the lush greenery, take a stroll, or rent a bike to explore the park's many attractions, such as the Central Park Zoo and Strawberry Fields.
> 3. Visit the Empire State Building: Take an elevator to the observation deck for breathtaking views of the city.
> 4. Walk the Brooklyn Bridge: Cross the historic bridge and enjoy the stunning views of the Manhattan skyline.
> 5. Experience Times Square: Take in the bright lights, billboards, and bustling energy of this iconic intersection.
> 6. Visit the Metropolitan Museum of Art: Explore the vast collection of art and artifacts from around the world.
> 7. Take a food tour: Sample the diverse cuisine of New York City, from pizza to bagels to ethnic specialties.
> 8. Attend a Broadway show: Enjoy a world-class performance in the heart of the Theater District.
<br>
Example 2:
**User**:
> You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n
**Llama-2-70B-Instruct-v0.1-peft**:
> Subject: Dinner Party Invitation - Friday, 7 PM
>
> Dear Friends,
>
> I hope this email finds you well! I'm excited to invite you to a dinner party at my place this Friday, at 7 PM. We'll have a delicious meal, great company, and lots of fun. Please let me know if you can make it, so I can plan accordingly.
>
> Looking forward to seeing you all!
>
> Kind regards,
> Your Name
> P.S. Please let me know if you have any dietary restrictions or allergies.
<br>
Example 3:
**User**:
> You are a helpful assistant. Tell me a recipe for vegan banana bread.\n
**Llama-2-70B-Instruct-v0.1-peft**:
> Sure! Here's a delicious and easy vegan banana bread recipe:
>
> Ingredients:
> - 2 cups all-purpose flour
> - 1/2 cup sugar
> - 1/2 cup vegan butter (such as Earth Balance)
> - 1/2 cup vegan milk (such as almond milk)
> - 1/2 cup unsweetened applesauce
> - 1/2 cup mashed ripe bananas (about 2 medium bananas)
> - 1 teaspoon baking soda
> - 1/2 teaspoon salt
> - 1/2 teaspoon ground cinnamon
> - 1/2 teaspoon ground nutmeg
> - 1/2 teaspoon ground cloves
> - 1/2 cup chopped walnuts (optional)
>
> Instructions:
> 1. Preheat the oven to 350°F (175°C). Grease a 9x5-inch loaf pan with vegan butter or cooking spray.
> 2. In a large bowl, mix together the flour, sugar, vegan butter, vegan milk, applesauce, bananas, baking soda, salt, cinnamon, nutmeg, and cloves. Stir until well combined.
> 3. Fold in the chopped walnuts, if using.
> 4. Pour the batter into the prepared loaf pan.
> 5. Bake for 50-60 minutes, or until a toothpick inserted into the center of the bread comes out clean.
> 6. Let the bread cool in the pan for 10 minutes before transferring it to a wire rack to cool completely.
> 7. Slice and enjoy!
>
> Note: You can also add chocolate chips, dried fruit, or other mix-ins to the batter for extra flavor and texture. Enjoy your vegan banana bread!
<br>
## Model description
The architecture is a modification of a standard decoder-only transformer, trained as a causal language model (CLM).
The llama-2-70b models differ from a standard transformer in the following ways (a minimal sketch of the grouped-query head sharing follows the list):
* They use the [SwiGLU activation function](https://arxiv.org/abs/2002.05202)
* They use [rotary positional embeddings](https://arxiv.org/abs/2104.09864) (RoPE)
* They use [grouped-query attention](https://arxiv.org/pdf/2305.13245.pdf) (GQA), a generalization of multi-query attention that uses an intermediate number of key-value heads
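As a rough illustration of GQA's key/value head sharing (not Llama's actual implementation), each key-value head is broadcast to a group of query heads; the head counts below match the 70B configuration of 64 query heads over 8 KV heads:
```python
import torch
import torch.nn.functional as F

def repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor:
    # Broadcast each KV head to n_rep query heads:
    # (batch, n_kv_heads, seq, head_dim) -> (batch, n_kv_heads * n_rep, seq, head_dim)
    b, h, s, d = x.shape
    if n_rep == 1:
        return x
    return x[:, :, None, :, :].expand(b, h, n_rep, s, d).reshape(b, h * n_rep, s, d)

# 64 query heads attend over 8 KV heads, so each KV head serves a group of 8 query heads
q = torch.randn(1, 64, 16, 128)  # (batch, n_q_heads, seq, head_dim)
k = torch.randn(1, 8, 16, 128)   # (batch, n_kv_heads, seq, head_dim)
v = torch.randn(1, 8, 16, 128)
out = F.scaled_dot_product_attention(q, repeat_kv(k, 8), repeat_kv(v, 8))
print(out.shape)  # torch.Size([1, 64, 16, 128])
```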
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 70B |
| tokens | 2.0T |
| vocab size | 32000 |
| sequence length | 4096 |
| grouped-query attention | ✔️ |
## Pre-training data
For more details on the pretraining process, see [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf).
The data was tokenized using the [Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf) tokenizer.
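As a quick illustration (assuming access to the gated base repo has been granted), the tokenizer can be loaded directly to inspect how text maps onto its 32,000-token SentencePiece vocabulary:
```python
from transformers import AutoTokenizer

# Llama 2's SentencePiece tokenizer, shared by the base model and this adapter
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf", use_fast=True)
print(tok.vocab_size)                 # 32000
print(tok.tokenize("Hello, world!"))  # e.g. ['▁Hello', ',', '▁world', '!']
```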
## Limitations and biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Basic usage
* [notebook](assets/basic_inference_llama_2_dolphin.ipynb)
```python
!pip install -q -U huggingface_hub peft transformers torch accelerate
```
```python
from huggingface_hub import notebook_login
import torch
from peft import PeftModel, PeftConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
)

# Authenticate to access the gated Llama 2 base weights
notebook_login()
```
```python
# Resolve the base model from the adapter's config
peft_model_id = "dfurman/llama-2-70b-dolphin-peft"
config = PeftConfig.from_pretrained(peft_model_id)

# Quantize the base model to 4-bit NF4 so it fits on a single 80 GB GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    use_auth_token=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token

# Attach the finetuned LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, peft_model_id)

format_template = "You are a helpful assistant. {query}\n"
```
```python
# First, format the prompt
query = "Tell me a recipe for vegan banana bread."
prompt = format_template.format(query=query)

# Inference can be done using model.generate
print("\n\n*** Generate:")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
with torch.autocast("cuda", dtype=torch.bfloat16):
    output = model.generate(
        input_ids=input_ids,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.7,
        return_dict_in_generate=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        repetition_penalty=1.2,
    )

print(tokenizer.decode(output["sequences"][0], skip_special_tokens=True))
```
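As an alternative to calling `model.generate` directly, the `pipeline` class imported above can wrap the adapted model. This is a minimal sketch, assuming a `transformers` release with PEFT-aware pipelines; the generation arguments mirror those used above.
```python
# Wrap the PEFT-adapted model in a high-level text-generation pipeline
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
)

prompt = format_template.format(query="Write me a haiku about New York City.")
print(pipe(prompt)[0]["generated_text"])
```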
## Runtime tests
| runtime / 50 tokens (sec) | GPU | attn | quantization | VRAM (GB) |
|:-----------------------------:|:----------------------:|:---------------------:|:-------------:|:-----------------------:|
| 4.50 | 1x H100 (80 GB PCIe) | torch | nf4 | 39 |
The above runtime stats were generated from this [notebook](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/postprocessing-Llama-2-70B-Instruct-v0.1-peft.ipynb).
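The runtime figure can be reproduced approximately with a timing snippet like the one below (an illustrative sketch reusing the model, tokenizer, and prompt from the usage section above; exact numbers depend on hardware, drivers, and library versions):
```python
import time

# Time a single 50-token generation on GPU
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
torch.cuda.synchronize()
start = time.time()
with torch.autocast("cuda", dtype=torch.bfloat16):
    model.generate(input_ids=input_ids, max_new_tokens=50, do_sample=True)
torch.cuda.synchronize()

print(f"runtime / 50 tokens: {time.time() - start:.2f} sec")
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1e9:.0f} GB")
```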
## Acknowledgements
This model was finetuned by Daniel Furman on July 23, 2023, and is intended primarily for research purposes.
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Meta citation for llama-2 blog
```
@online{Meta2023Introducing,
author = {Meta AI},
title = {Meta and Microsoft Introduce the Next Generation of Llama},
year = {2023},
url = {https://about.fb.com/news/2023/07/llama-2/},
note = {Accessed: 2023-07-24},
urldate = {2023-07-24}
}
```
---
## Framework versions
- PEFT 0.5.0.dev0