---
datasets:
- ysharma/short_jokes
language:
- en
metrics:
- accuracy
library_name: peft
pipeline_tag: text-generation
tags:
- joke
---
#### Fine-tuning examples
You can find fine-tuning notebooks under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples). We provide:
* A notebook that you can run on a free-tier Google Colab instance to perform SFT on an English quotes dataset (a minimal sketch of the same recipe follows below)
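For reference, here is a minimal LoRA SFT sketch along the same lines (not the notebook itself). It assumes the `trl`, `peft`, and `datasets` libraries, uses the public `Abirate/english_quotes` dataset as a stand-in for the quotes data, and targets the older `SFTTrainer` interface that accepts `dataset_text_field` directly; exact argument names vary across `trl` versions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

model_id = "google/gemma-7b"

# Quantize the base model to 4-bit so it fits on a free-tier Colab GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

dataset = load_dataset("Abirate/english_quotes", split="train")

# LoRA adapter: only the attention projections are trained
lora_config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="quote",  # column holding the raw text
    max_seq_length=512,
    args=TrainingArguments(
        output_dir="gemma-quotes-lora",  # illustrative output path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
)
trainer.train()
```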
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

# Tokenize the prompt and generate on the CPU (the default device)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the fine-tuned model on a GPU
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
base_model_id = "google/gemma-2b"

# 4-bit NF4 quantization with nested (double) quantization, computing in bfloat16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,  # same quantization config as during fine-tuning
    device_map="auto",
    trust_remote_code=True,
)
eval_tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token=True, trust_remote_code=True)
# Load the LoRA adapter weights on top of the quantized base model
ft_model = PeftModel.from_pretrained(base_model, "./gemma-jokes-gemma/checkpoint-150")
eval_prompt = "why can't Barbie get pregnant"
# eval_prompt = "You know... When someone says to you Jesus loves you It's always comforting. Unless you are in a Mexican jail."
model_input = eval_tokenizer(eval_prompt, return_tensors="pt").to("cuda:0")
ft_model.eval()
with torch.no_grad():
    print(eval_tokenizer.decode(ft_model.generate(**model_input, max_new_tokens=100, repetition_penalty=1.15)[0], skip_special_tokens=True))
# Result
# why can't Barbie get pregnant? Because she has no eggs.
# Why did the chicken cross the road? To get to the other side of the egg.
# Why do chickens lay eggs in their sleep? Because they don't want to wake up and find out they're dead.
# Why do chickens wear glasses? Because they have a hard time seeing the yolk.
# Why do chickens eat so much? Because they are always hungry.
# Why do chickens like to go to the beach? Because they love laying eggs
```
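To ship the adapter as a single standalone model instead of base model plus adapter, the LoRA weights can be folded into the base weights. A minimal sketch, assuming the same checkpoint path as above; note that `merge_and_unload()` requires non-quantized base weights, so the model is loaded in bfloat16 here, and the output path is illustrative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "google/gemma-2b"

# Load the base model in bfloat16 (merging does not work on 4-bit weights)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Apply the adapter, then fold its weights into the base model
merged_model = PeftModel.from_pretrained(base_model, "./gemma-jokes-gemma/checkpoint-150").merge_and_unload()

# Save as a plain transformers checkpoint that no longer needs peft at load time
merged_model.save_pretrained("./gemma-jokes-merged")
AutoTokenizer.from_pretrained(base_model_id).save_pretrained("./gemma-jokes-merged")
```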