yatharth97 committed
Commit
7d9976a
1 Parent(s): c80cc60

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -48,7 +48,7 @@ Below we share some code snippets on how to get quickly started with running the
 
 #### Fine-tuning the model
 
-You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-7b-it`.
+You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `yatharth97/yatharth-gemma-7b-it-10k`.
 In that repository, we provide:
 
 * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA
@@ -229,7 +229,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
 import transformers
 import torch
 
-model_id = "google/gemma-7b-it"
+model_id = "yatharth97/yatharth-gemma-7b-it-10k"
 dtype = torch.bfloat16
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
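For context, the line changed in the second hunk sits inside the README's model-loading snippet. Below is a minimal sketch of that snippet with the new model ID; the `AutoModelForCausalLM.from_pretrained` call, the `device_map`/`torch_dtype` arguments, and the chat-template prompt are assumed from the standard Gemma README example rather than taken from this diff.

```python
# Minimal sketch: load the fine-tuned model under the new model ID.
# Everything after the tokenizer line is assumed from the usual Gemma
# README example, not shown in this diff.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "yatharth97/yatharth-gemma-7b-it-10k"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # assumed: place the model on available devices automatically
    torch_dtype=dtype,
)

# Build a prompt with the instruction-tuned chat template and generate a reply.
chat = [{"role": "user", "content": "Write a hello world program in Python."}]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The fine-tuning examples referenced in the first hunk adapt the same way: only the `model_id` string changes.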