pmking27 committed on
Commit
8e708a6
1 Parent(s): d357b02

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
README.md CHANGED
@@ -22,7 +22,7 @@ tags:
 - unsloth
 - gemma
 - trl
-base_model: google/gemma-7b
+base_model: google/gemma-2b
 pipeline_tag: text-generation
 ---
 
@@ -32,7 +32,7 @@ pipeline_tag: text-generation
 
 - **Developed by:** pmking27
 - **License:** apache-2.0
-- **Finetuned from model :** google/gemma-7b
+- **Finetuned from model :** google/gemma-2b
 
 This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
@@ -46,10 +46,10 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = 'cuda'
 
 # Loading the tokenizer for the model
-tokenizer = AutoTokenizer.from_pretrained("pmking27/PrathameshLLM-7B")
+tokenizer = AutoTokenizer.from_pretrained("pmking27/PrathameshLLM-2B")
 
 # Loading the pre-trained model
-model = AutoModelForCausalLM.from_pretrained("pmking27/PrathameshLLM-2B")
 
 # Defining the Alpaca prompt template
 alpaca_prompt = """
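The diff cuts off at `alpaca_prompt = """`, so the template's body is not shown here. As a minimal sketch of what such a template typically looks like, the snippet below uses the common Alpaca format (Instruction / Input / Response); the exact wording and placeholders in the repository's README are an assumption, not confirmed by this diff.

```python
# Common Alpaca-style prompt template (assumed; the actual template in the
# repo is truncated in this diff). The three {} slots are filled with the
# instruction, optional input context, and an empty response for the model
# to complete.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def build_prompt(instruction: str, input_text: str = "") -> str:
    # Leave the Response section empty so generation continues from it.
    return alpaca_prompt.format(instruction, input_text, "")

prompt = build_prompt("Translate to Marathi.", "Hello, how are you?")
print(prompt)
```

The filled prompt would then be tokenized with the `tokenizer` from the snippet above and passed to `model.generate`.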