yamete4 committed
Commit 5c7d276 (parent: f7d94c1)

Update README.md

Files changed (1): README.md (+9 -6)
README.md CHANGED
@@ -74,12 +74,15 @@ tags:
 Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
 ## How to Get Started with the Model
-
+<code>
 import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-tokenizer = AutoTokenizer.from_pretrained("shpotes/codegen-350M-mono")
-model = AutoModelForCausalLM.from_pretrained("shpotes/codegen-350M-mono", trust_remote_code=True)
+from transformers import AutoModelForCausalLM
+from peft import PeftModel, PeftConfig
+from transformers import AutoModelForCausalLM
+
+config = PeftConfig.from_pretrained("yamete4/codegen-350M-mono-QLoRa-flytech")
+model = AutoModelForCausalLM.from_pretrained("shpotes/codegen-350M-mono")
+model = PeftModel.from_pretrained(model, "yamete4/codegen-350M-mono-QLoRa-flytech")
 
 input_ids = tokenizer(
   context,
@@ -104,7 +107,7 @@ Users (both direct and downstream) should be made aware of the risks, biases and
 )
 text = tokenizer.batch_decode(tokens[:, input_ids_len:, ...])
 
-
+</code>
 [More Information Needed]
 
 ## Training Details
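
As committed, the new snippet still has a few rough edges: `AutoModelForCausalLM` is imported twice, `tokenizer`, `context`, `tokens`, and `input_ids_len` are used without being defined, and the HTML `<code>`/`</code>` tags will not render as a code block in Markdown. Below is a minimal, self-contained sketch of what the example appears to be aiming for; the prompt string and generation settings are illustrative assumptions, not part of the commit.

```python
# Minimal sketch (not the committed code): load the base CodeGen checkpoint,
# attach the QLoRA adapter with PEFT, generate, and decode only the new tokens.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "shpotes/codegen-350M-mono"
adapter_id = "yamete4/codegen-350M-mono-QLoRa-flytech"

# The committed snippet uses `tokenizer` without defining it; the base
# checkpoint's tokenizer is assumed here.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # wrap the base model with the fine-tuned adapter
model.eval()

context = "def fibonacci(n):"  # hypothetical prompt
inputs = tokenizer(context, return_tensors="pt")
input_ids_len = inputs.input_ids.shape[1]

with torch.no_grad():
    tokens = model.generate(**inputs, max_new_tokens=64)

# Decode only the continuation, mirroring the README's slicing.
text = tokenizer.batch_decode(tokens[:, input_ids_len:], skip_special_tokens=True)
print(text[0])
```

Note that the previous version of the example passed `trust_remote_code=True` when loading the base model; whether that flag is still needed depends on the transformers version in use.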