Yooko committed
Commit 3cb312c (1 parent: 4258dd6)

Update README.md

Files changed (1): README.md (+17 -18)
README.md CHANGED
@@ -22,31 +22,30 @@ This model is a PEFT model based on TurkuNLP/gpt3-finnish-small
 
 
 
-- **Developed by:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Model type:** PEFT Model
+- **Language(s) (NLP):** EN
+- **License:** mit
+- **Finetuned from model [optional]:** TurkuNLP/gpt3-finnish-small
 
-### Model Sources [optional]
-
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
 
 ## Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
 ### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
+```python
+import torch
+from peft import PeftModel, PeftConfig
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+peft_model_id = "Yooko/gpt3-finnish-small-ft-AbirateEN"
+config = PeftConfig.from_pretrained(peft_model_id)
+model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
+tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+
+# Load the LoRA model
+model = PeftModel.from_pretrained(model, peft_model_id)
+```
 ### Downstream Use [optional]
 
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
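The snippet added in this commit only loads the base model and attaches the LoRA adapter. A minimal generation sketch building on it might look as follows — the prompt, sampling settings, and CPU inference path are illustrative assumptions, not part of the model card:

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

peft_model_id = "Yooko/gpt3-finnish-small-ft-AbirateEN"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model and tokenizer (full precision here; the README uses 8-bit).
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()

# Illustrative prompt; not taken from the model card.
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that `generate` runs through the PEFT wrapper transparently, so no extra merging step is required for inference; `merge_and_unload()` is only needed if you want a standalone checkpoint without the `peft` dependency.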