Update README.md
README.md (CHANGED)

@@ -67,11 +67,74 @@ Users should validate the simplified text with healthcare professionals to ensure
 
 Use the code below to get started with the model.
 
-
+```python
+import torch  # used for torch.bfloat16 and torch.inference_mode
+from transformers import (
+    AutoConfig,
+    AutoModelForCausalLM,
+    AutoTokenizer,
+    BitsAndBytesConfig
+)
+
+from peft import PeftConfig, PeftModel  # PeftModel is required below
+
+MODEL = "9rofe/Wernicke-AI3"
+
+# Load the base model in 4-bit NF4 quantization so it fits on a single GPU.
+bnb_config = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_use_double_quant=True,
+    bnb_4bit_quant_type="nf4",
+    bnb_4bit_compute_dtype=torch.bfloat16
+)
+
+config = PeftConfig.from_pretrained(MODEL)
+model = AutoModelForCausalLM.from_pretrained(
+    config.base_model_name_or_path,
+    return_dict=True,
+    quantization_config=bnb_config,
+    device_map="auto",
+    trust_remote_code=True
+)
+
+tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+tokenizer.pad_token = tokenizer.eos_token
+
+# Attach the fine-tuned LoRA adapter on top of the quantized base model.
+model = PeftModel.from_pretrained(model, MODEL)
+
+generation_config = model.generation_config
+generation_config.max_new_tokens = 500  # MODIFY: set to the output length you need
+generation_config.temperature = 0.7
+generation_config.top_p = 0.7
+generation_config.num_return_sequences = 1
+generation_config.pad_token_id = tokenizer.eos_token_id
+generation_config.eos_token_id = tokenizer.eos_token_id
+
+device = "cuda:0"
+
+TEXT = "Your medical text here."  # placeholder: the passage to simplify
+prompt = f"""
+<user>: Convert this text to reading level 6: {TEXT}
+<assistant>:
+""".strip()
+
+encoding = tokenizer(prompt, return_tensors="pt").to(device)
+with torch.inference_mode():
+    outputs = model.generate(
+        input_ids=encoding.input_ids,
+        attention_mask=encoding.attention_mask,
+        generation_config=generation_config
+    )
+
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+```
+
+Use this prompt, substituting the passage to simplify for {text}:
+
+```python
 prompt = """
-<user>: Convert this text to reading level 6:
+<user>: Convert this text to reading level 6: {text}
 <assistant>:
-""".strip()
+""".strip()
+```
 
 ## Training Details
 
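The `{text}` placeholder in the template above is not substituted automatically. Below is a minimal usage sketch that fills the template and returns only the assistant's reply; it is ours, not part of the model card, it assumes the `model`, `tokenizer`, `device`, and `generation_config` objects from the quick-start code, and the helper name `simplify_text` is hypothetical.

```python
# Usage sketch (not from the model card). Assumes model, tokenizer, device,
# and generation_config are already set up as in the quick-start snippet.
import torch

TEMPLATE = """
<user>: Convert this text to reading level 6: {text}
<assistant>:
""".strip()

def simplify_text(text: str) -> str:  # hypothetical helper
    prompt = TEMPLATE.format(text=text)
    encoding = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.inference_mode():
        outputs = model.generate(
            input_ids=encoding.input_ids,
            attention_mask=encoding.attention_mask,
            generation_config=generation_config,
        )
    # generate() returns the prompt tokens followed by the reply;
    # decode only the newly generated tokens.
    new_tokens = outputs[0][encoding.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(simplify_text("Hypertension is a sustained elevation of arterial blood pressure."))
```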
@@ -151,7 +214,7 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 - **Hardware Type:** GPU (NVIDIA A100)
 - **Hours used:** 120 hours
 - **Cloud Provider:** AWS
-- **Compute Region:** US
+- **Compute Region:** US West (Utah)
 - **Carbon Emitted:** 500 kg CO2eq
 
 ## Technical Specifications [optional]
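For context, the Machine Learning Impact calculator cited above estimates emissions as energy drawn by the hardware (kW x hours x GPU count) times the carbon intensity of the hosting region. A rough sketch of that arithmetic follows; the card states only the 120 hours, so the GPU power, GPU count, and grid intensity below are our illustrative assumptions, not card data.

```python
# Rough sketch of the ML Impact calculator's formula (assumed values, not card data):
# emissions = power draw (kW) * hours * number of GPUs * regional carbon intensity.
hours = 120              # from the model card
gpu_power_kw = 0.4       # assumed: an A100 draws roughly 400 W under load
num_gpus = 1             # assumed: the card does not state the GPU count
carbon_intensity = 0.4   # assumed: kg CO2eq per kWh for the hosting grid

energy_kwh = gpu_power_kw * hours * num_gpus
print(f"~{energy_kwh * carbon_intensity:.0f} kg CO2eq")  # ~19 with these assumptions
```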