MaziyarPanahi committed • Commit 1964dbb • Parent(s): 6e5b3ca • Update README.md
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b).
It achieves the following results on the evaluation set:
- Loss: 1.4456

## How to use

**PEFT**

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

model_id = "MaziyarPanahi/gemma-7b-Open-Hermes-v0.1"

# Read the adapter config (it records the base model this adapter targets),
# load the gemma-7b base weights, then attach the adapter on top.
config = PeftConfig.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
model = PeftModel.from_pretrained(model, model_id)
```
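The PEFT snippet above keeps the base weights and the adapter separate; the adapter checkpoint stays small because LoRA replaces each dense weight update with two low-rank factors. A quick parameter count makes the saving concrete (the matrix shapes and rank below are illustrative examples, not gemma-7b's actual configuration):

```python
# Illustrative LoRA parameter count for a single weight matrix.
# d, k, and r are example values, not gemma-7b's real dimensions.
d, k, r = 4096, 4096, 16

full_update = d * k        # parameters in a dense delta-W update (d x k)
lora_update = r * (d + k)  # parameters in B (d x r) plus A (r x k)

print(full_update)  # 16777216
print(lora_update)  # 131072
```

At rank 16 in this example, the low-rank factors store 1/128th (about 0.8%) of the parameters of the dense update, which is why the repository only needs to ship adapter weights rather than a full copy of the model.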

**Transformers**

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

model_id = "MaziyarPanahi/gemma-7b-Open-Hermes-v0.1"

pipe = pipeline("text-generation", model=model_id)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
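Instruction-tuned checkpoints usually expect a specific chat format at inference time. OpenHermes training data is commonly formatted as ChatML, so a prompt builder might look like the hypothetical helper below — verify the actual format against this repository's tokenizer (e.g. via `tokenizer.apply_chat_template`) before relying on it:

```python
# Hypothetical ChatML-style prompt builder; the <|im_start|>/<|im_end|>
# tags are an assumption based on common OpenHermes formatting and are
# not confirmed for this checkpoint.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "What is a LoRA adapter?")
print(prompt)
```

If the format matches, the resulting string can be passed directly to the `pipe(...)` call from the snippet above.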

## Model description

More information needed