update readme

Files changed:
- .ipynb_checkpoints/README-checkpoint.md (+4 -3)
- README.md (+4 -3)
.ipynb_checkpoints/README-checkpoint.md CHANGED

(Jupyter autosave copy of README.md; it receives the identical +4 -3 change shown under README.md below.)
README.md CHANGED

@@ -38,9 +38,10 @@ Train on my server, i have studied and adapted the model starting from the repos
 num decayed parameter tensors: 225, with 251,068,416 parameters<br/>
 num non-decayed parameter tensors: 65, with 49,920 parameters<br/>
 
-
+To just use the model, you can run:
 
-
+```py
+
 # Load model directly
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
@@ -63,5 +64,5 @@ num non-decayed parameter tensors: 65, with 49,920 parameters<br/>
 generated_text = tokenizer_model.decode(output[0], skip_special_tokens=True)
 
 print(generated_text)
-
+```
 
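The hunks show only the first and last lines of the new usage snippet; the middle (README lines 48–63) falls outside the diff context. A minimal sketch of what the full snippet plausibly looks like, assuming a standard `transformers` tokenize/generate/decode flow — the repo id and prompt below are placeholders, and only the import, `tokenizer_model.decode(...)`, and `print(...)` lines are confirmed by the diff:

```py
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id -- the actual model id is not shown in the diff.
repo_id = "user/model"

tokenizer_model = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Assumed middle section: encode a prompt and generate a continuation.
inputs = tokenizer_model("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)

generated_text = tokenizer_model.decode(output[0], skip_special_tokens=True)
print(generated_text)
```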
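The `num decayed / non-decayed parameter tensors` lines quoted in the hunk context are optimizer-setup logs. A minimal sketch of the grouping that typically produces them, assuming the nanoGPT-style convention of weight-decaying only tensors with two or more dimensions (the function name and hyperparameters here are illustrative, not from the repo):

```py
import torch

def configure_optimizer(model, weight_decay=0.1, lr=6e-4):
    params = [p for p in model.parameters() if p.requires_grad]
    # 2-D+ tensors (embeddings, linear weights) get weight decay;
    # 1-D tensors (biases, layernorm gains) do not.
    decay = [p for p in params if p.dim() >= 2]
    no_decay = [p for p in params if p.dim() < 2]
    print(f"num decayed parameter tensors: {len(decay)}, "
          f"with {sum(p.numel() for p in decay):,} parameters")
    print(f"num non-decayed parameter tensors: {len(no_decay)}, "
          f"with {sum(p.numel() for p in no_decay):,} parameters")
    return torch.optim.AdamW(
        [{"params": decay, "weight_decay": weight_decay},
         {"params": no_decay, "weight_decay": 0.0}],
        lr=lr,
    )
```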