Update README.md
README.md CHANGED
@@ -21,9 +21,9 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
 model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
-model.half()
+model.half()
 
-inputs = tokenizer("What's your mood today?", return_tensors="pt")
+inputs = tokenizer("What's your mood today?", return_tensors="pt")
 tokens = model.generate(
   **inputs,
   max_new_tokens=64,
@@ -55,7 +55,7 @@ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 
 ### Training Procedure
 
-Models are pre-trained on the aforementioned dataset in mixed-precision (FP16) and
+Models are pre-trained on the aforementioned dataset in mixed-precision (FP16), optimized with Adam, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameter choices in the project's GitHub repository **{TODO: FILL IN LINK}**.
 
 ## Use and Limitations
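For context, the inference snippet the first hunk edits can be run end to end as sketched below. The `print(tokenizer.decode(...))` line comes from the second hunk's context; the `do_sample` and `temperature` arguments are assumptions (typical `generate` settings), not part of the lines changed in this commit.

```python
# A minimal, self-contained sketch of the README's inference example.
# The sampling arguments below are assumptions, not part of this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
model.half()  # cast weights to FP16; on a CPU-only machine you may need to skip this or move the model to a GPU

inputs = tokenizer("What's your mood today?", return_tensors="pt")
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,   # assumed: sample instead of greedy decoding
    temperature=0.7,  # assumed: a typical sampling temperature
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```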
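The Training Procedure line added in the second hunk names three ingredients: FP16 mixed precision, the Adam optimizer, and the NeoX tokenizer with a 50,257-token vocabulary. Purely as an illustration of how those pieces fit together (this is not StableLM's actual pre-training code; the learning rate, toy batch, and device handling are placeholders), a single mixed-precision training step might look like this:

```python
# Illustration only: one FP16 mixed-precision training step with Adam.
# Hyperparameters and the toy batch are placeholders, not StableLM's real setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
print(len(tokenizer))  # full tokenizer vocabulary; the model card quotes 50,257 for the NeoX tokenizer

model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-base-alpha-7b").cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder learning rate
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so FP16 gradients do not underflow

batch = tokenizer("an example training document", return_tensors="pt").to("cuda")
with torch.cuda.amp.autocast(dtype=torch.float16):  # run the forward pass in FP16
    loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss against shifted inputs

scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscale gradients and apply the Adam update
scaler.update()                # adjust the loss scale for the next step
optimizer.zero_grad()
```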