v0.2 models are better at staying on topic and responding appropriately to standard prompts, such as greetings and questions about their role as AI assistants. SmolLM-360M-Instruct (v0.2) has a 63.3% win rate over SmolLM-360M-Instruct (v0.1) on AlpacaEval. You can find the details [here](https://huggingface.co/datasets/HuggingFaceTB/alpaca_eval_details/).
You can load v0.1 models by specifying `revision="v0.1"` in the transformers code:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct", revision="v0.1")
```

## Usage
### Local Applications