Instructions to use damgomz/fp_bs8_lr5_x4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use damgomz/fp_bs8_lr5_x4 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="damgomz/fp_bs8_lr5_x4")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("damgomz/fp_bs8_lr5_x4")
model = AutoModelForMaskedLM.from_pretrained("damgomz/fp_bs8_lr5_x4")
```
- Notebooks
- Google Colab
- Kaggle
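A fill-mask pipeline returns a ranked list of candidate tokens for the masked position, each as a dict with `token_str` and `score` keys. Below is a minimal sketch of inspecting that output; the example sentence, the `top_prediction` helper, and the sample scores are illustrative, not part of the model card:

```python
# Sketch of working with fill-mask pipeline output; the helper and
# example data below are hypothetical, not part of the model card.

def top_prediction(predictions):
    """Return the highest-scoring token string from fill-mask output
    (a list of dicts with "token_str" and "score" keys)."""
    return max(predictions, key=lambda p: p["score"])["token_str"]

# Requires `transformers` installed and network access to fetch the model:
# from transformers import pipeline
# pipe = pipeline("fill-mask", model="damgomz/fp_bs8_lr5_x4")
# preds = pipe(f"Paris is the {pipe.tokenizer.mask_token} of France.")
# print(top_prediction(preds))

# The helper works on anything shaped like the pipeline's output:
sample = [
    {"token_str": "capital", "score": 0.91},
    {"token_str": "city", "score": 0.05},
]
print(top_prediction(sample))  # capital
```

Note that the mask placeholder differs between tokenizers (e.g. `[MASK]` for BERT-style models), so building the input from `pipe.tokenizer.mask_token` is safer than hard-coding it.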
Upload README.md with huggingface_hub
README.md
CHANGED
```diff
@@ -66,3 +66,4 @@ Epoch | Train Loss | Test Loss
 | 66 | 0.0 | 15.470805 | 10.927505 |
 | 67 | 0.5 | 3.806336 | 3.467919 |
 | 68 | 1.0 | 3.306795 | 3.229490 |
+| 69 | 1.5 | 3.119937 | 3.106039 |
```