Instructions for using Hartunka/bert_base_rand_20_v1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Hartunka/bert_base_rand_20_v1 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, DistilBertForLDAMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Hartunka/bert_base_rand_20_v1")
model = DistilBertForLDAMaskedLM.from_pretrained("Hartunka/bert_base_rand_20_v1")
```
- Notebooks
- Google Colab
- Kaggle
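Once loaded, the masked-LM head produces one logit per vocabulary token at each `[MASK]` position; the prediction is the argmax over those logits (or a softmax for probabilities). A minimal pure-Python sketch of that final selection step, using a hypothetical toy vocabulary and made-up logits rather than real model output:

```python
import math

def softmax(logits):
    # Numerically stabilized softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical toy vocabulary and logits for a single [MASK] position;
# in a real run these would come from model(**inputs).logits.
vocab = ["paris", "london", "berlin"]
mask_logits = [4.1, 2.3, 0.7]

probs = softmax(mask_logits)
best = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(best)  # highest-logit token wins: "paris"
```

With the actual model, the same argmax/softmax is applied along the vocabulary dimension of the logits tensor at the masked position.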
Evaluation results:

```json
{
  "epoch": 25.0,
  "eval_accuracy": 0.15341583319206645,
  "eval_loss": 9.082741737365723,
  "eval_runtime": 1.0384,
  "eval_samples": 479,
  "eval_samples_per_second": 461.307,
  "eval_steps_per_second": 4.815,
  "perplexity": 8802.065927180123
}
```
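The reported perplexity is simply the exponential of the evaluation loss (cross-entropy in nats), perplexity = exp(eval_loss), which can be verified directly from the numbers above:

```python
import math

eval_loss = 9.082741737365723
perplexity = math.exp(eval_loss)
print(perplexity)  # matches the reported 8802.065927180123
```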