Tags: Text Classification · Transformers · TensorBoard · Safetensors · English · distilbert · Generated from Trainer · Eval Results (legacy) · text-embeddings-inference
Instructions to use Hartunka/tiny_bert_rand_20_v2_mrpc with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Hartunka/tiny_bert_rand_20_v2_mrpc with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/tiny_bert_rand_20_v2_mrpc")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_rand_20_v2_mrpc")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_rand_20_v2_mrpc")
```

- Notebooks
- Google Colab
- Kaggle
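MRPC is a sentence-pair (paraphrase) task, so the pipeline should receive both sentences rather than a single string. A minimal sketch of running the classifier on a pair, using the standard `text`/`text_pair` input format of the text-classification pipeline (the example sentences are illustrative, not from the dataset):

```python
from transformers import pipeline

# Load the classifier (downloads the checkpoint on first use)
pipe = pipeline("text-classification", model="Hartunka/tiny_bert_rand_20_v2_mrpc")

# Pass the two sentences as a text / text_pair dictionary so they are
# encoded as a single sentence pair, as MRPC expects
result = pipe({
    "text": "The company reported strong earnings this quarter.",
    "text_pair": "Earnings at the firm were strong this quarter.",
})
print(result)
```

The output is the predicted label with its score; for a randomly initialized or lightly trained tiny model the scores may be close to uniform.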
```json
{
  "epoch": 7.0,
  "total_flos": 673316591603712.0,
  "train_loss": 0.4687800475529262,
  "train_runtime": 20.0012,
  "train_samples": 3668,
  "train_samples_per_second": 9169.437,
  "train_steps_per_second": 37.498
}
```