Tags: Text Classification · Transformers · TensorBoard · Safetensors · English · distilbert · Generated from Trainer · Eval Results (legacy) · text-embeddings-inference
Instructions to use Hartunka/tiny_bert_km_20_v2_cola with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Hartunka/tiny_bert_km_20_v2_cola with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Hartunka/tiny_bert_km_20_v2_cola")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Hartunka/tiny_bert_km_20_v2_cola")
model = AutoModelForSequenceClassification.from_pretrained("Hartunka/tiny_bert_km_20_v2_cola")
```

- Notebooks
- Google Colab
- Kaggle
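When the model is loaded directly (rather than through `pipeline`), `AutoModelForSequenceClassification` returns raw logits that must be converted to label probabilities with a softmax. A minimal sketch of that last step, using hypothetical logit values and an assumed `[unacceptable, acceptable]` label order (this model card does not document its label mapping):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits, standing in for model(**inputs).logits[0].tolist()
logits = [-1.2, 0.8]
probs = softmax(logits)
pred = max(range(len(probs)), key=probs.__getitem__)
print(probs, pred)  # probabilities sum to 1; pred is the argmax index
```

In practice the same conversion is what `pipeline("text-classification", ...)` performs internally before reporting a label and score.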
Evaluation results:

```json
{
  "epoch": 8.0,
  "eval_accuracy": 0.6912751793861389,
  "eval_loss": 0.6190451383590698,
  "eval_matthews_correlation": 0.0,
  "eval_runtime": 0.3429,
  "eval_samples": 1043,
  "eval_samples_per_second": 3041.699,
  "eval_steps_per_second": 14.581
}
```
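An `eval_matthews_correlation` of 0.0 alongside ~0.69 accuracy is the classic signature of a classifier predicting a single class on an imbalanced set like CoLA: accuracy tracks the majority-class rate while MCC collapses to zero. A minimal sketch of the metric from confusion-matrix counts; the counts below are illustrative (chosen so that 721/1043 matches the reported accuracy), not taken from this run:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # define MCC as 0 when a margin is empty

# Predicting the majority class for all 1043 eval examples:
# accuracy is 721/1043 ~= 0.691, but MCC is 0.
print(matthews_corrcoef(tp=721, tn=0, fp=322, fn=0))  # -> 0.0
```

A perfectly correlated classifier scores 1.0 and an inversely correlated one -1.0, which is why MCC is the headline metric for CoLA rather than accuracy.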