A named entity recognition (NER) model fine-tuned from bert-base-uncased on a custom dataset. The checkpoint has ~102M parameters and is stored as F32 safetensors.

Validation results:

| Metric              | Value   |
|---------------------|---------|
| Validation loss     | 0.01966 |
| Validation accuracy | 0.9811  |
| F1 score            | 0.91    |
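
The checkpoint can be loaded with the standard `transformers` token-classification pipeline. The snippet below is a minimal usage sketch, assuming the model follows the usual BERT token-classification format; the entity label set and the example sentence are placeholders, since the training dataset is custom and not described here.

```python
from transformers import pipeline

# Minimal usage sketch: load the fine-tuned checkpoint as a
# token-classification (NER) pipeline. The label set depends on the
# custom dataset the model was trained on.
ner = pipeline(
    "token-classification",
    model="b3x0m/bert-xomlac-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

# Example input; replace with text matching the model's training domain.
print(ner("Barack Obama visited Paris in 2015."))
```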
