
Preface

Code vs. natural-language classification, fine-tuned from `prajjwal1/bert-small`. The metrics below were achieved during training.

Training Metrics

| Epoch | Training Loss | Validation Loss | Accuracy |
|-------|---------------|-----------------|----------|
| 1 | 0.022500 | 0.012705 | 0.997203 |
| 2 | 0.008700 | 0.013107 | 0.996880 |
| 3 | 0.002700 | 0.014081 | 0.997633 |
| 4 | 0.001800 | 0.010666 | 0.997526 |
| 5 | 0.000900 | 0.010800 | 0.998063 |
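As a minimal sketch of how the classifier might be loaded and called, assuming the checkpoint is published on the Hub as `vishnun/codenlbert-sm` (the example inputs and the exact label names returned are illustrative, not taken from this card):

```python
from transformers import pipeline

# Load the fine-tuned sequence classifier by its Hub model id.
clf = pipeline("text-classification", model="vishnun/codenlbert-sm")

# Each result is a list with a dict containing "label" and "score".
print(clf("def add(a, b): return a + b"))
print(clf("The weather is lovely today, isn't it?"))
```

The `pipeline` helper handles tokenization and softmax over the two classes; for batch scoring, pass a list of strings in a single call.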


Model size: 28.8M params (Safetensors; tensor types I64, F32)

Dataset used to train vishnun/codenlbert-sm