## Environmental Impact (CodeCarbon default)
| Metric | Value |
|---|---|
| Duration (s) | 31470.13 |
| Emissions (kg CO2eq) | 0.019043 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.371522 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.032781 |
| Consumed energy (kWh) | 0.404303 |
| Country name | Switzerland |
| Cloud provider | N/A |
| Cloud region | N/A |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | N/A |
| GPU model | N/A |
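The energy figures above follow directly from power × duration, and dividing emissions by total energy recovers the grid carbon intensity CodeCarbon applied for Switzerland (roughly 47 g CO2eq/kWh). A minimal sketch of that arithmetic, using the values from the table (the intensity is inferred from the numbers, not quoted from CodeCarbon's own constants):

```python
# Reproduce the CodeCarbon energy/emissions arithmetic from the table above.
DURATION_S = 31470.13   # run duration in seconds (from the table)
CPU_POWER_W = 42.5      # CPU power draw
RAM_POWER_W = 3.75      # RAM power draw

def energy_kwh(power_w: float, duration_s: float) -> float:
    """Energy in kWh = watts * seconds / 3.6e6 joules-per-kWh."""
    return power_w * duration_s / 3.6e6

cpu_energy = energy_kwh(CPU_POWER_W, DURATION_S)   # ~0.3715 kWh
ram_energy = energy_kwh(RAM_POWER_W, DURATION_S)   # ~0.0328 kWh
total_energy = cpu_energy + ram_energy             # ~0.4043 kWh

# Grid carbon intensity implied by the reported emissions (kg CO2eq / kWh):
emissions_kg = 0.019043
intensity = emissions_kg / total_energy            # ~0.047 kg/kWh (~47 g/kWh)
print(f"CPU {cpu_energy:.4f} kWh, RAM {ram_energy:.4f} kWh, "
      f"total {total_energy:.4f} kWh, intensity {intensity * 1000:.1f} g/kWh")
```

The low implied intensity is consistent with Switzerland's largely hydro- and nuclear-powered grid.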
## Environmental Impact (for one core)

| Metric | Value |
|---|---|
| CPU energy (kWh) | 0.060580 |
| Emissions (kg CO2eq) | 0.012326 |
## Note

20 May 2024
## My Config

| Config | Value |
|---|---|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_bs16_lr7_mlm |
| sequence_length | 400 |
| num_epoch | 15 |
| learning_rate | 5e-07 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 81450 |
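The `num_steps` value is consistent with the other hyperparameters: 15 epochs at batch size 16 work out to 5,430 optimizer steps per epoch, implying a training split of roughly 5,430 × 16 ≈ 86,880 examples (an upper bound, since the final batch of each epoch may be partial). A quick sanity check, assuming `num_steps` counts optimizer steps across all epochs:

```python
# Cross-check num_steps against the other config values.
num_steps = 81450
num_epoch = 15
batch_size = 16

steps_per_epoch = num_steps // num_epoch          # 5430
assert num_steps % num_epoch == 0                 # divides evenly across epochs

# Approximate training-set size implied by the step count (upper bound,
# since the last batch per epoch may contain fewer than batch_size examples).
approx_train_size = steps_per_epoch * batch_size  # ~86880 examples
print(steps_per_epoch, approx_train_size)
```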
## Training and Testing steps

| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
0 | 0.628769 | 0.554894 | 0.730486 | 0.842025 |
1 | 0.510461 | 0.486829 | 0.763623 | 0.797546 |
2 | 0.449970 | 0.445788 | 0.786451 | 0.888037 |
3 | 0.410732 | 0.416862 | 0.807806 | 0.884969 |
4 | 0.380523 | 0.396044 | 0.812960 | 0.872699 |
5 | 0.359862 | 0.388476 | 0.820324 | 0.909509 |
6 | 0.342461 | 0.369396 | 0.834315 | 0.874233 |
7 | 0.330469 | 0.362060 | 0.840943 | 0.861963 |
8 | 0.319533 | 0.359950 | 0.840943 | 0.889571 |
9 | 0.310329 | 0.358102 | 0.843888 | 0.892638 |
10 | 0.300148 | 0.363338 | 0.840206 | 0.904908 |
11 | 0.291830 | 0.362882 | 0.830633 | 0.791411 |
12 | 0.285529 | 0.354668 | 0.840206 | 0.849693 |
13 | 0.277152 | 0.358292 | 0.837261 | 0.823620 |
14 | 0.264916 | 0.364439 | 0.844624 | 0.897239 |
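Test loss bottoms out at epoch 12 while accuracy peaks at epoch 14, so the preferred checkpoint depends on the selection criterion. A small sketch that picks the best epoch from the logged metrics (rows copied from the table above):

```python
# (epoch, train_loss, test_loss, accuracy, recall) rows from the table above.
rows = [
    (0, 0.628769, 0.554894, 0.730486, 0.842025),
    (1, 0.510461, 0.486829, 0.763623, 0.797546),
    (2, 0.449970, 0.445788, 0.786451, 0.888037),
    (3, 0.410732, 0.416862, 0.807806, 0.884969),
    (4, 0.380523, 0.396044, 0.812960, 0.872699),
    (5, 0.359862, 0.388476, 0.820324, 0.909509),
    (6, 0.342461, 0.369396, 0.834315, 0.874233),
    (7, 0.330469, 0.362060, 0.840943, 0.861963),
    (8, 0.319533, 0.359950, 0.840943, 0.889571),
    (9, 0.310329, 0.358102, 0.843888, 0.892638),
    (10, 0.300148, 0.363338, 0.840206, 0.904908),
    (11, 0.291830, 0.362882, 0.830633, 0.791411),
    (12, 0.285529, 0.354668, 0.840206, 0.849693),
    (13, 0.277152, 0.358292, 0.837261, 0.823620),
    (14, 0.264916, 0.364439, 0.844624, 0.897239),
]

best_by_loss = min(rows, key=lambda r: r[2])  # lowest test loss
best_by_acc = max(rows, key=lambda r: r[3])   # highest accuracy
print(f"lowest test loss at epoch {best_by_loss[0]} ({best_by_loss[2]:.6f})")
print(f"highest accuracy at epoch {best_by_acc[0]} ({best_by_acc[3]:.6f})")
```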