Environmental Impact (CodeCarbon default)
Metric | Value |
---|---|
Duration (s) | 36881.51 (≈ 10.2 h) |
Emissions (CO2eq in kg) | 0.022679 |
CPU power (W) | 42.5 |
GPU power (W) | — (no GPU) |
RAM power (W) | 4.5 |
CPU energy (kWh) | 0.435406 |
GPU energy (kWh) | — (no GPU) |
RAM energy (kWh) | 0.046102 |
Consumed energy (kWh) | 0.481508 |
Country name | Switzerland |
Cloud provider | — |
Cloud region | — |
CPU count | 2 |
CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
GPU count | — |
GPU model | — |
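The reported figures are internally consistent: energy in kWh is power in watts times duration in seconds divided by 3.6 × 10⁶, and dividing total emissions by total energy recovers the grid carbon intensity CodeCarbon applied for Switzerland. A quick sanity check:

```python
# Sanity-check the CodeCarbon numbers from the table above.
duration_s = 36881.51     # run duration in seconds
cpu_power_w = 42.5        # CodeCarbon's default CPU power draw
ram_power_w = 4.5
emissions_kg = 0.022679   # reported kg CO2eq

# Energy (kWh) = power (W) * time (s) / 3.6e6
cpu_energy_kwh = cpu_power_w * duration_s / 3.6e6
ram_energy_kwh = ram_power_w * duration_s / 3.6e6
total_kwh = cpu_energy_kwh + ram_energy_kwh

print(round(cpu_energy_kwh, 4))  # ≈ 0.4354
print(round(total_kwh, 4))       # ≈ 0.4815

# Implied grid carbon intensity, in g CO2eq per kWh
print(round(emissions_kg / total_kwh * 1000, 1))  # ≈ 47.1
```

The implied intensity of roughly 47 g CO2eq/kWh is plausible for the largely hydro- and nuclear-powered Swiss grid.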
Environmental Impact (for one core)
Metric | Value |
---|---|
CPU energy (kWh) | 0.070997 |
Emissions (CO2eq in kg) | 0.014445 |
Note: 17 May 2024
My Config
Config | Value |
---|---|
checkpoint | albert-base-v2 |
model_name | ft_bs16_lr7_base_x2 |
sequence_length | 400 |
num_epoch | 15 |
learning_rate | 5e-07 |
batch_size | 16 |
weight_decay | 0.0 |
warm_up_prop | 0.0 |
drop_out_prob | 0.1 |
packing_length | 100 |
train_test_split | 0.2 |
num_steps | 81450 |
Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall |
---|---|---|---|---|
0 | 0.588079 | 0.517896 | 0.740059 | 0.863497 |
1 | 0.477829 | 0.460693 | 0.781296 | 0.848160 |
2 | 0.429019 | 0.428960 | 0.805596 | 0.881902 |
3 | 0.391332 | 0.404802 | 0.807806 | 0.832822 |
4 | 0.368315 | 0.398500 | 0.819588 | 0.863497 |
5 | 0.350588 | 0.389129 | 0.821060 | 0.863497 |
6 | 0.335994 | 0.382235 | 0.822533 | 0.874233 |
7 | 0.324425 | 0.373543 | 0.834315 | 0.838957 |
8 | 0.310990 | 0.373090 | 0.831370 | 0.854294 |
9 | 0.300017 | 0.368493 | 0.834315 | 0.849693 |
10 | 0.286613 | 0.377919 | 0.832842 | 0.872699 |
11 | 0.275215 | 0.370514 | 0.836524 | 0.831288 |
12 | 0.260308 | 0.383199 | 0.834315 | 0.872699 |
13 | 0.249657 | 0.378506 | 0.837997 | 0.842025 |
14 | 0.234344 | 0.385054 | 0.834315 | 0.835890 |
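Test loss bottoms out around epoch 9 while train loss keeps falling, a sign of mild overfitting in the later epochs; accuracy peaks at epoch 13. A quick way to pick a checkpoint from the table (epochs 0-indexed as above):

```python
# Per-epoch test loss and accuracy, copied from the table above.
test_loss = [0.517896, 0.460693, 0.428960, 0.404802, 0.398500,
             0.389129, 0.382235, 0.373543, 0.373090, 0.368493,
             0.377919, 0.370514, 0.383199, 0.378506, 0.385054]
accuracy  = [0.740059, 0.781296, 0.805596, 0.807806, 0.819588,
             0.821060, 0.822533, 0.834315, 0.831370, 0.834315,
             0.832842, 0.836524, 0.834315, 0.837997, 0.834315]

best_loss_epoch = min(range(len(test_loss)), key=test_loss.__getitem__)
best_acc_epoch = max(range(len(accuracy)), key=accuracy.__getitem__)
print(best_loss_epoch, best_acc_epoch)  # 9 13
```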