Environmental Impact (CodeCarbon default)
Metric | Value |
---|---|
Duration (in seconds) | 37240.51 |
Emissions (CO2eq in kg) | 0.022900 |
CPU power (W) | 42.5 |
GPU power (W) | [No GPU] |
RAM power (W) | 4.5 |
CPU energy (kWh) | 0.439644 |
GPU energy (kWh) | [No GPU] |
RAM energy (kWh) | 0.046550 |
Consumed energy (kWh) | 0.486194 |
Country name | Switzerland |
Cloud provider | n/a |
Cloud region | n/a |
CPU count | 2 |
CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
GPU count | n/a |
GPU model | n/a |
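The energy figures above follow directly from power draw times duration. A minimal sketch of that arithmetic, with all inputs copied from the table (the grid carbon intensity is not reported by CodeCarbon here, so it is inferred as emissions divided by consumed energy):

```python
# Reproduce the CodeCarbon energy figures from power draw and duration.
# All input values are taken from the table above.
duration_s = 37240.51   # run duration in seconds
cpu_power_w = 42.5      # CPU power draw in watts
ram_power_w = 4.5       # RAM power draw in watts (no GPU on this run)

J_PER_KWH = 3.6e6       # joules per kilowatt-hour

cpu_energy_kwh = cpu_power_w * duration_s / J_PER_KWH   # ~0.4396 kWh
ram_energy_kwh = ram_power_w * duration_s / J_PER_KWH   # ~0.0466 kWh
consumed_kwh = cpu_energy_kwh + ram_energy_kwh          # ~0.4862 kWh

# Implied grid carbon intensity (kg CO2eq per kWh) for this Swiss run:
emissions_kg = 0.022900
intensity_kg_per_kwh = emissions_kg / consumed_kwh      # ~0.047 kg/kWh
```

The small implied intensity (~47 g CO2eq/kWh) is consistent with Switzerland's largely hydro and nuclear grid.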
Environmental Impact (for one core)
Metric | Value |
---|---|
CPU energy (kWh) | 0.071688 |
Emissions (CO2eq in kg) | 0.014586 |
Note: 17 May 2024
My Config
Config | Value |
---|---|
checkpoint | albert-base-v2 |
model_name | ft_bs32_lr7_base_x2 |
sequence_length | 400 |
num_epoch | 15 |
learning_rate | 5e-07 |
batch_size | 32 |
weight_decay | 0.0 |
warm_up_prop | 0.0 |
drop_out_prob | 0.1 |
packing_length | 100 |
train_test_split | 0.2 |
num_steps | 81450 |
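For reference, the table above can be restated as a plain Python dict, together with one sanity check on the step count. This is a sketch, not the actual training script (which is not shown here); the key names simply mirror the table, and the derived training-set size is an estimate that assumes every batch is full:

```python
# Restatement of the config table, plus a sanity check on num_steps.
config = {
    "checkpoint": "albert-base-v2",
    "sequence_length": 400,
    "num_epoch": 15,
    "learning_rate": 5e-07,
    "batch_size": 32,
    "weight_decay": 0.0,
    "warm_up_prop": 0.0,
    "drop_out_prob": 0.1,
    "train_test_split": 0.2,
    "num_steps": 81450,
}

# 81450 total optimizer steps over 15 epochs -> 5430 steps per epoch.
steps_per_epoch = config["num_steps"] // config["num_epoch"]

# Upper bound on the training-set size (exact only if no partial batch):
approx_train_examples = steps_per_epoch * config["batch_size"]
```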
Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall |
---|---|---|---|---|
0 | 0.649573 | 0.574175 | 0.700295 | 0.776074 |
1 | 0.527511 | 0.510573 | 0.751105 | 0.858896 |
2 | 0.470206 | 0.462849 | 0.782769 | 0.845092 |
3 | 0.423501 | 0.438939 | 0.801915 | 0.901840 |
4 | 0.393218 | 0.412751 | 0.809278 | 0.777607 |
5 | 0.370431 | 0.392084 | 0.829897 | 0.881902 |
6 | 0.356885 | 0.386488 | 0.832106 | 0.894172 |
7 | 0.343338 | 0.377735 | 0.836524 | 0.815951 |
8 | 0.336663 | 0.368789 | 0.839470 | 0.880368 |
9 | 0.324767 | 0.364318 | 0.846834 | 0.855828 |
10 | 0.316849 | 0.363216 | 0.845361 | 0.851227 |
11 | 0.308921 | 0.364404 | 0.844624 | 0.895706 |
12 | 0.300201 | 0.366181 | 0.838733 | 0.803681 |
13 | 0.295329 | 0.368080 | 0.847570 | 0.911043 |
14 | 0.286295 | 0.365098 | 0.851252 | 0.901840 |
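A common way to pick a checkpoint from a run like this is to take the epoch with the lowest test loss. A minimal sketch over the table above (losses copied verbatim):

```python
# (epoch, test_loss) pairs copied from the training table above.
test_losses = [
    (0, 0.574175), (1, 0.510573), (2, 0.462849), (3, 0.438939),
    (4, 0.412751), (5, 0.392084), (6, 0.386488), (7, 0.377735),
    (8, 0.368789), (9, 0.364318), (10, 0.363216), (11, 0.364404),
    (12, 0.366181), (13, 0.368080), (14, 0.365098),
]

# Select the epoch that minimizes test loss (simple model selection).
best_epoch, best_loss = min(test_losses, key=lambda t: t[1])
```

Test loss bottoms out at epoch 10 and drifts slightly upward afterward while train loss keeps falling, a mild overfitting signal; the epoch-10 checkpoint is therefore a reasonable choice despite training continuing to epoch 14.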