## Environmental Impact (CODE CARBON DEFAULT)
Metric | Value |
---|---|
Duration (in seconds) | 42598.64 (≈ 11.8 h) |
Emissions (CO2eq in kg) | 0.0257771129576743 |
CPU power (W) | 42.5 |
GPU power (W) | [No GPU] |
RAM power (W) | 3.75 |
CPU energy (kWh) | 0.5028998541591901 |
GPU energy (kWh) | [No GPU] |
RAM energy (kWh) | 0.0443733026246229 |
Consumed energy (kWh) | 0.5472731567838127 |
Country name | Switzerland |
Cloud provider | N/A |
Cloud region | N/A |
CPU count | 2 |
CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
GPU count | N/A |
GPU model | N/A |
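The energy figures in the table follow directly from power × duration, and the grid carbon intensity CodeCarbon applied can be back-computed from emissions ÷ consumed energy. A minimal sanity-check sketch (values copied from the table, rounded):

```python
# Sanity-check the CodeCarbon figures: energy = power * duration,
# implied grid intensity = emissions / consumed energy.

duration_s = 42598.64          # seconds, from the table
cpu_power_w = 42.5             # watts
ram_power_w = 3.75             # watts
emissions_kg = 0.0257771       # kg CO2eq
consumed_kwh = 0.5472732       # kWh

# Convert W * s to kWh (1 kWh = 3.6e6 J)
cpu_kwh = cpu_power_w * duration_s / 3.6e6
ram_kwh = ram_power_w * duration_s / 3.6e6

# Carbon intensity implied for the Swiss grid
intensity_kg_per_kwh = emissions_kg / consumed_kwh

print(f"CPU energy: {cpu_kwh:.4f} kWh")    # ~0.5029, matches the table
print(f"RAM energy: {ram_kwh:.4f} kWh")    # ~0.0444, matches the table
print(f"Grid intensity: {intensity_kg_per_kwh * 1000:.1f} g CO2eq/kWh")
```

The implied intensity works out to roughly 47 g CO2eq/kWh, consistent with a low-carbon grid such as Switzerland's.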
## Environmental Impact (for one core)
Metric | Value |
---|---|
CPU energy (kWh) | 0.08200238209061621 |
Emissions (CO2eq in kg) | 0.0166844673517704 |
## Note

20 May 2024
## My Config
Config | Value |
---|---|
checkpoint | damgomz/ThunBERT_bs32_lr5 |
model_name | ft_bs16_lr7 |
sequence_length | 400 |
num_epoch | 15 |
learning_rate | 5e-07 |
batch_size | 16 |
weight_decay | 0.0 |
warm_up_prop | 0.0 |
drop_out_prob | 0.1 |
packing_length | 100 |
train_test_split | 0.2 |
num_steps | 81450 |
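The per-epoch workload can be derived from the config, assuming `num_steps` counts total optimizer steps across all epochs (an assumption; the card does not state this explicitly):

```python
# Back-of-the-envelope check of the training config above.
# Assumes num_steps = total optimizer steps over all epochs.

num_steps = 81450
num_epoch = 15
batch_size = 16

steps_per_epoch = num_steps // num_epoch           # 5430 steps per epoch
samples_per_epoch = steps_per_epoch * batch_size   # ~86,880 examples per epoch

print(steps_per_epoch, samples_per_epoch)
```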
## Training and Testing steps
Epoch | Train Loss | Test Loss | Accuracy | Recall |
---|---|---|---|---|
0 | 0.654002 | 0.604516 | 0.701031 | 0.799080 |
1 | 0.560526 | 0.529649 | 0.743741 | 0.860429 |
2 | 0.490023 | 0.481244 | 0.774669 | 0.878834 |
3 | 0.441199 | 0.436625 | 0.800442 | 0.819018 |
4 | 0.401051 | 0.410194 | 0.814433 | 0.866564 |
5 | 0.373274 | 0.389619 | 0.816642 | 0.851227 |
6 | 0.351502 | 0.375469 | 0.826951 | 0.863497 |
7 | 0.329992 | 0.371812 | 0.832106 | 0.826687 |
8 | 0.314654 | 0.368712 | 0.836524 | 0.848160 |
9 | 0.299517 | 0.373897 | 0.837261 | 0.878834 |
10 | 0.287029 | 0.372418 | 0.836524 | 0.860429 |
11 | 0.273394 | 0.375212 | 0.835788 | 0.822086 |
12 | 0.258898 | 0.379667 | 0.840206 | 0.852761 |
13 | 0.245673 | 0.387852 | 0.840943 | 0.865031 |
14 | 0.230148 | 0.401220 | 0.838733 | 0.880368 |
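Test loss bottoms out at epoch 8 and then rises while train loss keeps falling, a typical overfitting signal. A minimal sketch of selecting the best checkpoint by validation loss, with the (epoch, test loss, accuracy) triples copied from the table:

```python
# Pick the best epoch from the table above by lowest test loss.
results = [
    (0, 0.604516, 0.701031), (1, 0.529649, 0.743741), (2, 0.481244, 0.774669),
    (3, 0.436625, 0.800442), (4, 0.410194, 0.814433), (5, 0.389619, 0.816642),
    (6, 0.375469, 0.826951), (7, 0.371812, 0.832106), (8, 0.368712, 0.836524),
    (9, 0.373897, 0.837261), (10, 0.372418, 0.836524), (11, 0.375212, 0.835788),
    (12, 0.379667, 0.840206), (13, 0.387852, 0.840943), (14, 0.401220, 0.838733),
]

best_epoch, best_loss, best_acc = min(results, key=lambda r: r[1])
print(best_epoch, best_loss)  # epoch 8 has the lowest test loss
```

Note that peak accuracy (epoch 13) and minimum test loss (epoch 8) disagree; which to prefer depends on the downstream metric that matters.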