gosshh committed
Commit c475f21 · verified · 1 Parent(s): e4b0b04

Model save

README.md ADDED
@@ -0,0 +1,64 @@
---
license: apache-2.0
base_model: albert/albert-base-v2
tags:
- generated_from_trainer
model-index:
- name: output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# output

This model is a fine-tuned version of [albert/albert-base-v2](https://huggingface.co/albert/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3331
- Memory Allocated (GB): 5.75
- Max Memory Allocated (GB): 10.76
- Total Memory Available (GB): 94.62

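The card does not state the downstream task or the final repository id, so the following is only a minimal loading sketch: the `AutoModelForSequenceClassification` head and the local path `./output` are assumptions, not details taken from this commit.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal loading sketch. Assumptions: the checkpoint was saved to "./output"
# (the model-index name above) and carries a sequence-classification head;
# neither is confirmed by the card itself.
tokenizer = AutoTokenizer.from_pretrained("./output")
model = AutoModelForSequenceClassification.from_pretrained("./output")

inputs = tokenizer("Example sentence to score.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```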
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` configuration is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: reduce_lr_on_plateau
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

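The listed values map onto `transformers.TrainingArguments` roughly as follows. This is a hedged reconstruction, not the actual training script: `output_dir` is assumed from the model-index name, and any argument not listed above (evaluation strategy, logging, saving, etc.) is left at whatever the real run used.

```python
from transformers import TrainingArguments

# Hedged sketch of a TrainingArguments object matching the listed hyperparameters.
# output_dir is an assumption; unlisted options are not reconstructed here.
training_args = TrainingArguments(
    output_dir="output",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_ratio=0.1,
    num_train_epochs=4,
)
```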
### Training results

| Training Loss | Epoch | Step | Validation Loss | Memory Allocated (GB) | Max Memory Allocated (GB) | Total Memory Available (GB) |
|:-------------:|:-----:|:----:|:---------------:|:---------------------:|:-------------------------:|:---------------------------:|
| No log        | 1.0   | 391  | 0.2682          | 5.75                  | 10.76                     | 94.62                       |
| No log        | 2.0   | 782  | 0.2636          | 5.75                  | 10.76                     | 94.62                       |
| No log        | 3.0   | 1173 | 0.2861          | 5.75                  | 10.76                     | 94.62                       |
| 0.2178        | 4.0   | 1564 | 0.3331          | 5.75                  | 10.76                     | 94.62                       |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.2a0+gitb5d0b9b
- Datasets 2.19.1
- Tokenizers 0.19.1
emissions.csv ADDED
@@ -0,0 +1,2 @@
timestamp,project_name,run_id,duration,emissions,emissions_rate,cpu_power,gpu_power,ram_power,cpu_energy,gpu_energy,ram_energy,energy_consumed,country_name,country_iso_code,region,cloud_provider,cloud_region,os,python_version,codecarbon_version,cpu_count,cpu_model,gpu_count,gpu_model,longitude,latitude,ram_total_size,tracking_mode,on_cloud,pue
2024-07-19T15:57:43,codecarbon,9adda612-4ce8-49d0-9a55-e3b071a7b885,554.2081344127655,0.00015379434004080274,2.775028558607534e-07,42.5,0.0,377.7889337539673,0.006542585306697421,0,0.05815249035074544,0.06469507565744285,Canada,CAN,quebec,,,Linux-5.15.0-113-generic-x86_64-with-glibc2.35,3.10.12,2.3.5,160,Intel(R) Xeon(R) Platinum 8380 CPU @ 2.30GHz,,,-71.2,46.8,1007.4371566772461,machine,N,1.0
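The two lines above are a CodeCarbon run log: one header row and one measurement row (roughly 554 s of tracked time and about 0.00015 kg of CO2-equivalent emissions). As a rough illustration only, a file in this format is typically produced by wrapping the training run in an `EmissionsTracker`; the project name and output file below mirror the CSV, but the actual training script is not part of this commit.

```python
from codecarbon import EmissionsTracker

# Illustrative sketch of how an emissions.csv like the one above is produced.
# project_name and output_file mirror the CSV; everything else is assumed.
tracker = EmissionsTracker(project_name="codecarbon", output_file="emissions.csv")
tracker.start()
try:
    pass  # the training loop would run here
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-eq, also appended to the CSV

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```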
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7a261740a0305651ca0171ad2f810bdd8e28a15d8a2455ba14e5b3dd33676955
+ oid sha256:7cef55a8434a842a953ef9c168f2412e3fc31e039b3a1dd435881a70cccffda1
  size 46743912
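The pointer diff above means only the Git LFS object changed: the new model.safetensors still weighs 46,743,912 bytes but now resolves to the sha256 oid ending in ...ffda1. As a hedged illustration (the local file path is an assumption), a downloaded copy of the weights can be checked against that oid like this:

```python
import hashlib

# Verify a downloaded model.safetensors against the new LFS pointer oid above.
# "model.safetensors" is assumed to be the locally downloaded file path.
EXPECTED_OID = "7cef55a8434a842a953ef9c168f2412e3fc31e039b3a1dd435881a70cccffda1"

sha = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED_OID, "file does not match the LFS pointer oid"
print("model.safetensors matches the committed LFS pointer")
```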
runs/Jul19_15-47-48_7cbd3327a802/events.out.tfevents.1721404109.7cbd3327a802.326136.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7de482f70234dc5b82a2563693a53e955e0656e1b960afb626e2af3262902620
- size 6972
+ oid sha256:983e8a951d28693c96defda5cec5b86bbcdd9e90fd1caa5dabfa0d46f05bbb9a
+ size 8432