pszemraj committed on
Commit b44d47d
1 Parent(s): 3c0383d

Model save

Files changed (3):
  1. README.md +72 -0
  2. generation_config.json +7 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ license: apache-2.0
+ base_model: pszemraj/tinyllama-1.1b-3T
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: tinyllama-1.1b-3T-bees-internal-2048-vN
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # tinyllama-1.1b-3T-bees-internal-2048-vN
+
+ This model is a fine-tuned version of [pszemraj/tinyllama-1.1b-3T](https://huggingface.co/pszemraj/tinyllama-1.1b-3T) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.1639
+ - Accuracy: 0.5407
+
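For context (editorial addition, not part of the committed card): the reported eval loss of 2.1639 corresponds to a perplexity of roughly exp(2.1639) ≈ 8.7, assuming the loss is mean next-token cross-entropy in nats. Below is a minimal inference sketch for a checkpoint like this one; the repo id is an assumption derived from the model name, so substitute the actual path.

```python
# Minimal inference sketch (editorial addition, not from the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pszemraj/tinyllama-1.1b-3T-bees-internal-2048-vN"  # assumed repo id; replace with the real path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Honey bees communicate by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```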
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
+ - learning_rate: 0.0001
+ - train_batch_size: 4
+ - eval_batch_size: 2
+ - seed: 13707
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.05
+ - num_epochs: 2.0
+
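The hyperparameters above map roughly onto `transformers.TrainingArguments` as sketched below. This is an illustrative reconstruction, not the author's training script: the total train batch size of 64 is 4 × 16 gradient-accumulation steps (a single device is assumed), and the Trainer's default optimizer is AdamW, which is presumably what the auto-generated "Adam with betas=(0.9,0.95)" line refers to.

```python
# Illustrative reconstruction of the listed hyperparameters (assumes a single GPU).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-3T-bees-internal-2048-vN",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=16,  # 4 * 16 = 64 effective train batch size
    seed=13707,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2.0,
    evaluation_strategy="steps",
    eval_steps=50,  # assumption: matches the 50-step evaluation cadence in the results table below
)
```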
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 2.4432 | 0.19 | 50 | 2.3850 | 0.5033 |
+ | 2.3655 | 0.39 | 100 | 2.3124 | 0.5129 |
+ | 2.374 | 0.58 | 150 | 2.2588 | 0.5215 |
+ | 2.3558 | 0.78 | 200 | 2.2132 | 0.5291 |
+ | 2.2677 | 0.97 | 250 | 2.1828 | 0.5348 |
+ | 2.0701 | 1.17 | 300 | 2.1788 | 0.5373 |
+ | 2.0766 | 1.36 | 350 | 2.1673 | 0.5398 |
+ | 2.0669 | 1.56 | 400 | 2.1651 | 0.5402 |
+ | 2.0314 | 1.75 | 450 | 2.1641 | 0.5406 |
+ | 2.0281 | 1.95 | 500 | 2.1639 | 0.5407 |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.2
+ - PyTorch 2.1.0
+ - Datasets 2.16.1
+ - Tokenizers 0.15.0
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "max_length": 2048,
+   "pad_token_id": 0,
+   "transformers_version": "4.36.2"
+ }
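For reference (editorial addition): these are the generation defaults that `transformers` picks up when generating with this checkpoint. A minimal sketch of reading them back, again assuming a hypothetical repo id:

```python
# Loads the generation defaults committed above; the repo id is an assumption.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("pszemraj/tinyllama-1.1b-3T-bees-internal-2048-vN")
print(gen_config.max_length)     # 2048
print(gen_config.eos_token_id)   # 2
print(gen_config.pad_token_id)   # 0
```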
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4adf62176690b09108e0f1b36e9c7001371931a32e471674e17b7fb189d764af
+ oid sha256:3f3d457305c6406245fc2b6e735e15f0721cadf7d2119ac006cf6bd5ff6d7527
  size 2200119864