diabolic6045 committed on
Commit 5cf15ac
1 Parent(s): 98bab67

Update README.md

Files changed (1)
  1. README.md +49 -58
README.md CHANGED
@@ -1,18 +1,55 @@
- ---
- library_name: transformers
- license: llama3.2
- base_model: meta-llama/Llama-3.2-1B-Instruct
- tags:
- - axolotl
- - generated_from_trainer
- model-index:
- - name: open-llama-Instruct
-   results: []
- ---
 
 
 
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
 
@@ -81,50 +118,4 @@ special_tokens:

```

- </details><br>
-
- # open-llama-Instruct
-
- This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the None dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 2
- - eval_batch_size: 2
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 2
- - total_train_batch_size: 4
- - total_eval_batch_size: 4
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 10
- - num_epochs: 1
- - mixed_precision_training: Native AMP
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.45.2
- - Pytorch 2.1.2
- - Datasets 3.0.1
- - Tokenizers 0.20.1
 
+ ---
+ library_name: transformers
+ license: llama3.2
+ base_model: meta-llama/Llama-3.2-1B-Instruct
+ tags:
+ - axolotl
+ - OpenHermes
+ model-index:
+ - name: open-llama-Instruct
+   results: []
+ datasets:
+ - diabolic6045/OpenHermes-2.5_alpaca_10
+ pipeline_tag: text-generation
+ ---
 
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

+ # open-llama-Instruct
+
+ - This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [diabolic6045/OpenHermes-2.5_alpaca_10](https://huggingface.co/datasets/diabolic6045/OpenHermes-2.5_alpaca_10) dataset, a 10% subset of the [OpenHermes 2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) dataset.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - total_train_batch_size: 4
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 10
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ - Training results will be added soon.
+
+ ### Framework versions
+
+ - Transformers 4.45.2
+ - Pytorch 2.1.2
+ - Datasets 3.0.1
+ - Tokenizers 0.20.1
+
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

 

```

+ </details><br>
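
The updated card declares `library_name: transformers` and `pipeline_tag: text-generation`, so the model can be loaded with the standard transformers chat workflow. The sketch below is illustrative and not part of the committed README; the Hub repo id `diabolic6045/open-llama-Instruct` is an assumption based on the card's model name and the committing user, so substitute the actual id if it differs.

```python
# Minimal usage sketch (not part of the committed README).
# Assumption: the model lives at "diabolic6045/open-llama-Instruct" on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diabolic6045/open-llama-Instruct"  # assumed repo id, adjust if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3.2 Instruct checkpoints ship a chat template, so build the prompt as messages.
messages = [{"role": "user", "content": "Summarize what instruction tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```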
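
The hyperparameter list in the new card was emitted by the Trainer during the Axolotl run; the collapsed axolotl config above is the actual training configuration. Purely as an illustration of how those values would be expressed with `transformers.TrainingArguments` (the output directory name is assumed), a rough mapping looks like this:

```python
# Illustrative mapping of the card's listed hyperparameters onto TrainingArguments.
# NOT the training script used for this release; training was run with Axolotl.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="open-llama-Instruct",   # assumed output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=2,      # 2 per device x 2 GPUs = total train batch size 4
    per_device_eval_batch_size=2,       # likewise, total eval batch size 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                          # "Native AMP" mixed precision
)
```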