SKNahin committed on
Commit 039dbd2 · verified · 1 Parent(s): 52c07b2

Model save

Files changed (1)
1. README.md +2 -5
README.md CHANGED
@@ -1,10 +1,7 @@
 ---
 library_name: transformers
-license: llama3.2
-base_model: hishab/titulm-llama-3.2-1b-v2.0
 tags:
 - llama-factory
-- full
 - generated_from_trainer
 model-index:
 - name: titulm-llama-3.2-1b-v2.0-Instruct-v1.0
@@ -16,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->

 # titulm-llama-3.2-1b-v2.0-Instruct-v1.0

-This model is a fine-tuned version of [hishab/titulm-llama-3.2-1b-v2.0](https://huggingface.co/hishab/titulm-llama-3.2-1b-v2.0) on the alpaca dataset.
+This model was trained from scratch on an unknown dataset.

 ## Model description

@@ -38,7 +35,7 @@ The following hyperparameters were used during training:
 - learning_rate: 1e-05
 - train_batch_size: 5
 - eval_batch_size: 8
-- seed: 4000
+- seed: 400
 - distributed_type: multi-GPU
 - num_devices: 4
 - gradient_accumulation_steps: 5
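
For orientation, here is a minimal, hypothetical sketch of how the hyperparameters in the hunk above would map onto `transformers.TrainingArguments`. It is not the authors' LLaMA-Factory configuration: `out` is a placeholder output directory, and the 4-GPU setup (`distributed_type: multi-GPU`, `num_devices: 4`) would come from the launch command rather than from these arguments.

```python
# Hypothetical sketch, not the authors' training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",                # placeholder path
    learning_rate=1e-5,              # learning_rate: 1e-05
    per_device_train_batch_size=5,   # train_batch_size: 5
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=400,                        # seed: 400 (changed from 4000 in this commit)
    gradient_accumulation_steps=5,   # gradient_accumulation_steps: 5
)

# Effective global batch size per optimizer step:
# 5 (per device) x 5 (grad accum) x 4 (GPUs) = 100 examples.
```

Under this reading, the commit only shrinks the effective randomness seed (4000 to 400); the batch-size arithmetic and the rest of the training setup are unchanged.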