A2H0H0R1 committed on
Commit faeb9b0
1 Parent(s): 1638fda

Update README.md

Files changed (1): README.md (+2, -18)
README.md CHANGED
@@ -2,9 +2,7 @@
 license: other
 base_model: NousResearch/Llama-2-7b-chat-hf
 tags:
-- llama-factory
-- lora
-- generated_from_trainer
+- biology
 model-index:
 - name: 2023-11-29-06-20-56
   results: []
@@ -29,20 +27,6 @@ More information needed
 
 More information needed
 
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 4
-- eval_batch_size: 8
-- seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
-- num_epochs: 3.0
 
 ### Training results
 
@@ -53,4 +37,4 @@ The following hyperparameters were used during training:
 - Transformers 4.34.1
 - Pytorch 2.1.0+cu118
 - Datasets 2.14.7
-- Tokenizers 0.14.1
+- Tokenizers 0.14.1
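For reference, the hyperparameters removed from the card in this commit can be written as a plain Python config sketch. This is illustrative only: the parameter names below follow the Hugging Face `TrainingArguments` convention, while the original run used llama-factory with LoRA, and `total_train_batch_size` in the card is a derived value, not a setting.

```python
# Hyperparameters from the removed "Training procedure" section,
# expressed as a plain config dict (names follow the Hugging Face
# TrainingArguments convention; illustrative sketch only).
training_config = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 4,   # "train_batch_size: 4" in the card
    "per_device_eval_batch_size": 8,    # "eval_batch_size: 8" in the card
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "cosine",
    "num_train_epochs": 3.0,
    "adam_beta1": 0.9,                  # optimizer: Adam, betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
}

# The "total_train_batch_size: 16" line in the card is derived from
# the per-device batch size times the accumulation steps:
total_train_batch_size = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 16
```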