radm committed
Commit e209d7d
1 Parent(s): 2525d11

Update README.md

Files changed (1)
  1. README.md +24 -26
README.md CHANGED
@@ -26,31 +26,7 @@ This is a LoRA adapter for NousResearch/Meta-Llama-3-70B-Instruct, fine-tuned to
  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
  Use the repository (https://github.com/r4dm/arena-hard-local) to evaluate with a local judge model.

- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- Datasets:
- - radm/arenahard_gpt4vsllama3
- - radm/truthy-dpo-v0.1-ru
- - jondurbin/truthy-dpo-v0.1
-
- #### Training Hyperparameters
-
- - **Training regime:** [bf16] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- - **Load in 4 bit:** [True]
- - **Target modules:** [all]
- - **LoRA rank:** [16]
- - **Max seq length:** [8192]
- - **Use gradient checkpointing:** [unsloth]
- - **Trainer:** [ORPOTrainer]
- - **Batch size:** [1]
- - **Gradient accumulation steps:** [4]
- - **Epochs:** [1]
-
- ### Results
-
+ ## Results

  #### Llama-3-70B-Instruct-GPTQ as judge:
  ```console
@@ -77,8 +53,30 @@ Vikhr-7B-instruct_0.5 | score: 14.2 | 95% CI: (-
  alpindale_gemma-2b-it | score: 7.9 | 95% CI: (-0.9, 0.8) | average #tokens: 425
  ```

+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+ Datasets:
+ - radm/arenahard_gpt4vsllama3
+ - radm/truthy-dpo-v0.1-ru
+ - jondurbin/truthy-dpo-v0.1
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [bf16] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+ - **Load in 4 bit:** [True]
+ - **Target modules:** [all]
+ - **LoRA rank:** [16]
+ - **Max seq length:** [8192]
+ - **Use gradient checkpointing:** [unsloth]
+ - **Trainer:** [ORPOTrainer]
+ - **Batch size:** [1]
+ - **Gradient accumulation steps:** [4]
+ - **Epochs:** [1]

- ## Hardware
+ ### Hardware

  - **Hardware Type:** [Nvidia A100 80 GB]
  - **Hours used:** [11 hours]
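
For context on the evaluation step referenced above, the sketch below mirrors the entry points of the upstream arena-hard-auto harness; whether r4dm/arena-hard-local keeps the same scripts and how it configures the local judge model are assumptions, so check that repository's README before running.

```console
# Assumed workflow, mirroring upstream arena-hard-auto; the fork
# substitutes a locally hosted judge model for the GPT-4 judge.
python gen_answer.py      # generate answers for the models under test
python gen_judgment.py    # have the judge model score pairwise battles
python show_result.py     # print a score table like the one above
```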
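The hyperparameters in the diff correspond to a 4-bit QLoRA ORPO run with Unsloth and TRL. Below is a minimal sketch of that configuration, not the author's actual training script: the dataset column names and the expansion of "Target modules: [all]" into the usual Llama linear-projection list are assumptions.

```python
# Sketch of the listed settings: 4-bit load, rank-16 LoRA, Unsloth
# gradient checkpointing, ORPOTrainer, batch size 1, grad accum 4, 1 epoch.
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer
from unsloth import FastLanguageModel

# Base model in 4-bit with the 8192-token max sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NousResearch/Meta-Llama-3-70B-Instruct",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Rank-16 LoRA; "[all]" is read here as all linear projections (assumption).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# ORPO expects preference data; assuming prompt/chosen/rejected columns.
train_dataset = load_dataset("radm/arenahard_gpt4vsllama3", split="train")

trainer = ORPOTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    args=ORPOConfig(
        output_dir="outputs",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        bf16=True,
        max_length=8192,
    ),
)
trainer.train()
```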
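Since this is a LoRA adapter rather than a full model, inference means loading the base model and attaching the adapter weights, for example with transformers + peft as sketched below. The adapter id is a hypothetical placeholder for this repository's actual Hub id.

```python
# Minimal adapter-loading sketch; ADAPTER_ID is a placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "NousResearch/Meta-Llama-3-70B-Instruct"
ADAPTER_ID = "radm/<this-adapter-repo>"  # hypothetical; use the real Hub id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# Llama-3-Instruct uses a chat template; generate a short reply.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```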