SanjiWatsuki committed on
Commit
c69bcea
1 Parent(s): 61575c1

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -81,6 +81,8 @@ parameters:
   dtype: bfloat16
 ```
 
+**There was no additional training, finetuning, or DPO.** This is a straight merger.
+
 ### Prompt Template (Alpaca)
 
 ```
@@ -123,3 +125,5 @@ I found that this model **performed worse** with the xDAN prompt format so, desp
 | wizardlm-30b | 2 | 6.887500 | 30b
 | vicuna-33b-v1.3 | 2 | 6.787500 | 33b
 | Llama-2-70b-chat | 2 | 6.725000 | 70b
+
+If you'd like to replicate the MT-Bench run, please ensure that the Alpaca prompt template is applied to the model. I did this by putting "alpaca" in the model path to trigger the `AlpacaAdapter`.
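For context on the Alpaca template mentioned above, here is a minimal sketch of the standard Alpaca prompt format. The helper name `build_alpaca_prompt` is hypothetical, and the exact preamble wording the evaluation harness applies may differ from this sketch:

```python
# Hypothetical helper sketching the standard Alpaca prompt convention.
# The preamble and section markers follow the common Alpaca format;
# the actual adapter in the MT-Bench harness may word things differently.
def build_alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the merge method used by this model.")
```

The generated text is then appended after the `### Response:` marker, so the model sees the instruction framed exactly as it was during Alpaca-style training.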