Commit 8a4e1e1 (1 parent: 3d2f1a2) by oobabooga

Update README.md

Files changed (1): README.md (+2, -11)
README.md CHANGED
@@ -58,6 +58,8 @@ Bot reply
 
 ## Evaluation
 
+(This is not very scientific, so bear with me.)
+
 I made a quick experiment where I asked a set of 3 Python and 3 Javascript questions (real world, difficult questions with nuance) to the following models:
 
 1) This one
@@ -81,17 +83,6 @@ The resulting cumulative scores were:
 
 CodeBooga-34B-v0.1 performed very well, while its variant performed poorly, so I uploaded the former but not the latter.
 
-## Recommended settings
-
-I recommend the [Divine Intellect](https://github.com/oobabooga/text-generation-webui/blob/ae8cd449ae3e0236ecb3775892bb1eea23f9ed68/presets/Divine%20Intellect.yaml) preset for instruction-following models like this, as per the [Preset Arena experiment results](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md):
-
-```yaml
-temperature: 1.31
-top_p: 0.14
-repetition_penalty: 1.17
-top_k: 49
-```
-
 ## Quantized versions
 
 ### GGUF
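
For reference, the preset removed in this commit is just a set of standard sampling parameters. Below is a minimal sketch of applying the same values through Hugging Face `transformers`; the model ID and prompt are assumptions for illustration, not part of the commit.

```python
# Minimal sketch, not part of this commit: applying the sampling values
# from the removed "Divine Intellect" preset via transformers' generate().
# The model ID and prompt below are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oobabooga/CodeBooga-34B-v0.1"  # assumed; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that merges two sorted lists."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,          # sampling must be enabled for the values below to apply
    temperature=1.31,        # values taken verbatim from the removed preset
    top_p=0.14,
    top_k=49,
    repetition_penalty=1.17,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```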