Commit eb0f531 by elinas (parent: 41ebf69)

Update README.md

Update README.md

Files changed (1): README.md (+9 -4)
@@ -33,8 +33,7 @@ I was wondering how transformers work?<|im_end|>
 ```
 
 ## Quantization
-Please note that we tested this model with a 4.5bpw EXL2 quant. Results are not expected to be the same when going below this quantization.
-
+Please note that we tested this model with a 5.0bpw EXL2 quant. Results are not expected to be the same when going below this quantization.
 
 #### LlamaCPP
 TODO!
@@ -48,17 +47,23 @@ TODO!
 ## Sampling Settings
 Here are some settings that work well with this model:
 ```
-TODO!
+Coming soon
 ```
 
 ## Credit
-Thank you to my team consisting of [@Fizzarolli](https://huggingface.co/Fizzarolli), [@ToastyPigeon](https://huggingface.co/ToastyPigeon) and myself [@elinas](https://huggingface.co/elinas).
+Thank you to my team consisting of [@ToastyPigeon](https://huggingface.co/ToastyPigeon), [@Fizzarolli](https://huggingface.co/Fizzarolli), and myself, [@elinas](https://huggingface.co/elinas).
 
 Additional thanks to [@AlpinDale](https://huggingface.co/AlpinDale) and the rest of the PygmalionAI team for graciously providing the compute to finetune this model!
 Thank you to [anthracite-org](https://huggingface.co/anthracite-org) as well for sponsoring this model.
 
 ## Additional Details
 
+We used a combination of provided logs and WizardLM evol data, both cleaned up and de-slopped.
+
+Thanks to Anthropic and OpenAI for the models used to generate synthetic and partially synthetic data to train this model.
+
+Thanks to Elon Musk for being based enough to train AI that compares to the top models.
+
 If you have any questions or concerns, please post in the community tab.
 
 DISCLAIMER: Outputs generated by the model are not reflective of our views.
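As a rough guide to what the bpw change above means in practice: an EXL2 quant's weight footprint scales roughly linearly with bits per weight. This minimal sketch compares the 4.5 and 5.0 bpw figures from the diff; the 70B parameter count is purely an illustrative assumption, not this model's actual size.

```python
# Approximate weight storage for a model quantized at a given
# EXL2 bits-per-weight (bpw). Ignores overhead such as embeddings
# kept at higher precision, so treat results as a lower bound.
def quant_size_gb(n_params: float, bpw: float) -> float:
    """Weight storage in gigabytes at `bpw` bits per weight."""
    return n_params * bpw / 8 / 1e9

params = 70e9  # hypothetical parameter count, for illustration only
for bpw in (4.5, 5.0):
    print(f"{bpw} bpw -> ~{quant_size_gb(params, bpw):.1f} GB")
# 4.5 bpw -> ~39.4 GB
# 5.0 bpw -> ~43.8 GB
```

The half-bit difference costs a few gigabytes at this scale, which is why the tested 5.0 bpw quant is called out explicitly rather than a smaller one.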