JosephusCheung committed
Commit b3ffc8e
1 Parent(s): dbeb915

Update README.md

Files changed (1):
  1. README.md +3 -0
README.md CHANGED
@@ -9,6 +9,8 @@ Use the transformers library that does not require remote/external code to load
 
 *Do not use wikitext for recalibration.*
 
+Initialized from Qwen 72B
+
 For details, please refer to the previous 14B & 7B versions: [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)
 
 Testing only, no performance guaranteeeee...
@@ -20,6 +22,7 @@ Testing only, no performance guaranteeeee...
 PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
 
 
+
 Disclaimer:
 
 Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. Therefore, you will still need to complete your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor training on SFT samples that refuse to answer certain questions for restrictive fine-tuning.
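The README above specifies the ChatML prompt format (linked in the diff). As a minimal sketch of what that format looks like in practice, the snippet below assembles a single-turn ChatML prompt string; the system and user messages are placeholders, not taken from the repository:

```python
# Minimal illustration of the ChatML prompt format referenced in the README.
# The delimiter tokens follow the linked chatml.md spec; the message
# contents below are placeholder examples.

def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt, ending at the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

Per the diff's hunk header, the model is meant to load with the standard transformers library without remote/external code, so a prompt built this way can be tokenized and passed to the model directly.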