adamo1139 committed
Commit 9b5691b
1 Parent(s): db90c3d

Update README.md

Files changed (1): README.md +14 -0
README.md CHANGED
@@ -109,9 +109,23 @@ model-index:
   source:
     url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=adamo1139/Yi-34B-200K-AEZAKMI-XLCTX-v3
     name: Open LLM Leaderboard
+
+
+
+
+
 ---
+## NEWS
+
+<b>This model was renamed from adamo1139/Yi-34B-200K-AEZAKMI-XLCTX-v3 to adamo1139/Yi-34B-200K-AEZAKMI-RAW-TOXIC-XLCTX-2303 on 2024-03-30. \
+I am not happy with how often this model starts enumerating lists, and I plan to improve the toxic DPO dataset to fix that. Because of this, I don't think it deserves to be called AEZAKMI v3; it will just be the next testing iteration of AEZAKMI RAW TOXIC. \
+I think I will upload one EXL2 quant before moving on to a different training run.</b>
+
+
 ## Model description
 
+
+
 Yi-34B 200K XLCTX base model fine-tuned on the RAWrr_v2 (DPO), AEZAKMI-3-6 (SFT) and unalignment/toxic-dpo-0.1 (DPO) datasets. Training took around 20-30 hours total on an RTX 3090 Ti; all finetuning was done locally.
 It's like airoboros but with less gptslop, no refusals, and less of the typical language used by RLHFed OpenAI models, with extra spiciness.
 Say goodbye to "It's important to remember"! \
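
Since this commit announces a repo rename, anyone with the old id in their scripts needs to switch to the new one. Below is a minimal loading sketch, assuming the standard transformers `AutoModelForCausalLM`/`AutoTokenizer` API; the prompt is purely illustrative, since the model's expected chat format is documented in the full model card, not in this diff.

```python
# Minimal sketch: load the model under its new repo id (renamed 2024-03-30).
# Assumes transformers and accelerate are installed; the prompt below is
# illustrative only, not the model's documented chat format.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "adamo1139/Yi-34B-200K-AEZAKMI-RAW-TOXIC-XLCTX-2303"  # new name

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # shard the 34B weights across available devices
    torch_dtype="auto",  # keep the checkpoint's native dtype
)

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Requests against the old id may keep working while the Hub redirects renamed repos, but pinning the new id avoids relying on that behavior.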