Text Generation
Transformers
Safetensors
llama
text-generation-inference
Inference Endpoints
adamo1139 committed on
Commit
2ddabbf
1 Parent(s): 229835e

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -11,7 +11,7 @@ datasets:
 
 Have you ever wanted a sandbox for text-based social media? A place where you can bully someone, start arguments, or attack people without any actual harm being done and without any repercussions? All of it fully local, so nobody but you will ever know? No? Well, HESOYAM can kind of do that, but it's not exactly a bully simulator; that's just one of the ways you could use it. Specify, in the system prompt, the place on the internet you want to be, and then start a discussion. Will it be engaging, or will you get sucked into someone's depression? For now, probably the latter. Still, I've had some insightful, concretely useful discussions with this model; it's not all gptslopped fluff. It does carry a lot of depressive, negative tones, though, so it might not be for everyone.
 
-To get this model, I first fine-tuned Yi-34B-200K (xlctx, i.e. the second version of the 34B 200K model, not the new 1.5) on [adamo1139/rawrr_v2-2_stage1](https://huggingface.co/datasets/adamo1139/rawrr_v2-2_stage1) so that the base model forgets its AI-assistant programming and behaves like a completion model trained on a raw internet corpus. This was done using [ORPO](https://huggingface.co/docs/trl/main/en/orpo_trainer) and [GaLore](https://arxiv.org/abs/2403.03507), all of it handled by [Unsloth](https://github.com/unslothai/unsloth). I would call it a moderately successful finetune; I plan to enhance the rawrr dataset with richer data to make better finetunes of this kind in the future. The resulting adapter file can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore-PEFT), and the FP16 model file for the RAWrr ORPO finetune can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore).
+To get this model, I first fine-tuned Yi-34B-200K (xlctx, i.e. the second version of the 34B 200K model, not the new 1.5) on [adamo1139/rawrr_v2-2_stage1](https://huggingface.co/datasets/adamo1139/rawrr_v2-2_stage1) so that the base model forgets its AI-assistant programming and behaves like a completion model trained on a raw internet corpus. This was done using [ORPO](https://arxiv.org/abs/2403.07691) and [GaLore](https://arxiv.org/abs/2403.03507), all of it handled by [Unsloth](https://github.com/unslothai/unsloth). I would call it a moderately successful finetune; I plan to enhance the rawrr dataset with richer data to make better finetunes of this kind in the future. The resulting adapter file can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore-PEFT), and the FP16 model file for the RAWrr ORPO finetune can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore).
 
 Once I had a good base model, I fine-tuned it on the [HESOYAM 0.2](https://huggingface.co/datasets/adamo1139/HESOYAM_v0.2) dataset. It's a collection of single-turn conversations from around 10 subreddits and multi-turn conversations from the /x/ board. There's also PIPPA in there. All samples have system prompts that tell the model where the discussion is taking place, which is useful when you're deciding where your sandbox discussion should happen. Here I used classic SFT with GaLore and Unsloth; I wanted quick results, so it's trained for just 0.4 epochs. The adapter from this part of the fine-tuning can be found [here](https://huggingface.co/adamo1139/Yi-34B-200K-XLCTX-HESOYAM-RAW-0905-GaLore-PEFT).
 
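The README never spells out the prompt format, so below is a minimal usage sketch of the sandbox idea described above: the system prompt names the corner of the internet the discussion takes place in. The repo id is a placeholder and the chat template is assumed to ship with the tokenizer, so check the model card for the real prompt format.

```python
# Hypothetical usage sketch: the repo id is a placeholder and the chat
# template is assumed to be bundled with the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Yi-34B-200K-XLCTX-HESOYAM-RAW-0905-GaLore"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The system prompt names the place where the discussion "happens",
# mirroring how the HESOYAM training samples are set up.
messages = [
    {"role": "system", "content": "A discussion thread on r/offmychest."},
    {"role": "user", "content": "Does anyone else feel completely stuck lately?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```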
 
 
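For anyone wanting to reproduce the first stage, here is a rough sketch built on TRL's `ORPOTrainer` plus the GaLore optimizer that recent `transformers` releases expose via `optim="galore_adamw"`. The actual run went through Unsloth and produced a PEFT adapter; every hyperparameter below is a placeholder rather than the author's setting.

```python
# Rough sketch of stage 1 (ORPO + GaLore) with TRL. The actual run used
# Unsloth's wrappers; hyperparameters here are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "01-ai/Yi-34B-200K"  # the "xlctx" revision, per the README
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ORPO trains on prompt/chosen/rejected preference pairs.
dataset = load_dataset("adamo1139/rawrr_v2-2_stage1", split="train")

config = ORPOConfig(
    output_dir="yi-34b-200k-raw-orpo",
    beta=0.1,                              # ORPO preference weight (placeholder)
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=5e-6,
    num_train_epochs=1,
    optim="galore_adamw",                  # GaLore low-rank gradient projection
    optim_target_modules=["attn", "mlp"],  # apply GaLore to attention/MLP weights
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```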
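A matching sketch of the second stage: a short SFT pass over HESOYAM 0.2, with the GaLore optimizer again standing in for the Unsloth-managed run. It assumes a recent TRL that ships `SFTConfig`, and the dataset may need a `formatting_func` depending on its column layout; the settings are illustrative only.

```python
# Illustrative sketch of stage 2: a short SFT pass over HESOYAM 0.2 on top
# of the stage-1 model. The real run used Unsloth; settings are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("adamo1139/HESOYAM_v0.2", split="train")

config = SFTConfig(
    output_dir="yi-34b-200k-hesoyam-sft",
    num_train_epochs=0.4,                  # the README trains for just 0.4 epochs
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    optim="galore_adamw",                  # GaLore again, as in stage 1
    optim_target_modules=["attn", "mlp"],
)

trainer = SFTTrainer(
    model="adamo1139/Yi-34B-200K-XLCTX-RAW-ORPO-0805-GaLore",  # stage-1 FP16 model
    args=config,
    train_dataset=dataset,
)
trainer.train()
```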