---
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Silicon-Maid-7B
- chargoddard/loyal-piano-m7-cdpo
- jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
- NeverSleep/Noromaid-7b-v0.2
- athirdpath/NSFW_DPO_vmgb-7b
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- chargoddard/loyal-piano-m7-cdpo
- jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
- NeverSleep/Noromaid-7b-v0.2
- athirdpath/NSFW_DPO_vmgb-7b
license: apache-2.0
---

# HighdensityRPMerge-7B

HighdensityRPMerge-7B is a DARE-TIES merge of the following models, made using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) with [saishf/West-Hermes-7B](https://huggingface.co/saishf/West-Hermes-7B) as the base model:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [chargoddard/loyal-piano-m7-cdpo](https://huggingface.co/chargoddard/loyal-piano-m7-cdpo)
* [jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES](https://huggingface.co/jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES)
* [NeverSleep/Noromaid-7b-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2)
* [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b)

## 🧩 Configuration

```yaml
models:
  - model: saishf/West-Hermes-7B
    # no parameters necessary for base model
  - model: SanjiWatsuki/Silicon-Maid-7B
    parameters:
      weight: 0.4
      density: 0.8
  - model: chargoddard/loyal-piano-m7-cdpo
    parameters:
      weight: 0.3
      density: 0.8
  - model: jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES
    parameters:
      weight: 0.25
      density: 0.45
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.25
      density: 0.4
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
      density: 0.4
merge_method: dare_ties
base_model: saishf/West-Hermes-7B
parameters:
  int8_mask: true
dtype: bfloat16
```

A sketch for reproducing this merge with mergekit appears at the end of this card.

## 💻 Usage

```python
# Install dependencies first (in a notebook: !pip install -qU transformers accelerate)
import torch
import transformers
from transformers import AutoTokenizer

model = "jsfs11/HighdensityRPMerge-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```
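
If VRAM is tight, the model can also be loaded in 4-bit instead of float16. This variant is not part of the original card; it is a minimal sketch assuming `bitsandbytes` is installed (`pip install -qU bitsandbytes`).

```python
# Illustrative low-VRAM variant: load the 7B model 4-bit quantized via bitsandbytes.
# Assumes `pip install -qU bitsandbytes`; not part of the original card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "jsfs11/HighdensityRPMerge-7B"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```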
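
## 🔁 Reproducing the merge

The YAML in the Configuration section can be fed to mergekit's Python API, which is what LazyMergekit does under the hood. In DARE-TIES, each model's `density` is the fraction of its delta from the base model kept after random pruning, and `weight` scales its contribution to the merged deltas. The sketch below is illustrative rather than the exact notebook code: the `config.yaml` and output paths are placeholders, and it assumes mergekit is installed (`pip install mergekit`).

```python
# Minimal sketch: run the merge from the YAML above with mergekit's Python API.
# Assumes `pip install mergekit`; file and output paths are placeholders.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration saved from the Configuration section.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./HighdensityRPMerge-7B",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```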