---
base_model: v000000/YamWizard28-7B-abliterated
library_name: transformers
tags:
- mergekit
- merge
- mistral
- llama-cpp
---

# v000000/YamWizard28-7B-Q8_0-GGUF

This model was converted to GGUF format from [`v000000/YamWizard28-7B`](https://huggingface.co/v000000/YamWizard28-7B) using llama.cpp.
Refer to the [original model card](https://huggingface.co/v000000/YamWizard28-7B) for more details on the model.

### YamWizard28-7B

idk

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/a2hoXGGGA-XJBjs-6O33-.png)

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method (a brief sketch of SLERP appears at the end of this card).

### Models Merged

The following models were included in the merge:
* [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B)
* [fearlessdots/WizardLM-2-7B-abliterated](https://huggingface.co/fearlessdots/WizardLM-2-7B-abliterated)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: fearlessdots/WizardLM-2-7B-abliterated
    layer_range: [0, 32]
  - model: automerger/YamshadowExperiment28-7B
    layer_range: [0, 32]
merge_method: slerp
base_model: fearlessdots/WizardLM-2-7B-abliterated
parameters:
  t:
  - filter: self_attn
    value: [0.1, 0.6, 0.3, 0.8, 0.5]
  - filter: mlp
    value: [0.9, 0.4, 0.7, 0.2, 0.5]
  - value: 0.5
dtype: bfloat16
```

### Prompt Format (Alpaca):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system}

### Instruction:
{prompt}

### Response:
{output}
```
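For illustration, a minimal Python helper that assembles this template (the function name and default system string are my own, not part of the model release):

```python
def alpaca_prompt(prompt: str, system: str = "You are a helpful assistant.") -> str:
    """Fill in the Alpaca-style template shown above. Illustrative only."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{system}\n\n"
        f"### Instruction:\n{prompt}\n\n"
        "### Response:\n"
    )
```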
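### Example: running the GGUF

A sketch using the third-party `llama-cpp-python` bindings, reusing the prompt helper above. The `filename` pattern is an assumption; check the repository's file list for the actual name (and casing) of the Q8_0 GGUF file.

```python
from llama_cpp import Llama

# Download the quantized weights from the Hub and load them.
# NOTE: the filename glob is an assumption; verify against the repo's files.
llm = Llama.from_pretrained(
    repo_id="v000000/YamWizard28-7B-Q8_0-GGUF",
    filename="*q8_0.gguf",  # pattern matched against files in the repo
    n_ctx=4096,
)

out = llm(alpaca_prompt("Summarize what a SLERP model merge does."), max_tokens=256)
print(out["choices"][0]["text"])
```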
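For reference, the SLERP method named under Merge Details interpolates between two models along the arc connecting their weight tensors, rather than along a straight line. A minimal NumPy sketch of the idea, assuming tensors are treated as flat vectors (mergekit's actual implementation is more careful about normalization and degenerate cases):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the tensors, viewed as vectors.
    cos_theta = np.dot(a_flat, b_flat) / (
        np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps
    )
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:
        # Nearly parallel: spherical and linear interpolation coincide.
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

The `t` lists in the YAML configuration vary this interpolation factor across depth, with separate schedules for self-attention and MLP tensors and a default of 0.5 for everything else.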
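To reproduce the merge itself, the YAML above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory` (both paths here are placeholders).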