---
base_model:
- ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b
- ChaoticNeutrals/Eris_PrimeV3-Vision-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

Model outputs are solid in quality and stay relevant to the given character cards.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/gvC9qywRvyYk1KzDUw1zZ.png)

Quants from the boi Lewdiculous: https://huggingface.co/Lewdiculous/Eris_PrimeV3.05-Vision-7B-GGUF-IQ-Imatrix

### Models Merged

The following models were included in the merge:

* [ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b](https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b)
* [ChaoticNeutrals/Eris_PrimeV3-Vision-7B](https://huggingface.co/ChaoticNeutrals/Eris_PrimeV3-Vision-7B)

### Configuration

The following YAML configuration was used to produce this model: a SLERP merge across all 32 layers, with ChaoticNeutrals/Eris_PrimeV3-Vision-7B as the base and a uniform interpolation factor of 0.5 for both the self-attention and MLP weights.

```yaml
slices:
- sources:
  - model: ChaoticNeutrals/Eris_PrimeV3-Vision-7B
    layer_range: [0, 32]
  - model: ChaoticNeutrals/Prima-LelantaclesV7-experimental-7b
    layer_range: [0, 32]
merge_method: slerp
base_model: ChaoticNeutrals/Eris_PrimeV3-Vision-7B
parameters:
  t:
  - filter: self_attn
    value: [0.5, 0.5, 0.5, 0.5, 0.5]
  - filter: mlp
    value: [0.5, 0.5, 0.5, 0.5, 0.5]
  - value: 0.5
dtype: bfloat16
```

To reproduce the merge itself, the YAML above can be passed to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./output-model`).
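For reference, a minimal sketch of loading the merged model with `transformers`. The repo id below is an assumption inferred from the GGUF quant link above and may differ from the actual Hugging Face repo:

```python
# Minimal sketch: load the merged model with transformers.
# NOTE: the repo id is an assumption inferred from the quant link;
# substitute the actual Hugging Face repo id if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/Eris_PrimeV3.05-Vision-7B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config above
    device_map="auto",
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```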