Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request)

Chimera-Apex-7B - bnb 4bits

- Model creator: https://huggingface.co/bunnycore/
- Original model: https://huggingface.co/bunnycore/Chimera-Apex-7B/

Original model description:

---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---

# Chimera-Apex-7B

Chimera-Apex-7B is an experimental large language model (LLM) created by merging several high-performance models, with the goal of combining their strengths into a single general-purpose model.

GGUF: https://huggingface.co/mradermacher/Chimera-Apex-7B-GGUF

### Tasks:

Because it combines several specialized models, Chimera-Apex-7B is intended as a general-purpose model capable of handling a wide range of tasks, including:

- Conversation
- Question answering
- Code generation
- (Possibly) NSFW content generation

### Limitations:

- As an experimental model, Chimera-Apex-7B's outputs may not always be accurate or reliable.
- The merged models may carry over biases present in their training data.
- Keep these limitations in mind when interpreting its outputs.

## 🧩 Configuration

```yaml
models:
  - model: Azazelle/Half-NSFW_Noromaid-7b
  - model: Endevor/InfinityRP-v1-7B
  - model: FuseAI/FuseChat-7B-VaRM
merge_method: model_stock
base_model: cognitivecomputations/dolphin-2.0-mistral-7b
dtype: bfloat16
```

Chimera-Apex-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

- [Azazelle/Half-NSFW_Noromaid-7b](https://huggingface.co/Azazelle/Half-NSFW_Noromaid-7b)
- [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
- [FuseAI/FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM)
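For reference, a minimal sketch of re-running this `model_stock` merge with mergekit's Python API, following the usage pattern documented in mergekit's README. It assumes the YAML configuration above has been saved as `./config.yaml`; the output path is arbitrary, and the exact API may differ between mergekit versions (the CLI equivalent is `mergekit-yaml config.yaml ./output-dir`).

```python
# Minimal sketch: reproducing the model_stock merge above with mergekit's Python API.
# Assumes the YAML config from this card is saved as ./config.yaml; the output path
# is arbitrary. API details follow mergekit's documented usage and may vary by version.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("./config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Chimera-Apex-7B",        # where the merged weights are written
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer alongside
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```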
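Since this repository hosts a bitsandbytes (bnb) 4-bit quantization, here is a minimal sketch of loading the model in 4-bit with `transformers`. The repo id below points at the original `bunnycore/Chimera-Apex-7B` weights and quantizes them on the fly; substitute this quantized repository's id to load the pre-quantized checkpoint directly. The prompt is only an illustration.

```python
# Minimal sketch: loading Chimera-Apex-7B in 4-bit with bitsandbytes via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bunnycore/Chimera-Apex-7B"  # original model; quantized on the fly here

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit (bnb) quantization, as in this repo
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Explain what a model-stock merge is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```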