Quantization made by Richard Erkhov.
# Cognitron-8B - GGUF
- Model creator: https://huggingface.co/bunnycore/
- Original model: https://huggingface.co/bunnycore/Cognitron-8B/
| Name | Quant method | Size |
| --- | --- | --- |
| Cognitron-8B.Q2_K.gguf | Q2_K | 2.96GB |
| Cognitron-8B.IQ3_XS.gguf | IQ3_XS | 3.28GB |
| Cognitron-8B.IQ3_S.gguf | IQ3_S | 3.43GB |
| Cognitron-8B.Q3_K_S.gguf | Q3_K_S | 3.41GB |
| Cognitron-8B.IQ3_M.gguf | IQ3_M | 3.52GB |
| Cognitron-8B.Q3_K.gguf | Q3_K | 3.74GB |
| Cognitron-8B.Q3_K_M.gguf | Q3_K_M | 3.74GB |
| Cognitron-8B.Q3_K_L.gguf | Q3_K_L | 4.03GB |
| Cognitron-8B.IQ4_XS.gguf | IQ4_XS | 4.18GB |
| Cognitron-8B.Q4_0.gguf | Q4_0 | 4.34GB |
| Cognitron-8B.IQ4_NL.gguf | IQ4_NL | 4.38GB |
| Cognitron-8B.Q4_K_S.gguf | Q4_K_S | 4.37GB |
| Cognitron-8B.Q4_K.gguf | Q4_K | 4.58GB |
| Cognitron-8B.Q4_K_M.gguf | Q4_K_M | 4.58GB |
| Cognitron-8B.Q4_1.gguf | Q4_1 | 4.78GB |
| Cognitron-8B.Q5_0.gguf | Q5_0 | 5.21GB |
| Cognitron-8B.Q5_K_S.gguf | Q5_K_S | 5.21GB |
| Cognitron-8B.Q5_K.gguf | Q5_K | 5.34GB |
| Cognitron-8B.Q5_K_M.gguf | Q5_K_M | 5.34GB |
| Cognitron-8B.Q5_1.gguf | Q5_1 | 5.65GB |
| Cognitron-8B.Q6_K.gguf | Q6_K | 6.14GB |
| Cognitron-8B.Q8_0.gguf | Q8_0 | 7.95GB |
Original model description:

```yaml
license: llama3
tags:
- merge
- mergekit
- lazymergekit
```
# Cognitron-8B
Cognitron-8B is an experimental large language model (LLM) created by combining three pre-existing models: Llama-3-8B-Lexi-Uncensored, Einstein-v6.1-Llama3-8B, and dolphin-2.9-llama3-8b. This combination aims to achieve a unique blend of capabilities:
- Uncensored Knowledge: By incorporating Llama-3-8B-Lexi-Uncensored, Cognitron-8B has access to a wider range of information without filtering.
- Enhanced Intelligence: The inclusion of Einstein-v6.1-Llama3-8B is intended to boost Cognitron-8B's reasoning and problem-solving abilities.
- Creative Fluency: The dolphin-2.9-llama3-8b component is designed to contribute creativity and unconventional thinking to Cognitron-8B's responses.
It is important to note that combining these models is an experiment, and the resulting performance is unknown.
GGUF: https://huggingface.co/mradermacher/Cognitron-8B-GGUF
Cognitron-8B is a merge of the three models above, produced with mergekit using the configuration shown below.
## Potential Biases and Limitations
Uncensored Content: Due to the inclusion of uncensored models, Cognitron-8B may generate outputs containing biases, hate speech, or offensive language.
## Importance of Uncensored Models
The inclusion of an uncensored model in Cognitron-8B reflects a growing interest in exploring the potential benefits of unfiltered information for LLMs. Here's why uncensored models are important:
- Comprehensiveness: Unrestricted access to information allows LLMs to capture a more complete picture of the world, even if it includes controversial or sensitive topics.
- Real-World Applicability: In situations where internet access is limited, uncensored LLMs could serve as a valuable source of unfiltered knowledge, allowing users to make informed decisions based on the available data.
## 🧩 Configuration

```yaml
models:
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: Weyaxi/Einstein-v6.1-Llama3-8B
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
merge_method: model_stock
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
dtype: bfloat16
```
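For intuition about what a weight-space merge does, the sketch below uniformly averages each named parameter across several toy "checkpoints" (plain dicts of lists standing in for tensors). This is a deliberate simplification: mergekit's actual `model_stock` method interpolates between the fine-tuned models and the base model using the geometry of their weight differences, and it operates on full model checkpoints, not dicts.

```python
# Illustrative only: a plain uniform average over toy checkpoints.
# The real model_stock merge in mergekit is more sophisticated than this.

def average_merge(checkpoints: list[dict[str, list[float]]]) -> dict[str, list[float]]:
    """Average each named parameter element-wise across all checkpoints."""
    n = len(checkpoints)
    merged = {}
    for name in checkpoints[0]:
        columns = zip(*(ckpt[name] for ckpt in checkpoints))
        merged[name] = [sum(vals) / n for vals in columns]
    return merged

# Toy stand-ins for the three source models' weights.
lexi     = {"w": [1.0, 2.0]}
einstein = {"w": [3.0, 4.0]}
dolphin  = {"w": [5.0, 6.0]}
print(average_merge([lexi, einstein, dolphin]))  # → {'w': [3.0, 4.0]}
```

With mergekit installed, a configuration like the one above is typically run with its `mergekit-yaml` command, e.g. `mergekit-yaml config.yml ./merged-model`.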