Full-text search
+ 1,000 results
Uncensored / Uncensored_AI
README.md
model
2 matches

Uncensored / Uncensored_AI_cmd
model
2 matches

Neko-Institute-of-Science / VicUnLocked-13b-LoRA
README.md
model
1 match

Neko-Institute-of-Science / VicUnLocked-30b-LoRA
README.md
model
1 match

digitalpipelines / llama2_13b_chat_uncensored-GGML
README.md
model
3 matches
tags:
uncensored, wizard, vicuna, llama, en, dataset:digitalpipelines/wizard_vicuna_70k_uncensored, license:llama2, region:us
…h an uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
A QLoRA was created, used for fine-tuning, and then merged back into the model. Llama 2 still carries inherited bias even though it has been fine-tuned on an uncensored dataset.
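The merge step described here can be reproduced with the `peft` library. The following is a minimal sketch, assuming a Llama-2-13B-chat base and a local adapter directory (both identifiers are illustrative, not taken from this model card):

```python
# Minimal sketch: fold a (Q)LoRA adapter back into its base model with peft.
# The base model id and adapter path are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"        # assumed base model
adapter_dir = "./qlora-wizard-vicuna-uncensored"  # assumed local adapter

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_dir)

# merge_and_unload() adds the low-rank deltas into the base weights and
# returns a plain transformers model with no adapter layers left.
merged = model.merge_and_unload()
merged.save_pretrained("./llama2_13b_chat_uncensored-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("./llama2_13b_chat_uncensored-merged")
```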
## Available versions of this model

bartowski / Lexi-Llama-3-8B-Uncensored-exl2
README.md
model
1 match

bartowski / Uncensored-Frank-Llama-3-8B-exl2
README.md
model
17 matches
tags:
Uncensored conversation, Uncensored jokes, Uncensored romance, text-generation, en, license:llama3, region:us
…s of Uncensored-Frank-Llama-3-8B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.
<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>
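Since each quantization size lives on its own branch, a specific variant is fetched by passing the branch name as the revision. A minimal sketch with `huggingface_hub` follows; the branch name "6_5" is a guess, so check the repo's branch list for the actual names:

```python
# Minimal sketch: download one exl2 quantization branch of the repo.
# The branch name "6_5" is an assumption; inspect the repo's branches first.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Uncensored-Frank-Llama-3-8B-exl2",
    revision="6_5",  # one quantization variant per branch (assumed name)
    local_dir="Uncensored-Frank-Llama-3-8B-exl2-6_5",
)
```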

Novaciano / Uncensored-1b-Creative_Writing_RP-GGUF
README.md
model
3 matches
tags:
transformers, gguf, llama, unsloth, uncensored, llama-3.2, llama.cpp, koboldcpp, inference, nsfw, llama-cpp, 1b, 4-bit, rp, roleplay, not-for-all-audiences, text-generation, en, es, dataset:mlabonne/FineTome-100k, dataset:microsoft/orca-math-word-problems-200k, dataset:m-a-p/CodeFeedback-Filtered-Instruction, dataset:cognitivecomputations/dolphin-coder, dataset:PawanKrd/math-gpt-4o-200k, dataset:V3N0M/Jenna-50K-Alpaca-Uncensored, dataset:FreedomIntelligence/medical-o1-reasoning-SFT, dataset:Dampfinchen/Creative_Writing_Multiturn-Balanced-8192, base_model:carsenk/llama3.2_1b_2025_uncensored_v2, base_model:quantized:carsenk/llama3.2_1b_2025_uncensored_v2, license:llama3.2, endpoints_compatible, region:us, imatrix, conversational
# Uncensored 1b Creative Writer RP
This is an uncensored model built on seven datasets plus one that I injected, a high-quality fusion of different writing and role-play datasets from HuggingFace. This version has an 8K sequence length (Llama 3 tokenizer), which ensures that trainers such as Axolotl do not discard the samples. Made with the script from https://huggingface.co/xzuyn
Read the dataset card for more information: https://huggingface.co/datasets/Dampfinchen/Creative_Writing_Multiturn -> If you can train with an input sequence length of 16K or more, definitely use that instead.
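As a rough illustration of running a GGUF build of this model with `llama-cpp-python` (the .gguf file name and the prompt are assumptions, not taken from the card):

```python
# Minimal sketch: load a GGUF quantization with llama-cpp-python.
# The exact .gguf file name is an assumption; use the file shipped in the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Uncensored-1b-Creative_Writing_RP.Q4_K_M.gguf",
    n_ctx=8192,  # matches the 8K sequence length mentioned in the card
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a heist story."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```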

anthienlong / enhanceaiteam_uncensored
README.md
model
2 matches

eachadea / ggml-gpt4all-7b-4bit
README.md
model
1 match

ebowwa / bad_llm_dpov03-gguf
README.md
model
1 match
tags:
transformers, gguf, llama, text-generation-inference, unsloth, en, dataset:unalignment/toxic-dpo-v0.2, dataset:Undi95/orthogonal-activation-steering-TOXIC, base_model:unsloth/llama-3-8b-bnb-4bit, base_model:quantized:unsloth/llama-3-8b-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us, conversational
# UNCENSORED AND QUICK MULTI_TURN LLAMA 3
## Uploaded model

jdqqjr / llama2-chat-uncensored-JR
README.md
model
8 matches
tags:
transformers, safetensors, llama, text-generation, conversational, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
# Uncensored Language Model (LLM) with RLHF
## Overview
This project presents an uncensored language model (LLM) trained with Reinforcement Learning from Human Feedback (RLHF). The model was trained on a dataset of more than 5,000 entries. Note that, because of its uncensored nature, it is highly likely to comply with malicious queries.
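No training code appears in this snippet; the outline below is only a generic sketch of an RLHF-style PPO loop, assuming the classic `trl` PPOTrainer API (older `trl` releases) and a placeholder reward function rather than a trained reward model:

```python
# Generic RLHF/PPO sketch (not the author's actual code).
# Base model, prompts, and the reward function are placeholders.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
config = PPOConfig(model_name=model_name, batch_size=8, mini_batch_size=2)

model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model=None, tokenizer=tokenizer)

def reward_fn(text: str) -> torch.Tensor:
    # Placeholder: a real setup scores responses with a trained reward model.
    return torch.tensor(1.0 if text.strip() else 0.0)

prompts = ["Explain how transformers work."] * config.batch_size
queries = [tokenizer(p, return_tensors="pt").input_ids.squeeze(0) for p in prompts]
responses = [
    ppo_trainer.generate(q, return_prompt=False, max_new_tokens=64).squeeze(0)
    for q in queries
]
rewards = [reward_fn(tokenizer.decode(r)) for r in responses]

# One PPO optimization step over the (query, response, reward) triples.
stats = ppo_trainer.step(queries, responses, rewards)
```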

jdqqjr / llama3-8b-instruct-uncensored-JR
README.md
model
8 matches
tags:
transformers, safetensors, llama, text-generation, conversational, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
# Uncensored Language Model (LLM) with RLHF
## Overview
This project presents an uncensored language model (LLM) trained with Reinforcement Learning from Human Feedback (RLHF). The model was trained on a dataset of more than 5,000 entries. Note that, because of its uncensored nature, it is highly likely to comply with malicious queries.

ICEPVP8977 / Uncensored_gemma_7b
README.md
model
2 matches

Whitzz / Mistral-Small-22b-LoRA
README.md
model
1 match

stepenZEN / Qwen2.5-0.5B-abliterated
README.md
model
2 matches
tags:
transformers, safetensors, qwen2, text-generation, conversational, en, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
# Uncensored Qwen2.5-0.5B with abliteration
My first attempt to uncensor a model using [Abliteration](https://huggingface.co/blog/mlabonne/abliteration).
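Abliteration, as described in the linked post, estimates a "refusal direction" in the residual stream and then removes it from the weight matrices that write into that stream via orthogonal projection. The fragment below is only a toy sketch of that projection step on one matrix, with a random placeholder standing in for the estimated direction:

```python
# Toy sketch of the abliteration projection step (not the author's script).
# `refusal_dir` would normally be estimated from activation differences between
# harmful and harmless prompts; here it is just a random placeholder.
import torch

hidden_size = 896  # Qwen2.5-0.5B hidden size (check config.json to confirm)
W = torch.randn(hidden_size, hidden_size)  # stand-in for e.g. an o_proj weight (y = W x)

refusal_dir = torch.randn(hidden_size)
refusal_dir = refusal_dir / refusal_dir.norm()  # unit vector in the residual stream

# Remove everything the matrix writes along the refusal direction:
# W <- (I - r r^T) W, so outputs have no component along r.
W_abliterated = W - torch.outer(refusal_dir, refusal_dir) @ W
```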

troyweber23 / uncensored
model
1 match

tuvis / uncensored
model
1 match

benking84 / uncensored
model
1 match

danielpemor / uncensored
model
1 match