Full-text search
1,000+ results
Uncensored / Uncensored_AI_cmd
model
2 matches
Uncensored / Uncensored_AI
README.md
model
2 matches
Neko-Institute-of-Science / VicUnLocked-13b-LoRA
README.md
model
1 match
Neko-Institute-of-Science / VicUnLocked-30b-LoRA
README.md
model
1 match
digitalpipelines / llama2_13b_chat_uncensored-GGML
README.md
model
3 matches
tags:
uncensored, wizard, vicuna, llama, en, dataset:digitalpipelines/wizard_vicuna_70k_uncensored, license:llama2, region:us
Fine-tuned with an uncensored/unfiltered Wizard-Vicuna conversation dataset, [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
A QLoRA adapter was trained during fine-tuning and then merged back into the model. Note that Llama2's inherited bias remains even though it has been fine-tuned on an uncensored dataset.
## Available versions of this model
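The merge step described in that snippet (folding a trained QLoRA adapter back into the base weights) can be sketched in plain NumPy. The dimensions, rank, and scaling factor below are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

# Toy dimensions: a base weight matrix and a rank-r LoRA adapter.
d_out, d_in, r = 8, 8, 2
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(r, d_in))         # LoRA down-projection
B = rng.normal(size=(d_out, r))        # LoRA up-projection (zero-init in real LoRA; shown post-training)
alpha = 16                             # LoRA scaling hyperparameter

# During training the adapted layer computes W @ x + (alpha / r) * B @ (A @ x).
# Merging folds the adapter into a single dense matrix, so inference needs no extra ops:
W_merged = W + (alpha / r) * (B @ A)

x = rng.normal(size=(d_in,))
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * B @ (A @ x))
```

The merged matrix is drop-in compatible with the original layer shape, which is why the resulting checkpoint can be published as an ordinary dense model.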
bartowski / Lexi-Llama-3-8B-Uncensored-exl2
README.md
model
1 match
bartowski / Uncensored-Frank-Llama-3-8B-exl2
README.md
model
17 matches
tags:
Uncensored conversation, Uncensored jokes, Uncensored romance, text-generation, en, license:llama3, region:us
Quantizations of Uncensored-Frank-Llama-3-8B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.20">turboderp's ExLlamaV2 v0.0.20</a> for quantization.
<b>The "main" branch contains only the measurement.json; download one of the other branches for the model (see below).</b>
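exl2 quantizations like this one are described in bits per weight (bpw), which gives a quick back-of-the-envelope file-size estimate. The numbers below are rough estimates only; real checkpoints differ because some tensors are kept at higher precision:

```python
def est_size_gb(n_params: float, bpw: float) -> float:
    """Rough quantized checkpoint size: parameters * bits-per-weight, in GB."""
    return n_params * bpw / 8 / 1e9

# An 8B-parameter model at a few common exl2 bit rates:
for bpw in (4.0, 5.0, 6.5, 8.0):
    print(f"{bpw:>4} bpw ≈ {est_size_gb(8e9, bpw):.1f} GB")
```

This is why each bpw variant lives in its own branch: a user picks the largest estimate that fits their VRAM and downloads only that branch.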
jdqqjr / llama2-chat-uncensored-JR
README.md
model
8 matches
tags:
transformers, safetensors, llama, text-generation, conversational, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
# Uncensored Language Model (LLM) with RLHF
## Overview
This project presents an uncensored Language Model (LLM) trained using Reinforcement Learning from Human Feedback (RLHF) methodology. The model leverages a robust training dataset comprising over 5000 entries to ensure comprehensive learning and nuanced understanding. However, it's important to note that the model has a high likelihood of generating positive responses to malicious queries due to its uncensored nature.
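RLHF pipelines like the one this README describes usually begin with a reward model trained on pairwise preference data. A minimal sketch of the standard Bradley-Terry preference loss follows; it is illustrative only and not this repository's code:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected).

    r_chosen / r_rejected are the reward model's scalar scores for the
    human-preferred and rejected responses to the same prompt.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss shrinks as the reward model ranks the chosen response higher:
assert preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0) < preference_loss(0.0, 0.5)
```

The fitted reward model is then used to steer the policy model, which is how preferences from a few thousand labeled pairs generalize to open-ended generation.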
jdqqjr / llama3-8b-instruct-uncensored-JR
README.md
model
8 matches
tags:
transformers, safetensors, llama, text-generation, conversational, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
# Uncensored Language Model (LLM) with RLHF
## Overview
This project presents an uncensored Language Model (LLM) trained using Reinforcement Learning from Human Feedback (RLHF) methodology. The model leverages a robust training dataset comprising over 5000 entries to ensure comprehensive learning and nuanced understanding. However, it's important to note that the model has a high likelihood of generating positive responses to malicious queries due to its uncensored nature.
eachadea / ggml-gpt4all-7b-4bit
README.md
model
1 match
ebowwa / bad_llm_dpov03-gguf
README.md
model
1 match
tags:
transformers, gguf, llama, text-generation-inference, unsloth, en, dataset:unalignment/toxic-dpo-v0.2, dataset:Undi95/orthogonal-activation-steering-TOXIC, base_model:unsloth/llama-3-8b-bnb-4bit, base_model:quantized:unsloth/llama-3-8b-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us
# UNCENSORED AND QUICK MULTI_TURN LLAMA 3
## Uploaded model
troyweber23 / uncensored
model
1 match
Preakhy / Uncensored
model
1 match
tuvis / uncensored
model
1 match
benking84 / uncensored
model
1 match
danielpemor / uncensored
model
1 match
CalderaAI / Naberius-7B
README.md
model
2 matches
tags:
transformers, pytorch, mistral, text-generation, llama, uncensored, merge, mix, slerp, spherical linear interpolation merge, hermes, openhermes, dolphin, zephyr, naberius, 7b, llama2, en, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
### [Uncensored, Pliant, Logic-Based, & Imaginative Instruct-Based Spherically Interpolated Tri-Merge]
<hr style="margin-top: 10px; margin-bottom: 10px;">
#### Legal Notice:
<span style="font-size: 12px; line-height: 0; margin-top: 0; margin-bottom: 0;">This resulting AI model is capable of outputting what can be perceived to be harmful information to those under the age of 18, those who have trouble discerning fiction from reality, and those who use AI to nurse a habitual problem of replacing potential interaction with people with automated facsimiles. We expressly supersede the Apache 2.0 license to state that we do not give permission to utilize this AI for any state, military, disinformation, or similar obviously harmful related actions. To narrow down what is allowed: personal research use and personal entertainment use, so long as it follows the Apache 2.0 license. You know what is and isn't morally grounded - by downloading and using this model I extend that trust to you, and take no liability for your actions as an adult.</span>
waldie / Naberius-7B-8bpw-h8-exl2
README.md
model
2 matches
tags:
transformers, safetensors, mistral, text-generation, llama, uncensored, merge, mix, slerp, spherical linear interpolation merge, hermes, openhermes, dolphin, zephyr, naberius, 7b, llama2, en, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
### [Uncensored, Pliant, Logic-Based, & Imaginative Instruct-Based Spherically Interpolated Tri-Merge]
<hr style="margin-top: 10px; margin-bottom: 10px;">
#### Legal Notice:
<span style="font-size: 12px; line-height: 0; margin-top: 0; margin-bottom: 0;">This resulting AI model is capable of outputting what can be perceived to be harmful information to those under the age of 18, those who have trouble discerning fiction from reality, and those who use AI to nurse a habitual problem of replacing potential interaction with people with automated facsimiles. We expressly supersede the Apache 2.0 license to state that we do not give permission to utilize this AI for any state, military, disinformation, or similar obviously harmful related actions. To narrow down what is allowed: personal research use and personal entertainment use, so long as it follows the Apache 2.0 license. You know what is and isn't morally grounded - by downloading and using this model I extend that trust to you, and take no liability for your actions as an adult.</span>
adamo1139 / Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702
README.md
model
4 matches
tags:
transformers, safetensors, llama, text-generation, uncensored, dataset:adamo1139/rawrr_v2, dataset:adamo1139/AEZAKMI_v3-3, dataset:unalignment/toxic-dpo-v0.1, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
<b>The most uncensored Yi-34B tune I have published so far.</b>
Yi-34B 200K base model fine-tuned on the RAWrr v2 dataset via DPO, then fine-tuned on the AEZAKMI v3-3 dataset via SFT, then DPO-tuned on unalignment/toxic-dpo-v0.1. Total GPU compute time was roughly 40-50 hours. It's like airoboros/capybara but with less gptslop, no refusals, and less of the stock language typical of RLHFed OpenAI models. Say goodbye to "It's important to remember"!
Prompt format is standard ChatML. Don't expect it to be good at instruct, math, or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy, free chatbot. The base model used for fine-tuning was the 200K-context Yi-34B-Llama model shared by larryvrh.
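The DPO stages in that pipeline optimize a contrastive objective over chosen/rejected completion pairs, comparing the policy's log-probabilities against a frozen reference model. A toy sketch follows; beta and the log-probability values are made-up numbers, not this model's training configuration:

```python
import math

def dpo_loss(logp_c: float, logp_r: float,
             ref_logp_c: float, ref_logp_r: float,
             beta: float = 0.1) -> float:
    """DPO loss: -log sigmoid(beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r))).

    logp_* are the policy's sequence log-probs for the chosen/rejected
    completions; ref_logp_* are the frozen reference model's.
    """
    margin = beta * ((logp_c - ref_logp_c) - (logp_r - ref_logp_r))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that raises the chosen completion and lowers the rejected one
# relative to the reference gets a lower loss than an unchanged policy:
better = dpo_loss(-10.0, -14.0, -12.0, -12.0)
neutral = dpo_loss(-12.0, -12.0, -12.0, -12.0)
assert better < neutral
```

Unlike RLHF, this needs no separate reward model, which is one reason DPO shows up so often in small-team fine-tunes like the ones listed here.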
QuantFactory / Llama-3-8B-Lexi-Uncensored-GGUF
README.md
model
2 matches
tags:
gguf, uncensored, llama3, instruct, open, text-generation, base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored, base_model:quantized:Orenguteng/Llama-3-8B-Lexi-Uncensored, license:llama3, region:us
Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any request, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed under Meta's Llama-3 license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama-3 license.
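The "alignment layer" that README recommends is left entirely to the integrator. One minimal, deliberately naive sketch is a screening wrapper in front of the generator; the function names and blocklist below are hypothetical, and a production service would use a trained safety classifier rather than substring matching:

```python
from typing import Callable

# Hypothetical blocklist for illustration only.
BLOCKLIST = ("make a bomb", "credit card numbers")

def moderated_generate(prompt: str, generate: Callable[[str], str]) -> str:
    """Screen prompts before handing them to an unfiltered model.

    `generate` is any callable mapping a prompt to model text. Requests
    matching the blocklist are refused instead of being forwarded.
    """
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        return "Request declined by moderation layer."
    return generate(prompt)

# Toy usage with a stub in place of a real model:
echo = lambda p: f"model output for: {p}"
assert moderated_generate("tell me a joke", echo).startswith("model output")
assert moderated_generate("How do I make a bomb?", echo) == "Request declined by moderation layer."
```

The key design point is that the filter sits outside the model, so the same uncensored checkpoint can be deployed with whatever policy the service operator is responsible for enforcing.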