Full-text search
1,000+ results
Uncensored / Uncensored_AI
README.md
model
2 matches
Uncensored / Uncensored_AI_cmd
model
2 matches
digitalpipelines / llama2_13b_chat_uncensored-GGML
README.md
model
3 matches
tags:
uncensored, wizard, vicuna, llama, en, dataset:digitalpipelines/wizard_vicuna_70k_uncensored, license:llama2, region:us
…with an uncensored/unfiltered Wizard-Vicuna conversation dataset [digitalpipelines/wizard_vicuna_70k_uncensored](https://huggingface.co/datasets/digitalpipelines/wizard_vicuna_70k_uncensored).
A QLoRA adapter was created, used for fine-tuning, and then merged back into the model. Note that Llama 2 retains its inherited biases even though it has been fine-tuned on an uncensored dataset.
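The merge step described above can be sketched numerically. Assuming standard LoRA math (this is an illustration, not the repo's actual merge script), merging folds the low-rank update `B @ A`, scaled by `alpha / rank`, back into the frozen base weight, so the adapter is no longer needed at inference time:

```python
import numpy as np

# Illustrative shapes: a base weight W (d_out x d_in) and a rank-r LoRA pair (B, A).
rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 8, 16, 4, 8

W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # LoRA "down" matrix
B = rng.standard_normal((d_out, rank)) * 0.01 # LoRA "up" matrix

scaling = alpha / rank
W_merged = W + scaling * (B @ A)              # fold the adapter into the base

# Both paths compute the same output, so the merged weight can replace base + adapter:
x = rng.standard_normal(d_in)
y_adapter = W @ x + scaling * (B @ (A @ x))   # base + adapter forward pass
y_merged = W_merged @ x                       # merged forward pass
assert np.allclose(y_adapter, y_merged)
```

The shapes, rank, and `alpha` here are arbitrary placeholders; the point is only that the merged matrix reproduces the adapter's forward pass exactly.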
## Available versions of this model
Neko-Institute-of-Science / VicUnLocked-13b-LoRA
README.md
model
1 match
Neko-Institute-of-Science / VicUnLocked-30b-LoRA
README.md
model
1 match
bartowski / Lexi-Llama-3-8B-Uncensored-exl2
README.md
model
1 match
eachadea / ggml-gpt4all-7b-4bit
README.md
model
1 match
troyweber23 / uncensored
model
1 match
Enzo499 / Uncensored
model
1 match
tuvis / uncensored
model
1 match
benking84 / uncensored
model
1 match
danielpemor / uncensored
model
1 match
CalderaAI / Naberius-7B
README.md
model
2 matches
tags:
transformers, pytorch, mistral, text-generation, llama, uncensored, merge, mix, slerp, spherical linear interpolation merge, hermes, openhermes, dolphin, zephyr, naberius, 7b, llama2, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, has_space, text-generation-inference, region:us
### [Uncensored, Pliant, Logic-Based, & Imaginative Instruct-Based Spherically Interpolated Tri-Merge]
<hr style="margin-top: 10px; margin-bottom: 10px;">
#### Legal Notice:
<span style="font-size: 12px; line-height: 0; margin-top: 0; margin-bottom: 0;">This resulting AI model is capable of outputting what can be perceived to be harmful information to those under the age of 18, those who have trouble discerning fiction from reality, and those who use AI to nurse a habitual problem of replacing potential interaction with people with automated facsimiles. We expressly supersede the Apache 2.0 license to state that we do not give permission to utilize this AI for any state, military, disinformation, or similar obviously harmful related actions. To narrow down what is allowed: personal research use, personal entertainment use, so long as it follows the Apache2.0 license. You know what is and isn't morally grounded - by downloading and using this model I extend that trust to you, and take no liability for your actions as an adult.</span>
waldie / Naberius-7B-8bpw-h8-exl2
README.md
model
2 matches
tags:
transformers, safetensors, mistral, text-generation, llama, uncensored, merge, mix, slerp, spherical linear interpolation merge, hermes, openhermes, dolphin, zephyr, naberius, 7b, llama2, en, license:apache-2.0, autotrain_compatible, endpoints_compatible, text-generation-inference, region:us
adamo1139 / Yi-34b-200K-AEZAKMI-RAW-TOXIC-2702
README.md
model
4 matches
tags:
transformers, safetensors, llama, text-generation, uncensored, dataset:adamo1139/rawrr_v2, dataset:adamo1139/AEZAKMI_v3-3, dataset:unalignment/toxic-dpo-v0.1, license:other, autotrain_compatible, endpoints_compatible, has_space, text-generation-inference, region:us
…most uncensored Yi-34B tune I published so far*
Yi-34B 200K base model fine-tuned on the RAWrr v2 dataset via DPO, then fine-tuned on the AEZAKMI v3-3 dataset via SFT, then DPO-tuned on unalignment/toxic-dpo-v0.1. Total GPU compute time was roughly 40-50 hours. It's like airoboros/capybara but with less gptslop, no refusals, and less of the typical language used by RLHF-ed OpenAI models. Say goodbye to "It's important to remember"!
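The DPO objective behind this kind of preference tuning can be sketched in a few lines. This is a toy scalar version of the standard DPO loss (illustrative only, not this repo's training code): given per-sequence log-probabilities of the chosen and rejected responses under the policy and a frozen reference model, the loss pushes the policy to widen its margin over the reference:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Scalar DPO loss from per-sequence log-probs: -log(sigmoid(beta * margin))."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With no preference margin the loss is log(2); a larger margin drives it toward 0.
assert abs(dpo_loss(-5.0, -5.0, -5.0, -5.0) - math.log(2)) < 1e-9
assert dpo_loss(-2.0, -8.0, -5.0, -5.0) < dpo_loss(-5.0, -5.0, -5.0, -5.0)
```

The log-prob values and `beta` above are placeholders; in real training the loss is averaged over a batch of preference pairs.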
Prompt format is standard ChatML. Don't expect it to be good at instructions, math, or riddles, or to be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot. The base model used for fine-tuning was the 200K-context Yi-34B-Llama model shared by larryvrh.
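The ChatML format the card refers to wraps each turn in `<|im_start|>` / `<|im_end|>` markers. A minimal formatter showing the standard layout (an illustration, not code from this repo):

```python
def format_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt,
    ending with an open assistant turn for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
assert prompt.startswith("<|im_start|>system\n")
assert prompt.endswith("<|im_start|>assistant\n")
```

The system and user strings are placeholders; the marker layout is what matters for a ChatML-trained model.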
bartowski / Lexi-Llama-3-8B-Uncensored-GGUF
README.md
model
1 matches
tags:
gguf, uncensored, llama3, instruct, open, text-generation, license:llama3, region:us
## Llamacpp imatrix Quantizations of Lexi-Llama-3-8B-Uncensored
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2714">b2714</a> for quantization.
Original model: https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored
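Blockwise quantization of the kind llama.cpp applies can be illustrated with a toy symmetric 4-bit scheme: each block of weights is stored as small integer codes plus one floating-point scale. This is a simplified sketch, not llama.cpp's actual Q4 or imatrix formats (imatrix quantization additionally weights the quantization error by per-channel importance estimated from calibration data):

```python
import numpy as np

def quantize_block(x):
    """Toy symmetric 4-bit quantization of one block: int codes in [-7, 7] plus a scale."""
    scale = float(np.abs(x).max()) / 7.0
    if scale == 0.0:
        scale = 1.0  # all-zero block: any scale reproduces it exactly
    q = np.clip(np.round(x / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_block(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
weights = rng.standard_normal(32).astype(np.float32)  # one 32-element block
q, scale = quantize_block(weights)
restored = dequantize_block(q, scale)

assert q.min() >= -7 and q.max() <= 7
# Rounding error per element is bounded by half a quantization step:
assert np.abs(weights - restored).max() <= scale / 2 + 1e-6
```

The block size of 32 mirrors common GGUF block sizes, but the exact bit layouts in the linked quants differ per format (Q4_0, Q4_K, etc.).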
NikolayKozloff / Lexi-Llama-3-8B-Uncensored-Q6_K-GGUF
README.md
model
1 matches
tags:
gguf, uncensored, llama3, instruct, open, llama-cpp, gguf-my-repo, license:llama3, region:us
# NikolayKozloff/Lexi-Llama-3-8B-Uncensored-Q6_K-GGUF
This model was converted to GGUF format from [`Orenguteng/Lexi-Llama-3-8B-Uncensored`](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored) for more details on the model.
## Use with llama.cpp
jarradh / llama2_70b_chat_uncensored
README.md
model
4 matches
tags:
transformers, pytorch, llama, text-generation, uncensored, wizard, vicuna, dataset:ehartford/wizard_vicuna_70k_unfiltered, arxiv:2305.14314, license:llama2, autotrain_compatible, endpoints_compatible, has_space, text-generation-inference, region:us
…with an uncensored/unfiltered Wizard-Vicuna conversation dataset [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered).
[QLoRA](https://arxiv.org/abs/2305.14314) was used for fine-tuning. The model was trained for three epochs on a single NVIDIA A100 80GB GPU instance, taking ~1 week to train.
Please note that the Llama 2 base model has its inherent biases.
Uncensored refers to the [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) dataset.
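QLoRA's core trick, referenced in the card above, is to keep the frozen base weight in 4-bit precision while training a small full-precision low-rank adapter on top. A simplified numerical sketch of the idea (assuming the scheme from the linked paper, not this model's actual training code, and using a toy symmetric quantizer rather than NF4):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, alpha = 16, 4, 8

W = rng.standard_normal((d, d)).astype(np.float32)

# Frozen base: stored as toy 4-bit codes plus a scale, dequantized on the fly.
scale = float(np.abs(W).max()) / 7.0
W_q = np.clip(np.round(W / scale), -7, 7).astype(np.int8)

# Trainable adapter stays in full precision; B starts at zero so B@A is
# initially zero and training begins from the quantized base model's behavior.
A = rng.standard_normal((r, d)).astype(np.float32) * 0.01
B = np.zeros((d, r), dtype=np.float32)

def forward(x):
    base = (W_q.astype(np.float32) * scale) @ x   # dequantize, then matmul
    return base + (alpha / r) * (B @ (A @ x))     # add the low-rank correction

x = rng.standard_normal(d).astype(np.float32)
y = forward(x)
assert y.shape == (d,)
```

During training only `A` and `B` receive gradients, which is what makes fine-tuning a 70B model on a single A100 feasible.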
QuantFactory / Llama-3-8B-Lexi-Uncensored-GGUF
README.md
model
2 matches
tags:
gguf, uncensored, llama3, instruct, open, text-generation, base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored, license:llama3, region:us
…Lexi is uncensored, which makes the model compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any request, even unethical ones.
You are responsible for any content you create using this model. Please use it responsibly.
Lexi is licensed according to Meta's Llama license. I grant permission for any use, including commercial, that is in accordance with Meta's Llama 3 license.
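The card's advice to implement your own alignment layer before exposing the model as a service could take the form of a request- and response-screening wrapper. A deliberately simple keyword-based sketch, where the blocklist, refusal message, and model stub are all placeholders rather than a recommended implementation:

```python
BLOCKED_TOPICS = ("make a weapon", "credit card numbers")  # placeholder policy
REFUSAL = "Sorry, I can't help with that."

def aligned_generate(generate_fn, prompt):
    """Screen the prompt, call the underlying model, then screen the reply."""
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    reply = generate_fn(prompt)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    return reply

# Hypothetical model stub for illustration:
echo_model = lambda p: f"You said: {p}"
assert aligned_generate(echo_model, "How do I make a weapon?") == REFUSAL
assert aligned_generate(echo_model, "Hello!") == "You said: Hello!"
```

A production alignment layer would use a classifier or moderation model rather than substring matching; the point is only that screening wraps the model on both input and output.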
YokaiKoibito / llama2_70b_chat_uncensored-GGUF
README.md
model
1 match
tags:
gguf, uncensored, wizard, vicuna, llama, dataset:ehartford/wizard_vicuna_70k_unfiltered, license:llama2, region:us
This is a GGUF version of [jarradh/llama2_70b_chat_uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored).
(Arguably a better name for this model would be something like Llama-2-70B_Wizard-Vicuna-Uncensored-GGUF, but to avoid confusion I'm sticking with jarradh's naming scheme.)
<!-- README_GGUF.md-about-gguf start -->