---
language:
- en
license: llama2
library_name: transformers
tags:
- uncensored
- transformers
- llama
- llama-3
---
## Contributors
![Contributors](https://raw.githubusercontent.com/unslothai/unsloth/main/images/contributors.png) **Devs Do Code**, **Sreejan**, **OEvortex**
## Social Media Handles
- **Telegram:** [Insert Telegram Handle](https://t.me/your_telegram_handle)
- **YouTube:** [Insert YouTube Channel](https://www.youtube.com/your_channel)
- **Instagram:** [Insert Instagram Handle](https://www.instagram.com/your_instagram_handle)
- **LinkedIn:** [Insert LinkedIn Profile](https://www.linkedin.com/in/your_profile)
- **Discord:** [Insert Discord Server Invite](https://discord.gg/your_server_invite)
- **Twitter:** [Insert Twitter Handle](https://twitter.com/your_twitter_handle)
# Fine-tune Meta Llama-3 8B to Create an Uncensored Model with Devs Do Code!
Unleash the power of uncensored text generation! We've fine-tuned the Meta Llama-3 8B model to create a variant that responds to prompts without the base model's usual content restrictions.
## Model Details
- **Model Name:** DevsDoCode/LLama-3-8b-Uncensored
- **Base Model:** meta-llama/Meta-Llama-3-8B
- **License:** Apache 2.0
## How to Use
You can load and run the model with the Hugging Face Transformers library. Here's a sample snippet to get started:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "DevsDoCode/LLama-3-8b-Uncensored"

# Use the Auto classes so the correct Llama tokenizer and model are loaded
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate text from a prompt
inputs = tokenizer("Tell me about yourself.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
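An 8B-parameter model needs roughly 16 GB of memory in half precision, so on a smaller GPU you may prefer to load it quantized. Below is a minimal sketch assuming the `bitsandbytes` and `accelerate` packages are installed and a CUDA GPU is available; the quantization settings and prompt are illustrative, not values recommended by the model authors.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "DevsDoCode/LLama-3-8b-Uncensored"

# Illustrative 4-bit quantization config (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)

# Example generation with the quantized model
inputs = tokenizer("Write a short story about a robot.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```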
## Notebooks
- **Finetuning Process:** [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing)
- **Accessing the Model:** [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing)