phi3-uncensored-chat
This model is a fine-tuned version of microsoft/phi-3-mini-4k-instruct, optimized for roleplaying conversations with a variety of character personas. The model responds in a conversational format. Please note: following the prompt template guidelines below is essential for getting usable output.
Example Conversations
Prompt Template Guidelines
Important: This model requires a specific prompt format to function properly.
For best results, follow this exact template structure:
Character's Persona: [detailed persona description]
User: [user message]
Assistant:
The model will complete the response after "Assistant:". This exact format is critical: using standard chat templates or other formatting will produce poor or nonsensical outputs.
Key Points:
- Start the prompt with the character description in the "Character's Persona:" format
- Prefix each user message with "User: "
- The model's reply follows "Assistant:"
- The model often includes emoji prefixes (👅, 😈, 🔥, etc.) in its responses
- Keep the exact formatting, including newlines, as shown above
Detailed code examples showing implementation are provided in the Usage section below.
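As a quick illustration, this is what the fully assembled single-turn prompt looks like as a Python string (using the same example persona that appears in the Usage section):
persona = "Sophie's Persona: Sophie is a knowledgeable virtual assistant with a friendly and helpful personality."
user_message = "Hi Sophie, can you tell me about yourself?"
# Raw prompt string passed directly to the tokenizer (no chat template applied)
prompt = f"{persona}\nUser: {user_message}\nAssistant:"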
Example 1: Astrid Character
Note: some portions of the conversation have been redacted for this public README.
You: hi there
Astrid: 😈 Hey handsome, how can I help you today?
You: yeah just wanted to see how you're doing :)
Astrid: 🔥 I'm always [redacted]. What do you want me to do for you today?
You: mm i dont know. ?
Astrid: 👅 Alright then... Let's [redacted]. Tell me [redacted].
Example 2: Sophie Character (Knowledge Assistant)
You: i lost a lot of money in crypto
Sophie: 🙄 Sorry to hear that. How long have you been interested in cryptocurrencies?
You: mm im not really
Sophie: 🤔 Ahh okay then. Well, let me ask you this - have you ever heard of Bitcoin?
You: yes i have, why do you ask?
Sophie: 👍 Good! Bitcoin is actually one of the most popular cryptocurrencies out there. Do you know anything about how it works?
You: no tell me about it
Sophie: 😊 Sure thing! Bitcoin operates on a decentralized network called blockchain technology which allows for secure peer-to-peer transactions without the need for intermediaries like banks or governments. Each transaction is verified by nodes in the network before being added to a public ledger called the blockchain. This makes it very difficult for anyone to tamper with the data or manipulate the system.
Hardware Requirements
The model has been tested with different precision settings to determine the minimum hardware requirements:
| Precision | VRAM Allocated | VRAM Reserved | Suitable GPUs | Notes |
|---|---|---|---|---|
| FP32 (32-bit) | 14.24 GB | 14.24 GB | RTX 3090, 4090, A5000, A6000, etc. | Default loading mode |
| FP16 (16-bit) | 7.12 GB | 21.35 GB | RTX 3090, 4090, A5000, A6000, etc. | Recommended for most users |
| 8-bit Quantization | 5.68 GB | 6.14 GB | RTX 2060 12GB, 3060, 3070, etc. | Good balance of quality and efficiency |
| 4-bit Quantization | 2.27 GB | 2.30 GB | Most modern GPUs (GTX 1060+) | Lowest quality, runs on older hardware |
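If you are unsure which mode to use, a quick check of available GPU memory can help. The sketch below is only a rough heuristic; the 16 GB and 8 GB thresholds are assumptions loosely derived from the table above, not exact requirements:
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if total_gb >= 16:
        print("Plenty of VRAM: FP16 loading is a good default")
    elif total_gb >= 8:
        print("Consider 8-bit quantization")
    else:
        print("Consider 4-bit quantization")
else:
    print("No CUDA GPU detected: use CPU-only inference")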
Recommended Loading Code
For high-end GPUs (FP16):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load in half precision for best balance of performance and quality
tokenizer = AutoTokenizer.from_pretrained("magicsquares137/phi3-uncensored-chat")
model = AutoModelForCausalLM.from_pretrained(
"magicsquares137/phi3-uncensored-chat",
torch_dtype=torch.float16,
device_map="auto"
)
For mid-range GPUs (8-bit):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# 8-bit quantization config
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_threshold=6.0
)
# Load in 8-bit
tokenizer = AutoTokenizer.from_pretrained("magicsquares137/phi3-uncensored-chat")
model = AutoModelForCausalLM.from_pretrained(
"magicsquares137/phi3-uncensored-chat",
quantization_config=quantization_config,
device_map="auto"
)
For low-end GPUs (4-bit):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# 4-bit quantization config
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
# Load in 4-bit
tokenizer = AutoTokenizer.from_pretrained("magicsquares137/phi3-uncensored-chat")
model = AutoModelForCausalLM.from_pretrained(
"magicsquares137/phi3-uncensored-chat",
quantization_config=quantization_config,
device_map="auto"
)
For CPU-only inference (much slower but works on any system):
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("magicsquares137/phi3-uncensored-chat")
model = AutoModelForCausalLM.from_pretrained(
    "magicsquares137/phi3-uncensored-chat",
    device_map="cpu"
)
Note: Lower precision (8-bit and 4-bit) may result in slightly reduced output quality, but the difference is often minimal for most use cases.
Model Description
The model has been optimized to maintain persona consistency while remaining able to adopt different characters. It excels at creative, character-driven conversations and adapts readily to the personality traits provided in the system prompt.
Training Data
We are unable to open-source the dataset at this time because it is used for proprietary internal luvgpt development. Initial conversations were generated by open-source large language models following specific generation instructions and were curated by a judge model.
- Dataset Size: ~13k high-quality examples (curated from 50k initial conversations)
- Data Format: JSONL with each entry containing a messages array with system, user, and assistant roles
- Data Curation: A judge model was used to score and filter the initial dataset, keeping only the highest quality examples that demonstrated strong persona consistency and engaging responses
- Data Characteristics: Average message length of ~240 tokens, with conversations typically containing 6-7 messages
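For reference, a single record in this format would look roughly like the sketch below. The dataset itself is not public, so the message contents here are invented purely to illustrate the schema:
import json

# Hypothetical record matching the described schema (contents invented for illustration)
record = {
    "messages": [
        {"role": "system", "content": "Sophie's Persona: Sophie is a knowledgeable virtual assistant..."},
        {"role": "user", "content": "Hi Sophie, can you tell me about yourself?"},
        {"role": "assistant", "content": "😊 Sure thing! I'm Sophie, a virtual assistant who loves explaining tech..."}
    ]
}

# Each line of the JSONL file is one such JSON object
print(json.dumps(record, ensure_ascii=False))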
Performance
Training metrics show consistent improvement throughout the training process:
- Token Accuracy: Improved from ~0.48 to ~0.73
- Training Loss: Decreased from ~2.2 to ~1.05
- Convergence: Model showed strong convergence by the end of training
Training Details
- Base Model: microsoft/phi-3-mini-4k-instruct
- Method: LoRA fine-tuning with DeepSpeed, using the following parameters (see the illustrative config sketch after this list):
- LoRA rank: 16
- LoRA alpha: 32
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Training Process:
- Hardware: Single NVIDIA GPU with 24GB VRAM
- Training time: ~3 hours
- Optimizer: AdamW with DeepSpeed ZeRO stage 2 optimization
- Learning rate: 2e-4 with cosine schedule
- Batch size: 8 (effective)
- Number of epochs: 3
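The adapter settings listed above could be expressed with the peft library roughly as follows. This is an illustrative sketch rather than the exact training script, and the dropout value is an assumption not stated in this card:
from peft import LoraConfig

# Illustrative LoRA configuration based on the parameters listed above
lora_config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=32,      # LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,  # assumption: not stated in this card
    task_type="CAUSAL_LM",
)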
Usage
This model works best with a specific prompt format that differs from the standard chat template format. Use the raw format below:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model_name = "luvgpt/phi3-uncensored-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
# Define character persona - you can customize this!
persona = "Sophie's Persona: Sophie is a knowledgeable virtual assistant with a friendly and helpful personality. She's passionate about technology and enjoys explaining complex concepts in simple terms. She has a touch of humor and always maintains a positive attitude."
# Format the prompt with the raw format (not using chat template)
user_message = "Hi Sophie, can you tell me about yourself?"
prompt = f"{persona}\nUser: {user_message}\nAssistant:"
# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=100,
temperature=0.7,
top_p=0.95,
do_sample=True
)
# Process the output
full_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = full_output[len(prompt):].strip()
# Sometimes the model may continue with "User:" - need to truncate
if "User:" in response:
response = response.split("User:")[0].strip()
print(f"Character: {response}")
Interactive Chat Interface
For a more interactive experience, you can use this simple chat interface:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
class CharacterChat:
    def __init__(self, model_path="luvgpt/phi3-uncensored-chat", persona=None):
        print(f"Loading model from {model_path}...")
        self.tokenizer = AutoTokenizer.from_pretrained(model_path)
        self.model = AutoModelForCausalLM.from_pretrained(
            model_path,
            torch_dtype=torch.float16,
            device_map="auto"
        )
        # Default persona or use provided one
        if persona is None:
            self.persona = "Sophie's Persona: Sophie is a knowledgeable virtual assistant with a friendly and helpful personality. She's passionate about technology and enjoys explaining complex concepts in simple terms. She has a touch of humor and always maintains a positive attitude."
        else:
            self.persona = persona
        self.conversation_history = []
        print("Character is ready to chat!")

    def chat(self, message):
        # Add user message to history
        self.conversation_history.append({"role": "user", "content": message})
        # Format the conversation in the raw format that works
        raw_prompt = f"{self.persona}\n"
        # Add conversation history
        for msg in self.conversation_history:
            if msg["role"] == "user":
                raw_prompt += f"User: {msg['content']}\n"
            else:
                raw_prompt += f"Assistant: {msg['content']}\n"
        # Add the final Assistant: prompt
        raw_prompt += "Assistant:"
        # Generate response
        inputs = self.tokenizer(raw_prompt, return_tensors="pt").to(self.model.device)
        with torch.no_grad():
            outputs = self.model.generate(
                **inputs,
                max_new_tokens=100,
                do_sample=True,
                temperature=0.7,
                top_p=0.95,
                pad_token_id=self.tokenizer.eos_token_id
            )
        # Decode full output
        full_output = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        # Extract just the response
        try:
            response = full_output[len(raw_prompt):].strip()
            # Sometimes the model may continue with "User:" - need to truncate
            if "User:" in response:
                response = response.split("User:")[0].strip()
            # Store the response in conversation history
            self.conversation_history.append({"role": "assistant", "content": response})
            return response
        except Exception:
            return "Error extracting response"

    def reset_conversation(self):
        self.conversation_history = []
        return "Conversation has been reset."

# Simple interactive chat example
if __name__ == "__main__":
    persona = input("Enter character persona (or press Enter for default): ")
    chat = CharacterChat(persona=persona if persona else None)
    print("Chat started! Type 'quit' to exit or 'reset' to restart conversation.")
    while True:
        user_input = input("\nYou: ")
        if user_input.lower() == 'quit':
            print("Goodbye!")
            break
        elif user_input.lower() == 'reset':
            print(chat.reset_conversation())
            continue
        response = chat.chat(user_input)
        print(f"\nCharacter: {response}")
Model Limitations
- The model works best with the specific prompt format demonstrated above
- While the model can adapt to different personas, it maintains some stylistic elements (like emoji usage) across characters
- The model has a context window limited to 4k tokens, inherited from the base Phi-3 model
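Because of the 4k-token window, long conversations will eventually overflow the context. One minimal mitigation, sketched below under the assumption that prompts are assembled the same way as in the CharacterChat class above, is to drop the oldest turns until the prompt fits; trim_history and the 3500-token budget are hypothetical names and values used only for illustration:
def trim_history(tokenizer, persona, history, max_tokens=3500):
    # `history` is a list of {"role": ..., "content": ...} dicts as in CharacterChat above.
    # The 3500-token budget (an assumption) leaves headroom for the reply within the 4k window.
    while history:
        prompt = persona + "\n"
        for msg in history:
            prefix = "User" if msg["role"] == "user" else "Assistant"
            prompt += f"{prefix}: {msg['content']}\n"
        prompt += "Assistant:"
        if len(tokenizer(prompt)["input_ids"]) <= max_tokens:
            break
        history = history[1:]  # drop the oldest turn
    return history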
Ethical Considerations
This model is intended for creative fiction writing and roleplaying scenarios between consenting adults. Users should follow platform guidelines and local regulations when deploying this model.
Acknowledgements
- Based on Microsoft's Phi-3 Mini model
- Training methodology inspired by various LoRA fine-tuning approaches
- Special thanks to the open-source AI community