---
language:
  - en
tags:
  - rick-and-morty
  - llama
  - roleplay
  - character-ai
license: mit
---

# Rick Sanchez LLaMA Model

This is a LoRA fine-tune of Llama 3.2 3B Instruct that responds in the voice of Rick Sanchez from *Rick and Morty*.

## Model Details

- **Base Model:** unsloth/Llama-3.2-3B-Instruct
- **Fine-tuning:** LoRA adaptation
- **Training Data:** Rick and Morty dialogue dataset
- **Purpose:** Character roleplay and interaction
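
The Usage section below loads the checkpoint as a regular causal LM, which assumes the LoRA weights were merged into the base model. If the repository instead ships only the adapter weights, a minimal sketch with `peft` would look like this (the repo layout is an assumption, not a documented fact):

```python
# Sketch: apply the LoRA adapter on top of the base model with peft.
# Only needed if the repo ships adapter weights rather than a merged
# checkpoint; the Usage section below assumes a merged checkpoint.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-3B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "CrimsonEyes/rick_sanchez_model")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-3B-Instruct")
```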

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

def setup_rick_model(model_id, token=None):
    """
    Set up the Rick model from Hugging Face.
    model_id: "username/model-name" on Hugging Face
    token: Hugging Face access token; required for a private repository
    """
    try:
        # For a private repository, authenticate first
        if token:
            from huggingface_hub import login
            login(token)

        # Load model and tokenizer
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.float16,
            device_map="auto"
        )
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        
        return model, tokenizer
    
    except Exception as e:
        print(f"Error loading model: {str(e)}")
        return None, None

def ask_rick(question, model, tokenizer, max_new_tokens=200):
    """Ask Rick a question and return only his reply."""
    # Rick's personality prompt
    role_play_prompt = (
        "You are Rick Sanchez, a brilliant mad scientist, "
        "the smartest man in the universe. Always respond as Rick would—"
        "sarcastic, genius, and indifferent."
    )
    
    # Format input
    input_text = f"<s>### Instruction:\n{role_play_prompt}\n\n### Input:\n{question}\n\n### Response:\n"
    
    # Generate the response; max_new_tokens bounds the reply length
    # without counting the prompt tokens, and passing the attention
    # mask via **inputs avoids the padding warning
    inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        temperature=0.8,
        top_p=0.9,
        do_sample=True,
        repetition_penalty=1.2,
        pad_token_id=tokenizer.eos_token_id,
    )
    
    # Decode response
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return response.split("### Response:")[-1].strip()

# Usage example
if __name__ == "__main__":
    # Replace with your model's repository name
    MODEL_ID = "CrimsonEyes/rick_sanchez_model"
    
    # Load model
    model, tokenizer = setup_rick_model(MODEL_ID)
    
    if model and tokenizer:
        # Test questions
        questions = [
            "What do you think about space travel, Rick?",
            "Can you explain quantum physics to me?",
            "What's your opinion on family?"
        ]
        
        for question in questions:
            print(f"\nQuestion: {question}")
            response = ask_rick(question, model, tokenizer)
            print(f"Rick's response: {response}")

For a private repository:

```python
# First, get your token from https://huggingface.co/settings/tokens
MODEL_ID = "username/model-name"  # Replace with your model's repository name
model, tokenizer = setup_rick_model(MODEL_ID, token="your_token_here")
```
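
Alternatively, recent `transformers` versions accept the token directly in `from_pretrained`, which avoids a session-wide login (the token string below is a placeholder):

```python
# Pass the token per call instead of logging in globally
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    token="your_token_here",  # placeholder; substitute your own token
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, token="your_token_here")
```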

Using the model:

```python
question = "What do you think about space travel, Rick?"
response = ask_rick(question, model, tokenizer)
print(f"Rick's response: {response}")
```

## Limitations

- The model may generate responses that are sarcastic or irreverent.
- Responses are styled after Rick's character and may not be suitable for all contexts.