
LuminRP-7B-128k-v0.2

LuminRP-7B-128k-v0.2 is a merge of four roleplay (RP) models into one using LazyMergekit. This model is intended purely for roleplaying and supports a 128k context window.
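For readers unfamiliar with LazyMergekit: it generates a mergekit YAML config and runs the merge for you. The sketch below shows the general shape of such a config; the model names, merge method, and parameter values here are placeholders for illustration, not the actual recipe behind this merge.

```yaml
# Hypothetical mergekit config — illustrative only.
# The real source models and method for LuminRP-7B-128k-v0.2 are not shown here.
models:
  - model: base-rp-7b            # placeholder base model
  - model: rp-model-a-7b         # placeholder
    parameters:
      density: 0.53
      weight: 0.4
  - model: rp-model-b-7b         # placeholder
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties          # one common choice; the actual method may differ
base_model: base-rp-7b
parameters:
  int8_mask: true
dtype: bfloat16
```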

Example Response:

I use the ChatML template for this model with Instruct Mode enabled. The Mistral template works as well, but I don't recommend the Alpaca-Roleplay template because generations tend to run on without stopping, most likely because that template doesn't define a message suffix.
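For reference, ChatML wraps every turn in `<|im_start|>` and `<|im_end|>` tokens; the end token gives the model an explicit stopping point, which is exactly what the Alpaca-Roleplay template lacks. A minimal prompt looks like this (the persona text is just an example):

```
<|im_start|>system
You are Lumin, a roleplay partner. Stay in character.<|im_end|>
<|im_start|>user
*walks into the tavern* Mind if I sit here?<|im_end|>
<|im_start|>assistant
```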

Quantized Version

GGUF: Ppoyaa/LuminRP-7B-128k-v0.2-GGUF
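If you want to run the GGUF quant locally, a llama-cpp-python sketch like the one below should work; the quant filename is a placeholder, since the exact files in the repo aren't listed here.

```python
# Sketch: load the GGUF quant with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ppoyaa/LuminRP-7B-128k-v0.2-GGUF",
    filename="*Q4_K_M.gguf",  # placeholder pattern — match it to a real file in the repo
    n_ctx=8192,               # raise toward 128k only if you have the memory for it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
)
print(out["choices"][0]["message"]["content"])
```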

πŸ† Open LLM Leaderboard Evaluation Results

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 73.18 |
| AI2 Reasoning Challenge (25-Shot) | 70.56 |
| HellaSwag (10-Shot)               | 87.46 |
| MMLU (5-Shot)                     | 64.92 |
| TruthfulQA (0-shot)               | 65.78 |
| Winogrande (5-shot)               | 82.40 |
| GSM8k (5-shot)                    | 67.93 |

💻 Usage

```
!pip install -qU transformers accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Ppoyaa/LuminRP-7B-128k-v0.2"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template (ChatML).
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across available devices.
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
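Since the model is meant for roleplay, you would typically prepend a persona via a system message; the same tokenizer and pipeline work unchanged. The persona text below is made up for the example:

```python
# Illustrative roleplay setup — the persona is a placeholder, not part of the model.
messages = [
    {"role": "system", "content": "You are Lumin, a witty tavern keeper. Stay in character."},
    {"role": "user", "content": "*pushes the door open* Got anything warm to drink?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```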