---
license: llama3
language:
  - en
library_name: transformers
pipeline_tag: text2text-generation
---

# 🎉 Introducing BrokenLlama-3-8b: 100% Full Finetuning, No DPO added, Enjoy! 🚀

This bad boy is a fully fine-tuned version of the already awesome Meta-Llama-3-8B, but we've cranked it up to 11 by attempting to remove alignment and biases using a specially curated dataset 📈 and an 8192-token sequence length.

BrokenLlama-3-8b went through a crazy 48-hour training session on some seriously powerful hardware, so you know it's ready to rock your world. 💪

With skills that'll blow your mind, BrokenLlama-3-8b can chat, code, and even do some fancy function calls. 🤖

But watch out! This llama is a wild one and will do pretty much anything you ask, even if it's a bit naughty. 😈 Make sure to keep it in check with your own alignment layer before letting it loose in the wild.

To get started with this incredible model, just use the ChatML prompt template and let the magic happen. It's so easy, even a llama could do it! 🦙

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
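To see what this template produces, here is a minimal sketch of a formatter (a hypothetical helper for illustration only, not something shipped with the model) that renders a message list into the ChatML layout above:

```python
# Minimal sketch: render chat messages into the ChatML format shown above.
# Hypothetical helper for illustration; the tokenizer's built-in chat
# template does this for you in practice.
def to_chatml(messages, add_generation_prompt=True):
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave an open assistant turn so the model writes the reply
        text += "<|im_start|>assistant\n"
    return text

msgs = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]
print(to_chatml(msgs))
```

The open `<|im_start|>assistant` turn at the end is what cues the model to generate the reply; `<|im_end|>` is the stop token that closes each turn.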

The ChatML prompt template is registered as the model's chat template, so you can format messages with the `tokenizer.apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the model card; replace with the actual path if it differs
model_id = "pankajmathur/BrokenLlama-3-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"},
]

# apply_chat_template returns a tensor of input ids, not a dict
gen_input = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```

## Quants

GGUF: Coming Soon

AWQ: Coming Soon

## Evals

In Progress