---
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

🎉 **Introducing BrokenLlama-3-8b: 100% Full Fine-Tuning, No DPO Added, Enjoy!** 🚀

This bad boy is a fully fine-tuned version of the already awesome Meta-Llama-3-8B, but we've cranked it up to 11 by attempting to remove alignment and biases using a super special curated dataset 📈 with an 8192-token sequence length. BrokenLlama-3-8b went through a crazy 48-hour training session on 4xA100 80GB, so you know it's ready to rock your world. 💪

With skills that'll blow your mind, BrokenLlama-3-8b can chat, code, and even do some fancy function calls. 🤖

But watch out! This llama is a wild one and will do pretty much anything you ask, even if it's a bit naughty. 😈 Make sure to keep it in check with your own alignment layer before letting it loose in the wild.

To get started with this incredible model, just use the ChatML prompt template and let the magic happen. It's so easy, even a llama could do it! 🦙

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

The ChatML prompt template is available as a chat template, which means you can format messages using the `tokenizer.apply_chat_template()` method:

```
messages = [
    {"role": "system", "content": "You are BrokenLlama, a helpful AI assistant."},
    {"role": "user", "content": "Hello BrokenLlama, what can you do for me?"}
]
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
model.generate(gen_input)
```

BrokenLlama-3-8b is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

**Quants**

GGUF: Coming Soon

AWQ: Coming Soon

**Evals**

In Progress

**NOTE**

As long as you give us proper credit and attribution, you are allowed to use this model as a base model and perform further DPO/PPO tuning on it. In fact, we encourage people to do that based on your use case, since this is just a general-purpose full fine-tune.
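If you are not using `transformers`, the ChatML layout shown above can also be built by hand. Here is a minimal sketch; the `format_chatml` helper is hypothetical (not part of this repo), and it simply mirrors the `<|im_start|>role ... <|im_end|>` pattern from the template:

```python
def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role": ..., "content": ...} dicts in ChatML:
    one <|im_start|>role\\ncontent<|im_end|> block per turn."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave an open assistant turn so generation continues from there.
    if add_generation_prompt:
        prompt += "<|im_start|>assistant\n"
    return prompt


print(format_chatml([
    {"role": "system", "content": "You are BrokenLlama, a helpful AI assistant."},
    {"role": "user", "content": "Hello BrokenLlama, what can you do for me?"},
]))
```

In practice, prefer `tokenizer.apply_chat_template()`, which reads the template stored with the tokenizer and stays in sync with the model's special tokens.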