Llama3.2 - OpenElla3A
OpenElla is a Llama3.2 3B-parameter model fine-tuned for roleplaying, even with its limited parameter count. This is achieved through a series of fine-tuning passes over two datasets with different weights, aiming to counter Llama3.2's generalist approach and specialize the model in roleplaying and acting.
OpenElla3A excels at producing RAW and UNCENSORED output, but it LACKS PROPER OBEDIENCE TRAINING. Because of this, OpenElla3 Model A is intended for training purposes only. If you seek to train or distill a Llama model to generate uncensored content, please do so with care and ethical consideration.
OpenElla3B is
- Developed by: N-Bot-Int
- License: apache-2.0
- Parent model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
- Sequentially trained from model: N-Bot-Int/OpenElla3-Llama3.2A
- Datasets combined using: Mosher-R1 (proprietary software)
OpenElla3B is NOT YET RANKED on any metrics.
- Feel free to offer support by emailing me: nexus.networkinteractives@gmail.com
Notice
- For a good experience, please use:
- temperature = 1.5, min_p = 0.1, and max_new_tokens = 128 (see the usage sketch below)
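For reference, here is a minimal usage sketch with the settings above, using Hugging Face transformers. The repo id is taken from the model tree at the bottom of this card; the prompt is illustrative, and min_p sampling requires a recent transformers release.

```python
# Minimal sketch: load OpenElla3 and generate with the recommended sampling settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "N-Bot-Int/OpenElla3-Llama3.2A"  # repo id from the model tree below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative roleplay prompt.
messages = [{"role": "user", "content": "Stay in character as a wandering knight and greet me."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Recommended settings from the notice above.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
    max_new_tokens=128,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```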
Detail card:
Parameters
- 3 billion parameters
- (Please check your GPU vendor's specifications to confirm your hardware can run 3B models)
Training
- 500 steps on the Mixed-RP startup dataset
- 200 steps on PIPPA-ShareGPT, for increased roleplaying capabilities
- 150 steps (refining) on PIPPA-ShareGPT, to further increase the weight of PIPPA and override the noise (a minimal training sketch follows this list)
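The staged schedule above can be reproduced with Unsloth and TRL (the tools named below). What follows is a minimal sketch under stated assumptions, not the exact training script: the Mixed-RP repo id is hypothetical, the PIPPA-ShareGPT id refers to a public ShareGPT conversion, hyperparameters are illustrative, and the proprietary Mosher-R1 dataset-combining step is not shown.

```python
# Minimal sketch of the staged fine-tune with Unsloth + TRL.
# Dataset repo ids and hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",  # parent model from this card
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

def run_stage(dataset_name: str, max_steps: int) -> None:
    """One training stage: a fixed number of steps over one dataset.
    Assumes the dataset is already converted to the trainer's expected text format."""
    dataset = load_dataset(dataset_name, split="train")
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(max_steps=max_steps, per_device_train_batch_size=2,
                       learning_rate=2e-4, output_dir="outputs"),
    )
    trainer.train()

run_stage("your-org/mixed-rp-startup", 500)  # hypothetical id for the Mixed-RP startup dataset
run_stage("kingbri/PIPPA-shareGPT", 200)     # PIPPA-ShareGPT stage
run_stage("kingbri/PIPPA-shareGPT", 150)     # refining stage, re-weighting PIPPA
```

Because the same LoRA adapters are passed through each stage, the later PIPPA steps build on (and re-weight) what the earlier stages learned, which is the intent of the sequential schedule above.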
Finetuning tool:
Unsloth AI
- This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
Fine-tuned Using:
Google Colab
Model tree for N-Bot-Int/OpenElla3-Llama3.2A
- Base model: meta-llama/Llama-3.2-3B-Instruct