
🦊 L3.3-70B-Vulpecula
Hi hi!
This is a collaboration between GradientPutri and Sao10K.
This has been a passion project of ours spanning the past few weeks, and we hope you like it.
While there may be some minor issues, we think the final result is nice, and it produces the kind of outputs that were the main goal.
Model card made by GradientPutri.
Licensing Information
This model is based on Meta's Llama 3.3 and is subject to the Llama 3.3 Community License Agreement and the Acceptable Use Policy.
While we are unable to disallow commercial usage, do note that this project was made with our own resources, time and effort, and we would rather not be discouraged from making future models. We kindly request that commercial users reach out before deployment to discuss usage and proper attribution. We appreciate users who help maintain transparency in the AI ecosystem by keeping us informed of how our work is being used. The same goes for any merges or derivatives, hopefully :)
Model Details
- A thinking-based model inspired by Deepseek-R1, trained with both SFT and a small amount of RL on creative writing data.
- 🧠 Prefill (i.e. begin assistant replies with) <think>\n to activate thinking mode, or leave it out; the model works well without thinking too (see the sketch after this list).
- Improved steerability, instruct-roleplay and creative control over the base model.
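Below is a minimal sketch of the thinking-mode prefill using transformers. The generation settings follow the sampler recommendations further down; the hardware setup (device_map, dtype) is an assumption for a 70B model, and any backend that lets you append text after the assistant header works the same way.

```python
# Minimal sketch: activate thinking mode by prefilling the assistant turn with "<think>\n".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/Llama-3.3-70B-Vulpecula-r1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write the opening scene of a heist story."},
]

# Build the prompt up to the assistant header, then prefill "<think>\n".
# Skip the prefill line entirely to use the model without thinking mode.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.75, min_p=0.1)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```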
Dataset Composition
- 💾 Semi-synthetic Chat / Roleplaying datasets that have been remade, cleaned and filtered for repetition, quality and output.
- Human-written natural Chat / Roleplaying datasets, cleaned, filtered and checked for quality.
- Diverse Instruct datasets from a few different LLMs, cleaned and filtered for refusals and quality (an illustrative filtering sketch follows this list).
- Reasoning Traces taken from Deepseek-R1 for Instruct, Chat & Creative tasks, filtered and cleaned for quality.
- Toxic / decensorship data was not needed for our purposes; the model is unrestricted enough as is.
Total token count: ~270M Tokens (210M Trainable), over 2 epochs.
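The actual cleaning pipeline is not published; purely as an illustration of the kind of refusal/quality filtering described in the list above, a pass over chat-formatted samples might look like the sketch below. The marker list and record format are hypothetical stand-ins.

```python
# Illustrative sketch only: REFUSAL_MARKERS and the chat-record format are
# hypothetical examples, not the actual filtering used for this model.
REFUSAL_MARKERS = (
    "i'm sorry, but i can't",
    "as an ai language model",
    "i cannot assist with",
)

def keep_sample(sample: dict, min_chars: int = 200) -> bool:
    """Drop samples whose assistant turns look like refusals or are very short."""
    assistant_turns = [m["content"] for m in sample["messages"] if m["role"] == "assistant"]
    if not assistant_turns or sum(len(t) for t in assistant_turns) < min_chars:
        return False
    lowered = " ".join(assistant_turns).lower()
    return not any(marker in lowered for marker in REFUSAL_MARKERS)

# Example usage: filtered = [s for s in dataset if keep_sample(s)]
```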
🎨 Formatting and Samplers
Instruct Format: Llama-3-Instruct
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
---
Note that newlines are shown as actual line breaks in the example above.
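Assuming the repository ships the standard Llama-3-Instruct chat template, the tokenizer can build this prompt for you; a quick sketch:

```python
# Quick check that the tokenizer's chat template matches the format above
# (assumes the repo ships the standard Llama-3-Instruct template).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Sao10K/Llama-3.3-70B-Vulpecula-r1")
messages = [
    {"role": "system", "content": "{system_prompt}"},
    {"role": "user", "content": "{input}"},
]
# add_generation_prompt=True appends the assistant header so the model can reply.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```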
✨ Sampler Recommendations
temperature: 0.75
min_p: 0.1
repetition_penalty: 1.1
presence_penalty: 1.1
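One way to apply these settings is through vLLM's SamplingParams, as in the sketch below; tensor_parallel_size and max_tokens are assumptions to adjust for your hardware and use case.

```python
# Sketch of the recommended sampler settings with vLLM.
# tensor_parallel_size and max_tokens are assumptions; tune them to your setup.
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=0.75,
    min_p=0.1,
    repetition_penalty=1.1,
    presence_penalty=1.1,
    max_tokens=1024,
)
llm = LLM(model="Sao10K/Llama-3.3-70B-Vulpecula-r1", tensor_parallel_size=4)
outputs = llm.chat(
    [{"role": "user", "content": "Describe a quiet market square at dawn."}],
    sampling,
)
print(outputs[0].outputs[0].text)
```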
⚙️ Training Details
# Iterations
num_epochs: 2
# Batching - Global Batch: 4 GPUs × Batch 2 × 4 Grad_accum = 32
gradient_accumulation_steps: 4
micro_batch_size: 2
# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.00002
max_grad_norm: 1
weight_decay: 0.01
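The listing above reads like an axolotl-style config. Roughly the same run settings expressed as Hugging Face TrainingArguments might look like this sketch; the output path and bf16 flag are assumptions, and the paged_ademamix_8bit optim string assumes a recent transformers + bitsandbytes install that exposes it.

```python
# Rough, assumed mapping of the config above onto Hugging Face TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vulpecula-sft",      # assumed output path
    num_train_epochs=2,
    per_device_train_batch_size=2,   # micro_batch_size
    gradient_accumulation_steps=4,   # 4 GPUs x 2 x 4 = global batch 32
    optim="paged_ademamix_8bit",     # assumes your install supports this optimizer
    lr_scheduler_type="cosine",
    learning_rate=2e-5,
    max_grad_norm=1.0,
    weight_decay=0.01,
    bf16=True,                       # assumed precision
)
```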
🦊 Thank you for visiting! May the foxes bring you good fortune!