
Lyra

NEW V4!

Fixes issues some were having: lyra-v4


Automatic Approval

If you agree, please place future merges / derivatives under the cc-by-nc-4.0 license. Thank you!


This is Mistral-NeMo-12B-Lyra-v3, built on top of Lyra-v2a2, which itself was built upon Lyra-v2a1.

Model Versioning

```
Lyra-v1 [Merge of Custom Roleplay & Instruct Trains, on Different Formats]
  |
  | [Additional SFT on 10% of Previous Data, Mixed]
  v
Lyra-v2a1
  |
  | [Low Rank SFT Step + Tokenizer Diddling]
  v
Lyra-v2a2
  |
  | [RL Step Performed on Multiturn Sets, Magpie-style Responses by Lyra-v2a2 for Rejected Data]
  v
Lyra-v3
```

This uses a custom ChatML-style prompting format!

-> What can go wrong?

```
[INST]system
This is the system prompt.[/INST]
[INST]user
Instructions placed here.[/INST]
[INST]assistant
The model's response will be here.[/INST]
```

Why this? I had used the wrong configs by accident. The format was meant for an 8B pruned NeMo train; instead, it went into this one. Oops.
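
If you are templating this format yourself, here is a minimal sketch in Python. The `build_prompt` helper and its message schema are illustrative assumptions, not something shipped with this repo:

```python
# Sketch of the custom [INST]-wrapped, ChatML-style format above.
# The helper name and message schema are assumptions for illustration.
def build_prompt(messages: list[dict[str, str]]) -> str:
    """Render [{"role": ..., "content": ...}, ...] into the format above."""
    parts = [f"[INST]{m['role']}\n{m['content']}[/INST]" for m in messages]
    # Leave the assistant turn open for the model to complete.
    parts.append("[INST]assistant\n")
    return "\n".join(parts)

print(build_prompt([
    {"role": "system", "content": "This is the system prompt."},
    {"role": "user", "content": "Instructions placed here."},
]))
```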

Recommended Samplers:

```
Temperature: 0.7 - 1.2
min_p: 0.1 - 0.2  # Crucial for NeMo
```

Recommended Stopping Strings:

```
<|im_end|>
</s>
```
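
A minimal sketch of applying these settings with llama-cpp-python (assuming a recent version with `min_p` support; the GGUF filename below is an assumption, substitute one of the available quantizations):

```python
# Sketch: the recommended samplers and stopping strings with llama-cpp-python.
# The GGUF filename is an assumption; use any quantization of this model.
from llama_cpp import Llama

llm = Llama(model_path="MN-12B-Lyra-v3-Q5_K_M.gguf", n_ctx=8192)

prompt = (
    "[INST]system\nThis is the system prompt.[/INST]\n"
    "[INST]user\nInstructions placed here.[/INST]\n"
    "[INST]assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=0.9,  # recommended range: 0.7 - 1.2
    min_p=0.1,        # recommended range: 0.1 - 0.2, crucial for NeMo
    stop=["<|im_end|>", "</s>"],  # recommended stopping strings
)
print(out["choices"][0]["text"])
```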

Blame the messed-up training configs. Oops.

Training Metrics:

- Trained on 4xH100 SXM for 6 Hours.
- Trained for 2 Epochs.
- Effective Global Batch Size: 128 (see the sketch after this list).
- Dataset Used: A custom, cleaned mix of Stheno-v3.4's Dataset, focused mainly on multiturn.
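
For reference, effective global batch size is per-device batch size × gradient accumulation steps × number of GPUs. Here is a minimal sketch of one split that reaches 128 on 4 GPUs; the per-device and accumulation values are assumptions, not the actual training config:

```python
# Effective global batch size = per-device batch * grad accumulation * GPUs.
# The 8 * 4 * 4 split below is an illustrative assumption, not the real config.
num_gpus = 4                     # 4x H100 SXM
per_device_batch_size = 8        # assumed
gradient_accumulation_steps = 4  # assumed
assert per_device_batch_size * gradient_accumulation_steps * num_gpus == 128
```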


Extras

Image Source: AI-Generated with FLUX.1 Dev.

have a nice day.
