---
license: cc-by-nc-4.0
language:
- en
---
# ⚡ExLlamaV2 quant of : [L3-8B-Stheno-v3.3-32K](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K)
> [!note]
> ➡️ **Exl2 version :** [0.1.5](https://github.com/turboderp/exllamav2/releases/tag/v0.1.5)<br/>
> ➡️ **Cal. dataset :** Default.<br/>
> 📄 <a href="https://huggingface.co/Meggido/L3-8B-Stheno-v3.3-32K-6.5bpw-h8-exl2/resolve/main/measurement.json" download>Measurement.json</a> file.
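For reference, a minimal sketch of loading this quant with the exllamav2 Python API, following the standard dynamic-generator example pattern from the exllamav2 repo. The local path, context length, prompt and token budget below are placeholders, not part of this release:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path to the downloaded 6.5bpw-h8 quant directory
model_dir = "/models/L3-8B-Stheno-v3.3-32K-6.5bpw-h8-exl2"

config = ExLlamaV2Config(model_dir)
config.max_seq_len = 32768                      # match the model's extended context

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, max_seq_len=config.max_seq_len, lazy=True)
model.load_autosplit(cache)                     # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Write a short scene:", max_new_tokens=200))
```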
Trained with compute from [Backyard.ai](https://backyard.ai/) | Thanks to them and @dynafire for helping me out.
---
**Training Details:**

- Trained at 8K context -> expanded to 32K context with PoSE training (a rough sketch of the PoSE idea follows below).
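Since PoSE only manipulates position ids rather than widening the attention window, here is a toy two-chunk sketch of the idea. This is an illustration of the technique, not the Axolotl implementation; the function name and chunk count are made up:

```python
import random
import torch

def pose_position_ids(train_len: int = 8192, target_len: int = 32768) -> torch.Tensor:
    """Toy two-chunk PoSE: the model still attends over only `train_len` tokens,
    but the position ids it sees span the full `target_len` range."""
    split = random.randint(1, train_len - 1)         # random chunk boundary
    max_skip = target_len - train_len                # room available for the jump
    skip = random.randint(0, max_skip)               # random positional skip

    first = torch.arange(0, split)                   # chunk 1: positions 0..split-1
    second = torch.arange(split, train_len) + skip   # chunk 2: shifted by the skip
    return torch.cat([first, second])                # shape: (train_len,)

# Example: position ids now reach up to ~32K even though the batch is only 8K long.
pos = pose_position_ids()
print(pos.shape, int(pos.max()))
```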
**Dataset Modifications:**

- Further cleaned up the roleplaying samples -> quality check.
- Removed low-quality samples found during a manual check -> raised the baseline quality floor.
- More creative writing samples -> 2x the samples.
- Remade and refined the detailed instruct data.
**Notes:**

- The training run is much less aggressive than previous Stheno versions.
- The model works when tested in bf16 with the same configs as in the file.
- I do not know what effect quantisation has on it.
- Roleplays pretty well. Feels nice, in my opinion.
- It has some issues with long-context understanding and reasoning, but it is much better than plain rope scaling, so that is a plus.
- Reminder: this isn't a native 32K model. It has its issues, but it's coherent and works well.
**Sanity Check // Needle in a Haystack Results:**

- This is not as complex as RULER or NIAN, but it's a basic evaluator (a bare-bones version of such a check is sketched after the plot below). Some improperly set-up training runs had Haystack scores ranging from red to orange across most of the extended context.
![Results](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K/resolve/main/haystack.png)
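For reference, a bare-bones single-needle check of the kind described above. This is illustrative only, not the script that produced the plot; the needle text, filler and helper name are made up:

```python
from typing import Callable

def needle_in_haystack(generate_fn: Callable[[str], str],
                       n_filler_sentences: int = 2000,
                       depth: float = 0.5) -> bool:
    """Very basic check: bury one fact at `depth` (0..1) inside filler text
    and see whether the model retrieves it when asked."""
    needle = "The secret passphrase is 'violet tangerine'."
    filler = ["The grass is green and the sky is blue."] * n_filler_sentences
    filler.insert(int(depth * len(filler)), needle)
    prompt = " ".join(filler) + "\n\nQuestion: What is the secret passphrase? Answer:"
    return "violet tangerine" in generate_fn(prompt).lower()

# e.g. with the exllamav2 generator sketched near the top of this card:
# ok = needle_in_haystack(lambda p: generator.generate(prompt=p, max_new_tokens=32))
```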
Wandb Run:
![Wandb](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.3-32K/resolve/main/wandb.png)
---
**Relevant Axolotl Configurations:**

-> Taken from [winglian/Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE)

- I spent hours of tinkering trying to find my own configs, but the one he used worked best, so I stuck with it.
- A rope theta of 2M gave the best loss during training compared to other values (see the sketch after the config for how theta maps to RoPE wavelengths).
- Leaving it at 500K rope wasn't much worse, but 4M and 8M theta made the grad_norm values worsen even though the loss drops fast.
- Mixing in pretraining data was a PITA; it made the formatting a lot worse.
- Pretraining / noise data made it worse at Haystack too? It wasn't all green, mainly oranges.
- Improper / bad rope theta shows up as grad_norm exploding into the thousands. It will drop back to low values, but it's a scarily fast drop even with gradient clipping.
```yaml
sequence_len: 8192
use_pose: true
pose_max_context_len: 32768
overrides_of_model_config:
  rope_theta: 2000000.0
  max_position_embeddings: 32768
# peft_use_dora: true
adapter: lora
peft_use_rslora: true
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.1
lora_target_linear: true
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
warmup_steps: 80
gradient_accumulation_steps: 6
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine_with_min_lr
learning_rate: 0.00004
lr_scheduler_kwargs:
  min_lr: 0.000004
```
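As a side note on the rope theta comparison above, here is a tiny sketch of how theta maps to per-dimension RoPE wavelengths. This is generic RoPE math, not taken from the training code; `head_dim = 128` is Llama-3-8B's head dimension:

```python
import math

def rope_wavelengths(theta: float, head_dim: int = 128):
    """Per-dimension RoPE wavelengths in positions: lambda_i = 2*pi * theta**(2i/d).
    A larger theta means slower rotation, so distant positions stay distinguishable
    without the phase wrapping around inside the 32K window."""
    return [2 * math.pi * theta ** (2 * i / head_dim) for i in range(head_dim // 2)]

for theta in (5e5, 2e6, 8e6):
    lams = rope_wavelengths(theta)
    # count dimensions whose full period exceeds the 32K target context
    slow = sum(lam > 32768 for lam in lams)
    print(f"theta={theta:.0e}: {slow}/{len(lams)} dims have wavelength > 32K")
```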