L3-Aethora-15B

The Skullery Presents L3-Aethora-15B.

Creator: Steelskull

Dataset: Aether-Lite-V1.2

Trained: 4x A100 for 15 hours using rsLoRA and DoRA

About L3-Aethora-15B:

L3 = Llama 3

L3-Aethora-15B was created using the abliteration method to adjust model responses: the model's refusal behavior is inhibited, yielding more compliant and facilitative dialogue interactions. The abliterated model then underwent a modified DUS (Depth Up-Scaling) merge (a technique originally used by @Elinas): a passthrough merge that duplicates layers to reach 15B parameters, with specific adjustments (zeroing) applied to the 'o_proj' and 'down_proj' tensors in the duplicated layers to improve efficiency and reduce perplexity. The result of this merge is AbL3In-15b; a sketch of such a merge config follows.
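For illustration, a DUS-style passthrough merge of this kind can be expressed as a mergekit config. The sketch below is an assumption-laden example, not the exact AbL3In-15b recipe: the base model path and layer ranges are placeholders, while the `scale` filters show the zeroing of 'o_proj' and 'down_proj' in the duplicated layers described above.

```yaml
# Illustrative DUS-style passthrough merge config (mergekit).
# The base model path and layer ranges are placeholders, not the exact recipe;
# duplicating the middle slices of a 32-layer 8B model yields roughly 15B params.
merge_method: passthrough
dtype: bfloat16
slices:
  - sources:
      - model: ./abliterated-llama-3-8b   # placeholder for the abliterated base
        layer_range: [0, 24]
  - sources:
      - model: ./abliterated-llama-3-8b
        layer_range: [8, 24]              # duplicated middle layers
        parameters:
          scale:
            - filter: o_proj              # zeroed in the duplicated slice
              value: 0.0
            - filter: down_proj           # zeroed in the duplicated slice
              value: 0.0
            - value: 1.0                  # everything else passes through unchanged
  - sources:
      - model: ./abliterated-llama-3-8b
        layer_range: [8, 24]              # second duplicated slice, zeroed the same way
        parameters:
          scale:
            - filter: o_proj
              value: 0.0
            - filter: down_proj
              value: 0.0
            - value: 1.0
  - sources:
      - model: ./abliterated-llama-3-8b
        layer_range: [24, 32]
```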

AbL3In-15b was then trained for 4 epochs with the rsLoRA and DoRA training methods on the Aether-Lite-V1.2 dataset, which contains ~82,000 high-quality samples curated to balance creativity and intelligence at roughly a 60/40 split while filtering out slop. A sketch of the adapter setup appears below.
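As a minimal sketch, an rsLoRA + DoRA adapter setup with Hugging Face peft looks like the following; the rank, alpha, target modules, and model path are assumptions, not the actual training configuration.

```python
# Minimal rsLoRA + DoRA adapter sketch using Hugging Face peft.
# Rank, alpha, target modules, and the model path are assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("./AbL3In-15b")  # placeholder path

lora_config = LoraConfig(
    r=32,                   # assumed rank
    lora_alpha=32,          # assumed alpha
    use_rslora=True,        # rank-stabilized scaling: alpha / sqrt(r) instead of alpha / r
    use_dora=True,          # weight-decomposed low-rank adaptation
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```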

This model is trained on the L3 (Llama 3 Instruct) prompt format, shown below.
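For reference, the standard Llama 3 Instruct template is:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant response}<|eot_id|>
```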

Quants:

  • Mradermacher/L3-Aethora-15B-GGUF
  • Mradermacher/L3-Aethora-15B-i1-GGUF
  • NikolayKozloff/L3-Aethora-15B-GGUF

Dataset Summary (Filtered):

Filtered Phrases: GPT slop, Claudisms

  • mrfakename/Pure-Dove-ShareGPT: Processed 3707, Removed 150
  • mrfakename/Capybara-ShareGPT: Processed 13412, Removed 2594
  • jondurbin/airoboros-3.2: Processed 54517, Removed 4192
  • PJMixers/grimulkan_theory-of-mind-ShareGPT: Processed 533, Removed 6
  • grimulkan/PIPPA-augmented-dedup: Processed 869, Removed 46
  • grimulkan/LimaRP-augmented: Processed 790, Removed 14
  • PJMixers/grimulkan_physical-reasoning-ShareGPT: Processed 895, Removed 4
  • MinervaAI/Aesir-Preview: Processed 994, Removed 6
  • Doctor-Shotgun/no-robots-sharegpt: Processed 9911, Removed 89

Deduplication Stats:

Starting row count: 85628, Final row count: 81960, Rows removed: 3668. A sketch of such a filtering and dedup pass is shown below.
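For illustration only, a phrase-filter and exact-match dedup pass over ShareGPT-style records could look like the sketch below; the phrase list, field names, and file name are hypothetical, not the actual Aether-Lite pipeline.

```python
# Illustrative phrase-filter + exact-match dedup over ShareGPT-style records.
# FILTERED_PHRASES, the record structure, and the file name are hypothetical.
import hashlib
import json

FILTERED_PHRASES = ["as an ai language model", "i cannot and will not"]  # hypothetical examples

def is_clean(record: dict) -> bool:
    """Drop samples whose conversation contains any filtered phrase."""
    text = " ".join(turn["value"].lower() for turn in record["conversations"])
    return not any(phrase in text for phrase in FILTERED_PHRASES)

def dedup(records: list[dict]) -> list[dict]:
    """Keep only the first occurrence of each exact conversation."""
    seen, unique = set(), []
    for record in records:
        key = hashlib.sha256(
            json.dumps(record["conversations"], sort_keys=True).encode()
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Usage sketch:
# records = [json.loads(line) for line in open("aether_lite_raw.jsonl")]  # hypothetical file
# cleaned = dedup([r for r in records if is_clean(r)])
```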

I've had a few people ask about donations, so here's a link:
