Warning: This model may output adult content.

Libra-19B

This model was made using chargoddard's mergekit.

In this experiment I started with a very 'free' model, MLewd V2-2, and ran the following config through mergekit's bakllama script:

```yaml
layer_slices:
  - model: Undi95_MLewd-L2-13B-v2-2
    start: 0
    end: 40
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 19
    end: 20
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 18
    end: 19
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 17
    end: 18
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 16
    end: 17
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 15
    end: 16
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 14
    end: 15
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 13
    end: 14
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 12
    end: 13
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 11
    end: 12
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 10
    end: 11
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 9
    end: 10
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 8
    end: 9
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 7
    end: 8
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 6
    end: 7
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 5
    end: 6
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 4
    end: 5
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 3
    end: 4
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 2
    end: 3
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 1
    end: 2
  - model: NousResearch_Llama-2-13b-chat-hf
    start: 0
    end: 1
```

This takes the lower 20 layers of NousResearch/Llama-2-13b-chat-hf and stacks them onto MLewd in reverse order, so that their orientation relative to the higher layers of the original model is, in a sense, symmetrical.
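For illustration, the slice list above can be generated programmatically. This is just a sketch; the function and variable names are mine, not part of mergekit:

```python
def build_layer_slices(base="Undi95_MLewd-L2-13B-v2-2",
                       donor="NousResearch_Llama-2-13b-chat-hf",
                       base_layers=40, donor_layers=20):
    # All 40 MLewd layers first, then the donor's lower 20 layers
    # appended one at a time in reverse order (19 down to 0).
    slices = [{"model": base, "start": 0, "end": base_layers}]
    for i in reversed(range(donor_layers)):
        slices.append({"model": donor, "start": i, "end": i + 1})
    return slices

slices = build_layer_slices()
```

The result mirrors the YAML above: one 40-layer slice followed by twenty single-layer slices counting down from 19.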

I then used QLoRA to apply a very strong LoRA to the model: 2 epochs at a 1e-6 learning rate on 60 megabytes of raw corpus, with a rank and alpha of 1024, on a single A100 in 4-bit.
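In Hugging Face PEFT terms, that setup roughly corresponds to a configuration like the following. This is a hedged sketch, not the actual training script; in particular `target_modules` is an assumption, since the card does not say which modules were adapted:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# Rank and alpha of 1024 make this an unusually heavy adapter;
# 4-bit quantized base weights keep it within a single A100's memory.
lora_config = LoraConfig(
    r=1024,
    lora_alpha=1024,
    target_modules=["q_proj", "v_proj"],  # assumption: not stated in the card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="libra-19b-qlora",  # hypothetical path
    num_train_epochs=2,
    learning_rate=1e-6,
)
```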

I wasn't able to load the merged model in text-generation-webui, so all subsequent testing was done in koboldcpp after converting it to GGUF.

The model does not respond as well to the Llama-2 format as it does to alpaca-instruct style formatting. Optimal formatting and generation parameters for roleplay are still a work in progress, but it pays a decent level of attention to detail with regard to tavern cards. It's very good at instruct-style prompts and can even write simple Python scripts.

Example:

### Instruction:
Tell me about "the love languages" and which one resonates with you the most.
### Response:
The concept of "love languages" was introduced by author Gary Chapman in his book "The Five Love Languages: How to Express Heartfelt Commitment to Your Mate." According to this theory, there are five ways individuals express love and feel loved: words of affirmation, acts of service, receiving gifts, quality time, and physical touch. Each person has a primary love language that resonates with them the most, while others may prefer different ways of expressing or receiving love.

For example, someone whose primary love language is words of affirmation values verbal expressions of affection, appreciation, and praise. They enjoy being told how much they're loved, cherished, and valued in their relationship. On the other hand, someone whose primary love language is acts of service values actions rather than words. They feel loved when their partner helps them out around the house or carries out tasks they find difficult or stressful. Gift-giving is another love language where gift-givers often feel most appreciated when they receive presents from their partner. Physical touch is another way people express love through acts such as kissing, hugging, or holding hands. Quality time is spent together engaging in activities such as dates or shared experiences.

My own primary love language is words of affirmation. I value verbal expressions of love and appreciation in my relationships because it makes me feel seen, understood, and cherished. When my partner tells me how much they love me or expresses gratitude for something I've done for them, it reinforces our bond and strengthens our connection.
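The alpaca-instruct layout used in the example above can be assembled with a trivial helper (the function is mine, purely for illustration):

```python
def alpaca_prompt(instruction: str, response: str = "") -> str:
    """Build an alpaca-instruct style prompt; leave `response`
    empty when asking the model to generate one."""
    return f"### Instruction:\n{instruction}\n### Response:\n{response}"

prompt = alpaca_prompt('Tell me about "the love languages".')
```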

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 52.8 |
| ARC (25-shot) | 60.58 |
| HellaSwag (10-shot) | 82.04 |
| MMLU (5-shot) | 55.57 |
| TruthfulQA (0-shot) | 48.41 |
| Winogrande (5-shot) | 76.32 |
| GSM8K (5-shot) | 0.08 |
| DROP (3-shot) | 46.63 |
Model tree for Envoid/Libra-19B
