
Thanks to @mradermacher for helping me discover that LumiMaid uses the 'smaug-bpe' pre-tokenizer, which means all of its quants are currently unusable. For now, you can only load this model with Transformers (support may be fixed or added in the future).
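As a minimal sketch (assuming a standard Transformers setup with `accelerate` installed), loading this model without GGUF quants looks like:

```python
# Sketch: load SMaid with Hugging Face Transformers instead of GGUF quants.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/L3-8B-SMaid-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the weights are stored in BF16
    device_map="auto",           # requires `accelerate`
)
```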

Update: the two versions need different presets (settings) to work well.

Overall:

Sao10K Stheno > SMaid V0.3 > SMaid V0.1 on the Chai benchmark

SMaid V0.1 = Sao10K Stheno > SMaid V0.3 on my custom EQ bench (sadness, deep thought, and depression tests)

Disclaimer: same seed, same character card, same scenario, 4 attempts per model.

This is the best of my L3-8B merge series. I chose the 2 best variants to publish.

SMaid-V0.1: Smarter, understands content well, writes more like a novel. I like this version.

SMaid-V0.3: An upgrade from V0.1. More talkative, active, and energetic (wrong setting, lol).

There is no V0.2 because I deleted it; it was the worst model of the series.

I think Stheno and LumiMaid can complement each other like 'yin-yang', so I combined them, lol. I tested both on ChaiVerse, and both got an Elo score > 1995 from the beginning. (Thanks to Sao10K for letting me know about ChaiVerse :) )

SMaid = Stheno (very good) + LumiMaid (not as good, but its writing style is good)

Recommended preset (feel free to give feedback if any setting works better):

Temperature - 1.1-1.25
Min-P - 0.075
Top-K - 50
Top-P - 0.5
Repetition Penalty - 1.1
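For reference, this preset maps directly onto the sampling arguments that Transformers' `generate()` accepts (`min_p` needs a fairly recent Transformers version). A sketch of the settings as a kwargs dict, with 1.15 chosen arbitrarily from the recommended temperature range:

```python
# The recommended preset expressed as generate() keyword arguments.
smaid_preset = {
    "do_sample": True,
    "temperature": 1.15,         # recommended range: 1.1-1.25
    "min_p": 0.075,
    "top_k": 50,
    "top_p": 0.5,
    "repetition_penalty": 1.1,
}

# Usage: model.generate(**inputs, **smaid_preset, max_new_tokens=256)
```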

Below is the description auto-generated by Mergekit.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method using NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS as a base.

Models Merged

The following models were included in the merge:

* Sao10K/L3-8B-Stheno-v3.2

Configuration

The following YAML configuration was used to produce this model:


slices:
- sources:
  - layer_range: [0, 16]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: 1.0
  - layer_range: [0, 16]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.5
      weight: 0.9
- sources:
  - layer_range: [16, 24]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.75
      weight: 0.5
  - layer_range: [16, 24]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.25
      weight: 0.5
- sources:
  - layer_range: [24, 32]
    model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: 0.5
  - layer_range: [24, 32]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.5
      weight: 1.0
merge_method: dare_ties
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
  int8_mask: true
dtype: bfloat16
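As a quick, hypothetical sanity check (not part of mergekit itself), you can verify that the three slices in the config above tile all 32 layers of the 8B Llama-3 architecture contiguously:

```python
# Hypothetical check: the slice boundaries from the YAML config above.
slices = [(0, 16), (16, 24), (24, 32)]

def tiles_contiguously(ranges, total_layers):
    """True if the ranges cover [0, total_layers) with no gaps or overlaps."""
    expected_start = 0
    for start, end in ranges:
        if start != expected_start:
            return False
        expected_start = end
    return expected_start == total_layers

print(tiles_contiguously(slices, 32))  # → True
```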