
This is an experimental TinyLlama mix merge built with a custom merge method. It should be better at RP (roleplay). The merge configuration is below.

```yaml
merge_method: task_swapping
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k
models:
  - model: cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser
    parameters:
      weight: 0.75
      diagonal_offset: 5
  - model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    parameters:
      weight: 0.85
      diagonal_offset: 17
      invert_offset: True
dtype: bfloat16
name: bye
---
merge_method: task_swapping
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct
models:
  - model: vihangd/DopeyTinyLlama-1.1B-v1
    parameters:
      weight: 0.8
      diagonal_offset: 3
      invert_offset: False
dtype: bfloat16
name: hello
---
merge_method: task_arithmetic
base_model: Doctor-Shotgun/TinyLlama-1.1B-32k
models:
  - model: hello
    parameters:
      weight: 0.66
  - model: bye+Anarchist/PIPPA_LORA_TinyLlama
    parameters:
      weight: 0.5
dtype: bfloat16
```
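
These appear to be mergekit-style configs chained in three stages: two task_swapping passes produce intermediate models named `bye` and `hello`, and a final task_arithmetic pass combines them, applying the Anarchist/PIPPA_LORA_TinyLlama LoRA on top of `bye` via the `+` syntax. The sketch below is a rough way to reproduce such a chain with mergekit's Python API, not the author's exact workflow; it assumes your mergekit installation includes the custom task_swapping method, that each YAML document above is saved to its own file (the file names here are hypothetical), and that the `hello`/`bye` names in the final stage are replaced by the local output paths of the earlier stages.

```python
# Reproduction sketch. Assumptions: mergekit is installed and provides the
# custom task_swapping method; the three YAML documents above were saved as
# stage1_bye.yml, stage2_hello.yml and stage3_final.yml (hypothetical names),
# with "hello"/"bye" in the final stage replaced by ./out/hello and ./out/bye.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

stages = [
    ("stage1_bye.yml", "./out/bye"),
    ("stage2_hello.yml", "./out/hello"),
    ("stage3_final.yml", "./out/final"),
]

for config_path, out_path in stages:
    # Parse the YAML for this stage into a mergekit MergeConfiguration.
    with open(config_path, "r", encoding="utf-8") as fp:
        merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

    # Run the merge and write the resulting model to out_path.
    run_merge(
        merge_config,
        out_path=out_path,
        options=MergeOptions(
            cuda=torch.cuda.is_available(),
            copy_tokenizer=True,
        ),
    )
```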

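Once merged (or downloaded), the result is a standard 1.1B Llama-architecture checkpoint and loads with 🤗 Transformers like any other causal LM. A minimal usage sketch follows; the repository id is a placeholder, substitute this model's actual repo name.

```python
# Minimal usage sketch with Hugging Face Transformers (requires torch,
# transformers and accelerate). "your-username/tinyllama-rp-merge" is a
# placeholder repo id, not the real repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/tinyllama-rp-merge"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

prompt = "You are a friendly tavern keeper. Greet the adventurer who just walked in."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
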
Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 32.99 |
| AI2 Reasoning Challenge (25-Shot) | 31.48 |
| HellaSwag (10-Shot)               | 48.39 |
| MMLU (5-Shot)                     | 25.05 |
| TruthfulQA (0-shot)               | 33.45 |
| Winogrande (5-shot)               | 58.48 |
| GSM8k (5-shot)                    | 1.06  |