---
license: apache-2.0
language:
  - en
  - zh
base_model:
  - Qwen/Qwen2.5-14B
  - Qwen/Qwen2.5-14B-Instruct
  - Qwen/Qwen2.5-14B-Instruct-1M
  - EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
  - Azure99/Blossom-V6-14B
  - arcee-ai/Virtuoso-Small-v2
pipeline_tag: text-generation
tags:
  - merge
model-index:
  - name: Qwen2.5-14B-1M-YOYO-V3
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 83.98
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=YOYO-AI/Qwen2.5-14B-1M-YOYO-V3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 49.47
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=YOYO-AI/Qwen2.5-14B-1M-YOYO-V3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 53.55
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=YOYO-AI/Qwen2.5-14B-1M-YOYO-V3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 10.51
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=YOYO-AI/Qwen2.5-14B-1M-YOYO-V3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 11.1
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=YOYO-AI/Qwen2.5-14B-1M-YOYO-V3
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 46.74
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=YOYO-AI/Qwen2.5-14B-1M-YOYO-V3
          name: Open LLM Leaderboard
---

# Qwen2.5-14B-1M-YOYO-V3

This time, I'm not only releasing the model but also sharing some model-merging insights that may be even more valuable than the model itself.

Let’s start by looking at the initial merge configuration (YAML):

```yaml
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-14B-Instruct-1M
dtype: bfloat16
```
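To actually run a config like this, save it to a file and hand it to mergekit. Below is a minimal sketch using mergekit's Python API (following mergekit's example notebook; the file name and output path are placeholders, and you need `pip install mergekit` plus enough disk space for a 14B merge):

```python
# Minimal sketch: run a mergekit YAML config programmatically.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "model_stock.yaml"  # placeholder: the config above, saved to disk
OUTPUT_DIR = "./merged-model"    # placeholder: where the merged weights land

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_DIR,
    options=MergeOptions(
        cuda=True,            # set False to merge on CPU (slower)
        copy_tokenizer=True,  # copy the tokenizer into the output directory
    ),
)
```

The `mergekit-yaml` CLI (`mergekit-yaml model_stock.yaml ./merged-model --cuda`) does the same thing in one line.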

At first glance, the configuration looks fine. In practice, however, the merged model occasionally produced uncontrollable outputs, most likely because of the large divergence between the instruction-tuned models and the base model.
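A quick way to catch this kind of instability is to sample the same prompts several times and look for degenerate output. Here is a minimal spot-check sketch with transformers (the model path and prompts are placeholders I chose for illustration):

```python
# Stability spot-check: repeated sampling exposes intermittent failures
# such as loops, unexpected language switching, or broken formatting.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./merged-model"  # placeholder: the merge output directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR, torch_dtype="auto", device_map="auto"
)

# One English and one Chinese prompt, since the model is bilingual.
for prompt in ["Explain model merging in one paragraph.", "用一句话介绍你自己。"]:
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    for _ in range(3):  # several samples per prompt
        out = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
        print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```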

To address this, I first tried directly merging in a fine-tuned model with smaller divergence from the base, such as Virtuoso-Small-v2.

This produced Qwen2.5-14B-YOYO-latest-V2:

```yaml
merge_method: model_stock
base_model: Qwen/Qwen2.5-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-14B-Instruct-1M
  - model: arcee-ai/Virtuoso-Small-v2
dtype: bfloat16
name: Qwen2.5-14B-YOYO-latest-V2
```

Although this fixed the uncontrollable outputs, the model still lacked stability.

Through practical experimentation, I found that first merging the "high-divergence" models (those far from the base) into "low-divergence" models (those close to the base) with the DELLA method, and only then applying Model Stock, yields a model that is not only more stable but also performs better.

Key models used:

1. Low-divergence, high-performance models:
   - Virtuoso-Small-v2
   - Blossom-V6-14B

2. High-divergence, instruction-focused models:
   - Qwen2.5-14B-Instruct
   - Qwen2.5-14B-Instruct-1M

DELLA Merge Configurations:

```yaml
# della1: Qwen2.5-14B-Instruct merged into Virtuoso-Small-v2
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della1
```

```yaml
# della2: Qwen2.5-14B-Instruct-1M merged into Virtuoso-Small-v2
models:
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della2
```

```yaml
# della3: Qwen2.5-14B-Instruct merged into Blossom-V6-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Azure99/Blossom-V6-14B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della3
```

```yaml
# della4: Qwen2.5-14B-Instruct-1M merged into Blossom-V6-14B
models:
  - model: Qwen/Qwen2.5-14B-Instruct-1M
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Azure99/Blossom-V6-14B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: Qwen2.5-14B-YOYO-della4
```

This approach yielded four variants:

- Qwen2.5-14B-YOYO-della1
- Qwen2.5-14B-YOYO-della2
- Qwen2.5-14B-YOYO-della3
- Qwen2.5-14B-YOYO-della4
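With each of the four configs saved to its own file, the variants can be produced in one loop (same assumed mergekit entry points as in the sketch above; the file names are placeholders):

```python
# Sketch: build the four DELLA intermediates sequentially.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

for i in range(1, 5):
    with open(f"della{i}.yaml", "r", encoding="utf-8") as fp:
        cfg = MergeConfiguration.model_validate(yaml.safe_load(fp))
    # each merge writes a full 14B checkpoint, so budget disk space accordingly
    run_merge(cfg, out_path=f"./Qwen2.5-14B-YOYO-della{i}", options=MergeOptions(cuda=True))
```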

Base Model:

To enhance the base model's roleplay and creative-writing capabilities, I applied the same strategy:

```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
    parameters:
      density: 1
      weight: 1
      lambda: 0.9
merge_method: della
base_model: Qwen/Qwen2.5-14B
parameters:
  density: 1
  weight: 1
  lambda: 0.9
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: base
name: EVA-Qwen2.5-14B-base
```

Next, I extended the context length with the SCE method, using Qwen2.5-14B-Instruct-1M as the base so the result inherits its long-context capability:

```yaml
merge_method: sce
models:
  - model: EVA-Qwen2.5-14B-base
base_model: Qwen/Qwen2.5-14B-Instruct-1M
parameters:
  select_topk: 1
dtype: bfloat16
tokenizer_source: base
normalize: true
int8_mask: true
name: Qwen2.5-14B-pro
```

Final Merge Step:

```yaml
merge_method: model_stock
base_model: Qwen2.5-14B-pro
models:
  - model: Qwen2.5-14B-YOYO-della1
  - model: Qwen2.5-14B-YOYO-della2
  - model: Qwen2.5-14B-YOYO-della3
  - model: Qwen2.5-14B-YOYO-della4
dtype: bfloat16
tokenizer_source: base
int8_mask: true
normalize: true
name: Qwen2.5-14B-1M-YOYO-V3
```
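For completeness, the released model loads like any other Qwen2.5 checkpoint with transformers (a standard usage sketch; the sampling settings are only a starting point):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YOYO-AI/Qwen2.5-14B-1M-YOYO-V3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Give me a short introduction to model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```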

I hope this helps!

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 42.56 |
| IFEval (0-Shot)     | 83.98 |
| BBH (3-Shot)        | 49.47 |
| MATH Lvl 5 (4-Shot) | 53.55 |
| GPQA (0-shot)       | 10.51 |
| MuSR (0-shot)       | 11.10 |
| MMLU-PRO (5-shot)   | 46.74 |