Qwen1.5-120B-Chat-Merge

This is a 120B frankenmerge of Qwen1.5-72B-Chat, created by interleaving layers of Qwen1.5-72B-Chat with itself using mergekit.

Inspired by other frankenmerge models such as goliath-120b and miqu-1-120b.

I have adopted a new recipe for merging this 120B model (I tried to expand the recipe to 124B, but saw a performance decline). Compared with the original 124B version, it has 4B fewer parameters but seems to perform better (at least that is my subjective impression). It exhibits fewer hallucinations, better comprehension, and clearer logic than the old 124B version, although I am not sure by how much, since my judgement is based on limited, subjective use. It still cannot, most of the time, solve some of the high-difficulty reasoning questions I use for testing, but it seems less likely to get confused and makes slightly fewer mistakes on those same questions.

-Quantization

Coming soon...
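Until quantized files are released, the full-precision weights can usually be loaded in 4-bit on the fly with bitsandbytes through transformers. The sketch below is only an illustration: the repo id is a placeholder, and it assumes bitsandbytes and accelerate are installed and that enough GPU memory is available for a ~120B model in 4-bit.

# Minimal sketch: loading the full-precision merge in 4-bit via bitsandbytes.
# "your-namespace/Qwen1.5-120B-Chat-Merge" is a placeholder repo id; replace it
# with the actual local path or Hub id of the merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-namespace/Qwen1.5-120B-Chat-Merge"  # placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize to 4-bit at load time
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # matches the merge's float16 dtype
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                     # shard across available GPUs
)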

-Merge Configuration

The mergekit YAML configuration used for this merge is shown below:

dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 20]
    model: Qwen/Qwen1.5-72B-Chat
- sources:
  - layer_range: [5, 30]
    model: Qwen/Qwen1.5-72B-Chat
- sources:
  - layer_range: [10, 35]
    model: Qwen/Qwen1.5-72B-Chat
- sources:
  - layer_range: [30, 50]
    model: Qwen/Qwen1.5-72B-Chat
- sources:
  - layer_range: [40, 60]
    model: Qwen/Qwen1.5-72B-Chat
- sources:
  - layer_range: [55, 80]
    model: Qwen/Qwen1.5-72B-Chat
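For anyone who wants to reproduce the merge, a configuration like the one above can be run with mergekit, either through the mergekit-yaml CLI or its Python API. The sketch below is illustrative and assumes the config is saved as merge-config.yml, mergekit is installed, and the file names and options are chosen by you rather than taken from this repo.

# Minimal sketch of reproducing the merge with mergekit's Python API
# (the CLI form `mergekit-yaml merge-config.yml ./out` is equivalent).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Qwen1.5-120B-Chat-Merge",        # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU for tensor copies if available
        copy_tokenizer=True,             # carry the Qwen tokenizer over
        lazy_unpickle=True,              # lower peak RAM while reading shards
    ),
)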

-Performance

  • Tips: I don't have the means to run formal benchmarks, nor have I been able to use the model extensively, so my test results might not be accurate. I cannot promise that the performance will be good or bad.

I feel its comprehension and logical reasoning are better than the 124B version's (subjectively), but I'm not sure about other aspects of its performance, such as writing ability: most 120B+ models write reasonably well, which makes it hard to tell which is better. If you are interested in this model's performance, feel free to test it or offer evaluations. Everyone's tests and evaluations are welcome.
