---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- not-for-all-audiences
base_model:
- cognitivecomputations/WestLake-7B-v2-laser
- NeverSleep/Noromaid-7B-0.4-DPO
- teknium/OpenHermes-2.5-Mistral-7B
- mistralai/Mistral-7B-v0.1
- Intel/neural-chat-7b-v3-3
model-index:
- name: WestLake_Noromaid_OpenHermes_neural-chatv0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: EQ-Bench
      type: eq-bench
      config: EQ-Bench
      split: v2
      args:
        num_few_shot: 1
    metrics:
    - type: acc_norm
      value: 65.56
      name: normalized accuracy
    source:
      url: https://github.com/EQ-bench/EQ-Bench
      name: EQ-Bench v2.1
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.72
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.37
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 51.5
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.2
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1
      name: Open LLM Leaderboard
---
# WestLake_Noromaid_OpenHermes_neural-chatv0.1
<img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/ElrkYfCq7kNW9zxZhXWEz.png" alt="drawing" width="800"/>
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). DPO training data was used to slightly uncensor the LLM.

The model's focus is conversational roleplay. In limited testing, I've been very happy with the results: it has been able to pick up stories where other models failed or started looping their responses, and it seems to pace the story well.
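For local testing, the merge loads like any other Mistral-7B-family model through the `transformers` library. A minimal sketch (the `device_map`, sampling settings, and prompt are illustrative assumptions, not requirements of this model):

```python
# Minimal loading sketch; assumes torch and transformers are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",
)

# Illustrative roleplay-style prompt.
prompt = "You are the narrator. Continue the scene: The tavern door creaked open and"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```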
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
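Conceptually, DARE operates on the *delta* between each fine-tuned model and the base: it randomly drops a fraction of each delta's entries and rescales the survivors so the expected delta is preserved, and the TIES-style step then resolves sign conflicts before the weighted deltas are added back onto the base. A toy illustration of the drop-and-rescale step only (illustrative, not mergekit's actual implementation):

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each entry of the delta with probability `density` and rescale
    the survivors by 1/density, preserving the delta's expected value."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Toy usage with the density/weight values used for Noromaid in the config below.
base = torch.randn(16, 16)
fine_tuned = base + 0.01 * torch.randn(16, 16)
sparse_delta = dare_sparsify(fine_tuned - base, density=0.55)
merged = base + 0.35 * sparse_delta
```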
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
* [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      density: 0.55
      weight: 0.15
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      density: 0.55
      weight: 0.35
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      density: 0.55
      weight: 0.30
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      density: 0.55
      weight: 0.20
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
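To reproduce the merge, this config should work with mergekit's standard CLI entrypoint, e.g. `mergekit-yaml config.yaml ./output-model-directory` (available flags, such as `--cuda`, vary by mergekit version, so check its README).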
### Benchmark Testing
<img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/sO_QybG17FYdT47FyAcMs.png" alt="drawing" width="800"/>
| Model                                                   | MT-Bench [(paper)](https://arxiv.org/abs/2306.05685) | EQ-Bench v2.1 [(paper)](https://arxiv.org/abs/2312.06281) |
|---------------------------------------------------------|----------|-------|
| giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1 | 7.171875 | 65.56 |
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giraffe176__WestLake_Noromaid_OpenHermes_neural-chatv0.1).
| Model                                      | Avg.      | ARC (25-Shot) | HellaSwag (10-Shot) | MMLU (5-Shot) | TruthfulQA (0-shot) | Winogrande (5-shot) | GSM8k (5-shot) |
|:-------------------------------------------|----------:|--------------:|--------------------:|--------------:|--------------------:|--------------------:|---------------:|
| This model                                 | 68.86     | 66.72         | 85.37               | 64.67         | 51.50               | 79.72               | 65.20          |
| cognitivecomputations/WestLake-7B-v2-laser | **74.78** | 73.29         | **88.66**           | **64.72**     | **67.04**           | **86.74**           | **68.23**      |
| NeverSleep/Noromaid-7B-0.4-DPO             | 59.08     | 62.29         | 84.32               | 63.20         | 42.28               | 76.95               | 25.47          |
| teknium/OpenHermes-2.5-Mistral-7B          | 61.52     | 64.93         | 84.18               | 63.64         | 52.24               | 78.06               | 26.08          |
| Intel/neural-chat-7b-v3-3                  | 69.83     | **66.89**     | 85.26               | 63.07         | 63.01               | 79.64               | 61.11          |
### DPO Training Data Used
- unalignment/toxic-dpo-v0.2 (Curated version)