---
license: other
language:
- en
---
|
|
|
# Vicuzard-30B-Uncensored
|
|
|
This is an experimental merged model containing a parameter-wise 50/50 blend (weighted average) of [ehartford/Wizard-Vicuna-30B-Uncensored](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored) and [ehartford/WizardLM-30B-Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored).
|
|
|
[GGML models are provided here, for use in KoboldCPP](https://huggingface.co/concedo/Vicuzard-30B-Uncensored/tree/main/ggml). |
|
|
|
This improves on earlier model-mixing techniques by only merging layers whose tensors have identical dimensions.

By selectively skipping the merge on the input and output layers, models with different vocab sizes (i.e. added tokens) can now be merged, so long as the hidden layers have identical sizes.
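A minimal sketch of this kind of shape-aware merge, written against plain numpy arrays standing in for model tensors (the function name and fallback behavior here are illustrative assumptions, not the exact script used for this model):

```python
import numpy as np

def merge_state_dicts(sd_a, sd_b, ratio=0.5):
    """Parameter-wise weighted average of two state dicts.

    Tensors whose shapes differ (e.g. input/output embedding layers when
    one model has added vocab tokens) are skipped and taken from sd_a
    unchanged, so only same-dimension tensors are actually blended.
    """
    merged = {}
    for name, a in sd_a.items():
        b = sd_b.get(name)
        if b is not None and a.shape == b.shape:
            # same dimensions: blend with the given ratio (0.5 = 50/50)
            merged[name] = ratio * a + (1.0 - ratio) * b
        else:
            # shape mismatch or missing key: keep the first model's tensor
            merged[name] = a
    return merged
```

With `ratio=0.5` a hidden-layer tensor of ones averaged with one of zeros comes out as all 0.5, while a mismatched embedding tensor passes through from the first model untouched.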
|
|
|
All feedback and comments can be directed to Concedo on the KoboldAI discord. |
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_concedo__Vicuzard-30B-Uncensored) |
|
|
|
| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 53.76 |
| ARC (25-shot)         | 62.97 |
| HellaSwag (10-shot)   | 83.68 |
| MMLU (5-shot)         | 58.16 |
| TruthfulQA (0-shot)   | 52.27 |
| Winogrande (5-shot)   | 77.11 |
| GSM8K (5-shot)        | 15.39 |
| DROP (3-shot)         | 26.76 |
|
|