---
license: apache-2.0
tags:
- alignment-handbook
- generated_from_trainer
- juanako
- mistral
- UNA
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: juanako-7b-UNA
  results:
  - task:
      type: text-generation
      name: TruthfulQA (MC2)
    dataset:
      type: truthful_qa
      name: truthful_qa
      config: multiple_choice
      split: validation
    metrics:
    - type: accuracy
      value: 65.13
      verified: true
  - task:
      type: text-generation
      name: ARC-Challenge
    dataset:
      type: ai2_arc
      name: ai2_arc
      config: ARC-Challenge
      split: test
    metrics:
    - type: accuracy
      value: 68.17
      verified: true
  - task:
      type: text-generation
      name: HellaSwag
    dataset:
      type: Rowan/hellaswag
      name: Rowan/hellaswag
      split: test
    metrics:
    - type: accuracy
      value: 85.34
      verified: true
  - task:
      type: text-generation
      name: Winogrande
    dataset:
      type: winogrande
      name: winogrande
      config: winogrande_debiased
      split: test
    metrics:
    - type: accuracy
      value: 78.85
      verified: true
  - task:
      type: text-generation
      name: MMLU
    dataset:
      type: cais/mmlu
      name: cais/mmlu
      config: all
      split: test
    metrics:
    - type: accuracy
      value: 62.47
      verified: true
  - task:
      type: text-generation
      name: PiQA
    dataset:
      type: piqa
      name: piqa
      split: test
    metrics:
    - type: accuracy
      value: 83.57
  - task:
      type: text-generation
      name: DROP
    dataset:
      type: drop
      name: drop
      split: validation
    metrics:
    - type: accuracy
      value: 38.74
      verified: true
  - task:
      type: text-generation
      name: PubMedQA
    dataset:
      type: bigbio/pubmed_qa
      name: bigbio/pubmed_qa
      config: pubmed_qa_artificial_bigbio_qa
      split: validation
    metrics:
    - type: accuracy
      value: 76.0
quantized_by: bartowski
---

# Exllama v2 Quantizations of juanako-7b-UNA at 4.0 bits per weight

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.10">turboderp's ExLlamaV2 v0.0.10</a> for quantization.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.
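
For reference, ExLlamaV2 conversions are normally driven by the repo's `convert.py` script. The sketch below is an assumption about how a 4.0 bpw conversion with this calibration file would typically be invoked (placeholder paths, flag set as of roughly v0.0.10), not the exact command used for this repo:

```shell
# Rough sketch of the conversion step; paths are placeholders.
#   -i  unquantized source model directory
#   -o  working/output directory for the conversion
#   -c  calibration dataset (the wikitext parquet mentioned above)
#   -b  target bits per weight
python convert.py \
  -i /path/to/juanako-7b-UNA \
  -o /path/to/juanako-7b-UNA-exl2-4_0 \
  -c wikitext-103-raw-v1-test.parquet \
  -b 4.0
```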
Original model: https://huggingface.co/fblgit/juanako-7b-UNA

## Download instructions

With git:

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/juanako-7b-UNA-exl2
```
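
Cloning pulls the quantized weights through Git LFS, so the LFS hooks need to be installed first; this is a general prerequisite rather than anything specific to this repo:

```shell
# One-time setup: without the LFS hooks, git clone leaves small pointer
# files in place of the actual .safetensors weights.
git lfs install
```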
With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
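
With the CLI installed, a plain download (no `--revision`, so it fetches the repo's default branch) would look roughly like this; the target folder name is just an example:

```shell
# Download the default branch into ./juanako-7b-UNA-exl2.
# --local-dir-use-symlinks False stores real copies of the files rather than
# symlinks into the Hugging Face cache.
huggingface-cli download bartowski/juanako-7b-UNA-exl2 --local-dir juanako-7b-UNA-exl2 --local-dir-use-symlinks False
```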
To download from a different branch, add the `--revision` parameter:

```shell
mkdir juanako-7b-UNA-exl2
huggingface-cli download bartowski/juanako-7b-UNA-exl2 --revision 4_0 --local-dir juanako-7b-UNA-exl2 --local-dir-use-symlinks False
```