---
library_name: transformers
license: other
license_name: eva-llama3.3
base_model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
tags:
- generated_from_trainer
- exl2
model-index:
- name: dev/shm/EVA-LLaMA-3.33-70B-v0.1
  results: []
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
---
# EVA-LLaMA-3.33-70B-v0.0 - EXL2 3.5bpw

This is a 3.5bpw EXL2 quant of [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0).

Details about the model can be found on the model page linked above.

## EXL2 Version

These quants were made with exllamav2 version 0.2.4 and may not load on older versions of the exllamav2 library.

If you have problems loading these models, please update Text Generation WebUI to the latest version.

## Perplexity Scoring

Below are the perplexity scores for the EXL2 models. A lower score is better.

| Quant Level (bpw) | Perplexity Score |
|-------------------|------------------|
| 5.0 | 5.2386 |
| 4.5 | 5.3409 |
| 4.0 | 5.5167 |
| 3.5 | 5.9224 |
| 3.0 | 15.1469 |
| 2.75 | 8.9386 |
| 2.5 | 9.4244 |
| 2.25 | 11.5358 |
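For context on the scores above: perplexity is the exponential of the mean negative log-likelihood per token, so it reflects how "surprised" the model is by the evaluation text on average. A minimal sketch of the calculation (the `perplexity` helper and the sample log-probabilities are illustrative, not taken from the evaluation harness used here):

```python
import math

def perplexity(logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token.

    `logprobs` are natural-log token probabilities assigned by the model.
    """
    return math.exp(-sum(logprobs) / len(logprobs))

# Hypothetical example: four tokens with natural-log probabilities.
print(round(perplexity([-1.2, -0.8, -2.0, -0.5]), 2))  # → 3.08
```

A perfectly confident model (probability 1.0 for every token, log-prob 0) would score a perplexity of exactly 1, which is why lower values indicate less quantization damage.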