cgus committed
Commit
35139ad
1 Parent(s): 1d63aa7

Update README.md

Files changed (1)
  1. README.md +16 -1
README.md CHANGED
@@ -11,12 +11,27 @@ language:
 - zh
 - ja
 pipeline_tag: text-generation
-base_model: anthracite-org/magnum-12b-v2
+base_model: anthracite-org/magnum-v2.5-12b-kto
 tags:
 - chat
 ---
+# magnum-v2.5-12b-kto-exl2
+Original model: [magnum-v2.5-12b-kto](https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto)
+Creator: [anthracite-org](https://huggingface.co/anthracite-org)
 
+## Quants
+[4bpw h6 (main)](https://huggingface.co/cgus/magnum-v2.5-12b-kto-exl2/tree/main)
+[4.5bpw h6](https://huggingface.co/cgus/magnum-v2.5-12b-kto-exl2/tree/4.5bpw-h6)
+[5bpw h6](https://huggingface.co/cgus/magnum-v2.5-12b-kto-exl2/tree/5bpw-h6)
+[6bpw h6](https://huggingface.co/cgus/magnum-v2.5-12b-kto-exl2/tree/6bpw-h6)
+[8bpw h8](https://huggingface.co/cgus/magnum-v2.5-12b-kto-exl2/tree/8bpw-h8)
 
+## Quantization notes
+Made with exllamav2 0.2.2 with the default dataset.
+These quants are for RTX cards on Windows/Linux or AMD on Linux.
+Use with Text-Generation-WebUI, TabbyAPI, etc.
+
+# Original model card
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/sWYs3iHkn36lw6FT_Y7nn.png)
 
 v2.5 KTO is an experimental release; we are testing a hybrid reinforcement learning strategy of KTO + DPOP, using rejected data sampled from the original model as "rejected". For "chosen", we use data from the original finetuning dataset as "chosen".
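Since each quant in the diff above lives on its own repository branch, a specific bits-per-weight variant can be fetched by revision. A minimal sketch using the `huggingface-cli download` command (the branch name is taken from the Quants list; the local directory name is an arbitrary choice):

```shell
# Fetch only the 4.5bpw h6 quant by checking out its branch as a revision.
# Assumes the huggingface_hub package is installed (pip install -U huggingface_hub).
huggingface-cli download cgus/magnum-v2.5-12b-kto-exl2 \
  --revision 4.5bpw-h6 \
  --local-dir magnum-v2.5-12b-kto-exl2-4.5bpw
```

The downloaded folder can then be pointed to directly from an exl2-capable loader such as Text-Generation-WebUI or TabbyAPI.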