Natkituwu committed
Commit
50d1148
1 Parent(s): d7258bb

Update README.md

Files changed (1)
  1. README.md +10 -4
README.md CHANGED
@@ -1,16 +1,22 @@
  ---
- base_model: []
  library_name: transformers
  tags:
  - mergekit
  - merge
  license: cc-by-nc-4.0
  ---
- # Kunokukulemonchini-7b

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- Here is an 4.1bpw exl2 quant [Kunokukulemonchini-7b-4.1bpw-exl2](https://huggingface.co/icefog72/Kunokukulemonchini-7b-4.1bpw-exl2) for people like me with 6gb vram.
  ## Merge Details

  Slightly edited kukulemon-7B config.json before merge to get at least ~32k context window.
 
  ---
+ base_model:
+ - grimjim/kukulemon-7B
+ - Nitral-AI/Kunocchini-7b-128k-test
  library_name: transformers
  tags:
  - mergekit
  - merge
+ - mistral
+ - alpaca
  license: cc-by-nc-4.0
  ---

+ # Kunokukulemonchini-7b-5.0bpw-exl2
+
+ This is a 5.0 bpw exl2 quant of the merge [icefog72/Kunokukulemonchini-7b](https://huggingface.co/icefog72/Kunokukulemonchini-7b).
+
+ A good balance between 4.1 bpw and 6.5 bpw: it should leave room for more context than the 6.5 bpw quant.

  ## Merge Details

  Slightly edited kukulemon-7B config.json before merge to get at least ~32k context window.
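
Loading an exl2 quant such as this one is typically done through the exllamav2 library, either directly or via a frontend that bundles it. The sketch below follows exllamav2's published example scripts; the local directory name, the sampler values, and the alpaca-style prompt (suggested by the `alpaca` tag) are assumptions, not something specified in this commit.

```python
# Minimal sketch, modeled on exllamav2's example scripts -- not part of this repo.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Kunokukulemonchini-7b-5.0bpw-exl2"  # local download path (assumption)
config.prepare()
# config.max_seq_len = 16384  # optionally cap the context if the KV cache does not fit

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the cache lazily, then
model.load_autosplit(cache)                # split weights across available GPU memory

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                 # illustrative sampler values
settings.top_p = 0.9

# Alpaca-style prompt, assumed from the `alpaca` tag on the card.
prompt = "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
print(generator.generate_simple(prompt, settings, 200))
```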
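The bpw comparison in the card is essentially a VRAM budget argument: at 5.0 bpw the weights take noticeably less memory than at 6.5 bpw, leaving more of a small card free for the KV cache and therefore for context. A back-of-the-envelope estimate; the ~7.24B parameter count and the 6 GB budget are assumptions for illustration, and real usage also includes activation and loader overhead.

```python
# Rough weight-size estimate for exl2 quants of a ~7B model (illustration only).
PARAMS = 7.24e9   # approximate parameter count of a Mistral-7B-class merge (assumption)
VRAM_GIB = 6.0    # e.g. the 6 GB card mentioned for the 4.1 bpw quant

def weights_gib(bpw: float) -> float:
    """Quantized weight size in GiB at the given bits per weight."""
    return PARAMS * bpw / 8 / 1024**3

for bpw in (4.1, 5.0, 6.5):
    w = weights_gib(bpw)
    print(f"{bpw:>4} bpw: ~{w:.2f} GiB weights, ~{VRAM_GIB - w:.2f} GiB left for KV cache and overhead")
```

At 6.5 bpw the weights alone approach the whole 6 GiB budget, which is why the 5.0 bpw file trades a little quality for usable context.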
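The Merge Details note says only that kukulemon-7B's config.json was edited before merging so the result advertises a ~32k context window; it does not say which keys were touched. A minimal sketch of such an edit, assuming the usual Mistral-style fields are the ones involved:

```python
import json

# Hypothetical illustration: the commit does not state which keys were edited.
cfg_path = "kukulemon-7B/config.json"   # local checkout of the base model (assumption)

with open(cfg_path) as f:
    cfg = json.load(f)

cfg["max_position_embeddings"] = 32768   # advertise a ~32k context window
cfg.setdefault("sliding_window", 32768)  # Mistral-style attention window (assumption)

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```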