xriminact committed on
Commit 4e9480a
1 Parent(s): a4a5fb5

Update README.md

Files changed (1)
  1. README.md +5 -33
README.md CHANGED
@@ -7,36 +7,8 @@ library_name: adapter-transformers
 
 xriminact/TarsChattyBasev0.1 GGUF models converted using llama.cpp
 
- | Tables | Are | Cool |
- | ------------- |:-------------:| -----:|
- | col 3 is | right-aligned | $1600 |
- | col 2 is | centered | $12 |
- | zebra stripes | are neat | $1 |
-
-
- widget:
- table:
- Name:
- - TarsChattyBasev0.1-Q4_K_M.gguf
- - TarsChattyBasev0.1-Q5_K_M.gguf
- - TarsChattyBasev0.1-Q8_0.gguf
- Quant Method:
- - Q4_K_M
- - Q5_K_M
- - Q8_0
- Bits:
- - 4
- - 5
- - 8
- Size:
- - 4.37 GB
- - 5.13 GB
- - 7.70 GB
- Max RAM required:
- - ~6.5 GB
- - ~7.5 GB
- - ~10.5 GB
- Use case
- - balanced quality
- - large, very low quality loss
- - very large, extremely low quality loss
+ | Name | Quant Method | Bits | Max RAM Required | Usecase |
+ | ----------- |:----------------------:|:-----:|:-----------------:|:--------:|
+ | TarsChattyBasev0.1-Q4_K_M.gguf | Q4_K_M | 4 | ~6.5 GB | balanced quality
+ | TarsChattyBasev0.1-Q5_K_M.gguf | Q5_K_M | 5 | ~7.5 GB | large, very low quality loss
+ | TarsChattyBasev0.1-Q8_0.gguf | Q8_0 | 8 | ~10.5 GB | very large, extremely low quality loss
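For reference, here is a minimal sketch (not part of the commit itself) of loading one of the quantized files listed in the new table for local inference. It assumes the llama-cpp-python bindings are installed and that the Q4_K_M file has already been downloaded; the file path, context size, and thread count are placeholder values.

```python
# Minimal sketch: run one of the GGUF quantizations listed above with
# llama-cpp-python. The model path is an assumption (file downloaded locally);
# any of the three files works the same way, with RAM needs following the
# "Max RAM Required" column of the table.
from llama_cpp import Llama

llm = Llama(
    model_path="TarsChattyBasev0.1-Q4_K_M.gguf",  # assumed local path
    n_ctx=2048,    # context window, adjust as needed
    n_threads=8,   # CPU threads, adjust to your machine
)

out = llm("Q: What is GGUF? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```

The Q5_K_M and Q8_0 files drop in the same way; per the table, the larger files need more RAM in exchange for lower quality loss.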