---
license: cc-by-nc-4.0
---

# Command-R 35B v1.0 - GGUF
- Model creator: [CohereForAI](https://huggingface.co/CohereForAI)
- Original model: [Command-R 35B v1.0](https://huggingface.co/CohereForAI/c4ai-command-r-v01)

<!-- description start -->
## Description

This repo contains llama.cpp GGUF format model files for
[Command-R 35B v1.0](https://huggingface.co/CohereForAI/c4ai-command-r-v01).

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUF files are compatible with llama.cpp from March 16, 2024 onwards,
starting with release [b2440](https://github.com/ggerganov/llama.cpp/releases/tag/b2440).

## F16 files are split and require joining

**Note:** Hugging Face does not support uploading files larger than 50GB, so
the F16 GGUF is uploaded as two split files.

To join the files, run the following:

Linux and macOS:
```
cat c4ai-command-r-v01-f16.gguf-split-* > c4ai-command-r-v01-f16.gguf
```
Then you can remove the split files to save space:
```
rm c4ai-command-r-v01-f16.gguf-split-*
```
Windows command line:
```
COPY /B c4ai-command-r-v01-f16.gguf-split-a + c4ai-command-r-v01-f16.gguf-split-b c4ai-command-r-v01-f16.gguf
```

Then you can remove the split files to save space:
```
del c4ai-command-r-v01-f16.gguf-split-a c4ai-command-r-v01-f16.gguf-split-b
```
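The join step is plain byte-level concatenation, so it can be sanity-checked on small placeholder files first (the `demo.gguf-split-*` names below are stand-ins, not the real shards):

```shell
# Create two tiny placeholder shards (stand-ins for the real multi-GB split files)
printf 'part-a' > demo.gguf-split-a
printf 'part-b' > demo.gguf-split-b

# Concatenate in lexical order, exactly as with the real model files
cat demo.gguf-split-* > demo.gguf

# The merged file is simply the two parts back to back
cat demo.gguf    # prints: part-apart-b
```

Shell glob expansion sorts `-split-a` before `-split-b`, which is why the single `cat` with a wildcard produces the parts in the correct order.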

You can optionally verify the checksum of the merged c4ai-command-r-v01-f16.gguf
against the provided md5sum file:
```
md5sum -c md5sum
```