Ollama cannot create

#5
by jiyintor - opened

I have tried merging the two files with the 'cat' command and loading the result, and I have also tried loading the two files individually in the Modelfile, but neither approach worked in Ollama, even after updating to the latest version.

Owner

These weights are split with gguf-split, so you must merge them like this:

./gguf-split --merge /path/to/command-r-plus-f16-00001-of-00005.gguf /path/to/command-r-plus-f16-combined.gguf

cat won't work here
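To see why plain concatenation fails, here is a small shell sketch using synthetic files (not real model files): each shard written by gguf-split begins with its own "GGUF" magic and split metadata, so `cat` leaves a stray header embedded mid-stream, which loaders reject. The `--merge` mode rewrites the headers into one valid file instead.

```shell
# Synthetic demo: fake "shards" that each start with a GGUF magic,
# the way real gguf-split shards do (the payload bytes are made up).
printf 'GGUFxxxx tensor-data-part-1' > shard1.bin
printf 'GGUFxxxx tensor-data-part-2' > shard2.bin

# What `cat shard1 shard2 > combined` produces:
cat shard1.bin shard2.bin > combined.bin

# Count GGUF headers in the concatenated file.
# A valid single-file GGUF has exactly one; cat leaves two.
HEADERS="$(grep -o 'GGUF' combined.bin | wc -l)"
echo "GGUF headers in combined file: $HEADERS"  # 2 -> not a valid GGUF
```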

Thanks!
Sorry, I didn't read the README carefully.
Running the model with llama.cpp went fine, but it seems Ollama prefers a single file over split ones.
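(For reference, once the shards are merged into a single file, the Modelfile just points at it; the path and model name below are only examples:)

```
FROM /path/to/command-r-plus-f16-combined.gguf
```

and then: ollama create command-r-plus -f Modelfile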

Where can we get the ./gguf-split tool?

gguf-split is part of llama.cpp.
You have to build llama.cpp from GitHub (https://github.com/ggerganov/llama.cpp).
Clone it like this:
git clone https://github.com/ggerganov/llama.cpp.git

To avoid building everything in that repository, I used cmake to configure the project (I think it's described on the GitHub page):

mkdir build
cd build
cmake ..

But instead of using cmake --build . --config Release, I used the following:
make -j 12 gguf-split

(Replace 12 with the number of cores/processors you want to use for building.)
This should produce a gguf-split executable in the build/bin/ directory (relative to the repository you cloned).
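Rather than hard-coding the job count, you can take it from the machine; a small sketch, assuming a Linux/Unix system where `nproc` (coreutils) is available (the echo only prints the command instead of running the build):

```shell
# Use the machine's core count instead of a hard-coded 12.
JOBS="$(nproc)"
echo "make -j${JOBS} gguf-split"
```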

This seems to work on Linux/Unix-like systems (I haven't tried it on Apple hardware). You might have to install cmake, and, if you don't already have them, gcc and its build toolchain... but in that case you probably don't want to go through all the hassle anyway.
