---
license: mit
---

# Original model: [NousResearch - Obsidian-3B-V0.5](https://huggingface.co/NousResearch/Obsidian-3B-V0.5) 
## GGUF Q6 quantised version by Nisten
To run the server, from inside the `llama.cpp/` folder, run this IN YOUR TERMINAL:

## ./server -m obsidian-q6.gguf --mmproj mmproj-obsidian-f16.gguf -ngl 42

That's it, it's literally one command. Now open your browser at http://127.0.0.1:8080
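If you prefer the command line to the browser UI, the same server also answers HTTP requests. A hedged sketch of an image query (endpoint and JSON field names follow llama.cpp's server API of that era; `photo.jpg` and the prompt template are placeholders, so adjust to taste):

```shell
# Base64-encode a local image and POST it to the running llama.cpp server.
# "image_data" entries are referenced from the prompt via [img-<id>].
IMG_B64=$(base64 < photo.jpg | tr -d '\n')
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d "{
    \"prompt\": \"USER: [img-1] Describe this image.\\nASSISTANT:\",
    \"image_data\": [{\"data\": \"$IMG_B64\", \"id\": 1}],
    \"n_predict\": 128
  }"
```

The response comes back as JSON with the generated text in the `content` field.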

## FIRST TIME SETUP (Mac or Linux): make a new folder, then COPY-PASTE THIS to DOWNLOAD & RUN EVERYTHING in ONE SHOT


```bash
git clone -b stablelm-support https://github.com/Galunid/llama.cpp.git && \
cd llama.cpp && \
make && \
wget https://huggingface.co/nisten/obsidian-3b-multimodal-q6-gguf/resolve/main/obsidian-q6.gguf && \
wget https://huggingface.co/nisten/obsidian-3b-multimodal-q6-gguf/resolve/main/mmproj-obsidian-f16.gguf && \
./server -m obsidian-q6.gguf --mmproj mmproj-obsidian-f16.gguf -ngl 42
```