MoMonir committed
Commit dea58d0
1 parent: 744d2bb

Update README.md

Files changed (1): README.md (+26 -3)
README.md CHANGED
 
tags:
- gguf-my-repo
---

# MoMonir/AutoCoder_S_6.7B-GGUF
This model was converted to GGUF format from [`Bin12345/AutoCoder_S_6.7B`](https://huggingface.co/Bin12345/AutoCoder_S_6.7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Bin12345/AutoCoder_S_6.7B) for more details on the model.
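
If you would rather download the quantized file up front than have llama.cpp fetch it for you, here is a minimal sketch using the Hugging Face CLI. The tool choice is an assumption, not something this card prescribes; the filename matches the llama.cpp commands further down:

```bash
# Minimal sketch: fetch the Q4_K_M GGUF from this repo with the Hugging Face CLI.
# Assumes `pip install huggingface_hub`; the filename is the one used by the
# llama.cpp commands below.
huggingface-cli download MoMonir/AutoCoder_S_6.7B-GGUF \
  autocoder_s_6.7b-q4_k_m.gguf --local-dir .
```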

<!-- README_GGUF.md-about-gguf start -->
### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description)

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [backyard.ai](https://backyard.ai/), formerly [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server; see the serving sketch after this list.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
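
As a concrete example from the list above, here is a minimal sketch of serving this repo's GGUF file through llama-cpp-python's OpenAI-compatible server. The module path, flags, and install extra are assumptions about llama-cpp-python rather than anything this model card specifies:

```bash
# Minimal sketch: serve the GGUF via llama-cpp-python's OpenAI-compatible server.
# Assumes `pip install 'llama-cpp-python[server]'` and that the .gguf file was
# downloaded beforehand (e.g. with the huggingface-cli command shown earlier).
python3 -m llama_cpp.server \
  --model ./autocoder_s_6.7b-q4_k_m.gguf \
  --n_ctx 2048
```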

## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
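
As a quick sanity check that the binaries landed on your PATH (a convenience step, not part of the original card):

```bash
# Confirm the llama.cpp binaries are installed and discoverable.
which llama-cli llama-server
```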

Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo MoMonir/AutoCoder_S_6.7B-GGUF --model autocoder_s_6.7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo MoMonir/AutoCoder_S_6.7B-GGUF --model autocoder_s_6.7b-q4_k_m.gguf -c 2048
```
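
Once the server is running you can exercise it over HTTP. A minimal sketch, assuming a recent llama.cpp build that exposes the OpenAI-compatible chat endpoint at its default address (http://localhost:8080); the prompt is only an illustration:

```bash
# Minimal sketch: query a running llama-server via its OpenAI-compatible
# chat endpoint (default http://localhost:8080 in recent llama.cpp builds).
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "max_tokens": 128
      }'
```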

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.