NikolayKozloff committed on
Commit
74f7321
1 Parent(s): 83e8e58

Upload README.md with huggingface_hub

Files changed (1): README.md (+53, -0)
README.md ADDED
@@ -0,0 +1,53 @@
---
language:
- it
license: cc-by-nc-sa-4.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
- sft
- llama-cpp
- gguf-my-repo
base_model: sapienzanlp/Minerva-3B-base-v1.0
datasets:
- mchl-labs/stambecco_data_it
pipeline_tag: text-generation
widget:
- text: "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad\
    \ un input che fornisce ulteriore informazione. Scrivi una risposta che soddisfi\
    \ adeguatamente la richiesta. \n### Istruzione:\nSuggerisci un'attività serale\
    \ romantica\n\n### Input:\n\n### Risposta:"
  example_title: Example 1
---

# NikolayKozloff/Minerva-3B-Instruct-v1.0-Q5_K_M-GGUF
This model was converted to GGUF format from [`FairMind/Minerva-3B-Instruct-v1.0`](https://huggingface.co/FairMind/Minerva-3B-Instruct-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/FairMind/Minerva-3B-Instruct-v1.0) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
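
If the install succeeded, the `llama-cli` and `llama-server` binaries used below should be on your PATH. An optional sanity check (binary names are assumed from the commands that follow):

```bash
# Both commands should print the binary's usage/help text if the install worked
llama-cli --help
llama-server --help
```
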
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/Minerva-3B-Instruct-v1.0-Q5_K_M-GGUF --model minerva-3b-instruct-v1.0.Q5_K_M.gguf -p "The meaning of life and the universe is"
```
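
Since this is an Italian instruction-tuned model, you will likely get better results by reproducing the prompt template shown in the `widget` entry of the frontmatter above (it asks the model to suggest a romantic evening activity). A sketch, with the template text copied from the frontmatter and an arbitrary `-n 256` token limit:

```bash
llama-cli --hf-repo NikolayKozloff/Minerva-3B-Instruct-v1.0-Q5_K_M-GGUF --model minerva-3b-instruct-v1.0.Q5_K_M.gguf -n 256 \
  -p "Di seguito è riportata un'istruzione che descrive un'attività, abbinata ad un input che fornisce ulteriore informazione. Scrivi una risposta che soddisfi adeguatamente la richiesta.
### Istruzione:
Suggerisci un'attività serale romantica

### Input:

### Risposta:"
```
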

Server:

```bash
llama-server --hf-repo NikolayKozloff/Minerva-3B-Instruct-v1.0-Q5_K_M-GGUF --model minerva-3b-instruct-v1.0.Q5_K_M.gguf -c 2048
```
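
Once the server is running, you can query it over HTTP. A minimal sketch, assuming the default `127.0.0.1:8080` bind address and the `/completion` endpoint documented in the llama.cpp server README (prompt text and `n_predict` are arbitrary examples):

```bash
# Ask the running server to complete a short Italian prompt
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "La capitale della Francia è", "n_predict": 64}'
```
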

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m minerva-3b-instruct-v1.0.Q5_K_M.gguf -n 128
```
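
Note that, unlike the `--hf-repo` commands above, the `./main` invocation expects the GGUF file to already be in the working directory. One way to fetch it, assuming the `huggingface_hub` CLI is installed (`pip install huggingface_hub`):

```bash
# Download only the Q5_K_M GGUF file from this repo into the current directory
huggingface-cli download NikolayKozloff/Minerva-3B-Instruct-v1.0-Q5_K_M-GGUF minerva-3b-instruct-v1.0.Q5_K_M.gguf --local-dir .
```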