DeepMount00 committed
Commit ad86c95
Parent: 4c9f1ca

Update README.md

Files changed (1):
  1. README.md +2 -20
README.md CHANGED
@@ -4,16 +4,6 @@ language:
   - it
 ---
 # Mistral-Ita-7B GGUF
-<!-- README_GGUF.md-provided-files start -->
-## Provided files
-
-| Name | Quant method | Bits | Size | Use case |
-|------|--------------|------|---------|--------------------------------------------------|
-| [mistal-Ita-7b-q3_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q3_k_m.gguf) | Q3_K_M | 3 | 3.52 GB | very small, high quality loss |
-| [mistal-Ita-7b-q4_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q4_k_m.gguf) | Q4_K_M | 4 | 4.37 GB | medium, balanced quality - recommended |
-| [mistal-Ita-7b-q5_k_m.gguf](https://huggingface.co/DeepMount00/Mistral-Ita-7b-GGUF/blob/main/mistal-Ita-7b-q5_k_m.gguf) | Q5_K_M | 5 | 5.13 GB | large, very low quality loss - recommended |
-
-<!-- README_GGUF.md-provided-files end -->
 
 ## How to Use
 How to utilize my Mistral for Italian text generation
@@ -22,17 +12,9 @@ How to utilize my Mistral for Italian text generation
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("DeepMount00/Mistral-Ita-7b-GGUF", model_file="mistal-Ita-7b-q3_k_m.gguf", model_type="mistral", gpu_layers=0)
-
-domanda = """Scrivi una funzione python che calcola la media tra questi valori"""
-contesto = """
-[-5, 10, 15, 20, 25, 30, 35]
-"""
+llm = AutoModelForCausalLM.from_pretrained("DeepMount00/Mistral-Ita-7b-GGUF", model_file="mistral_ita-7b-Q4_K_M.gguf", model_type="mistral", context_length=4096, max_new_tokens=1000, gpu_layers=20)
 
-system_prompt = ''
-prompt = domanda + "\n" + contesto
-B_INST, E_INST = "[INST]", "[/INST]"
-prompt = f"{system_prompt}{B_INST}{prompt}\n{E_INST}"
+prompt = "Trova la soluzione a questo problema: se Mario ha 12 mele e ne vende 4 a 8 euro e le restanti a 3 euro, quanto guadagna Mario?"
 
 print(llm(prompt))
 ```
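The second hunk replaces the README's multi-step `[INST]` prompt assembly with a single literal prompt string. For reference, the removed assembly logic can be reproduced standalone as plain string formatting, with no model download needed; the `build_inst_prompt` helper name below is my own wrapper, not something from the repository:

```python
def build_inst_prompt(domanda: str, contesto: str, system_prompt: str = "") -> str:
    """Wrap a question and its context in Mistral's [INST] ... [/INST] markers,
    exactly as the removed README lines did."""
    B_INST, E_INST = "[INST]", "[/INST]"
    # Join the question and the context on separate lines, then wrap.
    prompt = domanda + "\n" + contesto
    return f"{system_prompt}{B_INST}{prompt}\n{E_INST}"


# The example values from the old README (kept in Italian, as in the source).
domanda = "Scrivi una funzione python che calcola la media tra questi valori"
contesto = "[-5, 10, 15, 20, 25, 30, 35]"
print(build_inst_prompt(domanda, contesto))
```

The resulting string can be passed to `llm(...)` as in the README; the commit sidesteps this assembly entirely by hard-coding a finished prompt.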