Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 - trl
 ---
 
-Experimenting with pre-training Arabic language + finetuning on instructions using the quantized model `mistralai/Mistral-7B-v0.3` from `unsloth`. First time trying pre-training, expect issues and low quality outputs. The repo contains the merged, quantized model and GGUF format.
+Experimenting with pre-training Arabic language + finetuning on instructions using the quantized model `mistralai/Mistral-7B-v0.3` from `unsloth`. First time trying pre-training, expect issues and low quality outputs. The repo contains the merged, quantized model and a GGUF format.
 
 ### Example usage
 
@@ -35,7 +35,7 @@ inference_prompt = """فيما يلي تعليمات تصف مهمة. اكتب
 
 llm = Llama.from_pretrained(
     repo_id="nazimali/mistral-7b-v0.3-instruct-arabic",
-    filename="
+    filename="Q8_0.gguf",
 )
 
 llm.create_chat_completion(
@@ -53,7 +53,7 @@ llm.create_chat_completion(
 ```shell
 ./llama-cli \
     --hf-repo "nazimali/mistral-7b-v0.3-instruct-arabic" \
-    --hf-file
+    --hf-file Q8_0.gguf \
     -p "السلام عليكم، هلا كيفك؟" \
     --conversation
 ```
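For context, here is a minimal end-to-end sketch of the llama-cpp-python call that this commit completes. The `repo_id` and `filename` come straight from the diff; the chat messages, `max_tokens`, and the printed field are illustrative assumptions, not part of the model card.

```python
# Sketch only: download the GGUF referenced by this commit and run one chat turn.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nazimali/mistral-7b-v0.3-instruct-arabic",
    filename="Q8_0.gguf",  # the file name this commit fills in
)

# Illustrative request; the card's own prompt template may differ.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "السلام عليكم"},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```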
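The description also mentions a merged, quantized checkpoint alongside the GGUF. Assuming it loads as an ordinary Transformers checkpoint (that route is not shown in this diff, so treat this as an unverified sketch):

```python
# Unverified sketch: load the merged weights with Transformers.
# If the merge was saved 4-bit by unsloth, bitsandbytes needs to be installed as well.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nazimali/mistral-7b-v0.3-instruct-arabic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```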
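A note on the llama.cpp command: `Q8_0.gguf` presumably names the 8-bit (Q8_0) GGUF quantization of the merged model, `--hf-repo` and `--hf-file` tell `llama-cli` to download that file from the Hub, and `--conversation` starts an interactive chat; the `-p` string is simply an Arabic greeting ("peace be upon you, how are you?").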