inference: false
---

# Vicuna 13b v1.3 German

vicuna-13b-v1.3-ger is a variant of [LMSYS](https://huggingface.co/lmsys)'s [Vicuna 13b v1.3](https://huggingface.co/lmsys/vicuna-13b-v1.3) model, fine-tuned on an additional German-language dataset. The original model was trained on explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca & Dolly-V2 datasets and applying the dataset-construction approaches of the Orca research paper.

This model is optimized for German text, providing proficiency in understanding, generating, and interacting with German-language content. However, the model is not yet fully optimized for German, as it has been trained on a small, experimental dataset and has limited capabilities due to the small parameter count.

Some of the fine-tuning data is also targeted towards factual retrieval (answering questions only from information in the provided context and refusing to hallucinate), and the model should perform better on these tasks than the original Vicuna.
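A context-grounded prompt for this use case can be sketched with the Vicuna v1.1 chat template (the template wording follows FastChat conventions and is an assumption on my part; the German instruction text below is illustrative, not taken from the training data):

```python
# Vicuna v1.1-style system preamble (FastChat convention; assumption, not from this card)
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(question, context=None):
    """Format a single-turn Vicuna-style prompt.

    When a context passage is given, the question is wrapped in an
    (illustrative) instruction to answer only from that passage.
    """
    if context is not None:
        user_msg = (
            "Beantworte die Frage nur anhand des folgenden Kontexts.\n\n"
            f"Kontext: {context}\n\nFrage: {question}"
        )
    else:
        user_msg = question
    return f"{SYSTEM} USER: {user_msg} ASSISTANT:"

prompt = build_prompt(
    "Wann wurde das Werk eröffnet?",
    context="Das Werk wurde 1912 eröffnet.",
)
```

The resulting string can be passed to whatever backend serves the model; with a context supplied, the model is expected to answer from that context only.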

I am working on improving the model's capabilities and will update the model if there is sufficient interest.

A quantized GGML version for use with llama.cpp, kobold.cpp and other GUIs for CPU inference can be found [here](https://huggingface.co/jphme/vicuna-13b-v1.3-ger-GGML).

## Results

I only evaluated the output on a small, handcrafted sample of German test prompts, confirming that the model's ability to understand and generate German text is above the base model's in many situations.

## Problems

There might be inconsistencies in multi-turn chat applications, as there was a small problem with the `<eos>` tokens during preparation of the fine-tuning dataset.
Please report any problems so I can fix this for the next version.
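As a client-side workaround sketch, the multi-turn prompt can be rebuilt explicitly, placing the end-of-sequence marker after each completed assistant turn (the `</s>` separator follows FastChat's Vicuna v1.1 template and is an assumption, not something this card specifies):

```python
# Vicuna v1.1-style system preamble (FastChat convention; assumption, not from this card)
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_chat_prompt(turns):
    """Build a multi-turn Vicuna v1.1-style prompt.

    `turns` is a list of (user_msg, assistant_msg) pairs; the last pair may
    use assistant_msg=None to request a new completion.
    """
    parts = [SYSTEM]
    for user_msg, assistant_msg in turns:
        parts.append(f" USER: {user_msg} ASSISTANT:")
        if assistant_msg is not None:
            # "</s>" closes each finished assistant turn (Vicuna v1.1 sep2)
            parts.append(f" {assistant_msg}</s>")
    return "".join(parts)
```

Controlling the separator placement on the client side sidesteps any `<eos>` inconsistencies baked in during fine-tuning.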

---------------------------

# Original Vicuna Model Card

## Model Details