---
license: apache-2.0
datasets:
  - yahma/alpaca-cleaned
language:
  - de
tags:
  - llama
  - alpaca
  - ggml
  - german
  - deutsch
  - zicklein

---


# Zicklein: A German-finetuned, instruction-following LLaMA

This is a ggml conversion of Zicklein 7B.

Zicklein itself is a LLaMA model fine-tuned on a cleaned and German-translated Alpaca dataset.

Currently I have only converted it with the new k-quant method Q5_K_M. I will gladly make more versions on request.

Other possible quantization types include: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K

An f16 version can be found here: nikuya3/alpaca-lora-7b-german-base-51k-ggml
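
If you need a different quantization than the one provided, it can be produced from that f16 file with llama.cpp's quantize tool. Here is a minimal sketch, driven from Python for illustration; all file names and paths below are assumptions, not the actual release artifacts:

```python
# Hedged sketch: invoking llama.cpp's `quantize` binary to turn the f16
# GGML file into another k-quant. File names here are hypothetical.
import subprocess

subprocess.run(
    [
        "./quantize",              # llama.cpp quantize binary (built locally)
        "zicklein-7b-f16.bin",     # hypothetical f16 input file
        "zicklein-7b-q4_K_M.bin",  # hypothetical quantized output file
        "q4_K_M",                  # target quantization type
    ],
    check=True,
)
```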

Compatible with llama.cpp, but also with:

- text-generation-webui
- KoboldCpp
- ParisNeo/GPT4All-UI
- llama-cpp-python (see the loading sketch after this list)
- ctransformers
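
As a minimal illustration, here is how the converted file could be loaded with llama-cpp-python. The model file name is an assumption; point it at the actual .bin file you downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# "zicklein-7b-q5_K_M.bin" is an assumed file name, not the real artifact.
from llama_cpp import Llama

llm = Llama(model_path="zicklein-7b-q5_K_M.bin")

result = llm(
    "### Instruction:\nNenne drei Farben.\n\n### Response:\n",
    max_tokens=128,
    stop=["###"],
)
print(result["choices"][0]["text"])
```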

## Prompt format

Since this model is trained on the Alpaca dataset, the prompt should be formatted like this:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
```

Or, without additional input:


```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
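
To avoid typos when assembling these templates in code, a small helper can build both variants. A sketch; `build_prompt` is a hypothetical name, not part of any library:

```python
# Sketch of a helper that assembles the Alpaca-style prompts shown above.
from typing import Optional

def build_prompt(instruction: str, input_text: Optional[str] = None) -> str:
    # Variant with additional input context.
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```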

That's it!

If you have any further questions, feel free to contact me or start a discussion.