---
pipeline_tag: text-generation
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->



## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Developed by:** C.B. Pronin, A.V. Volosova, A.V. Ostroukh, Yu.N. Strogov, V.V. Kurbatov, A.S. Umarova.
- **Model type:** GGUF conversion and quantizations of the model "MexIvanov/zephyr-python-ru-merged", provided for ease of inference.
- **Language(s) (NLP):** Russian, English, Python
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta

### Model Sources

<!-- Provide the basic links for the model. -->

- **Paper:** https://arxiv.org/abs/2409.09353

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
An experimental finetune of Zephyr-7b-beta, aimed at improving coding performance and support for coding-related instructions written in the Russian language.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

Instruction-based coding in Python, driven by instructions written in natural language (English or Russian).

Prompt template - Zephyr:
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```
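
The template above can be filled in programmatically before being handed to a GGUF runtime such as llama.cpp or llama-cpp-python. A minimal sketch (the `format_zephyr_prompt` helper is illustrative, not part of the model's tooling):

```python
def format_zephyr_prompt(user_prompt: str, system: str = "") -> str:
    """Wrap a user instruction in the Zephyr chat template shown above."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user_prompt}</s>\n"
        f"<|assistant|>\n"
    )

# A coding instruction in Russian ("Write a Python function that sorts a list.")
prompt = format_zephyr_prompt("Напиши функцию на Python, которая сортирует список.")
print(prompt)

# With llama-cpp-python installed and a quant downloaded, inference might look like:
# from llama_cpp import Llama
# llm = Llama(model_path="zephyr-python-ru-q4_K_M.gguf", n_ctx=2048)
# out = llm(prompt, max_tokens=256, stop=["</s>"])
# print(out["choices"][0]["text"])
```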

<!-- README_GGUF.md-provided-files start -->
## Provided files (quantization info taken from TheBloke/zephyr-7B-beta-GGUF)

| Name | Quant method | Bits | Use case |
| ---- | ---- | ---- | ----- |
| [zephyr-python-ru-q4_K_M.gguf](https://huggingface.co/MexIvanov/zephyr-python-ru-gguf/blob/main/zephyr-python-ru-q4_K_M.gguf) | Q4_K_M | 4 | medium, balanced quality - recommended |
| [zephyr-python-ru-q6_K.gguf](https://huggingface.co/MexIvanov/zephyr-python-ru-gguf/blob/main/zephyr-python-ru-q6_K.gguf) | Q6_K | 6 | very large, extremely low quality loss |
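
As a sketch of how one of the quants above can be fetched and run locally (assuming the `huggingface-cli` tool and a local llama.cpp build; the binary name and flags differ between llama.cpp versions, e.g. older builds use `./main` instead of `llama-cli`):

```shell
# Download the recommended Q4_K_M quant into the current directory
huggingface-cli download MexIvanov/zephyr-python-ru-gguf \
    zephyr-python-ru-q4_K_M.gguf --local-dir .

# Run it with llama.cpp, passing a Zephyr-formatted prompt
./llama-cli -m zephyr-python-ru-q4_K_M.gguf -n 256 \
    -p $'<|system|>\n</s>\n<|user|>\nWrite a Python function that reverses a string.</s>\n<|assistant|>\n'
```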

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model is intended primarily (but not exclusively) for research use. It was trained on a code-based instruction set and has no moderation mechanisms. Use at your own risk; we are not responsible for any usage or output of this model.

Quote from Zephyr (base-model) repository: "Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (mistralai/Mistral-7B-v0.1), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this."

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.