---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
license: gemma
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/saiga_gemma2_9b-GGUF
This is a quantized version of [IlyaGusev/saiga_gemma2_9b](https://huggingface.co/IlyaGusev/saiga_gemma2_9b) created using llama.cpp.
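A minimal sketch of running one of the GGUF files from this repo with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the filename pattern below is an assumption, so substitute the quantization you actually downloaded:
```python
# Sketch only: running a GGUF file from this repo with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/saiga_gemma2_9b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick any .gguf file from this repo
    n_ctx=8192,
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Почему трава зеленая?"}]  # "Why is the grass green?"
)
print(response["choices"][0]["message"]["content"])
```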
# Original Model Card
# Saiga/Gemma2 9B, Russian Gemma-2-based chatbot
Based on [Gemma-2 9B Instruct](https://huggingface.co/google/gemma-2-9b-it).
## Prompt format
Gemma-2 prompt format:
```
<start_of_turn>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<end_of_turn>
<start_of_turn>user
Как дела?<end_of_turn>
<start_of_turn>model
Отлично, а у тебя?<end_of_turn>
<start_of_turn>user
Шикарно. Как пройти в библиотеку?<end_of_turn>
<start_of_turn>model
```
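For reference, here is a small sketch of assembling this format by hand; in practice `tokenizer.apply_chat_template`, as in the code example below, does this for you:
```python
# Builds the Gemma-2-style prompt shown above from a list of messages.
def build_prompt(messages: list[dict]) -> str:
    prompt = ""
    for m in messages:
        prompt += f"<start_of_turn>{m['role']}\n{m['content']}<end_of_turn>\n"
    # A trailing open model turn tells the model to start generating.
    prompt += "<start_of_turn>model\n"
    return prompt

print(build_prompt([
    # "You are Saiga, a Russian-language automatic assistant. You talk to people and help them."
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Как дела?"},  # "How are you?"
]))
```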
## Code example
```python
# For demonstration purposes only.
# Do NOT serve the model like this in production.
# See https://github.com/vllm-project/vllm or https://github.com/huggingface/text-generation-inference instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

MODEL_NAME = "IlyaGusev/saiga_gemma2_9b"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    load_in_8bit=True,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
generation_config = GenerationConfig.from_pretrained(MODEL_NAME)
print(generation_config)

# "Why is the grass green?" and "Write a long story that mentions the following objects. Given: Tanya, a ball"
inputs = ["Почему трава зеленая?", "Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч"]
for query in inputs:
    prompt = tokenizer.apply_chat_template([{
        "role": "user",
        "content": query
    }], tokenize=False, add_generation_prompt=True)
    data = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
    data = {k: v.to(model.device) for k, v in data.items()}
    output_ids = model.generate(**data, generation_config=generation_config)[0]
    output_ids = output_ids[len(data["input_ids"][0]):]
    output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
    print(query)
    print(output)
    print()
    print("==============================")
    print()
```
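For serving, the comment at the top of the example points at vLLM or text-generation-inference. A minimal sketch using vLLM's offline Python API (sampling parameters here are illustrative assumptions, not tuned values):
```python
# Sketch only: offline batch generation with vLLM, as suggested above.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

MODEL_NAME = "IlyaGusev/saiga_gemma2_9b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
llm = LLM(model=MODEL_NAME)

queries = ["Почему трава зеленая?"]  # "Why is the grass green?"
prompts = [
    tokenizer.apply_chat_template(
        [{"role": "user", "content": q}], tokenize=False, add_generation_prompt=True
    )
    for q in queries
]
# temperature/max_tokens are assumed values; tune for your use case.
sampling_params = SamplingParams(temperature=0.5, max_tokens=1024)
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text.strip())
```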
## Versions
v2:
- [258869abdf95aca1658b069bcff69ea6d2299e7f](https://huggingface.co/IlyaGusev/saiga_gemma2_9b/commit/258869abdf95aca1658b069bcff69ea6d2299e7f)
- Other name: saiga_gemma2_9b_abliterated_sft_m3_d9_abliterated_kto_m1_d13
- SFT dataset config: [sft_d9.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/sft_d9.json)
- SFT model config: [saiga_gemma2_9b_sft_m3.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_sft_m3.json)
- KTO dataset config: [pref_d13.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/pref_d13.json)
- KTO model config: [saiga_gemma2_9b_kto_m1.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_kto_m1.json)
- SFT wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/pjsuik1l)
- KTO wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dsxwvyyx)
v1:
- [fa63cfe898ee6372419b8e38d35f4c41756d2c22](https://huggingface.co/IlyaGusev/saiga_gemma2_9b/commit/fa63cfe898ee6372419b8e38d35f4c41756d2c22)
- Other name: saiga_gemma2_9b_abliterated_sft_m2_d9_abliterated_kto_m1_d11
- SFT dataset config: [sft_d9.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/sft_d9.json)
- SFT model config: [saiga_gemma2_9b_sft_m2.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_sft_m2.json)
- KTO dataset config: [pref_d11.json](https://github.com/IlyaGusev/saiga/blob/main/configs/datasets/pref_d11.json)
- KTO model config: [saiga_gemma2_9b_kto_m1.json](https://github.com/IlyaGusev/saiga/blob/main/configs/models/saiga_gemma2_9b_kto_m1.json)
- SFT wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/af49qmbb)
- KTO wandb: [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/5bt7729x)
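Since each version is pinned to a commit, a specific snapshot of the original model can be loaded via the `revision` argument of `from_pretrained`; a minimal sketch using the v2 commit listed above:
```python
# Pin a specific version of the original model by its commit hash (v2 here).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "IlyaGusev/saiga_gemma2_9b"
REVISION = "258869abdf95aca1658b069bcff69ea6d2299e7f"  # v2 commit from the list above

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    revision=REVISION,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, revision=REVISION)
```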
## Evaluation
- Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
- Framework: https://github.com/tatsu-lab/alpaca_eval
- Evaluator: alpaca_eval_cot_gpt4_turbo_fn

Pivot (reference model): gemma_2_9b_it_abliterated
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
|gemma_2_9b_it_abliterated | 50.00 | 50.00 | 0.00 | 1126 |
|saiga_gemma2_9b, v1 | 48.66 | 45.54 | 2.45 | 1066 |
|saiga_gemma2_9b, v2 | 47.77 | 45.30 | 2.45 | 1074 |
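A hedged sketch of reproducing such a comparison with alpaca_eval's Python entry point; the exact signature may differ across versions, and the file paths here are hypothetical placeholders:
```python
# Sketch under assumptions: `evaluate` accepts paths to model and reference
# outputs; check the installed alpaca_eval version's docs before relying on this.
from alpaca_eval import evaluate

evaluate(
    model_outputs="saiga_gemma2_9b_outputs.json",        # hypothetical path: model answers on tasks.jsonl
    reference_outputs="gemma_2_9b_it_abliterated.json",  # hypothetical path: pivot model answers
    annotators_config="alpaca_eval_cot_gpt4_turbo_fn",   # the evaluator named above
)
```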