---
license: mit
pipeline_tag: text-generation
widget:
- text: "@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@"
example_title: "how r u"
- text: "@@ПЕРВЫЙ@@ что ты делал на выходных? @@ВТОРОЙ@@"
example_title: "wyd"
language:
- ru
tags:
- conversational
---
This generative model is based on [sberbank-ai/rugpt3small_based_on_gpt2](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2). It was trained on a large corpus of dialogue data and can be used to build generative conversational agents.

The model was trained with a context size of 3.
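
For illustration, here is a minimal helper (not part of the original card) that builds a prompt in the layout shown in the widget examples above. The speaker tokens and their alternation are taken from those examples; keeping only the most recent turns is an assumption about how the 3-turn training context applies at inference time:

```python
# Hypothetical helper: formats a dialogue history into the prompt layout
# used by this model. Speakers alternate between @@ПЕРВЫЙ@@ and @@ВТОРОЙ@@,
# and the prompt ends with the token of the speaker whose reply we want
# the model to generate. Since the model was trained with a context size
# of 3, passing only the most recent turns is advisable (our assumption).
def build_prompt(turns):
    speaker_tokens = ['@@ПЕРВЫЙ@@', '@@ВТОРОЙ@@']
    parts = [f'{speaker_tokens[i % 2]} {turn}' for i, turn in enumerate(turns)]
    parts.append(speaker_tokens[len(turns) % 2])  # token of the next speaker
    return ' '.join(parts)

print(build_prompt(['привет', 'привет', 'как дела?']))
# -> '@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@'
```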
On a private validation set we computed the metrics introduced in [this paper](https://arxiv.org/pdf/2001.09977.pdf):

- Sensibleness: crowd workers were asked whether the model's response makes sense given the context
- Specificity: crowd workers were asked whether the model's response is specific to the given context; in other words, we don't want the model to give generic and boring responses
- SSA (Sensibleness Specificity Average): the average of the two metrics above
| model | sensibleness | specificity | SSA |
|:----------------------------------------------------|---------------:|--------------:|------:|
| [tinkoff-ai/ruDialoGPT-small](https://huggingface.co/tinkoff-ai/ruDialoGPT-small) | 0.64 | 0.5 | 0.57 |
| [tinkoff-ai/ruDialoGPT-medium](https://huggingface.co/tinkoff-ai/ruDialoGPT-medium) | 0.78 | 0.69 | 0.735 |
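
Since SSA is just the arithmetic mean of sensibleness and specificity, the table values can be reproduced directly:

```python
# SSA is the plain average of the two crowd-sourced scores.
def ssa(sensibleness, specificity):
    return (sensibleness + specificity) / 2

print(round(ssa(0.64, 0.50), 3))  # 0.57  (ruDialoGPT-small)
print(round(ssa(0.78, 0.69), 3))  # 0.735 (ruDialoGPT-medium)
```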
How to use:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('tinkoff-ai/ruDialoGPT-small')
model = AutoModelForCausalLM.from_pretrained('tinkoff-ai/ruDialoGPT-small')

# Dialogue context: speakers alternate between @@ПЕРВЫЙ@@ and @@ВТОРОЙ@@,
# and the prompt ends with the token of the speaker whose reply we want.
inputs = tokenizer('@@ПЕРВЫЙ@@ привет @@ВТОРОЙ@@ привет @@ПЕРВЫЙ@@ как дела? @@ВТОРОЙ@@', return_tensors='pt')

with torch.inference_mode():
    generated_token_ids = model.generate(**inputs)

# The decoded output contains the original context followed by the generated reply.
context_with_response = tokenizer.decode(generated_token_ids[0])
print(context_with_response)
```
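
The decoded string contains the original context followed by the generated reply (and possibly further generated turns). One way to pull out just the reply, as a sketch that assumes the speaker-token layout shown above:

```python
import re

# Hypothetical post-processing (an assumption, not part of the original card):
# split the decoded text on the speaker tokens; with a 3-turn prompt, the
# fourth non-empty segment is the model's reply. The reply may still contain
# other special tokens depending on how generation terminates.
segments = re.split(r'@@ПЕРВЫЙ@@|@@ВТОРОЙ@@', context_with_response)
segments = [s.strip() for s in segments if s.strip()]
reply = segments[3]
print(reply)
```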