---
license: apache-2.0
datasets:
- IlyaGusev/rulm
inference:
  parameters:
    min_length: 20
    max_new_tokens: 250
    top_k: 50
    top_p: 0.9
    early_stopping: true
    no_repeat_ngram_size: 2
    use_cache: true
    repetition_penalty: 1.5
    length_penalty: 0.8
    num_beams: 2
language:
- ru
library_name: transformers
pipeline_tag: text-generation
tags:
- finance
- code
---
<h1 style="font-size: 42px">WortegaLM 109m</h1>
# Model Summary
> This is a GPT-Neo-like model trained from scratch on a 95 GB corpus of code, Habr, Pikabu, and news (around 12B tokens). It can handle simple tasks; it is not suitable for zero-shot or few-shot use, but it is ideal as a model for student projects.
# Quick Start
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

device = 'cuda'

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained('AlexWortega/wortegaLM', padding_side='left')
model = AutoModelForCausalLM.from_pretrained('AlexWortega/wortegaLM')
model.resize_token_embeddings(len(tokenizer))
model.to(device)

def generate_seqs(q, model, k=2):
    # Beam-search sampling with repetition control; returns k candidate continuations
    gen_kwargs = {
        "min_length": 20,
        "max_new_tokens": 100,
        "top_k": 50,
        "top_p": 0.7,
        "do_sample": True,
        "early_stopping": True,
        "no_repeat_ngram_size": 2,
        "eos_token_id": tokenizer.eos_token_id,
        "pad_token_id": tokenizer.eos_token_id,
        "use_cache": True,
        "repetition_penalty": 1.5,
        "length_penalty": 1.2,
        "num_beams": 4,
        "num_return_sequences": k
    }
    # Encode the prompt, generate, and decode the results
    t = tokenizer.encode(q, add_special_tokens=False, return_tensors='pt').to(device)
    g = model.generate(t, **gen_kwargs)
    generated_sequences = tokenizer.batch_decode(g, skip_special_tokens=False)
    return generated_sequences
```
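A minimal usage sketch follows; the prompt string is only an illustrative example, not part of the original card.

```python
# Hypothetical example: generate two continuations for a short Russian prompt
samples = generate_seqs('Программирование — это', model, k=2)
for s in samples:
    print(s)
```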