---
license: mit
language:
- en
- zh
- de
- fr
- ja
- ko
- es
widget:
- text: Hi assistant How can I help you
- text: user: Python 和 C++ 哪个更好学?哪个更强大?我该怎么选择?
- text: >-
user: Good morning\n assistant: Good morning! How can I assist you
today?
pipeline_tag: text2text-generation
tags:
- text-generation-inference
---
# Generate a title for a conversation
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "theblackcat102/alpaca-title-generator-mt0-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Conversation to summarize into a short title
question = 'Hi\nHow can I help you?'
encodes = tokenizer(question, return_tensors='pt')
outputs = model.generate(
    encodes.input_ids,
    max_length=512,
    do_sample=True,
    repetition_penalty=1.2,
    top_k=50,
    num_return_sequences=1,
    early_stopping=True,
)
for i, beam_output in enumerate(outputs):
    print('-----')
    print(tokenizer.decode(beam_output, skip_special_tokens=True))
# > Help requested.
```
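The widget examples above format the conversation with `user:` / `assistant:` prefixes. The helper below is a hypothetical sketch (not part of the original card) of building such an input string before tokenizing it:

```python
# Hypothetical helper: join conversation turns into the
# "user: ... / assistant: ..." format used by the widget examples above.
def format_conversation(turns):
    return "\n".join(f"{role}: {message}" for role, message in turns)

conversation = format_conversation([
    ("user", "Good morning"),
    ("assistant", "Good morning! How can I assist you today?"),
])
encodes = tokenizer(conversation, return_tensors='pt')
```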
## Generate title data
The training data was generated from the instruction/response pairs in `yahma/alpaca-cleaned`, with the OpenAI turbo model used to produce the titles.
```
""
user: {}
assistant: {}
""
Generate a very short title within 5 words of the conversation above, title must be as relevant as possible. Title language must be same as the context
TITLE:
```
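The card does not include the generation script itself; the snippet below is a minimal sketch of how the prompt above could be sent to the turbo model, assuming the legacy `openai` Python SDK (pre-1.0) and `gpt-3.5-turbo`. The function name and sampling parameters are illustrative assumptions, not the author's original code.

```python
import openai

# Prompt template copied from the card above.
PROMPT_TEMPLATE = (
    '""\n'
    'user: {}\n'
    'assistant: {}\n'
    '""\n'
    'Generate a very short title within 5 words of the conversation above, '
    'title must be as relevant as possible. Title language must be same as the context\n'
    'TITLE:'
)

def generate_title(instruction: str, response: str) -> str:
    # Hypothetical call to the "turbo" model; parameters are assumptions.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(instruction, response)}],
        temperature=0.7,
        max_tokens=32,
    )
    return completion.choices[0].message["content"].strip()
```

Running something like this over each instruction/response pair from `yahma/alpaca-cleaned` yields the (conversation, title) pairs used to fine-tune the model.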