---
license: mit
language:
- en
- zh
- de
- fr
- ja
- ko
- es
widget:
- text: Hi assistant How can I help you
- text: Guten Morgen! Wie kann ich Ihnen helfen?
- text: どうすれば運動を続けられますか? 運動を続けることは、健康的な生活を維持する上で非常に重要ですが、モチベーションを維持することが難しい場合があります。以下にいくつかの方法を紹介します。
- text: 세계 1차 대전은 1914년부터 1918년까지 전 세계적으로 벌어진 대규모 전쟁입니다. 주요한 참전국으로는 독일, 오스트리아-헝가리 제국, 영국, 프랑스, 러시아, 이탈리아, 미국 등이 있었습니다.
- text: こんにちは!お元気ですか?何かお手伝いできることがありますか?
- text: 'user: Python 和 C++ 哪个更好学?哪个更强大?我该怎么选择?'
- text: 在大熱天裡,墨鏡的銷售與冰淇淋的銷售有著高度相關性。當天氣很熱的時候,兩個都十分熱賣,而天氣轉涼以後兩者的銷售就跌落谷底。當有一天,墨鏡批發商車輛在上班途中拋錨,因此無法開業,導致墨鏡銷售變 0。請問當天冰淇淋的銷售如何?
- text: >-
    user: Good morning\n assistant: Good morning! How can I assist you today?
pipeline_tag: text2text-generation
tags:
- text-generation-inference
---

# Generate title for conversation

## How to use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "theblackcat102/alpaca-title-generator-mt0-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The input is the conversation text to summarize into a title.
question = 'Hi\nHow can I help you?'
encodes = tokenizer(question, return_tensors='pt')
outputs = model.generate(
    encodes.input_ids,
    max_length=512,
    do_sample=True,
    repetition_penalty=1.2,
    top_k=50,
    num_return_sequences=1,
    early_stopping=True,
)
for i, beam_output in enumerate(outputs):
    print('-----')
    print(tokenizer.decode(beam_output, skip_special_tokens=True))
# > Help requested.
```

## Generate title data

Training data was generated from the instruction/response pairs in `yahma/alpaca-cleaned`, using the OpenAI turbo model to produce the titles with the following prompt:

```
""
user: {}
assistant: {}
""
Generate a very short title within 5 words of the conversation above, title must be as relevant as possible. Title language must be same as the context
TITLE:
```
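As a rough illustration, the sketch below shows how titles could be produced with the prompt above. The exact API call, model name (`gpt-3.5-turbo` here), dataset field names, and sampling settings are assumptions for the sake of the example, not the author's original script.

```python
# Hypothetical sketch of the title-generation step; assumes OPENAI_API_KEY is set.
from datasets import load_dataset
from openai import OpenAI

client = OpenAI()

# Prompt template mirroring the one shown above.
PROMPT = (
    '""\n'
    'user: {}\n'
    'assistant: {}\n'
    '""\n'
    'Generate a very short title within 5 words of the conversation above, '
    'title must be as relevant as possible. Title language must be same as the context\n'
    'TITLE:'
)

dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def generate_title(user_text: str, assistant_text: str) -> str:
    """Ask the chat model for a short title for one conversation pair."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed "turbo" model; the exact model is not stated
        messages=[{"role": "user", "content": PROMPT.format(user_text, assistant_text)}],
        max_tokens=20,
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# Example: title the first instruction/response pair in the dataset.
example = dataset[0]
print(generate_title(example["instruction"], example["output"]))
```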