
Useless ChitChat Language Model

A basic dialog model based on DialoGPT-small, fine-tuned on dialog datasets (DailyDialog, MultiWOZ).

For easier usage, see the repo https://github.com/jinymusim/Daily-Dialog-GPT

How to use

If used with the repo https://github.com/jinymusim/Daily-Dialog-GPT, you only need to run the ds.py script. Otherwise, use the following.

Use it like any PyTorch language model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jinymusim/dialogmodel")
model = AutoModelForCausalLM.from_pretrained("jinymusim/dialogmodel")

# Take user input
user_utterance = input('USER> ').strip()
tokenized_context = tokenizer.encode(user_utterance + tokenizer.eos_token, return_tensors='pt')

# Generate a response; limit max_length to a reasonable size
out_response = model.generate(tokenized_context,
                              max_length=100,
                              num_beams=2,
                              no_repeat_ngram_size=2,
                              early_stopping=True,
                              pad_token_id=tokenizer.eos_token_id)

# Strip the echoed user input from the decoded output
decoded_response = tokenizer.decode(out_response[0], skip_special_tokens=True)[len(user_utterance):]

print(f'SYSTEM> {decoded_response}')
```
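The snippet above handles a single turn. DialoGPT-style models can also carry multi-turn context by concatenating each new turn's token ids onto the running conversation history; a minimal sketch of that pattern (the turn limit and generation parameters here are arbitrary choices, not part of this model's documented usage):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("jinymusim/dialogmodel")
model = AutoModelForCausalLM.from_pretrained("jinymusim/dialogmodel")

chat_history_ids = None
for _ in range(5):  # limit to 5 turns for this example
    user_utterance = input('USER> ').strip()
    new_ids = tokenizer.encode(user_utterance + tokenizer.eos_token, return_tensors='pt')

    # Append the new turn to the accumulated conversation history
    if chat_history_ids is None:
        input_ids = new_ids
    else:
        input_ids = torch.cat([chat_history_ids, new_ids], dim=-1)

    chat_history_ids = model.generate(input_ids,
                                      max_length=1000,
                                      num_beams=2,
                                      no_repeat_ngram_size=2,
                                      early_stopping=True,
                                      pad_token_id=tokenizer.eos_token_id)

    # Decode only the newly generated tokens (everything after the input)
    response = tokenizer.decode(chat_history_ids[0, input_ids.shape[-1]:],
                                skip_special_tokens=True)
    print(f'SYSTEM> {response}')
```

Note that the history grows with every turn, so for long conversations you may want to truncate the oldest turns to stay within the model's context window.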
