---
license: mit
datasets:
- databricks/databricks-dolly-15k
language:
- en
tags:
- gpt2-medium
---

This model is a fine-tuned version of `gpt2-medium` trained on the `databricks/databricks-dolly-15k` dataset.

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='Sharathhebbar24/Instruct_GPT')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
 {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
 {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
 {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
 {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```