---
inference:
  parameters:
    temperature: 0.5
widget:
  - text: "A courier received 50 packages yesterday and twice as many today. All of these should be delivered tomorrow. How many packages should be delivered tomorrow?"
---

This model was created using GPT-2 as a base and fine-tuned on a dataset of elementary school problems requiring logic and reasoning.

Requires PyTorch.

## How to use the model for text inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "gpt2-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the fine-tuned checkpoint
model_path = '../model.pt'
model = torch.load(model_path)

your_text = "A courier received 50 packages yesterday and twice as many today. All of these should be delivered tomorrow. How many packages should be delivered tomorrow?"
encoded_text = tokenizer.encode(your_text, return_tensors='pt')

# Sample a continuation with the same temperature used by the inference widget
outputs = model.generate(encoded_text, max_length=64, do_sample=True, temperature=0.5, top_p=1)
outputs = [tokenizer.decode(output) for output in outputs]
```
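Depending on how the checkpoint was saved, `torch.load` may return either a full pickled model object (as the snippet above assumes) or only a state dict. A minimal sketch for the state-dict case, assuming `../model.pt` was written with `torch.save(model.state_dict(), ...)`:

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: '../model.pt' holds only the fine-tuned weights (a state dict),
# not a pickled model object.
model = AutoModelForCausalLM.from_pretrained("gpt2-large")
state_dict = torch.load('../model.pt', map_location='cpu')
model.load_state_dict(state_dict)
model.eval()  # disable dropout for inference
```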