Text truncation even after increasing max_new_tokens

#11
by vivekam101

I followed the above changes, but the generated text is still getting truncated.
prompt: How can I write a Python function to generate the nth Fibonacci number?

response: Here is a simple Python function to generate the nth Fibonacci number:

def fib(n):
if n <= 1

prompt: can you explain me the algorithm of merge sort ?

response: Sure, I’d be happy to explain the algorithm of merge sort.
Merge sort is a divide-and-conquer algorithm that works by

Code snippet:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
# from_pretrained takes device_map (not device) for automatic placement
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, device_map="auto", torch_dtype=torch.bfloat16)
text = "<|system|>\n<|end|>\n<|user|>" + text + "<|end|>\n<|assistant|>"
inputs = tokenizer.encode(text, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, do_sample=True, max_new_tokens=64)
response = tokenizer.decode(outputs[0])
--
Any idea why this is happening? Any help would be appreciated.

The output is being cut at max_new_tokens: with max_new_tokens=64, generation stops after 64 new tokens whether or not the answer is finished. Raise max_new_tokens and pass the id of the <|end|> token as eos_token_id so the model can stop cleanly on its own. Please try the snippet below:

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="<your model path>", torch_dtype=torch.bfloat16, device_map="auto")

text = "How can I write a Python function to generate the nth Fibonacci number?"
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query=text)

# Give the model room to finish (256 new tokens) and stop at <|end|>
# (token id 49155) rather than cutting off at the token budget.
outputs = pipe(prompt, max_new_tokens=256, stop_sequence="<|end|>", do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
print(outputs[0]["generated_text"])

# Keep only the assistant's reply that follows the <|assistant|> tag.
generated = outputs[0]["generated_text"].split("<|assistant|>")[-1]
print(generated)
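
If you want to keep the model.generate call from your original snippet instead of the pipeline, the same two changes apply there: a larger max_new_tokens and an explicit eos_token_id. Here is a minimal sketch, assuming the <|end|> token id is 49155 as in the pipeline example above (you can also look it up with tokenizer.convert_tokens_to_ids("<|end|>")):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, device_map="auto", torch_dtype=torch.bfloat16)

text = "How can I write a Python function to generate the nth Fibonacci number?"
prompt = "<|system|>\n<|end|>\n<|user|>\n" + text + "<|end|>\n<|assistant|>"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=256,  # enough room for a complete answer
    eos_token_id=49155,  # stop at <|end|> instead of the token budget
)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)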
