Missing documentation for FIM?

#31
by JoaoLages - opened

SantaCoder used special encoding/decoding code for FIM; does StarCoder also need this?
https://huggingface.co/spaces/bigcode/santacoder-demo/blob/main/app.py#L24

BigCode org

Yes, it's actually documented in the README.

No, that's not what I meant! SantaCoder required a lot of custom preprocessing:

  • In SantaCoder we had to initialize the tokenizer with padding_side="left" - is this no longer needed in StarCoder?
  • We also had to tokenize the inputs with return_token_type_ids=False - is this no longer needed in StarCoder either?
  • We also had to pass pad_token_id=tokenizer.pad_token_id to model.generate - is this still needed?

I would also be very interested in the generation configuration used.
For SantaCoder, the demo exposed all the hyperparameters chosen for the tokenizer and for generation; the StarCoder demo, on the other hand, calls an inference endpoint, so I cannot replicate its results locally. A sketch of the SantaCoder-style setup I mean follows below.
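For context, this is roughly the SantaCoder setup those bullets refer to (a minimal sketch, not the demo's exact code: the generation hyperparameters are assumptions, and whether trust_remote_code=True is still required on current transformers versions is also an assumption):

from transformers import AutoTokenizer, AutoModelForCausalLM

# SantaCoder-style preprocessing: left padding, no token type ids,
# and an explicit pad_token_id passed to generate (assumptions, per the bullets above).
tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("bigcode/santacoder", trust_remote_code=True).cuda()

# Note: SantaCoder's FIM sentinels use dashes (<fim-prefix>), unlike StarCoder's underscores.
input_text = "<fim-prefix>def fib(n):<fim-suffix>    else:\n        return fib(n - 2) + fib(n - 1)<fim-middle>"
inputs = tokenizer(input_text, return_tensors="pt", return_token_type_ids=False).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=25, pad_token_id=tokenizer.pad_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))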

BigCode org

@nandovallec you can run FIM with the following code; nothing special is needed beyond wrapping the input in the FIM sentinel tokens:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Truncate from the left so long prompts keep the suffix and sentinel tokens.
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder", truncation_side="left")
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", torch_dtype=torch.bfloat16).cuda()

# Wrap the prompt in StarCoder's FIM sentinels: the model generates the code
# that belongs between the prefix and the suffix after the <fim_middle> token.
input_text = "<fim_prefix>def fib(n):<fim_suffix>    else:\n        return fib(n - 2) + fib(n - 1)<fim_middle>"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=25)
generation = [tokenizer.decode(tensor, skip_special_tokens=False) for tensor in outputs]
print(generation[0])
Output:

<fim_prefix>def fib(n):<fim_suffix>    else:
        return fib(n - 2) + fib(n - 1)<fim_middle>
    if n < 2:
        return n
<|endoftext|>
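If you only want the infilled snippet rather than the full sentinel-wrapped string, you can split on the <fim_middle> token (a small illustrative helper, not part of the example above):

# Everything after <fim_middle> is the generated middle section.
middle = generation[0].split("<fim_middle>")[-1].replace("<|endoftext|>", "")
print(middle)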
loubnabnl changed discussion status to closed
