CodeLlama

This model was released on 2023-08-24 and added to Hugging Face Transformers on 2023-08-25.
Code Llama is a specialized family of large language models based on Llama 2 for coding tasks. It comes in different flavors - general code, Python-specific, and instruction-following variants - all available in 7B, 13B, 34B, and 70B parameter sizes. Code Llama models can generate, explain, and even fill in missing parts of your code (called "infilling"). They can also handle very long contexts with stable generation up to 100k tokens, even though they were trained on sequences of 16K tokens.
You can find all the original Code Llama checkpoints under the Code Llama collection.
Click on the Code Llama models in the right sidebar for more examples of how to apply Code Llama to different coding tasks.
The examples below demonstrate how to generate code with Pipeline and with AutoModelForCausalLM directly.
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="meta-llama/CodeLlama-7b-hf",
dtype=torch.float16,
device_map=0
)
# basic code generation
result = pipe("# Function to calculate the factorial of a number\ndef factorial(n):", max_new_tokens=256)
print(result[0]['generated_text'])
# infilling
infill_result = pipe("def remove_non_ascii(s: str) -> str:\n \"\"\" <FILL_ME>\n return result", max_new_tokens=200)
print(infill_result[0]['generated_text'])
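The same prompt can also be run through the model classes directly. This is a minimal sketch with AutoModelForCausalLM and AutoTokenizer, mirroring the Pipeline example above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/CodeLlama-7b-hf",
    dtype=torch.float16,
    device_map="auto"
)

# basic code generation
prompt = "# Function to calculate the factorial of a number\ndef factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))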
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends. The example below uses bitsandbytes to quantize only the weights to 4-bits.
# pip install bitsandbytes
import torch
from transformers import AutoModelForCausalLM, CodeLlamaTokenizer, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True)
tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-34b-hf")
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/CodeLlama-34b-hf",
dtype=torch.bfloat16,
device_map="auto",
quantization_config=bnb_config
)
prompt = "# Write a Python function to check if a string is a palindrome\ndef is_palindrome(s):"
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**input_ids, max_new_tokens=200, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))

Use the AttentionMaskVisualizer to better understand what tokens the model can and cannot attend to.
from transformers.utils.attention_visualizer import AttentionMaskVisualizer
visualizer = AttentionMaskVisualizer("meta-llama/CodeLlama-7b-hf")
visualizer("""def func(a, b):
return a + b""")
Notes
Infilling is only available in the 7B and 13B base models, and not in the Python, Instruct, 34B, or 70B models.

Use the <FILL_ME> token where you want your input to be filled. The tokenizer splits this token to create a formatted input string that follows the original training pattern. This is more robust than preparing the pattern yourself.

from transformers import LlamaForCausalLM, CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("meta-llama/CodeLlama-7b-hf")

PROMPT = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''

input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
generated_ids = model.generate(input_ids, max_new_tokens=128)
filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(PROMPT.replace("<FILL_ME>", filling))

Use bfloat16 for further training or fine-tuning and float16 for inference.

The BOS character is not used for infilling when encoding the prefix or suffix, but only at the beginning of each prompt.

The tokenizer is a byte-pair encoding model based on SentencePiece. During decoding, if the first token is the start of a word (for example, "Banana"), the tokenizer doesn't prepend the prefix space to the string.
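A quick sketch of that decoding behavior (the checkpoint is the one used above; the printed output is illustrative):

from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
ids = tokenizer.encode("Banana", add_special_tokens=False)
print(tokenizer.decode(ids))  # "Banana" - the leading SentencePiece space is not re-added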
CodeLlamaTokenizer
class transformers.CodeLlamaTokenizer
< source >( clean_up_tokenization_spaces = False unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' prefix_token = '▁<PRE>' middle_token = '▁<MID>' suffix_token = '▁<SUF>' eot_token = '▁<EOT>' fill_token = '<FILL_ME>' additional_special_tokens = None add_bos_token = True add_eos_token = False use_default_system_prompt = False add_prefix_space = None vocab = None merges = None vocab_file = None **kwargs )
Parameters
- clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces.
- unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
- eos_token (str, optional, defaults to "</s>") — The end of sequence token.
- prefix_token (str, optional, defaults to "▁<PRE>") — Prefix token used for infilling.
- middle_token (str, optional, defaults to "▁<MID>") — Middle token used for infilling.
- suffix_token (str, optional, defaults to "▁<SUF>") — Suffix token used for infilling.
- eot_token (str, optional, defaults to "▁<EOT>") — End of text token used for infilling.
- fill_token (str, optional, defaults to "<FILL_ME>") — The token used to split the input between the prefix and suffix.
- additional_special_tokens (list[str], optional) — Additional special tokens used by the tokenizer.
- add_bos_token (bool, optional, defaults to True) — Whether to add a beginning of sequence token at the start of sequences.
- add_eos_token (bool, optional, defaults to False) — Whether to add an end of sequence token at the end of sequences.
- use_default_system_prompt (bool, optional, defaults to False) — Whether or not the default system prompt for Llama should be used.
- add_prefix_space (bool, optional) — Whether or not to add an initial space to the input. This allows the leading word to be treated like any other word.
- vocab (dict, optional) — Custom vocabulary dictionary. If not provided, the vocabulary is loaded from vocab_file.
- merges (list, optional) — Custom merges list. If not provided, merges are loaded from merges_file.
- vocab_file (str, optional) — SentencePiece file (generally has a .model extension) that contains the vocabulary necessary to instantiate a tokenizer.
Construct a Llama tokenizer, based on byte-level Byte-Pair-Encoding. Notably, it uses ByteFallback and no normalization.
>>> from transformers import CodeLlamaTokenizer
>>> tokenizer = CodeLlamaTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")
>>> tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]

If you want to change the bos_token or the eos_token, make sure to specify them when initializing the model, or call tokenizer.update_post_processor() to make sure the post-processing is correctly done (otherwise the values of the first token and final token of an encoded sequence will not be correct). For more details, check out the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
This tokenizer inherits from PreTrainedTokenizerFast, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. The default configuration matches that of meta-llama/CodeLlama-7b-Instruct-hf, which supports prompt infilling.
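As a sketch of what the infilling format looks like at the token level (the exact layout depends on the tokenizer version, so treat the output shown here as illustrative):

from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-Instruct-hf")
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
# <FILL_ME> splits the prompt into a prefix and a suffix arranged around
# the infilling control tokens: ▁<PRE> prefix ▁<SUF> suffix ▁<MID>
tokens = tokenizer.tokenize(prompt)
print(tokens[:3])  # e.g. starts with '▁<PRE>'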
get_special_tokens_mask
< source >( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None already_has_special_tokens: bool = False ) → A list of integers in the range [0, 1]
Parameters
- token_ids_0 — List of IDs for the (possibly already formatted) sequence.
- token_ids_1 — Unused when already_has_special_tokens=True. Must be None in that case.
- already_has_special_tokens — Whether the sequence is already formatted with special tokens.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added.
For fast tokenizers, data collators call this with already_has_special_tokens=True to build a mask over an
already-formatted sequence. In that case, we compute the mask by checking membership in all_special_ids.
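A minimal usage sketch (the checkpoint and printed mask are illustrative):

from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("meta-llama/CodeLlama-7b-hf")
ids = tokenizer.encode("def f(): pass")  # add_bos_token=True prepends BOS by default
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # 1 marks the BOS special token, 0 marks regular tokens, e.g. [1, 0, 0, ...]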
CodeLlamaTokenizerFast
class transformers.CodeLlamaTokenizerFast
< source >( clean_up_tokenization_spaces = False unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' prefix_token = '▁<PRE>' middle_token = '▁<MID>' suffix_token = '▁<SUF>' eot_token = '▁<EOT>' fill_token = '<FILL_ME>' additional_special_tokens = None add_bos_token = True add_eos_token = False use_default_system_prompt = False add_prefix_space = None vocab = None merges = None vocab_file = None **kwargs )
Parameters
- clean_up_tokenization_spaces (bool, optional, defaults to False) — Whether to clean up spaces after decoding; cleanup consists of removing potential artifacts like extra spaces.
- unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
- bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
- eos_token (str, optional, defaults to "</s>") — The end of sequence token.
- prefix_token (str, optional, defaults to "▁<PRE>") — Prefix token used for infilling.
- middle_token (str, optional, defaults to "▁<MID>") — Middle token used for infilling.
- suffix_token (str, optional, defaults to "▁<SUF>") — Suffix token used for infilling.
- eot_token (str, optional, defaults to "▁<EOT>") — End of text token used for infilling.
- fill_token (str, optional, defaults to "<FILL_ME>") — The token used to split the input between the prefix and suffix.
- additional_special_tokens (list[str], optional) — Additional special tokens used by the tokenizer.
- add_bos_token (bool, optional, defaults to True) — Whether to add a beginning of sequence token at the start of sequences.
- add_eos_token (bool, optional, defaults to False) — Whether to add an end of sequence token at the end of sequences.
- use_default_system_prompt (bool, optional, defaults to False) — Whether or not the default system prompt for Llama should be used.
- add_prefix_space (bool, optional) — Whether or not to add an initial space to the input. This allows the leading word to be treated like any other word.
- vocab (dict, optional) — Custom vocabulary dictionary. If not provided, the vocabulary is loaded from vocab_file.
- merges (list, optional) — Custom merges list. If not provided, merges are loaded from merges_file.
- vocab_file (str, optional) — SentencePiece file (generally has a .model extension) that contains the vocabulary necessary to instantiate a tokenizer.
Construct a Llama tokenizer, based on byte-level Byte-Pair-Encoding. Notably, it uses ByteFallback and no normalization.
>>> from transformers import CodeLlamaTokenizerFast
>>> tokenizer = CodeLlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")
>>> tokenizer.encode("Hello this is a test")
[1, 15043, 445, 338, 263, 1243]

If you want to change the bos_token or the eos_token, make sure to specify them when initializing the model, or call tokenizer.update_post_processor() to make sure the post-processing is correctly done (otherwise the values of the first token and final token of an encoded sequence will not be correct). For more details, check out the [post-processors](https://huggingface.co/docs/tokenizers/api/post-processors) documentation.
This tokenizer inherits from PreTrainedTokenizerFast, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. The default configuration matches that of meta-llama/CodeLlama-7b-Instruct-hf, which supports prompt infilling.
get_special_tokens_mask
< source >( token_ids_0: list token_ids_1: typing.Optional[list[int]] = None already_has_special_tokens: bool = False ) → A list of integers in the range [0, 1]
Parameters
- token_ids_0 — List of IDs for the (possibly already formatted) sequence.
- token_ids_1 — Unused when already_has_special_tokens=True. Must be None in that case.
- already_has_special_tokens — Whether the sequence is already formatted with special tokens.
Returns
A list of integers in the range [0, 1]
1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added.
For fast tokenizers, data collators call this with already_has_special_tokens=True to build a mask over an
already-formatted sequence. In that case, we compute the mask by checking membership in all_special_ids.
update_post_processor

< source >( )

Updates the underlying post processor with the current bos_token and eos_token.
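A minimal sketch of when to call it; the EOS swap below is illustrative (▁<EOT> is the infilling end-of-text token from the vocabulary):

from transformers import CodeLlamaTokenizerFast

tokenizer = CodeLlamaTokenizerFast.from_pretrained("meta-llama/CodeLlama-7b-hf")
tokenizer.eos_token = "▁<EOT>"  # assigning bos_token/eos_token does not rebuild the post-processor by itself
tokenizer.update_post_processor()  # rebuild the BOS/EOS post-processing template with the new value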