Tags: Text Generation, Transformers, PyTorch, llama, Inference Endpoints, text-generation-inference

Generated with:

from transformers import AutoTokenizer

# Load the tokenizer; transformers converts it to the fast (tokenizers-backed) implementation.
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
assert tokenizer.is_fast

# Save the converted fast tokenizer files (target directory elided in the original).
tokenizer.save_pretrained("...")
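
For reference, a minimal sketch of how the resulting fast tokenizer can be used with the model for text generation; the prompt and generation settings below are illustrative, not part of this PR:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fast tokenizer and the causal LM checkpoint.
tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b_v2", torch_dtype=torch.float16
)

prompt = "Q: What is the largest animal?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding; adjust max_new_tokens as needed.
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))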
Ready to merge
This branch is ready to be merged automatically.
