
Adding tokenizer.json for easier consumption. The issue with fast tokenizers referenced in the README is now fixed, so we should be good to include this file.

If this change sounds good, I can also generate the tokenizer.json for the 7B model as well.

Generated with:

#!/usr/bin/env python3

import os

from transformers import AutoTokenizer

# Load the tokenizer from the local checkout and re-save it;
# save_pretrained() writes tokenizer.json when a fast tokenizer is available.
path = os.path.join(os.getcwd(), "open_llama_3b_v2")

tokenizer = AutoTokenizer.from_pretrained(path)
tokenizer.save_pretrained(path)
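For context, tokenizer.json is a single self-contained JSON file bundling the tokenizer model, vocab, and merges, which is what lets fast tokenizers load without conversion. A minimal sketch of its shape, using a simplified stand-in payload (a real file produced by save_pretrained() also carries normalizer, pre_tokenizer, and added_tokens sections):

```python
import json

# Illustrative stand-in for a tokenizer.json payload; keys follow the
# tokenizers serialization format, but the contents here are made up.
sample = {
    "version": "1.0",
    "model": {
        "type": "BPE",
        "vocab": {"<s>": 0, "</s>": 1, "hello": 2},
        "merges": [],
    },
}

# Round-trip through JSON, as a consumer parsing the file would.
text = json.dumps(sample)
parsed = json.loads(text)
print(parsed["model"]["type"])  # -> BPE
```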
bianchidotdev changed pull request status to open
