Fix loading in Hugging Face transformers

#2
by awni - opened

Currently the tokenizer fails to load when you try to load the model with Hugging Face transformers. I realize the tokenizer files aren't in the repo, but it would be good to package them in the model repos so that the model can be used with a single repo ID rather than by pointing to a compatible tokenizer elsewhere.

from transformers import AutoTokenizer

# Raises an error: the repo does not include tokenizer files
tokenizer = AutoTokenizer.from_pretrained("apple/OpenELM-270M")

You can use another compatible tokenizer, like NousResearch/Llama-2-7b-hf, as in the sketch below.
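
A minimal sketch of that workaround, assuming the Llama 2 tokenizer is compatible with OpenELM and that the OpenELM repo's custom modeling code requires trust_remote_code=True:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer from a separate, compatible repo
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Load the model from the OpenELM repo; trust_remote_code=True lets
# transformers run the custom modeling code shipped with the repo
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True
)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))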

Right, one can always do that as a hack, but for ease of use the tokenizer should be packaged with the HF repo; otherwise people will need to figure out which tokenizer to use and point to another repo.
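
For someone with write access to the repo, packaging could be as simple as pushing the compatible tokenizer's files into the model repo (a hedged sketch, not a confirmed fix; the tokenizer choice is taken from this thread):

from transformers import AutoTokenizer

# Load the known-compatible tokenizer...
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# ...and push its files into the model repo so a single repo ID works.
# This requires write access to apple/OpenELM-270M.
tokenizer.push_to_hub("apple/OpenELM-270M")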
