Fix loading in Hugging Face transformers

#21
by awni - opened

Currently the tokenizer does not load when you try to load the model with Hugging Face transformers. I realize it's not there, but it would be good to package it in the model repos so that we can use the model with a single repo ID rather than pointing to a compatible tokenizer elsewhere.

from transformers import AutoTokenizer

# Fails: the apple/OpenELM-270M repo does not include tokenizer files
tokenizer = AutoTokenizer.from_pretrained("apple/OpenELM-270M")
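
In the meantime, a workaround is to load a compatible tokenizer from a separate repo. The OpenELM model card points to the Llama 2 tokenizer, so a sketch like the following should work (assuming you have been granted access to the gated meta-llama/Llama-2-7b-hf repo):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model from the OpenELM repo (it uses custom modeling code,
# hence trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)

# Load the compatible Llama 2 tokenizer from its own (gated) repo
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")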

+1 on this to streamline use of OpenELM. Thanks!

I’m with @awni,

This fix would save a lot of time and tinkering!

Having the tokenizer and model in the same repo is a best practice, and it will help avoid mistakes.
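
For reference, once the tokenizer files live in the model repo, the one-liner from the original post would just work. A minimal sketch of the fix from the maintainer side, assuming write access to apple/OpenELM-270M and that the Llama 2 tokenizer is the intended one:

from transformers import AutoTokenizer

# Copy the compatible tokenizer files into the model repo so that a
# single repo ID works (requires write access to apple/OpenELM-270M)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.push_to_hub("apple/OpenELM-270M")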
