Update tokenizer_config.json #9

change LLaMATokenizer => LlamaTokenizer so the tokenizer_class entry matches the class name that recent transformers releases actually export; the old spelling fails with "ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported".
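
For anyone patching a local copy by hand, here is a minimal sketch of the one-key change this PR makes (the repo path below is a hypothetical example):

```python
# Minimal sketch: fix the tokenizer class name in a local copy of the repo.
# The path is a hypothetical example; point it at your own checkout.
import json

path = "llama-7b-hf/tokenizer_config.json"

with open(path) as f:
    config = json.load(f)

config["tokenizer_class"] = "LlamaTokenizer"  # was "LLaMATokenizer"

with open(path, "w") as f:
    json.dump(config, f, indent=2)
```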

Thanks a lot, that fixed the issues for me!

Where is tokenizer_config.json located? I can't find it anywhere on my system after running the two commands below.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
```
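
from_pretrained downloads files into the Hugging Face cache rather than your working directory, which is why the file doesn't show up nearby. A quick sketch for printing its on-disk location, assuming the huggingface_hub package is installed (the exact cache layout varies by version):

```python
# Sketch: resolve the cached location of tokenizer_config.json.
# Assumes huggingface_hub is installed (it ships as a transformers dependency).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="decapoda-research/llama-7b-hf",
    filename="tokenizer_config.json",
)
print(path)  # typically somewhere under ~/.cache/huggingface/
```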

Which transformers version are you using? It still doesn't work for me.
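
If it helps to compare, here is a minimal check (assuming transformers is installed); the renamed class only exists in transformers 4.28 and later, so older installs will still fail even with the fixed config:

```python
# Print the installed transformers version; LlamaTokenizer was added in 4.28,
# so the import below fails on older releases.
import transformers

print(transformers.__version__)

from transformers import LlamaTokenizer  # ImportError on pre-4.28 releases
```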

Does anybody have clear instructions on how to fix this, or perhaps a Google Colab notebook walking through the steps?

