missing tokenizer.model

#2
by vesofilev - opened

I had to copy tokenizer.model from v0.1 to be able to start the model, since I was encountering this issue:

https://github.com/ThilinaRajapakse/simpletransformers/issues/873

Is that the expected approach?
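For reference, the workaround described above (copying tokenizer.model from the older local snapshot into the newer one) amounts to a simple file copy; this is a minimal sketch, and the directory names are placeholders, not the repo's actual layout:

```python
import shutil
from pathlib import Path

def copy_tokenizer(src_dir: Path, dst_dir: Path) -> Path:
    """Copy tokenizer.model from one local model directory to another."""
    src = src_dir / "tokenizer.model"
    dst = dst_dir / "tokenizer.model"
    if not src.is_file():
        raise FileNotFoundError(f"no tokenizer.model in {src_dir}")
    shutil.copy(src, dst)
    return dst

# hypothetical usage: copy_tokenizer(Path("models/v0.1"), Path("models/v1.0"))
```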

And how did you get it running?

I copied tokenizer.model from version 0.1 and then used it with LLaMA-Factory.

Thanks, I'll give it a try.

What hardware are you running it on?

I used a Tesla V100-SXM2-32GB, but it takes up a bit more than half of the memory.

You don't need tokenizer.model. Regarding VRAM, consider quantization.
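A rough back-of-the-envelope sketch of why quantization helps with VRAM (the overhead factor is an assumption; real usage also depends on context length and KV cache):

```python
def est_vram_gb(params_billion: float, bits_per_weight: int,
                overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weights * bytes per weight * fixed overhead."""
    return params_billion * (bits_per_weight / 8) * overhead

# a 7B model: fp16 needs roughly 16-17 GB, a 4-bit quant roughly 4-5 GB
print(round(est_vram_gb(7, 16), 1))  # → 16.8
print(round(est_vram_gb(7, 4), 1))   # → 4.2
```

That ratio is why a 4-bit GGUF file fits on cards where the fp16 weights do not.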

It runs for me now, but it lies shamelessly.

Institute for Computer Science, Artificial intelligence and Technology org

tokenizer.model has been added to the repo; you were correct to use the one from v0.1. If the model doesn't fit on your hardware, you can use one of the GGUF quantized versions; otherwise, you need more than 15 GB of VRAM to run it without offloading to the CPU. If you have a capable GPU but not enough memory, I believe llamacpp can help with the opposite: offloading part of the model to the GPU.
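A minimal sketch of the layer-budgeting idea behind partial GPU offloading; the per-layer size and reserve below are made-up illustrative figures, not measured for this model, and the result is the kind of number you would hand to llama.cpp as its GPU-layer setting:

```python
def layers_that_fit(vram_gb: float, layer_gb: float,
                    reserve_gb: float = 1.0) -> int:
    """How many transformer layers fit in VRAM, keeping a reserve for the KV cache."""
    usable = max(vram_gb - reserve_gb, 0.0)
    return int(usable // layer_gb)

# e.g. an 8 GB card with ~0.5 GB per quantized layer (illustrative numbers only)
print(layers_that_fit(8.0, 0.5))  # → 14
```

Layers that don't fit stay on the CPU, so the model still runs, just with slower inference on the offloaded remainder.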
