How to train this model with text-generation-webui?

#7 opened by wmr

I tried training with an alpaca-style dataset in text-generation-webui, and it gave the following error during training:

(...) training.py", line 247, in tokenize
    result = shared.tokenizer(prompt, truncation=True, max_length=cutoff_len + 1, padding="max_length")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'LlamaCppModel' object is not callable

I'm using q5_1.

You can't train GGML models, I'm afraid.

You'll need a model that can run on the GPU: either an unquantised HF model, or (I think) a GPTQ 4-bit model. I've never tried training in text-generation-webui myself, so I'm not sure of the specifics. But it definitely can't work on a GGML model with llama.cpp (not yet, anyway; maybe llama.cpp will add that in the future!)
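For what it's worth, the traceback shows why: that line in training.py calls `shared.tokenizer(...)` directly, which assumes a Hugging Face tokenizer object (those are callable), and a llama.cpp-backed `LlamaCppModel` isn't. Here's a minimal sketch of the call the training code is making, run against the HF model's tokenizer; `cutoff_len` and the prompt are illustrative values, not webui defaults:

```python
# Sketch of the call from training.py line 247, against a HF tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/wizardLM-7B-HF")
tokenizer.pad_token_id = 0  # LLaMA tokenizers ship without a pad token

cutoff_len = 256  # illustrative
prompt = "Below is an instruction that describes a task. ..."

# A transformers tokenizer is callable; a llama.cpp model wrapper is not,
# hence the TypeError above.
result = tokenizer(prompt, truncation=True,
                   max_length=cutoff_len + 1, padding="max_length")
print(len(result["input_ids"]))  # -> cutoff_len + 1
```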

How can I add new knowledge to this model then, without a full retraining? Thanks.

You would do fine-tuning on the HF model, available here: https://huggingface.co/TheBloke/wizardLM-7B-HF
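If it helps, here's a minimal sketch of what that fine-tuning can look like with transformers + peft (LoRA adapters). This isn't the exact text-generation-webui pipeline; the dataset path, the hyperparameters, and the assumption that each record has a single "text" field containing the full prompt are all placeholders:

```python
# Minimal LoRA fine-tuning sketch for the unquantised HF model.
# Not the text-generation-webui pipeline; paths and hyperparameters
# are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Attach small trainable LoRA adapters instead of updating all 7B weights.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# "alpaca_data.json" is a placeholder; assumes each record has a "text"
# field already holding the full alpaca-style prompt.
data = load_dataset("json", data_files="alpaca_data.json")["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=256))

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=3, learning_rate=2e-4,
                           fp16=True, logging_steps=10),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

The point of LoRA is that only the small adapter matrices are trained, so the VRAM cost is far below a full fine-tune, and the output is a small adapter you load on top of the base model.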

You'll need a GPU with enough VRAM, though, which means at least 16GB. If you have less, you could investigate doing the fine-tuning in 4-bit, e.g. check out these repos (a rough sketch of the general 4-bit approach follows the links):
https://github.com/johnsmith0031/alpaca_lora_4bit
https://github.com/stochasticai/xturing/tree/main/examples/int4_finetuning
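Those two repos each have their own setup, so follow their READMEs. For reference, here's a rough sketch of the same underlying idea (train LoRA adapters on top of frozen 4-bit base weights) using the bitsandbytes route in recent transformers/peft; note this is a different implementation than either repo above:

```python
# Sketch of the bitsandbytes 4-bit route (QLoRA-style): load the base
# weights quantised to 4-bit, then train LoRA adapters on top.
# Requires recent transformers, peft and bitsandbytes.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/wizardLM-7B-HF", quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # casts norms, enables grads
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))
# From here, training proceeds as in the LoRA sketch above, with the
# frozen base weights taking roughly a quarter of the fp16 VRAM.
```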
