
Creating a LoRA file via oobabooga

#1
by lucasuniverse - opened

Good morning everyone,

I would like to know how I can generate a LoRA file for the model mistralreloadbr_v2_ptbr.Q4_K_M.gguf. I am trying to do this through oobabooga, but an error always occurs. Is there another way to generate the LoRA file?

TypeError: LlamaCppModel.encode() got an unexpected keyword argument 'truncation'
11:49:28-483516 WARNING LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models.
(Found model type: LlamaCppModel)
11:49:33-488129 INFO Loading raw text file dataset
Traceback (most recent call last):
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\queueing.py", line 561, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 260, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1741, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1308, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 575, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 568, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 551, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\installer_files\env\Lib\site-packages\gradio\utils.py", line 734, in gen_wrapper
response = next(iterator)
^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\modules\training.py", line 456, in do_train
train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\modules\training.py", line 456, in <listcomp>
train_data = Dataset.from_list([tokenize(x) for x in text_chunks])
^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\modules\training.py", line 374, in tokenize
input_ids = encode(prompt, True)
^^^^^^^^^^^^^^^^^^^^
File "C:\AI\webUI\oobabooga\text-generation-webui-main\modules\training.py", line 362, in encode
result = shared.tokenizer.encode(text, truncation=True, max_length=cutoff_len)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: LlamaCppModel.encode() got an unexpected keyword argument 'truncation'
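For context, the traceback boils down to a keyword-argument mismatch: the training code calls the loaded tokenizer's encode() with truncation and max_length (Hugging Face tokenizer parameters), but a GGUF model is loaded through the llama.cpp wrapper, whose encode() does not accept those keywords. A minimal sketch of that failure mode (the class and function names below are illustrative stand-ins, not oobabooga's actual code):

```python
# Sketch of the mismatch: the trainer assumes a Hugging Face-style
# tokenizer signature, but the GGUF wrapper exposes a simpler one.

class HFStyleTokenizer:
    # Hugging Face-style encode(): accepts truncation/max_length keywords.
    def encode(self, text, truncation=False, max_length=None):
        ids = [ord(c) for c in text]  # stand-in for real tokenization
        if truncation and max_length is not None:
            ids = ids[:max_length]
        return ids

class LlamaCppStyleWrapper:
    # llama.cpp-style encode(): takes only the text, no HF keywords.
    def encode(self, text):
        return [ord(c) for c in text]

def tokenize(tokenizer, prompt, cutoff_len=256):
    # This mirrors what the training code effectively does:
    return tokenizer.encode(prompt, truncation=True, max_length=cutoff_len)

tokenize(HFStyleTokenizer(), "hello")       # works
try:
    tokenize(LlamaCppStyleWrapper(), "hello")
except TypeError as e:
    # Same class of error as the traceback:
    # encode() got an unexpected keyword argument 'truncation'
    print(type(e).__name__, e)
```

This is consistent with the WARNING line in the log: LoRA training in the webui is only validated for Transformers-format models, so one common route is to fine-tune against the original (non-GGUF) Hugging Face checkpoint of the model and only convert/quantize to GGUF afterward.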
