Adapt to the new transformers release (https://github.com/huggingface/transformers/pull/31116)

#58

(screenshot 2024-07-02_172230.png: the return-value change in the new transformers release)
The new transformers release changed the return value shown in the screenshot above, which causes the error below.
This patch does not yet handle the case where the returned value is cache_params; anyone willing to complete that part is welcome. A hedged sketch of one way to do it follows the traceback below.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████| 10/10 [00:15<00:00, 1.52s/it]
C:\Users\Hiber\.cache\huggingface\modules\transformers_modules\model\modeling_chatglm.py:271: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  context_layer = torch.nn.functional.scaled_dot_product_attention(query_layer, key_layer, value_layer,
Traceback (most recent call last):
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\runpy.py", line 198, in _run_module_as_main
    return _run_code(code, main_globals, None,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\runpy.py", line 88, in _run_code
    exec(code, run_globals)
  File "c:\Users\Hiber\.vscode\extensions\ms-python.debugpy-2024.6.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy\__main__.py", line 39, in <module>
    cli.main()
  File "c:\Users\Hiber\.vscode\extensions\ms-python.debugpy-2024.6.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 430, in main
    run()
  File "c:\Users\Hiber\.vscode\extensions\ms-python.debugpy-2024.6.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 284, in run_file
    runpy.run_path(target, run_name="__main__")
  File "c:\Users\Hiber\.vscode\extensions\ms-python.debugpy-2024.6.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.vscode\extensions\ms-python.debugpy-2024.6.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "c:\Users\Hiber\.vscode\extensions\ms-python.debugpy-2024.6.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
    exec(code, run_globals)
  File "D:\works\gml4-9b-chat\main.py", line 35, in <module>
    outputs = model.generate(**inputs, **gen_kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\transformers\generation\utils.py", line 1914, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\transformers\generation\utils.py", line 2651, in _sample
    outputs = self(
              ^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\accelerate\hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hiber\.cache\huggingface\modules\transformers_modules\model\modeling_chatglm.py", line 1005, in forward
    transformer_outputs = self.transformer(
                          ^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\accelerate\hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hiber\.cache\huggingface\modules\transformers_modules\model\modeling_chatglm.py", line 901, in forward
    hidden_states, presents, all_hidden_states, all_self_attentions = self.encoder(
                                                                      ^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\accelerate\hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hiber\.cache\huggingface\modules\transformers_modules\model\modeling_chatglm.py", line 726, in forward
    layer_ret = layer(
                ^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\accelerate\hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hiber\.cache\huggingface\modules\transformers_modules\model\modeling_chatglm.py", line 629, in forward
    attention_output, kv_cache = self.self_attention(
                                 ^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\Hiber\.pyenv\pyenv-win\versions\3.11.2\Lib\site-packages\accelerate\hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Hiber\.cache\huggingface\modules\transformers_modules\model\modeling_chatglm.py", line 494, in forward
    cache_k, cache_v = kv_cache
    ^^^^^^^^^^^^^^^^
ValueError: too many values to unpack (expected 2)
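
For anyone blocked on this before the PR is merged, here is a minimal, version-tolerant sketch of the change, not the exact diff in this PR. It assumes that since transformers PR #31116, _extract_past_from_model_output returns a (cache_name, cache) tuple, where cache_name may be "past_key_values", "cache_params", or "past_buckets_states", while older releases return the cache object alone, and that the helper still exists in your transformers version. Storing the cache under whatever name transformers reports is what covers the cache_params case asked about above; everything after the unpack is the stock modeling_chatglm.py logic.

```python
from typing import Any, Dict

import torch
from transformers.utils import ModelOutput


def _update_model_kwargs_for_generation(
        self,
        outputs: ModelOutput,
        model_kwargs: Dict[str, Any],
        is_encoder_decoder: bool = False,
        **kwargs,  # absorbs version-specific flags such as standardize_cache_format
) -> Dict[str, Any]:
    # New transformers returns (cache_name, cache); old ones return the bare cache.
    extracted = self._extract_past_from_model_output(outputs)
    if isinstance(extracted, tuple) and len(extracted) == 2 and isinstance(extracted[0], str):
        cache_name, cache = extracted  # new behaviour, also covers cache_params
    else:
        cache_name, cache = "past_key_values", extracted  # old behaviour
    model_kwargs[cache_name] = cache

    # update attention mask (unchanged from the stock modeling_chatglm.py)
    if "attention_mask" in model_kwargs:
        attention_mask = model_kwargs["attention_mask"]
        model_kwargs["attention_mask"] = torch.cat(
            [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1
        )

    # update position ids (unchanged from the stock modeling_chatglm.py)
    if "position_ids" in model_kwargs:
        position_ids = model_kwargs["position_ids"]
        new_position_id = position_ids[..., -1:].clone()
        new_position_id += 1
        model_kwargs["position_ids"] = torch.cat([position_ids, new_position_id], dim=-1)

    model_kwargs["is_first_forward"] = False
    return model_kwargs
```

If your transformers version has dropped _extract_past_from_model_output entirely, reading outputs.past_key_values directly is the fallback; treat this as a starting point rather than the final patch.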

my savior thxxxxxxx

glad this is helpful :P

