runtime error

Traceback (most recent call last):
  File "/home/user/app/app.py", line 21, in <module>
    model = model.to_bettertransformer()
  File "/home/user/.pyenv/versions/3.10.14/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4310, in to_bettertransformer
    return BetterTransformer.transform(self)
  File "/home/user/.pyenv/versions/3.10.14/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/home/user/.pyenv/versions/3.10.14/lib/python3.10/site-packages/optimum/bettertransformer/transformation.py", line 211, in transform
    raise ValueError(
ValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type whisper. As such, there is no need to use `model.to_bettertransformers()` or `BetterTransformer.transform(model)` from the Optimum library. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it. Details: https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention.
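
Per the error message, the remedy is to drop the `model.to_bettertransformer()` call on line 21 of app.py and rely on the SDPA attention path that is built into transformers>=4.36 with torch>=2.1.1. A minimal sketch of the corrected loading code, assuming a Whisper checkpoint such as openai/whisper-small (the actual checkpoint used in app.py is not shown in the log):

# Requires transformers>=4.36 and torch>=2.1.1; Optimum/BetterTransformer is no longer needed.
import torch
from transformers import WhisperForConditionalGeneration

# Checkpoint name is an assumption -- substitute the model actually loaded in app.py.
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-small",
    torch_dtype=torch.float16,
    attn_implementation="sdpa",  # use torch.nn.functional.scaled_dot_product_attention natively
)

# Remove the old conversion entirely:
# model = model.to_bettertransformer()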
