runtime error

config.json: 100%|██████████| 666/666 [00:00<00:00, 4.58MB/s]
vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 37.9MB/s]
merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 97.8MB/s]
tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 87.1MB/s]

ORTModelForCausalLM loaded a legacy ONNX model with no position_ids input, although this input is required for batched generation for the architecture gpt2. We strongly encourage to re-export the model with optimum>=1.14 for position_ids and batched inference support.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    model = ORTModelForCausalLM(ort_sess, model_save_dir=".", config=config)
  File "/home/user/.local/lib/python3.10/site-packages/optimum/onnxruntime/modeling_decoder.py", line 166, in __init__
    if use_cache ^ self.use_cache:
TypeError: unsupported operand type(s) for ^: 'NoneType' and 'bool'
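The traceback shows that `use_cache` is still `None` when the constructor evaluates `use_cache ^ self.use_cache`, and `None ^ bool` is not a valid operation. A minimal sketch of a workaround, assuming the legacy export really has no past key/value inputs (so `use_cache=False` matches it) and that the ONNX file is named `decoder_model.onnx` in the working directory (hypothetical filename):

```python
import onnxruntime
from transformers import AutoConfig
from optimum.onnxruntime import ORTModelForCausalLM

# Hypothetical paths: adjust to wherever the config and ONNX file live.
config = AutoConfig.from_pretrained(".")
ort_sess = onnxruntime.InferenceSession("decoder_model.onnx")

# Passing use_cache explicitly keeps the constructor's
# `use_cache ^ self.use_cache` check from ever seeing None;
# False matches a legacy export without past_key_values inputs.
model = ORTModelForCausalLM(
    ort_sess,
    config=config,
    model_save_dir=".",
    use_cache=False,
)
```

Alternatively, following the warning above, re-exporting with optimum>=1.14, e.g. `ORTModelForCausalLM.from_pretrained("gpt2", export=True)`, produces an ONNX model with `position_ids` and cache support and avoids the legacy code path entirely.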
