Runtime error

tokenizer_config.json: 100%|██████████| 212/212 [00:00<00:00, 887kB/s]
tokenizer.json: 100%|██████████| 2.41M/2.41M [00:00<00:00, 40.7MB/s]
special_tokens_map.json: 100%|██████████| 30.0/30.0 [00:00<00:00, 189kB/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
config.json: 100%|██████████| 631/631 [00:00<00:00, 4.36MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2996, in from_pretrained
    raise ImportError(
ImportError: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`
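The downloads complete fine; the app crashes because `from_pretrained` is called with `low_cpu_mem_usage=True` while the `accelerate` package is not installed. The direct fix is the one the error message gives: add `accelerate` to the Space's `requirements.txt` (or run `pip install accelerate`). A minimal sketch of an alternative workaround, assuming you would rather degrade gracefully than hard-require the dependency; `low_mem_kwargs` is a hypothetical helper, not part of the app shown above:

```python
import importlib.util


def low_mem_kwargs():
    """Return extra kwargs for from_pretrained, enabling low_cpu_mem_usage
    only when the `accelerate` package is importable; otherwise return an
    empty dict so the model loads the default (higher-memory) way instead
    of raising ImportError at startup."""
    has_accelerate = importlib.util.find_spec("accelerate") is not None
    return {"low_cpu_mem_usage": True} if has_accelerate else {}
```

Used at the failing call site it would read `model = AutoModelForCausalLM.from_pretrained(model_name, **low_mem_kwargs())`, so the Space starts either way and only uses the low-memory path when Accelerate is available.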
