Runtime error

…oken = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.17 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/49 layers to GPU
llm_load_tensors: CPU buffer size = 7272.38 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 2000
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 375.00 MiB
llama_new_context_with_model: KV self size = 375.00 MiB, K (f16): 187.50 MiB, V (f16): 187.50 MiB
llama_new_context_with_model: CPU input buffer size = 11.92 MiB
llama_new_context_with_model: CPU compute buffer size = 163.90 MiB
llama_new_context_with_model: graph splits (measure): 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Model metadata: {'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.rope.freq_base': '1000000.000000', 'llama.context_length': '32768', 'general.name': 'models', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '48', 'llama.attention.head_count_kv': '8', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '17'}
Traceback (most recent call last):
  File "/home/user/app/app.py", line 115, in <module>
    chatbot = gr.Chatbot(label="δ»₯ηœŸη†δΉ‹ε").style(height=400)
AttributeError: 'Chatbot' object has no attribute 'style'. Did you mean: 'scale'?
