build: 3785 (64c6af31) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
llama_model_loader: loaded meta data with 34 key-value pairs and 338 tensors from Qwen2.5-1.5B-Instruct-IMat-GGUF/Qwen2.5-1.5B-Instruct.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv  0: general.architecture str = qwen2
llama_model_loader: - kv  1: general.type str = model
llama_model_loader: - kv  2: general.name str = Qwen2.5 1.5B Instruct
llama_model_loader: - kv  3: general.finetune str = Instruct
llama_model_loader: - kv  4: general.basename str = Qwen2.5
llama_model_loader: - kv  5: general.size_label str = 1.5B
llama_model_loader: - kv  6: general.license str = apache-2.0
llama_model_loader: - kv  7: general.license.link str = https://huggingface.co/Qwen/Qwen2.5-1...
llama_model_loader: - kv  8: general.base_model.count u32 = 1
llama_model_loader: - kv  9: general.base_model.0.name str = Qwen2.5 1.5B
llama_model_loader: - kv 10: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 11: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen2.5-1.5B
llama_model_loader: - kv 12: general.tags arr[str,2] = ["chat", "text-generation"]
llama_model_loader: - kv 13: general.languages arr[str,1] = ["en"]
llama_model_loader: - kv 14: qwen2.block_count u32 = 28
llama_model_loader: - kv 15: qwen2.context_length u32 = 32768
llama_model_loader: - kv 16: qwen2.embedding_length u32 = 1536
llama_model_loader: - kv 17: qwen2.feed_forward_length u32 = 8960
llama_model_loader: - kv 18: qwen2.attention.head_count u32 = 12
llama_model_loader: - kv 19: qwen2.attention.head_count_kv u32 = 2
llama_model_loader: - kv 20: qwen2.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 21: qwen2.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 22: general.file_type u32 = 7
llama_model_loader: - kv 23: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 24: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 25: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 26: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 27: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 28: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 151643
llama_model_loader: - kv 30: tokenizer.ggml.bos_token_id u32 = 151643
llama_model_loader: - kv 31: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 32: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 33: general.quantization_version u32 = 2
llama_model_loader: - type f32: 141 tensors
llama_model_loader: - type q8_0: 197 tensors
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = qwen2
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 151936
llm_load_print_meta: n_merges = 151387
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 1536
llm_load_print_meta: n_layer = 28
llm_load_print_meta: n_head = 12
llm_load_print_meta: n_head_kv = 2
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 6
llm_load_print_meta: n_embd_k_gqa = 256
llm_load_print_meta: n_embd_v_gqa = 256
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 8960
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 2
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 1.54 B
llm_load_print_meta: model size = 1.53 GiB (8.50 BPW)
llm_load_print_meta: general.name = Qwen2.5 1.5B Instruct
llm_load_print_meta: BOS token = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token = 151645 '<|im_end|>'
llm_load_print_meta: PAD token = 151643 '<|endoftext|>'
llm_load_print_meta: LF token = 148848 'ÄĬ'
llm_load_print_meta: EOT token = 151645 '<|im_end|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.30 MiB
llm_load_tensors: offloading 28 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 29/29 layers to GPU
llm_load_tensors: CPU buffer size = 236.47 MiB
llm_load_tensors: CUDA0 buffer size = 1564.63 MiB
............................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 1000000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 14.00 MiB
llama_new_context_with_model: KV self size = 14.00 MiB, K (f16): 7.00 MiB, V (f16): 7.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.58 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 299.75 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 4.01 MiB
llama_new_context_with_model: graph nodes = 986
llama_new_context_with_model: graph splits = 2
system_info: n_threads = 25 (n_threads_batch = 25) / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | RISCV_VECT = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 131.741 ms
compute_imatrix: computing over 128 chunks with batch_size 512
compute_imatrix: 0.38 seconds per pass - ETA 0.82 minutes
[1]5.6033,[2]4.0235,[3]4.0926,[4]4.7834,[5]4.6231,[6]4.2649,[7]4.8004,[8]4.8789,[9]5.4428,[10]5.2342,[11]5.1214,[12]5.6121,[13]6.2956,[14]6.5763,[15]7.1743,[16]7.5646,[17]7.7295,[18]8.2087,[19]7.9258,[20]8.0246,[21]8.1162,[22]8.1416,[23]7.9441,[24]8.1539,[25]8.3425,[26]8.2438,[27]8.4273,[28]8.6201,[29]8.9422,[30]8.8979,[31]8.6011,[32]8.2590,[33]8.0650,[34]7.9231,[35]7.8159,[36]7.7714,[37]7.7802,[38]7.8808,[39]7.9307,[40]8.1216,[41]8.1891,[42]8.5198,[43]8.8134,[44]9.0846,[45]9.2768,[46]9.4155,[47]9.2804,[48]9.3235,[49]9.4182,[50]9.4775,[51]9.3578,[52]9.4334,[53]9.6170,[54]9.7244,[55]9.8449,[56]9.9123,[57]9.9357,[58]9.9921,[59]10.0049,[60]10.0238,[61]9.9682,[62]9.9255,[63]9.9780,[64]10.0482,[65]9.9664,[66]9.9506,[67]9.9341,[68]9.8251,[69]9.7468,[70]9.7182,[71]9.6565,[72]9.6264,[73]9.6291,[74]9.5306,[75]9.4428,[76]9.3469,[77]9.3104,[78]9.2727,[79]9.2287,[80]9.1295,[81]9.1555,[82]9.1382,[83]9.0660,[84]9.0859,[85]9.0935,[86]9.0300,[87]8.9972,[88]8.9943,[89]9.0173,[90]9.0358,[91]9.0304,[92]8.9541,[93]8.8919,[94]8.8132,[95]8.7379,[96]8.6775,[97]8.6021,[98]8.5266,[99]8.4935,[100]8.5062,[101]8.5271,[102]8.6421,[103]8.7505,[104]8.8356,[105]8.9794,[106]9.0741,[107]9.1126,[108]9.0799,[109]9.0698,[110]9.0640,[111]9.0382,[112]8.9831,[113]8.9977,[114]9.0497,[115]9.0581,[116]9.0733,[117]9.0998,[118]9.1421,[119]9.1443,[120]9.1395,[121]9.1404,[122]9.0914,[123]9.1422,[124]9.2048,[125]9.2599,[126]9.3238,[127]9.3841,[128]9.4452,
Final estimate: PPL = 9.4452 +/- 0.13705

llama_perf_context_print: load time = 1146.30 ms
llama_perf_context_print: prompt eval time = 30735.08 ms / 65536 tokens ( 0.47 ms per token, 2132.29 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 32424.61 ms / 65537 tokens
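
A log like the one above comes from llama.cpp's llama-imatrix tool, which computes an importance matrix over a calibration corpus: here 128 chunks at a 512-token context, i.e. 128 x 512 = 65,536 tokens, matching the prompt eval count in the final perf lines. As a rough sketch of the invocation, using the flags implied by the log (the calibration file and output names below are illustrative assumptions, since the actual command is not shown, and flag names can vary between llama.cpp builds):

# Sketch only: calibration.txt and imatrix.dat are assumed names.
# -ngl 29, -c 512, and -t 25 mirror the offload, context, and thread counts in the log.
./llama-imatrix \
  -m Qwen2.5-1.5B-Instruct-IMat-GGUF/Qwen2.5-1.5B-Instruct.Q8_0.gguf.hardlink.gguf \
  -f calibration.txt \
  -o imatrix.dat \
  -ngl 29 -c 512 -t 25

The bracketed running values ([1]5.6033 ... [128]9.4452) are the cumulative perplexity estimate after each chunk, which is why the last value equals the final PPL estimate of 9.4452.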