legraphista committed on
Commit
265e934
1 Parent(s): ceb2adc

Upload imatrix.log with huggingface_hub

Files changed (1)
  1. imatrix.log +148 -0
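
The commit message says the log was uploaded with huggingface_hub. Below is a minimal sketch of such an upload; the repo id and file paths are assumptions inferred from this repository, not confirmed by the commit itself:

from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="imatrix.log",                      # local log file to upload
    path_in_repo="imatrix.log",                         # destination path inside the model repo
    repo_id="legraphista/mathstral-7B-v0.1-IMat-GGUF",  # assumed repo id, for illustration only
    commit_message="Upload imatrix.log with huggingface_hub",
)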
imatrix.log ADDED
@@ -0,0 +1,148 @@
+ llama_model_loader: loaded meta data with 26 key-value pairs and 291 tensors from mathstral-7B-v0.1-IMat-GGUF/mathstral-7B-v0.1.Q8_0.gguf.hardlink.gguf (version GGUF V3 (latest))
+ llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
+ llama_model_loader: - kv 0: general.architecture str = llama
+ llama_model_loader: - kv 1: general.type str = model
+ llama_model_loader: - kv 2: general.name str = mathstral-7B-v0.1
+ llama_model_loader: - kv 3: llama.block_count u32 = 32
+ llama_model_loader: - kv 4: llama.context_length u32 = 32768
+ llama_model_loader: - kv 5: llama.embedding_length u32 = 4096
+ llama_model_loader: - kv 6: llama.feed_forward_length u32 = 14336
+ llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
+ llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
+ llama_model_loader: - kv 9: llama.rope.freq_base f32 = 1000000.000000
+ llama_model_loader: - kv 10: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
+ llama_model_loader: - kv 11: general.file_type u32 = 7
+ llama_model_loader: - kv 12: llama.vocab_size u32 = 32768
+ llama_model_loader: - kv 13: llama.rope.dimension_count u32 = 128
+ llama_model_loader: - kv 14: tokenizer.ggml.add_space_prefix bool = false
+ llama_model_loader: - kv 15: tokenizer.ggml.model str = llama
+ llama_model_loader: - kv 16: tokenizer.ggml.pre str = default
+ llama_model_loader: - kv 17: tokenizer.ggml.tokens arr[str,32768] = ["<unk>", "<s>", "</s>", "[INST]", "[...
+ llama_model_loader: - kv 18: tokenizer.ggml.scores arr[f32,32768] = [-1000.000000, -1000.000000, -1000.00...
+ llama_model_loader: - kv 19: tokenizer.ggml.token_type arr[i32,32768] = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
+ llama_model_loader: - kv 20: tokenizer.ggml.bos_token_id u32 = 1
+ llama_model_loader: - kv 21: tokenizer.ggml.eos_token_id u32 = 2
+ llama_model_loader: - kv 22: tokenizer.ggml.unknown_token_id u32 = 0
+ llama_model_loader: - kv 23: tokenizer.ggml.add_bos_token bool = true
+ llama_model_loader: - kv 24: tokenizer.ggml.add_eos_token bool = false
+ llama_model_loader: - kv 25: general.quantization_version u32 = 2
+ llama_model_loader: - type f32: 65 tensors
+ llama_model_loader: - type q8_0: 226 tensors
+ llm_load_vocab: special tokens cache size = 771
+ llm_load_vocab: token to piece cache size = 0.1732 MB
+ llm_load_print_meta: format = GGUF V3 (latest)
+ llm_load_print_meta: arch = llama
+ llm_load_print_meta: vocab type = SPM
+ llm_load_print_meta: n_vocab = 32768
+ llm_load_print_meta: n_merges = 0
+ llm_load_print_meta: vocab_only = 0
+ llm_load_print_meta: n_ctx_train = 32768
+ llm_load_print_meta: n_embd = 4096
+ llm_load_print_meta: n_layer = 32
+ llm_load_print_meta: n_head = 32
+ llm_load_print_meta: n_head_kv = 8
+ llm_load_print_meta: n_rot = 128
+ llm_load_print_meta: n_swa = 0
+ llm_load_print_meta: n_embd_head_k = 128
+ llm_load_print_meta: n_embd_head_v = 128
+ llm_load_print_meta: n_gqa = 4
+ llm_load_print_meta: n_embd_k_gqa = 1024
+ llm_load_print_meta: n_embd_v_gqa = 1024
+ llm_load_print_meta: f_norm_eps = 0.0e+00
+ llm_load_print_meta: f_norm_rms_eps = 1.0e-05
+ llm_load_print_meta: f_clamp_kqv = 0.0e+00
+ llm_load_print_meta: f_max_alibi_bias = 0.0e+00
+ llm_load_print_meta: f_logit_scale = 0.0e+00
+ llm_load_print_meta: n_ff = 14336
+ llm_load_print_meta: n_expert = 0
+ llm_load_print_meta: n_expert_used = 0
+ llm_load_print_meta: causal attn = 1
+ llm_load_print_meta: pooling type = 0
+ llm_load_print_meta: rope type = 0
+ llm_load_print_meta: rope scaling = linear
+ llm_load_print_meta: freq_base_train = 1000000.0
+ llm_load_print_meta: freq_scale_train = 1
+ llm_load_print_meta: n_ctx_orig_yarn = 32768
+ llm_load_print_meta: rope_finetuned = unknown
+ llm_load_print_meta: ssm_d_conv = 0
+ llm_load_print_meta: ssm_d_inner = 0
+ llm_load_print_meta: ssm_d_state = 0
+ llm_load_print_meta: ssm_dt_rank = 0
+ llm_load_print_meta: model type = 7B
+ llm_load_print_meta: model ftype = Q8_0
+ llm_load_print_meta: model params = 7.25 B
+ llm_load_print_meta: model size = 7.17 GiB (8.50 BPW)
+ llm_load_print_meta: general.name = mathstral-7B-v0.1
+ llm_load_print_meta: BOS token = 1 '<s>'
+ llm_load_print_meta: EOS token = 2 '</s>'
+ llm_load_print_meta: UNK token = 0 '<unk>'
+ llm_load_print_meta: LF token = 781 '<0x0A>'
+ llm_load_print_meta: max token length = 48
+ ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
+ ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
+ ggml_cuda_init: found 1 CUDA devices:
+ Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
+ llm_load_tensors: ggml ctx size = 0.27 MiB
+ llm_load_tensors: offloading 32 repeating layers to GPU
+ llm_load_tensors: offloading non-repeating layers to GPU
+ llm_load_tensors: offloaded 33/33 layers to GPU
+ llm_load_tensors: CPU buffer size = 136.00 MiB
+ llm_load_tensors: CUDA0 buffer size = 7209.02 MiB
+ ...................................................................................................
+ llama_new_context_with_model: n_ctx = 512
+ llama_new_context_with_model: n_batch = 512
+ llama_new_context_with_model: n_ubatch = 512
+ llama_new_context_with_model: flash_attn = 0
+ llama_new_context_with_model: freq_base = 1000000.0
+ llama_new_context_with_model: freq_scale = 1
+ llama_kv_cache_init: CUDA0 KV buffer size = 64.00 MiB
+ llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
+ llama_new_context_with_model: CUDA_Host output buffer size = 0.12 MiB
+ llama_new_context_with_model: CUDA0 compute buffer size = 81.00 MiB
+ llama_new_context_with_model: CUDA_Host compute buffer size = 9.01 MiB
+ llama_new_context_with_model: graph nodes = 1030
+ llama_new_context_with_model: graph splits = 2
+
+ system_info: n_threads = 25 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 |
+ compute_imatrix: tokenizing the input ..
+ compute_imatrix: tokenization took 92.115 ms
+ compute_imatrix: computing over 148 chunks with batch_size 512
+ compute_imatrix: 0.69 seconds per pass - ETA 1.70 minutes
+ [1]3.5452,[2]2.8329,[3]3.0262,[4]3.1460,[5]3.5080,[6]3.5694,[7]3.2126,[8]3.6830,[9]3.8318,
+ save_imatrix: stored collected data after 10 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [10]4.1986,[11]4.3073,[12]4.0355,[13]4.2441,[14]4.5271,[15]4.9188,[16]5.0713,[17]5.3272,[18]5.4369,[19]5.5136,
+ save_imatrix: stored collected data after 20 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [20]5.6691,[21]5.6314,[22]5.4412,[23]5.5454,[24]5.5499,[25]5.5891,[26]5.4078,[27]5.6123,[28]5.5037,[29]5.5842,
+ save_imatrix: stored collected data after 30 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [30]5.5128,[31]5.6122,[32]5.7471,[33]5.9049,[34]5.9470,[35]5.8957,[36]5.6761,[37]5.5096,[38]5.3793,[39]5.2930,
+ save_imatrix: stored collected data after 40 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [40]5.2242,[41]5.2013,[42]5.1347,[43]5.0977,[44]5.0190,[45]4.9909,[46]5.0124,[47]5.0766,[48]5.1782,[49]5.2092,
+ save_imatrix: stored collected data after 50 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [50]5.3932,[51]5.5280,[52]5.7135,[53]5.8522,[54]5.9860,[55]5.9293,[56]5.8975,[57]5.9817,[58]6.0532,[59]6.0657,
+ save_imatrix: stored collected data after 60 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [60]6.0003,[61]5.9817,[62]5.9937,[63]6.0510,[64]6.1530,[65]6.2290,[66]6.2539,[67]6.2774,[68]6.3050,[69]6.3105,
+ save_imatrix: stored collected data after 70 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [70]6.3049,[71]6.2421,[72]6.1953,[73]6.1624,[74]6.1771,[75]6.2074,[76]6.1869,[77]6.1996,[78]6.2031,[79]6.1725,
+ save_imatrix: stored collected data after 80 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [80]6.1499,[81]6.1183,[82]6.1181,[83]6.1110,[84]6.1010,[85]6.1118,[86]6.0884,[87]6.0691,[88]6.0443,[89]6.0441,
+ save_imatrix: stored collected data after 90 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [90]6.0145,[91]5.9830,[92]5.9591,[93]5.9387,[94]5.9632,[95]5.9798,[96]5.9557,[97]5.9501,[98]5.9288,[99]5.9606,
+ save_imatrix: stored collected data after 100 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [100]5.9009,[101]5.8985,[102]5.8895,[103]5.9036,[104]5.9168,[105]5.9061,[106]5.8697,[107]5.8317,[108]5.7936,[109]5.7524,
+ save_imatrix: stored collected data after 110 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [110]5.7113,[111]5.6772,[112]5.6418,[113]5.6044,[114]5.5670,[115]5.5359,[116]5.5421,[117]5.5597,[118]5.6181,[119]5.6725,
+ save_imatrix: stored collected data after 120 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [120]5.7221,[121]5.8018,[122]5.8702,[123]5.8742,[124]5.8832,[125]5.8403,[126]5.8329,[127]5.8244,[128]5.8196,[129]5.7959,
+ save_imatrix: stored collected data after 130 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [130]5.7644,[131]5.7874,[132]5.8225,[133]5.8230,[134]5.8249,[135]5.8384,[136]5.8640,[137]5.8734,[138]5.8819,[139]5.9024,
+ save_imatrix: stored collected data after 140 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+ [140]5.9061,[141]5.8953,[142]5.9452,[143]5.9894,[144]6.0205,[145]6.0665,[146]6.1046,[147]6.1571,[148]6.1966,
+ save_imatrix: stored collected data after 148 chunks in mathstral-7B-v0.1-IMat-GGUF/imatrix.dat
+
+ llama_print_timings: load time = 3695.51 ms
+ llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
+ llama_print_timings: prompt eval time = 83480.72 ms / 75776 tokens ( 1.10 ms per token, 907.71 tokens per second)
+ llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
+ llama_print_timings: total time = 86882.14 ms / 75777 tokens
+
+ Final estimate: PPL = 6.1966 +/- 0.07627
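
A log like the one above is produced by llama.cpp's imatrix tool. As a rough, unconfirmed sketch of how such a run could be launched from Python, using the model path, output path, and context/batch/offload settings visible in the log (the binary name and the calibration file name are assumptions, since the actual command line is not recorded in the log):

import subprocess

# Hedged sketch only: the exact invocation used for this log is not shown above.
subprocess.run(
    [
        "./llama-imatrix",  # assumed binary name; older llama.cpp builds ship it as ./imatrix
        "-m", "mathstral-7B-v0.1-IMat-GGUF/mathstral-7B-v0.1.Q8_0.gguf.hardlink.gguf",  # model path from the log
        "-f", "calibration.txt",  # calibration text; the real file name is not in the log
        "-o", "mathstral-7B-v0.1-IMat-GGUF/imatrix.dat",  # output path from the save_imatrix lines
        "-ngl", "33",  # offload all 33 layers, matching llm_load_tensors above
        "-c", "512",   # n_ctx from the log
        "-b", "512",   # n_batch from the log
    ],
    check=True,
)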