
CUDA error 700

#11
by pshah - opened

Hi, I am new to AI/ML. I believe my hardware has enough resources to run this model (250 GB+ RAM and 8 A6000 cards), and I have installed all the libraries correctly, but I am facing the issue below.
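
For context, this is roughly how I am loading the model through LangChain (a minimal sketch reconstructed from the log; the value of `n_gqa` here is an assumption, since the warning below only shows that it was passed as a non-default keyword argument):

```python
# Sketch of the loading code (paths and settings taken from the log below;
# the n_gqa value is illustrative, not the exact value from my script).
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="models/llms/falcon-180b-chat.Q4_K_M.gguf",
    n_ctx=4096,        # matches "n_ctx = 4096" in the log
    n_gpu_layers=80,   # matches "offloaded 80/83 layers to GPU"
    n_gqa=8,           # assumed value; non-default, so LangChain moves it
                       # to model_kwargs and emits the UserWarning below
    verbose=True,
)
```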

2023-09-22 13:08:09,188 [INFO] Load pretrained SentenceTransformer: models/embedding_models/all-mpnet-base-v2/
2023-09-22 13:08:09,324 [INFO] Created a temporary directory at /tmp/tmp2q7f4zri
2023-09-22 13:08:09,325 [INFO] Writing /tmp/tmp2q7f4zri/_remote_module_non_scriptable.py
2023-09-22 13:08:11,008 [INFO] Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2023-09-22 13:08:11,034 [INFO] Successfully imported ClickHouse Connect C data optimizations
2023-09-22 13:08:11,040 [INFO] Using python library for writing JSON byte strings
Batches: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:01<00:00, 1.05s/it]
/home/condor/.local/lib/python3.10/site-packages/langchain/utils/utils.py:157: UserWarning: WARNING! n_gqa is not default parameter.
n_gqa was transferred to model_kwargs.
Please confirm that n_gqa is what you intended.
warnings.warn(
ggml_init_cublas: found 8 CUDA devices:
Device 0: NVIDIA RTX A6000, compute capability 8.6
Device 1: NVIDIA RTX A6000, compute capability 8.6
Device 2: NVIDIA RTX A6000, compute capability 8.6
Device 3: NVIDIA RTX A6000, compute capability 8.6
Device 4: NVIDIA RTX A6000, compute capability 8.6
Device 5: NVIDIA RTX A6000, compute capability 8.6
Device 6: NVIDIA RTX A6000, compute capability 8.6
Device 7: NVIDIA RTX A6000, compute capability 8.6
llama_model_loader: loaded meta data with 18 key-value pairs and 644 tensors from models/llms/falcon-180b-chat.Q4_K_M.gguf (version GGUF V2 (latest))
llama_model_loader: - tensor 0: blk.0.ffn_up.weight q4_K [ 14848, 59392, 1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_output.weight q4_K [ 14848, 14848, 1, 1 ]
llama_model_loader: - tensor 2: blk.0.attn_qkv.weight q5_K [ 14848, 15872, 1, 1 ]
llama_model_loader: - tensor 3: token_embd.weight q4_K [ 14848, 65024, 1, 1 ]
llama_model_loader: - tensor 4: blk.0.attn_norm_2.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 5: blk.0.attn_norm_2.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 6: blk.0.attn_norm.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 7: blk.0.attn_norm.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 8: blk.0.ffn_down.weight q6_K [ 59392, 14848, 1, 1 ]
llama_model_loader: - tensor 9: blk.1.ffn_up.weight q4_K [ 14848, 59392, 1, 1 ]
llama_model_loader: - tensor 10: blk.1.attn_output.weight q4_K [ 14848, 14848, 1, 1 ]
llama_model_loader: - tensor 11: blk.1.attn_qkv.weight q5_K [ 14848, 15872, 1, 1 ]
llama_model_loader: - tensor 12: blk.1.attn_norm_2.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 13: blk.1.attn_norm_2.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 14: blk.1.attn_norm.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 15: blk.1.attn_norm.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 16: blk.1.ffn_down.weight q6_K [ 59392, 14848, 1, 1 ]
[... tensors 17-632 (blocks 2 through 78) omitted for brevity: each block repeats the same eight-tensor pattern shown for blocks 0 and 1 above, with ffn_down quantized as q4_K or q5_K depending on the block ...]
llama_model_loader: - tensor 633: blk.79.ffn_up.weight q4_K [ 14848, 59392, 1, 1 ]
llama_model_loader: - tensor 634: blk.79.attn_output.weight q4_K [ 14848, 14848, 1, 1 ]
llama_model_loader: - tensor 635: blk.79.attn_qkv.weight q5_K [ 14848, 15872, 1, 1 ]
llama_model_loader: - tensor 636: output.weight q8_0 [ 14848, 65024, 1, 1 ]
llama_model_loader: - tensor 637: blk.79.attn_norm_2.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 638: blk.79.attn_norm_2.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 639: blk.79.attn_norm.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 640: blk.79.attn_norm.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 641: blk.79.ffn_down.weight q5_K [ 59392, 14848, 1, 1 ]
llama_model_loader: - tensor 642: output_norm.bias f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - tensor 643: output_norm.weight f32 [ 14848, 1, 1, 1 ]
llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: falcon.context_length u32
llama_model_loader: - kv 3: falcon.tensor_data_layout str
llama_model_loader: - kv 4: falcon.embedding_length u32
llama_model_loader: - kv 5: falcon.feed_forward_length u32
llama_model_loader: - kv 6: falcon.block_count u32
llama_model_loader: - kv 7: falcon.attention.head_count u32
llama_model_loader: - kv 8: falcon.attention.head_count_kv u32
llama_model_loader: - kv 9: falcon.attention.layer_norm_epsilon f32
llama_model_loader: - kv 10: general.file_type u32
llama_model_loader: - kv 11: tokenizer.ggml.model str
llama_model_loader: - kv 12: tokenizer.ggml.tokens arr
llama_model_loader: - kv 13: tokenizer.ggml.scores arr
llama_model_loader: - kv 14: tokenizer.ggml.token_type arr
llama_model_loader: - kv 15: tokenizer.ggml.merges arr
llama_model_loader: - kv 16: tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv 17: general.quantization_version u32
llama_model_loader: - type f32: 322 tensors
llama_model_loader: - type q8_0: 1 tensors
llama_model_loader: - type q4_K: 201 tensors
llama_model_loader: - type q5_K: 118 tensors
llama_model_loader: - type q6_K: 2 tensors
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = falcon
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 65024
llm_load_print_meta: n_merges = 64784
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_ctx = 4096
llm_load_print_meta: n_embd = 14848
llm_load_print_meta: n_head = 232
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_gqa = 29
llm_load_print_meta: f_norm_eps = 1.0e-05
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff = 59392
llm_load_print_meta: freq_base = 10000.0
llm_load_print_meta: freq_scale = 1
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = mostly Q4_K - Medium
llm_load_print_meta: model size = 179.52 B
llm_load_print_meta: general.name = Falcon
llm_load_print_meta: BOS token = 11 '<|endoftext|>'
llm_load_print_meta: EOS token = 11 '<|endoftext|>'
llm_load_print_meta: LF token = 193 '\n'
llm_load_tensors: ggml ctx size = 103455.55 MB
llm_load_tensors: using CUDA for GPU acceleration
ggml_cuda_set_main_device: using device 0 (NVIDIA RTX A6000) as main device
llm_load_tensors: mem required = 1496.54 MB (+ 640.00 MB per state)
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloaded 80/83 layers to GPU
llm_load_tensors: VRAM used: 101960 MB
....................................................................................................
llama_new_context_with_model: kv self size = 640.00 MB
llama_new_context_with_model: compute buffer total size = 1975.47 MB
llama_new_context_with_model: VRAM scratch buffer: 1974.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Batches: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 27.68it/s]

CUDA error 700 at /tmp/pip-install-725h88a0/llama-cpp-python_5ff78294af3a4aa6b0df5a4b36561890/vendor/llama.cpp/ggml-cuda.cu:6411: an illegal memory access was encountered
current device: 1

Can someone guide me in the right direction to fix this?
