Steelskull/L3.3-Electra-R1-70b-GGUF

This repo provides several GGUF imatrix quantizations of Steelskull/L3.3-Electra-R1-70b.
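
If you only need a single quantization rather than the whole repo, a minimal download sketch using huggingface_hub is below. The exact .gguf filename is an assumption (larger quants may also be split across multiple files), so check the repo's file listing for the real names.

```python
# Sketch: download one quantized GGUF file from this repo via huggingface_hub.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="ddh0/L3.3-Electra-R1-70b-GGUF",
    filename="L3.3-Electra-R1-70b-Q4_K_M.gguf",  # assumed filename; adjust to the quant you want
)
print(gguf_path)  # local path to the downloaded file
```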

Quantizations (worst to best)

  • IQ2_M
  • IQ3_XS
  • IQ3_M
  • Q4_K_S
  • IQ4_XS
  • Q4_K_M
  • Q5_K_S
  • Q5_K_M
  • Q6_K
  • Q8_0
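
Once a file is downloaded, it can be loaded with any llama.cpp-based runtime. A minimal sketch using llama-cpp-python is below; the model path, context size, and GPU offload settings are assumptions to adjust for your hardware.

```python
# Sketch: load a downloaded quant with llama-cpp-python and run a short chat completion.
from llama_cpp import Llama

llm = Llama(
    model_path="L3.3-Electra-R1-70b-Q4_K_M.gguf",  # assumed local path to the quant you downloaded
    n_ctx=8192,        # context window; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers to GPU if your build supports it
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```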

The imatrix was generated using the same calibration data that Bartowski uses, and both the calibration data and the resulting imatrix are provided in this repo.
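
If you want to reproduce or inspect the importance matrix yourself, llama.cpp ships an imatrix tool that computes it from a calibration text file. A rough sketch of invoking it from Python is below; the binary name, the high-precision source model, and the filenames are all assumptions, not the exact command used for this repo.

```python
# Sketch: regenerate an importance matrix with llama.cpp's imatrix tool via subprocess.
# All paths/filenames here are assumptions; point them at your own build and files.
import subprocess

subprocess.run(
    [
        "./llama-imatrix",                      # assumed path/name of the llama.cpp imatrix binary
        "-m", "L3.3-Electra-R1-70b-f16.gguf",   # assumed high-precision source model
        "-f", "calibration_data.txt",           # assumed calibration text file
        "-o", "imatrix.dat",                    # output importance matrix
    ],
    check=True,
)
```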

Format: GGUF
Model size: 70.6B params
Architecture: llama

Model tree for ddh0/L3.3-Electra-R1-70b-GGUF