
Original model: Meta-Llama-3-70B-Instruct

EXL2 quants of Meta-Llama-3-70B-Instruct.

Files in the `main` branch (a loading sketch follows the list):

  • 2.55 bits per weight
  • measurement.json
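
A minimal sketch for downloading and running this quant, assuming the `exllamav2` and `huggingface_hub` Python packages are installed; the repo id placeholder and sampler settings are illustrative, and the generator API may differ between `exllamav2` versions:

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch the 2.55 bpw EXL2 weights from the main branch.
# "<this-repo-id>" is a placeholder: substitute this repository's id.
model_dir = snapshot_download(repo_id="<this-repo-id>", revision="main")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache tensors allocated as layers load
model.load_autosplit(cache)               # auto-split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampling choice

print(generator.generate_simple("Hello, my name is", settings, num_tokens=64))
```

`measurement.json` is the calibration measurement produced during EXL2 conversion; it can typically be passed back to the converter to quantize the same model to other bitrates without repeating the measurement pass.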

์›๋ณธ ๋ชจ๋ธ Meta-Llama-3-70B-Instruct

Meta-Llama-3-70B-Instruct ๋ชจ๋ธ EXL2 ์–‘์žํ™”

๋ฉ”์ธ branch์— ์žˆ๋Š” ํŒŒ์ผ

  • 2.55 bits per weight
  • measurement.json