casperhansen/llama-3-70b-instruct-awq

Tags: Text Generation · Transformers · Safetensors · llama · conversational · text-generation-inference · 4-bit precision · awq
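
The tags above describe an AWQ 4-bit quantized Llama checkpoint stored as Safetensors that loads through Transformers (and can be served with text-generation-inference or vLLM). Below is a minimal, untested sketch of loading it with Transformers; it assumes the autoawq package is installed alongside transformers, that enough GPU memory is available for a 70B model even at 4-bit, and that the chat template shipped with the tokenizer is the intended prompt format.

# Minimal sketch (assumption: autoawq + transformers installed, multi-GPU box available)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "casperhansen/llama-3-70b-instruct-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available GPUs
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

# The "conversational" tag suggests using the tokenizer's chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))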
Community discussions (10)

#10 · Marlin kernel in vLLM - new checkpoint? · opened 10 months ago by zoltan-fedor
#9 · Based on llama-2? · 1 reply · opened 11 months ago by rdewolff
#8 · [AUTOMATED] Model Memory Requirements · opened 12 months ago by muellerzr
#7 · How to setup the generation_config properly? · opened 12 months ago by KIlian42
#6 · The inference API is too slow. · 1 reply · opened about 1 year ago by YernazarBis
#5 · How did you create AWQ-quantized weights? · 4 replies · opened about 1 year ago by nightdude
#4 · encountered error when loading model · 7 replies · opened about 1 year ago by zhouzr