Adding `safetensors` variant of this model
#13 opened 11 months ago by SFconvertbot
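The conversion bot opens these pull requests automatically; independent of that, a checkpoint can be re-saved locally in the safetensors format with stock `transformers`. The repo id and output directory below are placeholders, not taken from this discussion; this is a minimal sketch, not the bot's actual pipeline.

```python
# Sketch: re-save a checkpoint with safetensors weights instead of pickle-based .bin shards.
# "your-org/your-model" and "./model-safetensors" are placeholder names, not from this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-org/your-model")
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

# safe_serialization=True writes model.safetensors (sharded if large) instead of pytorch_model.bin.
model.save_pretrained("./model-safetensors", safe_serialization=True)
tokenizer.save_pretrained("./model-safetensors")
```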
Adding Evaluation Results
#11 opened over 1 year ago by leaderboard-pr-bot
When can we expect a Vicuna variant of the CodeLlama-2 34B model?
#10 opened over 1 year ago by perelmanych
Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check
#9 opened over 1 year ago by Shivam1410
Bigger is NOT always better...
5 · #8 opened over 1 year ago by MrDevolver
Adding `safetensors` variant of this model
#6 opened over 1 year ago by mmahlwy3
Adding `safetensors` variant of this model
#5 opened over 1 year ago by mmahlwy3
How much GPU memory is required for deployment?
2 · #3 opened over 1 year ago by chenfeicqq
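As a rough back-of-the-envelope guide (not an answer from the thread): weight memory is approximately parameter count × bytes per parameter, plus extra headroom for the KV cache and runtime overhead. The 33B parameter count below is an assumption used only for illustration, not a figure read from this repo.

```python
# Rough VRAM estimate for serving a causal LM: weights only, ignoring KV cache
# and framework overhead (budget extra headroom on top of this).
# The 33e9 parameter count is an assumed example, not read from this repo.
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1024**3

n_params = 33e9
for name, bpp in [("fp16/bf16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{name:9s} ~{weight_memory_gib(n_params, bpp):5.1f} GiB of weights")
# fp16/bf16 ~61.5 GiB, 8-bit ~30.7 GiB, 4-bit ~15.4 GiB
```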
Is there a 4-bit quantized version for FastChat?
6 · #2 opened over 1 year ago by ruradium
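The thread asks about FastChat specifically; as a hedged alternative outside FastChat, loading the weights in 4-bit via `bitsandbytes` through `transformers` looks roughly like the sketch below. The repo id is a placeholder, and `device_map="auto"` assumes `accelerate` is installed.

```python
# Sketch: load a causal LM with 4-bit (NF4) quantized weights via bitsandbytes.
# "your-org/your-model" is a placeholder, not the id of this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")
model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-model",
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)
```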
Prompt format?
5 · #1 opened over 1 year ago by Thireus
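If this checkpoint follows the Vicuna v1.1+ convention (an assumption; the model card and FastChat's conversation templates are authoritative), the commonly documented single-turn template looks like this sketch.

```python
# Assumed Vicuna v1.1-style prompt template; verify against the model card or
# FastChat's conversation templates before relying on it for this checkpoint.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Summarize the safetensors format in one sentence."))
```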