Adding `safetensors` variant of this model
#20 opened 2 months ago by SFconvertbot
Compatibility with Llama-2-7b LoRAs
#18 opened 3 months ago by Balint831d
Adding Evaluation Results
#15 opened 7 months ago by leaderboard-pr-bot
Traceback (most recent call last)
#14 opened 8 months ago by fwrefewrfwe
llama2 forward pass seemingly not working with padded inputs, unless one element in batch is not padded (3 replies)
#13 opened 9 months ago by joehakim
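The padded-batch symptom in this thread is a known pitfall with decoder-only models like Llama-2: generation conditions on the last sequence position, which under right padding holds a pad token. Below is a dependency-free sketch of why left padding (e.g. setting `tokenizer.padding_side = "left"` in transformers) is usually recommended; the token values are made up for illustration:

```python
PAD = 0

def last_real_token(seq, attention_mask):
    """Return the token the model would actually condition on at the final step."""
    # The last attended position is the last index where the mask is 1.
    last_idx = max(i for i, m in enumerate(attention_mask) if m == 1)
    return seq[last_idx]

# Right padding: the final position holds PAD, not the prompt's last token,
# so naive generation from position -1 conditions on padding.
right = [5, 7, 9, PAD, PAD]
right_mask = [1, 1, 1, 0, 0]

# Left padding keeps the real tokens at the end of the sequence,
# so position -1 is always a real token for every batch element.
left = [PAD, PAD, 5, 7, 9]
left_mask = [0, 0, 1, 1, 1]

print(last_real_token(right, right_mask))  # 9, but only if the mask is honored
print(left[-1])                            # 9 directly, no mask lookup needed
```

This also explains the "unless one element in batch is not padded" observation: the unpadded element's last position is real, so that row decodes correctly while padded rows do not.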
Input validation error: `max_new_tokens` must be <= 1. Given: 20 (1 reply)
#12 opened 9 months ago by reubenlee3
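This validation error typically comes from a serving stack (for example text-generation-inference) whose configured total-token budget leaves almost no room for generation: the permitted `max_new_tokens` is roughly the total-token limit minus the prompt length. A hedged sketch of that arithmetic; the helper name and numbers are illustrative, not the server's actual code:

```python
def allowed_new_tokens(max_total_tokens: int, input_length: int) -> int:
    """Rough upper bound on new tokens a server with a fixed total budget can grant."""
    return max(max_total_tokens - input_length, 0)

# A 511-token prompt against a 512-token budget leaves room for only 1 new token,
# so requesting max_new_tokens=20 is rejected with an error like the one above.
print(allowed_new_tokens(512, 511))  # 1
```

The usual fixes are shortening the prompt or raising the server's total-token limit.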
Loading model without fast-attn (1 reply)
#10 opened 9 months ago by TZ20
Great model. Plans for 13b version? (1 reply)
#9 opened 9 months ago by nahuel89p
Model gives itself instructions and keeps going and going and going? (5 replies)
#8 opened 10 months ago by michael-newsrx-com
Quantizations for llama.cpp
#7 opened 10 months ago by rozek
Any plans for a chat model? (1 reply)
#5 opened 10 months ago by brekk
When will a GGML version be available? (8 replies)
#3 opened 10 months ago by CUIGuy
LocalAI Model Loading (3 replies)
#2 opened 10 months ago by FIWisher
The model doesn't seem to stop (15 replies)
#1 opened 10 months ago by LaferriereJC
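Runaway generation like the threads above ("doesn't stop", "gives itself instructions") is common with non-chat Llama-2 checkpoints, which have no built-in turn boundary. Typical workarounds are passing `eos_token_id` or stop strings to the generation call, or truncating client-side at a stop sequence. A minimal sketch of the client-side truncation; the stop strings shown are illustrative assumptions, not part of the model:

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # keep only text before the earliest stop marker
    return text[:cut]

generated = "The capital of France is Paris.\n### Instruction: Now list every capital..."
print(truncate_at_stop(generated, ["### Instruction:", "</s>"]))
# -> "The capital of France is Paris.\n"
```

Truncating after the fact wastes the tokens generated past the stop marker, so where the serving API supports native `stop` parameters those are preferable.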