Update config.json · 1 reply · #71 opened about 2 months ago by ArthurZ
The request to access the repo was sent several days ago; why hasn't it been approved yet? · 7 replies · #70 opened about 2 months ago by water-cui
AttributeError: type object 'AttentionMaskConverter' has no attribute '_ignore_causal_mask_sdpa' · 4 replies · #69 opened about 2 months ago by tianke0711
Access given still not working. · 7 replies · #68 opened about 2 months ago by adityar23
Uncaught (in promise) SyntaxError: Unexpected token 'E', "Expected r"... is not valid JSON · 5 replies · #66 opened about 2 months ago by sxmss1
Llama responses are broken during conversation · 1 reply · #64 opened about 2 months ago by gusakovskyi
for using model · #63 opened about 2 months ago by yogeshm
Access Denied · #59 opened about 2 months ago by Jerry-hyl
Not outputting <|eot_id|> on SageMaker · #58 opened about 2 months ago by zhengsj
Update README.md · #57 opened about 2 months ago by inuwamobarak
Batched inference on multi-GPUs · 2 replies · #56 opened about 2 months ago by d-i-o-n
Badly Encoded Tokens/Mojibake · 1 reply · #55 opened about 2 months ago by muchanem
Denied permission to DL · 7 replies · #51 opened about 2 months ago by TimPine
Request to access is still pending review · 26 replies · #50 opened about 2 months ago by Hoo1196
mlx_lm.server gives wonky answers · #49 opened about 2 months ago by conleysa
Tokenizer mismatch all the time · 2 replies · #47 opened about 2 months ago by tian9
Could anyone tell me how to set the prompt template when using the model via Transformers in PyCharm? · 1 reply · #46 opened about 2 months ago by LAKSERS
Instruct format? · 3 replies · #44 opened about 2 months ago by m-conrad-202
Warning: The attention mask and the pad token id were not set · 2 replies · #40 opened about 2 months ago by Stephen-smj
MPS support quantification · 5 replies · #39 opened about 2 months ago by tonimelisma
`meta-llama/Meta-Llama-3-8B-Instruct` model with SageMaker · 1 reply · #38 opened about 2 months ago by aak7912
Problem with the tokenizer · 2 replies · #37 opened about 2 months ago by Douedos
How to output an answer without side chatter · 7 replies · #36 opened about 2 months ago by Gerald001
ValueError: You can't train a model that has been loaded in 8-bit precision on a different device than the one you're training on. · 11 replies · #35 opened about 2 months ago by madhurjindal
Does instruct need add_generation_prompt? · 1 reply · #33 opened about 2 months ago by bdambrosio
Error while downloading the model · #32 opened about 2 months ago by amarnadh1998
Garbage responses · 2 replies · #30 opened about 2 months ago by RainmakerP
GPU requirements · 8 replies · #29 opened about 2 months ago by Gerald001
Can I run it on CPU? · 4 replies · #28 opened about 2 months ago by aljbali
ChatLLM.cpp fully supports Llama-3 now · #24 opened about 2 months ago by J22
Transformers pipeline update please · #23 opened about 2 months ago by ip210
The best 8B on the planet right now. PERIOD! · 2 replies · #22 opened about 2 months ago by cyberneticos
Is it really good? · 5 replies · #20 opened about 2 months ago by urtuuuu
The result for Llama 2 13B's GSM-8K (8-shot, CoT) is 77.4, which seems incorrect. · 2 replies · #19 opened about 2 months ago by Hi-archer
How is it possible to create images with Llama 3? · 4 replies · #17 opened about 2 months ago by robinaicol
OMG insomnia in the community · #16 opened about 2 months ago by Languido
It's like Christmas without December! 🌲🎁🤖 · #15 opened about 2 months ago by Joseph717171
What is the conversation template? · 8 replies · #14 opened about 2 months ago by aeminkocal
Update numbering format of Prohibited Uses · #13 opened about 2 months ago by BallisticAI
Max output tokens? · 3 replies · #12 opened about 2 months ago by stri8ted
IAM READYYYYYY · 2 replies · #3 opened about 2 months ago by 10100101j
Non-English language capabilities · 6 replies · #2 opened about 2 months ago by oliviermills