Inference Speed (1 reply) · #61 opened 6 months ago by khaled-hesham
Data Provenance · #60 opened 7 months ago by exdysa
Learning Rate during pretraining (1 reply) · #58 opened 8 months ago by shuyuej
Truly great model for text-based operations like analysing and researching (4 replies) · #56 opened 9 months ago by bkieser
"triu_tril_cuda_template" not implemented for 'BFloat16' (4 replies) · #52 opened 11 months ago by Ashmal
Prompt format for fine-tuning · #51 opened 11 months ago by skevja
Request: DOI (1 reply) · #50 opened 11 months ago by gagan3012
Please document pretraining datasets · #49 opened 11 months ago by markding
Instruct-finetuning dataset (5 replies) · #43 opened 11 months ago by Andriy
Context length is not 128k (4 replies) · #41 opened 12 months ago by pseudotensor
Is there a best way to infer this model from multiple small memory GPUs? (1 reply) · #39 opened 12 months ago by hongdouzi
Configuring command-r-gptq · #33 opened 12 months ago by Cyleux
Any recommended frontend to run this model? (2 replies) · #30 opened 12 months ago by DrNicefellow
[AUTOMATED] Model Memory Requirements · #26 opened 12 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements · #25 opened 12 months ago by model-sizer-bot
Error "sharded is not supported for AutoModel" when deploying on sagemaker endpoint · #22 opened 12 months ago by LorenzoCevolaniAXA
gguf is required :) (12 replies) · #11 opened 12 months ago by flymonk