#34 · Adding `safetensors` variant of this model · opened 6 months ago by SFconvertbot
#33 · Update replit_lm_tokenizer.py · opened 7 months ago by dobbySeo
#32 · Adapt code completion to SDK private language · opened 12 months ago by tomasmc
#31 · Integrate with FastChat? · opened about 1 year ago by fengcaiwen
#30 · How to use GPU instead of CPU? ("you are using config.init_device='cpu', but you can also use config.init_device='meta'") · 1 reply · opened over 1 year ago by ali-issa
#29 · GGML quantize script? · 1 reply · opened over 1 year ago by MichelNivard
#28 · Fix typos in README · opened over 1 year ago by madhavatreplit
#27 · Error when using attn_impl triton · 1 reply · opened over 1 year ago by Wraken
#26 · Add eos_token_id to config · opened over 1 year ago by madhavatreplit
#25 · Update generation_config.json · opened over 1 year ago by madhavatreplit
#24 · How to fill in the middle of code? · 2 replies · opened over 1 year ago by realnex
#23 · Activate use_cache to speed up inference · 1 reply · opened over 1 year ago by loubnabnl
#22 · How to train it with QLoRA? · opened over 1 year ago by Sardar
#21 · Triton is slower? · opened over 1 year ago by doguaraci
#20 · Update README for 8-bit and 4-bit · 1 reply · opened over 1 year ago by madhavatreplit
#19 · Update modeling_mpt.py · 1 reply · opened over 1 year ago by 0xGrrr
#18 · Does this work with HF Inference API? · 2 replies · opened over 1 year ago by sagardesai
#17 · Update weights to MPT · 1 reply · opened over 1 year ago by madhavatreplit
#16 · Convert ReplitLM to MPT · 1 reply · opened over 1 year ago by madhavatreplit
#15 · Code-to-code translation · 3 replies · opened over 1 year ago by tusharpiku
#14 · Speed-up method · 2 replies · opened over 1 year ago by luoji12345
#13 · Hugging Face model deployment · 8 replies · opened over 1 year ago by arminnorouzi
#12 · ReplitLM does not support generation with right padding · opened over 1 year ago by merlinarer
#11 · Dataset details · 2 replies · opened over 1 year ago by joaogui1
#10 · Question about the Code model · 1 reply · opened over 1 year ago by ComradeCat
#9 · RuntimeError: Device does not support shared memory of 98304 bytes · 3 replies · opened over 1 year ago by leojames
#8 · Can torch be used for the attention implementation? · 2 replies · opened over 1 year ago by LouiSum
#7 · How to run the model locally? · 2 replies · opened over 1 year ago by 828CFXLpyz
#6 · Expected minimum hardware requirements for inference? · 5 replies · opened over 1 year ago by zeroing
#5 · Update README.md · 1 reply · opened over 1 year ago by pirroh
#4 · Fine-tuned model · 4 replies · opened over 1 year ago by lentan
#3 · Update generation_config.json · 1 reply · opened over 1 year ago by madhavatreplit
#2 · Update config.json · 1 reply · opened over 1 year ago by madhavatreplit
#1 · Add files for release · 1 reply · opened over 1 year ago by madhavatreplit