#94: Update blog post link (opened 8 months ago by ZennyKenny)
#92: Adding `safetensors` variant of this model (opened 9 months ago by SFconvertbot)
#88: mpt-7b taking several minutes on mac m1? (opened 12 months ago by rmiller3)
#87: Adding `safetensors` variant of this model (opened about 1 year ago by SFconvertbot)
#86: Adding Evaluation Results (opened about 1 year ago by leaderboard-pr-bot)
#85: [AUTOMATED] Model Memory Requirements (opened about 1 year ago by model-sizer-bot)
#80: Request: DOI (opened over 1 year ago by yousefHageb)
#70: How to append new token and train? (2 replies; opened over 1 year ago by NickyNicky)
#68: fix HPU could not handle float16 in attention.py. (3 replies; opened over 1 year ago by sywangyi)
#66: After installing triton, running pipe() return "fatal error: cuda.h: No such file or directory " and "CalledProcessError: Command '['/usr/bin/gcc'...." (2 replies; opened over 1 year ago by nps798)
#65: Adding `safetensors` variant of this model (2 replies; opened over 1 year ago by makeColabFree)
#64: Converting To Flax (opened over 1 year ago by erfanzar)
#50: MPT-7b on colab - RAM of GPU not used (5 replies; opened over 1 year ago by vi-c)
#48: Can support MPS device type? (1 reply; opened over 1 year ago by LouiSum)
#42: Merge cekal/mpt-7b-peft-compatible (6 replies; opened over 1 year ago by muelletm)
#41: Support gradient checkpointing (7 replies; opened over 1 year ago by muelletm)
#40: Issue training With Triton (10 replies; opened over 1 year ago by MikeyBelllissimo)
#39: Finetuning MPT-7B in 4-bit (opened over 1 year ago by rmihaylov)
#27: attn_impl (11 replies; opened over 1 year ago by GaaraOtheSand)
#25: Fixes for PEFT Tuning based on iwalton3 (opened over 1 year ago by SebastianBodza)
#18: Can this be fine-tuned using Amazon SageMaker or run on a AMD GPU that is not CUDA-enabled? (1 reply; opened over 1 year ago by Bigshot)
#16: [Experiment] MPT 7B + LangChain Custom LLM + transformers.accelerator, on a POTATO (opened over 1 year ago by saber7ooth)