🚩 Report
#1 by superpig9090 - opened
This is not a T2T (text-to-text) model.
The proportion of text-to-text data in this model's training seems to be too low. For pure-text ability, Alpha-VLLM/Lumina-mGPT-7B-512 is a good choice.