---
license: unknown
task_categories:
- translation
language:
- ja
- zh
---
Derived from larryvrh/CCMatrix-v1-Ja_Zh-filtered.

I made some changes to the dataset to train the new mt5-base model. Since it all comes from the community anyway, I'm disclosing the changes here.
- Putting LoRA adapters on top isn't sufficient to fix the old, general habits and weaknesses of the translation model, so the base data itself needed cleaning.
- Translation stops or starts repeating after about 30 words, and the model doesn't recognize line breaks.
- The dataset's samples are generally too short: 83% are below 50 words.
- Solution: fused some sentences together with " ", "。", or line breaks to make them longer.
- Now each length bucket holds a similar percentage of samples.
- Lengths were kept under about 250, since mT5 doesn't handle longer inputs well at its default max length.
- The model can't decide which quotation mark (“) to use, and it randomly adds or removes non-word characters or numbers.
- The dataset itself is dirty in this respect, so the behavior became an ingrained habit.
- Solution: filtered out all pairs where the two sides don't match on their counts of non-word characters.
- These quirks are fine as LoRA features, but I don't want them in the base model.
- The model has a habit of leaving words untranslated when translating item descriptions.
- There are some description-like samples in the dataset with untranslated Japanese characters left on the Chinese side.
- Solution: removed them; they are simply mistakes.
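The sentence-fusion step described above could be sketched as follows. This is a minimal, hypothetical reconstruction: the separator set (" ", "。", line break) and the ~250 cap come from the notes, but the pair format, bucket target, and helper names are assumptions, not the author's actual script.

```python
import random

# Separators used to join consecutive sentence pairs (from the notes above).
SEPARATORS = [" ", "。", "\n"]
MAX_LEN = 250  # mT5 handles longer inputs poorly at its default max length


def fuse_pairs(pairs, target_len):
    """Fuse consecutive (ja, zh) pairs until roughly target_len characters.

    `pairs` is an assumed list of (japanese, chinese) string tuples.
    Fused samples never exceed MAX_LEN characters on the Japanese side.
    """
    fused = []
    buf_ja, buf_zh = "", ""
    for ja, zh in pairs:
        sep = random.choice(SEPARATORS) if buf_ja else ""
        # If adding this sentence would blow past the cap, flush the buffer.
        if len(buf_ja) + len(sep) + len(ja) > MAX_LEN:
            if buf_ja:
                fused.append((buf_ja, buf_zh))
            buf_ja, buf_zh = ja, zh
            continue
        buf_ja += sep + ja
        buf_zh += sep + zh
        # Once the target bucket length is reached, emit a fused sample.
        if len(buf_ja) >= target_len:
            fused.append((buf_ja, buf_zh))
            buf_ja, buf_zh = "", ""
    if buf_ja:
        fused.append((buf_ja, buf_zh))
    return fused
```

Varying `target_len` per batch is one way to get the roughly even length distribution mentioned above.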
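The non-word-count filter could look like the sketch below. The exact character classes the author counted are not specified; the regex here (digits, quotes, brackets, and a few punctuation marks, in both ASCII and fullwidth forms) is a guess at what "non-words or numbers" covers.

```python
import re
from collections import Counter

# Assumed definition of "non-word" characters: digits plus common
# punctuation, in ASCII and fullwidth variants. This set is a guess.
NON_WORD = re.compile(r"[0-9０-９!！?？“”‘’()（）\[\]【】]")


def sides_match(ja: str, zh: str) -> bool:
    """True if both sides contain the same counts of non-word characters."""
    return Counter(NON_WORD.findall(ja)) == Counter(NON_WORD.findall(zh))


def filter_pairs(pairs):
    """Keep only pairs whose non-word characters match on both sides."""
    return [(ja, zh) for ja, zh in pairs if sides_match(ja, zh)]
```

A pair like ("価格は100円!", "价格100元!") passes because both sides carry the same digits and punctuation, while a pair that drops or invents a number is filtered out.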
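Removing description-like samples with untranslated Japanese could be done with a kana check on the Chinese side, as sketched here. The Unicode ranges are the standard hiragana and katakana blocks; treating any kana on the Chinese side as "untranslated" is an assumed heuristic, since kanji are shared between the two languages and can't be used for detection.

```python
import re

# Hiragana (U+3040–U+309F) and katakana (U+30A0–U+30FF) blocks.
# Kana appearing on the Chinese side signals untranslated Japanese.
KANA = re.compile(r"[\u3040-\u309F\u30A0-\u30FF]")


def has_untranslated_ja(zh: str) -> bool:
    """True if the Chinese text still contains Japanese kana."""
    return bool(KANA.search(zh))


def drop_untranslated(pairs):
    """Drop (ja, zh) pairs whose Chinese side contains leftover kana."""
    return [(ja, zh) for ja, zh in pairs if not has_untranslated_ja(zh)]
```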