Updated model with improved training and evaluation. Test and validation data are included as pickle files. Older legacy files were removed to avoid confusion.
71aeeca
tokenizer.json filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
.git/lfs/objects/c8/35/c835b069d7b8cd02b400e6247b83bc1840ab12bb1628d5b2e03c8d728de75558 filter=lfs diff=lfs merge=lfs -text
.git/lfs/objects/c2/18/c218039ddf271b1fda75cb4be49065a87f8ba739aa13541450767e3fbabde91f filter=lfs diff=lfs merge=lfs -text
.git/lfs/objects/7d/4b/7d4b0fc11999b413d52f2ec233e4548fe48b8fc1959aaff2bb3e128d5ffa3b72 filter=lfs diff=lfs merge=lfs -text
test_data.pickle filter=lfs diff=lfs merge=lfs -text
val_data.pickle filter=lfs diff=lfs merge=lfs -text
sentencepiece.bpe.model filter=lfs diff=lfs merge=lfs -text
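Entries like the ones above are normally generated by `git lfs track` rather than written by hand: each tracked path gets one rule line in `.gitattributes`. A minimal sketch of appending such a rule manually (the path is taken from this file; in a real repo you would run `git lfs track "test_data.pickle"` instead, which also stages `.gitattributes`):

```shell
# Each LFS-tracked path maps to one rule line in .gitattributes.
# Equivalent in effect to: git lfs track "test_data.pickle"
rule='test_data.pickle filter=lfs diff=lfs merge=lfs -text'
printf '%s\n' "$rule" >> .gitattributes

# Confirm the rule was recorded
grep -q 'test_data.pickle filter=lfs' .gitattributes && echo recorded
```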