Switched back to the MADLAD 3B model (from 7B) due to limited GPU resources b929bff Didier committed on Sep 24
Removed m2m100 and quantized MADLAD in 8-bit, as GPU resources are limited (see the loading sketch below) ea7bc2f Didier committed on Sep 24
Initial commit: bilingual models, multilingual mode, Google Translate fe02c49 Didier committed on Sep 17
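The 8-bit change above is a model-loading detail. Below is a minimal sketch of what loading the 3B MADLAD checkpoint in 8-bit could look like with transformers and bitsandbytes; the model id `google/madlad400-3b-mt`, the `<2fr>` target-language prefix, and the generation settings are assumptions for illustration, not taken from this Space's code.

```python
# Minimal sketch (not the Space's actual code) of loading MADLAD 3B in 8-bit.
# Assumes transformers, accelerate, and bitsandbytes are installed and a CUDA GPU is available.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/madlad400-3b-mt"  # assumed checkpoint; the commit only says "MADLAD 3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights to fit a small GPU
    device_map="auto",
)

# MADLAD-400 expects a <2xx> target-language prefix on the source text (here: French).
text = "<2fr> The weather is nice today."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In 8-bit the 3B weights take roughly 3 GB of GPU memory (plus activations), which is what makes the 3B checkpoint a better fit than the 7B one on a small card.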