---
license: apache-2.0
language:
  - th
  - en
---

# Model Card for WangChanLion 7B - The Multilingual Instruction-Following Model

WangChanLion is a Multilingual, instruction-finetuned on Southeast Asian Languages SEA-LION 7B using open-source, commercially permissible datasets sample from LAION OIG chip2 and infill_dbpedia, DataBricks Dolly v2, OpenAI TL;DR, Hello-SimpleAI HC3, dolphin, iapp_wiki_qa_squad, thaisum, xlsum, scb_mt_enth_2020, han dataset, xp3x and Open-Platypus, a total of ~500k samples. Non-commercial datasets were filtered out. Released under apache 2.0 license. The models are trained to perform a subset of instruction-following tasks we found most relevant: reading comprehension, brainstorming, and creative writing. In this model, we focus on Thai and English datasets. We perform Vicuna-style evaluation using human evaluation. In a similar manner to Dolly v2, we only use open-source, commercially permissive pretrained models and datasets. Our models are neither restricted by non-commercial clauses like LLaMA-based models nor non-compete clauses like models that use self-instruct datasets from ChatGPT.