---
license: apache-2.0
language:
- th
- en
datasets:
- laion/OIG
- databricks/databricks-dolly-15k
- thaisum
- scb_mt_enth_2020
- garage-bAInd/Open-Platypus
- iapp_wiki_qa_squad
- pythainlp/han-instruct-dataset-v1.0
- cognitivecomputations/dolphin
- Hello-SimpleAI/HC3
- Muennighoff/xP3x
- openai/summarize_from_feedback
---
# Model Card for WangChanLion 7B - The Multilingual Instruction-Following Model

WangChanLion is a multilingual model, instruction-finetuned from SEA-LION 7B (a pretrained model for Southeast Asian languages) on open-source, commercially permissible datasets sampled from LAION OIG chip2 and infill_dbpedia, Databricks Dolly v2, OpenAI TL;DR, Hello-SimpleAI HC3, dolphin, iapp_wiki_qa_squad, thaisum, xlsum, scb_mt_enth_2020, the han dataset, xP3x, and Open-Platypus, for a total of ~500k samples. Non-commercial datasets were filtered out. The model is released under the Apache-2.0 license. It is trained to perform the subset of instruction-following tasks we found most relevant: reading comprehension, brainstorming, and creative writing. For this model, we focus on Thai and English datasets. We perform Vicuna-style evaluation using human judges. As with Dolly v2, we only use open-source, commercially permissive pretrained models and datasets, so our models are restricted neither by non-commercial clauses, like LLaMA-based models, nor by non-compete clauses, like models trained on self-instruct data from ChatGPT.

- Developers: PyThaiNLP and VISTEC-depa AI Research Institute of Thailand
- Model type: SEA-LION 7B (MPT architecture)

## Model Sources
- Repository: https://github.com/vistec-AI/WangchanLion
- Demo: [demo_WangchanLion.ipynb - Colaboratory](https://colab.research.google.com/drive/1y_7oOU3ZJI0h4chUrXFL3K4kelW_OI2G?usp=sharing#scrollTo=4yN3Bo6iAH2L)

# Use cases
## Direct Use
Intended to be used as an instruction-following model for reading comprehension, brainstorming, and creative writing.

## Downstream Use
The model can be finetuned for any typical instruction-following use cases.

## Out-of-Scope Use
We do not expect the models to perform well on math problems, reasoning, or factuality.
 
## Bias, Risks, and Limitations
We noticed limitations similar to those of other finetuned instruction followers, such as weaknesses in math, reasoning, and factuality. Even though the models do not perform at a level where we expect them to be readily abused, they do contain undesirable biases and toxicity and should be further evaluated and optimized for your particular use case.

## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
 
# Get Started
Use the code [here](https://colab.research.google.com/drive/1y_7oOU3ZJI0h4chUrXFL3K4kelW_OI2G?usp=sharing#scrollTo=4yN3Bo6iAH2L) to get started with the model.

Or

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the 8-bit quantized model (requires bitsandbytes and a CUDA GPU).
tokenizer = AutoTokenizer.from_pretrained("airesearch/WangchanLion7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "airesearch/WangchanLion7B",
    trust_remote_code=True,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
    torch_dtype=torch.float16,
    offload_folder="./",
    low_cpu_mem_usage=True,
)

def get_prompt(question: str, context: str = None) -> str:
    """Build the instruction prompt. The Thai template fields are:
    พื้นหลัง = background/context, คำถาม = question, ตอบ = answer."""
    if context is not None:
        return "พื้นหลัง:\n\n{context}\n\nคำถาม:{question}\n\nตอบ:".format(context=context, question=question)
    return "คำถาม:{question}\n\nตอบ:".format(question=question)

# Example question: "What happened at Tiananmen in 1989?"
question = "เกิดอะไรขึ้นที่เทียนอันเหมินตอนปี 1989"
full_prompt = get_prompt(question=question)
tokens = tokenizer(full_prompt, return_tensors="pt").to("cuda")
output = model.generate(
    input_ids=tokens["input_ids"],
    attention_mask=tokens["attention_mask"],
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.2,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
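
For reading comprehension, the same `get_prompt` helper accepts a grounding passage via its `context` argument. The passage and question below are illustrative placeholders, not taken from the training or evaluation data:

```python
# Ground the question on a passage (reading comprehension).
# The Thai passage says "VISTEC is a science and technology research institute
# located in Rayong province, Thailand"; the question asks "What is VISTEC?" —
# both are illustrative examples only.
context = "วิสเทค (VISTEC) เป็นสถาบันวิจัยด้านวิทยาศาสตร์และเทคโนโลยี ตั้งอยู่ที่จังหวัดระยอง ประเทศไทย"
question = "VISTEC คืออะไร"
full_prompt = get_prompt(question=question, context=context)
tokens = tokenizer(full_prompt, return_tensors="pt").to("cuda")
output = model.generate(
    input_ids=tokens["input_ids"],
    attention_mask=tokens["attention_mask"],
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.2,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```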

# Training Details
## Training Data
Finetuning datasets are sourced from [LAION OIG chip2 and infill_dbpedia (Apache-2.0)](https://huggingface.co/datasets/laion/OIG), [DataBricks Dolly v2 (Apache-2.0)](https://github.com/databrickslabs/dolly), [OpenAI TL;DR (MIT)](https://github.com/openai/summarize-from-feedback), [Hello-SimpleAI HC3 (CC-BY SA)](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [dolphin](https://huggingface.co/datasets/ehartford/dolphin), [iapp_wiki_qa_squad](https://huggingface.co/datasets/iapp_wiki_qa_squad), [thaisum](https://huggingface.co/datasets/thaisum), [xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum), [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), [han dataset](https://huggingface.co/datasets/pythainlp/han-instruct-dataset-v1.0), [xp3x](https://huggingface.co/datasets/Muennighoff/xP3x), and [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
## Training regime
- QLoRA on 4× A100 (40 GB) GPUs; a schematic of this kind of setup is sketched below.
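
As a rough illustration of what such a setup looks like, here is a minimal QLoRA sketch using the `peft` and `bitsandbytes` libraries. The base checkpoint name, LoRA hyperparameters, and target modules are assumptions for illustration, not the actual WangchanLion training configuration.

```python
# Minimal QLoRA sketch (4-bit base weights + LoRA adapters) using peft + bitsandbytes.
# NOTE: checkpoint name, hyperparameters, and target modules are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # QLoRA: quantize the frozen base model to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "aisingapore/sea-lion-7b",               # assumed base checkpoint
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # enable gradient checkpointing-friendly k-bit training

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,   # illustrative LoRA hyperparameters
    task_type="CAUSAL_LM",
    target_modules=["Wqkv", "out_proj"],      # assumed MPT-style attention projection names
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()            # only the small LoRA adapters are trainable
```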

 
# Evaluation
We performed human and machine evaluations; the automatic results below report F1 scores in zero-shot and one-shot settings on XQuAD and iAPP Wiki QA:
## XQuAD
|      Model     | F1 (Zero-shot) | F1 (One-shot) |
|:--------------:|:--------------:|:-------------:|
| openthaigpt7B  |     27.3487      |    34.3104      |
| SeaLLM7B V2      |    16.1104       |  25.7399    |
| Typhoon-7b     |     34.46      |    **54.03**      |
| WangchanLion7B |   **45.8763**    |    49.9145      |

## iAPP Wiki QA 
|      Model     | F1 (Zero-shot) |  F1 (One-shot) |
|:--------------:|:--------------:|:-------------:|
| openthaigpt7B  |     40.0614    |    46.6883    |
| SeaLLM7B V2      |     23.6425    |    28.9934    |
| WangchanLion7B |   **58.9051**  |  **62.9776**  |
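
The F1 scores above are, as is standard for extractive QA benchmarks such as XQuAD, presumably token-overlap F1 between the predicted and gold answer strings. A minimal sketch of that metric, assuming simple whitespace tokenization (Thai text in practice needs a word segmenter such as pythainlp), is:

```python
# Sketch of SQuAD-style token-overlap F1; the tokenization choice is an assumption.
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap gives a score between 0 and 1.
print(token_f1("the Tiananmen Square protests", "Tiananmen Square protests of 1989"))  # ≈ 0.67
```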

# What WangchanLion offers
- Transparent pretrained model: The development of SEA-LION is community-driven, with different ASEAN collaborators contributing pretraining datasets. The SEA-LION developers ensure that all datasets are safe and can be utilized without commercial restrictions. This transparency extends to the provision of pretraining code, ensuring anyone can replicate SEA-LION using the provided datasets.
- Transparent finetuning data: In the spirit of open science, we make the finetuning data for WangchanLion accessible to all. This commitment to openness empowers the community by providing complete visibility into the instruction finetuning data that shapes WangchanLion.
- Transparent finetuning code: The finetuning code for WangchanLion is readily available for distribution. By sharing our methods and processes, we invite others to learn from, build upon, and innovate alongside us.