---
license: bigscience-bloom-rail-1.0
language:
- en
- zh
pipeline_tag: text-generation
---

<h1 style='text-align: center '>BLOOM-zh</h1> 
<h2 style='text-align: center '><em>Traditional Chinese-enhanced BLOOM language model</em> </h2> 
<h3 style='text-align: center '>Model Card</h3>

Version 1.0 / 20.Feb.2023

This model is a joint collaboration between the CKIP lab at Academia Sinica ([link](https://ckip.iis.sinica.edu.tw/)), MediaTek Research ([link](https://www.mtkresearch.com/), [link (zh-Hans)](https://www.mtkresearch.com/zh-hans/), [link (en)](https://www.mtkresearch.com/en/)), and the National Academy for Educational Research ([link](https://www.naer.edu.tw/)).

## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Recommendations](#recommendations)
6. [Model Card Authors](#model-card-authors)

## Model Details
BLOOM-zh is a language model with enhanced Traditional Chinese capability. It is derived from [BLOOMZ](https://huggingface.co/bigscience/bloomz) and further pretrained on a large amount of Traditional Chinese text data.
    

### Basics
*This section provides information for anyone who wants to know about the model.*

<details>
<summary>Click to expand</summary> <br/>
    
**Developed by:** MediaTek Research
    
**Model Type:** Transformer-based Language Model

**Version:** 1.0.0

**Languages:** Multiple; see [training data](#training-data)

**License:** MEDIATEK RESEARCH License ([link](https://huggingface.co/ckip-joint/bloom-1b1-zh/blob/main/LICENSE_MR.md)) and RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))

**Release Date Estimate:** Wednesday, 22.February.2023

**Send Questions to:** info@mtkresearch.com

**Cite as:** MediaTek Research: Traditional Chinese-enhanced BLOOM language model. International, February 2023.

**Organizations of contributors:** 
    
* MediaTek Research
* Academia Sinica
* National Academy for Educational Research

</details>

### Technical Specifications
*This section provides information for people who work on model development.*

For technical specifications, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-1b1#model-details).
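
As a quick orientation, the architecture hyperparameters that BLOOM-zh inherits from BLOOM can be read from the model configuration on the Hugging Face Hub. The sketch below is illustrative only; it assumes the `transformers` library is installed and uses the `ckip-joint/bloom-1b1-zh` repository name from the license link above.

```python
# Illustrative sketch: inspect the BLOOM architecture hyperparameters
# that BLOOM-zh inherits. Assumes `pip install transformers` and the
# hub repository "ckip-joint/bloom-1b1-zh" (see the license link above).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ckip-joint/bloom-1b1-zh")
print(config.model_type)   # "bloom"
print(config.hidden_size)  # width of each transformer layer
print(config.n_layer)      # number of transformer blocks
print(config.n_head)       # attention heads per block
```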


### Environmental Impact

For environmental impact, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-1b1#model-details).


## Uses

*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. 
It provides information for anyone considering using the model or who is affected by the model.*

For the uses of the model, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-1b1#uses).
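
As a minimal starting point, the sketch below shows one way to load the model for text generation with the `transformers` library. It is illustrative rather than an official example: the repository name `ckip-joint/bloom-1b1-zh` is taken from the license link above, and the prompt and sampling parameters are arbitrary.

```python
# Minimal text-generation sketch (illustrative, not an official example).
# Assumes `pip install transformers torch` and the hub repository
# "ckip-joint/bloom-1b1-zh" from the license link in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ckip-joint/bloom-1b1-zh"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "四月的天氣"  # "the weather in April"; any Traditional Chinese prompt works
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Setting `do_sample=False` instead gives deterministic (greedy) output, which can be useful when reproducibility matters.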
    

## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
    
We trained the 1B1-parameter model on a total of 6 billion tokens of mostly high-quality Traditional Chinese text. Details are provided in the [paper](https://arxiv.org/).
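
Token counts such as the figure above depend on the model's own tokenizer. As an illustrative sketch (again assuming the `ckip-joint/bloom-1b1-zh` repository name from the license link), one can check how many tokens a given Traditional Chinese sentence contributes:

```python
# Illustrative: corpus token counts are measured with the model's tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ckip-joint/bloom-1b1-zh")
sample = "繁體中文語言模型"  # "Traditional Chinese language model"
token_ids = tokenizer.encode(sample)
print(len(token_ids))  # number of tokens this sentence adds to a corpus count
```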

## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
    
For risks and limitations, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-1b1#risks-and-limitations).

### Factors 
*This section lists some different aspects of BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*

- The model is trained on Traditional Chinese and English. However, the pretrained weights capture more than 40 languages.

- The model is trained on web-crawled data, news articles, novels, knowledge sources (encyclopedias, educational materials), and instruction data.


## Recommendations

*This section provides information on warnings and potential mitigations.*

For recommendations, please refer to [BLOOM](https://huggingface.co/bigscience/bloom-1b1#recommendations).

    
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*

Philipp Ennen, Po-Chun Hsu, Chan-Jan Hsu, Chang-Le Liu, Yin-Hsiang Liao, Chin-Tung Lin, Da-Shan Shiu, Wei-Yun Ma