|
--- |
|
license: mit |
|
datasets: |
|
- wmt/wmt19 |
|
language: |
|
- zh |
|
- en |
|
metrics: |
|
- bleu |
|
pipeline_tag: text2text-generation |
|
--- |
|
# Model Card for opus-mt-zh-en-model
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
**This is a Chinese-English translation model based on the Transformer architecture (beta version 0.0.1).**
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
This model provides translation between Chinese and English (only Chinese to English is available at the moment) and is trained on the WMT19 dataset.
|
|
|
The model handles grammar well in zh-en translation. If you don't want to clone this repository, just try the Inference API widget on this page ^_^
|
|
|
|
|
- **Developed by:** **Varine** |
|
- **Shared by:** **TianQi Xu** |
|
- **Model type:** **Transformer**
|
- **Language(s) (NLP):** **Chinese, English** |
|
- **License:** **MIT** |
|
- **Finetuned from model:** **opus-mt-zh-en-fintuned-zhen-checkpoints** |
|
|
|
### Model Sources |
|
|
|
<!-- Provide the basic links for the model. --> |
|
|
|
- **Repository:** <https://huggingface.co/Varine/opus-mt-zh-en-model> |
|
|
|
## Uses |
|
|
|
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> |
|
This model can be used for translation tasks between Chinese and English.
|
### Direct Use |
|
|
|
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> |
|
As a general-purpose translation model, it can be used in many circumstances, including the translation of academic papers, news articles, and **even some literary works (thanks to its solid handling of grammar and multi-sentence context)**.
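For example, here is a minimal sketch of direct use through the `transformers` pipeline (the example sentence is illustrative):

```
from transformers import pipeline

# Load a Chinese-to-English translation pipeline from this repository.
translator = pipeline("translation", model="Varine/opus-mt-zh-en-model")

# Translate an illustrative Chinese sentence into English.
print(translator("这篇论文提出了一种新的机器翻译方法。")[0]["translation_text"])
```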
|
|
|
## Bias, Risks, and Limitations |
|
**1. Remember that this is a beta version of the translation model, so we limit the number of input tokens; please make sure your input text does not exceed the limit (see the sketch below).**
|
**2. DO NOT USE THIS MODEL FOR ILLEGAL PURPOSES.**
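One way to stay within the input limit is to let the tokenizer truncate long text. This is a hedged sketch: it assumes the limit matches the tokenizer's `model_max_length`, and the long input is a placeholder.

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Varine/opus-mt-zh-en-model")

long_text = "这是一段可能超出限制的很长的中文文本。" * 200  # placeholder long input

# Truncate to the tokenizer's maximum input length so the request does not overflow.
inputs = tokenizer(
    long_text,
    truncation=True,
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
)
print(inputs["input_ids"].shape)
```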
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
|
|
### Recommendations |
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. |
|
|
|
## How to Get Started with the Model |
|
|
|
Before using the model, please follow the steps below to make sure your environment is set up correctly, then use the code to get started.
|
|
|
1. Use git to clone this repository (if you don't want to bother with configuring an environment, feel free to use the Inference API instead!):
|
``` |
|
git clone "https://huggingface.co/Varine/opus-mt-zh-en-model" |
|
``` |
|
2. After cloning, please make sure the modules below are installed in your Jupyter Notebook or other IDE:
|
``` |
|
! pip install transformers datasets numpy sentencepiece
|
``` |
|
3. After checking the packages, please run the code in [translation.ipynb](https://huggingface.co/Varine/opus-mt-zh-en-model/translation.ipynb); a Jupyter Notebook environment is recommended for this step.
|
4. Finally, you can load the model in code and run it through a translation pipeline, as shown in the sketch below; don't forget to supply your own text!
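A minimal loading-and-generation sketch (the example sentence is illustrative):

```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Varine/opus-mt-zh-en-model"

# Load the tokenizer and the fine-tuned translation model from this repository.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tokenize an illustrative Chinese sentence and generate its English translation.
inputs = tokenizer("机器翻译的质量在最近几年提高了很多。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```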
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> |
|
- wmt/wmt19 |
|
|
|
### Training Procedure |
|
|
|
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> |
|
Because the chosen dataset is very large, we decided after some analysis to train on only 4% of it, running 10 epochs and evaluating the training loss and validation loss at each epoch.
|
Moreover, the data used in training consists of Chinese-English sentence pairs, so that both sides can be embedded and compared in the Transformer's higher-dimensional representation space.
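A hedged sketch of loading such a 4% subset as sentence pairs (the `wmt19`/`zh-en` names follow the dataset listed above; the validation split ratio is an illustrative choice):

```
from datasets import load_dataset

# Load roughly 4% of the WMT19 zh-en training data, as described above.
raw = load_dataset("wmt19", "zh-en", split="train[:4%]")

# Each example is a Chinese-English sentence pair.
print(raw[0]["translation"])  # {'zh': '...', 'en': '...'}

# Hold out a small validation portion (the 10% ratio is illustrative)
# so that training and validation loss can be tracked every epoch.
splits = raw.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```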
|
|
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
- **Training regime:** **fp32**<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> |
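A hedged sketch of `Seq2SeqTrainingArguments` consistent with the fp32 regime and the 10 epochs described above; every other value is an illustrative placeholder rather than the exact setting used:

```
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="zh-en-checkpoints",      # illustrative output directory
    num_train_epochs=10,                 # 10 epochs, as in the training procedure above
    fp16=False,                          # fp32 (no mixed precision), per the training regime
    learning_rate=2e-5,                  # illustrative value
    per_device_train_batch_size=16,      # illustrative value
    predict_with_generate=True,          # generate translations during evaluation
)
```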
|
|
|
## Evaluation |
|
|
|
<!-- This section describes the evaluation protocols and provides the results. --> |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
#### Testing Data |
|
|
|
<!-- This should link to a Dataset Card if possible. --> |
|
- wmt/wmt19 |
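BLEU is the evaluation metric. Below is a hedged sketch of scoring the model on a small WMT19 validation sample; the use of the `evaluate` and `sacrebleu` packages and the sample size are choices made for this example, not a record of the exact evaluation setup.

```
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Small validation sample from the WMT19 zh-en data (sample size is illustrative).
val = load_dataset("wmt19", "zh-en", split="validation[:100]")

translator = pipeline("translation", model="Varine/opus-mt-zh-en-model")
bleu = evaluate.load("sacrebleu")

pairs = val["translation"]  # list of {'zh': ..., 'en': ...} dicts
predictions = [out["translation_text"] for out in translator([p["zh"] for p in pairs])]
references = [[p["en"]] for p in pairs]

print(bleu.compute(predictions=predictions, references=references))
```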
|
|
|
|
|
## Hardware Used in Training
|
|
|
|
|
- **Hardware Type:** **1x NVIDIA A10 GPU with 30 vCPUs, 200 GiB RAM, and 1 TiB SSD storage**
|
- **Hours used:** **4.08 hours (rough estimate)**
|
- **Cloud Provider:** **Lambda Cloud**
|
- **Compute Region:** **California, USA** |
|
- **Carbon Emitted:** **N/A** |
|
|
|
### Model Architecture and Objective |
|
We use the Transformer architecture (the Hugging Face implementation) in this model; it is a general-purpose architecture widely used in machine translation tasks.
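If you want to check the exact encoder/decoder settings, they can be read from the checkpoint configuration. This is a minimal sketch, assuming the checkpoint keeps the standard opus-mt (Marian) configuration fields:

```
from transformers import AutoConfig

# Inspect the Transformer encoder/decoder settings stored in the checkpoint.
config = AutoConfig.from_pretrained("Varine/opus-mt-zh-en-model")
print(config.model_type)  # expected to be "marian" for opus-mt checkpoints
print(config.encoder_layers, config.decoder_layers, config.d_model)
```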
|
|
|
|
|
### Compute Infrastructure |
|
Due to the limited computing power of a personal PC and the scale of the dataset, we decided to train our model on a GPU cloud, which proved effective.
|
|
|
|
|
#### Hardware |
|
|
|
**Thanks to Lambda Cloud, we used an NVIDIA A10 GPU to complete the project.**
|
|
|
#### Software |
|
|
|
**We used a Jupyter Notebook on the cloud instance to run our code.**
|
|
|
|
|
|
|
|
|
|
|
## Model Card Authors
|
|
|
**Varine Xie** |
|
|
|
## Model Card Contact |
|
**Please contact me by email at <varine7499@gmail.com>; I'm glad to receive feedback from y'all!**
|
|