---
language: en
tags:
- augmentation
license: apache-2.0
datasets:
- C4
widget:
- text: "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"
  example_title: "Example 1"
- text: "<mask> machine learning <mask> my research interest <mask> data science <mask>"
  example_title: "Example 2"
- text: "<mask> play basketball <mask> a strong team <mask> Shanghai University of Finance and Economics <mask> last Sunday <mask>"
  example_title: "Example 3"
- text: "Good news: the European Union <mask> month by EU Farm Commissioner Franz <mask>"
  example_title: "Example with a prompt 1"
- text: "Bad news: the European Union <mask> month by EU Farm Commissioner Franz <mask>"
  example_title: "Example with a prompt 2"
inference:
  parameters:
    max_length: 200
    num_beams: 3
    do_sample: True
---

# SEGA-large model

**SEGA: SkEtch-based Generative Augmentation**

SEGA is a general-purpose text augmentation model that can be used for data augmentation on a variety of NLP tasks, including sentiment analysis, topic classification, NER, and QA. SEGA uses an encoder-decoder structure (based on the BART architecture) and is pre-trained on the C4-realnewslike corpus.

- Paper: [this paper](to_be_added)
- GitHub: [this repository](to_be_added)

### How to use

```python
from transformers import pipeline

# 1. load the model with the huggingface `pipeline`
sega = pipeline("text2text-generation", model='beyond/sega-large', device=0)

# 2. provide a sketch (keyword spans joined by <mask> tokens)
sketch = "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"

# 3. just do it!
generated_text = sega(sketch, num_beams=3, do_sample=True, max_length=200)[0]['generated_text']
print(generated_text)
```

```shell
'The Conference on Empirical Methods welcomes the submission of research papers. Abstracts should be in the form of a paper or presentation. Please submit abstracts to the following email address: eemml.stanford.edu. The conference will be held at Stanford University on April 16-18, 2019. The theme of the conference is Deep Learning.'
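A sketch is simply a string that interleaves the keyword spans you want to keep with BART's `<mask>` token. As a convenience, such strings can be built programmatically; the helper below is an illustrative sketch, not part of the SEGA API (the `build_sketch` name and the mask-at-both-ends placement are assumptions based on the examples above):

```python
def build_sketch(spans, mask_token="<mask>"):
    """Join keyword spans with mask tokens, placing a mask at each end.

    `spans` is a list of keyword phrases to preserve; the model fills in
    the text in place of each mask token.
    """
    return " ".join([mask_token] + [s + " " + mask_token for s in spans])

sketch = build_sketch(["machine learning", "my research interest", "data science"])
print(sketch)
# <mask> machine learning <mask> my research interest <mask> data science <mask>
```

The resulting string can then be passed to the pipeline call shown above.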
```

## Model variations

| Model                    | #params | Language |
|--------------------------|---------|----------|
| [`sega-large`]()         | xM      | English  |
| [`sega-base`]()          | xM      | English  |
| [`sega-small`]()         | xM      | English  |
| [`sega-large-chinese`]() | xM      | Chinese  |
| [`sega-base-chinese`]()  | xM      | Chinese  |
| [`sega-small-chinese`]() | xM      | Chinese  |

## Intended uses & limitations

### Limitations and bias

## Training data

## Training procedure

### Preprocessing

### Pretraining

## Evaluation results

### BibTeX entry and citation info