---
language: en
tags:
- data augmentation
- keywords-to-text generation
- sketch-to-text generation
license: apache-2.0
datasets:
- C4
widget:
- text: "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"
  example_title: "Example 1"
- text: "<mask> machine learning <mask> my research interest <mask> data science <mask>"
  example_title: "Example 2"
- text: "<mask> play basketball <mask> a strong team <mask> Shanghai University of Finance and Economics <mask> last Sunday <mask>"
  example_title: "Example 3"
- text: "Good news: <mask> the European Union <mask> month by EU <mask> Farm Commissioner Franz <mask>"
  example_title: "Example with a prompt 1"
- text: "Bad news: <mask> the European Union <mask> month by EU <mask> Farm Commissioner Franz <mask>"
  example_title: "Example with a prompt 2"
inference:
  parameters:
    max_length: 200
    num_beams: 3
    do_sample: true
---
# SEGA-large model
SEGA: SkEtch-based Generative Augmentation
SEGA is a general text augmentation model for data augmentation in various NLP tasks, including sentiment analysis, topic classification, NER, and QA. Given a sketch of key information (keywords or spans joined by `<mask>` tokens), SEGA generates fluent text that fills in the missing parts. The model uses an encoder-decoder structure (based on the BART architecture) and is pre-trained on the C4-realnewslike corpus.
- Paper: [this paper](to_be_added)
- GitHub: [this repository](to_be_added)
### How to use
```python
from transformers import pipeline
# 1. load the model with the huggingface `pipeline`
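#    (device=0 puts the pipeline on the first GPU; remove it to run on CPU)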
sega = pipeline("text2text-generation", model='beyond/sega-large', device=0)
# 2. provide a sketch (keywords or spans joined by <mask> tokens)
sketch = "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"
# 3. just do it!
generated_text = sega(sketch, num_beams=3, do_sample=True, max_length=200)[0]['generated_text']
print(generated_text)
```
```
'The Conference on Empirical Methods welcomes the submission of research papers. Abstracts should be in the form of a paper or presentation. Please submit abstracts to the following email address: eemml.stanford.edu. The conference will be held at Stanford University on April 16-18, 2019. The theme of the conference is Deep Learning.'
```
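For data augmentation, a sketch is typically built from keywords or spans kept from an existing example; the widget examples above also show that a short natural-language prompt (e.g. `Good news: `) can be prepended to steer the generation. A minimal sketch of both, reusing the `sega` pipeline from above (how the keywords are extracted is up to you; SEGA itself does not prescribe a sketch-building method):
```python
# Build a sketch by joining kept keywords/spans with <mask> tokens.
# (These keywords come from widget "Example 2" above; in practice they would
# be extracted from a real training example.)
keywords = ["machine learning", "my research interest", "data science"]
sketch = "<mask> " + " <mask> ".join(keywords) + " <mask>"

# Optionally prepend a prompt to steer the generation
# (the "Good news:" / "Bad news:" prompts come from the widget examples).
prompt = "Good news: "
augmented = sega(prompt + sketch, num_beams=3, do_sample=True, max_length=200)[0]['generated_text']
print(augmented)
```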
## Model variations
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`sega-large`]() | xM | English |
| [`sega-base`]() | xM | English |
| [`sega-small`]() | xM | English |
| [`sega-large-chinese`]() | xM | Chinese |
| [`sega-base-chinese`]() | xM | Chinese |
| [`sega-small-chinese`]() | xM | Chinese |
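Assuming the variants are published under the same `beyond/` namespace as `sega-large` (the links and parameter counts above are placeholders), a variant can be swapped in by changing the `model` argument:
```python
from transformers import pipeline

# Hypothetical repository IDs: this assumes each variant lives under the same
# `beyond/` namespace as sega-large; adjust once the actual IDs are published.
sega_small = pipeline("text2text-generation", model="beyond/sega-small")
sega_zh = pipeline("text2text-generation", model="beyond/sega-base-chinese")
```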
## Intended uses & limitations
### Limitations and bias
## Training data
## Training procedure
### Preprocessing
### Pretraining
## Evaluation results
### BibTeX entry and citation info