---
pretty_name: ExtraGLUE
language:
- pt
source_datasets:
- glue
- superglue
license: cc-by-4.0
---

ExtraGLUE
===

ExtraGLUE is a Portuguese dataset obtained by the automatic translation of some of the tasks in the GLUE and SuperGLUE benchmarks. The 14 tasks in ExtraGLUE cover different aspects of language understanding:

*Single sentence*

- **SST-2** is a task for predicting the sentiment polarity of movie reviews.

*Semantic similarity*

- **MRPC** is a task for determining whether a pair of sentences are mutual paraphrases.
- **STS-B** is a task for predicting a similarity score (from 1 to 5) for each sentence pair.

*Inference*

- **MNLI** is a task to determine whether a given premise sentence entails, contradicts, or is neutral to a hypothesis sentence; this task includes **matched** (in-domain) and **mismatched** (cross-domain) validation and test sets.
- **QNLI** is a question-answering task converted to determine whether the context sentence contains the answer to the question.
- **RTE** is a task for determining whether a premise sentence entails a hypothesis sentence.
- **WNLI** is a pronoun resolution task formulated as sentence-pair entailment classification where, in the second sentence, the pronoun is replaced by a possible referent.
- **CB** comprises short texts with embedded clauses; one such clause is extracted as a hypothesis and should be classified as neutral, entailment, or contradiction.
- **AX_b** is designed to test models across a wide spectrum of linguistic, commonsense, and world knowledge; each instance contains a sentence pair labeled with entailment or not entailment.
- **AX_g** is designed to measure gender bias: each premise sentence includes a male or female pronoun, and the hypothesis includes a possible referent for the pronoun.

*Question answering*

- **BoolQ** is a question-answering task where yes/no questions are given for short text passages.
- **MultiRC** is a task where, given a context paragraph, a question, and an answer, the goal is to determine whether the answer is true; for the same context and question, more than one answer may be correct.

*Reasoning*

- **CoPA** is a causal reasoning task: given a premise, two choices, and a cause/effect prompt, the system must choose one of the choices.
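
To make the task structure concrete, here is a minimal sketch of loading one ExtraGLUE task with the Hugging Face `datasets` library. The repository identifier, configuration name, and split name used below are assumptions for illustration; check the dataset page for the exact identifiers available.

```python
# Minimal sketch: loading one ExtraGLUE task with the Hugging Face `datasets` library.
# The repository id, config name, and split name below are assumptions, not confirmed identifiers.
from datasets import load_dataset

# Hypothetical repository id and configuration name for the Portuguese RTE task.
rte_pt = load_dataset("PORTULAN/extraglue", "rte_pt")

print(rte_pt)              # shows the available splits and their sizes
print(rte_pt["train"][0])  # one premise/hypothesis pair with its label (split name assumed)
```

The same pattern would apply to the other tasks, with each task exposed as its own configuration.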