---
license: apache-2.0
task_categories:
  - table-question-answering
  - question-answering
  - translation
  - text2text-generation
language:
  - es
tags:
  - CelebA
  - Spanish
  - celebFaces attributes
  - face detection
  - face recognition
pretty_name: RoBERTa+CelebA training corpus in Spanish
size_categories:
  - 100M<n<1B
---

## Dataset Summary

This dataset contains 250,000 entries, each made up of a pair of sentences in Spanish and their respective similarity value. The corpus was used with the Sentence-BERT library to train a model that improves on the RoBERTa-large-bne base model.

The training of RoBERTa+CelebA was performed with a Siamese network that evaluates the similarity of the embeddings generated by the transformer network using the cosine-similarity metric.
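The cosine-similarity metric used by the Siamese network can be sketched in plain Python (a minimal illustration of the metric itself, not the training code; in practice Sentence-BERT computes it over the transformer's embedding vectors):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    dot(a, b) / (||a|| * ||b||), ranging from -1 to 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Vectors pointing in the same direction score 1.0;
# orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

During training, the network adjusts the embeddings so that this score matches the similarity label attached to each sentence pair.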

Therefore, each input of the training data consists of a pair of sentences A and B in Spanish and their respective similarity in the range 0 to 1.

First, the original English text was translated into Spanish. Each line (input) of the resulting document then defines a pair of Spanish sentences and their respective similarity value between 0 and 1, calculated with the spaCy library. However, since spaCy's similarity works only on English entries here, the score for each pair of Spanish sentences was taken from its respective English pair. Finally, the training corpus for RoBERTa is defined by the Spanish text and the similarity score.
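The alignment step above can be sketched as follows. All sentences and scores below are illustrative placeholders; the real pipeline scored the English CelebA caption pairs with spaCy and attached those scores to the aligned Spanish translations:

```python
# Hypothetical aligned pairs: index i in the English list corresponds
# to index i in the Spanish (translated) list.
english_pairs = [
    ("The woman has blond hair.", "She is a blonde woman."),
    ("The man is wearing glasses.", "He has a beard."),
]
spanish_pairs = [
    ("La mujer tiene el pelo rubio.", "Ella es una mujer rubia."),
    ("El hombre lleva gafas.", "Él tiene barba."),
]

# Placeholder scores standing in for spaCy's doc_a.similarity(doc_b),
# which is computed on the ENGLISH pair of each aligned entry.
english_scores = [0.87, 0.31]

# Final corpus entries: Spanish sentence A, Spanish sentence B, and the
# similarity score carried over from the matching English pair.
corpus = [
    (es_a, es_b, english_scores[i])
    for i, (es_a, es_b) in enumerate(spanish_pairs)
]
print(corpus[0])
```

Each resulting tuple is one training input of the form described above: two Spanish sentences plus a similarity value in [0, 1].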

## Citation information

If you use RoBERTa+CelebA in your work, please cite the ????:

## License

This dataset is available under the Apache License 2.0.

## Authors

Universidad Nacional de Ingeniería, Ontology Engineering Group, Universidad Politécnica de Madrid.

## Contributors

See the full list of contributors here.
