eduar03yauri committed
Commit
d09b1b7
1 Parent(s): c271419

Update README.md

Files changed (1): README.md (+23 -10)
README.md CHANGED
@@ -18,25 +18,38 @@ size_categories:
  - 100M<n<1B
  ---
 
- ## Dataset Summary
 
- This dataset contains 250000 entries made up of a pair of sentences in Spanish and their respective similarity value. This corpus was used in the training of the
- Sentence-BERT library to improve the efficiency of the RoBERTA-large-bne base model.
 
- The training of RoBERTa+CelebA has been performed with a Siamese network that evaluates the similarity of the embeddings generated by the transformer network using the cosine similarity metric.
 
- Therefore, each input of the training data consists of a pair of sentences A and B in English and their respective similarity in the range 0 to 1.
 
- First, a translation of the original English text into Spanish was made.
- Subsequently, the document structure defines in each line (input) a pair of Spanish sentences and their respective similarity value between 0 and 1 calculated by the Spacy library.
- However, since Spacy library works only with English entries, the similarity between two Spanish sentences has been matched with their respective English pairs.
- Finally, the final training corpus for RoBERTa is defined by the Spanish text and the similarity score.
 
  ## Citation information
 
- **Citing**: If you used RoBERTa+CelebA in your work, please cite the **[????](???)**:
 
  <!--```bib
  @article{inffus_TINTO,
 
  - 100M<n<1B
  ---
 
+ ## Corpus Summary
 
+ This corpus contains 250,000 entries, each made up of a pair of sentences in Spanish and their respective similarity value in the range 0 to 1. The corpus was used, together with the
+ [sentence-transformers](https://www.sbert.net/) library, to improve the efficiency of the [RoBERTa-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) base model.
+ Each pair of sentences consists of textual descriptions of faces from the CelebA dataset, previously translated into Spanish. The process followed to generate the corpus was:
 
+ - First, the original English text was translated into Spanish. The original English corpus was obtained from the work [Text2FaceGAN](https://arxiv.org/pdf/1911.11378.pdf).
+ - An algorithm was implemented that randomly selects two sentences from the translated corpus and calculates their similarity value; _spaCy_ was used to obtain the similarity value of each pair of sentences (see the sketch after this list).
+ - Since both _spaCy_ and most sentence-similarity libraries work only with English, the algorithm additionally selects the corresponding pair of sentences from the original English corpus and scores that English pair. The final training corpus for RoBERTa is therefore defined by the Spanish text and this similarity score.
+ - Each pair of Spanish sentences and their similarity value, separated by the character "|", is saved as an entry of the new corpus.
 
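As referenced in the list above, the pair-generation step can be sketched in a few lines. This is a minimal, hypothetical reconstruction, not the authors' code: it assumes the Spanish translations and their English originals are held in parallel lists, the function name and output path are illustrative, and `en_core_web_md` stands in for whichever English spaCy model with word vectors was actually used.

```python
import random
import spacy

# English pipeline with word vectors; Doc.similarity computes the cosine
# similarity of averaged token vectors. (Assumed model choice.)
nlp = spacy.load("en_core_web_md")

def build_pair_corpus(spanish, english, n_pairs, out_path):
    """Sample random sentence pairs, score the English pair with spaCy,
    and store the Spanish pair with that score, separated by '|'."""
    with open(out_path, "w", encoding="utf-8") as out:
        for _ in range(n_pairs):
            # spanish[i] is the translation of english[i], so a score for
            # the English pair also describes the Spanish pair.
            i, j = random.sample(range(len(spanish)), 2)
            score = nlp(english[i]).similarity(nlp(english[j]))
            out.write(f"{spanish[i]} | {spanish[j]} | {score:.4f}\n")
```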
+ RoBERTa-large-bne was then trained on the present corpus, resulting in the new model [RoBERTa-celebA-Sp](https://huggingface.co/oeg/RoBERTa-CelebA-Sp/blob); a sketch of this training setup is shown below.
 
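The removed paragraph above described this training as a Siamese network scored with cosine similarity. Below is a minimal sketch of how such a run is typically configured with the sentence-transformers library; it is not the authors' training script, and the toy rows, batch size, epochs, and warm-up steps are illustrative only.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# Sentence encoder on top of the RoBERTa-large-bne checkpoint:
# transformer weights followed by mean pooling over token embeddings.
word_model = models.Transformer("PlanTL-GOB-ES/roberta-large-bne")
pooling = models.Pooling(word_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_model, pooling])

# Toy corpus rows (sentence A, sentence B, similarity) -- not real entries.
rows = [
    ("La mujer tiene el pelo negro.", "Ella tiene el pelo oscuro.", 0.91),
    ("El hombre lleva gafas.", "La mujer joven sonríe.", 0.34),
]
train_examples = [InputExample(texts=[a, b], label=float(s)) for a, b, s in rows]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# CosineSimilarityLoss runs both sentences through the same encoder and
# regresses the cosine similarity of the two embeddings toward the label,
# i.e. the Siamese arrangement described in the summary.
loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=100)
```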
+ ## Corpus Fields
+ Each corpus entry is composed of:
+ - Sentence A: descriptive sentence of a CelebA face, in Spanish.
+ - Sentence B: descriptive sentence of a CelebA face, in Spanish.
+ - Similarity value: similarity score between sentence A and sentence B, in the range 0 to 1.
 
+ Each component is separated by the character "|", with the structure:
 
+ ```
+ Sentence A | Sentence B | similarity value
+ ```
 
+ You can download the file with a _.txt_ or _.csv_ extension, as appropriate.
 
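A file in this format can be read, for example, with pandas. This is a minimal sketch; the filename is a placeholder rather than the dataset's actual file name.

```python
import pandas as pd

# Placeholder filename -- substitute the corpus file you downloaded.
df = pd.read_csv(
    "CelebA_RoBERTa_Sp.txt",
    sep="|",
    header=None,
    names=["sentence_a", "sentence_b", "similarity"],
)
# Remove the padding spaces around the "|" separators.
df["sentence_a"] = df["sentence_a"].str.strip()
df["sentence_b"] = df["sentence_b"].str.strip()
df["similarity"] = df["similarity"].astype(float)
print(df.head())
```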
  ## Citation information
 
+ **Citing**: If you used the CelebA_RoBERTa_Sp corpus in your work, please cite the **[????](???)**:
 
  <!--```bib
  @article{inffus_TINTO,