monsoon-nlp committed on
Commit
7068663
1 Parent(s): 3ef3abe

encoder half

README.md ADDED
@@ -0,0 +1,55 @@
+ # es-seq2seq-gender (encoder)
+
+ This is the encoder half of a seq2seq model that "flips" grammatical gender in Spanish sentences.
+ The model can augment your existing Spanish data, or generate counterfactuals
+ to test a model's decisions (would changing the gender of the subject or speaker change the output?).
+ See the usage sketch after the examples below.
+
+ Intended Examples:
+
+ - el profesor viejo => la profesora vieja (article, noun, and adjective all flip)
+ - una actriz => un actor (irregular noun)
+ - el lingüista => la lingüista (irregular noun)
+ - la biblioteca => la biblioteca (no person, no flip)
+
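+ Because this repository contains only the encoder, generation needs a paired decoder checkpoint.
+ The following is a minimal, untested sketch with Hugging Face Transformers; the model ids
+ `monsoon-nlp/es-seq2seq-gender-encoder` and `monsoon-nlp/es-seq2seq-gender-decoder` are assumptions,
+ so substitute the actual repository names.
+
+ ```python
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ # Assumed model ids -- replace with the real encoder/decoder repositories.
+ encoder_id = "monsoon-nlp/es-seq2seq-gender-encoder"
+ decoder_id = "monsoon-nlp/es-seq2seq-gender-decoder"
+
+ tokenizer = AutoTokenizer.from_pretrained(encoder_id)
+ model = EncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)
+
+ input_ids = tokenizer("el profesor viejo", return_tensors="pt").input_ids
+ output_ids = model.generate(
+     input_ids,
+     decoder_start_token_id=tokenizer.cls_token_id,
+     pad_token_id=tokenizer.pad_token_id,
+     max_length=20,
+ )
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ # expected (roughly): la profesora vieja
+ ```
+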
+ People's names are unchanged in this version, but you can use packages
+ such as https://pypi.org/project/gender-guesser/ to detect them and decide
+ whether to swap a name, as in the sketch below.
+
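+ For instance, a hypothetical pre-processing step could use gender-guesser to classify first names
+ before deciding whether to swap them; the sketch below only shows the classification call.
+
+ ```python
+ import gender_guesser.detector as gender
+
+ detector = gender.Detector()
+
+ def classify_name(first_name: str) -> str:
+     # Returns "male", "female", "mostly_male", "mostly_female",
+     # "andy" (ambiguous), or "unknown".
+     return detector.get_gender(first_name)
+
+ print(classify_name("Maria"))   # expected: female
+ print(classify_name("Carlos"))  # expected: male
+ ```
+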
+ ## Training
+
+ I originally developed
+ <a href="https://github.com/MonsoonNLP/el-la">a gender-flip Python script</a>
+ with
+ <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased">BETO</a>,
+ the Spanish-language BERT from the Universidad de Chile,
+ and spaCy to parse sentence dependencies.
+
+ More about this project: https://medium.com/ai-in-plain-english/gender-bias-in-spanish-bert-1f4d76780617
+
+ The seq2seq model is trained on gender-flipped text produced by running that script on the
+ <a href="https://huggingface.co/datasets/muchocine">muchocine dataset</a>
+ and on the first 6,853 lines of the Spanish deduplicated
+ <a href="https://oscar-corpus.com/">OSCAR corpus</a>.
+
+ The encoder and decoder started with the weights and vocabulary of BETO (uncased).
+
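+ The training code itself is not included in this repository, but warm-starting a seq2seq model from
+ two copies of BETO with Hugging Face Transformers looks roughly like the sketch below (the data
+ pipeline and the fine-tuning loop on original/flipped sentence pairs are omitted).
+
+ ```python
+ from transformers import BertTokenizer, EncoderDecoderModel
+
+ beto = "dccuchile/bert-base-spanish-wwm-uncased"
+ tokenizer = BertTokenizer.from_pretrained(beto)
+
+ # Initialize both halves of the seq2seq model from BETO (uncased).
+ model = EncoderDecoderModel.from_encoder_decoder_pretrained(beto, beto)
+
+ # Special-token ids the seq2seq wrapper needs for training and generation.
+ model.config.decoder_start_token_id = tokenizer.cls_token_id
+ model.config.eos_token_id = tokenizer.sep_token_id
+ model.config.pad_token_id = tokenizer.pad_token_id
+
+ # After fine-tuning on (original, gender-flipped) pairs, the halves can be
+ # saved separately, e.g.:
+ # model.encoder.save_pretrained("es-seq2seq-gender-encoder")
+ # model.decoder.save_pretrained("es-seq2seq-gender-decoder")
+ ```
+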
+ ## Non-binary gender
+
+ This model is useful for generating masculine and feminine text samples, but it falls
+ short of capturing gender diversity in the world and in the Spanish
+ language. Some communities prefer the plural -@s to represent
+ -os and -as, or -e and -es for gender-neutral or mixed-gender plurals,
+ or use fewer gendered professional nouns (la juez rather than la jueza). These forms are not yet
+ embraced by the Royal Spanish Academy
+ and are not represented in the corpora and tokenizers used to build this project.
+
+ This seq2seq project and script could, in the future, help generate more text samples
+ and prepare NLP models to understand us all better.
+
+ #### Sources
+
+ - https://www.nytimes.com/2020/04/15/world/americas/argentina-gender-language.html
+ - https://www.washingtonpost.com/dc-md-va/2019/12/05/teens-argentina-are-leading-charge-gender-neutral-language/?arc404=true
+ - https://www.theguardian.com/world/2020/jan/19/gender-neutral-language-battle-spain
+ - https://es.wikipedia.org/wiki/Lenguaje_no_sexista
+ - https://remezcla.com/culture/argentine-company-re-imagines-little-prince-gender-neutral-language/
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "_name_or_path": "dccuchile/bert-base-spanish-wwm-uncased",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "type_vocab_size": 2,
+   "vocab_size": 31002
+ }
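
The configuration above describes a standard 12-layer, 768-dimensional BERT encoder (BertModel). As a minimal illustration, and assuming a model id such as monsoon-nlp/es-seq2seq-gender-encoder (not stated in this commit), the encoder half can be loaded on its own to produce hidden states:

```python
import torch
from transformers import AutoTokenizer, BertModel

# Assumed model id -- substitute the actual repository name.
repo = "monsoon-nlp/es-seq2seq-gender-encoder"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = BertModel.from_pretrained(repo)

inputs = tokenizer("la biblioteca", return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)

# Shape: (batch, sequence_length, hidden_size) == (1, num_tokens, 768)
print(outputs.last_hidden_state.shape)
```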
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e07aa9643483ef67f439cd8b4680d727e8213696336f71ab0f11a2ba1feb8b16
+ size 439488080
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": "/root/.cache/huggingface/transformers/78141ed1e8dcc5ff370950397ca0d1c5c9da478f54ec14544187d8a93eff1a26.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d", "tokenizer_file": null, "name_or_path": "dccuchile/bert-base-spanish-wwm-uncased"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff