Commit 922af2f by aphedges
1 Parent(s): f1c2f86

Add new SentenceTransformer model

.gitattributes CHANGED
@@ -25,3 +25,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
{
  "word_embedding_dimension": 768,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false
}
README.md ADDED
@@ -0,0 +1,108 @@
---
language:
- en
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- anli
- multi_nli
- snli
---

# sbert-roberta-large-anli-mnli-snli

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

The model's weights are initialized from RoBERTa-large, and it was trained on ANLI (Nie et al., 2020), MNLI (Williams et al., 2018), and SNLI (Bowman et al., 2015) using the [`training_nli.py`](https://github.com/UKPLab/sentence-transformers/blob/v0.3.5/examples/training/nli/training_nli.py) example script.

Training Details:

- Learning rate: 2e-5
- Batch size: 8
- Pooling: Mean
- Training time: ~20 hours on one [NVIDIA GeForce RTX 2080 Ti](https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/)
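
The card does not ship the training code itself, so the following is only a minimal sketch of how the setup above maps onto the sentence-transformers training API in a recent release. The example pairs, label mapping, and epoch count are illustrative assumptions, not the authors' exact `training_nli.py` configuration:

```python
from torch.utils.data import DataLoader

from sentence_transformers import InputExample, SentenceTransformer, losses

# Wrapping a plain RoBERTa checkpoint adds a mean-pooling head automatically.
model = SentenceTransformer("roberta-large")
model.max_seq_length = 128

# Hypothetical NLI pairs; a real run would load ANLI/MNLI/SNLI and map
# {contradiction, entailment, neutral} labels to {0, 1, 2}.
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating."], label=1),
    InputExample(texts=["A man is eating food.", "The man is sleeping."], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# Softmax classification loss over the three NLI labels, as in the training_nli.py example.
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,  # the card reports ~20 hours of training but no epoch count
    optimizer_params={"lr": 2e-5},
)
```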

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("usc-isi/sbert-roberta-large-anli-mnli-snli")
embeddings = model.encode(sentences)
print(embeddings)
```
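
As a follow-up usage example (a minimal sketch, not part of the original card), the returned embeddings can be compared with plain cosine similarity:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("usc-isi/sbert-roberta-large-anli-mnli-snli")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings
# (model.encode returns NumPy arrays by default).
a, b = embeddings[0], embeddings[1]
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {cosine:.4f}")
```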

## Usage (Hugging Face Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ["This is an example sentence", "Each sentence is converted"]

# Load model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")
model = AutoModel.from_pretrained("usc-isi/sbert-roberta-large-anli-mnli-snli")

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

See section 4.1 of our paper for evaluation results.

## Full Model Architecture

```text
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
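
This module stack is what `modules.json` and `1_Pooling/config.json` in this commit encode. As a hedged illustration (not part of the original card), the same two-module architecture can be assembled by hand with the sentence-transformers `models` API:

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the RoBERTa encoder, with max_seq_length from sentence_bert_config.json
# (do_lower_case defaults to False).
word_embedding_model = models.Transformer(
    "usc-isi/sbert-roberta-large-anli-mnli-snli", max_seq_length=128
)

# Module 1: mean pooling over token embeddings, matching 1_Pooling/config.json.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_cls_token=False,
    pooling_mode_mean_tokens=True,
    pooling_mode_max_tokens=False,
    pooling_mode_mean_sqrt_len_tokens=False,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)  # should print a module stack like the one shown above
```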

## Citing & Authors

For more information about the project, see our paper:

> Ciosici, Manuel, et al. "Machine-Assisted Script Curation." _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations_, Association for Computational Linguistics, 2021, pp. 8–17. _ACLWeb_, <https://www.aclweb.org/anthology/2021.naacl-demos.2>.

## References

- Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. [A large annotated corpus for learning natural language inference](https://doi.org/10.18653/v1/D15-1075). In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.
- Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. [Adversarial NLI: A new benchmark for natural language understanding](https://doi.org/10.18653/v1/2020.acl-main.441). In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_, pages 4885–4901, Online. Association for Computational Linguistics.
- Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. [A broad-coverage challenge corpus for sentence understanding through inference](https://doi.org/10.18653/v1/N18-1101). In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_, pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
config.json ADDED
@@ -0,0 +1,27 @@
{
  "architectures": [
    "RobertaModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.11.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
{
  "__version__": {
    "sentence_transformers": "0.3.5.1",
    "transformers": "3.0.2",
    "pytorch": "1.6.0"
  }
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
modules.json ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3382a014c70705466813b12b07b0cfd3d0438a61d187e39a38dfc3303d704dd5
size 498661169
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 128,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": "special_tokens_map.json", "full_tokenizer_file": null, "tokenizer_class": "RobertaTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff