matthewleechen committed on
Commit
603ddce
1 Parent(s): ea4c0e3

Add new LinkTransformer model.

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
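For context, `pooling_mode_cls_token: true` means the sentence embedding is read off the first token position rather than averaged over the sequence. A minimal sketch of that pooling step (illustrative only; the real implementation is `sentence_transformers.models.Pooling`):

```python
import torch

def cls_pool(token_embeddings: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, hidden) hidden states from the transformer.
    # CLS pooling takes the first token's vector as the sentence embedding.
    return token_embeddings[:, 0]

# A batch of 2 sequences of length 16 with hidden size 768 -> embeddings of shape (2, 768).
embeddings = cls_pool(torch.randn(2, 16, 768))
```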
LT_training_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "model_save_dir": "models",
+   "model_save_name": "check",
+   "opt_model_description": "test",
+   "opt_model_lang": "en",
+   "train_batch_size": 64,
+   "num_epochs": 200,
+   "warm_up_perc": 1,
+   "learning_rate": 2e-06,
+   "loss_type": "supcon",
+   "val_perc": 0.2,
+   "wandb_names": {
+     "project": "linktransformer",
+     "id": "your-id",
+     "run": "run-name",
+     "entity": "your-id"
+   },
+   "add_pooling_layer": false,
+   "large_val": true,
+   "eval_steps_perc": 0.1,
+   "test_at_end": true,
+   "save_val_test_pickles": true,
+   "val_query_prop": 0.5,
+   "loss_params": {},
+   "eval_type": "classification",
+   "training_dataset": "dataframe",
+   "base_model_path": "sentence-transformers/multi-qa-mpnet-base-dot-v1",
+   "best_model_path": "models/check"
+ }
README.md ADDED
@@ -0,0 +1,146 @@
+ ---
+ pipeline_tag: sentence-similarity
+ language:
+ - en
+ tags:
+ - linktransformer
+ - sentence-transformers
+ - sentence-similarity
+ - tabular-classification
+ ---
+ 
+ # matthewleechen/lt_namesonly_science
+ 
+ This is a [LinkTransformer](https://linktransformer.github.io/) model. At its core it is a [sentence-transformers](https://www.SBERT.net) model; LinkTransformer simply wraps around that class.
+ It is designed for quick and easy record linkage (entity matching) through the LinkTransformer package, covering tasks such as clustering, deduplication, linking, aggregation, and more.
+ It can also be used for any sentence-similarity task within the sentence-transformers framework.
+ It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+ Take a look at the documentation of [sentence-transformers](https://www.sbert.net/index.html) if you want to use this model for more than what we support in our applications.
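+ 
+ As a quick illustration, here is a minimal sentence-transformers sketch (assuming the repository id above; the LinkTransformer usage below is the package's own API):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+ 
+ model = SentenceTransformer("matthewleechen/lt_namesonly_science")
+ embeddings = model.encode(["John Smith", "Smith, John"])  # shape (2, 768)
+ print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
+ ```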
+ 
+ This model has been fine-tuned from the base model sentence-transformers/multi-qa-mpnet-base-dot-v1 and is trained for the language: en.
+ 
+ test
+ 
+ ## Usage (LinkTransformer)
+ 
+ Using this model becomes easy when you have [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) installed:
+ 
+ ```
+ pip install -U linktransformer
+ ```
+ 
+ Then you can use the model like this:
+ 
+ ```python
+ import linktransformer as lt
+ import pandas as pd
+ 
+ # Load the two dataframes that you want to link: for example, two dataframes
+ # with company names that are written differently.
+ df1 = pd.read_csv("data/df1.csv")  # left dataframe, with a key column such as CompanyName
+ df2 = pd.read_csv("data/df2.csv")  # right dataframe, with the same key column
+ 
+ # Merge the two dataframes on the key column.
+ df_merged = lt.merge(df1, df2, on="CompanyName", how="inner")
+ 
+ # Done! The merged dataframe has a column called "score" that contains the
+ # similarity score between the two company names.
+ ```
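+ 
+ The "score" column can then be thresholded to keep only confident matches; the cutoff below is illustrative, not a package default:
+ 
+ ```python
+ # Keep only pairs whose similarity score clears an (illustrative) threshold.
+ confident = df_merged[df_merged["score"] > 0.9]
+ ```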
+ 
+ ## Training your own LinkTransformer model
+ Any sentence-transformers model can be used as a backbone by simply adding a pooling layer. Any other transformer on Hugging Face can also be used by specifying the option add_pooling_layer=True.
+ The model was trained using SupCon loss.
+ Usage can be found in the package docs.
+ The training config can be found in the repo with the name LT_training_config.json.
+ To replicate the training, you can download the file and specify the path in the config_path argument of the training function. You can also override the config by specifying the training_args argument.
+ Here is an example.
+ 
+ ```python
+ import linktransformer as lt
+ 
+ # Consider the example in the paper: a dataset of Mexican products with their
+ # tariff codes from 1947 and 1948, where we want to train a model to link the
+ # two sets of tariff codes. dataset_path and LINKAGE_CONFIG_PATH are placeholders
+ # for your own data file and training config.
+ saved_model_path = lt.train_model(
+     model_path="hiiamsid/sentence_similarity_spanish_es",
+     dataset_path=dataset_path,
+     left_col_names=["description47"],
+     right_col_names=["description48"],
+     left_id_name=["tariffcode47"],
+     right_id_name=["tariffcode48"],
+     log_wandb=False,
+     config_path=LINKAGE_CONFIG_PATH,
+     training_args={"num_epochs": 1},
+ )
+ ```
+ 
+ You can also use this package for deduplication (clustering a dataframe on the supplied key column); see the sketch below. Merging a fine class (like product) to a coarse class (like HS code) is also possible.
+ Read our paper and the documentation for more!
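+ 
+ A minimal deduplication sketch (check the package docs for the exact signature of lt.dedup_rows; the model id and threshold here are illustrative):
+ 
+ ```python
+ import linktransformer as lt
+ import pandas as pd
+ 
+ df = pd.read_csv("data/companies.csv")
+ # Cluster near-duplicate rows on the key column using this model's embeddings.
+ df_deduped = lt.dedup_rows(
+     df,
+     model="matthewleechen/lt_namesonly_science",
+     on="CompanyName",
+     cluster_type="agglomerative",
+     cluster_params={"threshold": 0.7},
+ )
+ ```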
+ 
+ ## Evaluation Results
+ 
+ <!--- Describe how your model was evaluated -->
+ 
+ You can evaluate the model using the [LinkTransformer](https://github.com/dell-research-harvard/linktransformer) package's inference functions.
+ We have provided a few datasets in the package for you to try out. We plan to host more datasets on Hugging Face and our website (coming soon) that you can take a look at.
+ 
+ ## Training
+ The model was trained with the parameters:
+ 
+ **DataLoader**:
+ 
+ `torch.utils.data.dataloader.DataLoader` of length 25 with parameters:
+ ```
+ {'batch_size': 64, 'sampler': 'torch.utils.data.dataloader._InfiniteConstantSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
+ ```
+ 
+ **Loss**:
+ 
+ `linktransformer.modified_sbert.losses.SupConLoss_wandb`
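+ 
+ For reference, the supervised contrastive (SupCon) objective this loss is based on (Khosla et al., 2020) is typically written as
+ 
+ $$\mathcal{L}_{supcon} = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}$$
+ 
+ where $z_i$ is the normalized embedding of example $i$, $P(i)$ is the set of positives sharing its label, $A(i)$ is the set of all other examples in the batch, and $\tau$ is a temperature.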
+ 
+ Parameters of the fit()-Method:
+ ```
+ {
+     "epochs": 200,
+     "evaluation_steps": 3,
+     "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
+     "max_grad_norm": 1,
+     "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
+     "optimizer_params": {
+         "lr": 2e-06
+     },
+     "scheduler": "WarmupLinear",
+     "steps_per_epoch": null,
+     "warmup_steps": 5000,
+     "weight_decay": 0.01
+ }
+ ```
+ 
+ ## Full Model Architecture
+ ```
+ LinkTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
+ 
+ ## Citing & Authors
+ 
+ ```
+ @misc{arora2023linktransformer,
+   title={LinkTransformer: A Unified Package for Record Linkage with Transformer Language Models},
+   author={Abhishek Arora and Melissa Dell},
+   year={2023},
+   eprint={2309.00789},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "_name_or_path": "models/check",
+   "architectures": [
+     "MPNetModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "mpnet",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.37.2",
+   "vocab_size": 30527
+ }
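Because config.json declares a plain MPNetModel, the weights can also be loaded directly with transformers. A minimal sketch (assuming the repository id from the README above):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("matthewleechen/lt_namesonly_science")
model = AutoModel.from_pretrained("matthewleechen/lt_namesonly_science")

inputs = tokenizer("John Smith", return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape (1, 768); matches the CLS pooling above
```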
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.0.0",
+     "transformers": "4.6.1",
+     "pytorch": "1.8.1"
+   }
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fff45788fac7c1a4723c8024feb999cb0dc2254dbb98917a96dfb06eed813322
+ size 437967672
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
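modules.json chains a Transformer module with the 1_Pooling module defined earlier. Roughly the same pipeline can be assembled by hand in sentence-transformers (a sketch, assuming the repository id from the README):

```python
from sentence_transformers import SentenceTransformer, models

# MPNet backbone producing token embeddings, truncated at 512 tokens.
word = models.Transformer("matthewleechen/lt_namesonly_science", max_seq_length=512)
# CLS pooling, matching 1_Pooling/config.json.
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="cls")
model = SentenceTransformer(modules=[word, pool])
```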
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "30526": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "do_lower_case": true,
+   "eos_token": "</s>",
+   "mask_token": "<mask>",
+   "max_length": 250,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "<pad>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "</s>",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "MPNetTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff