---
language: en
license: apache-2.0
---

## Overview

This encoder model is part of LACE, an approach for making controllable, interactive text recommendations:

**Title**: "Editable User Profiles for Controllable Text Recommendation"

**Authors**: Sheshera Mysore, Mahmood Jasim, Andrew McCallum, Hamed Zamani

**Paper**: https://arxiv.org/abs/2304.04250

**GitHub**: https://github.com/iesl/editable_user_profiles-lace

## Model Card

### Model description

This model is a BERT-based encoder trained for keyphrase representation. It is trained with an inverse cloze task objective, which minimizes the distance between a keyphrase's embedding and the embedding of its surrounding context. The context is embedded with an Aspire contextual sentence encoder, [`allenai/aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci), so this model is best used together with `allenai/aspire-contextualsentence-multim-compsci`.
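
To make the objective concrete, here is a minimal conceptual sketch of an inverse cloze triplet objective, using a triplet margin loss over L2 distances and random tensors in place of real encoder outputs; the exact loss and negative sampling used to train this model are described in the paper and the GitHub repository above.

```python
import torch
import torch.nn.functional as F

def ict_triplet_loss(kp_emb, pos_ctx_emb, neg_ctx_emb, margin=1.0):
    """Pull a keyphrase toward its own context's embedding and push it away
    from an unrelated context's embedding. Illustrative only; the actual
    LACE training loss may differ."""
    pos_dist = F.pairwise_distance(kp_emb, pos_ctx_emb)  # keyphrase vs. its context
    neg_dist = F.pairwise_distance(kp_emb, neg_ctx_emb)  # keyphrase vs. unrelated context
    return torch.clamp(pos_dist - neg_dist + margin, min=0.0).mean()

# Toy 768-d tensors standing in for keyphrase-encoder and Aspire-encoder outputs.
kp = torch.randn(4, 768)   # keyphrase embeddings (this model)
pos = torch.randn(4, 768)  # embeddings of each keyphrase's surrounding context
neg = torch.randn(4, 768)  # embeddings of unrelated contexts
print(ict_triplet_loss(kp, pos, neg))
```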

### Training data

The model is trained on about 100k keyphrases extracted automatically from computer science papers, together with their associated contexts, representing about 1M triples.

### Intended uses & limitations

This model is trained for representing keyphrases in **computer science**. However, it was not evaluated as a standalone keyphrase encoder; it was only used as a component of the LACE model. Other models, e.g. [SPECTER2](https://huggingface.co/allenai/specter2), may be better suited to your use case.

## Usage (Sentence-Transformers)

This model is intended for use within the LACE model; for that, see: https://github.com/iesl/editable_user_profiles-lace

It is also possible to use this model as a standalone keyphrase encoder. The easiest way is with [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

keyphrases = ["machine learning", "keyphrase encoders"]

model = SentenceTransformer('Sheshera/lace-kp-encoder-compsci')
embeddings = model.encode(keyphrases)
print(embeddings)
```
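
The embeddings can then be compared directly, for example with the cosine-similarity helper that ships with sentence-transformers. This is a small illustrative follow-up to the snippet above, not part of the original LACE pipeline:

```python
from sentence_transformers import util

# Pairwise cosine similarities between the keyphrase embeddings computed above.
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```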

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized token embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: take the attention mask into account for correct averaging.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Keyphrases we want embeddings for
keyphrases = ["machine learning", "keyphrase encoders"]

# Load the model from the HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Sheshera/lace-kp-encoder-compsci')
model = AutoModel.from_pretrained('Sheshera/lace-kp-encoder-compsci')

# Tokenize keyphrases
encoded_input = tokenizer(keyphrases, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
keyphrase_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Keyphrase embeddings:")
print(keyphrase_embeddings)
```
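
Since the encoder was trained to place keyphrases near the Aspire embeddings of their contexts, a natural follow-up is scoring keyphrases against sentence contexts. The sketch below continues the snippet above (reusing `mean_pooling` and `keyphrase_embeddings`) and makes an assumption worth flagging: it mean-pools the Aspire encoder's outputs, which only approximates how the LACE repository actually encodes contexts.

```python
# Continues the snippet above: assumes mean_pooling and keyphrase_embeddings exist.
# ASSUMPTION: mean pooling over token embeddings is used as a stand-in for the
# Aspire contextual sentence representation; see the LACE repo for the real encoding.
ctx_tokenizer = AutoTokenizer.from_pretrained('allenai/aspire-contextualsentence-multim-compsci')
ctx_model = AutoModel.from_pretrained('allenai/aspire-contextualsentence-multim-compsci')

contexts = ["We train a model on labeled examples and evaluate it on held-out data."]
ctx_input = ctx_tokenizer(contexts, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    ctx_output = ctx_model(**ctx_input)
context_embeddings = mean_pooling(ctx_output, ctx_input['attention_mask'])

# L2 distance between each keyphrase and each context; lower means closer.
distances = torch.cdist(keyphrase_embeddings, context_embeddings)
print(distances)
```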