lvwerra (HF staff) committed
Commit 1443daa
1 Parent(s): b3eb205

Update Space (evaluate main: fe373d2e)

Files changed (4):
  1. README.md +102 -5
  2. app.py +6 -0
  3. perplexity.py +190 -0
  4. requirements.txt +6 -0
README.md CHANGED
@@ -1,12 +1,109 @@
  ---
  title: Perplexity
- emoji: 🔥
- colorFrom: green
- colorTo: yellow
  sdk: gradio
- sdk_version: 3.0.9
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
  ---
  title: Perplexity
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
  sdk: gradio
+ sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - measurement
  ---

+ # Measurement Card for Perplexity
+
+ ## Measurement Description
+ Given a model and an input text sequence, perplexity measures how likely the model is to generate the input text sequence.
+
+ As a measurement, it can be used to evaluate how well a selection of texts matches the distribution of text that the input model was trained on.
+ In this case, the model input should be a trained model, and the input texts should be the text to be evaluated.
+
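+ In the standard formulation (the one used in the Transformers perplexity guide linked under Further References), perplexity is the exponentiated average negative log-likelihood of a tokenized sequence $x_1, \dots, x_t$:
+
+ $$\mathrm{PPL} = \exp\Big(-\frac{1}{t}\sum_{i=1}^{t}\log p_\theta(x_i \mid x_{<i})\Big)$$
+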
+ ## Intended Uses
+ Dataset analysis or exploration.
+
+ ## How to Use
+
+ The measurement takes a list of texts as input, as well as the name of the model used to compute perplexity:
+
+ ```python
+ from evaluate import load
+ perplexity = load("perplexity", module_type="measurement")
+ results = perplexity.compute(data=input_texts, model_id='gpt2')
+ ```
+
+ ### Inputs
+ - **model_id** (str): model used for calculating Perplexity. NOTE: Perplexity can only be calculated for causal language models.
+     - This includes models such as gpt2, causal variations of bert, causal versions of t5, and more (the full list can be found in the AutoModelForCausalLM documentation here: https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForCausalLM ).
+ - **data** (list of str): input text, each separate text snippet is one list entry.
+ - **batch_size** (int): the batch size used to run texts through the model. Defaults to 16.
+ - **add_start_token** (bool): whether to add the start token to the texts, so the perplexity can include the probability of the first word. Defaults to True.
+ - **device** (str): device to run on. Defaults to 'cuda' when available.
+
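+ For illustration, a call that sets each of these arguments explicitly could look like the sketch below (the input texts, batch size, and device shown are placeholder choices, not required values):
+
+ ```python
+ from evaluate import load
+
+ perplexity = load("perplexity", module_type="measurement")
+ results = perplexity.compute(
+     data=["lorem ipsum", "Happy Birthday!", "Bienvenue"],  # any list of non-empty strings
+     model_id="gpt2",            # must be a causal language model
+     batch_size=8,               # defaults to 16
+     add_start_token=True,       # include the probability of the first token
+     device="cuda",              # or "cpu"; defaults to "cuda" when available
+ )
+ ```
+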
+ ### Output Values
+ This measurement outputs a dictionary with the perplexity score for each text in the input list, as well as the mean perplexity.
+ If one of the input texts is longer than the model's maximum input length, it is truncated to the maximum length for the perplexity computation.
+
+ ```
+ {'perplexities': [8.182524681091309, 33.42122268676758, 27.012239456176758], 'mean_perplexity': 22.871995608011883}
+ ```
+
+ This measurement's range is 0 and up. A lower score is better.
+
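+ As a sketch of how to read these outputs (assuming `results` comes from a `compute` call like the ones on this card, with `input_texts` as the list of texts that was scored):
+
+ ```python
+ for text, ppl in zip(input_texts, results["perplexities"]):
+     print(f"{ppl:8.2f}  {text}")
+ print("mean perplexity:", results["mean_perplexity"])
+ ```
+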
+ #### Values from Popular Papers
+
+
+ ### Examples
+ Calculating perplexity on input_texts defined in-line:
+ ```python
+ import evaluate
+
+ perplexity = evaluate.load("perplexity", module_type="measurement")
+ input_texts = ["lorem ipsum", "Happy Birthday!", "Bienvenue"]
+ results = perplexity.compute(model_id='gpt2',
+                              add_start_token=False,
+                              data=input_texts)
+ print(list(results.keys()))
+ >>>['perplexities', 'mean_perplexity']
+ print(round(results["mean_perplexity"], 2))
+ >>>78.22
+ print(round(results["perplexities"][0], 2))
+ >>>11.11
+ ```
+ Calculating perplexity on input_texts loaded in from a dataset:
+ ```python
+ import datasets
+ import evaluate
+
+ perplexity = evaluate.load("perplexity", module_type="measurement")
+ input_texts = datasets.load_dataset("wikitext",
+                                     "wikitext-2-raw-v1",
+                                     split="test")["text"][:50]
+ input_texts = [s for s in input_texts if s != '']
+ results = perplexity.compute(model_id='gpt2',
+                              data=input_texts)
+ print(list(results.keys()))
+ >>>['perplexities', 'mean_perplexity']
+ print(round(results["mean_perplexity"], 2))
+ >>>60.35
+ print(round(results["perplexities"][0], 2))
+ >>>81.12
+ ```
+
+ ## Limitations and Bias
+ Note that the output value is based heavily on what text the model was trained on. This means that perplexity scores are not comparable between models or datasets.
+
+
+ ## Citation
+
+ ```bibtex
+ @article{jelinek1977perplexity,
+     title={Perplexity—a measure of the difficulty of speech recognition tasks},
+     author={Jelinek, Fred and Mercer, Robert L and Bahl, Lalit R and Baker, James K},
+     journal={The Journal of the Acoustical Society of America},
+     volume={62},
+     number={S1},
+     pages={S63--S63},
+     year={1977},
+     publisher={Acoustical Society of America}
+ }
+ ```
+
+ ## Further References
+ - [Hugging Face Perplexity Blog Post](https://huggingface.co/docs/transformers/perplexity)
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("perplexity", module_type="measurement")
+ launch_gradio_widget(module)
perplexity.py ADDED
@@ -0,0 +1,190 @@
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Perplexity Metric."""
+
+ import datasets
+ import numpy as np
+ import torch
+ from torch.nn import CrossEntropyLoss
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ import evaluate
+ from evaluate import logging
+
+
+ _CITATION = """\
+
+ """
+
+ _DESCRIPTION = """
+ Perplexity (PPL) can be used for evaluating to what extent a dataset is similar to the distribution of text that a given model was trained on.
+ It is defined as the exponentiated average negative log-likelihood of a sequence.
+
+ For more information, see https://huggingface.co/docs/transformers/perplexity
+ """
+
+ _KWARGS_DESCRIPTION = """
+ Args:
+     model_id (str): model used for calculating Perplexity
+             NOTE: Perplexity can only be calculated for causal language models.
+                     This includes models such as gpt2, causal variations of bert,
+                     causal versions of t5, and more (the full list can be found
+                     in the AutoModelForCausalLM documentation here:
+                     https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForCausalLM )
+
+     data (list of str): input data, each separate text snippet
+         is one list entry.
+     batch_size (int): the batch size to run texts through the model. Defaults to 16.
+     add_start_token (bool): whether to add the start token to the texts,
+         so the perplexity can include the probability of the first word. Defaults to True.
+     device (str): device to run on, defaults to 'cuda' when available
+ Returns:
+     perplexity: dictionary containing the perplexity scores for the texts
+         in the input list, as well as the mean perplexity. If one of the input texts is
+         longer than the max input length of the model, then it is truncated to the
+         max length for the perplexity computation.
+ Examples:
+     Example 1:
+         >>> perplexity = evaluate.load("perplexity", module_type="measurement")
+         >>> data = ["lorem ipsum", "Happy Birthday!", "Bienvenue"]
+         >>> results = perplexity.compute(model_id='gpt2',
+         ...                              add_start_token=False,
+         ...                              data=data) # doctest:+ELLIPSIS
+         >>> print(list(results.keys()))
+         ['perplexities', 'mean_perplexity']
+         >>> print(round(results["mean_perplexity"], 2))
+         78.22
+         >>> print(round(results["perplexities"][0], 2))
+         11.11
+
+     Example 2:
+         >>> from datasets import load_dataset
+         >>> perplexity = evaluate.load("perplexity", module_type="measurement")
+         >>> data = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"][:10] # doctest: +SKIP
+         >>> data = [s for s in data if s!='']
+         >>> results = perplexity.compute(model_id='gpt2',
+         ...                              data=data)
+         >>> print(list(results.keys()))
+         ['perplexities', 'mean_perplexity']
+         >>> print(round(results["mean_perplexity"], 2)) # doctest: +SKIP
+         60.35
+         >>> print(round(results["perplexities"][0], 2)) # doctest: +SKIP
+         81.12
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class Perplexity(evaluate.EvaluationModule):
+     def _info(self):
+         return evaluate.EvaluationModuleInfo(
+             module_type="measurement",
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "data": datasets.Value("string"),
+                 }
+             ),
+             reference_urls=["https://huggingface.co/docs/transformers/perplexity"],
+         )
+
+     def _compute(self, data, model_id, batch_size: int = 16, add_start_token: bool = True, device=None):
+
+         if device is not None:
+             assert device in ["gpu", "cpu", "cuda"], "device should be either gpu or cpu."
+             if device == "gpu":
+                 device = "cuda"
+         else:
+             device = "cuda" if torch.cuda.is_available() else "cpu"
+
+         model = AutoModelForCausalLM.from_pretrained(model_id)
+         model = model.to(device)
+
+         tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+         # if batch_size > 1 (which generally leads to padding being required), and
+         # if there is not an already assigned pad_token, assign an existing
+         # special token to also be the padding token
+         if tokenizer.pad_token is None and batch_size > 1:
+             existing_special_tokens = list(tokenizer.special_tokens_map_extended.values())
+             # check that the model already has at least one special token defined
+             assert (
+                 len(existing_special_tokens) > 0
+             ), "If batch_size > 1, model must have at least one special token to use for padding. Please use a different model or set batch_size=1."
+             # assign one of the special tokens to also be the pad token
+             tokenizer.add_special_tokens({"pad_token": existing_special_tokens[0]})
+
+         if add_start_token:
+             # leave room for <BOS> token to be added:
+             assert (
+                 tokenizer.bos_token is not None
+             ), "Input model must already have a BOS token if using add_start_token=True. Please use a different model, or set add_start_token=False"
+             max_tokenized_len = model.config.max_length - 1
+         else:
+             max_tokenized_len = model.config.max_length
+
+         encodings = tokenizer(
+             data,
+             add_special_tokens=False,
+             padding=True,
+             truncation=True,
+             max_length=max_tokenized_len,
+             return_tensors="pt",
+             return_attention_mask=True,
+         ).to(device)
+
+         encoded_texts = encodings["input_ids"]
+         attn_masks = encodings["attention_mask"]
+
+         # check that each input is long enough:
+         if add_start_token:
+             assert torch.all(torch.ge(attn_masks.sum(1), 1)), "Each input text must be at least one token long."
+         else:
+             assert torch.all(
+                 torch.ge(attn_masks.sum(1), 2)
+             ), "When add_start_token=False, each input text must be at least two tokens long. Run with add_start_token=True if inputting strings of only one token, and remove all empty input strings."
+
+         ppls = []
+         loss_fct = CrossEntropyLoss(reduction="none")
+
+         for start_index in logging.tqdm(range(0, len(encoded_texts), batch_size)):
+             end_index = min(start_index + batch_size, len(encoded_texts))
+             encoded_batch = encoded_texts[start_index:end_index]
+             attn_mask = attn_masks[start_index:end_index]
+
+             if add_start_token:
+                 bos_tokens_tensor = torch.tensor([[tokenizer.bos_token_id]] * encoded_batch.size(dim=0)).to(device)
+                 encoded_batch = torch.cat([bos_tokens_tensor, encoded_batch], dim=1)
+                 attn_mask = torch.cat(
+                     [torch.ones(bos_tokens_tensor.size(), dtype=torch.int64).to(device), attn_mask], dim=1
+                 )
+
+             labels = encoded_batch
+
+             with torch.no_grad():
+                 out_logits = model(encoded_batch, attention_mask=attn_mask).logits
+
+             shift_logits = out_logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             shift_attention_mask_batch = attn_mask[..., 1:].contiguous()
+
+             perplexity_batch = torch.exp2(
+                 (loss_fct(shift_logits.transpose(1, 2), shift_labels) * shift_attention_mask_batch).sum(1)
+                 / shift_attention_mask_batch.sum(1)
+             )
+
+             ppls += perplexity_batch.tolist()
+
+         return {"perplexities": ppls, "mean_perplexity": np.mean(ppls)}
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ # TODO: fix github to release
+ git+https://github.com/huggingface/evaluate.git@505123230059f9605da8951880eddc9d1fbf4278
+ datasets~=2.0
+ torch
+ torch
+ transformers