wanzin committed
Commit 219cce4 • 1 Parent(s): 0b572cf

Update README.md

Files changed (1): README.md (+8 -6)
README.md CHANGED
@@ -4,7 +4,7 @@ emoji: 🌖
 colorFrom: purple
 colorTo: pink
 sdk: gradio
-sdk_version: 4.7.1
+sdk_version: 4.41.0
 app_file: app.py
 pinned: false
 license: apache-2.0
@@ -12,11 +12,13 @@ tags:
 - evaluate
 - metric
 description: >-
-  Perplexity metric implemented by d-Matrix.
-  Perplexity (PPL) is one of the most common metrics for evaluating language models.
-  It is defined as the exponentiated average negative log-likelihood of a sequence, calculated with exponent base `e`.
-  Note that this metric is intended for Causal Language Models; the perplexity calculation is only correct if the model uses Cross Entropy Loss.
-  For more information, see https://huggingface.co/docs/transformers/perplexity
+  Perplexity metric implemented by d-Matrix. Perplexity (PPL) is one of the most
+  common metrics for evaluating language models. It is defined as the
+  exponentiated average negative log-likelihood of a sequence, calculated with
+  exponent base `e`. Note that this metric is intended for Causal Language
+  Models; the perplexity calculation is only correct if the model uses Cross
+  Entropy Loss. For more information, see
+  https://huggingface.co/docs/transformers/perplexity
 ---
 
 # Metric Card for Perplexity
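
The updated description defines perplexity as the exponentiated average negative log-likelihood (base `e`), which is exactly `exp` of the mean cross-entropy loss of a causal language model. The sketch below is illustrative only and is not the d-Matrix implementation; the function name `sequence_perplexity`, its `logits`/`target_ids` arguments, and the use of PyTorch are assumptions made for the example.

```python
# Minimal sketch (assumed, not the d-Matrix metric): perplexity as the
# exponentiated mean negative log-likelihood, base e, of a token sequence.
import math

import torch
import torch.nn.functional as F


def sequence_perplexity(logits: torch.Tensor, target_ids: torch.Tensor) -> float:
    """logits: (seq_len, vocab_size) next-token scores; target_ids: (seq_len,)."""
    # Cross-entropy with the natural log is the average negative log-likelihood,
    # which is why the metric is only correct for models evaluated with a
    # cross-entropy objective.
    nll = F.cross_entropy(logits, target_ids, reduction="mean")
    return math.exp(nll.item())
```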