---
language:
- en
datasets:
- simpeval
tags:
- simplification
license: apache-2.0
---
This repository contains the trained checkpoint for LENS-SALSA, introduced in *Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA* (arXiv:2305.14458). For more information, please refer to the SALSA repository.
Install the `lens-metric` package:

```bash
pip install lens-metric
```
```python
from lens import download_model, LENS_SALSA

# Download the LENS-SALSA checkpoint and load the model
lens_salsa_path = download_model("davidheineman/lens-salsa")
lens_salsa = LENS_SALSA(lens_salsa_path)

complex = [
    "They are culturally akin to the coastal peoples of Papua New Guinea."
]
simple = [
    "They are culturally similar to the people of Papua New Guinea."
]

# Reference-free scoring: only the complex source and its simplification are needed
scores, word_level_scores = lens_salsa.score(complex, simple, batch_size=8, devices=[0])
print(scores)  # [72.40909337997437]

# LENS-SALSA also performs word-level error identification; recover_output() returns the tagged output
tagged_output = lens_salsa.recover_output(word_level_scores, threshold=0.5)
print(tagged_output)
```
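The tagging threshold is tunable. As a small hedged sketch (it only reuses `score()` and `recover_output()` from the example above; the exact format of the tagged output is whatever the library returns), you can sweep a few thresholds to compare how sensitive the error tagging is:

```python
# Sketch: compare recover_output() tagging at several thresholds.
# Assumes `word_level_scores` from the example above is in scope.
for threshold in (0.3, 0.5, 0.7):
    tagged = lens_salsa.recover_output(word_level_scores, threshold=threshold)
    print(f"threshold={threshold}: {tagged}")
```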
For an end-to-end example, please see the quick demo Google Colab notebook.
## Intended uses
Our model is intended for reference-free simplification evaluation. Given a source text and its simplification, it outputs a single quality score, where higher values indicate a better simplification (the example above scores 72.4). LENS-SALSA was trained on edit annotations from the SimpEval dataset, which covers manually written simplifications of complex Wikipedia sentences. We have not evaluated our model on non-English languages or non-Wikipedia domains.
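Because scoring is reference-free, several candidate simplifications of the same source can be ranked directly. The following is a minimal sketch, assuming the `lens_salsa` object loaded in the quick start above; the candidate sentences are purely illustrative:

```python
# Sketch: rank two candidate simplifications of the same complex source.
# Assumes `lens_salsa` is already loaded as in the quick start; sentences are illustrative.
source = "They are culturally akin to the coastal peoples of Papua New Guinea."
candidates = [
    "They are culturally similar to the people of Papua New Guinea.",
    "They are akin to coastal peoples.",
]

# score() takes parallel lists, so repeat the source once per candidate
sources = [source] * len(candidates)
scores, _ = lens_salsa.score(sources, candidates, batch_size=8, devices=[0])

# Higher LENS-SALSA scores indicate better simplifications
best_score, best_candidate = max(zip(scores, candidates))
print(f"Best candidate ({best_score:.1f}): {best_candidate}")
```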
## Cite SALSA
If you find our paper, code or data helpful, please consider citing our work:
```bibtex
@article{heineman2023dancing,
  title={Dancing {B}etween {S}uccess and {F}ailure: {E}dit-level {S}implification {E}valuation using {SALSA}},
  author={Heineman, David and Dou, Yao and Xu, Wei},
  journal={arXiv preprint arXiv:2305.14458},
  year={2023}
}
```