
This repository contains the trained checkpoint for LENS, introduced in LENS: A Learnable Evaluation Metric for Text Simplification (ACL 2023). For more information, please refer to the LENS repository.

Install the package from PyPI:

pip install lens-metric

Then download the checkpoint and score a simplification:

from lens import download_model, LENS

# Download the pretrained LENS checkpoint from the Hugging Face Hub
lens_path = download_model("davidheineman/lens")
lens = LENS(lens_path, rescale=True)

# Original (complex) sentences, system simplifications, and human references
complex = [
    "They are culturally akin to the coastal peoples of Papua New Guinea."
]
simple = [
    "They are culturally similar to the people of Papua New Guinea."
]
references = [[
    "They are culturally similar to the coastal peoples of Papua New Guinea.",
    "They are similar to the Papua New Guinea people living on the coast."
]]

scores = lens.score(complex, simple, references, batch_size=8, devices=[0])
print(scores)  # [78.6344531130125]

For a quick demo, please see the example Google Colab notebook.

Intended uses

This model is for reference-based text simplification evaluation. For a metric that requires no references, please see LENS-SALSA.
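
For instance, a reference-based metric like LENS can compare outputs from different simplification systems against the same human references. The sketch below is a hypothetical comparison (the system names and outputs are made up) that reuses the score call shown above:

from lens import download_model, LENS

lens = LENS(download_model("davidheineman/lens"), rescale=True)

# One source sentence and its human-written reference simplifications
complex = ["They are culturally akin to the coastal peoples of Papua New Guinea."]
references = [["They are culturally similar to the coastal peoples of Papua New Guinea."]]

# Hypothetical outputs from two simplification systems for the same source
system_outputs = {
    "system_a": ["They are culturally similar to the people of Papua New Guinea."],
    "system_b": ["They are like the coastal peoples of Papua New Guinea."],
}

# Higher LENS scores indicate simplifications rated closer to the references
for name, simplified in system_outputs.items():
    score = lens.score(complex, simplified, references, batch_size=8, devices=[0])
    print(name, score)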

Cite LENS

If you find our paper, code or data helpful, please consider citing our work:

@inproceedings{maddela-etal-2023-lens,
    title = "{LENS}: A Learnable Evaluation Metric for Text Simplification",
    author = "Maddela, Mounica  and
      Dou, Yao  and
      Heineman, David  and
      Xu, Wei",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.905",
    doi = "10.18653/v1/2023.acl-long.905",
    pages = "16383--16408",
}