---
tags:
- text-classification
- bert
---

# Model Card for bleurt-tiny-512

# Model Details

## Model Description

PyTorch version of the original BLEURT models from the ACL paper.

- **Developed by:** Elron Bandel, Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research
- **Shared by [Optional]:** Elron Bandel
- **Model type:** Text Classification
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
  - [GitHub Repo](https://github.com/google-research/bleurt/tree/master)
  - [Associated Paper](https://aclanthology.org/2020.acl-main.704/)
  - [Blog Post](https://ai.googleblog.com/2020/05/evaluating-natural-language-generation.html)

# Uses

## Direct Use

This model can be used for the task of Text Classification.

## Downstream Use [Optional]

More information needed.

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The model authors note in the [associated paper](https://aclanthology.org/2020.acl-main.704.pdf):

> We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year.

## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The test sets for years 2018 and 2019 of the WMT Metrics Shared Task (to-English language pairs) are noisier.

### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

More information needed
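Since the architecture details above are listed as "More information needed", one way to recover the basics is to read the configuration shipped with the checkpoint itself. A minimal sketch, using only the checkpoint name from this card; the printed fields depend on what the uploaded config contains:

```python
from transformers import AutoConfig

# Fetch only the checkpoint's configuration (no weights are downloaded);
# it exposes the underlying BERT hyperparameters of the "tiny" variant.
config = AutoConfig.from_pretrained("Elron/bleurt-tiny-512")
print(config.num_hidden_layers, config.hidden_size, config.max_position_embeddings)
```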
# Citation

**BibTeX:**

```bibtex
@inproceedings{sellam2020bleurt,
  title = {BLEURT: Learning Robust Metrics for Text Generation},
  author = {Thibault Sellam and Dipanjan Das and Ankur P Parikh},
  year = {2020},
  booktitle = {Proceedings of ACL}
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

Elron Bandel in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()

references = ["hello world", "hello world"]
candidates = ["hi universe", "bye world"]

with torch.no_grad():
    # Each (reference, candidate) pair is encoded as a single sequence;
    # the model's single regression logit is the BLEURT score.
    scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze()

print(scores)  # tensor([-0.9414, -0.5678])
```

See [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) for model conversion code.
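The example above works because every pair happens to tokenize to the same length. For batches of pairs with differing lengths, enable padding and truncation when tokenizing. A minimal sketch, assuming the same checkpoint; the sentence pairs are made up for illustration:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512")
model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512")
model.eval()

references = ["the cat sat on the mat", "hello world"]
candidates = ["a cat was sitting on a mat", "hi"]

# Pad shorter pairs and truncate anything beyond 512 tokens so
# (reference, candidate) pairs of arbitrary length fit in one batch.
inputs = tokenizer(references, candidates, padding=True, truncation=True,
                   max_length=512, return_tensors="pt")

with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

print(scores)  # one score per (reference, candidate) pair
```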