🦾 xlm-roberta-large-squad2-ctkfacts

🧰 Usage

🤗 Using Hugging Face transformers

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")
tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-squad2-ctkfacts")
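
To run inference directly with the transformers API, tokenize a (context, hypothesis) pair and take the argmax over the logits. A minimal sketch; the class names are read from the model config rather than hard-coded, since the exact label mapping is model-specific:

import torch

# Encode one (context, hypothesis) pair as a single sequence-pair input
inputs = tokenizer("My first context.", "My first hypothesis.",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its label name
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])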

👾 Using UKPLab sentence_transformers CrossEncoder

The model was trained using the CrossEncoder API, which we also recommend for inference.

from sentence_transformers.cross_encoder import CrossEncoder
model = CrossEncoder('ctu-aic/xlm-roberta-large-squad2-ctkfacts')

# Each input is a [context, hypothesis] pair
scores = model.predict([["My first context.", "My first hypothesis."],
                        ["Second context.", "Hypothesis."]])

🌳 Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

👬 Authors

The model was trained and uploaded by ullriher (e-mail: ullriher@fel.cvut.cz).

The code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague (AIC).

πŸ” License

cc-by-sa-4.0

💬 Citation

If you find this model helpful, feel free to cite our publication:


@article{DBLP:journals/corr/abs-2201-11115,
  author    = {Jan Drchal and
               Herbert Ullrich and
               Martin R{\'{y}}par and
               Hana Vincourov{\'{a}} and
               V{\'{a}}clav Moravec},
  title     = {CsFEVER and CTKFacts: Czech Datasets for Fact Verification},
  journal   = {CoRR},
  volume    = {abs/2201.11115},
  year      = {2022},
  url       = {https://arxiv.org/abs/2201.11115},
  eprinttype = {arXiv},
  eprint    = {2201.11115},
  timestamp = {Tue, 01 Feb 2022 14:59:01 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}