heruberuto committed
Commit 9af003a
1 Parent(s): a0f0327

Generate README

Files changed (1)
  1. README.md +62 -42
README.md CHANGED
@@ -1,47 +1,67 @@
 ---
 tags:
- - generated_from_keras_callback
- model-index:
- - name: xlm-roberta-large-xnli-csfever
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
-
- # xlm-roberta-large-xnli-csfever
-
- This model was trained from scratch on an unknown dataset.
- It achieves the following results on the evaluation set:
-
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - optimizer: None
- - training_precision: float32
-
- ### Training results
-
-
-
- ### Framework versions
-
- - Transformers 4.21.0
- - TensorFlow 2.7.1
- - Datasets 2.4.0
- - Tokenizers 0.12.1
 
 ---
+ datasets:
+ - ctu-aic/csfever
+ - xnli
+ language:
+ - cs
+ license: cc-by-sa-4.0
  tags:
+ - natural-language-inference
+
+ ---
+
+ # 🦾 xlm-roberta-large-xnli-csfever
+
+ ## 🧰 Usage
+
+ ### 🤗 Using Hugging Face `transformers`
+ ```python
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model = AutoModelForSequenceClassification.from_pretrained("ctu-aic/xlm-roberta-large-xnli-csfever")
+ tokenizer = AutoTokenizer.from_pretrained("ctu-aic/xlm-roberta-large-xnli-csfever")
+ ```
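+
+ A minimal inference sketch follows; the Czech example pair and the use of `model.config.id2label` for decoding are illustrative assumptions rather than part of the original card:
+ ```python
+ import torch
+
+ # Hypothetical context/hypothesis pair -- substitute your own Czech text.
+ context = "Praha je hlavní město České republiky."
+ hypothesis = "Hlavním městem Česka je Praha."
+
+ # Encode the pair and run a forward pass without gradients.
+ inputs = tokenizer(context, hypothesis, return_tensors="pt", truncation=True)
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Decode the top-scoring class; verify the label names via model.config.id2label.
+ predicted = logits.argmax(dim=-1).item()
+ print(model.config.id2label[predicted])
+ ```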
+
+ ### 👾 Using UKPLab `sentence_transformers` `CrossEncoder`
+ The model was trained using the `CrossEncoder` API, which we also recommend for inference.
+ ```python
+ from sentence_transformers.cross_encoder import CrossEncoder
+
+ model = CrossEncoder('ctu-aic/xlm-roberta-large-xnli-csfever')
+ scores = model.predict([["My first context.", "My first hypothesis."],
+                         ["Second context.", "Hypothesis."]])
+ ```
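+
+ For a multi-class cross-encoder, `predict` returns one row of class scores per input pair; a short sketch of turning them into class ids (the label ordering is an assumption to verify against the model config):
+ ```python
+ import numpy as np
+
+ # Each row of `scores` holds the class scores for one context/hypothesis pair.
+ predicted_ids = np.argmax(scores, axis=1)
+ print(predicted_ids)  # map these ids to label names via the model's id2label config
+ ```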
+
+ ## 🌳 Contributing
+ Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
+
+ ## 👬 Authors
+ The model was trained and uploaded by **[ullriher](https://udb.fel.cvut.cz/?uid=ullriher&sn=&givenname=&_cmd=Hledat&_reqn=1&_type=user&setlang=en)** (e-mail: [ullriher@fel.cvut.cz](mailto:ullriher@fel.cvut.cz)).
+
+ The code was co-developed by the NLP team at the Artificial Intelligence Center of CTU in Prague ([AIC](https://www.aic.fel.cvut.cz/)).
+
+ ## 🔐 License
+ [cc-by-sa-4.0](https://choosealicense.com/licenses/cc-by-sa-4.0)
+
+ ## 💬 Citation
+ If you find this model helpful, feel free to cite our publication:
+ ```bibtex
+ @article{DBLP:journals/corr/abs-2201-11115,
+   author     = {Jan Drchal and
+                 Herbert Ullrich and
+                 Martin R{\'{y}}par and
+                 Hana Vincourov{\'{a}} and
+                 V{\'{a}}clav Moravec},
+   title      = {CsFEVER and CTKFacts: Czech Datasets for Fact Verification},
+   journal    = {CoRR},
+   volume     = {abs/2201.11115},
+   year       = {2022},
+   url        = {https://arxiv.org/abs/2201.11115},
+   eprinttype = {arXiv},
+   eprint     = {2201.11115},
+   timestamp  = {Tue, 01 Feb 2022 14:59:01 +0100},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-2201-11115.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```