machineteacher committed
Commit c476d2d
1 Parent(s): fc81f44

Update README.md

Files changed (1)
  1. README.md +16 -9
README.md CHANGED
@@ -13,7 +13,7 @@ pipeline_tag: text2text-generation
  # Model Card for Model ID
 
  <!-- Provide a quick summary of what the model is/does. -->
- This repository contains files for two Seq2Seq transformers-based models used in our paper: https://arxiv.org/abs/2306.05561.
+ This repository contains files for two Seq2Seq transformers-based models used in our paper: https://aclanthology.org/2023.trustnlp-1.20/.
 
  ## Model Details
 
@@ -29,7 +29,7 @@ This repository contains files for two Seq2Seq transformers-based models used in
 
  ### Model Sources
 
- - **Paper:** https://arxiv.org/abs/2306.05561
+ - **Paper:** https://aclanthology.org/2023.trustnlp-1.20/
 
  ## Uses
 
@@ -85,13 +85,20 @@ We calculate F<sub>1</sub> score based on the abovementioned values.
  **BibTeX:**
 
  ```
- @misc{yermilov2023privacy,
-       title={Privacy- and Utility-Preserving NLP with Anonymized Data: A case study of Pseudonymization},
-       author={Oleksandr Yermilov and Vipul Raheja and Artem Chernodub},
-       year={2023},
-       eprint={2306.05561},
-       archivePrefix={arXiv},
-       primaryClass={cs.CL}
+ @inproceedings{yermilov-etal-2023-privacy,
+     title = "Privacy- and Utility-Preserving {NLP} with Anonymized data: A case study of Pseudonymization",
+     author = "Yermilov, Oleksandr and
+       Raheja, Vipul and
+       Chernodub, Artem",
+     booktitle = "Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)",
+     month = jul,
+     year = "2023",
+     address = "Toronto, Canada",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2023.trustnlp-1.20",
+     doi = "10.18653/v1/2023.trustnlp-1.20",
+     pages = "232--241",
+     abstract = "This work investigates the effectiveness of different pseudonymization techniques, ranging from rule-based substitutions to using pre-trained Large Language Models (LLMs), on a variety of datasets and models used for two widely used NLP tasks: text classification and summarization. Our work provides crucial insights into the gaps between original and anonymized data (focusing on the pseudonymization technique) and model quality and fosters future research into higher-quality anonymization techniques better to balance the trade-offs between data protection and utility preservation. We make our code, pseudonymized datasets, and downstream models publicly available.",
  }
  ```
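Since the README's `pipeline_tag` is `text2text-generation`, a minimal usage sketch is shown below, assuming the standard Hugging Face `transformers` pipeline API; the model ID is a placeholder, since the actual repository ID does not appear on this page.

```python
# Minimal sketch, not from the model card itself: load a Seq2Seq checkpoint
# through the standard transformers text2text-generation pipeline.
from transformers import pipeline

# "org/pseudonymization-seq2seq" is a hypothetical model ID; substitute the real one.
generator = pipeline("text2text-generation", model="org/pseudonymization-seq2seq")

# The pipeline returns a list of dicts with a "generated_text" field.
output = generator("John Smith flew from New York to Paris.")
print(output[0]["generated_text"])
```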
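The last hunk's context line mentions the card's F<sub>1</sub> computation. For reference, and assuming the "abovementioned values" are precision P and recall R (they are not shown in this diff), the standard definition is:

```latex
F_1 = \frac{2 \cdot P \cdot R}{P + R}
```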