# Positive Psychology Frames

_Inducing Positive Perspectives with Text Reframing_

[[Read the Paper]](https://faculty.cc.gatech.edu/~dyang888/docs/acl22_reframing.pdf) | [[Download the Data]](https://www.dropbox.com/sh/pnoczmv0uyn51e6/AAAGek6yX12Yc4PA2RwtZeZKa?dl=0) | [[Demo]](https://huggingface.co/spaces/Ella2323/Positive-Reframing)

<img src="frontpage.png" alt="frontpage" width="650"/>

## *Why Positive Frames?*

This work was inspired by the need to escape the negative patterns of thinking that began to overwhelm the authors during the COVID-19 pandemic. We realized that what we needed was not a naive belief that everything would be okay if we ignored our problems. Instead, we needed _reframing_: a shift in focus, with less weight on the negative things we can't control and more weight on the positive things about ourselves and our situation that we can control.

_Positive reframing_ induces a complementary positive viewpoint (e.g., glass-half-full) that nevertheless supports the underlying content of the original sentence (see the diagram above). The reframe implicates rather than contradicts the source, and the transformation is motivated by theoretically justified strategies from positive psychology (see _What's 'in the box?'_).

Our work shows how NLP can help lead the way by automatically reframing overly negative text using strategies from positive psychology.

## *What's 'in the box?'*

The `Positive Psychology Frames` dataset contains **8,349** reframed sentence pairs, where the original sentence is drawn from a negative tweet (\#stressed) and a reframed copy is provided by a crowdworker trained in the methods of positive psychology. Our positive psychology frames taxonomy is defined below (with the distribution of labels shown on the left).

* ![25.4%](https://progress-bar.dev/25) **Growth Mindset:** Viewing a challenging event as an opportunity for the author specifically to grow or improve themselves.
* ![19.5%](https://progress-bar.dev/20) **Impermanence:** Saying bad things don't last forever, will get better soon, and/or that others have experienced similar struggles.
* ![36.1%](https://progress-bar.dev/36) **Neutralizing:** Replacing a negative word with a neutral word.
* ![48.7%](https://progress-bar.dev/49) **Optimism:** Focusing on things about the situation itself, in that moment, that are good (not just forecasting a better future).
* ![10.1%](https://progress-bar.dev/10) **Self-Affirmation:** Talking about what strengths the author already has, or the values they admire, like love, courage, perseverance, etc.
* ![13.0%](https://progress-bar.dev/13) **Thankfulness:** Expressing thankfulness or gratitude with key words like appreciate, glad that, thankful for, good thing, etc.

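Note that the percentages above sum to more than 100%, since a single reframe can draw on several strategies at once. A minimal sketch of how such a distribution can be tallied; the sample annotations and the comma-separated label convention here are our own assumptions, not the released file format:

```python
from collections import Counter

# Invented sample annotations for illustration only; we assume a pair's
# strategies are listed comma-separated when more than one applies, which
# is why per-strategy percentages can exceed 100% in aggregate.
annotations = [
    "optimism",
    "growth_mindset, optimism",
    "thankfulness",
    "neutralizing",
]

# Count how many annotated pairs use each strategy.
counts = Counter(
    label.strip() for ann in annotations for label in ann.split(",")
)
total = len(annotations)
distribution = {label: n / total for label, n in counts.items()}
# optimism appears in 2 of the 4 sample annotations -> 0.5
```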
## *What can I do with this data?*

State-of-the-art neural models can learn from our data how to (1) shift a negatively distorted text into a more positive perspective using a combination of strategies from positive psychology; and (2) recognize or classify the psychological strategies used to reframe a given source.

As our paper baselines show, neural models still have a long way to go before they can reliably generate positive perspectives. We see particular errors from _insubstantial changes, contradictions to the premise, self-contradictions, and hallucinations_. Overall, our results suggest that our dataset can serve as a useful benchmark for building natural language generation systems with positive perspectives. For more information, please [read the paper](https://faculty.cc.gatech.edu/~dyang888/docs/acl22_reframing.pdf).

## *How do I run the baseline models?*

**1. Set Up Environment**
* CUDA, cuDNN
* Anaconda
```
conda create --name reframe python=3.7
conda activate reframe
pip install -r requirements.txt
```
**2. Dataset Preparation**

The datasets are under the `data/` folder.

* Random, SBERT, T5, BART: `wholetrain.csv`, `wholetest.csv`
  The datasets contain the fields `original_text`, `reframed_text`, `strategy`, and `original_with_label`.
* GPT, GPT-2: `wholetrain_gpt.txt`, `wholetest.csv`
  The train data wraps each sentence pair in `<startoftext>` and `<endoftext>` tokens, and a `reframed: ` token indicates where the reframed sentence begins. Each sentence pair starts on a new line.
* Seq2Seq-LSTM: `for_train.txt`, `for_test.txt`
  Each line contains a tab-separated pair: `original_text \t reframed_text`
* CopyNMT: `train-original.txt`, `train-reframed.txt`, `validation-original.txt`, `validation-reframed.txt`, `test-original.txt`, `test-reframed.txt`
  Each file contains the original/reframed sentences separated by `\n`.

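The GPT-style line format can be sketched with a pair of small helpers; the token spellings follow the description above, but the helper names and the exact whitespace around the `reframed: ` separator are our assumptions, so check them against the released `wholetrain_gpt.txt` before relying on them:

```python
# Hypothetical helpers for the wholetrain_gpt.txt line format described
# above. Token spellings come from the README; spacing is assumed.

def to_gpt_line(original: str, reframed: str) -> str:
    """Serialize one sentence pair into a single training line."""
    return f"<startoftext>{original} reframed: {reframed}<endoftext>"

def from_gpt_line(line: str) -> tuple[str, str]:
    """Split a formatted training line back into (original, reframed)."""
    body = line.removeprefix("<startoftext>").removesuffix("<endoftext>")
    original, reframed = body.split(" reframed: ", 1)
    return original, reframed

line = to_gpt_line("I am so stressed out.", "This is a chance to slow down.")
# Round-trips back to the original pair.
assert from_gpt_line(line) == ("I am so stressed out.", "This is a chance to slow down.")
```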
**3. Run the Baseline Models**

Random, SBERT, T5, BART, GPT, GPT-2:
```
python3 run.py --arguments
```
Arguments:
* `--model`: choose from random, sbert, t5, BART
* `--setting`: default is unconstrained; the controlled/predict setting is supported for t5 and BART
* `--train`: path to the train data file
* `--dev`: path to the dev data file
* `--test`: path to the test data file

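For example, a full invocation of the controlled setting might look like the following; the flag names come from the list above, while the data paths assume the `data/` layout from step 2:

```shell
# Train and evaluate the T5 baseline in the controlled setting.
# Paths are illustrative; point them at your local copies of the data.
python3 run.py \
  --model t5 \
  --setting controlled \
  --train data/wholetrain.csv \
  --dev data/wholetest.csv \
  --test data/wholetest.csv
```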
CopyNMT: execute `copynmt_train.sh` and `copynmt_eval.sh`
```
bash copynmt_train.sh
bash copynmt_eval.sh
```

Seq2Seq-LSTM: `git clone https://github.com/bond005/seq2seq.git`, replace the data files in the `data/` folder, and follow the repository's instructions to train the seq2seq-lstm model.

## *How do I cite this work?*

**Citation:**

> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.

**BibTeX:**

```tex
@inproceedings{ziems-etal-2022-positive-frames,
    title = "Inducing Positive Perspectives with Text Reframing",
    author = "Ziems, Caleb and
      Li, Minzhi and
      Zhang, Anthony and
      Yang, Diyi",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
    month = may,
    year = "2022",
    address = "Online and Dublin, Ireland",
    publisher = "Association for Computational Linguistics"
}
```