chkla committed on
Commit 5c2034d
1 Parent(s): b063c93

Upload political-twitter-messages-toernberg2023.md

prompts/political-twitter-messages-toernberg2023.md ADDED
---
id: political-twitter-messages-toernberg2023
title: Political Twitter Messages
author: Petter Törnberg
paperlink: https://arxiv.org/pdf/2304.06588.pdf
date: 14.4.2023
language: en
task: political affiliation of a Twitter poster
version: 1.0
addedby: [chkla](https://github.com/chkla)
keywords: political affiliation, twitter
---

## Prompt Description

The prompt instructs a generative model to act as a zero-shot annotator of political affiliation: given a Twitter post by a US politician sent in the two months before the 2020 US presidential election, the model is asked to label the poster as either "Democrat" or "Republican". Törnberg (2023) uses it to compare ChatGPT-4's zero-shot annotations against labels produced by expert coders and crowd workers.

## Prompt Text

“You will be given a set of Twitter posts from different US politicians, sent during the two months preceding the 2020 US presidential election, that is, between September 3rd, 2020, and November 3rd, 2020. Your task is to use your knowledge of US politics to make an educated guess on whether the poster is a Democrat or Republican. Respond either ‘Democrat’ or ‘Republican’. If the message does not have enough information for an educated guess, just make your best guess.” (Törnberg 2023, p. 2)

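Since the Example Input and Output section below is still a placeholder, here is a minimal sketch of how the quoted instruction could be wrapped around a single tweet for annotation. The chat-message layout, the `build_messages` helper, and the placeholder tweet are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch only: pairs the instruction from Törnberg (2023, p. 2) with one tweet.
# The message roles and the helper name are assumptions, not taken from the paper.

PROMPT = (
    "You will be given a set of Twitter posts from different US politicians, sent during "
    "the two months preceding the 2020 US presidential election, that is, between "
    "September 3rd, 2020, and November 3rd, 2020. Your task is to use your knowledge of "
    "US politics to make an educated guess on whether the poster is a Democrat or "
    "Republican. Respond either 'Democrat' or 'Republican'. If the message does not have "
    "enough information for an educated guess, just make your best guess."
)

def build_messages(tweet: str) -> list[dict]:
    """Combine the fixed instruction and a single tweet into a chat-style message list."""
    return [
        {"role": "system", "content": PROMPT},
        {"role": "user", "content": tweet},
    ]

# Usage with a placeholder tweet; real inputs would come from the 2020 politician dataset.
messages = build_messages("<tweet text goes here>")
```
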
## Language

- Prompt Language: English
- Dataset Language: English

## NLP Task

- Task: "\[C\]lassifying the political affiliation of a Twitter poster based on the content of a tweet" (Törnberg 2023, p. 1)

## Example Input and Output

- Example 1
  - Input: [Provide an example input for the prompt]
  - Output: [Provide an example output for the given input]
- Example 2
  - Input: [Provide another example input for the prompt]
  - Output: [Provide another example output for the given input]

## Parameters and Constraints

"The API was then given the Twitter messages, selected in random order. The model was run with high and low values for temperature – a parameter that controls the level of randomness or “creativity” in the generated text. Since the responses are stochastic, we ran the model 5 times at a low temperature (0.2), and at a high temperature (1.0) to capture variability in responses. The model was thus run for a total of 5000 times." (Törnberg 2023, p. 2)

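As a rough illustration of the sampling scheme quoted above (repeated runs at temperature 0.2 and 1.0 per message), the sketch below uses the OpenAI Python client. The gpt-4 model name, the client calls, and the record bookkeeping are assumptions made for replication purposes; the exact client code is not part of this card.

```python
# Sketch of the described sampling scheme: each tweet is annotated 5 times at temperature
# 0.2 and 5 times at temperature 1.0. Model name, client usage, and the PROMPT constant
# (the instruction quoted under "Prompt Text") are assumptions, not the paper's own code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = "You will be given a set of Twitter posts from different US politicians, ..."  # full text above

TEMPERATURES = (0.2, 1.0)
RUNS_PER_TEMPERATURE = 5  # read as 5 runs at each temperature; 5,000 total runs would imply ~500 tweets

def annotate(tweet: str, temperature: float) -> str:
    """Return a single 'Democrat' / 'Republican' guess for one tweet."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": tweet},
        ],
    )
    return response.choices[0].message.content.strip()

def annotate_corpus(tweets: list[str]) -> list[dict]:
    """Collect every repeated annotation together with its run settings."""
    records = []
    for tweet in tweets:
        for temperature in TEMPERATURES:
            for run in range(RUNS_PER_TEMPERATURE):
                records.append({
                    "tweet": tweet,
                    "temperature": temperature,
                    "run": run,
                    "label": annotate(tweet, temperature),
                })
    return records
```
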
## Evaluation Metrics

[List the evaluation metrics used to assess the quality of the generated artificial annotations, such as accuracy, F1 score, or BLEU score.]

## Use Cases

[List any specific use cases or applications for the prompt in artificial annotation, such as data annotation, semi-supervised learning, or active learning.]

## Limitations and Potential Biases

[Briefly discuss any limitations or potential biases associated with the prompt, as well as any steps taken to mitigate them, in the context of artificial annotation with generative models.]

## Related Research and References

[List any relevant research papers, articles, or resources that informed the creation of the prompt or are closely related to it, especially in the area of artificial annotation with generative models. Include proper citations where applicable.]

## Cite

> Petter Törnberg (2023) "ChatGPT-4 Outperforms Experts and Crowd Workers in Annotating Political Twitter Messages with Zero-Shot Learning" [[Paper]](https://arxiv.org/pdf/2304.06588.pdf)