{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:00.170305Z"
},
"title": "Image Retrieval for Arguments Using Stance-Aware Query Expansion",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Kiesel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bauhaus-Universit\u00e4t Weimar",
"location": {}
},
"email": "johannes.kiesel@uni-weimar.de"
},
{
"first": "Nico",
"middle": [],
"last": "Reichenbach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leipzig University",
"location": {}
},
"email": "nico.reichenbach@posteo.de"
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Weimar",
"location": {}
},
"email": "benno.stein@uni-weimar.de"
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leipzig University",
"location": {}
},
"email": "martin.potthast@uni-leipzig.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many forms of argumentation employ images as persuasive means, but research in argument mining has been focused on verbal argumentation so far. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve \"argumentative images\" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images being (1) on-topic, (2) argumentative, and (3) on-stance. An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate on images providing contextual information.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Many forms of argumentation employ images as persuasive means, but research in argument mining has been focused on verbal argumentation so far. This paper shows how to integrate images into argument mining research, specifically into argument retrieval. By exploiting the sophisticated image representations of keyword-based image search, we propose to use semantic query expansion for both the pro and the con stance to retrieve \"argumentative images\" for the respective stance. Our results indicate that even simple expansions provide a strong baseline, reaching a precision@10 of 0.49 for images being (1) on-topic, (2) argumentative, and (3) on-stance. An in-depth analysis reveals a high topic dependence of the retrieval performance and shows the need to further investigate on images providing contextual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Images are a prominent form of non-verbal communication. Images convey messages in diverse ways including scribbles, concept drawings, (produced) photos, data visualizations, or internet memes. Moreover, images can serve as an inspiration or an overview of the topic at hand. They play a key role in public discourse (Woods and Hahner, 2019) and in expressing personal opinion (Heiskanen, 2017) , e.g., in the form of political memes. Spread \"virally,\" they can gain a large followership on social media and influence political decision-making (Watt, 2015) or serve as a form of evidence for troublesome events. For example, about 150,000 images are uploaded to Facebook every minute. 1 All of these uses often form an integral part of argumentative writing or speechmaking. Composing an argumentative text or speech hence does not only include the retrieval and arrangement of written arguments, but also that of relevant \"argumentative images\" to go along with them.",
"cite_spans": [
{
"start": 317,
"end": 341,
"text": "(Woods and Hahner, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 377,
"end": 394,
"text": "(Heiskanen, 2017)",
"ref_id": "BIBREF20"
},
{
"start": 544,
"end": 556,
"text": "(Watt, 2015)",
"ref_id": "BIBREF37"
},
{
"start": 685,
"end": 686,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 https://www.domo.com/learn/data-never-sleeps-8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Mock-up of an image search engine for arguments, featuring a query box, checkboxes to filter, and results categorized as pro (left) and con (right) images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, though the retrieval of written arguments has become a well-established line of research, the paper at hand is the first of its kind to define and tackle the task of retrieving \"argumentative images.\" Inspired by existing argument search interfaces, Figure 1 illustrates how an image search engine for arguments might look.",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 267,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By proposing stance-aware query expansion, this paper shows how to harness existing keywordbased image retrieval technology for the task of retrieving images for arguments. The key assumption of this expansion is that adding terms to the user's query that indicate a stance (i.e., pro or con the query's underlying issue), then images retrieved for the expanded queries support that stance. Figure 1 illustrates the desired effect: the shown images stem from Google's image search after having expanded the query nuclear energy with either the term good (images on the left-hand side) or the term anti (right-hand side).",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 397,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, Section 3 introduces the task of \"argumentative image\" retrieval, Section 4 outlines our approach to this task-stance-aware query expansion-including three methodological variants, and the Sections 5 and 6 detail important elements of our experimental setting as well as evaluation results for the three proposed methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Al-Khatib, 2019 defines an argumentation strategy as \"a set of principles that guides the selection and arrangement of arguments (plus contextual information) in an argumentative discourse.\" While research in computational argumentation has so far focused on verbal argumentation, real-world discourse often integrates images to great effect, ranging from memes emerging on the spur of of a moment to figures summarizing months of research. Discourse analyses thus often include images (e.g., Frohmann, 1992; Farkas and Bene, 2020) .",
"cite_spans": [
{
"start": 493,
"end": 508,
"text": "Frohmann, 1992;",
"ref_id": "BIBREF17"
},
{
"start": 509,
"end": 531,
"text": "Farkas and Bene, 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Images for Arguments Images can provide contextual information and express, underline, or popularize an opinion (Dove, 2012) , thereby taking the form of subjective statements (Dunaway, 2018) . Some images express both a premise and a conclusion, making them full arguments (Roque, 2012; Grancea, 2017) . Other images may provide contextual information only and need to be combined with a conclusion to form an argument. In this regard, a recent SemEval task distinguished a total of 22 persuasion techniques in memes alone (Dimitrov et al., 2021) . Moreover, argument quality dimensions like acceptability, credibility, emotional appeal, and sufficiency (Wachsmuth et al., 2017a) all apply to arguments that include images as well. And as a kind of visual argumentation scheme (a \"stereotypical pattern of human reasoning\"; Walton et al., 2008) , some images are frequently adapted to different topics (Heiskanen, 2017) . Social groups even create their own symbolisms and use them to express opinions (e.g., fringe web communities; Zannettou et al., 2018) . The potentially high emotional impact of images to a vast audience (Adler-Nissen et al., 2020) can cause changes in the social discourse and eventually politics (Woods and Hahner, 2019). Examples include photos of a drowned refugee child (Barnard and Shoumali, 2015; D'Orazio, 2015) , police violence (Berger, 2010) , or the recent departure of Western military forces from Afghanistan.",
"cite_spans": [
{
"start": 112,
"end": 124,
"text": "(Dove, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 176,
"end": 191,
"text": "(Dunaway, 2018)",
"ref_id": "BIBREF14"
},
{
"start": 274,
"end": 287,
"text": "(Roque, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 288,
"end": 302,
"text": "Grancea, 2017)",
"ref_id": "BIBREF19"
},
{
"start": 524,
"end": 547,
"text": "(Dimitrov et al., 2021)",
"ref_id": null
},
{
"start": 655,
"end": 680,
"text": "(Wachsmuth et al., 2017a)",
"ref_id": "BIBREF31"
},
{
"start": 825,
"end": 845,
"text": "Walton et al., 2008)",
"ref_id": null
},
{
"start": 903,
"end": 920,
"text": "(Heiskanen, 2017)",
"ref_id": "BIBREF20"
},
{
"start": 1034,
"end": 1057,
"text": "Zannettou et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 1127,
"end": 1154,
"text": "(Adler-Nissen et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 1298,
"end": 1326,
"text": "(Barnard and Shoumali, 2015;",
"ref_id": "BIBREF5"
},
{
"start": 1327,
"end": 1342,
"text": "D'Orazio, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1361,
"end": 1375,
"text": "(Berger, 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Image search Keyword-based image search analyzing the content of images or videos has been studied for decades (Aigrain et al., 1996) , pre-dated only by approaches relying on metadata and similarity measures (Chang and Fu, 1980) . In a recent survey, Latif et al. (2019) categorize image features into color, texture, shape, and spatial features. Current commercial search engines also index text found in images, surrounding text, alternative texts displayed when an image is unavailable, and their URLs (Wu, 2020; Google, 2021) . Also related to the retrieval of argumentative images is that of \"emotional images\", which relies on image features like color and composition (Wang and He, 2008; Solli and Lenz, 2011) . Argumentation goes hand in hand with emotions, so that emotional features may be promising for retrieving images for arguments in the future. For lack of labeled data (argumentativeness plus emotionality), we start by exploiting keyword-based web search to retrieve images for arguments from the web, as did the earliest image search approaches (e.g., Yanai, 2001 ).",
"cite_spans": [
{
"start": 111,
"end": 133,
"text": "(Aigrain et al., 1996)",
"ref_id": "BIBREF3"
},
{
"start": 209,
"end": 229,
"text": "(Chang and Fu, 1980)",
"ref_id": "BIBREF9"
},
{
"start": 252,
"end": 271,
"text": "Latif et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 506,
"end": 516,
"text": "(Wu, 2020;",
"ref_id": "BIBREF40"
},
{
"start": 517,
"end": 530,
"text": "Google, 2021)",
"ref_id": null
},
{
"start": 676,
"end": 695,
"text": "(Wang and He, 2008;",
"ref_id": "BIBREF36"
},
{
"start": 696,
"end": 717,
"text": "Solli and Lenz, 2011)",
"ref_id": "BIBREF27"
},
{
"start": 1072,
"end": 1083,
"text": "Yanai, 2001",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Argument search Based on argument mining from texts (cf. Peldszus and Stede, 2013) , argument search engines aim to support decisionmaking and persuasion. Conceptually, a query to an argument search engine may either name an issue without a stance (Stab et al., 2018 ) (e.g., nuclear energy), or represent a conclusion for which supporting and attacking premises are to be retrieved (e.g., nuclear energy mitigates climate change). The first collection of arguments from the web is the Internet Argument Corpus (Walker et al., 2012), containing 400,000 posts from an online debate portal. The first argument search engine, args.me, indexes a similar dataset of arguments (Wachsmuth et al., 2017b) . Not relying on retrieval in collections of arguments, Ar-gumenText (Stab et al., 2018) first searches for documents relevant to a user's query in generic web crawls, and then mines arguments on the fly within retrieved documents. Regarding the evaluation of argument search engines, judging the topic relevance of a retrieved text alone is insufficient, it must also be argumentative (Potthast et al., 2019; Bondarenko et al., 2021) . Research on argumentation has identified many further quality criteria for arguments (Wachsmuth et al., 2017a ), yet few have been investigated for argument retrieval, and hardly anything has been said on the argumentative quality of images.",
"cite_spans": [
{
"start": 57,
"end": 82,
"text": "Peldszus and Stede, 2013)",
"ref_id": "BIBREF23"
},
{
"start": 248,
"end": 266,
"text": "(Stab et al., 2018",
"ref_id": "BIBREF28"
},
{
"start": 671,
"end": 696,
"text": "(Wachsmuth et al., 2017b)",
"ref_id": "BIBREF32"
},
{
"start": 766,
"end": 785,
"text": "(Stab et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 1083,
"end": 1106,
"text": "(Potthast et al., 2019;",
"ref_id": "BIBREF24"
},
{
"start": 1107,
"end": 1131,
"text": "Bondarenko et al., 2021)",
"ref_id": "BIBREF8"
},
{
"start": 1219,
"end": 1243,
"text": "(Wachsmuth et al., 2017a",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similar to textual argument retrieval, we define the task of retrieving images for arguments as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "Given a keyword query suggesting an issue or a claim for a topic, retrieve as two ranked lists those and only those images that can assist someone in (1) supporting and (2) attacking it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "This definition presumes that a given user intends to search for images suitable to assist in persuading others, an intent that encompasses a diversity of real-life scenarios, such as sending a pertinent image to a friend, using it as a cover for a news article or blog post, or on a slide in a presentation. Furthermore, it encompasses deliberation scenarios, such as forming an own opinion, and creating a collage for a school project. We expect search engines for argumentative images to offer facets as shown in Figure 1 to meet such needs.",
"cite_spans": [],
"ref_spans": [
{
"start": 516,
"end": 524,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "To assess the relevance of an image to a given query, a three-fold judgment is required:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "Topic relevance The image content is related to the query topic. This criterion corresponds to the notion of relevance in keyword-based image retrieval (Shanbehzadeh et al., 2000) .",
"cite_spans": [
{
"start": 152,
"end": 179,
"text": "(Shanbehzadeh et al., 2000)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "Argumentativeness The image can be used to support a stance regarding the query topic. This criterion corresponds to the notion of a context-dependent claim in textual argument mining (Aharoni et al., 2014) .",
"cite_spans": [
{
"start": 184,
"end": 206,
"text": "(Aharoni et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "Stance relevance The image can be used to support the predicted stance within the query topic. This criterion corresponds to the categorization into pros and cons in standard argument search (Wachsmuth et al., 2017b ).",
"cite_spans": [
{
"start": 191,
"end": 215,
"text": "(Wachsmuth et al., 2017b",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "Since stance relevance entails argumentativeness, which in turn entails topic relevance, we refer to these three as \"levels\" of relevance. Though previous work focused on stance relevance only (e.g., Stab et al., 2018) , an analysis on all three levels provides more insight into the errors made and is especially warranted for \"argumentative images.\" Figure 2 illustrates the different levels for the example query nuclear energy, showing images that fail (a) topic-relevance, (b) argumentativeness, and (c) stance-relevance (provided the image is categorized as a con). Though also many textual arguments appeal to emotion rather than logic, images are especially suited for such an appeal. However, as the emotions invoked can depend on both the viewer and context, it can be surprisingly unclear whether an image is argumentative or to which stance it is relevant. Consider Figure 2d . The meme image mixes the question of nuclear energy with the emotions towards a certain political moment (US president Trump's 2020 State of the Union Address, where, at its conclusion, the Speaker of the House of Representatives Pelosi tore up its official copy as a symbolic comment on its contents, Stewart, 2020) . Depending on the own emotions towards the shown politicians and statement, one can read the image as pro nuclear energy (\"society\" not caring for facts) or against (\"society\" ripping up lies). In the case of such images with ambiguous stance, the definition of stance relevance above suggests listing the image both as a pro and a con. In the future, however, additional considerations of argumentative quality (especially in terms of clarity) might suggest to omit such images completely, or to show them separately.",
"cite_spans": [
{
"start": 200,
"end": 218,
"text": "Stab et al., 2018)",
"ref_id": "BIBREF28"
},
{
"start": 1192,
"end": 1206,
"text": "Stewart, 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 878,
"end": 887,
"text": "Figure 2d",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Image Retrieval for Arguments",
"sec_num": "3"
},
{
"text": "The basic hypothesis of our approach is that the task of image retrieval for arguments can be tackled effectively by the structure shown in Figure 3a , Figure 3 : (a) Generic structure of an image search engine for arguments using a keyword-based image search engine and (b) the more specific structure of stance-aware query expansion employed in this paper. In the generic structure, the user's query q is expanded to n queries q 1 , . . . , q n , a result list R i is retrieved for each q i , which are classified and re-ranked to create the lists of pro (R + ) and con images (R \u2212 ). Structure (b) corresponds to a particular case without classification and re-ranking, where different sets of expanded queries, q + 1 , . . . , q + n and q \u2212 1 , . . . , q \u2212 m , are used to independently create R + and R \u2212 . which issues for a user's query several expanded queries to a keyword-based image search engine and then fuses the result lists to a list of pro and con images. However, we assume that the semantic capabilities of modern keyword-based image search engines can be harnessed further to already provide a classification of the images, thus separating pro and con images as well as omitting neutral ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 140,
"end": 149,
"text": "Figure 3a",
"ref_id": null
},
{
"start": 152,
"end": 160,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
{
"text": "In particular we propose a stance-aware query expansion as depicted in Figure 3b . It generates focused queries for both relevant stances (superscript + for pro and \u2212 for con), which are processed independently of each other, presuming that a sufficient diversity of expanded stance-aware queries recalls relevant images for each stance on the top ranks. In that case, the development of a post-retrieval image stance classifier can be omitted. The result lists of each stance are interlaced, i.e., composed by taking the first result of each list for a stance, then the second of each, and so on.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 80,
"text": "Figure 3b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
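To make the pipeline of Figure 3b concrete, the following is a minimal Python sketch. The `image_search` callable is a hypothetical stand-in for any keyword-based image search API, and dropping duplicate URLs during interlacing is our own safeguard rather than something specified above.

```python
# Minimal sketch of the stance-aware pipeline (Figure 3b). `image_search`
# is a hypothetical stand-in for any keyword-based image search API.
from itertools import zip_longest
from typing import Callable, List

def interlace(result_lists: List[List[str]]) -> List[str]:
    """Round-robin merge: first result of each list, then the second, etc.
    Skipping duplicate URLs is an assumption, not specified in the paper."""
    merged, seen = [], set()
    for tier in zip_longest(*result_lists):
        for url in tier:
            if url is not None and url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

def retrieve_stance_images(query: str, expansions: List[str],
                           image_search: Callable[[str], List[str]]) -> List[str]:
    """One expanded query per expansion term; interlace the result lists."""
    return interlace([image_search(f"{query} {term}") for term in expansions])

# With the good-anti method (method 1 below), each stance has one fixed term:
# pro_images = retrieve_stance_images("nuclear energy", ["good"], image_search)
# con_images = retrieve_stance_images("nuclear energy", ["anti"], image_search)
```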
{
"text": "We devise three methods: (1) appending always the same stance-indicating terms to the user's query, (2) appending sentiment-indicating terms that co-occur with the query's terms, and (3) appending topic-specific stance-indicating terms obtained from a text argument search engine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
{
"text": "(1) Good-Anti Conceivably the single most basic method is to expand the user's query with one term per stance. After some manual experiments, we opted for good as a pro term and anti as a con term. Another option for the latter was bad, but for some topics this term is more associated with \"doing it poorly\" than with \"being against it.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
{
"text": "(2) Positive-Negative This method exploits the fact that stance is often reflected through expressions of sentiment, for example, as used in counterargument retrieval (Wachsmuth et al., 2018) . For each stance, we generate up to five queries by appending the top positive (for pro) or negative terms (for con) of the 8000 entries in the MPQA subjectivity lexicon (Wilson et al., 2005) as ranked by their co-occurrence with the query according to the Leipzig Corpora Collection's English corpus (120 million sentences, Goldhahn et al., 2012) .",
"cite_spans": [
{
"start": 167,
"end": 191,
"text": "(Wachsmuth et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 363,
"end": 384,
"text": "(Wilson et al., 2005)",
"ref_id": "BIBREF38"
},
{
"start": 518,
"end": 540,
"text": "Goldhahn et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
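A minimal sketch of this ranking step, assuming the MPQA lexicon has been loaded into a term-to-polarity dictionary and that co-occurrence counts of lexicon terms with the query have been precomputed over a background corpus; both inputs are hypothetical placeholders.

```python
# Sketch of the positive-negative expansion (method 2). `lexicon` maps a
# term to "positive" or "negative" (MPQA subjectivity lexicon); `cooccurrence`
# holds precomputed counts of how often a term co-occurs with the query in a
# background corpus such as the Leipzig Corpora Collection.
from typing import Dict, List

def sentiment_expansions(lexicon: Dict[str, str],
                         cooccurrence: Dict[str, int],
                         polarity: str, k: int = 5) -> List[str]:
    """Top-k lexicon terms of the given polarity, by query co-occurrence."""
    candidates = [t for t, p in lexicon.items() if p == polarity]
    candidates.sort(key=lambda t: cooccurrence.get(t, 0), reverse=True)
    return candidates[:k]

# Each returned term yields one expanded query, e.g. "nuclear energy cheap"
# (pro) or "nuclear energy harm" (con); these example terms are taken from
# the discussion in Section 6.1.
```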
{
"text": "(3) Pros-Cons This method employs argument search engines to identify terms typical for certain topic-stance-combinations. E.g., in arguments retrieved for nuclear energy, \"CO2 neutrality\" occurs more often in pro arguments than in con ones, whereas \"radiation\" occurs more often in con arguments than in pro ones. Based on work in anomaly detection (Afgani et al., 2008) , this method calculates the specificity of a term t to a stance s, \u03b4(t, s), as their contribution to the Kullback-Leibler divergence of the term distributions between the two stances:",
"cite_spans": [
{
"start": 350,
"end": 371,
"text": "(Afgani et al., 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
{
"text": "\u03b4(t, s) = P(T= t|S = s) \u2022 log P(T= t|S = s) P(T= t|S = s) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
{
"text": "where P (T = t|S = s) is the probability of observing t given the stance s. This probability is estimated by word frequencies 2 in all arguments that an argument search engine-args.me 3 in our case (Wachsmuth et al., 2017b) -retrieves for the query and s. For preprocessing, we lemmatize both arguments and the query. Furthermore, we filter out all arguments from the website debate.org, as we found that its debate structure encourages to reference opposing points in arguments to counter them, which diminishes the \u03b4 of the respective terms. The method generates up to five queries for each stance by appending the top terms as ranked by \u03b4.",
"cite_spans": [
{
"start": 198,
"end": 223,
"text": "(Wachsmuth et al., 2017b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance-Aware Query Expansion",
"sec_num": "4"
},
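The following minimal sketch computes δ(t, s) with the add-one smoothing from footnote 2. The inputs `pro_counts` and `con_counts` are hypothetical placeholders for term frequencies over the lemmatized arguments retrieved from args.me for each stance.

```python
# Sketch of the term-stance specificity delta(t, s) from method 3.
# `pro_counts` / `con_counts` are hypothetical Counters of term frequencies
# over the lemmatized arguments retrieved for the query and each stance.
from collections import Counter
from math import log
from typing import List, Tuple

def specificity(term: str, same: Counter, other: Counter) -> float:
    """delta(t, s) = P(t|s) * log(P(t|s) / P(t|s_bar)), add-one smoothed."""
    vocab = set(same) | set(other)
    p_same = (same[term] + 1) / (sum(same.values()) + len(vocab))
    p_other = (other[term] + 1) / (sum(other.values()) + len(vocab))
    return p_same * log(p_same / p_other)

def top_stance_terms(pro_counts: Counter, con_counts: Counter,
                     k: int = 5) -> Tuple[List[str], List[str]]:
    """Top-k expansion terms per stance, ranked by their specificity."""
    vocab = set(pro_counts) | set(con_counts)
    pro = sorted(vocab, key=lambda t: specificity(t, pro_counts, con_counts),
                 reverse=True)[:k]
    con = sorted(vocab, key=lambda t: specificity(t, con_counts, pro_counts),
                 reverse=True)[:k]
    return pro, con
```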
{
"text": "In retrieval tasks, human relevance judgments of retrieval results for a fixed set of topics allows for evaluating the effectiveness of competing retrieval models. Known as the Cranfield paradigm or TREC-style evaluation (Voorhees, 2001) , it is also employed in textual argument retrieval within the Touch\u00e9 shared tasks (Bondarenko et al., 2020) . All our expansion methods retrieve results for the same set of queries, which are then pooled and manually judged with respect to their relevance to a given query's information need.",
"cite_spans": [
{
"start": 221,
"end": 237,
"text": "(Voorhees, 2001)",
"ref_id": "BIBREF30"
},
{
"start": 321,
"end": 346,
"text": "(Bondarenko et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Crowdsourcing Relevance Judgements",
"sec_num": "5"
},
{
"text": "To evaluate the methods presented in Section 4, we employ a sample of 20 controversial questions from the Touch\u00e9 2020 Task 1 test set (Bondarenko et al., 2020) , from which we derive one query each. 4 We calculate the number of relevant images in the methods' ten top-ranked images (precision@10) for both rankings of pro and con images (thus actually for 20 images), which is straightforward to interpret for multiple relevance levels. To ensure state-of-the-art keyword-based retrieval and a large image index, all methods retrieve images using the Google image search. Figure 4 shows the annotation interface for gathering relevance judgments. As described in Section 3, we distinguish relevance on three levels: topic, argumentativeness, and stance. The first three options, the image being able to support the pro, contra, or both stances, indicate the image stance(s) and that it is argumentative and on topic. The fourth option, the image not being able to support a stance, indicates that the image is on-topic 2 To avoid zero probabilities, we employ standard add-one smoothing (Jurafsky and Martin, 2009 ).",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "(Bondarenko et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 199,
"end": 200,
"text": "4",
"ref_id": null
},
{
"start": 1087,
"end": 1113,
"text": "(Jurafsky and Martin, 2009",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 572,
"end": 580,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Crowdsourcing Relevance Judgements",
"sec_num": "5"
},
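A minimal sketch of the precision@10 computation under the three relevance levels; the `labels` dictionary and its label vocabulary are assumptions modeled on the annotation options described above.

```python
# Sketch of precision@10 at the three relevance levels. `labels` maps an
# image URL to one of "off-topic", "neither", "pro", "con", "both" (a
# hypothetical encoding of the annotation options described above).
from typing import Dict, List

def precision_at_10(pro: List[str], con: List[str],
                    labels: Dict[str, str], level: str) -> float:
    """Fraction of the 10+10 top-ranked images relevant at the given level;
    level is "topic", "argumentative", or "stance"."""
    def relevant(url: str, wanted: str) -> bool:
        lab = labels.get(url, "off-topic")
        if level == "topic":
            return lab != "off-topic"
        if level == "argumentative":
            return lab in ("pro", "con", "both")
        return lab in (wanted, "both")  # stance level: must match the ranking
    hits = sum(relevant(u, "pro") for u in pro[:10])
    hits += sum(relevant(u, "con") for u in con[:10])
    return hits / 20
```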
{
"text": "3 API: https://www.args.me/api-en.html 4 From the original 49 topics, 13 were omitted for which one or more methods found no expansion terms, sampling 20 at random from the remainder to meet our labeling budget. Questions, queries, retrieved images, and annotations are available at https://doi.org/10.5281/zenodo.5202934. only. The final option indicates that the image has no stance and is neither argumentative nor on-topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowdsourcing Relevance Judgements",
"sec_num": "5"
},
{
"text": "Using the annotation interface of Figure 4 , we collected 2988 annotations on 993 images 5 and 20 topics from 12 layperson annotators: three annotators per image and topic. The images' order was randomized for each annotator within a topic to avoid order biases. The annotator agreement is fair to moderate (Fleiss' \u03ba of 0.39; Fleiss, 1981; Figure 7 shows \u03ba per topic). We compute a single label for each image and topic pair as per majority vote (two out of three), treating an annotation both as a vote for both pro and con, and assigning the groundtruth label both if there are at least two votes for both pro and con. 6 To ensure consistency in the face of difficulties reported by the annotators, we reviewed the ground-truth and changed the label for 223 images. The vast majority of changes (85%) were due to vagueness in our instructions: Annotators labeled images that can introduce the topic or question but have no argumentative value otherwise (see Figure 5 for examples) often as being able to support both stances though they should have labeled them as non-argumentative (neither). Figure 6 : Precision@10 achieved by the three query expansion methods at three relevance levels, averaged across 20 topics and each topic's individual pro and con rankings. The dashed lines indicate the expected precision@10 of a random image stance classifier dividing the retrieved images into the pro and con rankings. The bars are overlaid, not stacked.",
"cite_spans": [
{
"start": 307,
"end": 326,
"text": "(Fleiss' \u03ba of 0.39;",
"ref_id": null
},
{
"start": 327,
"end": 340,
"text": "Fleiss, 1981;",
"ref_id": "BIBREF16"
},
{
"start": 341,
"end": 349,
"text": "Figure 7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 961,
"end": 969,
"text": "Figure 5",
"ref_id": "FIGREF2"
},
{
"start": 1097,
"end": 1105,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Crowdsourcing Relevance Judgements",
"sec_num": "5"
},
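The following is a minimal sketch of this aggregation; the label vocabulary and the decision order are assumptions reconstructed from the description above and footnote 6.

```python
# Sketch of the majority-vote aggregation (an assumption-labeled
# reconstruction): each image receives three annotations, "both" counts as
# a vote for pro and for con, and every stance vote also counts as a vote
# for being argumentative and on-topic.
from typing import List

def aggregate(votes: List[str]) -> str:
    """votes come from {"off-topic", "neither", "pro", "con", "both"}."""
    on_topic = sum(v != "off-topic" for v in votes)
    pro = sum(v in ("pro", "both") for v in votes)
    con = sum(v in ("con", "both") for v in votes)
    if pro >= 2 and con >= 2:
        return "both"
    if pro >= 2:
        return "pro"
    if con >= 2:
        return "con"
    if on_topic >= 2:
        return "neither"  # on-topic, but no majority for any stance
    return "off-topic"

# Footnote 6's example: two annotators voted on-topic, only one for a stance.
assert aggregate(["off-topic", "neither", "pro"]) == "neither"
```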
{
"text": "This section reports on a comparative evaluation of the three query expansions methods by their retrieval effectiveness, followed by a topic-wise error analysis. Last, we also carried out an investigation into the counterintuitive case of ambiguous images that can support both stances. Figure 6 shows the precision@10 for the three query expansion methods introduced in Section 4 with respect to the three levels of relevance assessed. Nearly all retrieved images can be expected to be relevant to the topic (92% to 95%). This being testimony to the effectiveness of Google's image search, it also indicates that the query expansion methods do not impair the keyword-based image search.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 295,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The basic good-anti heuristic performs best, achieving an argumentative precision@10 of 0.64 and outperforming the pros-cons method by 0.12. An inspection of the pros-cons method's expanded terms reveals that they lack the stance-specificity of the other methods' terms. For example, the expansions money and person (pros-cons) are less specific than the expansions cheap and harm (positivenegative), as only the latter terms intrinsically convey a preference or stance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Retrieval Effectiveness",
"sec_num": "6.1"
},
{
"text": "All methods lose 16 to 18 percentage points when being judged for stance-relevance, showing that 25% (good-anti) to 33% (pros-cons) of the retrieved images that are argumentative do support the opposite stance as intended. While for two of the three methods their stance precision@10 is better than a random stance assignment (dashed lines in Figure 6 ), it is far from perfect. Since Google image search is not public, it is challenging to pinpoint the source of the stance errors. A possible explanation may be that expansion terms found on an image's web page do not refer to being pro or con a given topic, but are used in a different context or even to convey the opposite stance.",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overall Retrieval Effectiveness",
"sec_num": "6.1"
},
{
"text": "We analyzed the retrieval effectiveness on a pertopic basis to learn which are the most challenging ones for retrieving argumentative images. It turns out that some topics are less well-suited to a keyword-based image retrieval (or to our query expansion methods). Moreover, some arguments are not suited to being illustrated or are for other reasons not found through image search. Figure 7 shows the retrieval effectiveness as precision@10 and the image stance distribution per topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 391,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Topic-wise Error Analysis",
"sec_num": "6.2"
},
{
"text": "As the average precision@10 scores show, topic precision is high overall, except for the query standardized tests education (for issue: \"Do standardized tests improve education?\"). On closer inspection, we found that many off-topic images were either on education or standardized tests, possibly hinting at a lack of images that combine both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic-wise Error Analysis",
"sec_num": "6.2"
},
{
"text": "Average argumentativeness precision@10, however, varies a great deal between 0.18 and 0.87. As an inspection of topics with low precision reveals, many of the retrieved images are wellsuited to introducing the topic, but not as argument support. For example, most images for body cameras police (\"Should body cameras be mandatory for police?\") show a police officer wearing a body camera. Similarly, many of the images for performance-enhancing drugs in sports (\"Should performance-enhancing drugs be accepted in sports?\") show sports equipment and syringes. For queries related to commercial products, like e-cigarettes (\"Is vaping with ecigarettes safe?\"), bottled water (\"Should bottled water be banned?\"), or school uniforms (\"Should students have to wear school uniforms?\"), many of the images are product photos from shopping or review pages and merely display the prod- uct. We assume that in these cases the expansion methods cannot sufficiently counter the search engine optimization (SEO) done by companies working in that domain, suggesting a pre-filtering by web document and/or image genre. Similarly, average stance precision@10 varies a lot between topics, ranging from 0.17 to 0.72. This variation is partly due to the large differences in the stance of the retrieved images, as illustrated in the right plot of Figure 7 . For some topics, argumentative precision and stance precision differ by as much as 0.30 (gun control and marijuana recreational use). Upon inspection, we noticed that the errors in the stance assignment dominantly occur in a single direction per topic. Though our setup intends to retrieve an equal amount of pros and cons per topic, the right plot of Figure 7 shows that the result set is often skewed in one direction. In the extreme, 95% of the retrieved argumentative images for animal testing are con only-which is plausible, as illustrating the benefits of animal testing is much more complex than showing animals being treated poorly. On the other hand, an unexpectedly high fraction of images are ambiguous and can be used to support both stances. For example, this is the case for nearly half of the retrieved argumentative images for euthanasia. Figure 8c shows one example for the topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 1328,
"end": 1336,
"text": "Figure 7",
"ref_id": "FIGREF3"
},
{
"start": 1691,
"end": 1699,
"text": "Figure 7",
"ref_id": "FIGREF3"
},
{
"start": 2195,
"end": 2204,
"text": "Figure 8c",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Topic-wise Error Analysis",
"sec_num": "6.2"
},
{
"text": "When 46% of the retrieved images can provide support for both stances, as for euthanasia, should an image search engine for arguments separate pro and con images in its search results as in Figure 1 ? To investigate this question, we inspected all 111 images that were judged ambiguous. Figure 8 exemplifies the image categories we identified. We found that most of the 111 images provide contextual information (78 images, 70%) in the form of geographic comparisons as in Figure 8a , statistics (especially polls), or forecasts. Similar categories include sources on the topic (10 images, 9%) and definitions (11 images, 10%). Images from these three categories (89% total) can strongly support an argumentation, even though different pieces of information they entail might lend support to different stances. Due to the benefits of such images, it thus makes sense to display them separately in a search interface, either on-demand or up front. As outliers, two other images contrast key points for pro and con, supporting mostly deliberation tasks. The remaining images were ambiguous in the way hypothesized before in that their stance changes depending on how one interprets them. Images of this kind will be a challenge to image retrieval for arguments, since their interpretation requires common sense and/or domain knowledge. ",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 1",
"ref_id": null
},
{
"start": 287,
"end": 295,
"text": "Figure 8",
"ref_id": "FIGREF5"
},
{
"start": 473,
"end": 482,
"text": "Figure 8a",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "On Images with Ambiguous Stance",
"sec_num": "6.3"
},
{
"text": "This paper introduces the task of image retrieval for arguments, proposes with stance-aware query expansion a family of methods to tackle the task that exploits existing keyword-based image search technology, and carries out a first empirical analysis for the task using human annotations on 993 images and 20 topics. In the experiments, the basic goodanti heuristic, which expands a user query with the same two terms (good for pro images, anti for con ones), outperforms more sophisticated query expansion approaches that employ terms specific to the query's topic. An error analysis unveils a high topic-dependence of the retrieval effectiveness. Finally, the paper investigates ambiguous images labeled as supporting both the pro and con stances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In cases where many neutral images are found, we suggest displaying them separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Relying on Google's image search engine ensures a state-of-the-art retrieval model at the price of exact replicability. Since commercial image search engines are subject to frequent changes it must be investigated how reproducible our results are. Developing an open image retrieval system for arguments would enable laboratory evaluation. It would also allow for creating image representations tailored to argumentation, but requires creating and labeling an extensive image collection that covers a manifold of controversial topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "There is a lot of room to improve precision. Filtering images by analyzing their web pages may avoid ones that merely show a product or otherwise fail to make a point. Further analyses of the web pages with argument mining technologies could provide for a better stance classification of the images. Also images search engine feature to retrieve similar images may be helpful in this regard. Moreover, classifiers to identify persuasion techniques or emotions in images may prove beneficial to identify images for arguments. Assessing the quality of argumentative images could improve the imagined search engine's evaluation and utility to users. To foster research, we run a corresponding shared task to be held as part of the Touch\u00e9 2022 lab. 7 This paper seeks to extend argument mining in general and argument retrieval in particular to considering images. Other argument mining tasks (e.g., relation extraction in articles) could similarly be extended to account for the widespread use of images in argumentative texts or speechmaking.",
"cite_spans": [
{
"start": 745,
"end": 746,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Three images were retrieved for two topics. 6 E.g., if votes are off-topic, neither, pro, the result would be neither as two voted for on-topic but only one for a stance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://webis.de/events?q=touche#touche-2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Images, emotions, and international politics: the death of Alan Kurdi",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Adler-Nissen",
"suffix": ""
},
{
"first": "Katrine",
"middle": [
"Emilie"
],
"last": "Andersen",
"suffix": ""
},
{
"first": "Lene",
"middle": [],
"last": "Hansen",
"suffix": ""
}
],
"year": 2020,
"venue": "Review of International Studies",
"volume": "46",
"issue": "1",
"pages": "75--95",
"other_ids": {
"DOI": [
"10.1017/S0260210519000317"
]
},
"num": null,
"urls": [],
"raw_text": "Rebecca Adler-Nissen, Katrine Emilie Andersen, and Lene Hansen. 2020. Images, emotions, and interna- tional politics: the death of Alan Kurdi. Review of International Studies, 46(1):75-95.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Anomaly detection using the kullback-leibler divergence metric",
"authors": [
{
"first": "M",
"middle": [],
"last": "Afgani",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sinanovic",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Haas",
"suffix": ""
}
],
"year": 2008,
"venue": "1st International Symposium on Applied Sciences on Biomedical and Communication Technologies (ISABEL 2008)",
"volume": "",
"issue": "",
"pages": "2325--5331",
"other_ids": {
"DOI": [
"10.1109/ISABEL.2008.4712573"
]
},
"num": null,
"urls": [],
"raw_text": "M. Afgani, S. Sinanovic, and H. Haas. 2008. Anomaly detection using the kullback-leibler divergence met- ric. In 1st International Symposium on Applied Sci- ences on Biomedical and Communication Technolo- gies (ISABEL 2008), pages 1-5. ISSN: 2325-5331.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Anatoly",
"middle": [],
"last": "Polnarov",
"suffix": ""
},
{
"first": "Tamar",
"middle": [],
"last": "Lavee",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Hershcovich",
"suffix": ""
},
{
"first": "Ran",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Gutfreund",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2014,
"venue": "1st Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "64--68",
"other_ids": {
"DOI": [
"10.3115/v1/w14-2109"
]
},
"num": null,
"urls": [],
"raw_text": "Ehud Aharoni, Anatoly Polnarov, Tamar Lavee, Daniel Hershcovich, Ran Levy, Ruty Rinott, Dan Gutfre- und, and Noam Slonim. 2014. A benchmark dataset for automatic detection of claims and evidence in the context of controversial topics. In 1st Workshop on Argument Mining (ArgMining 2014), pages 64-68. ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Content-based representation and retrieval of visual media: A state-of-the-art review",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Aigrain",
"suffix": ""
},
{
"first": "Hongjiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dragutin",
"middle": [],
"last": "Petkovic",
"suffix": ""
}
],
"year": 1996,
"venue": "Multimedia Tools and Applications",
"volume": "3",
"issue": "3",
"pages": "179--202",
"other_ids": {
"DOI": [
"10.1007/BF00393937"
]
},
"num": null,
"urls": [],
"raw_text": "Philippe Aigrain, Hongjiang Zhang, and Dragutin Petkovic. 1996. Content-based representation and retrieval of visual media: A state-of-the-art review. Multimedia Tools and Applications, 3(3):179-202.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Computational Analysis of Argumentation Strategies. Dissertation",
"authors": [
{
"first": "Khalid",
"middle": [],
"last": "Al-Khatib",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khalid Al-Khatib. 2019. Computational Analysis of Argumentation Strategies. Dissertation, Bauhaus- Universit\u00e4t Weimar.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Image of drowned syrian, aylan kurdi, 3, brings migrant crisis into focus. The New York Times",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "Karam",
"middle": [],
"last": "Shoumali",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Barnard and Karam Shoumali. 2015. Image of drowned syrian, aylan kurdi, 3, brings migrant crisis into focus. The New York Times.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fixing images: Civil rights photography and the struggle over representation",
"authors": [
{
"first": "Martin",
"middle": [
"A."
],
"last": "Berger",
"suffix": ""
}
],
"year": 2010,
"venue": "RIHA Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin A. Berger. 2010. Fixing images: Civil rights photography and the struggle over representation. RIHA Journal, 10.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Overview of touch\u00e9 2020: Argument retrieval",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Bondarenko",
"suffix": ""
},
{
"first": "Maik",
"middle": [],
"last": "Fr\u00f6be",
"suffix": ""
},
{
"first": "Meriem",
"middle": [],
"last": "Beloucif",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Gienapp",
"suffix": ""
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
}
],
"year": 2020,
"venue": "Working Notes Papers of the CLEF 2020 Evaluation Labs",
"volume": "2696",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Bondarenko, Maik Fr\u00f6be, Meriem Be- loucif, Lukas Gienapp, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. 2020. Overview of touch\u00e9 2020: Argument retrieval. In Working Notes Papers of the CLEF 2020 Evalu- ation Labs, volume 2696 of CEUR Workshop Pro- ceedings.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of touch\u00e9 2021: Argument retrieval",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Bondarenko",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Gienapp",
"suffix": ""
},
{
"first": "Maik",
"middle": [],
"last": "Fr\u00f6be",
"suffix": ""
},
{
"first": "Meriem",
"middle": [],
"last": "Beloucif",
"suffix": ""
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Panchenko",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
}
],
"year": 2021,
"venue": "Advances in Information Retrieval. 43rd European Conference on IR Research (ECIR 2021)",
"volume": "12036",
"issue": "",
"pages": "574--582",
"other_ids": {
"DOI": [
"10.1007/978-3-030-72240-1_67"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Bondarenko, Lukas Gienapp, Maik Fr\u00f6be, Meriem Beloucif, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Ha- gen. 2021. Overview of touch\u00e9 2021: Argument retrieval. In Advances in Information Retrieval. 43rd European Conference on IR Research (ECIR 2021), volume 12036 of Lecture Notes in Computer Science, pages 574-582, Berlin Heidelberg New York. Springer.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Query-bypictorial-example",
"authors": [
{
"first": "Ning-San",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "King-Sun",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE Transactions on Software Engineering",
"volume": "6",
"issue": "6",
"pages": "519--524",
"other_ids": {
"DOI": [
"10.1109/TSE.1980.230801"
]
},
"num": null,
"urls": [],
"raw_text": "Ning-San Chang and King-sun Fu. 1980. Query-by- pictorial-example. IEEE Transactions on Software Engineering, 6(6):519-524.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SemEval-2021 task 6: Detection of persuasion techniques in texts and images",
"authors": [],
"year": null,
"venue": "15th International Workshop on Semantic Evaluation (SemEval'2021)",
"volume": "",
"issue": "",
"pages": "70--98",
"other_ids": {
"DOI": [
"10.18653/v1/2021.semeval-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "SemEval-2021 task 6: Detection of persuasion tech- niques in texts and images. In 15th International Workshop on Semantic Evaluation (SemEval'2021), pages 70-98, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Journey of an image: from a beach in Bodrum to twenty million screens across the world",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "D'Orazio",
"suffix": ""
}
],
"year": 2015,
"venue": "The Iconic Image on Social Media: A Rapid Research Response to the Death of Aylan Kurdi. Visual Social Media Lab",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco D'Orazio. 2015. Journey of an image: from a beach in Bodrum to twenty million screens across the world. In The Iconic Image on Social Media: A Rapid Research Response to the Death of Aylan Kurdi. Visual Social Media Lab.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On images as evidence and arguments",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Dove",
"suffix": ""
}
],
"year": 2012,
"venue": "Topical Themes in Argumentation Theory: Twenty Exploratory Studies, Argumentation Library",
"volume": "",
"issue": "",
"pages": "223--238",
"other_ids": {
"DOI": [
"10.1007/978-94-007-4041-9_15"
]
},
"num": null,
"urls": [],
"raw_text": "Ian J. Dove. 2012. On images as evidence and argu- ments. In Frans H. van Eemeren and Bart Garssen, editors, Topical Themes in Argumentation Theory: Twenty Exploratory Studies, Argumentation Library, pages 223-238. Springer Netherlands, Dordrecht.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Images, emotions, politics",
"authors": [
{
"first": "Finis",
"middle": [],
"last": "Dunaway",
"suffix": ""
}
],
"year": 2018,
"venue": "Modern American History",
"volume": "1",
"issue": "3",
"pages": "369--376",
"other_ids": {
"DOI": [
"10.1017/mah.2018.17"
]
},
"num": null,
"urls": [],
"raw_text": "Finis Dunaway. 2018. Images, emotions, politics. Modern American History, 1(3):369-376.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Images, politicians, and social media: Patterns and effects of politicians' image-based political communication strategies on social media",
"authors": [
{
"first": "X\u00e9nia",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "M\u00e1rton",
"middle": [],
"last": "Bene",
"suffix": ""
}
],
"year": 2020,
"venue": "The International Journal of Press/Politics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1177/1940161220959553"
]
},
"num": null,
"urls": [],
"raw_text": "X\u00e9nia Farkas and M\u00e1rton Bene. 2020. Images, politi- cians, and social media: Patterns and effects of politicians' image-based political communication strategies on social media. The International Jour- nal of Press/Politics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The measurement of interrater agreement",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1981,
"venue": "Statistical methods for rates and proportions",
"volume": "",
"issue": "",
"pages": "212--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. L. Fleiss. 1981. The measurement of interrater agree- ment. In Statistical methods for rates and propor- tions, 2 edition, pages 212-236. John Wiley & Sons, New York.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The power of images: A discourse analysis of the cognitive viewpoint",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Frohmann",
"suffix": ""
}
],
"year": 1992,
"venue": "Journal of Documentation",
"volume": "48",
"issue": "4",
"pages": "365--386",
"other_ids": {
"DOI": [
"10.1108/eb026904"
]
},
"num": null,
"urls": [],
"raw_text": "Bernd Frohmann. 1992. The power of images: A dis- course analysis of the cognitive viewpoint. Journal of Documentation, 48(4):365-386.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 languages",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Goldhahn",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Eckart",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Quasthoff",
"suffix": ""
}
],
"year": 2012,
"venue": "Eighth International Conference on Language Resources and Evaluation (LREC 2012)",
"volume": "",
"issue": "",
"pages": "759--765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 languages. In Eighth International Conference on Language Resources and Evaluation (LREC 2012), pages 759-765. European Language Resources As- sociation (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Types of visual arguments. Argumentum",
"authors": [
{
"first": "Ioana",
"middle": [],
"last": "Grancea",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of the Seminar of Discursive Logic, Argumentation Theory and Rhetoric",
"volume": "15",
"issue": "2",
"pages": "16--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioana Grancea. 2017. Types of visual arguments. Argu- mentum. Journal of the Seminar of Discursive Logic, Argumentation Theory and Rhetoric, 15(2):16-34.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Meme-ing electoral participation",
"authors": [
{
"first": "Benita",
"middle": [],
"last": "Heiskanen",
"suffix": ""
}
],
"year": 2017,
"venue": "European journal of American studies",
"volume": "12",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.4000/ejas.12158"
]
},
"num": null,
"urls": [],
"raw_text": "Benita Heiskanen. 2017. Meme-ing electoral participa- tion. European journal of American studies, 12(2).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition, 2 edition. Prentice Hall series in artificial intelligence",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Jurafsky and James H. Martin. 2009. Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition, 2 edition. Prentice Hall series in artificial intelligence. Prentice Hall.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Naeem Iqbal Ratyal, Bushra Zafar, Saadat Hanif Dar, Muhammad Sajid, and Tehmina Khalil. 2019. Content-based image retrieval and feature extraction: A comprehensive review",
"authors": [
{
"first": "Afshan",
"middle": [],
"last": "Latif",
"suffix": ""
},
{
"first": "Aqsa",
"middle": [],
"last": "Rasheed",
"suffix": ""
},
{
"first": "Umer",
"middle": [],
"last": "Sajid",
"suffix": ""
},
{
"first": "Jameel",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Nouman",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "Naeem",
"middle": [
"Iqbal"
],
"last": "Ratyal",
"suffix": ""
},
{
"first": "Bushra",
"middle": [],
"last": "Zafar",
"suffix": ""
},
{
"first": "Saadat",
"middle": [
"Hanif"
],
"last": "Dar",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Sajid",
"suffix": ""
},
{
"first": "Tehmina",
"middle": [],
"last": "Khalil",
"suffix": ""
}
],
"year": 2019,
"venue": "Mathematical Problems in Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1155/2019/9658350"
]
},
"num": null,
"urls": [],
"raw_text": "Afshan Latif, Aqsa Rasheed, Umer Sajid, Jameel Ahmed, Nouman Ali, Naeem Iqbal Ratyal, Bushra Zafar, Saadat Hanif Dar, Muhammad Sajid, and Tehmina Khalil. 2019. Content-based image re- trieval and feature extraction: A comprehensive review. Mathematical Problems in Engineering, 2019:21.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "From argument diagrams to argumentation mining in texts: A survey",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Peldszus",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Cognitive Informatics and Natural Intelligence",
"volume": "7",
"issue": "1",
"pages": "1--31",
"other_ids": {
"DOI": [
"10.4018/jcini.2013010101"
]
},
"num": null,
"urls": [],
"raw_text": "Andreas Peldszus and Manfred Stede. 2013. From ar- gument diagrams to argumentation mining in texts: A survey. International Journal of Cognitive Infor- matics and Natural Intelligence, 7(1):1-31.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Argument search: Assessing argument relevance",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Gienapp",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Euchner",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Heilenk\u00f6tter",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Weidmann",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Hagen",
"suffix": ""
}
],
"year": 2019,
"venue": "42nd International ACM Conference on Research and Development in Information Retrieval (SIGIR 2019)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3331184.3331327"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Potthast, Lukas Gienapp, Florian Euchner, Nick Heilenk\u00f6tter, Nico Weidmann, Henning Wachsmuth, Benno Stein, and Matthias Hagen. 2019. Argument search: Assessing argument relevance. In 42nd In- ternational ACM Conference on Research and De- velopment in Information Retrieval (SIGIR 2019). ACM.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Visual argumentation: A further reappraisal",
"authors": [
{
"first": "Georges",
"middle": [],
"last": "Roque",
"suffix": ""
}
],
"year": 2012,
"venue": "Springer Netherlands, Dordrecht. Series Title: Argumentation Library",
"volume": "22",
"issue": "",
"pages": "273--288",
"other_ids": {
"DOI": [
"10.1007/978-94-007-4041-9_18"
]
},
"num": null,
"urls": [],
"raw_text": "Georges Roque. 2012. Visual argumentation: A fur- ther reappraisal. In Frans H. van Eemeren and Bart Garssen, editors, Topical Themes in Argumentation Theory, volume 22, pages 273-288. Springer Nether- lands, Dordrecht. Series Title: Argumentation Li- brary.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Image indexing and retrieval techniques: past, present, and next",
"authors": [
{
"first": "Jamshid",
"middle": [],
"last": "Shanbehzadeh",
"suffix": ""
},
{
"first": "Amir-Masoud",
"middle": [],
"last": "Eftekhari-Moghadam",
"suffix": ""
},
{
"first": "Fariborz",
"middle": [],
"last": "Mahmoudi",
"suffix": ""
}
],
"year": 2000,
"venue": "Storage and Retrieval for Media Databases",
"volume": "3972",
"issue": "",
"pages": "461--470",
"other_ids": {
"DOI": [
"10.1117/12.373578"
]
},
"num": null,
"urls": [],
"raw_text": "Jamshid Shanbehzadeh, Amir-Masoud Eftekhari- Moghadam, and Fariborz Mahmoudi. 2000. Image indexing and retrieval techniques: past, present, and next. In Storage and Retrieval for Media Databases 2000, volume 3972 of SPIE Proceedings, pages 461-470. SPIE.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Color emotions for multi-colored images",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Solli",
"suffix": ""
},
{
"first": "Reiner",
"middle": [],
"last": "Lenz",
"suffix": ""
}
],
"year": 2011,
"venue": "Color Research & Application",
"volume": "36",
"issue": "3",
"pages": "210--221",
"other_ids": {
"DOI": [
"10.1002/col.20604"
]
},
"num": null,
"urls": [],
"raw_text": "Martin Solli and Reiner Lenz. 2011. Color emotions for multi-colored images. Color Research & Appli- cation, 36(3):210-221.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "ArgumenText: Searching for arguments in heterogeneous sources",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Stahlhut",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Tauchmann",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2018)",
"volume": "",
"issue": "",
"pages": "21--25",
"other_ids": {
"DOI": [
"10.18653/v1/n18-5005"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Stab, Johannes Daxenberger, Chris Stahlhut, Tristan Miller, Benjamin Schiller, Christopher Tauchmann, Steffen Eger, and Iryna Gurevych. 2018. ArgumenText: Searching for arguments in heteroge- neous sources. In Conference of the North American Chapter of the Association for Computational Lin- guistics (NAACL-HLT 2018), pages 21-25. ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Why nancy pelosi ripping up some papers has set the internet on fire",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Stewart",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Stewart. 2020. Why nancy pelosi ripping up some papers has set the internet on fire. Vox.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The philosophy of information retrieval evaluation",
"authors": [
{
"first": "Ellen",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2001,
"venue": "Second Workshop of the Cross-Language Evaluation Forum",
"volume": "2406",
"issue": "",
"pages": "355--370",
"other_ids": {
"DOI": [
"10.1007/3-540-45691-0_34"
]
},
"num": null,
"urls": [],
"raw_text": "Ellen M. Voorhees. 2001. The philosophy of infor- mation retrieval evaluation. In Second Workshop of the Cross-Language Evaluation Forum, CLEF 2001, volume 2406 of Lecture Notes in Computer Science, pages 355-370. Springer.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Computational argumentation quality assessment in natural language",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bilu",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Tim",
"middle": [
"Alberdingk"
],
"last": "Thijm",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2017,
"venue": "15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "176--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Al- berdingk Thijm, Graeme Hirst, and Benno Stein. 2017a. Computational argumentation quality assess- ment in natural language. In 15th Conference of the European Chapter of the Association for Computa- tional Linguistics (EACL 2017), pages 176-187.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Building an argument search engine for the web",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Khalid",
"middle": [
"Al"
],
"last": "Khatib",
"suffix": ""
},
{
"first": "Yamen",
"middle": [],
"last": "Ajjour",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Puschmann",
"suffix": ""
},
{
"first": "Jiani",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Dorsch",
"suffix": ""
},
{
"first": "Viorel",
"middle": [],
"last": "Morari",
"suffix": ""
},
{
"first": "Janek",
"middle": [],
"last": "Bevendorff",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2017,
"venue": "4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "49--59",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5106"
]
},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Martin Potthast, Khalid Al Khatib, Yamen Ajjour, Jana Puschmann, Jiani Qu, Jonas Dorsch, Viorel Morari, Janek Bevendorff, and Benno Stein. 2017b. Building an argument search engine for the web. In 4th Workshop on Argument Mining (ArgMining 2017), pages 49-59, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Retrieval of the best counterargument without prior topic knowledge",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Shahbaz",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2018,
"venue": "56th Annual Meeting of the Association for Computational Linguistics (ACL 2018)",
"volume": "",
"issue": "",
"pages": "241--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument with- out prior topic knowledge. In 56th Annual Meet- ing of the Association for Computational Linguistics (ACL 2018), pages 241-251. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A corpus for research on deliberation and debate",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"E",
"Fox"
],
"last": "Tree",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2012,
"venue": "8th International Conference on Language Resources and Evaluation (LREC 2012)",
"volume": "",
"issue": "",
"pages": "812--817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A Walker, Pranav Anand, Jean E Fox Tree, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In 8th Interna- tional Conference on Language Resources and Eval- uation (LREC 2012), pages 812-817, Istanbul.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A survey on emotional semantic image retrieval",
"authors": [
{
"first": "Weining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qianhua",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Image Processing",
"volume": "",
"issue": "",
"pages": "117--120",
"other_ids": {
"DOI": [
"10.1109/ICIP.2008.4711705"
]
},
"num": null,
"urls": [],
"raw_text": "Weining Wang and Qianhua He. 2008. A survey on emotional semantic image retrieval. In International Conference on Image Processing (ICIP 2008), pages 117-120. IEEE.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "David Cameron says UK will take thousands more Syrian refugees. The Guardian",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Watt",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Watt. 2015. David Cameron says UK will take thousands more Syrian refugees. The Guardian.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Recognizing contextual polarity in phraselevel sentiment analysis",
"authors": [
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005)",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase- level sentiment analysis. In Human Language Technology Conference and Conference on Em- pirical Methods in Natural Language Processing (HLT/EMNLP 2005), pages 347-354. ACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Make America meme again: the rhetoric of the altright, volume 45 of Frontiers in political communication",
"authors": [
{
"first": "Heather",
"middle": [
"Suzanne"
],
"last": "Woods",
"suffix": ""
},
{
"first": "Leslie",
"middle": [
"Ann"
],
"last": "Hahner",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heather Suzanne Woods and Leslie Ann Hahner. 2019. Make America meme again: the rhetoric of the alt- right, volume 45 of Frontiers in political communi- cation. Peter Lang, New York.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Learn more about what you see on google images",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Wu. 2020. Learn more about what you see on google images. Google Blog.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Image collector: An imagegathering system from the world-wide web employing keyword-based search engines",
"authors": [
{
"first": "Keiji",
"middle": [],
"last": "Yanai",
"suffix": ""
}
],
"year": 2001,
"venue": "International Conference on Multimedia and Expo",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICME.2001.1237772"
]
},
"num": null,
"urls": [],
"raw_text": "Keiji Yanai. 2001. Image collector: An image- gathering system from the world-wide web employ- ing keyword-based search engines. In International Conference on Multimedia and Expo, (ICME 2001). IEEE.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "On the origins of memes by means of fringe web communities",
"authors": [
{
"first": "Savvas",
"middle": [],
"last": "Zannettou",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Caulfield",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Emiliano",
"middle": [],
"last": "De Cristofaro",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Suarez-Tangil",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Internet Measurement Conference (IMC 2018)",
"volume": "",
"issue": "",
"pages": "188--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Savvas Zannettou, Tristan Caulfield, Jeremy Black- burn, Emiliano De Cristofaro, Michael Sirivianos, Gianluca Stringhini, and Guillermo Suarez-Tangil. 2018. On the origins of memes by means of fringe web communities. In Proceedings of the Internet Measurement Conference (IMC 2018), pages 188- 202. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example results for the query nuclear energy that are (a) off-topic, (b) not argumentative, (c) supportive, and (d) with ambiguous stance."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Annotation interface, showing (from top to bottom) the current topic, question, and image, as well as the generic annotation options and comment box."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Examples of images annotated as being able to support both stances but corrected to neither."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Topic-wise analysis depicting Fleiss' \u03ba, precision@10 per relevance level averaged across expansion methods and stances, and the stance distribution for the retrieved images as judged by our annotators."
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Example images representing different categories of images that are able to support both stances: (a) contextual information: 70% of 111 analyzed images, (b) sources: 9%, (c) definitions: 10%, (d) key point tables: 2%, and (e) ambiguous messages: 9%."
},
"TABREF0": {
"html": null,
"text": "Search engine for argumentative images using stance-aware query expansion Search engine for argumentative images using keyword-based image search",
"content": "<table><tr><td>(a)</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>q</td><td>expansion Query</td><td>q 1 , ..., q n</td><td>Keyword-based image search</td><td>R 1 , ..., R n</td><td>Result classification and re-ranking</td><td>R R</td><td>+ \u2212</td><td>+ \u2212</td></tr><tr><td>(b)</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>q</td><td>Stance-aware query expansion</td><td>q 1 , ..., q n + + \u2212 \u2212 q 1 , ..., q m</td><td>Keyword-based image search Keyword-based</td><td>R 1 , ..., R n + + R 1 , ..., R m \u2212 \u2212</td><td>Result list interlacing Result list</td><td>R R</td><td>+ \u2212</td><td>+ \u2212</td></tr><tr><td/><td/><td/><td>image search</td><td/><td>interlacing</td><td/><td/><td/></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}