{"query": "How did they obtain the dataset?", "gt": "public resources where suspicious Twitter accounts were annotated, list with another 32 Twitter accounts from BIBREF19 that are considered trustworthy", "turbo-16k-0613_pred": "The authors obtained the dataset of Twitter accounts by relying on two lists annotated in previous works. They collected a list of 180 non-factual accounts from public resources, where suspicious accounts were annotated with different types of fake news. They also collected a list of 32 trustworthy factual accounts from independent third parties. They removed duplicate, media-based, and link-only tweets, and collected the maximum amount of tweets allowed by the Twitter API for each account.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 20 words. Output: ", "evaluation": "f1"}
{"query": "What activation function do they use in their model?", "gt": "relu, selu, tanh", "turbo-16k-0613_pred": "In their model, the authors use three different activation functions: ReLU (Rectified Linear Unit), SELU (Scaled Exponential Linear Unit), and tanh (Hyperbolic Tangent).", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "What baselines do they compare to?", "gt": "Top-$k$ replies, likes, or re-tweets, FacTweet (tweet-level), LR + All Features (chunk-level), LR + All Features (tweet-level), Tweet2vec, LR + Bag-of-words", "turbo-16k-0613_pred": "The document compares the proposed approach (FacTweet) to several baselines, including LR + Bag-of-words, Tweet2vec, LR + All Features (tweet-level), LR + All Features (chunk-level), and Top-k replies, likes, or re-tweets.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 21 words. Output: ", "evaluation": "f1"}
{"query": "How are chunks defined?", "gt": "Chunks is group of tweets from single account that  is consecutive in time - idea is that this group can show secret intention of malicious accounts.", "turbo-16k-0613_pred": "Chunks are defined as sorted and grouped sequences of tweets from a Twitter account. These chunks are created by sorting the tweets by their posting date in ascending order, and then splitting them into N chunks. Each chunk consists of a sorted sequence of tweets labeled by the corresponding account's label. These chunks are used to detect suspicious accounts by analyzing their unique patterns in posting tweet sequences. The transition from one set of tweets to the next has a hidden signature influenced by the account's intention to mislead. By reading the tweets in chunks, the detection of fake news accounts can be improved.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 26 words. Output: ", "evaluation": "f1"}
{"query": "What features are extracted?", "gt": "Sentiment, Morality, Style, Words embeddings", "turbo-16k-0613_pred": "The features extracted include emotions, sentiment, morality, style, and word embeddings.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 5 words. Output: ", "evaluation": "f1"}
{"query": "What features are extracted?", "gt": "15 emotion types, sentiment classes, positive and negative, care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation, count of question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions, uppercase ratio, tweet length, words embeddings", "turbo-16k-0613_pred": "The features extracted in this study to detect non-factual Twitter accounts include emotions, sentiment, morality, style, and words embeddings. Emotion features are built using word occurrences of 15 emotion types from two emotional lexicons - NRC lexicon and SentiSense. Sentiment features are extracted using different sentiment lexicons such as EffectWordNet, SenticNet, NRC, and subj_lexicon. Morality features are based on morality foundation theory, categorizing words into 10 categories such as care, harm, fairness, cheating, loyalty, betrayal, authority, subversion, sanctity, and degradation. Style features include question marks, exclamation marks, consecutive characters and letters, links, hashtags, users' mentions, uppercase ratio, and tweet length. Words embeddings are extracted using the Glove-840B-300d pretrained model, and the final representation of a tweet is obtained by averaging its word embeddings.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 39 words. Output: ", "evaluation": "f1"}
{"query": "Was the approach used in this work to detect fake news fully supervised?", "gt": "Yes", "turbo-16k-0613_pred": "Yes", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Based on this paper, what is the more predictive set of features to detect fake news?", "gt": "words embeddings, style, and morality features", "turbo-16k-0613_pred": "Words embeddings, style, and morality features.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 6 words. Output: ", "evaluation": "f1"}
{"query": "How big is the dataset used in this work?", "gt": "Total dataset size: 171 account (522967 tweets)", "turbo-16k-0613_pred": "The dataset used in this work consists of 180 non-factual Twitter accounts and 32 factual Twitter accounts.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 7 words. Output: ", "evaluation": "f1"}
{"query": "How is a \"chunk of posts\" defined in this work?", "gt": "chunk consists of a sorted sequence of tweets labeled by the label of its corresponding account", "turbo-16k-0613_pred": "In this work, a \"chunk of posts\" is defined as a group of tweets posted by a Twitter account that are sorted by date and split into segments of a fixed size. Each chunk consists of a sequence of tweets labeled by the account they belong to. The authors use these chunks to investigate patterns and transitions in tweet sequences that may indicate the presence of fake news or misleading information.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 16 words. Output: ", "evaluation": "f1"}
{"query": "What baselines were used in this work?", "gt": "LR + Bag-of-words, Tweet2vec, LR + All Features (tweet-level), LR + All Features (chunk-level), FacTweet (tweet-level), Top-$k$ replies, likes, or re-tweets", "turbo-16k-0613_pred": "The baselines used in this work were LR + Bag-of-words, Tweet2vec, LR + All Features (tweet-level), LR + All Features (chunk-level), and Top-k replies, likes, or re-tweets. The proposed approach, FacTweet, outperformed all of these baselines in detecting non-factual Twitter accounts.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 21 words. Output: ", "evaluation": "f1"}
{"query": "What models are used in the experiment?", "gt": "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "turbo-16k-0613_pred": "The models used in the experiment include linear SVM, bidirectional Long Short-Term Memory (BiLSTM), and Convolutional Neural Network (CNN).", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "What are the differences between this dataset and pre-existing ones?", "gt": "no prior work has explored the target of the offensive language", "turbo-16k-0613_pred": "This dataset is different from pre-existing ones because it includes annotation of type and target of offensive language, which is not available in other datasets.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 11 words. Output: ", "evaluation": "f1"}
{"query": "In what language are the tweets?", "gt": "English", "turbo-16k-0613_pred": "The language of the tweets in the OLID dataset is English.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What is the size of the new dataset?", "gt": "14,100 tweets", "turbo-16k-0613_pred": "14,100 tweets", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What is the size of the new dataset?", "gt": "Dataset contains total of 14100 annotations.", "turbo-16k-0613_pred": "The new dataset contains 14,100 tweets.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 6 words. Output: ", "evaluation": "f1"}
{"query": "What kinds of offensive content are explored?", "gt": "non-targeted profanity and swearing, targeted insults such as cyberbullying, offensive content related to ethnicity, gender or sexual orientation, political affiliation, religious belief, and anything belonging to hate speech", "turbo-16k-0613_pred": "The document explores different types of offensive content, including abusive language, cyberbullying, hate speech, toxic comments, and offensive language targeting individuals, groups, or others.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 28 words. Output: ", "evaluation": "f1"}
{"query": "What is the best performing model?", "gt": "CNN ", "turbo-16k-0613_pred": "The best performing model is the CNN-based sentence classifier.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How many annotators participated?", "gt": "five annotators", "turbo-16k-0613_pred": "Six annotators participated.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What is the definition of offensive language?", "gt": " Most prior work focuses on a different aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments INLINEFORM0 , hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 .", "turbo-16k-0613_pred": "Offensive language refers to any form of non-acceptable language, including profanity, insults, threats, and posts containing swear words, that is targeted towards a specific individual or entity, or towards a generalized group. It encompasses language that is veiled or direct and can contain offensive content that is explicit or implicit. Offensive language can be found in various forms of social media, and its detection and categorization have become a significant concern for government organizations, online communities, and social media platforms.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 77 words. Output: ", "evaluation": "f1"}
{"query": "What are the three layers of the annotation scheme?", "gt": "Level A: Offensive language Detection\n, Level B: Categorization of Offensive Language\n, Level C: Offensive Language Target Identification\n", "turbo-16k-0613_pred": "The three layers of the annotation scheme are offensive language detection, categorization of offensive language, and offensive language target identification.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 19 words. Output: ", "evaluation": "f1"}
{"query": "How long is the dataset for each step of hierarchy?", "gt": "Level A: 14100 Tweets\nLevel B: 4640 Tweets\nLevel C: 4089 Tweets", "turbo-16k-0613_pred": "The dataset for offensive language detection (Level A) contains 14,100 tweets.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "What is the approach of previous work?", "gt": "'shallow' naive Bayes, SVM, hierarchical stacked classifiers, bidirectional recurrent neural networks", "turbo-16k-0613_pred": "Previous work on language identification (LID) has focused on developing algorithms and models to accurately identify the language of a given text. These approaches include shallow methods like naive Bayes classifiers and support vector machines (SVMs), as well as deeper models such as bidirectional recurrent neural networks (RNNs) and ensembles of RNNs. Lexicon-based classifiers have also been proposed, which first classify text into language groups and then predict the specific language within the group. Existing NLP datasets and services, along with publicly available LID implementations, have been used for evaluation and comparison.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 11 words. Output: ", "evaluation": "f1"}
{"query": "Is the lexicon the same for all languages?", "gt": "Yes", "turbo-16k-0613_pred": "Yes, the lexicon is the same for all languages.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How do they obtain the lexicon?", "gt": "built over all the data and therefore includes the vocabulary from both the training and testing sets", "turbo-16k-0613_pred": "The lexicon used in the proposed LID algorithm is obtained by building it over all the data, including both the training and testing sets. The lexicon consists of vocabulary from the South African languages. It is designed to trade higher precision for lower recall. The proposed implementation considers itself confident if the number of words from the winning language is at least one more than the number of words considered to be from the language scored in second place.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 17 words. Output: ", "evaluation": "f1"}
{"query": "What evaluation metric is used?", "gt": "average classification accuracy, execution performance", "turbo-16k-0613_pred": "The evaluation metric used is classification accuracy.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 5 words. Output: ", "evaluation": "f1"}
{"query": "Which languages are similar to each other?", "gt": "Nguni languages (zul, xho, nbl, ssw), Sotho languages (nso, sot, tsn)", "turbo-16k-0613_pred": "The Nguni languages (zul, xho, nbl, ssw) are similar to each other and are harder to distinguish. The same is true for the Sotho languages (nso, sot, tsn).", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 11 words. Output: ", "evaluation": "f1"}
{"query": "Which datasets are employed for South African languages LID?", "gt": "DSL 2015, DSL 2017, JW300 parallel corpus , NCHLT text corpora", "turbo-16k-0613_pred": "The datasets used for South African languages LID include the DSL 2015 & DSL 2017 shared tasks datasets, the recently published JW300 parallel corpus, a multilingual corpus of academic texts produced by university students in South Africa, the WiLI-2018 benchmark dataset, the Leipzig Corpora Collection, the Universal Declaration of Human Rights, Tatoeba, and the NCHLT text corpora.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 11 words. Output: ", "evaluation": "f1"}
{"query": "Does the paper report the performance of a baseline model on South African languages LID?", "gt": "Yes", "turbo-16k-0613_pred": "Yes.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Does the algorithm improve on the state-of-the-art methods?", "gt": "Yes", "turbo-16k-0613_pred": "Yes.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Do they report results only on English data?", "gt": "Yes", "turbo-16k-0613_pred": "Yes, the document mentions that the proposed transfer learning approach using the pre-trained language model BERT was evaluated on two widely-studied datasets, one collected based on an ad-hoc approach and another collected using the Twitter API. Both datasets are in English.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Do they report results only on English data?", "gt": "The authors showed few tweets where neither and implicit hatred content exist but the model was able to discriminate", "turbo-16k-0613_pred": "The document does not explicitly mention whether the reported results are only for English data.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 19 words. Output: ", "evaluation": "f1"}
{"query": "What evidence do the authors present that the model can capture some biases in data annotation and collection?", "gt": "Waseem-dataset, Davidson-dataset,", "turbo-16k-0613_pred": "Dataset biases.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "Which publicly available datasets are used?", "gt": "Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10", "turbo-16k-0613_pred": "The study uses two publicly available datasets: one collected by Waseem and Hovey and the other by Davidson et al.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 13 words. Output: ", "evaluation": "f1"}
{"query": "What baseline is used?", "gt": "BERT based fine-tuning, Insert nonlinear layers, Insert Bi-LSTM layer, Insert CNN layer", "turbo-16k-0613_pred": "The baselines used in the study are datasets provided by Waseem and Hovey and Davidson et al.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "What new fine-tuning methods are presented?", "gt": "sampling tweets from specific keywords create systematic and substancial racial biases in datasets", "turbo-16k-0613_pred": "The new fine-tuning methods presented in this document are: BERT based fine-tuning, Insert nonlinear layers, Insert Bi-LSTM layer, and Insert CNN layer.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 13 words. Output: ", "evaluation": "f1"}
{"query": "What are the existing biases?", "gt": "Data annotation biases where tweet containing disrespectful words are annotated as hate or offensive without any presumption about the social context of tweeters", "turbo-16k-0613_pred": "The existing biases in hate speech datasets are related to biases in data collection and annotation. Biases include oversampling tweets with specific language and geographic restrictions, resulting in high rates of misclassification. Annotators also tend to label offensive tweets as hate or offensive without considering the social context of the tweeter, leading to misclassifications.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 23 words. Output: ", "evaluation": "f1"}
{"query": "What biases does their model capture?", "gt": "Waseem and Hovy BIBREF5, Davidson et al. BIBREF9, and Waseem et al. BIBREF10", "turbo-16k-0613_pred": "The model captures biases in the process of collecting or annotating hate speech datasets.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 13 words. Output: ", "evaluation": "f1"}
{"query": "What does the \"sensitivity\" quantity denote?", "gt": "the number of distinct word recognition outputs that an attacker can induce", "turbo-16k-0613_pred": "The \"sensitivity\" quantity in this document refers to the number of unique outputs assigned by a word recognition system to a set of adversarial perturbations. It measures the degree of freedom an attacker has in manipulating the classifier. A lower sensitivity indicates a more robust model, as it restricts the options available to the attacker.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "What end tasks do they evaluate on?", "gt": "Sentiment analysis and paraphrase detection under adversarial attacks", "turbo-16k-0613_pred": "The authors evaluate their word recognition models on sentiment classification and paraphrase detection tasks.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 8 words. Output: ", "evaluation": "f1"}
{"query": "What is a semicharacter architecture?", "gt": "A semi-character based RNN (ScRNN) treats the first and last characters individually, and is agnostic to the ordering of the internal characters", "turbo-16k-0613_pred": "A semi-character architecture, such as the ScRNN (Semi-Character RNN), is a model that processes a sentence of words with misspelled characters and predicts the correct words at each step. It represents each input word by concatenating a one-hot vector of the first character, a one-hot representation of the last character, and a bag of characters representation of the internal characters. This model treats the first and last characters individually and is agnostic to the ordering of the internal characters. Each word is then fed into a BiLSTM (Bidirectional Long Short-Term Memory) cell, and the training target is the correct corresponding word. The ScRNN model is trained using cross-entropy loss. The purpose of this architecture is to improve word recognition and correct misspellings in text classification tasks.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 22 words. Output: ", "evaluation": "f1"}
{"query": "Do they experiment with offering multiple candidate corrections and voting on the model output, since this seems highly likely to outperform a one-best correction?", "gt": "No", "turbo-16k-0613_pred": "In the document, the authors focus on adversarially-chosen spelling mistakes in the context of text classification. They experiment with different attack types, such as dropping, adding, and swapping internal characters within words. The authors propose a task-agnostic defense by attaching a word recognition model before the downstream classification model. They evaluate the defense methods, including data augmentation and adversarial training, and propose a novel method of incorporating word recognition models to improve robustness. They also discuss the importance of model sensitivity and present a metric to quantify it. In their experiments, they find that the word recognition model with backoff strategies offers better protection against adversarial attacks compared to other defense methods. Lastly, they analyze the trade-off between word error rate and sensitivity and conclude that a low error rate often comes at the cost of sensitivity.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Why is the adversarial setting appropriate for misspelling recognition?", "gt": "Adversarial misspellings are a real-world problem", "turbo-16k-0613_pred": "The adversarial setting is appropriate for misspelling recognition because it allows for testing and evaluating the robustness of word recognition models in the presence of intentional misspellings. Adversarial attacks involve making subtle changes to the input data with the goal of causing the model to produce incorrect or misleading outputs. In the case of misspelling recognition, adversaries can introduce carefully crafted misspellings to test the ability of word recognition models to accurately identify and correct these misspellings. By evaluating models under adversarial conditions, researchers can gain insights into the limitations and vulnerabilities of word recognition models and develop more robust defense mechanisms to handle real-world misspellings.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 6 words. Output: ", "evaluation": "f1"}
{"query": "How do the backoff strategies work?", "gt": "Pass-through passes the possibly misspelled word as is, backoff to neutral word backs off to a word with similar distribution across classes and backoff to background model backs off to a more generic word recognition model trained with larger and less specialized corpus.", "turbo-16k-0613_pred": "The backoff strategies in the document work by handling unobserved and rare words that the word recognition model may predict as UNK (unknown). There are three backoff strategies proposed: pass-through, backoff to a neutral word, and backoff to a background model. \n\n1. Pass-through: The word-recognizer simply passes on the (possibly misspelled) word as it is, without making any changes. This strategy is used when the word recognition model predicts UNK.\n\n2. Backoff to a neutral word: Instead of passing the UNK-predicted words through unchanged, this strategy backs off to a neutral word like 'a'. This is done to prevent potentially corrupted text from going to the downstream classifier.\n\n3. Backoff to a background model: In this strategy, a more generic word recognition model trained on a larger, less-specialized corpus is used when the foreground word recognition model predicts UNK. This allows for better handling of rare and unobserved words, as the background model has been trained on a larger vocabulary.\n\nThese backoff strategies help improve the performance and robustness of word recognition models in the face of adversarial attacks.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 43 words. Output: ", "evaluation": "f1"}
{"query": "What baseline model is used?", "gt": "same baseline as used by lang2011unsupervised", "turbo-16k-0613_pred": "The baseline model used is based on the syntactic function of the constituents.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 6 words. Output: ", "evaluation": "f1"}
{"query": "Which additional latent variables are used in the model?", "gt": "CLV as a parent of the two corresponding role variables", "turbo-16k-0613_pred": "The additional latent variables used in the model are crosslingual latent variables (CLVs) that capture role correspondences across languages.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "Which additional latent variables are used in the model?", "gt": "crosslingual latent variables", "turbo-16k-0613_pred": "The additional latent variables used in the model are crosslingual latent variables (CLVs).", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "Which parallel corpora are used?", "gt": "English (EN) and German (DE) sections of the CoNLL 2009 corpus BIBREF13, EN-DE section of the Europarl corpus BIBREF14", "turbo-16k-0613_pred": "The parallel corpora used in this work are the English (EN) and German (DE) sections of the CoNLL 2009 corpus and the EN-DE section of the Europarl corpus.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 19 words. Output: ", "evaluation": "f1"}
{"query": "Overall, does having parallel data improve semantic role induction across multiple languages?", "gt": "No", "turbo-16k-0613_pred": "Having parallel data does not significantly improve semantic role induction across multiple languages.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Do they add one latent variable for each language pair in their Bayesian model?", "gt": "Yes", "turbo-16k-0613_pred": "Yes, they add one latent variable for each language pair in their Bayesian model.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What does an individual model consist of?", "gt": "Bayesian model of garg2012unsupervised as our base monolingual model", "turbo-16k-0613_pred": "An individual model consists of individual Bayesian models for each language.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 9 words. Output: ", "evaluation": "f1"}
{"query": "Do they improve on state-of-the-art semantic role induction?", "gt": "Yes", "turbo-16k-0613_pred": "The document does not explicitly state whether the proposed model improves on state-of-the-art semantic role induction.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "how many tags do they look at?", "gt": "48,705", "turbo-16k-0613_pred": "The document does not explicitly state how many tags they look at.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "which algorithm was the highest performer?", "gt": "A hybrid model consisting of best performing popularity-based approach with the best similarity-based approach", "turbo-16k-0613_pred": "The highest performing algorithm was the hybrid approach that combines the best performing popularity-based approach with the best performing similarity-based approach.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 14 words. Output: ", "evaluation": "f1"}
{"query": "how is diversity measured?", "gt": " the average dissimilarity of all pairs of tags in the list of recommended tags", "turbo-16k-0613_pred": "Diversity is measured as the average dissimilarity of all pairs of tags in the list of recommended tags.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 14 words. Output: ", "evaluation": "f1"}
{"query": "how large is the vocabulary?", "gt": "33,663", "turbo-16k-0613_pred": "The document does not explicitly mention the size of the vocabulary.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "what dataset was used?", "gt": " E-book annotation data: editor tags, Amazon search terms, and  Amazon review keywords.", "turbo-16k-0613_pred": "The dataset used in this study includes editor tags and Amazon search terms for e-books.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "what algorithms did they use?", "gt": "popularity-based, similarity-based, hybrid", "turbo-16k-0613_pred": "Popularity-based, similarity-based, hybrid.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "What baseline method is used?", "gt": "use the word2vec algorithm, create several unsupervised hand-crafted features, generate document vectors and feed them as input into the support vector machines (SVM) approach", "turbo-16k-0613_pred": "The baseline method used in the document is the word2vec algorithm, which generates word embeddings based on syntactic and semantic representations of words.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 24 words. Output: ", "evaluation": "f1"}
{"query": "What details are given about the Twitter dataset?", "gt": "one of the Twitter datasets is about Turkish mobile network operators, there are positive, neutral and negative labels and provide the total amount plus the distribution of labels", "turbo-16k-0613_pred": "The Twitter dataset used in the experiment consists of 1,716 tweets about Turkish mobile network operators. Of these tweets, 973 are negative and 743 are positive. The tweets were manually annotated by two individuals and any disagreements in polarity labeling were removed.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 28 words. Output: ", "evaluation": "f1"}
{"query": "What details are given about the movie domain dataset?", "gt": "there are 20,244 reviews divided into positive and negative with an average 39 words per review, each one having a star-rating score", "turbo-16k-0613_pred": "The movie domain dataset used in the study consists of 20,244 movie reviews collected from a popular website. The dataset includes both positive and negative reviews, with star-rating scores ranging from 0.5 to 5. The reviews have been processed and the polarity of each review is determined based on the star-rating score. A total of 7,020 negative reviews and 7,020 positive reviews were randomly selected for the experiments.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 22 words. Output: ", "evaluation": "f1"}
{"query": "Which hand-crafted features are combined with word2vec?", "gt": "polarity scores, which are minimum, mean, and maximum polarity scores, from each review", "turbo-16k-0613_pred": "The hand-crafted features that are combined with word2vec are the minimum, mean, and maximum polarity scores of the reviews.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 13 words. Output: ", "evaluation": "f1"}
{"query": "What word-based and dictionary-based feature are used?", "gt": "generate word embeddings specific to a domain, TDK (T\u00fcrk Dil Kurumu - \u201cTurkish Language Institution\u201d) dictionary to obtain word polarities", "turbo-16k-0613_pred": "The word-based feature used in the document is word2vec embeddings, which capture the semantic and syntactic representations of words. The dictionary-based feature used is the TDK dictionary, which provides word polarities for Turkish words.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 20 words. Output: ", "evaluation": "f1"}
{"query": "How are the supervised scores of the words calculated?", "gt": "(+1 or -1), words of opposite polarities (e.g. \u201chappy\" and \u201cunhappy\") get far away from each other", "turbo-16k-0613_pred": "The supervised scores of the words are calculated based on the polarity of the word in the training data. These scores are computed using the formula: sentiment score of word = (positive word frequency in positive documents / total number of words in positive documents) - (negative word frequency in negative documents / total number of words in negative documents). The scores are normalized and a small smoothing factor is added to prevent imbalance.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 17 words. Output: ", "evaluation": "f1"}
{"query": "What topic is covered in the Chinese Facebook data? ", "gt": "anti-nuclear-power", "turbo-16k-0613_pred": "The topic covered in the Chinese Facebook data is anti-nuclear power.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How many layers does the UTCNN model have?", "gt": "eight layers", "turbo-16k-0613_pred": "The UTCNN model has multiple layers. It starts with a CNN-based document composition layer, which captures user- and topic-dependent document-level semantic representation from word representations. This layer is followed by a user-topic-comment neural network layer, where user, topic, and comment information are utilized to further enhance text classification. Therefore, the UTCNN model has at least two layers, the document composition layer and the user-topic-comment neural network layer.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What topics are included in the debate data?", "gt": "abortion, gay rights, Obama, marijuana", "turbo-16k-0613_pred": "The topics included in the debate data are abortion, gay rights, Obama, and marijuana.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 5 words. Output: ", "evaluation": "f1"}
{"query": "What topics are included in the debate data?", "gt": "abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR)", "turbo-16k-0613_pred": "The topics included in the debate data are abortion, gay rights, Obama, and marijuana.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "What is the size of the Chinese data?", "gt": "32,595", "turbo-16k-0613_pred": "The size of the Chinese data in the FBFans dataset is not specified in the document.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Did they collected the two datasets?", "gt": "No", "turbo-16k-0613_pred": "Yes, the authors collected the two datasets mentioned in the document.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What are the baselines?", "gt": "SVM with unigram, bigram, trigram features, with average word embedding, with average transformed word embeddings, CNN and RCNN, SVM, CNN, RCNN with comment information", "turbo-16k-0613_pred": "The baselines in this paper include SVM with different features (unigram, bigram, trigram, average word embedding, and average transformed word embedding), CNN, RCNN, SVM and deep learning models with comment information, UTCNN without user information, UTCNN without topic information, and UTCNN without comments. These baselines were used to compare and evaluate the performance of the proposed UTCNN model in stance classification on social media texts.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 24 words. Output: ", "evaluation": "f1"}
{"query": "why are their techniques cheaper to implement?", "gt": "They use a slightly modified copy of the target to create the pseudo-text instead of full BT to make their technique cheaper", "turbo-16k-0613_pred": "The techniques discussed in the document, such as using a copy of the target instead of back-translation (BT) and using Generative Adversarial Networks (GANs), are considered cheaper to implement because they require fewer computational resources and less time compared to traditional BT methods. These alternative approaches do not rely on the availability of a reverse translation engine or the generation of large sets of artificial parallel data. Instead, they use simpler data generation methods or modify the training process with GANs to make the pseudo-sources more like natural data. These techniques offer a more efficient and cost-effective way to integrate monolingual data into Neural Machine Translation (NMT) systems.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 22 words. Output: ", "evaluation": "f1"}
{"query": "what data simulation techniques were introduced?", "gt": "copy, copy-marked, copy-dummies", "turbo-16k-0613_pred": "The data simulation techniques introduced in the document are Back-Translation (BT), copying the target into the source (copy), copying the target with marked words (copy-marked), copying the target with dummy tokens (copy-dummies), and using Generative Adversarial Networks (GANs) to make pseudo-sources more natural.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "what is their explanation for the effectiveness of back-translation?", "gt": "when using BT, cases where the source is shorter than the target are rarer; cases when they have the same length are more frequent, automatic word alignments between artificial sources tend to be more monotonic than when using natural sources", "turbo-16k-0613_pred": "The effectiveness of back-translation (BT) in Neural Machine Translation (NMT) is explained by several factors. First, BT provides a large volume of parallel data by generating artificial parallel data from monolingual target texts. This additional data allows the NMT system to learn more effectively and improve translation quality. Second, BT acts as a form of regularization, helping to mitigate overfitting and improve generalization. Third, BT helps to align the source and target sentences, making the resulting artificial parallel data more monotonic. This monotonicity facilitates the training of the NMT system. Finally, BT data complements target-side language modeling, providing additional linguistic and distributional information that enhances translation performance. Overall, the combination of these factors contributes to the effectiveness of BT in NMT.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 40 words. Output: ", "evaluation": "f1"}
{"query": "what dataset is used?", "gt": "Europarl corpus , WMT newstest 2014, News-Commentary-11, Wikipedia from WMT 2014, Multi-UN, EU-Bookshop, Rapid, Common-Crawl (WMT 2017)", "turbo-16k-0613_pred": "The dataset used in the document is the Europarl corpus for the English-German and English-French language pairs. Specifically, the in-domain training data comes from the Europarl corpus, while the out-of-domain training data comes from various sources such as News-Commentary-11, Wikipedia, Multi-UN, and EU-Bookshop corpora.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 17 words. Output: ", "evaluation": "f1"}
{"query": "what language pairs are explored?", "gt": "English-German, English-French", "turbo-16k-0613_pred": "The document explores two language pairs: English-German and English-French.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "what language is the data in?", "gt": "English , German, French", "turbo-16k-0613_pred": "The language of the document is English.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 4 words. Output: ", "evaluation": "f1"}
{"query": "What metrics are used for evaluation?", "gt": "BLEU-1/4 and ROUGE-L, likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles, user matching accuracy (UMA), Mean Reciprocal Rank (MRR), neural scoring model from BIBREF33 to measure recipe-level coherence", "turbo-16k-0613_pred": "The document discusses a new approach to generating personalized recipes based on user preferences. The model takes as input the name of a dish, a few key ingredients, and a calorie level, and uses an encoder-decoder framework to generate a recipe personalized to the user's tastes. The model incorporates attention mechanisms that attend to the user's previously consumed recipes to guide the generation process. The document presents experiments and results that show the effectiveness of the personalized models in generating high-quality and specific recipes that align with historical user preferences. Overall, the document highlights the contributions of the proposed approach, including the exploration of personalized recipe generation, the release of a new dataset, and the introduction of new evaluation strategies for generation quality in instructional texts.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 35 words. Output: ", "evaluation": "f1"}
{"query": "What natural language(s) are the recipes written in?", "gt": "English", "turbo-16k-0613_pred": "The document discusses a proposed approach for generating personalized recipes based on user preferences extracted from previously consumed recipes. The model takes as input the name of a dish, key ingredients, and a calorie level, and uses an encoder-decoder framework to generate a recipe personalized to the user's tastes. The model uses attention mechanisms to attend to user profiles and historical recipe interactions in order to improve the generation of plausible recipes from incomplete ingredients. The document also mentions the creation of a new dataset for this task and introduces evaluation strategies for measuring generation quality and coherence. Experiments and results show that the personalized models outperform baseline models in terms of generation quality, diversity, and user personalization. Human evaluation further confirms that personalized attention improves the plausibility and coherence of generated recipes.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What were their results on the new dataset?", "gt": "average recipe-level coherence scores of 1.78-1.82, human evaluators preferred personalized model outputs to baseline 63% of the time", "turbo-16k-0613_pred": "The document presents a research paper on generating personalized recipes from incomplete input specifications using historical user preferences. The authors propose a model that takes as input the name of a specific dish, key ingredients, and a calorie level and uses an encoder-decoder framework to generate a recipe personalized to the user's tastes. They incorporate user profiles, which are learned latent representations of previously consumed recipes, to improve the personalization. The model uses attention mechanisms to attend to user preferences and fuse them with the decoder output to determine text generation. The authors conduct experiments and show that their personalized models outperform baseline models in terms of generation quality, coherence, and user personalization. They also introduce new evaluation strategies for instructional texts and validate the effectiveness of their models through quantitative and qualitative analysis. The document also provides details about the dataset used, experimental setup, and results.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 18 words. Output: ", "evaluation": "f1"}
{"query": "What are the baseline models?", "gt": "name-based Nearest-Neighbor model (NN), Encoder-Decoder baseline with ingredient attention (Enc-Dec)", "turbo-16k-0613_pred": "The document discusses a proposed approach for generating personalized recipes based on user preferences extracted from previously consumed recipes. The model takes inputs such as the name of a specific dish, key ingredients, and a calorie level, and uses an encoder-decoder framework with attention mechanisms to generate a recipe personalized to the user's tastes. The document also highlights the contributions of the work, the related work in the field, and the dataset used for training and evaluation. Experimental results show that the personalized models outperform baselines in terms of generation quality, diversity, and coherence. Human evaluations further confirm the superiority of the personalized models in terms of semantic plausibility and coherence. Overall, the proposed approach shows promise in generating plausible and personalized recipes based on incomplete input specifications and user histories.\n\nQ1: What is the main focus of the document?\nA1: The main focus of the document is on the approach for generating personalized recipes based on user preferences and incomplete input specifications.\n\nQ2: What are the main contributions of the proposed work?\nA2: The main contributions of the proposed work are: exploring the task of generating personalized recipes from incomplete input specifications, releasing a new dataset for this task, introducing new evaluation strategies for generation quality in instructional texts, and demonstrating that personalized models generate high-quality recipes aligned with historical user preferences.\n\nQ3: How is the evaluation of the personalized models conducted?\nA3: The evaluation of the personalized models is conducted using quantitative measures such as perplexity, BLEU-1/4, ROUGE-L, and qualitative analysis. Human evaluations are also conducted to assess the coherence and preference of the generated recipes compared to baseline models.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "How did they obtain the interactions?", "gt": "from Food.com", "turbo-16k-0613_pred": "The document discusses a new approach to personalized recipe generation by leveraging historical user preferences. The model takes input specifications such as the dish name, key ingredients, and calorie level, and uses an encoder-decoder framework to generate a recipe personalized to the user's tastes. The model incorporates attention mechanisms to attend to user profiles and previous recipe interactions. The document also highlights the creation of a new dataset for this task and introduces evaluation strategies for generation quality. Experimental results show that the personalized models outperform baselines in terms of coherence and generate diverse and acceptable recipes. Human evaluation confirms that personalized attention improves the plausibility and coherence of generated recipes.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "Where do they get the recipes from?", "gt": "from Food.com", "turbo-16k-0613_pred": "The document discusses a new approach to generating personalized recipes based on user preferences. The model takes into account the name of the dish, key ingredients, and a calorie level as input, and uses historical recipe interactions of the user to personalize the output. The document outlines the architecture of the model, including the encoder, ingredient attention, decoder, and attention fusion layer. It also presents the results of experiments conducted on a dataset of 180K+ recipes and 700K+ user reviews, showing that the personalized models generate high-quality and specific recipes that align with historical user preferences. The document concludes by discussing future work and acknowledging the support received for the research.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "what are their results on the constructed dataset?", "gt": "AutoJudge consistently and significantly outperforms all the baselines, RC models achieve better performance than most text classification models (excluding GRU+Attention), Comparing with conventional RC models, AutoJudge achieves significant improvement", "turbo-16k-0613_pred": "The experiment results on the constructed dataset show that the AutoJudge model consistently and significantly outperforms all the baselines, including both text classification models and reading comprehension models. The AutoJudge model achieves significant improvement by incorporating the pair-wise attentive reader, law articles, and the CNN output layer. The results demonstrate the effectiveness and robustness of the AutoJudge model in predicting judgments of civil cases.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 29 words. Output: ", "evaluation": "f1"}
{"query": "what evaluation metrics are reported?", "gt": "precision, recall, F1 and accuracy", "turbo-16k-0613_pred": "precision, recall, F1, accuracy", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 5 words. Output: ", "evaluation": "f1"}
{"query": "what civil field is the dataset about?", "gt": "divorce", "turbo-16k-0613_pred": "The dataset is about divorce proceedings in the civil field.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "what are the state-of-the-art models?", "gt": "SVM , CNN , GRU , CNN/GRU+law, r-net , AoA ", "turbo-16k-0613_pred": "The state-of-the-art models in the field of automatic judgment prediction include SVM with lexical features, attention-based neural models, and off-the-shelf reading comprehension models.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "what is the size of the real-world civil case dataset?", "gt": "100 000 documents", "turbo-16k-0613_pred": "The size of the real-world civil case dataset is not mentioned in the document.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "what datasets are used in the experiment?", "gt": "build a new one, collect INLINEFORM0 cases from China Judgments Online", "turbo-16k-0613_pred": "The experiment in this paper used a dataset of civil law cases from China Judgments Online.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 11 words. Output: ", "evaluation": "f1"}
{"query": "What domains are covered in the corpus?", "gt": "No specific domain is covered in the corpus.", "turbo-16k-0613_pred": "The document does not explicitly mention the domains covered in the corpus.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 8 words. Output: ", "evaluation": "f1"}
{"query": "What is the architecture of their model?", "gt": "follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing", "turbo-16k-0613_pred": "The document does not directly provide information about the architecture of the model used. However, it mentions that the ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, a Transformer base architecture is used with 3 encoder layers, 3 decoder layers, and 0.3 dropout. Unfortunately, there are no additional details on the specific architectural details of the models.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 14 words. Output: ", "evaluation": "f1"}
{"query": "How was the dataset collected?", "gt": "Contributors record voice clips by reading from a bank of donated sentences.", "turbo-16k-0613_pred": "The dataset, called CoVoST, was collected from Common Voice (CoVo), which is a crowdsourcing speech recognition corpus. Contributors recorded voice clips by reading from a bank of donated sentences. The validated voice clips were used to build CoVoST, and professional translators were hired to translate the validated transcripts. The translations were checked for quality using various methods, including comparing them to automatic translations and measuring perplexity. CoVoST also includes an evaluation set collected from Tatoeba, a community-built language learning corpus. The dataset is diversified with over 11,000 speakers and over 60 accents, and it covers 11 languages into English. The CoVoST dataset and the Tatoeba evaluation samples are both available for free use under CC licenses.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "Which languages are part of the corpus?", "gt": "French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese", "turbo-16k-0613_pred": "The languages that are part of the CoVoST corpus are French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian, and Chinese.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "How is the quality of the data empirically evaluated? ", "gt": "Validated transcripts were sent to professional translators., various sanity checks to the translations,  sanity check the overlaps of train, development and test sets", "turbo-16k-0613_pred": "The quality of the data in the CoVoST multilingual ST corpus is empirically evaluated through various checks. These include measuring sentence-level BLEU scores between human translations and automatic translations, manual inspection of examples with identical source transcript and translation, measuring perplexity of translations, computing the ratio of English characters in translations, and calculating similarity scores between transcripts and translations using cross-lingual sentence embeddings. In addition, overlaps between train, development, and test sets are checked to ensure they are disjoint. These evaluations help ensure the quality and accuracy of the translations in the corpus.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 23 words. Output: ", "evaluation": "f1"}
{"query": "Is the data in CoVoST annotated for dialect?", "gt": "No", "turbo-16k-0613_pred": "No, the document does not mention that the data in CoVoST is annotated for dialect.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Is Arabic one of the 11 languages in CoVost?", "gt": "No", "turbo-16k-0613_pred": "No, Arabic is not one of the 11 languages in CoVoST.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What kind of model do they use?", "gt": "visual model is based on fine-tuning an Inception V3 model BIBREF1 over visual renderings of documents, while our textual model is based on a hierarchical biLSTM. We further combine the two into a joint model. , neural network models", "turbo-16k-0613_pred": "The authors propose a joint model for document quality assessment that combines visual features with textual features. The visual model is based on fine-tuning an Inception V3 model over visual renderings of documents, while the textual model is based on a hierarchical biLSTM. The two models are then combined into a joint model. Experimental results show that the joint model achieves higher accuracy in document quality assessment compared to using textual features alone or visual features alone.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 39 words. Output: ", "evaluation": "f1"}
{"query": "What kind of model do they use?", "gt": "Inception V3, biLSTM", "turbo-16k-0613_pred": "They use a joint model.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "Did they release their data set of academic papers?", "gt": "No", "turbo-16k-0613_pred": "Yes, according to the document, the authors have released all the code and data associated with their research on publication.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Do the methods that work best on academic papers also work best on Wikipedia?", "gt": "No", "turbo-16k-0613_pred": "Based on the experimental results, the methods that work best on academic papers do not necessarily work best on Wikipedia. While the Inception model performs well on both academic papers and Wikipedia, the Joint model, which combines both visual and textual features, achieves the highest accuracy on Wikipedia. This suggests that the visual features captured by Inception play a significant role in assessing document quality on Wikipedia, which is not the case for academic papers. Therefore, the combination of visual and textual representations seems to be more effective for assessing document quality in the context of Wikipedia.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What is their system's absolute accuracy?", "gt": "59.4% on wikipedia dataset, 93.4% on peer-reviewed archive AI papers, 77.1%  on peer-reviewed archive Computation and Language papers, and 79.9% on peer-reviewed archive Machine Learning papers", "turbo-16k-0613_pred": "The system's absolute accuracy is 59.4% on the Wikipedia dataset.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 26 words. Output: ", "evaluation": "f1"}
{"query": "Which is more useful, visual or textual features?", "gt": "It depends on the dataset. Experimental results over two datasets reveal that textual and visual features are complementary. ", "turbo-16k-0613_pred": "Experimental results showed that the joint model, combining visual and textual features, achieved higher accuracy than both the visual-only and text-only models, indicating the complementarity of visual and textual features in document quality assessment.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 18 words. Output: ", "evaluation": "f1"}
{"query": "Which languages do they use?", "gt": "English", "turbo-16k-0613_pred": "The document mainly uses English language.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How large is their data set?", "gt": "a sample of  29,794 wikipedia articles and 2,794 arXiv papers ", "turbo-16k-0613_pred": "The data set consists of 29,794 articles from English Wikipedia and three subsets of academic articles from the arXiv repository.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "Where do they get their ground truth quality judgments?", "gt": "quality class labels assigned by the Wikipedia community, a paper is considered to have been accepted (i.e. is positively labeled) if it matches a paper in the DBLP database or is otherwise accepted by any of the following conferences: ACL, EMNLP, NAACL, EACL, TACL, NIPS, ICML, ICLR, or AAAI", "turbo-16k-0613_pred": "The ground truth quality judgments for the Wikipedia dataset are assigned by the Wikipedia community. Wikipedia articles are labeled with one of six quality classes (Featured Article, Good Article, B-class Article, C-class Article, Start Article, and Stub Article) based on the criteria set by the community. The quality class of an article is assigned by registered users or Wikipedia reviewers through discussion on the article's talk page. The arXiv dataset, on the other hand, uses a different method for determining quality. Papers are considered to be accepted if they match a paper in the DBLP database or if they are accepted by certain conferences.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 49 words. Output: ", "evaluation": "f1"}
{"query": "What is the seed lexicon?", "gt": "a vocabulary of positive and negative predicates that helps determine the polarity score of an event", "turbo-16k-0613_pred": "The seed lexicon refers to a collection of positive and negative words that are used as the initial set of labeled data for learning affective events. In this document, the positive seed lexicon consists of words such as \"rejoice,\" \"be glad,\" \"be pleasant,\" and \"be happy.\" The negative seed lexicon includes words like \"get angry,\" \"be sad,\" \"be lonely,\" and \"be scared.\"", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 16 words. Output: ", "evaluation": "f1"}
{"query": "What is the seed lexicon?", "gt": "seed lexicon consists of positive and negative predicates", "turbo-16k-0613_pred": "The seed lexicon is a list of positive and negative words used in the proposed method for learning affective events. The positive words include \"rejoice,\" \"be glad,\" \"be pleasant,\" \"be happy,\" \"be impressed,\" \"be excited,\" \"feel nostalgic,\" \"like,\" \"respect,\" \"be relieved,\" \"admire,\" \"be calm,\" \"be satisfied,\" \"be healed,\" and \"be refreshed.\" The negative words include \"get angry,\" \"be sad,\" \"be lonely,\" \"be scared,\" \"feel anxious,\" \"be embarrassed,\" \"hate,\" \"feel down,\" \"be bored,\" \"feel hopeless,\" \"have a hard time,\" \"have trouble,\" \"be depressed,\" \"be worried,\" and \"be sorry.\"", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 8 words. Output: ", "evaluation": "f1"}
{"query": "What are the results?", "gt": "Using all data to train: AL -- BiGRU achieved 0.843 accuracy, AL -- BERT achieved 0.863 accuracy, AL+CA+CO -- BiGRU achieved 0.866 accuracy, AL+CA+CO -- BERT achieved 0.835, accuracy, ACP -- BiGRU achieved 0.919 accuracy, ACP -- BERT achived 0.933, accuracy, ACP+AL+CA+CO -- BiGRU achieved 0.917 accuracy, ACP+AL+CA+CO -- BERT achieved 0.913 accuracy. \nUsing a subset to train: BERT achieved 0.876 accuracy using ACP (6K), BERT achieved 0.886 accuracy using ACP (6K) + AL, BiGRU achieved 0.830 accuracy using ACP (6K), BiGRU achieved 0.879 accuracy using ACP (6K) + AL + CA + CO.", "turbo-16k-0613_pred": "The paper proposes a method for learning affective events using discourse relations. The goal is to predict the polarity of an event, ranging from -1 (negative) to 1 (positive). The method uses a small seed lexicon and a large raw corpus to propagate polarity from seed predicates to other events. Different types of discourse relations, such as Cause and Concession, are considered. The paper describes the loss functions used for training the models and presents the results of experiments. The models performed well, and the combination of annotated and unannotated data yielded improvements over purely supervised models. The paper also compares different model configurations and discusses the effectiveness of the proposed method. Overall, the results demonstrate the effectiveness of using discourse relations for learning affective events.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 94 words. Output: ", "evaluation": "f1"}
{"query": "How are relations used to propagate polarity?", "gt": "cause relation: both events in the relation should have the same polarity; concession relation: events should have opposite polarity", "turbo-16k-0613_pred": "In this paper, the authors propose a method for learning affective events by using discourse relations to propagate polarity. The key idea is that if two events are in a Cause relation and the latter event is positive, then the former event is also likely to be positive. Similarly, if the two events are in a Concession relation, the former event is likely to have the reverse polarity of the latter event. This idea is transformed into objective functions and trained using neural network models. The results of the experiments show that the method performs well even with a minimal amount of supervision.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 19 words. Output: ", "evaluation": "f1"}
{"query": "How big is the Japanese data?", "gt": "The ACP corpus has around 700k events split into positive and negative polarity ", "turbo-16k-0613_pred": "The Japanese data used in the experiment consists of a web corpus with about 100 million sentences.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 13 words. Output: ", "evaluation": "f1"}
{"query": "What are labels available in dataset for supervision?", "gt": "negative, positive", "turbo-16k-0613_pred": "The labels available in the dataset for supervision are positive and negative.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "How big are improvements of supervszed learning results trained on smalled labeled data enhanced with proposed approach copared to basic approach?", "gt": "3%", "turbo-16k-0613_pred": "The improvements of supervised learning results trained on smaller labeled data enhanced with the proposed approach compared to the basic approach are significant.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How does their model learn using mostly raw data?", "gt": "by exploiting discourse relations to propagate polarity from seed predicates to final sentiment polarity", "turbo-16k-0613_pred": "The proposed method for learning affective events relies on discourse relations to propagate polarity from seed predicates. By exploiting the discourse relations of Cause and Concession, the polarity of one event can be inferred from the known polarity of another event. This method only requires a small seed lexicon and a large raw corpus. The models are trained using a Japanese web corpus and performed well, even with minimal supervision. The combination of annotated and unannotated data yielded improved results compared to purely supervised training when labeled data were limited.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 14 words. Output: ", "evaluation": "f1"}
{"query": "How big is seed lexicon used for training?", "gt": "30 words", "turbo-16k-0613_pred": "The seed lexicon used for training is composed of 15 positive words and 15 negative words.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "How large is raw corpus used for training?", "gt": "100 million sentences", "turbo-16k-0613_pred": "The raw corpus used for training the models is a Japanese web corpus.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "What data is the language model pretrained on?", "gt": "Chinese general corpus", "turbo-16k-0613_pred": "The language model is pretrained on a Chinese general corpus.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 3 words. Output: ", "evaluation": "f1"}
{"query": "What baselines is the proposed model compared against?", "gt": "BERT-Base, QANet", "turbo-16k-0613_pred": "The proposed model is compared against rule and dictionary based methods, task-specific end-to-end methods, and pipeline methods.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "How is the clinical text structuring task defined?", "gt": "Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained., Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. ", "turbo-16k-0613_pred": "The clinical text structuring (CTS) task aims to extract structured data from medical text, such as symptoms, diseases, tumor size, and laboratory test results, which are important for medical research. Existing CTS methods have non-uniform output formats and require task-specific training datasets. To address this, the proposed question answering based CTS (QA-CTS) task aims to discover the most related text from the original paragraph by unifying the output format of different CTS tasks. The proposed model integrates clinical named entity information into a pre-trained language model, BERT, to achieve better performance. Experimental results demonstrate that the QA-CTS task leads to significant improvement and the proposed model outperforms strong baseline methods.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 78 words. Output: ", "evaluation": "f1"}
{"query": "How is the clinical text structuring task defined?", "gt": "CTS is extracting structural data from medical research data (unstructured). Authors define QA-CTS task that aims to discover most related text from original text.", "turbo-16k-0613_pred": "The clinical text structuring (CTS) task aims to extract relevant structured information from clinical text in electronic health records (EHRs). It involves identifying specific patient medical data, such as symptoms, diseases, tumor size, and laboratory test results, from unstructured clinical text. CTS is important because structured data is essential for biomedical systems and research. However, CTS is challenging because it often requires different models for different tasks and relies on expensive labeled data for training. To address these challenges, the authors propose a question answering based CTS (QA-CTS) task that aims to discover the most relevant text from the original paragraph text. The QA-CTS task unifies the output format of traditional CTS tasks and allows for the sharing of training data. The authors also propose an effective model that integrates clinical named entity information into a pre-trained language model. Experimental results show that the QA-CTS task improves performance and outperforms baseline methods. The rest of the paper discusses related work, presents the proposed model, and provides experimental studies and conclusions. Overall, the QA-CTS task and model offer a promising approach to clinical text structuring.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 24 words. Output: ", "evaluation": "f1"}
{"query": "What are the specific tasks being unified?", "gt": " three types of questions, namely tumor size, proximal resection margin and distal resection margin", "turbo-16k-0613_pred": "The specific tasks being unified in the QA-CTS task are extracting or generating key-value pairs from clinical text. The key is a query term (e.g., proximal resection margin) and the value is the result of the query term in the clinical text. This task aims to find the most related text from the original paragraph text and can involve several steps such as entity names conversion and negative words recognition. The QA-CTS task unifies different specific tasks in clinical text structuring and makes the training data shareable, thus enriching the training data for downstream clinical research.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 14 words. Output: ", "evaluation": "f1"}
{"query": "Is all text in this dataset a question, or are there unrelated sentences in between questions?", "gt": "the dataset consists of pathology reports including sentences and questions and answers about tumor size and resection margins so it does include additional sentences ", "turbo-16k-0613_pred": "The text in this dataset is not all in the form of questions. There are both statements and questions present in the document. The document discusses clinical text structuring, different methods used in this field, and the proposed question-answering based clinical text structuring (QA-CTS) task. The QA-CTS task aims to extract relevant information from clinical text to answer specific queries. The document also presents an effective model for the QA-CTS task, which integrates clinical named entity information into a pre-trained language model. Experimental results show that the proposed model outperforms baseline methods and achieves significant improvements in performance. The document concludes by highlighting the benefits of the shared task and shared model introduced by the QA-CTS task.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 24 words. Output: ", "evaluation": "f1"}
{"query": "How many questions are in the dataset?", "gt": "2,714 ", "turbo-16k-0613_pred": "There are 2,714 question-answer pairs in the dataset.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How they introduce domain-specific features into pre-trained language model?", "gt": "integrate clinical named entity information into pre-trained language model", "turbo-16k-0613_pred": "To introduce domain-specific features into a pre-trained language model, the authors of the document propose the integration of clinical named entity information. They utilize a clinical named entity recognition (CNER) model to capture important clinical terms such as diseases, symptoms, treatments, exams, and body parts from Chinese electronic health records (EHRs). The CNER model outputs a sequence of tags, where each character in the original sentence is assigned a label following a tag scheme. This named entity information is then integrated with the pre-trained language model, BERT, by concatenating the named entity information vectors with the contextualized representation vectors obtained from BERT. Another approach they explore is using multi-head attention to encode the two vectors together. The authors compare these two integration methods and find that concatenation yields the best performance. This integration of domain-specific features enhances the performance of the pre-trained language model on clinical text structuring tasks.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 9 words. Output: ", "evaluation": "f1"}
{"query": "How big is QA-CTS task dataset?", "gt": "17,833 sentences, 826,987 characters and 2,714 question-answer pairs", "turbo-16k-0613_pred": "The QA-CTS task dataset contains 17,833 sentences, 826,987 characters, and 2,714 question-answer pairs. It includes three types of questions: tumor size, proximal resection margin, and distal resection margin. The dataset is partitioned into 1,899 training instances and 815 test instances.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 8 words. Output: ", "evaluation": "f1"}
{"query": "What are strong baseline models in specific tasks?", "gt": "state-of-the-art question answering models (i.e. QANet BIBREF39) and BERT-Base BIBREF26", "turbo-16k-0613_pred": "The strong baseline models in specific tasks mentioned in the document are QANet and BERT-Base.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "What is the previous work's model?", "gt": "Ternary Trans-CNN", "turbo-16k-0613_pred": "Transfer learning based approaches", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What dataset is used?", "gt": "HEOT , A labelled dataset for a corresponding english tweets", "turbo-16k-0613_pred": "The dataset used in this document is called HEOT, which consists of cleaned tweets obtained from Twitter in the Indian subcontinent. It is a labeled dataset that was used for transfer learning in the task of detecting offensive and non-offensive tweets written in Hinglish. The dataset also includes a corresponding set of labeled English tweets. The dataset had various challenges such as messy text messages with URLs, punctuations, usernames, hashtags, emoticons, numbers, and special characters. To prepare the data, several processes were applied including cleaning up the text, removing stop words, transliteration, translation, and data augmentation techniques.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 10 words. Output: ", "evaluation": "f1"}
{"query": "How big is the dataset?", "gt": "3189 rows of text messages", "turbo-16k-0613_pred": "The dataset consists of 3189 rows of text messages with an average length of 116 words.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 5 words. Output: ", "evaluation": "f1"}
{"query": "How big is the dataset?", "gt": "Resulting dataset was 7934 messages for train and 700 messages for test.", "turbo-16k-0613_pred": "The dataset used in the study consisted of 3189 rows of text messages with an average length of 116 words.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "How is the dataset collected?", "gt": "A labelled dataset for a corresponding english tweets were also obtained from a study conducted by Davidson et al, HEOT obtained from one of the past studies done by Mathur et al", "turbo-16k-0613_pred": "The dataset used in the study was collected from past studies conducted by Mathur et al. and Davidson et al. The dataset consists of cleaned tweets obtained from Twitter, specifically focusing on conversations happening in the Indian subcontinent. The Hinglish dataset was annotated for offensive content, and a corresponding English dataset was also obtained for transfer learning purposes. The dataset had some challenges, such as messy text messages with URLs, punctuation, username mentions, hashtags, emoticons, numbers, and special characters. To prepare the data, these elements were cleaned up. Stop words from the NLTK corpus were used to eliminate unproductive words. Transliteration was also performed to translate Hinglish tweets into English words using translation APIs. Data augmentation techniques were applied to boost the learning of the deep network, including techniques like synonym replacement, random insertion, random swap, and random deletion. Word embedding representations from GloVe were used to obtain word sequence vector representations of the processed tweets. The dataset was split into a training set and a test set, with a 78%-22% split. Different model architectures were tested, including fully connected dense networks, convolution-based architectures, and sequence models (SimpleRNN, LSTM, GRU, Bidirectional LSTM). The best performing architecture had a bidirectional LSTM with 32 units and achieved a recall rate of 77% for hate speech. Overall, the results were encouraging in detecting offensive vs. non-offensive tweets in Hinglish, but there were some shortcomings in predicting hateful tweets. 
Future work includes exploring larger Hinglish corpora for word embeddings and considering alternate methods for translation and transliteration.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 32 words. Output: ", "evaluation": "f1"}
{"query": "Was each text augmentation technique experimented individually?", "gt": "No", "turbo-16k-0613_pred": "No, each text augmentation technique was not experimented individually. Multiple text augmentation techniques were applied together to boost the learning of the deep network. The techniques used were Synonym Replacement (SR), Random Insertion (RI), Random Swap (RS), and Random Deletion (RD). These techniques were applied collectively in the data augmentation process.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What models do previous work use?", "gt": "Ternary Trans-CNN , Hybrid multi-channel CNN and LSTM", "turbo-16k-0613_pred": "Previous work uses Transfer Learning and hybrid models for detecting offensive tweets in Hinglish language.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 8 words. Output: ", "evaluation": "f1"}
{"query": "Does the dataset contain content from various social media platforms?", "gt": "No", "turbo-16k-0613_pred": "The document does not explicitly mention the specific social media platforms from which the dataset was obtained. However, it does mention that the dataset consists of tweets, which suggests that the content is sourced from Twitter.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What additional techniques are incorporated?", "gt": "incorporating coding syntax tree model", "turbo-16k-0613_pred": "The additional technique incorporated in the proposed methodology is the use of Recurrent Neural Network (RNN) layers, specifically an encoder layer, a decoder layer, and an output layer. These layers together form a LSTM (Long Short-Term Memory) model, which is typically used in seq2seq translation.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 5 words. Output: ", "evaluation": "f1"}
{"query": "What dataset do they use?", "gt": "A parallel corpus where the source is an English expression of code and the target is Python code.", "turbo-16k-0613_pred": "The researchers use a parallel corpus that consists of 18,805 aligned data. In the source data, expressions of each line of code are written in the English language, while in the target data, the code is written in Python programming language.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 18 words. Output: ", "evaluation": "f1"}
{"query": "Do they compare to other models?", "gt": "No", "turbo-16k-0613_pred": "The document does not provide a direct comparison of the proposed methodology to other models. However, it mentions that machine learning techniques such as Statistical Machine Translation (SMT) have been used in the past with a success rate of up to 75% in converting human text to executable code. The proposed methodology utilizes SMT techniques as well but with the addition of a neural network-based approach. This suggests that the proposed methodology may have improved performance compared to previous models, but a direct comparison is not provided in the document.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What is the architecture of the system?", "gt": "seq2seq translation", "turbo-16k-0613_pred": "The architecture of the system is based on a neural machine translation model. It consists of three types of recurrent neural network (RNN) layers - an encoder layer, a decoder layer, and an output layer. These layers together form a long short-term memory (LSTM) model. The encoder layer takes the source text as input and passes it through an encoder cell. The decoder layer takes the target text as input and passes it through a decoder cell. The output layer produces the predictions of the next target token based on the current hidden state and attention mechanism. The model uses attention to process variable length between the input and output sequences and generate coherent code expressions.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What additional techniques could be incorporated to further improve accuracy?", "gt": "phrase-based word embedding, Abstract Syntax Tree(AST)", "turbo-16k-0613_pred": "One additional technique that could be incorporated to further improve accuracy is the use of Abstract Syntax Tree (AST). By incorporating AST, the translator can better understand the structure and syntax of the code, which can help in generating more accurate target code for each line.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 6 words. Output: ", "evaluation": "f1"}
{"query": "What programming language is target language?", "gt": "Python", "turbo-16k-0613_pred": "The target programming language in this document is Python.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What dataset is used to measure accuracy?", "gt": "validation data", "turbo-16k-0613_pred": "The dataset used to measure accuracy is a training parallel corpus with 18,805 lines of annotated code.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What corpus was the source of the OpenIE extractions?", "gt": "domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining", "turbo-16k-0613_pred": "The source of the OpenIE extractions is the Sentence corpus (S) consisting of domain-targeted 80K sentences and 280 GB of plain text extracted from web pages.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 18 words. Output: ", "evaluation": "f1"}
{"query": "What is the accuracy of the proposed technique?", "gt": "51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge", "turbo-16k-0613_pred": "The accuracy of the proposed technique, TupleInf, was evaluated in comparison to TableILP on 4th and 8th grade science exams. TupleInf significantly outperformed TableILP, achieving an improvement of over 11% on both question sets. Even when both solvers were given the same knowledge, TupleInf still performed better due to improved selection and a simplified model. The results showed that TupleInf with only automatically extracted tuples achieved scores comparable to TableILP with curated knowledge. Additionally, TupleInf's complementary approach to information retrieval (IR) led to an improved ensemble system when combined with IR. The combination of IR and TupleInf outperformed the combination of IR and TableILP. This demonstrates the strength and effectiveness of TupleInf in complex question answering tasks.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 32 words. Output: ", "evaluation": "f1"}
{"query": "Is an entity linking process used?", "gt": "No", "turbo-16k-0613_pred": "No, an entity linking process is not used in this system.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "Are the OpenIE extractions all triples?", "gt": "No", "turbo-16k-0613_pred": "The document introduces a new question answering (QA) system called TupleInf, which performs reasoning with Open IE tuples for complex multiple-choice questions. The system aims to answer questions in resource-poor domains where curated knowledge is unavailable, such as elementary-level science exams. TupleInf is an ILP-based model that outperforms TableILP, a system that relies on curated tables, by 11.8%. TupleInf achieves this without requiring manually curated tables and using a simpler ILP formulation. The document also discusses the selection of relevant tuples, the support graph search, and presents the results of experiments comparing TupleInf with other systems.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What method was used to generate the OpenIE extractions?", "gt": "for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S, take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$", "turbo-16k-0613_pred": "The document discusses the development of a question answering (QA) system that can reason with Open IE tuples for complex multiple-choice questions. The goal is to create a system that can answer questions in resource-poor domains where curated knowledge is unavailable, such as elementary-level science exams. The document introduces a new ILP-based model called TupleInf, which significantly outperforms previous systems on a set of science questions. TupleInf uses a large-scale tuple knowledge base and applies an optimization algorithm to find the best connections between terms in the question and answer choices. The document also compares TupleInf with other QA systems and presents experimental results showing its effectiveness.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 56 words. Output: ", "evaluation": "f1"}
{"query": "Can the method answer multi-hop questions?", "gt": "Yes", "turbo-16k-0613_pred": "Yes, the method can answer multi-hop questions.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What was the textual source to which OpenIE was applied?", "gt": "domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining", "turbo-16k-0613_pred": "The textual source to which OpenIE was applied is the text corpora (S) from BIBREF6 aristo2016:combining. This text corpus consists of approximately 80,000 sentences and 280 GB of plain text that was extracted from web pages.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 18 words. Output: ", "evaluation": "f1"}
{"query": "What OpenIE method was used to generate the extractions?", "gt": "for each multiple-choice question $(q,A) \\in Q_\\mathit {tr}$ and each choice $a \\in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S, take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \\in A$ and over all questions in $Q_\\mathit {tr}$", "turbo-16k-0613_pred": "The document describes the development of a question answering (QA) system that uses Open Information Extraction (Open IE) tuples for complex multiple-choice questions. The system, called TupleInf, performs reasoning with tuples and is able to answer complex questions even in resource-poor domains where curated knowledge is unavailable. The document presents the challenges in answering complex questions and compares TupleInf with other state-of-the-art QA systems. Experimental results show that TupleInf outperforms other systems and achieves high accuracy in answering complex questions. The document also discusses future directions for improving the system, such as incorporating context and distributional measures. Overall, TupleInf is a new state-of-the-art QA system that demonstrates the effectiveness of using Open IE tuples for complex question answering.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 56 words. Output: ", "evaluation": "f1"}
{"query": "Is their method capable of multi-hop reasoning?", "gt": "Yes", "turbo-16k-0613_pred": "Yes, the method described in the document is capable of multi-hop reasoning.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "What models do they propose?", "gt": "Feature Concatenation Model (FCM), Spatial Concatenation Model (SCM), Textual Kernels Model (TKM)", "turbo-16k-0613_pred": "The authors propose three different models for hate speech detection: the Feature Concatenation Model (FCM), the Spatial Concatenation Model (SCM), and the Textual Kernels Model (TKM). These models combine textual and visual information to detect hate speech in multimodal publications.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "Are all tweets in English?", "gt": "Unanswerable", "turbo-16k-0613_pred": "Based on the given document, it is not explicitly mentioned whether all the tweets in the MMHS150K dataset are in English or not.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 1 words. Output: ", "evaluation": "f1"}
{"query": "How large is the dataset?", "gt": " $150,000$ tweets", "turbo-16k-0613_pred": "150,000 tweets", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What is the results of multimodal compared to unimodal models?", "gt": "Unimodal LSTM vs Best Multimodal (FCM)\n- F score: 0.703 vs 0.704\n- AUC: 0.732 vs 0.734 \n- Mean Accuracy: 68.3 vs 68.4 ", "turbo-16k-0613_pred": "The results of the multimodal models compared to the unimodal models show that despite the usefulness of images for hate speech detection, the multimodal models do not perform better than the textual models.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 23 words. Output: ", "evaluation": "f1"}
{"query": "What is author's opinion on why current multimodal models cannot outperform models analyzing only text?", "gt": "Noisy data, Complexity and diversity of multimodal relations, Small set of multimodal examples", "turbo-16k-0613_pred": "The author believes that the complexity and diversity of multimodal relations, combined with the limited number of multimodal examples, make it difficult for the current multimodal models to outperform models analyzing only text.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 13 words. Output: ", "evaluation": "f1"}
{"query": "What metrics are used to benchmark the results?", "gt": "F-score, Area Under the ROC Curve (AUC), mean accuracy (ACC), Precision vs Recall plot, ROC curve (which plots the True Positive Rate vs the False Positive Rate)", "turbo-16k-0613_pred": "The metrics used to benchmark the results of hate speech detection on multimodal publications are F-score, Area Under the ROC Curve (AUC), and mean accuracy (ACC). These metrics are used to evaluate the performance of different models, including unimodal models that only use text or images, as well as multimodal models that combine both text and images. The results show that the models trained with text-only achieve similar performance as the state-of-the-art method, while the models trained with images only are able to discriminate between offensive and non-offensive uses of certain words. However, the multimodal models do not outperform the textual models, indicating the challenges and complexity of learning the relationships between visual and textual elements in hate speech detection.\n\nWord count: 128", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 27 words. Output: ", "evaluation": "f1"}
{"query": "How is data collected, manual collection or Twitter API?", "gt": "Twitter API", "turbo-16k-0613_pred": "Twitter API", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "How many tweets does MMHS150K contain, 150000?", "gt": "$150,000$ tweets", "turbo-16k-0613_pred": "MMHS150K contains 150,000 tweets.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 2 words. Output: ", "evaluation": "f1"}
{"query": "What unimodal detection models were used?", "gt": " single layer LSTM with a 150-dimensional hidden state for hate / not hate classification", "turbo-16k-0613_pred": "The unimodal detection models used in this work were an LSTM model for tweet text classification and a CNN model for image classification.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 14 words. Output: ", "evaluation": "f1"}
{"query": "What different models for multimodal detection were proposed?", "gt": "Feature Concatenation Model (FCM), Spatial Concatenation Model (SCM), Textual Kernels Model (TKM)", "turbo-16k-0613_pred": "The different models proposed for multimodal detection in this work are the Feature Concatenation Model (FCM), the Spatial Concatenation Model (SCM), and the Textual Kernels Model (TKM).", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 12 words. Output: ", "evaluation": "f1"}
{"query": "What annotations are available in the dataset - tweet used hate speech or not?", "gt": "No attacks to any community,  racist, sexist, homophobic, religion based attacks, attacks to other communities", "turbo-16k-0613_pred": "The dataset contains annotations for hate speech detection in multimodal publications. Each tweet is labeled as either hate speech or not hate speech.", "prompt": "You are an AI visual assistant. Now you are given a very long document. Please follow the instruction after this document. These instructions may include summarizing a document, answering questions based on the document, or writing a required paragraph.Document is as follows. {} Instruction: {} The suggested output length is around 15 words. Output: ", "evaluation": "f1"}
