yilunzhao committed (verified)
Commit f47caac · 1 Parent(s): ae1b453

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes. See the raw diff for the full list.
Files changed (50)
  1. 20240225/2102.06448v4.json +398 -0
  2. 20240225/2107.11246v2.json +194 -0
  3. 20240225/2109.12965v3.json +649 -0
  4. 20240225/2202.07082v3.json +0 -0
  5. 20240225/2204.08381v2.json +0 -0
  6. 20240225/2204.12243v4.json +212 -0
  7. 20240225/2207.02760v5.json +0 -0
  8. 20240225/2209.01410v2.json +600 -0
  9. 20240225/2209.05946v2.json +107 -0
  10. 20240225/2210.10544v3.json +159 -0
  11. 20240225/2211.07843v2.json +0 -0
  12. 20240225/2211.11338v3.json +0 -0
  13. 20240225/2212.11920v3.json +0 -0
  14. 20240225/2301.08807v4.json +0 -0
  15. 20240225/2302.12491v3.json +0 -0
  16. 20240225/2303.05445v4.json +716 -0
  17. 20240225/2303.06440v2.json +0 -0
  18. 20240225/2303.15702v2.json +0 -0
  19. 20240225/2304.03516v2.json +0 -0
  20. 20240225/2305.02759v4.json +0 -0
  21. 20240225/2305.11854v4.json +0 -0
  22. 20240225/2305.15196v3.json +0 -0
  23. 20240225/2305.16882v2.json +66 -0
  24. 20240225/2306.02031v2.json +0 -0
  25. 20240225/2307.00014v2.json +0 -0
  26. 20240225/2307.00743v4.json +208 -0
  27. 20240225/2307.12856v4.json +0 -0
  28. 20240225/2308.06013v2.json +211 -0
  29. 20240225/2308.10385v3.json +0 -0
  30. 20240225/2309.04332v2.json +414 -0
  31. 20240225/2309.16354v2.json +0 -0
  32. 20240225/2310.00386v2.json +0 -0
  33. 20240225/2310.09017v3.json +0 -0
  34. 20240225/2310.10640v2.json +643 -0
  35. 20240225/2310.12934v3.json +0 -0
  36. 20240225/2310.14592v2.json +0 -0
  37. 20240225/2310.15213v2.json +0 -0
  38. 20240225/2310.18285v4.json +0 -0
  39. 20240225/2310.18306v3.json +443 -0
  40. 20240225/2311.01270v3.json +0 -0
  41. 20240225/2311.05462v2.json +88 -0
  42. 20240225/2311.06056v2.json +0 -0
  43. 20240225/2311.06918v3.json +121 -0
  44. 20240225/2311.07829v2.json +163 -0
  45. 20240225/2311.09114v2.json +0 -0
  46. 20240225/2311.09522v2.json +201 -0
  47. 20240225/2311.14986v2.json +0 -0
  48. 20240225/2311.15443v2.json +0 -0
  49. 20240225/2312.07424v3.json +0 -0
  50. 20240225/2312.10584v2.json +423 -0
20240225/2102.06448v4.json ADDED
@@ -0,0 +1,398 @@
1
+ {
2
+ "title": "The MSR-Video to Text Dataset with Clean Annotations",
3
+ "abstract": "Video captioning automatically generates short descriptions of the video content, usually in form of a single sentence.\nMany methods have been proposed for solving this task.\nA large dataset called MSR Video to Text (MSR-VTT) is often used as the benchmark dataset for testing the performance of the methods.\nHowever, we found that the human annotations, i.e., the descriptions of video contents in the dataset are quite noisy, e.g., there are many duplicate captions and many captions contain grammatical problems.\nThese problems may pose difficulties to video captioning models for learning underlying patterns.\nWe cleaned the MSR-VTT annotations by removing these problems, then tested several typical video captioning models on the cleaned dataset.\nExperimental results showed that data cleaning boosted the performances of the models measured by popular quantitative metrics.\nWe recruited subjects to evaluate the results of a model trained on the original and cleaned datasets.\nThe human behavior experiment demonstrated that trained on the cleaned dataset, the model generated captions that were more coherent and more relevant to the contents of the video clips.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "###figure_1### The goal of video captioning is to summarize the content of a video clip by a single sentence,\nwhich is an extension of image captioning (Cho et al., 2014 ###reference_b5###; Rennie et al., 2017 ###reference_b15###; Yu et al., 2018 ###reference_b25###; Anderson et al., 2018 ###reference_b1###).\nTo accomplish it, one must use both computer vision (CV) techniques and natural language processing (NLP) techniques.\nA benchmark dataset, called MSR-Video to Text 1.0 (MSR-VTT v1)(Xu et al., 2016b ###reference_b24###), was released in 2016.\nIt contains 10,000 video clips and each clip is described by 20 captions,\nwhich are supposed to be different, given by human annotators.\nThe dataset has become popular in the field of both video captioning and retrieval.\nUntil March 31st, 2022, that work (Xu et al., 2016b ###reference_b24###) has been cited by 793 times according to Google scholar.\nHowever, with a quick look, one can find many duplicate annotations, spelling mistakes and syntax errors in the annotations (Figs. 1 ###reference_###, 2 ###reference_###).\nIt is unknown how many mistakes there are exactly in the dataset and whether/how these mistakes would influence the performance of the video captioning models.\nWe quantitatively analyzed the annotations in the MSR-VTT dataset, and identified four main types of problems.\nFirst, thousands of annotations have duplicates for some of the video clips in the dataset.\nSecond, thousands of special characters, such as \"+\", \"-\", \".\", \"/\", \":\", exist in the annotations.\nThird, thousands of spelling mistakes exist in the annotations.\nFourth, hundreds of sentences are redundant or incomplete.\nWe developed some techniques for cleaning the annotations to solve these problems.\nOur experiments demonstrated that existing models of video captioning, trained on the cleaned training set,\nhad better performances compared to the results obtained by the models trained on the original training set.\nA human evaluation study also showed that a state-of-the-art model trained on the cleaned training set generated better captions\nthan trained on the original training set in terms of semantic relevance and sentence coherence.\nThe cleaned dataset will be made available on request.\n###figure_2###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Datasets",
21
+ "text": "Three datasets MSVD (also called YouTube2Text), MSR-VTT and VATEX, unlimited to a specific domain, are widely used in recent studies of video captioning as benchmarks and video retrieval as well.\nMSVD was published in 2013 (Guadarrama et al., 2013 ###reference_b7###). It contains 1970 video clips and roughly 80,000 captions.\nEach video clip pairs with 40 captions.\nMSR-VTT v1 was published in 2016 (Xu et al., 2016b ###reference_b24###). It contains 10,000 video clips and 200,000 captions.\nEach video clip pairs with 20 captions.\nThe MSR-VTT v2 dataset was proposed in the second Video Captioning Competition111Competition Website: http://ms-multimedia-challenge.com/2017/challenge ###reference_allenge###\nusing the MSR-VTT v1 dataset as the training and validation sets and additional 3000 video clips as the test set.\nHowever, the annotations of the test set are not open to the public.\nIn 2019, a new large-scale video description dataset, named VATEX was presented, which is multilingual, linguistically complex and diverse in terms of video and annotations. It contains over 41,000 videos, reused from Kinetics-600, with 10 English text sentences for each of them (Wang et al., 2019c ###reference_b20###)."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Recent Advances in Video Captioning",
27
+ "text": "Kinds of models or methods or algorithms have been proposed for video captioning.\nWith semantic concepts detected from the video, the probability distribution of each tag is integrated into the parameters of a recurrent unit in SCN (Gan et al., 2017 ###reference_b6###).\nVideo captioning is improved by sharing knowledge with two related tasks on the encoder and the decoder of a sequence-to-sequence model (Pasunuru and Bansal, 2017a ###reference_b12###).\nReinforced learning is enhanced for video captioning with the mixed-loss function and the CIDEr-entailment reward in CIDEnt-RL (Pasunuru and Bansal, 2017b ###reference_b13###).\nMultiple modalities are fused by hierarchical attention, which helps to improve the model performance, in HATT (Wu et al., 2018 ###reference_b22###).\nThe video feature produced by Efficient Convolutional Network is fed into a video captioning model, which boosts the quality of the generated caption, in the model named ECO (Zolfaghari et al., 2018 ###reference_b27###).\nIn the GRU-EVE, the Short Fourier Transform is applied to video features and high level semantics is derived from the object detector in order to generate captions rich in semantics (Wang et al., 2019a ###reference_b17###).\nA memory structure is used to capture the comprehensive visual information across the whole training set for a word in the MARN (Pei et al., 2019 ###reference_b14###).\nThe encoder employs a sibling (dual-branch) architecture to encode video clips in the SibNet (Liu et al., 2018 ###reference_b10###).\nHACA fuses both global and local temporal dynamics existing in a video clip and generates an accurate description with knowledge from different modalities (Wang et al., 2018 ###reference_b19###).\nDifferent expert modules are trained to provide knowledge for describing out-of-domain video clips in the TAMoE (Wang et al., 2019d ###reference_b21###).\nThe model called SAM-SS is trained under the self-teaching manner to reduce the gap between the training and the test phase with meaningful semantic features (Chen et al., 2020b ###reference_b4###).\nDifferent types of representations are encoded and fused by the cross-gating block and captions are generated with Part-of-Speech information in the POS_RL (Wang et al., 2019b ###reference_b18###).\nIn the VNS-GRU, \u201cabsolute equalitarianism\u201d in the training process is alleviated by professional learning while a comprehensive selection method is used to choose the best checkpoint for the final test (Chen et al., 2020a ###reference_b3###).\nA new paradigm, named Open-book Video Captioning (Zhang et al., 2021 ###reference_b26###), is adopted to generate natural language under the prompts of video-content-relevant sentences, unlimited to the video itself."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Analysis and Cleaning of the MSR-VTT dataset",
33
+ "text": "Since MSR-VTT v2 uses MSR-VTT v1 for training and validation, and the annotations of the test set of MSR-VTT v2 are not open to the public,\nwe performed analysis on MSR-VTT v1.\nThe MSR-VTT v1 dataset contains 10,000 video clips.\nIts training set has 6,513 video clips,\nthe validation set has 497 video clips and\nthe test set has 2,990 video clips.\nAll clips are categorized into 20 classes with diverse contents and sceneries.\nA total of 0.2 million human annotations were collected to describe those video clips. The training/validation/test sets have\n130,260/9,940/59,800 annotations, respectively.\nThe vocabulary sizes of the training/validation/test set are 23,666/5,993/16,001, respectively."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Special Characters",
39
+ "text": "There are 60 different characters in the dataset, including 0-9,\na-z and 24 special characters in Table 1 ###reference_### (space is neglected).\nGenerally speaking, those special characters are not used to train a model.\nWe are intended to remove special characters while preserve information integrity in annotations.\nWe processed those special characters as follow:\nSome special characters were removed from the sentences, include \"#\", \"*\", \"+\", \".\", \":\", \"=\", \">\", \"[\", \"]\", \"(\", \")\", \"\\\",\nwhere \"[\", \"]\", \"(\" and \")\" were removed only when they were not in pairs.\nThe contents between bracket pairs \"()\" and \"[]\" were removed.\nSpecial characters \"-\", \"|\", \"\u2018\", \"@\", \"_\", \"\u2019\", \"/\" were replaced with spaces.\nCharacters from another language were replaced by the most similar English characters.\nFor example, \"\u00e9\" was replaced by \"e\" in \"\u00e9rror\"\nand \"\u0432\" by \"b\" in \"\u0432eautiful\".\n\"&\" between two different words was substituted by \"and\".\nIn total, 7,248 out of 200,000 sentences and 4,190 out of 10,000 video clips were corrected (Table 3 ###reference_###)."
40
+ },
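The removal and substitution rules above amount to a short text-normalization pass. Below is a minimal sketch in Python; the function name `clean_special_chars` and the exact rule ordering are illustrative assumptions, not the authors' released cleaning code.

```python
import re

def clean_special_chars(sentence: str) -> str:
    """Sketch of the Section 3.1 cleaning rules for one annotation."""
    # Drop the contents of matched bracket pairs "()" and "[]".
    sentence = re.sub(r"\([^()]*\)|\[[^\[\]]*\]", " ", sentence)
    # Substitute "&" between two words with "and".
    sentence = re.sub(r"(?<=\w)\s*&\s*(?=\w)", " and ", sentence)
    # Map characters from other languages to the most similar English ones.
    for src, dst in {"\u00e9": "e", "\u0432": "b"}.items():
        sentence = sentence.replace(src, dst)
    # Replace separator-like characters with spaces.
    for ch in "-|@_/\u2018\u2019":
        sentence = sentence.replace(ch, " ")
    # Remove the remaining special characters (incl. now-unpaired brackets).
    sentence = re.sub(r"[#*+.:=>\[\]()\\]", "", sentence)
    # Collapse the whitespace introduced by the substitutions.
    return re.sub(r"\s+", " ", sentence).strip()

print(clean_special_chars("a man (on stage) plays rock&roll - live"))
# -> "a man plays rock and roll live"
```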
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Spelling Mistakes",
45
+ "text": "Massive spelling mistakes were found in the annotations during manual check.\nTokenization is a process of demarcating a string of an input sentence into a list of words.\nAfter tokenization on each of the sentences, we used a popular spelling check software Hunspell 222Available at https://hunspell.github.io ###reference_hunspell.github.io### to check spelling errors.\nBefore Hunspell was used to do spelling checks, 784 new words were added to its vocabulary.\nThese words were chosen manually by four criteria:\nword abbreviations that are popular, eg. F1, WWF, RPG;\nspecific terms that are widely used, eg. Minecraft, Spongebob, Legos;\nnew words that are popular on the Internet, eg. gameplay, spiderman, talkshow;\nnames of persons, eg. Mariah, Fallon, Avril.\nAfter that, spelling mistakes were found in 19,038 annotations out of 200,000 annotations.\n21,826 words might have spelling mistakes suggested by Hunspell.\nThose candidates were corrected in the following steps:\nSubstituted British English spellings with the corresponding American English spellings. For instance, colour color, travelling traveling, programme program, practising practicing, theatre theater. There were 61 such pairs.\nSplit unusual words that were created by concatenating two different words,\ne.g. rockclimbing rock climbing, blowdrying blow drying,\nswordfighting sword fighting, screencaster screen caster, rollercoaster roller coaster.\nIn total, 34 distinct words were found.\nCorrected words that truly contain spelling mistakes, e.g., discusing discussing, explaning explaining, coversation conversation, vedio video, diffrent different.\nIn total 35,668 words were substituted, split or corrected in these three steps for 27,954 sentences in 7,829 video clips as shown in Table 3 ###reference_###."
46
+ },
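A check like the one described here can be reproduced with the pyhunspell bindings (assumed here; any Hunspell wrapper exposing spell/suggest/add works, and the dictionary paths below are typical Linux locations, not values from the paper):

```python
import hunspell  # pyhunspell bindings

hs = hunspell.HunSpell("/usr/share/hunspell/en_US.dic",
                       "/usr/share/hunspell/en_US.aff")

# Extend the vocabulary first, as Section 3.2 did with 784 accepted words
# (popular abbreviations, common terms, internet words, person names).
for word in ["F1", "Minecraft", "gameplay", "Mariah"]:
    hs.add(word)

def flag_misspellings(sentence):
    """Return (token, suggestions) for every token Hunspell rejects."""
    return [(tok, hs.suggest(tok))
            for tok in sentence.split()
            if not hs.spell(tok)]

print(flag_misspellings("two people having a coversation about a vedio"))
# e.g. [('coversation', ['conversation', ...]), ('vedio', ['video', ...])]
```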
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Duplicate Annotations",
51
+ "text": "Duplicate sentences were discovered in many annotations of video clips (Fig. 1 ###reference_###).\nFor each video clip, duplicates were removed.\nThe similarity between two sentences was defined as follow\nwhere denotes the word count in the sentence \nand denotes the word count of the longest common subsequence in and .\n is defined as follows,\nwhere stands for that is a subsequence of .\nWord and word were regarded as the same if the Levenshtein distance (Levenshtein, 1966 ###reference_b8###) between them was less than or equal to .\nTwo sentences were regarded as duplicated if , where is the similarity threshold.\nWith proper values of and , we could find duplicated sentences that had little difference.\nFor example, considering the second pair of sentences in Table 2 ###reference_###,\nthe character \u201cm\u201d is missing in the word \u201cwoan\u201d and the second sentence just has one more word \u201cyoung\u201d than the first sentence.\nThese two sentences are almost the same in terms of meaning.\n###table_1### After duplicate removal, 183,856 video annotations remained in the dataset with 119,625 in the training set, 9,126 in the validation set and 55,105 in the test set with the hyper-parameters ,\ntuned in Section 4 ###reference_###.\nIn another word, 17,733 sentences were removed in 7,129 video clips (Table 3 ###reference_###).\nEach clip has 9 annotations at least, 20 at most and 18.4 on average.\n###table_2### ###figure_3### ###figure_4###"
52
+ },
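The duplicate criterion is straightforward to implement. In the sketch below, note that the normalization of sim was garbled in extraction, so the Dice-style form 2*l/(c(s1)+c(s2)) is an assumption; it approximately reproduces the Table 2 values (e.g., 0.80 for the second pair at d = 0).

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two words (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def lcs_words(s1, s2, d=0):
    """Word count of the longest common subsequence, where two words match
    if their Levenshtein distance is at most d."""
    dp = [[0] * (len(s2) + 1) for _ in range(len(s1) + 1)]
    for i, w1 in enumerate(s1, 1):
        for j, w2 in enumerate(s2, 1):
            if levenshtein(w1, w2) <= d:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(a: str, b: str, d=0) -> float:
    s1, s2 = a.split(), b.split()
    return 2 * lcs_words(s1, s2, d) / (len(s1) + len(s2))

a = "a man is talking to a woan"
b = "a young man is talking to a woman"
print(similarity(a, b, d=0))  # 0.8; >= theta (0.85) would mark a duplicate
```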
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "Successive Sentences without Punctuations",
57
+ "text": "In the task of video captioning, we expect each annotation contains one sentence.\nFor many annotations in the dataset, each of them consists of multiple sentences.\nIn Fig. 3 ###reference_###, the first annotation can be split into three complete sentences:\n\"A women in a dress talks about data scientist.\",\n\"She tells how they are problem solvers and well educated.\",\n\"She starts asking how you can stand out among other data scientist.\"\nIt causes two potential problems.\nFirst, the models trained on such annotations may output grammatically problematic sentences because these annotations are syntactically incorrect.\nSecond, such annotations in the test set are no longer reliable ground truth so that the metrics, computed with them, are not reliable, neither.\n###table_3### However, many annotations in the dataset consist of multiple sentences.\nTo solve it, one needs to manually separate the annotations into several complete sentences and merge them into a single sentence.\nFor the sake of efficiency, the annotations, which consist of several sentences, were only divided and merged for the test set.\nFor the training and validation sets, the sentences longer than , where and denote the average sentence length and its standard deviation, were truncated respectively.\nAfter the process on it, 5,543 sentences were corrected in 2,758 video clips (Table 3 ###reference_###).\nTable 4 ###reference_### contains some random samples from original captions and cleaned ones.\nAs shown in it, some most obvious mistakes are corrected and some redundant words are deleted.\n###table_4###"
58
+ },
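For the training and validation sets, the truncation is a simple pass over caption lengths. A minimal sketch follows, with the multiplier k left as a parameter because the exact multiple of the standard deviation was lost in extraction:

```python
import statistics

def truncate_long_captions(captions, k=3):
    """Truncate captions longer than mean + k * std words.
    (Test-set annotations were instead split and merged by hand.)"""
    lengths = [len(c.split()) for c in captions]
    limit = int(statistics.mean(lengths) + k * statistics.pstdev(lengths))
    return [" ".join(c.split()[:limit]) for c in captions]

caps = ["a man sings a song"] * 50 + [
    "a woman in a dress talks about data scientists and tells how they are "
    "problem solvers and well educated and asks how you can stand out"
]
print(truncate_long_captions(caps)[-1])  # long caption cut at the limit
```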
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experiments",
63
+ "text": "Experiments were conducted on the original and cleaned MSR-VTT datasets with several existing video captioning models,\nSCN (Gan et al., 2017 ###reference_b6###), ECO (Zolfaghari et al., 2018 ###reference_b27###), SAM-SS (Chen et al., 2020b ###reference_b4###) and VNS-GRU (Chen et al., 2020a ###reference_b3###).\nThey were trained for 30, 30, 50, 80 epochs, respectively.\nThey were evaluated on the validation set at the end of each epoch.\nThe first two models used the early stopping strategy with cross-entropy loss as the indicator.\nThe last two models used the Comprehensive Selection Method to select a checkpoint for testing (Chen et al., 2020a ###reference_b3###).\nFor the sake of fair comparison, the experiment settings were the same as the original papers.\nThe two hyper-parameters and (see section 3.3 ###reference_###) were set to 0 and 0.85 in our experiments, unless otherwise stated."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Evaluation Metrics",
69
+ "text": "BLEU, CIDEr, METEOR and ROUGE-L were adopted as objective metrics for evaluating the results of the models.\nBLEU is a quick and easy-to-calculate metric, originally used for evaluating the performance of machine translation models (Papineni et al., 2002 ###reference_b11###).\nCIDEr is a metric that captures human consensus (Vedantam et al., 2015 ###reference_b16###).\nMETEOR is a metric that involves precision, recall and order correlation, based on unigram matches (Banerjee and Lavie, 2005 ###reference_b2###).\nROUGE-L is a metric that determines the quality of a summary by finding the longest common subsequence (Lin, 2004 ###reference_b9###).\nBesides these individual metrics, an overall score is presented to combine all of these metrics (Chen et al., 2020b ###reference_b4###):\nwhere the subscript denotes the model and the subscript denotes the best score of the metric over a group of models for comparison.\nB4, C, M, R and O denote BLEU-4, CIDEr, METEOR, ROUGE-L and the overall score (3 ###reference_###), respectively."
70
+ },
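The overall score is just the mean of the four metrics after normalizing each by the best value in the comparison group. The sketch below applies it to the three VNS-GRU rows of Table 5 and reproduces that table's O column, which is how the formula above was reconstructed:

```python
METRICS = ("B4", "C", "M", "R")

def overall_score(model: dict, best: dict) -> float:
    """Mean of best-normalized BLEU-4, CIDEr, METEOR and ROUGE-L."""
    return sum(model[m] / best[m] for m in METRICS) / len(METRICS)

# The three VNS-GRU rows of Table 5 (values in percent).
rows = [
    {"B4": 47.6, "C": 52.6, "M": 30.4, "R": 64.1},
    {"B4": 47.2, "C": 52.2, "M": 30.2, "R": 64.1},
    {"B4": 47.2, "C": 52.4, "M": 30.5, "R": 64.2},
]
best = {m: max(r[m] for r in rows) for m in METRICS}
print([round(overall_score(r, best), 4) for r in rows])
# -> [0.9988, 0.9931, 0.9969], matching the O column of Table 5
```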
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Influence of Edit Distance Threshold and Similarity Threshold on Duplicates Removal",
75
+ "text": "In the step of removing duplicated annotations, there are two hyper-parameters: the edit distance threshold and similarity threshold .\nThe sensitivity of the hyper-parameters were investigated on the output of this step.\nAs shown in Table 5 ###reference_###, the threshold of edit distance was inversely proportionate to the remained sentence count.\nThe performance of the model VNS-GRU was the best when .\nAs shown in Table 6 ###reference_###, the threshold of similarity was proportionate to the remained sentence count.\nThe performance of the model VNS-GRU was the best when .\nTable 2 ###reference_### shows that with the method described in the Section 3.3 ###reference_###, we can find similar sentences, in terms of semantics, with one or two words different."
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Comparison between the Original/Cleaned MSR-VTT Datasets",
81
+ "text": "In Table 7 ###reference_###, a model name without any superscript indicates that\nthe model was trained on the original training set and the metrics were calculated on the original test set.\nWe had four observations.\nFirst, the models trained on the cleaned training set achieved higher scores of metrics than the models trained on the original training set, even though the metrics were calculated on the original test set.\nFor instance, VNS-GRU (Chen et al., 2020a ###reference_b3###) improves over VNS-GRU by 1.6% on BLEU-4, by 2.1% on CIDEr, by 0.9% on METEOR and by 1.1% on ROUGE-L.\nSecond, the models trained on the cleaned training set and tested on the cleaned test set achieved higher scores of metrics than the models trained on the original training set and tested on the original test set.\nFor instance, VNS-GRU (Chen et al., 2020a ###reference_b3###) improves over VNS-GRU by 1.3% on BLEU-4, by 0.7% on METEOR and by 0.9% on ROUGE-L.\nThird, the scores of VNS-GRU were slightly lower than the scores of VNS-GRU.\nWe attributed this to the increase of annotation diversity in the cleaned test set.\n###figure_5### Fourth, the improvement in overall score (3 ###reference_###) is comparable to or larger than the SOTA methods in recently years.\nThe overall score of the model trained on the cleaned dataset is higher than the one trained on the original dataset by nearly 3.0%, 1.8%, 2.4%, 4.0% for VNS-GRU, SAM-SS, ECO, and SCN, respectively.\nFor comparison, a new method presented in (Wang et al., 2018 ###reference_b19###) called Hierarchically aligned cross-modal attention (HACA) framework improves the overall score of the previous state-of-the-art model CIDEnt-RL by 1.8% (the overall score is calculated according to (3 ###reference_###)) based on Table 1 in (Wang et al., 2018 ###reference_b19###).\nA new method Retrieve-Copy-Generate network presented in (Zhang et al., 2021 ###reference_b26###) improves the overall score of the previous state-of-the-art model ORG-TRL by 0.75% according to Table 6 in (Zhang et al., 2021 ###reference_b26###).\nThe scores of BLEU-4, CIDEr, METEOR and ROUGE-L of popular video captioning models, proposed in recent years, are plotted in Fig. 4 ###reference_###.\nOne of the earliest models on the MSR-VTT dataset,\nVideoLAB, from ACM Multimedia MSR-VTT Challenge 2016 (Xu et al., 2016a ###reference_b23###), was used as the baseline,\nand all other models were compared with it.\nThen the relative changes of other models in percentage can be inferred on the right vertical axes in Fig. 4 ###reference_###.\nBy training on the cleaned training set, one of the state-of-the-art models, VNS-GRU was improved from 15.9% to 19.9% on BLEU-4, from 20.2% to 24.7% on CIDEr, from 7.9% to 10.8% on METEOR,\nfrom 4.6% to 6.1% on ROUGE-L, compared with the results obtained by the same model trained on the original training set.\nFrom the figure, it is seen that the relative improvements brought by annotation cleaning were non-negligible."
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "Ablation Study",
87
+ "text": "To analyze the utility of each step in data cleaning,\nwe compared the performances of the model VNS-GRU (Chen et al., 2020a ###reference_b3###) on the original and cleaned test sets in Tables 8 ###reference_### and 9 ###reference_###,\ntrained on the training set cleaned by Step I (Section 3.1 ###reference_###),\nStep II (Section 3.2 ###reference_###), Step III (Section 3.3 ###reference_###), Step IV (Section 3.4 ###reference_###), accumulatively.\n###table_5### As shown in Tables 8 ###reference_### and 9 ###reference_###, Step I brought improvements in all the metrics since it reduced the number of irregular words and phrases, which contain special characters.\nAfter Step II, the four metrics remained similar to those after Step I when measured on the original test set (Table 8 ###reference_###),\nbut the metrics were improved when measured on the cleaned test set (Table 9 ###reference_###).\nAfter Step III, all metrics except METEOR increased in the both cases.\nThe METEOR value slightly decreased when measured on the cleaned test set (Table 9 ###reference_###).\nAfter the last step, almost all metrics were further improved, except BLEU-4.\nIf we focus on the performance of the model measured on the cleaned test set (Table 9 ###reference_###), we found that the overall score was improved after each step.\nThese results suggest that all steps are necessary for cleaning the annotations.\n###table_6###"
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Human Evaluation",
93
+ "text": "###figure_6### It is well-known that the metrics including BLEU-4 (Papineni et al., 2002 ###reference_b11###), CIDEr (Vedantam et al., 2015 ###reference_b16###), METEOR (Banerjee and Lavie, 2005 ###reference_b2###), ROUGE-L (Lin, 2004 ###reference_b9###) do not fully reflect the quality of the video captioning results.\nWe then conducted a human evaluation study.\nWe recruited 17 people (11 male and 6 female, ages between 20 and 35) with normal or corrected-to-normal vision to do this experiment.\nThe subjects were mainly from Tsinghua University, Beijing, China.\nAll subjects had at least college level English.\nThis study was approved by the Department of Psychology Ethics Committee, Tsinghua University, Beijing, China.\nThe subjects watched video clips from the MSR-VTT dataset and compared the results of VNS-GRU trained on the original and cleaned annotations of the dataset (Figure 5 ###reference_###).\nThe subjects were instructed to compare the results based on two criteria:\nrelevance, the match between the contents of the video clip and the caption;\ncoherence, the language fluency and grammatical correctness in the caption.\nFor each video clip, there were three options: (A) Caption A is better; (B) Caption B is better; and (C) Indistinguishable. The two captions were generated by VNS-GRU or VNS-GRU*, which were trained on the original and cleaned annotations of the dataset, respectively. The subjects needed to choose one and only one of three options. A total of 30 video clips were randomly sampled from the test set and presented to all subjects in an fixed order. Every subject completed the experiment within half an hour.\n###figure_7### We noted down the number of votes for VNS-GRU, VNS-GRU* and Indistinguishable for every subject and calculated the average over all subjects (Figure 6 ###reference_###).\nOn average, for 11.8 video clips the subjects voted for \u201cVNS-GRU* is better\u201d and for 10.1 video clips the subjects voted for \u201cVNS-GRU is better\u201d.\nThe one-sided student t-test indicated that VNS-GRU* performed better than VNS-GRU (). On average, for 8.1 videos the subjects could not distinguish the quality of the results.\nThese results suggested that annotation cleaning could boost the quality of the generated captions by video captioning models from subjective evaluation of human.\nNote that the difference in human evaluation between the original dataset and cleaned dataset is significant, but not very large.\nIt might be due to the fact that many of the human subjects are not native English speakers and they might have relatively insufficient ability to judge the difference in quality of the generated sentences."
94
+ },
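The vote comparison boils down to a one-sided paired t-test over per-subject counts. A sketch with SciPy follows; the vote arrays are synthetic placeholders with roughly the reported means (11.8 vs. 10.1), not the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder per-subject vote counts for the 17 subjects (NOT the study data):
# how many of the 30 clips each subject judged better for VNS-GRU* vs VNS-GRU.
votes_cleaned = np.array([12, 13, 11, 12, 10, 14, 12, 11, 13,
                          12, 11, 12, 13, 10, 12, 11, 12])
votes_original = np.array([10, 10, 11, 9, 11, 9, 10, 10, 9,
                           11, 10, 9, 10, 11, 9, 10, 10])

# One-sided paired t-test: H1 says VNS-GRU* gets more votes per subject.
t, p = stats.ttest_rel(votes_cleaned, votes_original, alternative="greater")
print(f"t = {t:.2f}, one-sided p = {p:.4f}")
```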
95
+ {
96
+ "section_id": "6",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "The MSR-VTT dataset is a widely used dataset in the areas of video captioning and video retrieval.\nThousands of problems were found in its annotations, and many of them were obvious mistakes.\nWe inspected the influence of these problems on the results of video captioning models.\nBy four steps of data cleaning, we removed or corrected sentences to resolve these problems, and compared the results of several popular video captioning models.\nThe models trained on the cleaned dataset generated better captions than the models trained on the original dataset measured by both objective metrics and subjective evaluations.\nIn particular, trained on the cleaned dataset, VNS-GRU achieved better results with improvement of at least 0.9% compared to the baseline.\nThis cleaned dataset is recommended for developing new video captioning models in the future.\nAnd the proposed method can also be applied to other datasets, including NLP-only datasets, to help model training."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Special characters in the MSR-VTT dataset</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_minipage ltx_align_middle\" id=\"S3.T1.1.p1.1\" style=\"width:346.9pt;\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.p1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.p1.1.2.1\">#</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.p1.1.2.2\">$</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.p1.1.2.3\">%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.p1.1.2.4\">&amp;</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.p1.1.2.5\">(</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.1.p1.1.2.6\">)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.p1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.1.2\">*</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.1.3\">+</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.1.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.1.5\">.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.1.6\">:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.p1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.3.1\">=</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.3.2\">&gt;</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.3.3\">@</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.3.4\">[</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.3.5\">\\</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.p1.1.3.6\">]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.p1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.p1.1.4.1\">/</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.p1.1.4.2\">\u2018</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.p1.1.4.3\">|</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.p1.1.4.4\">\u00e9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.p1.1.4.5\"><span class=\"ltx_text\" id=\"S3.T1.1.p1.1.4.5.1\" lang=\"ru\">\u0432</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.1.p1.1.4.6\">\u2019</td>\n</tr>\n</table>\n</figure>",
106
+ "capture": "Table 1: Special characters in the MSR-VTT dataset"
107
+ },
108
+ "2": {
109
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Examples of duplicates and its similarity value</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.1\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T2.1.1.1\">Sentences</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T2.1.1.2\">Similarity<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex1\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span> Those values are calculated by each pair of sentences with . </span></span></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.1\">a woman is walking down the aisle in a wedding</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.2.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.2.2.1\">0.86, 0.96, 0.96</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.3.1\">a woman is walking down the isle in a wedding dress</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.4.1\">a man is talking to a woan</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.4.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.4.2.1\">0.80, 0.94, 0.94</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.1.5.1\">a young man is talking to a woman</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.1.6.1\">a woman is singing on a music video</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T2.1.6.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.6.2.1\">0.83, 0.94, 0.94</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.1.7.1\">a young woman is singing in a music video</td>\n</tr>\n</table>\n<span class=\"ltx_note ltx_centering ltx_role_footnotetext\" id=\"footnotex2\"><sup class=\"ltx_note_mark\">0</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">0</sup><span class=\"ltx_note_type\">footnotetext: </span>Note: Each pair of sentences describe the same video clip.</span></span></span>\n</figure>",
110
+ "capture": "Table 2: Examples of duplicates and its similarity value"
111
+ },
112
+ "3": {
113
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Impact of each step in terms of sentences and videos</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T3.2\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T3.2.2.3\">Step</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.1.1.1\">\n<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex4\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span> The number of sentence that are corrected in a step</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.2.2\">\n<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex5\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span> The number of video that are corrected in a step</span></span></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.2.3.1\">Special Characters (Section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#S3.SS1\" title=\"3.1 Special Characters \u2023 3 Analysis and Cleaning of the MSR-VTT dataset \u2023 The MSR-Video to Text Dataset with Clean Annotations\"><span class=\"ltx_text ltx_ref_tag\">3.1</span></a>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.3.2\">7,248</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.3.3\">4,190</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.2.4.1\">Spelling Mistakes (Section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#S3.SS2\" title=\"3.2 Spelling Mistakes \u2023 3 Analysis and Cleaning of the MSR-VTT dataset \u2023 The MSR-Video to Text Dataset with Clean Annotations\"><span class=\"ltx_text ltx_ref_tag\">3.2</span></a>)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.4.2\">27,954</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.4.3\">7,829</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.2.5.1\">Duplicate Annotations (Section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#S3.SS3\" title=\"3.3 Duplicate Annotations \u2023 3 Analysis and Cleaning of the MSR-VTT dataset \u2023 The MSR-Video to Text Dataset with Clean Annotations\"><span class=\"ltx_text ltx_ref_tag\">3.3</span></a>)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.5.2\">17,733</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.5.3\">7,129</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T3.2.6.1\">Successive Sentences (Section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#S3.SS4\" title=\"3.4 Successive Sentences without Punctuations \u2023 3 Analysis and Cleaning of the MSR-VTT dataset \u2023 The MSR-Video to Text Dataset with Clean Annotations\"><span class=\"ltx_text ltx_ref_tag\">3.4</span></a>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.6.2\">5,543</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.2.6.3\">2,758</td>\n</tr>\n</table>\n</figure>",
114
+ "capture": "Table 3: Impact of each step in terms of sentences and videos"
115
+ },
116
+ "4": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span> Comparison of random samples from original captions and cleaned captions </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T4.1\">\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.1.1.1\">Sen Id</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T4.1.1.2\">Sentence</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.2.1\">51307</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.1.2.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.2.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.2.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.2.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.2.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.2.2.2.1.1.1\">Animated hedgehog <span class=\"ltx_text\" id=\"S3.T4.1.2.2.2.1.1.1.1\" style=\"color:#FF0000;\">complainging</span> about being bored and a flying bug introduces sonic and the secret rings</span></span>\n<span class=\"ltx_tr\" id=\"S3.T4.1.2.2.2.1.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.2.2.2.1.2.1\">extreme party <span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.2.2.2.1.2.1.1\">games</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.2.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.3.1\">cleaned</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.3.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.3.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.3.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.3.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.3.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.3.2.2.1.1.1\">Animated hedgehog <span class=\"ltx_text\" id=\"S3.T4.1.3.2.2.1.1.1.1\" style=\"color:#0000FF;\">complaining</span> about being bored and a flying bug introduces sonic and the secret rings</span></span>\n<span class=\"ltx_tr\" id=\"S3.T4.1.3.2.2.1.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.3.2.2.1.2.1\">extreme party</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.3.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.4.1\">83933</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.1.4.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.4.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.4.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.4.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.4.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.4.2.2.1.1.1\">A man s hands are holding a <span class=\"ltx_text\" id=\"S3.T4.1.4.2.2.1.1.1.1\" style=\"color:#FF0000;\">red/orange</span> screwdriver and he shows u how to lock and <span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.4.2.2.1.1.1.2\">unlock a deadbolted door</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T4.1.4.2.2.1.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.4.2.2.1.2.1\"><span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.4.2.2.1.2.1.1\">with a key and a screwdriver while explaining his actions</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.4.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S3.T4.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.5.1\">cleaned</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.5.2\">a man s hands are holding a red orange screwdriver and he shows u how to lock and</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.6.1\">188904</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.1.6.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.6.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.6.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.6.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.6.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.6.2.2.1.1.1\">An <span class=\"ltx_text\" id=\"S3.T4.1.6.2.2.1.1.1.1\" style=\"color:#FF0000;\">advertisment</span> to subscribe to <span class=\"ltx_text\" id=\"S3.T4.1.6.2.2.1.1.1.2\" style=\"color:#FF0000;\">weelious</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.6.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.7.1\">cleaned</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.7.2\">An <span class=\"ltx_text\" id=\"S3.T4.1.7.2.1\" style=\"color:#0000FF;\">advertisement</span> to subscribe to <span class=\"ltx_text\" id=\"S3.T4.1.7.2.2\" style=\"color:#0000FF;\">rebellious</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.8.1\">57346</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.1.8.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.8.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.8.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.8.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.8.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.8.2.2.1.1.1\">A man is touching and talking about brake cables <span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.8.2.2.1.1.1.1\">(and ziptying them/adding a pad)</span> the clutch and a handle for</span></span>\n<span class=\"ltx_tr\" id=\"S3.T4.1.8.2.2.1.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.8.2.2.1.2.1\">what seems to <span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.8.2.2.1.2.1.1\">be a motorcycle</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.8.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.9.1\">cleaned</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.9.2\">A man is touching and talking about brake cables the clutch and a handle for what seems to</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.10.1\">130327</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.1.10.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.10.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.10.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.10.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.10.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.10.2.2.1.1.1\">In a scene from a <span class=\"ltx_text\" id=\"S3.T4.1.10.2.2.1.1.1.1\" style=\"color:#FF0000;\">spanish-speaking</span> film a man breaks through a wooden door and confronts several <span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.10.2.2.1.1.1.2\">other men inside</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.10.2.3\"></span>\n</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S3.T4.1.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.11.1\">cleaned</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.11.2\">In a scene from a <span class=\"ltx_text\" id=\"S3.T4.1.11.2.1\" style=\"color:#0000FF;\">spanish speaking</span> film a man breaks through a wooden door and confronts several</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.12.1\">132787</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.1.12.2\">\n<span class=\"ltx_text\" id=\"S3.T4.1.12.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.1.12.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.1.12.2.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.1.12.2.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T4.1.12.2.2.1.1.1\">The girl is walked their <span class=\"ltx_text\" id=\"S3.T4.1.12.2.2.1.1.1.1\" style=\"color:#FF0000;\">warand</span> and she is giving flying <span class=\"ltx_text\" id=\"S3.T4.1.12.2.2.1.1.1.2\" style=\"color:#FF0000;\">kissshe</span> is <span class=\"ltx_text\" id=\"S3.T4.1.12.2.2.1.1.1.3\" style=\"color:#FF0000;\">weae</span> the pink <span class=\"ltx_text ltx_ulem_sout\" id=\"S3.T4.1.12.2.2.1.1.1.4\">topnear the green grass land</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T4.1.12.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.1.13.1\">cleaned</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T4.1.13.2\">The girl is walked their <span class=\"ltx_text\" id=\"S3.T4.1.13.2.1\" style=\"color:#0000FF;\">war and</span> and she is giving flying <span class=\"ltx_text\" id=\"S3.T4.1.13.2.2\" style=\"color:#0000FF;\">kiss she</span> is <span class=\"ltx_text\" id=\"S3.T4.1.13.2.3\" style=\"color:#0000FF;\">wear</span> the pink</td>\n</tr>\n</table>\n</figure>",
118
+ "capture": "Table 4: Comparison of random samples from original captions and cleaned captions "
119
+ },
120
+ "5": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Influence of Edit Distance Threshold on the remaining annotation count and the performance of the model VNS-GRU</figcaption><span class=\"ltx_inline-para ltx_minipage ltx_align_center ltx_align_middle\" id=\"S4.T5.3\" style=\"width:433.6pt;\">\n<span class=\"ltx_para\" id=\"S4.T5.3.p1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T5.3.p1.4\">\n<span class=\"ltx_tr\" id=\"S4.T5.3.p1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.2\">SC <span class=\"ltx_note ltx_role_footnote\" id=\"footnotex7\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span> <span class=\"ltx_text\" id=\"footnotex7.1\" style=\"color:#FF0000;\">Red</span> color denotes an error, <span class=\"ltx_text\" id=\"footnotex7.2\" style=\"color:#0000FF;\">blue</span> color denotes modifications.</span></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.3\">B4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.4\">C</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.5\">M</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.6\">R</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.3.p1.1.1.7\">O</span></span>\n<span class=\"ltx_tr\" id=\"S4.T5.3.p1.2.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.2\">184,078</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.3\">47.6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.4\">52.6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.5\">30.4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.6\">64.1</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.3.p1.2.2.7\">0.9988</span></span>\n<span class=\"ltx_tr\" id=\"S4.T5.3.p1.3.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.1\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.2\">183,856</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.3\">47.2</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.4\">52.2</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.5\">30.2</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.6\">64.1</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.p1.3.3.7\">0.9931</span></span>\n<span class=\"ltx_tr\" id=\"S4.T5.3.p1.4.4\">\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.2\">183,545</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.3\">47.2</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.4\">52.4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.5\">30.5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.6\">64.2</span>\n<span 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.3.p1.4.4.7\">0.9969</span></span>\n</span><span class=\"ltx_note ltx_role_footnotetext\" id=\"footnotex8\"><sup class=\"ltx_note_mark\">0</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">0</sup><span class=\"ltx_note_type\">footnotetext: </span>Note: . All metric values are presented in percentage.</span></span></span>\n</span></span>\n</figure>",
122
+ "capture": "Table 5: Influence of Edit Distance Threshold on the remaining annotation count and the performance of the model VNS-GRU"
123
+ },
124
+ "6": {
125
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Influence of Similarity Threshold on the remaining annotation count and the performance of the model VNS-GRU</figcaption><span class=\"ltx_inline-para ltx_minipage ltx_align_center ltx_align_middle\" id=\"S4.T6.3\" style=\"width:433.6pt;\">\n<span class=\"ltx_para\" id=\"S4.T6.3.p1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T6.3.p1.7\">\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.2\">SC<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex10\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span>SC represents the number of remaining sentences in the dataset.</span></span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.3\">B4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.4\">C</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.5\">M</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.6\">R</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.3.p1.1.1.7\">O</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.2.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.2\">175,539</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.3\">46.8</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.4\">53.6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.5\">30.4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.6\">64.0</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.p1.2.2.7\">0.9850</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.3.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.1\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.2\">179,169</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.3\">47.6</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.4\">54.2</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.5\">30.4</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.6\">64.3</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.3.3.7\">0.9933</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.4.4\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.1\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.2\">182,264</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.3\">47.4</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.4\">55.0</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.5\">30.7</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.6\">64.2</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.4.4.7\">0.9982</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.5.5\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.5.5.1\"></span>\n<span class=\"ltx_td ltx_align_center\" 
id=\"S4.T6.3.p1.5.5.2\">183,705</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.5.5.3\">47.4</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.5.5.4\">53.7</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.5.5.5\">30.4</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.5.5.6\">64.0</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.5.5.7\">0.9890</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.6.6\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.1\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.2\">185,219</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.3\">47.5</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.4\">55.0</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.5\">30.5</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.6\">64.4</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T6.3.p1.6.6.7\">0.9978</span></span>\n<span class=\"ltx_tr\" id=\"S4.T6.3.p1.7.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.1\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.2\">185,330</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.3\">47.6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.4\">53.4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.5\">30.2</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.6\">63.9</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.3.p1.7.7.7\">0.9867</span></span>\n</span>\n</span></span>\n</figure>",
126
+ "capture": "Table 6: Influence of Similarity Threshold on the remaining annotation count and the performance of the model VNS-GRU"
127
+ },
128
+ "7": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T7\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span> Results on the original/cleaned MSR-VTT dataset </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_minipage ltx_align_middle\" id=\"S4.T7.2\" style=\"width:390.3pt;\">\n<tr class=\"ltx_tr\" id=\"S4.T7.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T7.2.1.1\">Model</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T7.2.1.2\">B4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T7.2.1.3\">C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T7.2.1.4\">M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T7.2.1.5\">R</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T7.2.1.6\">O</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T7.2.2.1\">SCN <cite class=\"ltx_cite ltx_citemacro_citep\">(Gan et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#bib.bib6\" title=\"\">2017</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.2.2\">42.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.2.3\">48.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.2.4\">28.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.2.5\">61.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.2.6\">0.9152</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.3.1\">SCN<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex13\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span>SC represents the number of remaining sentences in the dataset.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.3.2\">44.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.3.3\">51.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.3.4\">29.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.3.5\">63.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.3.6\">0.9550</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.4.1\">SCN<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex14\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>Note: .</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.4.2\">44.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.4.3\">50.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.4.4\">29.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.4.5\">63.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.4.6\">0.9506</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T7.2.5.1\">ECO <cite class=\"ltx_cite ltx_citemacro_citep\">(Zolfaghari et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#bib.bib27\" title=\"\">2018</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.5.2\">43.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.5.3\">49.8</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T7.2.5.4\">28.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.5.5\">62.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.5.6\">0.9304</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.6.1\">ECO<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex15\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span>The model was trained on the cleaned training set and the metrics were calculated on the original test set.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.6.2\">44.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.6.3\">51.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.6.4\">29.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.6.5\">63.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.6.6\">0.9548</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.7.1\">ECO<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex16\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>The model was trained on the cleaned training set and the metrics were calculated on the cleaned test set.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.7.2\">44.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.7.3\">50.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.7.4\">29.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.7.5\">63.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.7.6\">0.9516</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T7.2.8.1\">SAM-SS <cite class=\"ltx_cite ltx_citemacro_citep\">(Chen et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#bib.bib4\" title=\"\">2020b</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.8.2\">43.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.8.3\">51.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.8.4\">28.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.8.5\">62.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.8.6\">0.9431</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.9.1\">SAM-SS<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex17\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span>The model was trained on the cleaned training set and the metrics were calculated on the original test set.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.9.2\">44.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.9.3\">52.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.9.4\">29.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.9.5\">63.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.9.6\">0.9610</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.10.1\">SAM-SS<span class=\"ltx_note 
ltx_role_footnote\" id=\"footnotex18\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>The model was trained on the cleaned training set and the metrics were calculated on the cleaned test set.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.10.2\">45.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.10.3\">51.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.10.4\">29.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.10.5\">63.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.10.6\">0.9561</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T7.2.11.1\">VNS-GRU <cite class=\"ltx_cite ltx_citemacro_citep\">(Chen et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2102.06448v4#bib.bib3\" title=\"\">2020a</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.11.2\">45.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.11.3\">53.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.11.4\">29.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.11.5\">63.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.2.11.6\">0.9704</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T7.2.12.1\">VNS-GRU<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex19\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span>The model was trained on the cleaned training set and the metrics were calculated on the original test set.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.2.12.2.1\">46.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.2.12.3.1\">55.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.2.12.4.1\">30.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.2.12.5.1\">64.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.2.12.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.2.12.6.1\">1.0000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.2.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T7.2.13.1\">VNS-GRU<span class=\"ltx_note ltx_role_footnote\" id=\"footnotex20\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>The model was trained on the cleaned training set and the metrics were calculated on the cleaned test set.</span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.2.13.2\">46.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.2.13.3\">52.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.2.13.4\">30.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.2.13.5\">64.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.2.13.6\">0.9828</td>\n</tr>\n</table>\n</figure>",
130
+ "capture": "Table 7: Results on the original/cleaned MSR-VTT dataset "
131
+ },
132
+ "8": {
133
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T8\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>Results on the origin test set </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T8.20\">\n<tr class=\"ltx_tr\" id=\"S4.T8.20.21\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.1\">I <span class=\"ltx_note ltx_role_footnote\" id=\"footnotex23\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span> The model was trained on the training set with data cleaning steps I, II, III and IV taken one by one. </span></span></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.2\">II</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.3\">III</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.4\">IV</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.5\">B4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.6\">C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.7\">M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.8\">R</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T8.20.21.9\">O</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.4.5\">45.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.4.6\">53.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.4.7\">29.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.4.8\">63.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.4.9\">0.9678</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.8.8.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.8.8.5\">47.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.8.8.6\">54.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.8.8.7\">30.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.8.8.8\">64.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.8.8.9\">0.9901</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.12.12.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.12.12.5\">47.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.12.12.6\">53.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.12.12.7\">30.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.12.12.8\">64.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.12.12.9\">0.9876</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T8.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.16.16.5\">47.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.16.16.6\">55.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.16.16.7\">30.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.16.16.8\">64.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.16.16.9\">0.9975</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.18.18.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.19.19.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.20.20.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.20.20.5\">46.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.20.20.6\">55.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.20.20.7\">30.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.20.20.8\">64.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.20.20.9\">0.9974</td>\n</tr>\n</table>\n</figure>",
134
+ "capture": "Table 8: Results on the origin test set "
135
+ },
136
+ "9": {
137
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T9\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 9: </span>Results on the cleaned test set. The model was trained on the training set with data cleaning steps I, II, III and IV taken one by one </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T9.20\">\n<tr class=\"ltx_tr\" id=\"S4.T9.20.21\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.1\">I</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.2\">II</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.3\">III</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.4\">IV</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.5\">B4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.6\">C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.7\">M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.8\">R</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T9.20.21.9\">O</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T9.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.4.4.5\">44.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.4.4.6\">49.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.4.4.7\">29.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.4.4.8\">63.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T9.4.4.9\">0.9598</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T9.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.8.8.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.8.8.5\">46.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.8.8.6\">50.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.8.8.7\">30.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.8.8.8\">63.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.8.8.9\">0.9849</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T9.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.12.12.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.12.12.5\">46.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.12.12.6\">51.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.12.12.7\">30.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.12.12.8\">63.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.12.12.9\">0.9885</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T9.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.16.16.4\"></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T9.16.16.5\">47.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.16.16.6\">51.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.16.16.7\">30.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.16.16.8\">64.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T9.16.16.9\">0.9936</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T9.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.18.18.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.19.19.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.20.20.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.20.20.5\">46.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.20.20.6\">52.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.20.20.7\">30.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.20.20.8\">64.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T9.20.20.9\">0.9947</td>\n</tr>\n</table>\n</figure>",
138
+ "capture": "Table 9: Results on the cleaned test set. The model was trained on the training set with data cleaning steps I, II, III and IV taken one by one "
139
+ }
140
+ },
141
+ "image_paths": {
142
+ "1": {
143
+ "figure_path": "2102.06448v4_figure_1.png",
144
+ "caption": "Figure 1: An example video clip (No. 4290, starting from 0) with duplicate annotations. \u00d7tabsent\ud835\udc61\\times t\u00d7 italic_t denotes repeating t\ud835\udc61titalic_t times",
145
+ "url": "http://arxiv.org/html/2102.06448v4/x1.png"
146
+ },
147
+ "2": {
148
+ "figure_path": "2102.06448v4_figure_2.png",
149
+ "caption": "Figure 2: Three examples in the MSR-VTT dataset. The words in blue and red denote grammatical mistakes and spelling mistakes, respectively",
150
+ "url": "http://arxiv.org/html/2102.06448v4/x2.png"
151
+ },
152
+ "3(a)": {
153
+ "figure_path": "2102.06448v4_figure_3(a).png",
154
+ "caption": "Figure 3: Redundancy samples in the MSR-VTT dataset. Caption 1 can be divided into three sentences. And Caption 2 can be divided into two or three sentences",
155
+ "url": "http://arxiv.org/html/2102.06448v4/x3.png"
156
+ },
157
+ "3(b)": {
158
+ "figure_path": "2102.06448v4_figure_3(b).png",
159
+ "caption": "Figure 3: Redundancy samples in the MSR-VTT dataset. Caption 1 can be divided into three sentences. And Caption 2 can be divided into two or three sentences",
160
+ "url": "http://arxiv.org/html/2102.06448v4/x4.png"
161
+ },
162
+ "4": {
163
+ "figure_path": "2102.06448v4_figure_4.png",
164
+ "caption": "Figure 4: The performance of typical models on the MSR-VTT dataset during 2016 and 2020.\nThe models include VideoLAB, Aalto, v2t_navigator,\nMTVC (Pasunuru and Bansal, 2017a), CIDEnt-RL (Pasunuru and Bansal, 2017b),\nSibNet (Liu et al., 2018), HACA (Wang et al., 2018), TAMoE (Wang et al., 2019d),\nSAM-SS (Chen et al., 2020b) and POS_RL (Wang et al., 2019b) and VNS-GRU (Chen et al., 2020a).\nThe first three models are from ACM Multimedia MSR-VTT Challenge 2016 (Xu et al., 2016a). VideoLAB was used as the baseline (0% change).",
165
+ "url": "http://arxiv.org/html/2102.06448v4/x5.png"
166
+ },
167
+ "5": {
168
+ "figure_path": "2102.06448v4_figure_5.png",
169
+ "caption": "Figure 5: An example question in the human evaluation experiment. Captions A and B were generated by VNS-GRU or VNS-GRU*",
170
+ "url": "http://arxiv.org/html/2102.06448v4/x6.png"
171
+ },
172
+ "6": {
173
+ "figure_path": "2102.06448v4_figure_6.png",
174
+ "caption": "Figure 6: Human evaluation results. \u201cVNS-GRU*{}^{*}start_FLOATSUPERSCRIPT * end_FLOATSUPERSCRIPT\u201d, \u201cVNS-GRU\u201d and \u201cIndistinguishable\u201d denote the numbers of videos which the subjects\nvoted for \u201cVNS-GRU*{}^{*}start_FLOATSUPERSCRIPT * end_FLOATSUPERSCRIPT is better than VNS-GRU\u201d,\n\u201cVNS-GRU is better than VNS-GRU*{}^{*}start_FLOATSUPERSCRIPT * end_FLOATSUPERSCRIPT\u201d and\n\u201cThey are indistinguishable\u201d, respectively. Error bars are standard deviations.\nThe p-value between \u201cVNS-GRU*{}^{*}start_FLOATSUPERSCRIPT * end_FLOATSUPERSCRIPT\u201d and \u201cVNS-GRU\u201d is 0.02",
175
+ "url": "http://arxiv.org/html/2102.06448v4/x7.png"
176
+ }
177
+ },
178
+ "validation": true,
179
+ "references": [
180
+ {
181
+ "1": {
182
+ "title": "Bottom-up and top-down attention for image captioning\nand visual question answering, in: Proceedings of the\nIEEE conference on computer vision and pattern recognition, CVPR, pp.\n6077\u20136086.",
183
+ "author": "Anderson, P., He, X.,\nBuehler, C., Teney, D.,\nJohnson, M., Gould, S.,\nZhang, L., 2018.",
184
+ "venue": null,
185
+ "url": null
186
+ }
187
+ },
188
+ {
189
+ "2": {
190
+ "title": "METEOR: An automatic metric for MT evaluation\nwith improved correlation with human judgments, in:\nProceedings of the ACL Workshop on Intrinsic and\nExtrinsic Evaluation Measures for Machine Translation and/or Summarization,\npp. 65\u201372.",
191
+ "author": "Banerjee, S., Lavie, A.,\n2005.",
192
+ "venue": null,
193
+ "url": null
194
+ }
195
+ },
196
+ {
197
+ "3": {
198
+ "title": "Delving deeper into the decoder for video\ncaptioning, in: ECAI 2020 - 24th European Conference\non Artificial Intelligence, pp. 1079\u20131086.",
199
+ "author": "Chen, H., Li, J., Hu, X.,\n2020a.",
200
+ "venue": null,
201
+ "url": null
202
+ }
203
+ },
204
+ {
205
+ "4": {
206
+ "title": "A semantics-assisted video captioning model trained\nwith scheduled sampling.",
207
+ "author": "Chen, H., Lin, K., Maye,\nA., Li, J., Hu, X.,\n2020b.",
208
+ "venue": "Frontiers in Robotics and AI 7,\n129.",
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "5": {
214
+ "title": "Learning phrase representations using rnn\nencoder\u2013decoder for statistical machine translation, in:\nProceedings of the 2014 Conference on Empirical Methods\nin Natural Language Processing, EMNLP, pp. 1724\u20131734.",
215
+ "author": "Cho, K., van Merri\u00ebnboer, B.,\nGulcehre, C., Bahdanau, D.,\nBougares, F., Schwenk, H.,\nBengio, Y., 2014.",
216
+ "venue": null,
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "6": {
222
+ "title": "Semantic compositional networks for visual\ncaptioning, in: Proceedings of the IEEE conference on\ncomputer vision and pattern recognition, CVPR, pp.\n5630\u20135639.",
223
+ "author": "Gan, Z., Gan, C., He, X.,\nPu, Y., Tran, K., Gao,\nJ., Carin, L., Deng, L.,\n2017.",
224
+ "venue": null,
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "7": {
230
+ "title": "Youtube2text: Recognizing and describing arbitrary\nactivities using semantic hierarchies and zero-shot recognition, in:\nProceedings of the IEEE international conference on\ncomputer vision, ICCV, pp. 2712\u20132719.",
231
+ "author": "Guadarrama, S., Krishnamoorthy, N.,\nMalkarnenkar, G., Venugopalan, S.,\nMooney, R., Darrell, T.,\nSaenko, K., 2013.",
232
+ "venue": null,
233
+ "url": null
234
+ }
235
+ },
236
+ {
237
+ "8": {
238
+ "title": "Binary codes capable of correcting deletions,\ninsertions, and reversals, in: Soviet physics doklady,\npp. 707\u2013710.",
239
+ "author": "Levenshtein, V.I., 1966.",
240
+ "venue": null,
241
+ "url": null
242
+ }
243
+ },
244
+ {
245
+ "9": {
246
+ "title": "ROUGE: A package for automatic evaluation of\nsummaries, in: Text Summarization Branches Out, pp.\n74\u201381.",
247
+ "author": "Lin, C.Y., 2004.",
248
+ "venue": null,
249
+ "url": null
250
+ }
251
+ },
252
+ {
253
+ "10": {
254
+ "title": "Sibnet: Sibling convolutional encoder for video\ncaptioning, in: Proceedings of the 26th ACM\nInternational Conference on Multimedia, p. 1425\u20131434.",
255
+ "author": "Liu, S., Ren, Z., Yuan,\nJ., 2018.",
256
+ "venue": null,
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "11": {
262
+ "title": "BLEU: a method for automatic evaluation of machine\ntranslation, in: Proceedings of the 40th Annual Meeting\nof the Association for Computational Linguistics, ACL, pp.\n311\u2013318.",
263
+ "author": "Papineni, K., Roukos, S.,\nWard, T., Zhu, W.J.,\n2002.",
264
+ "venue": null,
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "12": {
270
+ "title": "Multi-task video captioning with video and entailment\ngeneration, in: Barzilay, R., Kan, M.\n(Eds.), Proceedings of the 55th Annual Meeting of the\nAssociation for Computational Linguistics, ACL, pp.\n1273\u20131283.",
271
+ "author": "Pasunuru, R., Bansal, M.,\n2017a.",
272
+ "venue": null,
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "13": {
278
+ "title": "Reinforced video captioning with entailment rewards,\nin: Proceedings of the 2017 Conference on Empirical\nMethods in Natural Language Processing, EMNLP, pp.\n979\u2013985.",
279
+ "author": "Pasunuru, R., Bansal, M.,\n2017b.",
280
+ "venue": null,
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "14": {
286
+ "title": "Memory-attended recurrent network for video\ncaptioning, in: Proceedings of the IEEE Conference on\nComputer Vision and Pattern Recognition, CVPR, pp.\n8347\u20138356.",
287
+ "author": "Pei, W., Zhang, J., Wang,\nX., Ke, L., Shen, X.,\nTai, Y.W., 2019.",
288
+ "venue": null,
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "15": {
294
+ "title": "Self-critical sequence training for image\ncaptioning, in: Proceedings of the IEEE Conference on\nComputer Vision and Pattern Recognition, CVPR, pp.\n7008\u20137024.",
295
+ "author": "Rennie, S.J., Marcheret, E.,\nMroueh, Y., Ross, J.,\nGoel, V., 2017.",
296
+ "venue": null,
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "16": {
302
+ "title": "Cider: Consensus-based image description evaluation,\nin: IEEE Conference on Computer Vision and Pattern\nRecognition, CVPR, pp. 4566\u20134575.",
303
+ "author": "Vedantam, R., Zitnick, C.L.,\nParikh, D., 2015.",
304
+ "venue": null,
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "17": {
310
+ "title": "Controllable video captioning with pos sequence\nguidance based on gated fusion network, in: Proceedings\nof the IEEE/CVF International Conference on Computer Vision, ICCV, pp.\n2641\u20132650.",
311
+ "author": "Wang, B., Ma, L., Zhang,\nW., Jiang, W., Wang, J.,\nLiu, W., 2019a.",
312
+ "venue": null,
313
+ "url": null
314
+ }
315
+ },
316
+ {
317
+ "18": {
318
+ "title": "Controllable video captioning with pos sequence\nguidance based on gated fusion network, in: Proceedings\nof the IEEE/CVF International Conference on Computer Vision, ICCV, pp.\n2641\u20132650.",
319
+ "author": "Wang, B., Ma, L., Zhang,\nW., Jiang, W., Wang, J.,\nLiu, W., 2019b.",
320
+ "venue": null,
321
+ "url": null
322
+ }
323
+ },
324
+ {
325
+ "19": {
326
+ "title": "Watch, listen, and describe: Globally and locally\naligned cross-modal attentions for video captioning, in:\nProceedings of the 2018 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pp. 795\u2013801.",
327
+ "author": "Wang, X., Wang, Y., Wang,\nW.Y., 2018.",
328
+ "venue": null,
329
+ "url": null
330
+ }
331
+ },
332
+ {
333
+ "20": {
334
+ "title": "Vatex: A large-scale, high-quality multilingual\ndataset for video-and-language research, in: 2019\nIEEE/CVF International Conference on Computer Vision (ICCV), pp.\n4580\u20134590.",
335
+ "author": "Wang, X., Wu, J., Chen,\nJ., Li, L., Wang, Y.F.,\nWang, W.Y., 2019c.",
336
+ "venue": "doi:10.1109/ICCV.2019.00468.",
337
+ "url": null
338
+ }
339
+ },
340
+ {
341
+ "21": {
342
+ "title": "Learning to compose topic-aware mixture of experts\nfor zero-shot video captioning, in: Proceedings of the\nAAAI Conference on Artificial Intelligence, pp. 8965\u20138972.",
343
+ "author": "Wang, X., Wu, J., Zhang,\nD., Su, Y., Wang, W.Y.,\n2019d.",
344
+ "venue": null,
345
+ "url": null
346
+ }
347
+ },
348
+ {
349
+ "22": {
350
+ "title": "Hierarchical attention-based multimodal fusion for\nvideo captioning.",
351
+ "author": "Wu, C., Wei, Y., Chu, X.,\nSun, W., Su, F., Wang,\nL., 2018.",
352
+ "venue": "Neurocomputing 315,\n362\u2013370.",
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "23": {
358
+ "title": "The 1st video to language challenge.",
359
+ "author": "Xu, J., Mei, T., Yao, T.,\nRui, Y., 2016a.",
360
+ "venue": "URL: http://ms-multimedia-challenge.com/2016/challenge.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "24": {
366
+ "title": "MSR-VTT: A large video description dataset for\nbridging video and language, in: Proceedings of the IEEE\nconference on computer vision and pattern recognition, CVPR, pp.\n5288\u20135296.",
367
+ "author": "Xu, J., Mei, T., Yao, T.,\nRui, Y., 2016b.",
368
+ "venue": null,
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "25": {
374
+ "title": "Topic-oriented image captioning based on\norder-embedding.",
375
+ "author": "Yu, N., Hu, X., Song, B.,\nYang, J., Zhang, J.,\n2018.",
376
+ "venue": "IEEE Transactions on Image Processing\nPP, 1\u20131.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "26": {
382
+ "title": "Open-book video captioning with\nretrieve-copy-generate network, in: Proceedings of the\nIEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.\n9837\u20139846.",
383
+ "author": "Zhang, Z., Qi, Z., Yuan,\nC., Shan, Y., Li, B.,\nDeng, Y., Hu, W., 2021.",
384
+ "venue": null,
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "27": {
390
+ "title": "ECO: efficient convolutional network for online\nvideo understanding, in: Proceedings of the European\nconference on computer vision ECCV, pp. 713\u2013730.",
391
+ "author": "Zolfaghari, M., Singh, K.,\nBrox, T., 2018.",
392
+ "venue": null,
393
+ "url": null
394
+ }
395
+ }
396
+ ],
397
+ "url": "http://arxiv.org/html/2102.06448v4"
398
+ }
20240225/2107.11246v2.json ADDED
@@ -0,0 +1,194 @@
 
1
+ {
2
+ "title": "Chance Constrained Economic Dispatch Considering the Capability of Network Flexibility Against Renewable Uncertainties",
3
+ "abstract": "This paper incorporates a continuous-type network flexibility into chance constrained economic dispatch (CCED).\nIn the proposed model, both power generations and line susceptances are continuous variables to minimize the expected generation cost and guarantee a low probability of constraint violation in terms of generations and line flows under renewable uncertainties.\nFrom the analytical form of CCED, we figure out the mechanism of network flexibility against uncertainties\u2014while renewable uncertainties shrink the usable line capacities and aggravate transmission congestion, network flexibility mitigates congestion by re-routing the base-case line flows and reducing the line capacity shrinkage caused by uncertainties.\nFurther, we propose an alternate iteration solver for this problem.\nBy duality theory, we set up a master problem in the form of second-order cone programming to optimize generation dispatch scheme and a subproblem in the form of linear programming to optimize line susceptances. A satisfactory solution can be obtained efficiently by alternately solving these two problems.\nThe proposed method applies to both Gaussian uncertainty and non-Gaussian uncertainty by means of Gaussian mixture model.\nThe case studies on the IEEE 14-bus system and IEEE 118-bus system suggest that network flexibility can significantly improve operational economy while ensuring security under uncertainties.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Economic dispatch (ED) is a representative class of optimal power flow problems, which aims to find the most economical generation scheme that meets load consumption and satisfies operational constraints regarding generations and line flows.\nThe normal ED problem, which does not consider uncertain power injections and takes a deterministic formulation, has been extensively studied and leads to many classic results [1 ###reference_b1###].\nFor instance, when the generation and line flow limits are ignored, the optimal dispatch scheme is determined by the so-called equal incremental cost criterion.\nWhen the generation and line flow limits are included, transmission congestion may occur and the optimal dispatch scheme induces the locational marginal price at each bus.\nThese results are of fundamental importance in power system operation.\nWith the growing penetration of renewable energy, nowadays system operators are asking for more from ED to tackle the challenge posed by the uncertain nature of renewables.\nUnder this background, the chance constrained economic dispatch (CCED) is a notable extension that receives popularity.\nThe system states become random under renewable uncertainties.\nThe CCED replaces the deterministic constraints by chance constraints to guarantee a low probability of constraint violation in case that the renewable generations follow a certain probability distribution [2 ###reference_b2###].\nThe CCED solution is less conservative than the solution given by robust optimization and achieves a sufficiently low risk of insecurity.\nThe DC power flow model is commonly adopted in CCED. The problem formulation can be transformed into a second-order cone program (SOCP) under Gaussian uncertainty [2 ###reference_b2###, 3 ###reference_b3###], while non-Gaussian uncertainty is usually tackled by approximation techniques such as the Gaussian mixture model (GMM) [4 ###reference_b4###] and kernel density representation [5 ###reference_b5###, 6 ###reference_b6###].\nThe AC power flow-based CCED formulations are emerging and hard to solve due to their high nonlinearity and non-convexity.\nSo far the mainstream solution methods include convex relaxation [7 ###reference_b7###], sequential linearization [8 ###reference_b8###] and polynomial chaos expansion [9 ###reference_b9###].\nTraditionally, ED problems rely on the flexibility in generation outputs.\nNowadays the remote controlled line switches and series flexible AC transmission system (FACTS) devices have become a part of the transmission network, e.g., a number of thyristor controlled series compensators (TCSCs) are already in practical operation in the US, China, India and Sweden [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nThe growing flexibility in network topology, in the form of discrete or continuous adjustments of line susceptances, adds a new dimension of flexibility to system dispatch [13 ###reference_b13###].\nThe ED problem with discrete-type network flexibility (i.e., line switching) is usually known as the optimal transmission switching (OTS) problem [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###].\nThe continuous-type network flexibility enabled by power electronics provides a softer way of line susceptance adjustment which attracts attention recently.\nThe technical difficulty of the ED problem considering network flexibility mainly originates from the bilinear term in the power flow equation111This bilinearity cannot be simply eliminated by 
variable substitution. For instance, consider the power flow variant where denotes the power transfer distribution factor matrix and denote the power injections and line flows. Since is determined by line susceptances, still induces bilinearity if line susceptances are variables. due to the variable susceptance .\nFor the deterministic ED with either discrete- or continuous-type network flexibility, this bilinear term has been handled by introducing binary variables to reformulate the problem into a mixed-integer linear program (MILP) [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###].\nThe mixed-integer-based method has some limitations although it is easy to implement.\nFirst, the mixed-integer reformulation suffers from the issue of suboptimality and dimension curse. Much effort has been devoted to speeding up the algorithm and enhancing optimality via various heuristics [21 ###reference_b21###, 22 ###reference_b22###, 22 ###reference_b22###].\nSecond, when it comes to CCED problems, the mixed-integer reformulation only applies to discrete-type network flexibility.\nFor instance, the authors in [23 ###reference_b23###] studied the chance constrained OTS by introducing binary variables as line switch indicators to classify the problem into two tractable cases: is constantly zero if line is switched off (i.e., is zero), or reduces to a linear term with respect to when line is switched on (i.e., is a nonzero constant). Finally a mixed-integer SOCP (MISOCP) reformulation of CCED considering line switchings is obtained.\nHowever, when becomes a continuous variable in case of continuous-type network flexibility, this classification method can work only by discretizing the continuously adjustable range of line susceptances into a finite number of set points [24 ###reference_b24###], which simplifies the problem at the cost of losing dimensions in the search space.\nAs an alternative, the uncertainty factors in network flexibility problems are more commonly handled by scenario-based stochastic optimization framework [25 ###reference_b25###, 26 ###reference_b26###] or robust optimization [27 ###reference_b27###, 28 ###reference_b28###].\nMoreover, the existing works mainly investigate the role of network flexibility in a numerical way, e.g., line susceptances are treated as extra decision variables in the optimization. The obtained solution does improve the system performance, but it does not explain why and how the improvement is made.\nIt remains to explore a deeper understanding of the mechanism of how the network topologies affect system operation, which could also facilitate the design of solution method to efficiently handle the complexity of continuous-type network flexibility.\nThis paper formulates the CCED problem with continuously adjustable line susceptances, which finds the optimal generation dispatch and line susceptance scheme to achieve the minimal expected generation cost and satisfy the generation and line flow chance constraints. The main contributions are twofold.\nFirst, we reveal the mechanism of network flexibility in addressing the uncertainty-induced congestion. Assuming the renewable uncertainties follow Gaussian distributions, we derive an analytical form of the CCED problem that reveals the role of network flexibility in handling uncertainties. 
It turns out that renewable uncertainties take up some line capacities and shrink the feasible region for line flows, while network flexibility tunes the base-case line flows and reduces the line capacities taken up by uncertainties. With the help of network flexibility, transmission congestion is greatly mitigated so that the cost-effective generations can be better utilized.\nSecond, based on the mechanism of network flexibility in congestion mitigation, an efficient alternate iteration solver is designed without discretizing the continuous variables of line susceptances.\nThe CCED problem is decomposed into a master problem and a subproblem.\nThe master problem optimizes generation dispatch scheme by treating line susceptances as a parameter, which is an SOCP problem. The linear subproblem is formulated using duality theory, which optimizes line susceptances to provide a better parameter for the master problem. A satisfactory solution can be obtained by alternately solving the master problem and subproblem.\nFurther, we extend the proposed method to non-Gaussian uncertainty via GMM technique.\nThe remainder of the paper is organized as follows.\nSection II ###reference_### formulates the CCED model considering network flexibility and explores the mechanism of network flexibility against renewable uncertainties.\nThe solution methodology is elaborated in Section III ###reference_###.\nThe case studies on two IEEE test systems are given in Section IV ###reference_###.\nSection V ###reference_### concludes the paper."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Formulating CCED with network flexibility",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A The original problem formulation",
21
+ "text": "We first introduce some notations that will be used throughout the paper.\nConsider a transmission system with the set of buses and set of lines .\nThe cardinalities of and are and , respectively.\nThe buses may connect traditional dispatchable generators, renewable generators or loads.\nThe dispatchable generation, renewable generation and load at bus are respectively denoted as .\nThe set of buses with dispatchable generators is denoted as .\nThe line connecting bus and is denoted as . The susceptance of line is denoted as , and the line flow from bus to is denoted as .\nAssume there is a subset of lines that install series FACTS devices and hence have adjustable susceptances, denoted as .\nFor convenience, we define the vectors and that stack the quantities , respectively.\nIn addition, the incidence matrix is defined as follows.\nSuppose each line is assigned an orientation, i.e., originates at bus and terminates at bus , then , , and if . The admittance matrix is then given by , where is a diagonal matrix with the main diagonal being .\nIn order to gain an analytical insight, we assume follows the multivariate Gaussian distribution where denotes the mean value and denotes the covariance matrix.\nIf bus does not connect a renewable generator, then , and -th row and column of are set to zero.\nThe proposed method is extendable to non-Gaussian uncertainty with some modifications, which will be presented in Section III-D ###reference_###.\nThe dispatchable generators adopt the common affine control to balance the renewable uncertainty, so that consists of two parts\nwhere denotes the base-case generations for the forecast scenario; denotes a vector with all entries being unity;\nand denotes the vector of participation factors that determines the power sharing of each dispatchable generator under renewable fluctuation.\nWe set to fully balance the renewable fluctuation, and and to be zero for bus .\nSince we focus on the impact of renewable uncertainties, for simplicity we assume the forecasted loads is accurate. 
Nevertheless, the load uncertainties can be handled in a similar way.\nWith the above notations, the power flow equation is expressed as\nwhere\nis the power transfer distribution factor matrix and denotes the Moore-Penrose inverse of admittance matrix .\nWe write as a function of as is a variable in this paper.\nEquation (2a ###reference_1###) describes the power balance between generations and loads, and (2b ###reference_2###) gives the mapping from power injections to line flows.\nNote that (2b ###reference_2###) is derived from the DC power flow and , where is the vector of phase angles.\nThen, the CCED problem considering network flexibility, which takes as decision variables, is formulated as\nwhere and denote the linear and quadratic coefficients for generation cost;\n and denote the mathematical expectation and probability;\n and are predefined small positive numbers for regulating the risk of constraint violation;\n denote the minimum and maximum susceptance of line ;\n denote the minimum and maximum output of the dispatchable generator at bus that are predetermined by generation capacity and ramp rate;\nand denotes the transmission capacity of line that is predetermined by thermal, voltage, or stability considerations.\nConstraints (4e ###reference_5###) and (4f ###reference_6###) are consistent with the previous discussion on those buses without dispatchable generators.\nThe chance constraints (4g ###reference_7###)-(4j ###reference_10###) ensure a sufficiently low risk of generation and line overloading under the uncertain renewable generations and a certain solution of base-case generations , participation factors and line susceptances .\nThe optimal solution to problem (II-A ###reference_###) has two features.\nFirst, it achieves the minimal expectation of generation cost under renewable uncertainties.\nSecond, the probabilities of violating the generation limits and line flow limits are sufficiently low, which guarantees a highly secure operating status under renewable uncertainties.\nIt will be seen later that the introduction of variable line susceptances into (II-A ###reference_###) significantly reduces the transmission congestion under uncertainties and enhances operational economy.\nThe flexible line susceptance has several types of physical realizations.\nThe most common realization is to install a TCSC in the line, which has been put into practice in some countries as mentioned in the introduction.\nThis function can also be achieved by more recently developed devices such as power electronic transformers [29 ###reference_b29###, 30 ###reference_b30###].\nNote that the complex power transfer across a line takes the expression , where denotes the complex voltage.\nNow consider a pair of power electronic transformers installed at the two terminals of line that induce the secondary voltages and for power transfer, where denotes the flexible tap ratio of the two transformers (see Fig. 
1 ###reference_###).\nIn this case, the power transfer across line becomes , which implies that the effective line susceptance is changed to .\nTherefore, it is realistic to consider flexible line susceptances in modern power systems with series controllers.\nThe classical DC power flow model is adopted in this paper to capture the behavior of line flows under uncertainties, which is of major concern in transmission system security.\nIn the recent years, there emerge a family of advanced linear power flow models that can approximately describe the network loss and voltage-reactive power relationship, e.g., see [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###].\nIn fact, the proposed method just requires system states to be linearly dependent on power injections. So it is not limited to DC power flow and applies to the family of linear power flow models.\nAn extended study of CCED based on a general linear power flow model will be a future direction.\nThe single-sided chance constraints is adopted in (II-A ###reference_###), which is a more tractable modeling choice that helps us to focus on the mechanism of network flexibility in high-renewable system operation.\nIn case that two-sided joint chance constraints are adopted, they can still be transformed into single-sided constraints via, e.g., Bonferroni approximation [34 ###reference_b34###, 35 ###reference_b35###]. So choosing the single-sided constraints does not lose generality in the problem formulation.\n###figure_1###"
22
+ },
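To make the DC power-flow mapping above concrete, here is a minimal numpy sketch (not from the paper) that builds the power transfer distribution factor matrix from an oriented incidence matrix and the variable line susceptances; the 3-bus data and all names (`A`, `ptdf`) are illustrative assumptions.

```python
# Minimal sketch (illustrative 3-bus data): the PTDF matrix M(b) that maps
# nodal injections p to line flows f under the DC power flow, where b are
# the line susceptances. pinv plays the role of the Moore-Penrose inverse.
import numpy as np

# Oriented incidence matrix (rows: lines 1-2, 1-3, 2-3; columns: buses).
A = np.array([[1., -1., 0.],
              [1., 0., -1.],
              [0., 1., -1.]])

def ptdf(b):
    B = A.T @ np.diag(b) @ A                 # nodal admittance matrix
    return np.diag(b) @ A @ np.linalg.pinv(B)

p = np.array([1.0, -1.0, 0.0])               # balanced injections (sum to zero)
print(ptdf(np.ones(3)) @ p)                  # flows ~ [0.667, 0.333, -0.333]
print(ptdf(np.array([1., 1., 3.])) @ p)      # changing b re-routes the flows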
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Analytical form of CCED",
27
+ "text": "Since the original problem formulation in (II-A ###reference_###) is intractable, we transform it into an analytical form that helps to reveal the role of decision variables in CCED.\nAccording to (1 ###reference_###) and (2b ###reference_2###), the dispatchable generation and line flow are random variables.\nLet us first derive the expression of their mean values and standard deviations.\nBy (1 ###reference_###) and properties of random variables [36 ###reference_b36###], we have\nwhere is a constant; and and denote the mean value and standard deviation, respectively.\nFor line flows, substituting (1 ###reference_###) into (2b ###reference_2###) gives\nwhere denotes the base-case line flows which are a function of\nand is a function of\nwhere denotes the identity matrix.\nThen, it follows\nwhere denotes the row of indexed by line that takes the expression\nwith being the column of indexed by line .\nUnder the Gaussian assumption of , chance constraints (4g ###reference_7###)-(4j ###reference_10###) are equivalent to [37 ###reference_b37###, 38 ###reference_b38###]\nwhere is the inverse cumulative distribution function of the standard Gaussian distribution.\nBy the property of variance [36 ###reference_b36###], the objective function (4a ###reference_1###) is equivalent to\nThus, by (28 ###reference_###)-(12 ###reference_###) we obtain the following analytical form of the CCED problem (II-A ###reference_###)\nwhere\ncan be regarded as the effective generation limits and line capacities under uncertainty with denoting the 2-norm.\nNote that (13b ###reference_.2###) is equivalent to (2a ###reference_1###) since the amount of renewable fluctuation is fully balanced by the dispatchable generators."
28
+ },
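The Gaussian reformulation above can be illustrated with a short sketch. The snippet below, assuming scipy is available, computes the uncertainty margin that a chance constraint subtracts from a line's physical capacity; the numbers are illustrative, not from the paper.

```python
# Sketch of the Gaussian reformulation: the chance constraint
# P(f_l <= fmax_l) >= 1 - eps tightens to  fbar_l + z * sigma_l <= fmax_l,
# so z * sigma_l is the line capacity taken up by uncertainty.
import numpy as np
from scipy.stats import norm  # assumes scipy is available

eps = 0.05
z = norm.ppf(1.0 - eps)               # inverse CDF of the standard Gaussian

m = np.array([0.4, -0.2, 0.1])        # row of M(b)(I - alpha 1^T) for line l
Sigma = np.diag([0.02, 0.05, 0.01])   # covariance of renewable deviations
sigma_l = np.sqrt(m @ Sigma @ m)      # standard deviation of the line flow

fmax = 1.0
print(fmax - z * sigma_l)             # effective capacity left for fbar_l
```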
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Role of decision variables in transmission congestion",
33
+ "text": "Transmission congestion occurs when some line flow constraints are binding.\nIn this case, the capacities of those congested lines, rather than generation dispatchability, become the major bottleneck for further utilizing the cost-effective generators, which may drastically impact operational economy [39 ###reference_b39###].\nThis undesirable event is more prone to occur in the CCED since is equal to the physical capacity reduced by an uncertainty-related margin, i.e., the actual usable line capacity shrinks under uncertainty.\nOn the other hand, the expression in (14 ###reference_###) reveals how the decision variables contribute to congestion mitigation:\n1) The base-case generation appears in the left-hand-side of (13i ###reference_.9###)-(13j ###reference_.10###) to regulate the base-case line flows. It does not contribute to congestion mitigation.\n2) The participation factor appears in the right-hand-side of (13i ###reference_.9###)-(13j ###reference_.10###).\nIt helps to save the line capacity by adjusting , but we note that the main function of is to achieve power balancing under uncertainties.\nIt will be seen in the case study that has a much more significant effect on saving line capacity than .\n3) The line susceptance appears in both sides of (13i ###reference_.9###)-(13j ###reference_.10###) and has a composite contribution to congestion mitigation.\nIt saves the line capacity by tuning , and meanwhile re-routes power injections to improve the base-case line flows to better utilize the saved line capacity.\nThe later case study will show that a proper adjustment of line susceptances leads to a both highly economic and secure operating condition under uncertainties."
34
+ },
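The composite role of the line susceptance described above can be checked numerically. The following sketch (illustrative 3-bus data; `alpha`, `Sigma` and `line_terms` are assumptions of this example) evaluates both the base-case flow and the uncertainty margin of one line's chance constraint for two susceptance profiles, showing that a susceptance change acts on both sides of the constraint.

```python
# Sketch of the composite effect: a susceptance change moves both the
# base-case flow fbar_l(b) and the uncertainty margin z*sigma_l(b).
import numpy as np
from scipy.stats import norm

A = np.array([[1., -1., 0.], [1., 0., -1.], [0., 1., -1.]])
p0 = np.array([1.0, -0.6, -0.4])      # base-case injections (sum to zero)
Sigma = np.diag([0.04, 0.0, 0.0])     # renewable uncertainty at bus 1 only
alpha = np.array([0.0, 0.5, 0.5])     # participation factors
z = norm.ppf(0.95)

def line_terms(b, line):
    M = np.diag(b) @ A @ np.linalg.pinv(A.T @ np.diag(b) @ A)
    fbar = (M @ p0)[line]
    m = (M @ (np.eye(3) - np.outer(alpha, np.ones(3))))[line]
    return fbar, z * np.sqrt(m @ Sigma @ m)

print(line_terms(np.ones(3), 0))              # base flow / margin on line 1-2
print(line_terms(np.array([.5, 1., 1.]), 0))  # lower b_12: both terms change
```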
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Solution methodology",
39
+ "text": "Problem (II-B ###reference_###) is hard to solve due to the non-convexity of (13i ###reference_.9###) and (13j ###reference_.10###).\nOn the other hand, the CCED without network flexibility (i.e., is fixed) is an SOCP problem (constraints (13g ###reference_.7###)-(13h ###reference_.8###) are second-order cones if is fixed), where convex solvers apply.\nIn addition, it is shown in Section II-C ###reference_### that plays a different role from and in the problem.\nTherefore, we separate the decision variables into two groups, say and , which correspond to generation dispatchability and network flexibility, respectively.\nAn alternate iteration framework is then developed to iteratively solve the master problem and subproblem with respect to the two groups of decision variables.\nThe master problem optimizes while treating as a given parameter.\nThe subproblem optimizes to provide a better parameter for the master problem.\nThe formulations of the master problem and subproblem are detailed below."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A Master problem to optimize generation dispatch",
45
+ "text": "Let denote the current profile of line susceptances.\nThen, the master problem is set up below to optimize generation dispatch\nwhere line susceptances are fixed to as a given parameter.\nAs mentioned before, (III-A ###reference_###) is an SOCP problem that can be efficiently solved by commercial convex solvers such as CVX.\nLet denote the optimal solution of (III-A ###reference_###).\nIf there is no transmission congestion at , it means that the current network topology is satisfactory and changing line susceptances will not further reduce the generation cost.\nThis can also be seen from the next subsection showing that the objective function has a zero sensitivity to line susceptances in case of no congestion. In this case, provides an optimal solution for CCED.\nIf some lines are congested at , it means that the current network topology is inadequate and needs an adjustment, which will be addressed in the following subproblem."
46
+ },
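As a rough illustration of the master problem's structure (not the authors' implementation), the following cvxpy sketch fixes the susceptances, optimizes base-case generations and participation factors under second-order cone line-flow constraints, and exposes the dual variables that the subproblem consumes; all data, limits and the availability of cvxpy are assumptions of this example (minimum generation is taken as zero).

```python
# Sketch of the SOCP master problem: b fixed, optimize base-case generations
# g0 and participation factors a. Data are illustrative placeholders.
import cvxpy as cp
import numpy as np

A = np.array([[1., -1., 0.], [1., 0., -1.], [0., 1., -1.]])
b_fix = np.ones(3)
M = np.diag(b_fix) @ A @ np.linalg.pinv(A.T @ np.diag(b_fix) @ A)  # PTDF
d = np.array([0.0, 0.7, 0.5])          # loads
w_bar = np.array([0.3, 0.0, 0.0])      # forecast renewable injections
S_half = np.diag([0.1, 0.0, 0.0])      # square root of the covariance
c1, c2 = np.ones(3), 0.1 * np.ones(3)  # linear / quadratic cost coefficients
fmax, gmax, z = np.ones(3), np.ones(3), 1.645

g0, a = cp.Variable(3), cp.Variable(3)
fbar = M @ (g0 + w_bar - d)
Mt = M - cp.reshape(M @ a, (3, 1)) @ np.ones((1, 3))   # M(b)(I - a 1^T)
s_tot = np.linalg.norm(S_half @ np.ones(3))            # std of total deviation

cons = [cp.sum(g0 + w_bar - d) == 0, cp.sum(a) == 1, a >= 0,
        g0 + z * s_tot * a <= gmax, g0 - z * s_tot * a >= 0]
line_cons = []
for l in range(3):                       # second-order cone line constraints
    s_l = cp.norm(S_half @ Mt[l, :])
    line_cons += [fbar[l] + z * s_l <= fmax[l], -fbar[l] + z * s_l <= fmax[l]]

cost = c1 @ g0 + c2 @ cp.square(g0) + (s_tot ** 2) * (c2 @ cp.square(a))
prob = cp.Problem(cp.Minimize(cost), cons + line_cons)
prob.solve()
duals = [c.dual_value for c in line_cons]  # multipliers used by the subproblem
print(prob.value, g0.value, a.value)
```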
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B Subproblem to optimize line susceptances",
51
+ "text": "Based on the obtained generation dispatch scheme , we formulate the subproblem to provide better line susceptances for the master problem for congestion mitigation and cost reduction.\nSince the line flow constraints (15h ###reference_.8###)-(15i ###reference_.9###) are highly nonlinear with respect to line susceptances, let us consider a rather small adjustment of line susceptances in the subproblem so that the sensitivity analysis is applicable to the problem formulation.\nLet respectively denote the set of lines with binding constraints (15h ###reference_.8###) and binding constraints (15i ###reference_.9###) (i.e., the congested lines consist of ), which are of interest here.\nThen, we can derive the sensitivity of the optimal objective value of master problem (III-A ###reference_###) to by duality theory.\nLet , be the optimal dual variables associated with line flow constraints (15h ###reference_.8###) and (15i ###reference_.9###), respectively.\nThese dual variables are a byproduct of solving (III-A ###reference_###), which are obtained without additional computation.\nThe KKT condition of (III-A ###reference_###) gives [40 ###reference_b40###]\nBy Theorem 8.2 in [41 ###reference_b41###], the sensitivity of the optimal objective value to is give by\nwhich includes the influence of in both the base-case line flows and effective line capacities.\nFurther, (17 ###reference_###) can be simplified into the following form since a majority of dual variables are zero (see (16 ###reference_###))\nwhere the expressions of and are derived below.\nFor the term , it follows from (14 ###reference_###) that\nAccording to (10 ###reference_###), the partial derivative in (19 ###reference_###) is given by\nwhere it follows from [42 ###reference_b42###] that\nThus, the formula for are obtained by substituting (20 ###reference_###)-(21 ###reference_###) into (19 ###reference_###).\nAs for the term , it follows from (9 ###reference_###) that\nand substituting (20 ###reference_###) into (22 ###reference_###) gives the formula for .\nFinally, we obtain the sensitivities by substituting (19 ###reference_###)-(22 ###reference_###) into (18 ###reference_###).\nWith the obtained sensitivities , we propose the linear subproblem below that aims to reduce generation cost by adjusting line susceptances\nwhere the partial derivative terms have been given in (19 ###reference_###)-(22 ###reference_###);\n is a predefined small positive number and (23c ###reference_.3###) is a trust-region constraint that enforces the line susceptance adjustment to be small so that the sensitivity analysis is valid.\nLet be the susceptance adjustment, where , and , are given by the solution of (III-B ###reference_###).\nWhen solving subproblem (III-A ###reference_###) again with the updated line susceptances , the generation dispatch scheme is expected to further exploit the line capacity saved by and achieve a lower-cost solution."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-C Alternate iteration algorithm",
57
+ "text": "Based on the proposed master problem (III-A ###reference_###) and subproblem (III-B ###reference_###), we design an alternate iteration algorithm for the CCED problem.\nThe solution procedure is presented below. The algorithm flow chart is also depicted in Fig. 2 ###reference_###.\nParameter setting. Set input parameters ; for bus ; for line ; for line and for line .\nSet algorithm parameters (convergence criterion), (benchmark for trust-region size), (reduction factor for trust region).\nLet be the initial trust-region size, and be the initial line susceptances.\nInitial solution. Solve master problem (III-A ###reference_###) under the fixed line susceptances , and obtain the solution .\nIf all the constraints (15h ###reference_.8###)-(15i ###reference_.9###) are not binding at , stop the algorithm and output as the optimal solution.\nSolve subproblem (III-B ###reference_###) under and obtain the line susceptance adjustment .\nSolve master problem (III-A ###reference_###) under the fixed line susceptances , and obtain the tentative solution, say .\nIf all the constraints (15h ###reference_.8###)-(15i ###reference_.9###) are not binding at , update the solution , , , stop the algorithm and output as the optimal solution.\nIf , the tentative solution is aborted, reduce the trust-region size by . Go back to Step 3.\nIf , the susceptance adjustment is accepted, update the solution , , and recover the trust-region size .\nFurther, if , , stop the algorithm and output as the optimal solution; otherwise go back to Step 3.\nWe further explain the manipulations in Step 6.\nThe case of implies that subproblem (III-B ###reference_###) does not provide a proper susceptance adjustment, which is due to that the trust-region size is too large to guarantee the validity of sensitivity analysis. In this case, we need to solve subproblem (III-B ###reference_###) again with a reduced trust-region size.\nThe case of implies that the generation cost is reduced as expected after applying the susceptance adjustment , and hence should be updated to .\nAccording to the stop criteria in the algorithm, the final output of this algorithm, say , has two types of physical meaning.\nIf the algorithm is stopped in Step 2 or Step 5, then fully eliminates transmission congestion, which is desirable.\nIf the algorithm is stopped in Step 6, then and reach such a matching state although transmission congestion is not fully eliminated:\n1) under the power network topology , any change on generation dispatch scheme cannot decrease the generation cost;\n2) under the generation dispatch scheme , transmission congestion cannot be further mitigated by applying any change on .\nIn this case, both the generation dispatchability and network flexibility have been fully exploited.\nSince problem (II-B ###reference_###) is a general nonlinear and non-convex program, it is theoretically hard to guarantee the convergence and optimality of the proposed algorithm.\nHowever, the proposed algorithm has a salient merit that every accepted solution generated during the iteration is a feasible solution to (II-B ###reference_###) and has a lower generation cost than the last accepted solution.\nThus, even when the iteration has not yet converged, we can still implement the latest accepted solution to improve system operation.\nThis feature enables our algorithm to periodically output a satisfactory solution (i.e., an operating point with better economy and security), which is friendly to the real-time dispatch.\n###figure_2###"
58
+ },
59
+ {
60
+ "section_id": "3.4",
61
+ "parent_section_id": "3",
62
+ "section_name": "III-D Extension to non-Gaussian uncertainty",
63
+ "text": "The proposed method can be extended to the case where follows a general non-Gaussian distribution via GMM.\nNote that any smooth probability density function can be approximated with any specific, non-zero amount of error by a GMM with enough components [43 ###reference_b43###].\nThus, when follows a non-Gaussian distribution, its characteristics can be captured by the following GMM\nwhere denotes the -th Gaussian distribution component consisting of the mean value and covariance matrix ; denotes a finite index set of Gaussian distribution components; and denotes the weight of with , .\nWith this GMM, the mean value of is given by\nNow consider the CCED problem (II-A ###reference_###) under GMM uncertainties (24 ###reference_###).\nNote that GMM has linear additivity in terms of probability [44 ###reference_b44###], i.e., in case of (24 ###reference_###) it follows that\nwith .\nThis property enables us to apply the analytical reformulation to each Gaussian component.\nSimilar to the idea in Section II ###reference_###, the generations and line flows under the Gaussian component are expressed as\nand hence their mean values and standard deviations take the following forms\nThus, the CCED problem under the non-Gaussian uncertainty is equivalent to\nwhere (or ) are auxiliary variables to denote the probability of generation (or line flow) constraint satisfaction under the -th Gaussian distribution component.\nThe CCED model under GMM description can also be solved by setting the master problem and subproblem below:\n1) The master problem is set to be (III-D ###reference_###) with a fixed . This master problem formulation is non-convex due to the presence of auxiliary variables , which can be handled by the iterative risk allocation (IRA).\nThe IRA adopts a two-step strategy.\nFirst, it allocates the overall probability to each component based on the property of the current solution of .\nSecond, it updates by solving the SOCP problem (III-D ###reference_###) with fixed to their current values.\nThese two steps are executed iteratively until convergence. We refer to [45 ###reference_b45###, 46 ###reference_b46###] for more details of IRA.\nUsing the IRA, the master problem is solved by solving a sequence of SOCP problems, and the optimal dual variables associated with constraints (29k ###reference_.11###)-(29n ###reference_.14###) can be obtained simultaneously.\n2) The subproblem can be constructed in terms of those optimal dual variables obtained by the master problem, following the similar manipulations in Section III-B ###reference_###. The solution of the subproblem finds a proper susceptance adjustment .\nTherefore, the proposed algorithm still works for the CCED under non-Gaussian uncertainty by applying the IRA to solve the master problem.\nA comparison between the CCED solutions under Gaussian and non-Gaussian uncertainty will be given in Section IV-C ###reference_###."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "IV Case study",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-A IEEE 14-bus system: congestion fully eliminated after optimization",
75
+ "text": "Let denote the original load data of IEEE 14-bus system given in the MATPOWER package [47 ###reference_b47###].\nThen, we modify the system as follows for our tests:\nAdd renewable generation.\nAssume that buses 1, 3, 6, 9 have renewable generators which follow a Gaussian distribution with the covariance matrix , where p.u. for and otherwise . The mean value of the renewable generation at bus is set to .\nAdjust load consumption and generation capacity.\nIf bus does not connect a renewable generator, its load consumption is set to be . If bus connects a renewable generator, its load consumption is set to be so that the net load at this bus is also doubled.\nThere are five dispatchable generators . To highlight the control effect, we double their generation capacities given in [47 ###reference_b47###].\nAdd flexible susceptance lines.\nAssume the set of lines install TCSCs.\nThen, each of these lines consists of a fixed susceptance in series with a TCSC-induced adjustable susceptance [48 ###reference_b48###]. We set , where is called the degree of flexibility.\nThus, for any we have\nHere we take , .\nTo highlight the control effect, we set MW for line (1,2), MW for line (7,9), and MW for all the other lines.\nCCED and algorithm parameters.\nIn subproblem (III-A ###reference_###), we set (corresponding to for the original problem (II-A ###reference_###)).\nIn subproblem (III-B ###reference_###), we set , .\nIn the algorithm, we take the rated line susceptances to be the initial values, and .\nThe diagram of the modified IEEE 14-bus system is depicted in Fig. 3 ###reference_###.\nWith the above settings, we obtain the following five solutions for comparison.222All the optimization problems in the case study are solved by CVX with mosek solver [49 ###reference_b49###]. The computation platform is Intel(R) Core(TM) i7-9700 CPU with 16GB RAM.\nS1: CCED with network flexibility, which is obtained by solving (II-B ###reference_###) using the proposed algorithm.\nS2: CCED without network flexibility, which refers to the method in [2 ###reference_b2###].\nS3: CCED with network flexibility but fixed participation factors , which refers to the method in [2 ###reference_b2###].\nS4: Normal ED with network flexibility, which refers to the method in [17 ###reference_b17###].\nS5: Normal ED without network flexibility, which refers to the method in [17 ###reference_b17###].\nBefore looking into these solutions, let us first detail the process of finding S1 to verify the proposed algorithm.\nDuring the whole iteration process, the generation constraints are not binding, while lines (1,2) and (7,9) are congested at the first solution.\nThen the susceptances of lines in start to adjust to address the congestion.\nThe blue and red curves in Fig. 4 ###reference_### respectively shows the trajectories of dual variables of the binding flow constraints with respect to lines (1,2) and (7,9).\nThe corresponding two dual variables are decreasing with the iteration, implying that the congestion is being gradually mitigated.\nLine (7,9) is just slightly congested, and hence the red curve has a very flat shape.\nIn contrast, line (1,2) is severely congested and fully eliminated in the end, which leads to the big slope of the blue curve.\nAfter eight iterations, these two dual variables become zero, and hence the congestion is fully cleared and the algorithm stops. 
The total computation time is 5.3s, which means the computation time per iteration is about 0.66s.\nThe generation cost, which is denoted by the black dotted curve in Fig. 4 ###reference_###, decreases from 18578.8$/h to 18186.4$/h, which achieves 2.1% cost reduction.\nWe now check the performances of the five solutions to show the merits of network flexibility.\nSince the size of IEEE 14-bus system is rather small, it is convenient to present the comprehensive information of the solutions in Table I ###reference_###, Table II ###reference_### and Table III ###reference_###.\nWe have some interesting observations by comparing the base-case generation dispatch schemes at different solutions:\nat S1, S2 and S3. As network flexibility helps to mitigate transmission congestion under renewable uncertainties, the dispatchable generator at bus 1, which are more cost-effective, are better utilized to output more power at S1, S3 than S2.\nat S1 and S4. S1 and S4 both consider network flexibility. Additionally, S1 considers renewable uncertainties that shrink the usable line capacities (see (14 ###reference_###)). However, the values of at S1 and S4 are nearly identical, which implies that the network flexibility eliminates the impact of renewable uncertainties. With network flexibility, the generation cost keeps almost unchanged after including renewable uncertainties, except for the small additional term caused by participation factors.\nat S2 and S5. S2 and S5 both exclude network flexibility. Additionally, S2 considers renewable uncertainties. Compared to the generation profile at S5, S2 cannot resort to network flexibility and has to sacrifice those cost-effective generations in order to satisfy the line flow chance constraints.\nTable IV ###reference_### further shows the generation costs of these five solutions.\nThe cost difference between S1 and S4 (6.1$/h) and the cost difference between S2 and S5 (290.9$/h) can be interpreted as the cost of uncertainty.\nThe cost difference between S1 and S2 (392.4$/h) and the cost difference between S4 and S5 (107.6$/h) can be interpreted as the cost of inflexible network.\nThe cost difference between S1 and S3 (25.9$/h) can be interpreted as the cost of inflexible participation factors.\nWe make the following important observations from these cost comparisons:\nRenewable uncertainties cause much more additional generation cost when the network is inflexible (see the third column of Table IV ###reference_###).\nNetwork flexibility helps to greatly save generation cost by congestion mitigation, no matter renewable uncertainties are included or not (see the fourth column of Table IV ###reference_###).\nThe benefit of flexible participation factors is much less significant than the flexible network (see the fifth column of Table IV ###reference_###).\n###figure_3### ###figure_4###"
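The headline figures quoted above are mutually consistent, as a quick check confirms (the numbers are taken verbatim from the text):

```python
c0, c_end, iters, t_total = 18578.8, 18186.4, 8, 5.3
print(f"cost reduction: {100 * (c0 - c_end) / c0:.1f}%")  # -> 2.1%
print(f"time per iteration: {t_total / iters:.2f}s")      # -> 0.66s
```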
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "IV-B IEEE 118-bus system: congestion mitigated after optimization",
81
+ "text": "We turn to IEEE 118-bus system to show that the proposed method also works well in large systems.\nBased on the original parameter profile of IEEE 118-bus system [47 ###reference_b47###], we further adopt the following settings for our tests.\nAdd renewable generation.\nAssume that buses 3, 8, 11, 20, 24, 26, 31, 38, 43, 49, 53 have renewable generators with the covariance matrix , where p.u. if bus has a renewable generator and otherwise . The mean value of the renewable generation at bus is set to .\nAdjust load consumption and generation capacity.\nThe adjustment is similar to that of the test on the IEEE 14-bus system to double the net load consumptions and generation capacities.\nAdd flexible susceptance lines.\nAssume nine lines have flexible susceptances \n.\nSimilar to the test on the IEEE 14-bus system, these susceptances have the degree of flexibility , .\nFor line capacity, we set MW for lines (8,9), (8,5), (60,61), (63,64), and MW for all the other lines.\nCCED and algorithm parameters.\nThe setting is identical to that of the test on the IEEE 14-bus system.\nAgain, we obtain the following five solutions for analysis.\nS1: CCED with network flexibility.\nS2: CCED without network flexibility.\nS3: CCED with network flexibility but fixed participation factors .\nS4: Normal ED with network flexibility.\nS5: Normal ED without network flexibility.\nWe first check the iteration process of finding S1.\nFig. 5 ###reference_### shows the trajectories of dual variables of the binding flow constraints with respect to some lines.\nNote that there are quite a few congested lines during the iteration. For simplicity, here we just plot those lines with rather severe congestion, i.e., the dual variables of which have ever been greater than 20 during the iteration.\nSome dual variables have oscillations and some dual variables are even increasing during the iteration, however, the generation cost monotonically decreases (see the black curve in Fig. 5 ###reference_###).\nIt implies that the system benefits from these increasing dual variables as they leave more space for mitigating the most severely congested line, say line (60,61), see the purple curve in Fig. 5 ###reference_###. Consequently, the overall congestion is reduced after each iteration.\nAfter ten iterations, the adjustment of line susceptances becomes sufficiently small and the algorithm stops. The total computation time is 38.3s, which means the computation time per iteration is about 3.8s.\nThe generation cost finally decreases from 321571.7$/h to 310210.0$/h, i.e., 3.5% cost reduction. Although the congestion is not fully eliminated, the cost reduction is significant and hence S1 is satisfactory.\nAlso note that the system has totally 186 lines and the reduction is achieved by assuming flexible susceptances at only nine lines (less than 5% of lines).\nWe then compare the performances of these five solutions in Table V ###reference_###.\nWe have a similar observation to the test on the IEEE 14-bus system, i.e., the presence of network flexibility makes a great contribution to cost reduction especially when the system has a high penetration of uncertain renewables.\nThis again highlights the capability of network flexibility against the impact of renewable uncertainties on operational economy.\n###figure_5###"
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "IV-C Further discussion on CCED parameters",
87
+ "text": "Taking IEEE 118-bus system as an example, this subsection further studies the influence of some important input parameters on CCED solutions."
88
+ },
89
+ {
90
+ "section_id": "4.3.1",
91
+ "parent_section_id": "4.3",
92
+ "section_name": "IV-C1 Degree of network flexibility",
93
+ "text": "The degree of flexibility plays a role in the generation cost reduction and we have fixed to 0.7 so far.\nFig. 6 ###reference_### shows how the generation cost of S1 (CCED with network flexibility) of IEEE 118-bus system changes with different degrees of network flexibility.\nWe observe that the cost decreases nearly linearly with .\nFor this particular system, should be greater than 0.5 in order to achieve more than 2% cost reduction."
94
+ },
95
+ {
96
+ "section_id": "4.3.2",
97
+ "parent_section_id": "4.3",
98
+ "section_name": "IV-C2 Location of flexible susceptance lines",
99
+ "text": "Note that the location of the nine flexible susceptance lines in the previous test is determined by trail-and-error.\nNow let us arbitrarily choose the location of more flexible susceptance lines, e.g., assume these twelve lines \n have flexible susceptances.\nIn this case, the generation cost of S1 (CCED with network flexibility) becomes 321065.5, which has little reduction comparing to 321571.6 (i.e., the cost of CCED solution without network flexibility).\nIt shows that the location of flexible susceptance lines is crucial to the solution quality and should be carefully chosen.\nThe optimal placement of flexible susceptance lines is beyond the scope of this paper and will be a future direction."
100
+ },
101
+ {
102
+ "section_id": "4.3.3",
103
+ "parent_section_id": "4.3",
104
+ "section_name": "IV-C3 Continuous network flexibility v.s. discrete network flexibility",
105
+ "text": "Let us set up the following experiment to compare the continuous network flexibility with the discrete version [23 ###reference_b23###, 24 ###reference_b24###].\nSuppose each transmission line in IEEE 118-bus system adopts the double-circuit line structure.\nLet denote the optimal line susceptance obtained in our previous test in Section IV-B ###reference_###, then we set the following line susceptance profile\nwhere simulates the double-circuit mode of line and simulates the single-circuit mode of line .\nSolving the CCED problem under this line susceptance profile, the consequent generation cost is 311879.6, which is significantly greater than 310210.0 (i.e., the cost of S1).\nIt shows the merit of continuously adjustable line susceptance over its discrete counterpart."
106
+ },
107
+ {
108
+ "section_id": "4.3.4",
109
+ "parent_section_id": "4.3",
110
+ "section_name": "IV-C4 Covariance between renewable outputs",
111
+ "text": "So far we have neglected the covariance between the renewable outputs.\nIn IEEE 118-bus system, assume every pair of renewable generators have the same covariance .\nFig. 7 ###reference_### shows how the the generation costs of S1 (CCED with network flexibility) and S2 (CCED without network flexibility) change with differemt values of .\nIt can be seen that the generation costs slowly decrease with .\nA stronger covariance actually implies a more similar behavior between renewable outputs and hence reduces the probability of some extreme cases, e.g., one renewable generation goes very large and another renewable generation goes very small.\nTherefore, the existence of renewable covariance slightly reduces the risk of constraint violations so that a lower-cost solution can be obtained."
112
+ },
113
+ {
114
+ "section_id": "4.3.5",
115
+ "parent_section_id": "4.3",
116
+ "section_name": "IV-C5 Gaussian uncertainty v.s. non-Gaussian uncertainty",
117
+ "text": "This test generalizes the Gaussian uncertainty to non-Gaussian case.\nLet denote the Gaussian uncertainty adopted in our previous test on IEEE 118-bus system in Section IV-B ###reference_###.\nIn the new test, suppose the renewable outputs follow the non-Gaussian distribution described by the GMM below\nwhere , , , , .\nThis non-Gaussian distribution function is illustrated in Fig. 8 ###reference_###.\nNote that the mean value of in this case is still equal to , so that this test scenario is comparable to the previous one in Section IV-B ###reference_###.\nUnder this GMM model, we solve the following three types of CCED problems and obtain the corresponding solutions:\nS1-GMM: CCED under non-Gaussian uncertainty with network flexibility. The optimal generation cost is 310568.5.\nS2-GMM: CCED under non-Gaussian uncertainty without network flexibility. The optimal generation cost is 322843.3.\nS3-GMM: CCED under non-Gaussian uncertainty with network flexibility but fixed participation factors . The optimal generation cost is 312208.5.\nIt turns out that S1-GMM, S2-GMM and S3-GMM all have slightly higher generation costs than their counterparts under Gaussian uncertainty (see S1, S2 and S3 in Section IV-B ###reference_###).\nThis higher cost is caused by that a more conservative dispatch scheme has to be adopted to hedge against the extra risk of constraint violations brought by the GMM long tail.\n###figure_6### ###figure_7### ###figure_8###"
118
+ },
119
+ {
120
+ "section_id": "5",
121
+ "parent_section_id": null,
122
+ "section_name": "Conclusion",
123
+ "text": "Continuous-type network flexibility has been incorporated into the CCED problem to cope with the impact caused by renewable uncertainties.\nFrom the analytical form of the CCED problem, we have discovered that the flexible line susceptances tune the base-case line flows and reduce the line capacities shrunk by uncertainties. Thus, network flexibility greatly contributes to congestion mitigation and generation cost saving.\nFurthermore, we have proposed an efficient solver for the CCED problem with network flexibility.\nUsing duality theory, we have established an SOCP master problem to optimize generation dispatch and a linear subproblem to optimize line susceptances.\nAlternately solving these two subproblems gives the solution to the CCED.\nThe extension of the proposed method from Gaussian uncertainty to non-Gaussian uncertainty has also been made.\nCase studies have shown that the operational economy under uncertainties is much improved with the help of network flexibility."
124
+ }
125
+ ],
126
+ "appendix": [],
127
+ "tables": {
128
+ "1": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table I: </span>IEEE 14-bus system: Generation info of CCED with/without network flexibility (in MW)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.8.8.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Gen</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.5.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.6.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.7.7.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.8.8.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.8.9.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">4.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">249.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">161.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">249.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.8.9.1.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.10.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.4\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">43.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">47.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">43.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.10.2.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.11.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">144.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.11.3.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.12.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">76.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.12.4.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.13.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.06</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">87.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.8.13.5.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.20</td>\n</tr>\n</tbody>\n</table>\n</figure>",
130
+ "capture": "Table I: IEEE 14-bus system: Generation info of CCED with/without network flexibility (in MW)"
131
+ },
132
+ "2": {
133
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table II: </span>IEEE 14-bus system: Line susceptance info of ED with network flexibility (in p.u.)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T2.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Line \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.5.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.6.6.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.7.7.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.7.8.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">(1,5)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.8.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">4.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.8.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.8.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">14.95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.8.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">13.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.8.1.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">13.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.8.1.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">8.52</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.7.9.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.7.9.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">(2,3)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.9.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">5.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.9.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.9.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">16.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.9.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.9.2.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.9.2.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.7.10.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row 
ltx_border_bb\" id=\"S4.T2.7.10.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">(6,11)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.7.10.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">5.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.7.10.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.7.10.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">16.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.7.10.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">15.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.7.10.3.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">15.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.7.10.3.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">9.55</td>\n</tr>\n</tbody>\n</table>\n</figure>",
134
+ "capture": "Table II: IEEE 14-bus system: Line susceptance info of ED with network flexibility (in p.u.)"
135
+ },
136
+ "3": {
137
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table III: </span>IEEE 14-bus system: Generation info of normal ED with/without network flexibility (in MW)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Gen</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">4.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">249.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">203.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.6.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">43.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">45.60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.7.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">111.24</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.8.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T3.4.8.4.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">74.48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.9.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.9.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.9.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.9.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.9.5.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">75.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.9.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">83.11</td>\n</tr>\n</tbody>\n</table>\n</figure>",
138
+ "capture": "Table III: IEEE 14-bus system: Generation info of normal ED with/without network flexibility (in MW)"
139
+ },
140
+ "4": {
141
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table IV: </span>IEEE 14-bus system: Comparison of generation costs (in $/h) with/without network flexibility or renewable uncertainty</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T4.2.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Solution</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.2.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Gen. cost</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.2.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.2.5.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.2.5.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Cost of</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.2.5.1.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">uncertainty</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.1.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.1.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Cost of</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">fixed \n</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.2.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.2.2.1.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Cost of</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.2.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.2.2.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">fixed \n</td>\n</tr>\n</table>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.2.3.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.3.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">18180.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.3.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.3.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.3.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.2.4.2.1\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">18186.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">6.1 (S1-S4)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.4.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.2.5.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.5.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">18206.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.5.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.5.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.5.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">25.9 (S3-S1)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.2.6.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.6.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">18287.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.6.4.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.6.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">107.6 (S5-S4)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.6.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T4.2.7.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.7.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">18578.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.7.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">290.9 (S2-S5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.7.5.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">392.4 (S2-S1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.2.7.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n</tbody>\n</table>\n</figure>",
142
+ "capture": "Table IV: IEEE 14-bus system: Comparison of generation costs (in $/h) with/without network flexibility or renewable uncertainty"
143
+ },
144
+ "5": {
145
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table V: </span>IEEE 118-bus system: Comparison of generation costs (in $/h) with/without network flexibility or renewable uncertainty</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T5.2.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Solution</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.2.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Gen. cost</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.2.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T5.2.2.5.1\">\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.2.5.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Cost of</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.2.5.1.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">uncertainty</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T5.1.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.1.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.1.1.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Cost of</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">fixed \n</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T5.2.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2.2.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.2.2.1.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Cost of</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.2.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.2.2.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">fixed \n</td>\n</tr>\n</table>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.2.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T5.2.3.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.3.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">309044.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.3.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.3.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.2.3.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T5.2.4.2.1\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.4.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">310210.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.4.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">1165.6 (S1-S4)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.4.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.4.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T5.2.5.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.5.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">310612.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.5.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.5.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.5.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">402.9 (S3-S1)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T5.2.6.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.6.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">317738.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.6.4.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.6.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">8694.2 (S5-S4)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.6.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.2.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T5.2.7.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">S2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.7.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">321571.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.7.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">3833.1 (S2-S5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.7.5.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">11361.7 (S2-S1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.2.7.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">N.A.</td>\n</tr>\n</tbody>\n</table>\n</figure>",
146
+ "capture": "Table V: IEEE 118-bus system: Comparison of generation costs (in $/h) with/without network flexibility or renewable uncertainty"
147
+ }
148
+ },
149
+ "image_paths": {
150
+ "1": {
151
+ "figure_path": "2107.11246v2_figure_1.png",
152
+ "caption": "Figure 1: A possible realization of flexible line susceptance via transformers.",
153
+ "url": "http://arxiv.org/html/2107.11246v2/x1.png"
154
+ },
155
+ "2": {
156
+ "figure_path": "2107.11246v2_figure_2.png",
157
+ "caption": "Figure 2: The algorithm flow chart.",
158
+ "url": "http://arxiv.org/html/2107.11246v2/x2.png"
159
+ },
160
+ "3": {
161
+ "figure_path": "2107.11246v2_figure_3.png",
162
+ "caption": "Figure 3: Diagram of the IEEE 14-bus system with renewables and flexible susceptance lines.",
163
+ "url": "http://arxiv.org/html/2107.11246v2/x3.png"
164
+ },
165
+ "4": {
166
+ "figure_path": "2107.11246v2_figure_4.png",
167
+ "caption": "Figure 4: IEEE 14-bus system: Generation costs and dual variables of flow constraints during the iteration.",
168
+ "url": "http://arxiv.org/html/2107.11246v2/x4.png"
169
+ },
170
+ "5": {
171
+ "figure_path": "2107.11246v2_figure_5.png",
172
+ "caption": "Figure 5: IEEE 118-bus system: Generation costs and dual variables of flow constraints during the iteration.",
173
+ "url": "http://arxiv.org/html/2107.11246v2/x5.png"
174
+ },
175
+ "6": {
176
+ "figure_path": "2107.11246v2_figure_6.png",
177
+ "caption": "Figure 6: IEEE 118-bus system: generation cost v.s. degree of network flexibility.",
178
+ "url": "http://arxiv.org/html/2107.11246v2/x6.png"
179
+ },
180
+ "7": {
181
+ "figure_path": "2107.11246v2_figure_7.png",
182
+ "caption": "Figure 7: IEEE 118-bus system: generation cost v.s. renewable covariance.",
183
+ "url": "http://arxiv.org/html/2107.11246v2/x7.png"
184
+ },
185
+ "8": {
186
+ "figure_path": "2107.11246v2_figure_8.png",
187
+ "caption": "Figure 8: Illustration of the adopted non-Gaussian uncertainty.",
188
+ "url": "http://arxiv.org/html/2107.11246v2/x8.png"
189
+ }
190
+ },
191
+ "validation": true,
192
+ "references": [],
193
+ "url": "http://arxiv.org/html/2107.11246v2"
194
+ }
20240225/2109.12965v3.json ADDED
@@ -0,0 +1,649 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ {
+ "title": "Text-based Person Search in Full Images via Semantic-Driven Proposal Generation",
+ "abstract": "Finding target persons in full scene images with a query of text description has important practical applications in intelligent video surveillance.\nHowever, different from real-world scenarios where bounding boxes are not available, existing text-based person retrieval methods mainly focus on the cross-modal matching between the query text descriptions and a gallery of cropped pedestrian images.\nTo close the gap, we study the problem of text-based person search in full images by proposing a new end-to-end learning framework which jointly optimizes the pedestrian detection, identification and visual-semantic feature embedding tasks.\nTo take full advantage of the query text, the semantic features are leveraged to instruct the Region Proposal Network to pay more attention to the text-described proposals.\nBesides, a cross-scale visual-semantic embedding mechanism is utilized to improve the performance.\nTo validate the proposed method, we collect and annotate two large-scale benchmark datasets based on the widely adopted image-based person search datasets CUHK-SYSU and PRW.\nComprehensive experiments are conducted on the two datasets, and compared with the baseline methods, our method achieves state-of-the-art performance.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "1. Introduction",
+ "text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### Rencently, image-based person reidentification (Ye et al., 2021 ###reference_b47###; Zhang et al., 2021 ###reference_b49###; Zhou et al., 2018 ###reference_b58###; Sarfraz et al., 2018 ###reference_b38###; Suh et al., 2018 ###reference_b39###; Liu et al., 2018 ###reference_b31###; Martinel et al., 2016 ###reference_b33###; Li et al., 2016 ###reference_b28###; Jiao et al., 2021 ###reference_b21###) and person search (Xiao et al., 2017 ###reference_b44###; Zheng et al., 2017a ###reference_b54###; He and Zhang, 2018 ###reference_b17###; Zhang et al., 2020 ###reference_b50###; Wang et al., 2019a ###reference_b42###; Lan et al., 2018 ###reference_b25###; Munjal et al., 2019 ###reference_b34###; Yan et al., 2019 ###reference_b46###; Munjal et al., 2019 ###reference_b35###; Islam, 2020 ###reference_b19###; Zhong et al., 2020 ###reference_b57###) problems (Figure 1(a) ###reference_sf1### and 1(b) ###reference_sf2###), which aim at matching a specific person with a gallery of cropped pedestrian images or finding a target person in a gallery of full (whole scene) images, have been widely studied in the computer vision community as their great application values in cross camera tracking (Chen et al., 2015 ###reference_b8###; Zheng et al., 2015 ###reference_b53###, 2011 ###reference_b55###; Wang et al., 2019b ###reference_b43###, 2020 ###reference_b41###; Zheng et al., 2020 ###reference_b52###; Zhai et al., 2019 ###reference_b48###; Han et al., 2019 ###reference_b16###; Dai et al., 2020 ###reference_b9###; Chen et al., 2020b ###reference_b5###; Dong et al., 2020 ###reference_b12###; Chen et al., 2020a ###reference_b7###), criminal investigation, person activity, intention analysis, etc.\nIn many real-world scenarios, such as finding criminals/suspects, a query image of the target person can not always be easily obtained, while text descriptions given by witnesses are available.\nIn such scenarios, it is necessary to develop the techniques for finding a target person with a given query text description.\nAlthough the text-based person retrieval task (Figure 1(c) ###reference_sf3###), which aims to match a given text query with a gallery of cropped person images, has been explored in recent years (Gao et al., 2021 ###reference_b14###; Jing et al., 2020 ###reference_b22###; Niu et al., 2020 ###reference_b36###; Chen et al., 2018a ###reference_b3###; Zhang and Lu, 2018 ###reference_b51###; Sarafianos et al., 2019 ###reference_b37###; Dong et al., 2019 ###reference_b11###; Liu et al., 2019 ###reference_b32###; Ji et al., 2018 ###reference_b20###; Jing et al., 2020 ###reference_b23###).\nHowever, there is still a step distance from the real-world scenarios as the bounding box annotations are unavailable and the query-described person needs to be searched in a gallery of full images.\nTo close the gap, we study the text-based person search (Figure 1(d) ###reference_sf4###) problem in this paper.\nNote that it is straightforward to breaking down the problem into two independent tasks: person detection and text-based person retrieval. As an off-the-shelf person detector would unavoidably introduce misdetections and misalignments, it could not be optimal when taking the detection results as inputs of the second stage retrieval model. 
The performance comparison in Section 5 ###reference_### also demonstrates the necessity of developing end-to-end methods.\nIn this paper, we propose a new end-to-end learning framework which integrates person detection, identification and image-text cross-modal matching and jointly optimizes the three tasks together.\nAs shown in Figure 3 ###reference_###, the detection network follows the Faster-RCNN pipeline: a Region Proposal Network (RPN) for person candidate generation is built on top of a Base-Net which is shared with the identification network.\nAlongside the conventional RPN which aims to output proposals according to the objectness scores, to pay more attention to the text-described proposals and filter out text-irrelevant ones, we propose a novel Semantic-Driven Region Proposal Net (SDRPN) where the RPN features are dynamically instructed by the semantic feature of the input text description.\nAfter obtaining the features of the proposals, the commonness and positions of the persons are supervised by the detection branch (Det-Net) with a Softmax classification loss and a regression loss,\nwhile the uniqueness of each person ID is discriminated by enforcing the OIM loss (Xiao et al., 2017 ###reference_b44###) on top of the identification branch (ID-Net).\nFurthermore, the BERT language model (Devlin et al., 2018 ###reference_b10###) is utilized to extract the text features at the sentence, sub-sentence and word levels.\nAnd the visual features are extracted at the global, regional and local scales via splitting and shuffling the proposal feature maps.\nThe similarity scores of the proposal-text pairs can be computed with the help of a cross attention mechanism to achieve cross-scale visual-semantic feature matching.\nTo validate the proposed method, we collect and annotate two large-scale benchmarks for text-based person search based on the widely adopted person search datasets CUHK-SYSU (Xiao et al., 2017 ###reference_b44###) and PRW (Zheng et al., 2017a ###reference_b54###).\nTo eliminate ambiguity, we name the corresponding datasets CUHK-SYSU-TBPS and PRW-TBPS, respectively.\nAs the person datasets already have the annotations of person bounding-boxes and IDs, we merely need to annotate text descriptions for each person bounding-box. In total, we collect and annotate 54,969 sentences for the CUHK-SYSU-TBPS and PRW-TBPS datasets. The textual descriptions contain abundant noun phrases and various sentence structures. And we give a statistical analysis of the text descriptions in both datasets. Extensive experiments are conducted on these two datasets and the results demonstrate the superiority of our proposed method. Compared with many baseline methods, the proposed method outperforms them by a large margin.\nThe main contribution of our paper is three-fold and can be summarized as:\nWe make the first attempt to conduct text-based person search in full images, which has more practical application value than text-based person retrieval from cropped pedestrian images.\nTo support this research direction, two benchmark datasets CUHK-SYSU-TBPS and PRW-TBPS with large-scale full images and rich text annotations are collected and annotated.\nWe propose a novel end-to-end learning framework where the person detection, identification and image-text embedding tasks are jointly optimized together.\nAnd it is worth noting that an SDRPN module is devised to focus on the proposals related to the text description. 
The proposed SDRPN can boost the final performance by 1.21% mAP, 1.86% Rank-1 on the CUHK-SYSU-TBPS dataset, and 0.73% mAP, 1.11% Rank-1 on the PRW-TBPS dataset.\nWe conduct comprehensive experiments on the two datasets and compare our method with many baselines.\nThe experimental results show that our method outperforms the baseline methods by a large margin and achieves state-of-the-art performance.\nThe rest of the paper is organized as follows: we briefly review the related work in Section 2 ###reference_###. Section 3 ###reference_### gives a statistical analysis of the collected datasets. In Section 4 ###reference_### we elaborate the proposed framework. The experimental results are reported and analyzed in Section 5 ###reference_###. And we conclude the paper in Section 6 ###reference_###."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Related Work",
+ "text": "In this section, we briefly review the related works from the following three aspects:"
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Person search",
+ "text": "Person search is to localize and identify a target person in a gallery of full images other than cropped pedestrian images in ReID task.\nSome approaches (Xu et al., 2014 ###reference_b45###; Zheng et al., 2017a ###reference_b54###; Chen et al., 2018c ###reference_b4###) proposed to break down the problem into two separate tasks, pedestrian detection and person re-identification.\nDifferent from the two-stage methods, some works devoted their efforts to propose an end-to-end learning strategy (Xiao et al., 2017 ###reference_b44###; Han et al., 2019 ###reference_b16###; Munjal et al., 2019 ###reference_b34###) aiming to jointly optimize the detection and re-identification tasks.\nXiao (Xiao et al., 2017 ###reference_b44###) firstly introduced an end-to-end person search network and proposed the Online Instance Matching (OIM) loss function for fast convergence.\nHan (Han et al., 2019 ###reference_b16###) proposed to refine the detection bounding boxes supervised by the re-identification training.\nMunjal (Munjal et al., 2019 ###reference_b34###) took full advantage of both the query and gallery images to jointly optimize detection and re-id network.\nAdditionally, Liu (Liu et al., 2017 ###reference_b30###) proposed Conv-LSTM based Neural Person Search Machines (NPSM) to perform the target person localization as an search area iterative shrinkage process.\nChang (Chang et al., 2018 ###reference_b2###) tranformed the search problem into a conditional decision-making process and trained relational context-aware agents to learn the localization actions via reinforcement learning.\nDifferent from the image based person search whose query is a cropped pedestrian image, in this work, we investigate the text-based person search problem which is much more challenging and able to meet the requirement of the scenarios where query image is not available in many situations."
22
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Text-Based Person Retrieval",
+ "text": "Considering that the image query is not always available in real-world scenes, Li (Li et al., 2017b ###reference_b27###) firstly introduced the text-based person retrieval task and collected a benchmark named CUHK-PEDES.\nEarly methods about text-based person retrieval concentrate on global feature alignment, like (Zheng et al., 2017b ###reference_b56###; Zhang and Lu, 2018 ###reference_b51###; Sarafianos et al., 2019 ###reference_b37###), which employed universal feature extraction networks to extract global feature representations for images and descriptions and made efforts to design more proper objective functions for this task.\nSuch as in (Zhang and Lu, 2018 ###reference_b51###), a cross-modal projection matching (CMPM) loss and a cross-modal projection classification (CMPC) loss were proposed for computing the similarity of image-text pair data.\nMeanwhile, there are also several methods (Li et al., 2017a ###reference_b26###; Chen et al., 2018b ###reference_b6###) employing local feature alignment to provide complementary information for global feature alignment.\nFor example, Li (Li et al., 2017a ###reference_b26###) applied the spatial attention, which relates each word with corresponding image regions, to refine the global alignment in the stage-1 training.\nRecently, many methods (Chen et al., 2018a ###reference_b3###; Gao et al., 2020 ###reference_b15###; Niu et al., 2020 ###reference_b36###) have applied global and local features of images and text descriptions to realize multi-scale matching and achieved better performance.\nNiu (Niu et al., 2020 ###reference_b36###) proposed a Multi-granularity Image-text Alignment (MIA) module, including global-global, global-local and local-local alignment, and improved the accuracy of retrieval by multi-grained feature alignment between visual representations and textual representations.\nAlthough the multi-scale alignment provide supplement for global feature matching, the alignment for each scale is fixed.\nGao (Gao et al., 2021 ###reference_b14###) realized the need to align visual and textual clues across all scales and proposed cross-scale alignment for text-based person search.\nText-based person retrieval has achieved great performance improvement in recent years, while the task setting still has a gap with the real-world scenarios. Therefore, in this paper, we study text based person search in full images."
28
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "2.3. Variants of Region Proposal Network",
+ "text": "Region proposal network (RPN) is a significant component in series of detection networks, and there are many studies making efforts to improve it in order to generate more accurate or task-relevant proposals.\nIn order to produce high-quality proposals and improve detection performance, Wang (Wang et al., 2019a ###reference_b42###) proposed Guided Anchoring Region Proposal Network, which learns to guide a sparse anchoring scheme and can be seamlessly integrated into proposal methods and detectors.\nBesides, (Vu et al., 2019 ###reference_b40###) introduces Cascade RPN, which systematically address the limitation of the conventional RPN that heuristically defines the anchors and aligns the features to the anchors for improving the region-proposal quality and detection performance.\nTo improve the generalization ability of neural networks for few-shot instance segmentation, Fan (Fan et al., 2020 ###reference_b13###) proposed attention guided RPN in order to generate class-aware proposals by making full use of the guidance effect from the support set.\nInspired by the above works, in this paper, we propose a Semantic-Driven Region Proposal Network for text-based person search, which employs semantic information from the query text description to generate semantically similar proposals."
34
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. Benchmarks for Text-based Person Search",
+ "text": "Since there is no existing datasets for the task of text-based person search, we build new benchmarks for evaluating our method.\nAiming at this task, the dataset need to include visual information of person bounding-box positions accompanied with person IDs and textual information of language descriptions. Therefore, based on the two widely adopted image-based person search datasets, CUHK-SYSU and PRW, which already contain person bounding-box and ID labels, we match each person box from train set and query set with text descriptions and propose our Text-based Person Search benchmark CUHK-SYSU-TBPS and PRW-TBPS.\nFor CUHK-SYSU-TBPS, there are 11,206 scene images and 15,080 person boxes with 5532 different IDs in train set, while 2,900 person boxes in query set. And we collect corresponding text descriptions from existing person retrieval dataset CUHK-PEDES (train_query&test_query), where each person box was labeled with two sentences.\nAs for PRW-TBPS, there are 5,704 images and 14,897 boxes, with 483 different IDs in train set and 2,056 boxes in query set.\nAnd text descriptions of all person boxes were annotated, in which the boxes from the train set were labeled with one sentence, and the boxes from the query set were labeled twice independently.\n###figure_5### ###figure_6### Here, we labeled the training person box once due to the fact that the large amount of repetition of person box share with the same ID, and the average number of each ID occurrence is ten times individuals than that in CUHK-SYSU-TBPS. Therefore, we believe one sentence of each box in PRW-TBPS dataset is capable of providing enough samples for each identity for training.\n###figure_7### The text descriptions of the datasets not only focus on person appearances, including clothes and body shape, but also pay attention to person actions, gestures and other details.\nTo some extent, vocabulary and sentence length are vital indicators to evaluate the capacity of the dataset. In total, there are 1,318,445 words and 5,934 unique words in the datasets. As Figure 2 ###reference_### shows, most sentences have 15 to 45 words in length, and the average word lengths of the datasets are 23.9 and 24.96 words respectively, which is much longer compared with other image-caption datasets like MS-COCO (Lin et al., 2014 ###reference_b29###) (5.18 words in average) and Visual Genome (Krishna et al., 2017 ###reference_b24###) (10.45 words in average)."
40
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "4. Method",
+ "text": "In this section, we introduce the proposed end-to-end learning method as illustrated in Figure 3 ###reference_###.\nFirstly, we briefly introduce the overview of the framework (Figure 3 ###reference_### (a)).\nThen two major components, namely Semantic-Driven RPN module and\nProposal-Text Embedding module are elaborated.\nFinally, we give the total loss function of training the proposed method."
46
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "4.1. Overview",
+ "text": "Our goal is to search the target person from full images with a query of text description.\nTo be exact, we decompose the task to three sub-tasks, namely pedestrian detection, identification and image-text cross-modal feature embedding/matching.\nAs shown in Figure 3 ###reference_### (a), the whole framework takes full images with text descriptions of labeled persons as input when training. The framework includes two paths, namely image-path and text-path.\nFor the image path, we follow the structure of one-stage person search method and add a multi-task head for localization, detection, and identification on top of the convolutional features of Faster-RCNN.\nWe first exploit the ResNet-50 as backbone and split it into two parts as Base-Net (Conv1 to Conv4-3) and ID-Net (Conv4-4 to Conv5).\nAs for the text-path, the semantic feature of the query text is encoded by a BERT language model (Devlin et al., 2018 ###reference_b10###).\nThen both the visual feature and semantic feature from the two paths are fed into proposal-text embedding module."
52
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "4.2. Semantic-Driven RPN",
+ "text": "The goal of our end-to-end text-based person search task is to \u201cdetect\u201d the right person matching with the text query. We propose SDRPN to filter out the irrelevant ones. Traditional RPN generates class-agnostic proposals for generic object detection.\nTo take full advantage of the text query which contains full semantic information,\nwe devise a Semantic-Driven RPN which leverages the semantic features from the text query to instruct the proposal generation process, aiming at paying more attention to those semanticly more similar candidates with the text description.\nSepecifically, inspired by the SENet (Hu et al., 2018 ###reference_b18###), the semantic features are utilized to re-weight the Base-Net feature maps.\nAs illustrated in Figure 3 ###reference_### (c), SDRPN includes a channel-wise attention mechanism to guide a standard RPN, generating the proposal boxes from the re-weighted image features.\nIn more detail, we use the semantic feature extracted from a BERT language model and unsqueeze it to .\nThe resulted feature is denoted as z and then we apply two fully connected layers and to squeeze and expand the feature z , from to then to , to emphasize important signal correlations.\nBased upon the sigmoid activation , the resulted excitation s is computed as follows.\nThen the excitation s is applied to the BaseNet feature maps X from a gallery image through channel-wise multiplication as follows:\nNote that SDRPN extracts proposals featuring at a text-similarity score and RPN pursues the standard objectness score,\nTherefore, as shown in Figure 3 ###reference_### (a), we use SDRPN in parallel with RPN when generating proposals by summing up the scores of corresponding anchor boxes to obtain better performance."
58
+ },
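The SE-style gating described above reduces to a few lines of code. Below is a minimal sketch, assuming a 768-d BERT sentence feature, 1024-channel Base-Net feature maps, and the reduction ratio of 16 reported in Section 5.2; the module and variable names (TextChannelGate, feat_map, text_emb) are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

class TextChannelGate(nn.Module):
    """Sketch of the SDRPN excitation: a text embedding re-weights the
    Base-Net feature channels before a standard RPN head is applied.
    Sizes and names are assumptions, not the authors' implementation."""
    def __init__(self, text_dim=768, channels=1024, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(text_dim, channels // reduction)  # squeeze
        self.fc2 = nn.Linear(channels // reduction, channels)  # expand
        self.relu = nn.ReLU(inplace=True)

    def forward(self, feat_map, text_emb):
        # feat_map: (B, C, H, W) Base-Net features; text_emb: (B, text_dim)
        s = torch.sigmoid(self.fc2(self.relu(self.fc1(text_emb))))  # (B, C)
        # channel-wise multiplication, broadcast over the spatial dims
        return feat_map * s.unsqueeze(-1).unsqueeze(-1)
```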
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "4.3. Proposal-Text Embedding",
+ "text": "The proposal-text feature embedding module aims to learn a common space for both the visual and text modality. To improve the performance, a cross-scale alignment scheme is borrowed in the embedding process.\nMulti-Scale Visual Feature Extraction.\nIn visual path, proposal features will be represented in three scales from coarse to fine, namely as global-scale, region-scale and local-scale.\nAs illustrated in Figure 3 ###reference_### (b), we take the output of ID-Net as the global-scale representations of the proposals.\nFurther, to better focus on local features and reduce the influence of large receptive field of CNN, we do the split and shuffle operation on the RoI-Aligned proposal features, which equally partitions the feature map into several horizontal stripes, and these set of the partitioned stripes are randomly shuffled and re-concatenated.\nThe re-concatenated feature maps then are passed through the ID-Net.\nAfter that, the output feature map of the region-scale branch is horizontally partitioned into stripes, each of which is further encoded as a region-scale feature corresponding to a certain region.\nFinally, a finer partition scheme is used to produce the local-scale features,\nMulti-Scale Semantic Feature Extraction.\nAs for the text path, we use the BERT language model to extract the semantic representation from three levels, namely sentence-level, sub sentence-level and word-level (Figure 3 ###reference_### (b)).\nWe use the final hidden state of token [CLS], which is added at the beginning of the sentence, as the representation of the whole sentence.\nFor the sub sentence-level, sentences are separated by commas resulting in several shorter sub-sentences.\nAnd we attach the [CLS] token to the beginning of each sub-sentence, whose final hidden state is treated as the representation of each sub-sentence.\nWhile as for the word-level, each final hidden state of word is considered as the word-level representation.\n###table_1### Proposal-Text cross scale alignment.\nAfter proposal and text feature extraction, we obtain a set of three-scale visual and semantic features.\nWe concatenate them to get the mixed visual features\n and mixed textual features , where m and n corresponds to the and part.\nTo get the cross attended features, fully connected layers are used to map the mixed visual features I to visual queries, keys and values, denoted by Q, K and V with weight matrix , and , respectively.\nAnd the mixed semantic features T are mapped to semantic queries, keys and values, denoted by , and .\nFirstly, the attended semantic feature A can be computed from the view of text-to-image attention mechanism as,\nThen we can obtain the relevance between the visual value and its corresponding semantic context by calculating the cosine similarity between V and A,\nwhere R denotes the relevance scores.\nThe similarity of text-to-image pair is then computed by averaging all components of R.\nMeanwhile, by alternating the semantic keys as queries and visual queries as keys respectively, and following the above procedure, the similarity of image-to-text pair\ndenoted by can be computed.\nThen, assuming that a mini-batch of person boxes and captions are given, and all image-to-text pairs are constructed as .\nNote that if is a matched pair, otherwise .\nTo maximize similarities between the matched pairs and push away the unmatched pairs, KL divergence is enforced to diminish the modality discrepancy.\nConsidering that the normalized similarities can be treated as the predicted 
matching probability, the normalized label vector can denote the ground-truth label distribution.\nFinally, the Cross-Scale Alignment Loss (CSAL) is calculated by,\nGlobal Matching.\nBesides the cross-scale alignment with mixed features, we additionally use the CMPM loss and CMPC loss (Zhang and Lu, 2018 ###reference_b51###) to supervise the cross-modal matching of the global-scale features.\nThe CMPM loss computes the matching probability of the proposal-text pair,\nand CMPC is a variant of the norm-softmax classification loss.\nWe refer readers to (Zhang and Lu, 2018 ###reference_b51###) for more details about these two losses."
+ },
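The cross-attention relevance computation can be sketched as follows. The exact attention equation is elided above, so the usual scaled-dot-product form is assumed here, and the fully connected projections of the queries, keys and values are omitted for brevity; the function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_scale_similarity(visual, textual):
    """Sketch of the text-to-image half of the cross-scale alignment.
    visual:  (m, d) mixed visual features (global + region + local parts)
    textual: (n, d) mixed semantic features (sentence + sub-sentence + word)
    Assumes scaled-dot-product attention; the paper's FC projections
    W_q, W_k, W_v are omitted in this sketch."""
    d = visual.size(-1)
    # visual queries attend over semantic keys/values
    attn = F.softmax(visual @ textual.t() / d ** 0.5, dim=-1)     # (m, n)
    attended = attn @ textual                                     # (m, d) semantic context A
    rel = F.cosine_similarity(visual, attended, dim=-1)           # (m,) relevance scores R
    return rel.mean()   # pair similarity = average over all components of R
```

The image-to-text similarity follows by swapping the roles of the two modalities, as the section describes.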
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "4.4. Overall Loss Function",
+ "text": "The whole framework is trained via an end-to-end strategy and pursue the joint optimization of all the loss functions for each task.\nMore specifically, the sub-network for person detection is supervised with a classification loss (), a regression loss (), a RPN objectness loss (), and a RPN box regression loss ().\nWhile for the supervision of the identification network, the adopted loss function is the OIM loss ().\nTo learn a common feature space for proposals and text descriptions, we adopt CMPM Loss (), CMPC Loss (), and cross-scale alignment loss (). Therefore, the overall loss function is formulated as:\nwhere are responsible for the relative loss importance."
70
+ },
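The joint objective is a weighted sum of the eight terms above. A minimal sketch, assuming the individual loss values are computed elsewhere; the coefficient values follow Section 5.2 (all set to 1 except 0.1 for CSAL), and the dictionary keys are illustrative names, not the paper's notation.

```python
def total_loss(losses, weights=None):
    """Sketch of the overall objective: a weighted sum of the detection,
    identification and embedding losses. `losses` maps term names to
    scalar loss values (e.g., torch tensors)."""
    if weights is None:
        weights = {"det_cls": 1.0, "det_reg": 1.0, "rpn_obj": 1.0,
                   "rpn_reg": 1.0, "oim": 1.0, "cmpm": 1.0,
                   "cmpc": 1.0, "csal": 0.1}  # CSAL down-weighted per Sec. 5.2
    return sum(weights[name] * value for name, value in losses.items())
```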
+ {
+ "section_id": "4.5",
+ "parent_section_id": "4",
+ "section_name": "4.5. Inference",
+ "text": "During the inference time, the global text feature and global visual features are extracted to represent the textual query and candidate proposals.\nThe text query features are extracted from the fine-tuned BERT model, while quantities of proposal features are obtained from the trained visual module of joint-optimized model after inputting corresponding gallery images.\nThen, we compute the cosine similarity between the query feature and proposal features to sort and rank the candidate bounding boxes."
76
+ },
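The inference procedure amounts to cosine-similarity ranking, as in the hypothetical sketch below; query_feat and proposal_feats stand for the global text and visual embeddings described above, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def rank_proposals(query_feat, proposal_feats, boxes, top_k=10):
    """Sketch of the inference step: score every detected proposal
    against the text query by cosine similarity and rank them.
    query_feat: (d,) global text feature from the fine-tuned BERT.
    proposal_feats: (N, d) global visual features of gallery proposals.
    boxes: list of N bounding boxes aligned with proposal_feats."""
    sims = F.cosine_similarity(query_feat.unsqueeze(0), proposal_feats, dim=-1)
    order = torch.argsort(sims, descending=True)[:top_k]
    return [(boxes[i], sims[i].item()) for i in order]
```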
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "5. Experiment",
+ "text": "In this section, we report and analyze the experimental results on the collected datasets.\nFirstly, we describe the details of the datasets and evaluation protocols as well as the implementation details.\nTo verify the effectiveness of the proposed end-to-end approach, we investigate several two-stage solutions as baseline methods.\nIn addition, we conduct ablation studies to analyze the influence of each component in our proposed method.\nFinally, both quantitative and qualitative results are exhibited."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "5.1. Datasets and Protocol",
+ "text": "The collected datasets are built upon the two existing image-based person search datasets CUHK-SYSU and PRW.\nOn CUHK-SYSU-TBPS, The testing set includes 2,900 query descriptions and 6,978 gallery images.\nFor each query, different gallery sizes are set to assess the scaling ability of different models.\nWe use the gallery size of 100 by default.\nAs for PRW-TBPS, the testing set contains 2,057 query persons and each of them are to be searched in a gallery with 6,112 images.\nTo measure the performance of text-based person search task, the widely adopted mean Average Precision (mAP) and Cumulative Matching Characteristics (CMC top-K) are used as standard metrics.\nHowever, different from the conventional retrieval tasks, a candidate in the ranking list would only be considered correct when its IoU with the ground truth bounding box is greater than 0.5.\n###table_2### ###table_3###"
88
+ },
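The IoU-gated correctness rule above can be sketched as follows; the (x1, y1, x2, y2) box format is an assumption, and the helper name is hypothetical.

```python
def is_correct_match(pred_box, gt_box, iou_thresh=0.5):
    """Sketch of the evaluation rule: a ranked candidate box only counts
    as correct if its IoU with the ground truth exceeds the threshold.
    Boxes are (x1, y1, x2, y2) tuples."""
    ix1, iy1 = max(pred_box[0], gt_box[0]), max(pred_box[1], gt_box[1])
    ix2, iy2 = min(pred_box[2], gt_box[2]), min(pred_box[3], gt_box[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred_box[2] - pred_box[0]) * (pred_box[3] - pred_box[1])
    area_g = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    union = area_p + area_g - inter
    return union > 0 and inter / union > iou_thresh
```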
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "5.2. Implementation Details",
+ "text": "We take ImageNet-pretrained ResNet-50 to initialize parameters of Base-Net and ID-Net.\nWhen obtaining the mixed visual features, the region branch splits the proposal feature map into two stripes equally and the local branch splits the feature map into three stripes equally. The dimension of the visual features at different scales is set to 768-d.\nFor the text semantic feature extraction module, we use BERT-Base-Uncased model as the backbone, which is pretrained on a large corpus including Book Corpus and Wikipedia.\nThe dimension of different scale textual features is also set to 768.\nIn RPN and SDRPN, we adopt the same anchors and adjust the anchor sizes to the objects in the dataset.\nWe adopt scales {8, 16, 32} and aspect ratios {1, 2}.\nNon maximum Suppression (NMS) with a threshold of 0.7 is used to filter out redundant boxes and 2000 bounding boxes are left from original 12,000 bounding boxes after NMS.\nThen, we select top 4 proposals for each person identity from 2000 RoIs based on objectiveness score, and meanwhile the IoU of the selected proposals have to be bigger than a threshold of 0.5.\nDuring inference, we keep 300 boxes from 6000 bounding boxes and sent them into detection branch.\nIn SDRPN, the reduction ratio is set to 16.\nThe loss function of SDRPN and RPN are both cross-entropy loss. Note that in SDRPN, anchor boxes that overlap with the text-relevant persons are marked as positives.\nWhile in the standard RPN, all persons in the image are positive samples.\nThe batch size is set to 4, and we use horizontally flipping as data augmentation.\nThe model contains three groups of parameters, namely detection, identification and projection parameters.\nThe detection parameters are optimized with SGD optimizer with momentum of 0.9, and identification and projection parameters adopt Adam optimizer.\nThe learning rate of three groups parameters are set to 0.0001, 0.001, 0.0001 respectively, and the model is trained for 12 epochs in total.\nThe hyper-parameters of each loss function are set to 1, except the one for CSAL loss which is set to 0.1."
94
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "5.3. Compared Methods",
+ "text": "Since there is no existing method specifically designed for text-based person search, we explore typical methods of related tasks and split the task into two parts, detection and text-image alignment, which are combined together as two-stage method to compared with our proposed one stage model.\nSpecifically, we take fully trained person search model to extract visual features of labeled person image.\nAlso we use language model to extract textual features of language description.\nThe distances between visual feature and text feature are measured under the supervision of CMPM and CMPC loss.\nDuring inference, the similarity between the query text and the detected person bounding boxes is calculated based on their embedded features.\nThe chosen person search method contains OIM (Xiao et al., 2017 ###reference_b44###), NAE (Chen et al., 2020b ###reference_b5###), and BSL, which are all based on Faster-RCNN while the model architectures are different.\nIn OIM, the box regression and region classification losses remain the same as in Faster-RCNN, with an additional identity classification loss as supervision.\nIn contrast, NAE removes the original region classification branch and uses the embedding norm as the binary person/background classification confidence.\nBSL is the network used in our framework which is also evaluated as an image-based person search method.\nDifferent with OIM and NAE, BSL uses one convolution layer instead of identification net for detection branch,\nmeanwhile the output feature of identification net is directly encoded as final feature vector for matching without further projection to reduce the feature dimension.\nAs for language model, BiLSTM and BERT are both used as text feature extractors.\nNotebly, the number of hidden units of BiLSTM is set to 2048 when matching visual features extracted by BSL, otherwise the hidden units number is 256.\nWhile, for BERT, we use convolution to adjust the shape of text features.\nAll BiLSTM networks are trained for 150 epochs and BERT is trained with 50 epochs.\n###figure_8### ###figure_9###"
100
+ },
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "5.4. Quantitative and Qualitative Results",
+ "text": "###figure_10### Comparison with baseline methods.\nTable 1 ###reference_### shows the results of our proposed framework and the compared two stage methods on both the datasets.\nOn CUHK-SYSU-TBPS dataset, our method acheived 49.34% Rank-1 accuracy and 50.36% mAP, which is +8.51% Rank-1 and +1.97% mAP better than the superior compared method BSL+BERT.\nOn the more challenging PRW-TBPS dataset, our method acheived 21.63% Rank-1 accuracy and 11.93% mAP, which is +4.81% Rank-1 and +1.23% mAP better than BSL+BERT method.\nAs can be seen, our approach achieves state-of-the-art performances in terms of both mAP and CMC top-1 to 10 accuracies.\nIt can also be clearly seen from Table 1 ###reference_### that (1) When using BERT as the text feature extraction model, it brings significant improvement for our task compared with BiLSTM on both datasets. It indicates that BERT is more capable of encoding complex text descriptions into semantic feature vectors for joint alignment with visual features in a certain way. (2) BSL architecture is more suitable for the task compared with OIM and NAE, as about 1%-3% improvement can be obtained in terms of both CMC top-1 accuracy and mAP on both datasets. We infer that the usage of separate Det-Net and ID-Net for detection and identification, is better for person search model to obtain more accurate location of detection and more discriminative visual features. (3) The proposed end-to-end solution has clearly advantages as it can beat all the two-stage counterparts.\nResults over varying gallery sizes on CUHK-SYSU-TBPS.\nAs shown in Figure 4 ###reference_###, when the gallery size of CUHK-SYSU-TBPS is adjusted from 50, 500, 1000, 2000 to 4000, all of the methods degraded the performance while our method exhibits the consistent advantages compared with others.\nComponent analysis.\nWe analyze three major components of our method, namely BERT, Cross-scale Alignment and SDRPN, by observing the performance improvement when progressively adding each component.\nThe results are reported in Table V ###reference_###.\nThe first row of Table V ###reference_### is a baseline one stage model which adopts BERT to extract text features with a standard RPN. CMPC and CMPM loss are used for training the model.\nNote that even the baseline one stage model outperforms the best two stage model.\nThen, we introduce cross-scale alignment for extracted mixed features and add CSAL loss for joint text-image embedding, which brings +1.62% and +1.12% performance improvement in terms of CMC top-1 accuracy on the two datasets.\nBased upon that, SDRPN when combined together with standard RPN as aforementioned improves the CMC top-1 accuracy by additional +1.86% and +1.11% on the two datasets, respectively.\nWe also campare the the two-stage methods , i.e. detection + text-based re-id with our end-to-end text-based person search method. For detection, we select two classical detection methods, Faster-RCNN and DeTR, to serve as the detection method. 
We adopt the text-based re-id method \u201cCMPC+CMPM\u201d (Zhang and Lu, 2018 ###reference_b51###) as its complexity is more or less comparable with our method.\nThe experimental results are shown in Table 3 ###reference_###. Our end-to-end method is significantly better than the two-stage methods in performance, outperforming them by 10.8% in mAP, which shows the necessity of developing end-to-end person search methods.\nTo further verify whether the global feature and the mixed feature should be aligned separately using different losses, we replace CSAL with CMPM+CMPC to conduct the experiment, and we find that the final result is slightly inferior: as shown in Table IV ###reference_###, the original setting outperforms the replaced setting by about 2%.\nTo find the best hyper-parameter for the CSAL, we conducted hyper-parameter experiments on . The specific experimental results are shown in Table V ###reference_###. The experiments show that a higher CSAL coefficient has a negative effect on the final results, which may be mainly due to the fact that the magnitude of the CSAL loss itself is greater than that of the other two losses.\n###table_4### ###table_5### Qualitative results.\nFigure 5 ###reference_### illustrates some text-based person search results.\nThe boxes with green lines represent correct search results, while the boxes with red lines denote failure results.\nThe top 2 rows demonstrate successful cases where the correct person boxes are within the top-3 retrieved full images.\nFrom these successful cases, we can observe that our method can spot the target person appearing at different angles and against different backgrounds in full scene images.\nEven though in some cases, like the second case of the middle row in Figure 5 ###reference_###, the size of the person box is relatively small compared to the full scene image, it can also be correctly searched through a text description by our model.\nMeanwhile, in the failure cases, some search results have characteristics that partially fit the query description; for instance, in the bottom-left case in Figure 5 ###reference_###, the first two persons both wear black T-shirts and the third man carries a black backpack. And they all wear blue pants, which is very close to part of the query description."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "6. Conclusion",
+ "text": "In this paper, we investigate the problem of text-based person search in full scene images to meet the real-world scenarios where both the query image and the bounding boxes are not available.\nSpecifically, instead of a straightforward two-stage method, we proposed a new end-to-end learning framework which integrated the pedestrian detection, person identification and image-text cross-modal feature embedding tasks together and jointly optimize them to achieve better performance.\nTo take full advantage of the query text description, we devise a Semantic-Driven Region Proposal Network where the proposal generation process is instructed to pay attention to those candidates which are more similar with the semantic features of the text description.\nFurthermore, a cross-scale visual-semantic feature matching mechanism is introduced to improve the final searching results.\nTo validate the proposed approach, we collect and annotate two large scale text-based person search benchmark datasets named as CUHK-SYSU-TBPS and PRW-TBPS which are built on top of the widely adopted image-based person search datasets CUHK-SYSU and PRW, respectively.\nWe conduct extensive experiments and the experimental results on the two datasets demonstrated that our proposed method achieved state-of-the-art performance compared with many classical baseline methods."
112
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Performance comparison of the baseline methods and the proposed method on CUHK-SYSU-TBPS and PRW-TBPS.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.1.1.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T1.1.1.2\">CUHK-SYSU-TBPS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T1.1.1.3\">PRW-TBPS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1\">mAP(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.2\">top-1(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.3\">top-5(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.4\">top-10(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.5\">mAP(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.6\">top-1(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.7\">top-5(%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.8\">top-10(%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1\">OIM+BiLSTM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.2\">23.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.3\">17.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.4\">38.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.5\">49.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.6\">4.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.7\">6.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.8\">16.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.9\">22.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.1\">NAE+BiLSTM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2\">23.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3\">16.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.4\">38.45</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.5\">49.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.6\">5.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.7\">7.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.8\">17.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.9\">24.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.1\">BSL+BiLSTM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.2\">26.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.3\">20.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4\">42.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.5\">52.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.6\">3.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.7\">6.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.8\">15.41</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.9\">22.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.1\">OIM+BERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.2\">43.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.3\">36.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4\">62.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.5\">72.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.6\">8.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.7\">14.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.8\">30.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.9\">39.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.1\">NAE+BERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.2\">45.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.3\">39.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.4\">64.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.5\">74.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.6\">9.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.7\">14.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.8\">31.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.9\">39.91</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.1\">BSL+BERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.2\">48.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.3\">40.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.4\">67.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.5\">76.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.6\">10.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.7\">16.82</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.8\">34.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.9\">45.36</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.2.1\">50.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.3.1\">49.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.4.1\">74.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.5.1\">82.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.6.1\">11.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.7.1\">21.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.8.1\">42.54</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.9.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.9.1\">52.99</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 1. Performance comparison of the baseline methods and the proposed method on CUHK-SYSU-TBPS and PRW-TBPS."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>Performance comparison of different components in our method.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T2.9\">\n<tr class=\"ltx_tr\" id=\"S5.T2.9.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.9.10.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.9.10.1.1\" style=\"font-size:90%;\">BERT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.9.10.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.9.10.2.1\" style=\"font-size:90%;\">Cross-scale Alignment</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.9.10.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.9.10.3.1\" style=\"font-size:90%;\">SDRPN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S5.T2.9.10.4\"><span class=\"ltx_text\" id=\"S5.T2.9.10.4.1\" style=\"font-size:90%;\">CUHK-SYSU-TBPS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T2.9.10.5\"><span class=\"ltx_text\" id=\"S5.T2.9.10.5.1\" style=\"font-size:90%;\">PRW-TBPS</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.11.1\"><span class=\"ltx_text\" id=\"S5.T2.9.11.1.1\" style=\"font-size:90%;\">mAP(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.9.11.2\"><span class=\"ltx_text\" id=\"S5.T2.9.11.2.1\" style=\"font-size:90%;\">top-1(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.11.3\"><span class=\"ltx_text\" id=\"S5.T2.9.11.3.1\" style=\"font-size:90%;\">mAP(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.11.4\"><span class=\"ltx_text\" id=\"S5.T2.9.11.4.1\" style=\"font-size:90%;\">top-1(%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.4\"><span class=\"ltx_text\" id=\"S5.T2.3.3.4.1\" style=\"font-size:90%;\">48.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.5\"><span class=\"ltx_text\" id=\"S5.T2.3.3.5.1\" style=\"font-size:90%;\">45.86</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.6\"><span class=\"ltx_text\" id=\"S5.T2.3.3.6.1\" style=\"font-size:90%;\">10.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.7\"><span class=\"ltx_text\" id=\"S5.T2.3.3.7.1\" style=\"font-size:90%;\">19.40</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.4\"><span class=\"ltx_text\" id=\"S5.T2.6.6.4.1\" style=\"font-size:90%;\">49.15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.5\"><span class=\"ltx_text\" id=\"S5.T2.6.6.5.1\" 
style=\"font-size:90%;\">47.48</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6\"><span class=\"ltx_text\" id=\"S5.T2.6.6.6.1\" style=\"font-size:90%;\">11.20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.7\"><span class=\"ltx_text\" id=\"S5.T2.6.6.7.1\" style=\"font-size:90%;\">20.52</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T2.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.9.4.1\" style=\"font-size:90%;\">50.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T2.9.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.9.5.1\" style=\"font-size:90%;\">49.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.9.6.1\" style=\"font-size:90%;\">11.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.9.7.1\" style=\"font-size:90%;\">21.63</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 2. Performance comparison of different components in our method."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3. </span>Performance comparison of two-stage methods and ours on PRW-TBPS dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.15\">\n<tr class=\"ltx_tr\" id=\"S5.T3.15.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.15.16.1\"><span class=\"ltx_text\" id=\"S5.T3.15.16.1.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.15.16.2\"><span class=\"ltx_text\" id=\"S5.T3.15.16.2.1\">mAP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.15.16.3\"><span class=\"ltx_text\" id=\"S5.T3.15.16.3.1\">top-1%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.15.16.4\">top-5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.15.16.5\">top-10%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.15.16.6\">inference time</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.6\">DETR+\u201cCMPM+CMPC\u201d</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.5.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.10.10.6\">Faster-RCNN+\u201cCMPM+CMPC\u201d</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.9.9.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.10.10.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T3.15.15.6\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.15.15.5\"></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 3. Performance comparison of two-stage methods and ours on PRW-TBPS dataset."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table IV. </span>Replacing the CSAL loss with CMPM.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T4.8\">\n<tr class=\"ltx_tr\" id=\"S5.T4.8.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T4.8.9.1\"><span class=\"ltx_text\" id=\"S5.T4.8.9.1.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.8.9.2\"><span class=\"ltx_text\" id=\"S5.T4.8.9.2.1\">mAP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.8.9.3\"><span class=\"ltx_text\" id=\"S5.T4.8.9.3.1\">top-1%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.8.9.4\">top-5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.8.9.5\">top-10%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.4.4.5\">W/ Replacing</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.4.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T4.8.8.5\">W/O Replacing</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.8.8.4\"></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table IV. Replacing the CSAL loss with CMPM."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table V. </span>The experiments to tuning the coefficient .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T5.18\">\n<tr class=\"ltx_tr\" id=\"S5.T5.18.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T5.18.17.1\"><span class=\"ltx_text\" id=\"S5.T5.18.17.1.1\" style=\"font-size:90%;\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.18.17.2\"><span class=\"ltx_text\" id=\"S5.T5.18.17.2.1\" style=\"font-size:90%;\">mAP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.18.17.3\"><span class=\"ltx_text\" id=\"S5.T5.18.17.3.1\" style=\"font-size:90%;\">top-1%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.18.17.4\"><span class=\"ltx_text\" id=\"S5.T5.18.17.4.1\" style=\"font-size:90%;\">top-5%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.18.17.5\"><span class=\"ltx_text\" id=\"S5.T5.18.17.5.1\" style=\"font-size:90%;\">top-10%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.6.4.5\"><span class=\"ltx_text\" id=\"S5.T5.6.4.5.1\" style=\"font-size:90%;\">0.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.3.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.4.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.5.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.6.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.10.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T5.10.8.5\"><span class=\"ltx_text\" id=\"S5.T5.10.8.5.1\" style=\"font-size:90%;\">0.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.7.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.8.6.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.9.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.10.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.14.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T5.14.12.5\"><span class=\"ltx_text\" id=\"S5.T5.14.12.5.1\" style=\"font-size:90%;\">0.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.11.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.12.10.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.13.11.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.14.12.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.18.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T5.18.16.5\"><span class=\"ltx_text\" id=\"S5.T5.18.16.5.1\" style=\"font-size:90%;\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.15.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.16.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.17.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.18.16.4\"></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table V. The experiments for tuning the coefficient ."
+ }
+ },
+ "image_paths": {
138
+ "1(a)": {
139
+ "figure_path": "2109.12965v3_figure_1(a).png",
140
+ "caption": "(a) Person ReID\nFigure 1. Comparison of the four tasks. (a) Person ReID. Query: cropped person image. Gallery: cropped person images. (b) Person Search. Query: cropped person image. Gallery: full scene images. (c) Text-based person retrieval. Query: text description. Gallery: cropped person images. (d) Text-based person search. Query: text description. Gallery: full scene images.",
141
+ "url": "http://arxiv.org/html/2109.12965v3/x1.png"
142
+ },
143
+ "1(b)": {
144
+ "figure_path": "2109.12965v3_figure_1(b).png",
145
+ "caption": "(b) Person Search\nFigure 1. Comparison of the four tasks. (a) Person ReID. Query: cropped person image. Gallery: cropped person images. (b) Person Search. Query: cropped person image. Gallery: full scene images. (c) Text-based person retrieval. Query: text description. Gallery: cropped person images. (d) Text-based person search. Query: text description. Gallery: full scene images.",
146
+ "url": "http://arxiv.org/html/2109.12965v3/x2.png"
147
+ },
148
+ "1(c)": {
149
+ "figure_path": "2109.12965v3_figure_1(c).png",
150
+ "caption": "(c) Text-based person retrieval\nFigure 1. Comparison of the four tasks. (a) Person ReID. Query: cropped person image. Gallery: cropped person images. (b) Person Search. Query: cropped person image. Gallery: full scene images. (c) Text-based person retrieval. Query: text description. Gallery: cropped person images. (d) Text-based person search. Query: text description. Gallery: full scene images.",
151
+ "url": "http://arxiv.org/html/2109.12965v3/x3.png"
152
+ },
153
+ "1(d)": {
154
+ "figure_path": "2109.12965v3_figure_1(d).png",
155
+ "caption": "(d) Text-based person search\nFigure 1. Comparison of the four tasks. (a) Person ReID. Query: cropped person image. Gallery: cropped person images. (b) Person Search. Query: cropped person image. Gallery: full scene images. (c) Text-based person retrieval. Query: text description. Gallery: cropped person images. (d) Text-based person search. Query: text description. Gallery: full scene images.",
156
+ "url": "http://arxiv.org/html/2109.12965v3/x4.png"
157
+ },
158
+ "2(a)": {
159
+ "figure_path": "2109.12965v3_figure_2(a).png",
160
+ "caption": "(a) CUHK-SYSU-TBPS\nFigure 2. The word length distributions on the benchmark datasets CUHK-SYSU-TBPS and PRW-TBPS.",
161
+ "url": "http://arxiv.org/html/2109.12965v3/x5.png"
162
+ },
163
+ "2(b)": {
164
+ "figure_path": "2109.12965v3_figure_2(b).png",
165
+ "caption": "(b) PRW-TBPS\nFigure 2. The word length distributions on the benchmark datasets CUHK-SYSU-TBPS and PRW-TBPS.",
166
+ "url": "http://arxiv.org/html/2109.12965v3/x6.png"
167
+ },
168
+ "3": {
169
+ "figure_path": "2109.12965v3_figure_3.png",
170
+ "caption": "Figure 3. (a) depicts The overall architecture of the proposed learning framework. (b) shows the process of mixed feature extraction. (c) exhibits the SDRPN.",
171
+ "url": "http://arxiv.org/html/2109.12965v3/x7.png"
172
+ },
173
+ "4(a)": {
174
+ "figure_path": "2109.12965v3_figure_4(a).png",
175
+ "caption": "(a) Top-1 result\nFigure 4. Results comparison with different gallery size of CUHK-SYSU-TBPS",
176
+ "url": "http://arxiv.org/html/2109.12965v3/x8.png"
177
+ },
178
+ "4(b)": {
179
+ "figure_path": "2109.12965v3_figure_4(b).png",
180
+ "caption": "(b) mAP result\nFigure 4. Results comparison with different gallery size of CUHK-SYSU-TBPS",
181
+ "url": "http://arxiv.org/html/2109.12965v3/x9.png"
182
+ },
183
+ "5": {
184
+ "figure_path": "2109.12965v3_figure_5.png",
185
+ "caption": "Figure 5. Examples of text-based person search.",
186
+ "url": "http://arxiv.org/html/2109.12965v3/x10.png"
187
+ }
188
+ },
189
+ "validation": true,
+ "url": "http://arxiv.org/html/2109.12965v3"
+ }
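
The per-paper records added in this commit share a common JSON layout, visible in the rendered diff above: a "title", an "abstract", a "sections" array (each entry carrying "section_id", "parent_section_id", "section_name", and "text"), per-table entries with "table_html" and "capture", an "image_paths" map with "figure_path", "caption", and "url" per figure, a "validation" flag, a "references" array, and the source arXiv "url". Below is a minimal, illustrative Python sketch for loading and inspecting one such record; the helper name summarize_record and the example path are hypothetical, and the schema assumptions are only those observed in this diff.

import json

def summarize_record(path):
    # Load one per-paper record; schema as observed in this commit's diffs.
    with open(path, encoding="utf-8") as f:
        paper = json.load(f)
    print(paper["title"])
    print(paper["url"])  # arXiv HTML page the record was extracted from
    # Walk the flat section list; "parent_section_id" marks subsections.
    for sec in paper.get("sections", []):
        indent = "  " if sec.get("parent_section_id") else ""
        print(indent + sec["section_id"] + "  " + sec["section_name"])
    # Each figure entry maps an extracted PNG to its caption and source URL.
    for fig_id, fig in paper.get("image_paths", {}).items():
        print("Figure " + fig_id + " -> " + fig["figure_path"])

# Hypothetical usage with one of the files added above:
# summarize_record("20240225/2204.12243v4.json")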
20240225/2202.07082v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2204.08381v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2204.12243v4.json ADDED
@@ -0,0 +1,212 @@
+ {
+ "title": "Analysis of a Spatially Correlated Vehicular Network Assisted by Cox-distributed Vehicle Relays",
+ "abstract": "In vehicle-to-everything (V2X) communications, roadside units (RSUs) play an essential role in connecting various network devices. In some cases, users may not be well served by RSUs because of congestion, attenuation, or interference. In these cases, vehicular relays associated with RSUs can be used to serve those users. This paper uses stochastic geometry to model and analyze a spatially correlated heterogeneous vehicular network where both RSUs and vehicular relays serve network users such as pedestrians or other vehicles. We present an analytical model where the spatial correlation between roads, RSUs, relays, and users is systematically modeled via Cox point processes. Assuming users are associated with either RSUs or relays, we derive the association probability and the coverage probability of the typical user. Then, we derive the user throughput by considering the interactions of the links unique to the proposed network. This paper gives practical insights into designing spatially correlated vehicular networks assisted by vehicle relays. For instance, we express network performance metrics such as the user association probability, the SIR coverage probability, and the network throughput as functions of the key geometric variables of the network. In practice, this helps one optimize the network to achieve ultra-reliability or maximum user throughput in a spatially correlated vehicular network by varying key aspects such as the relay density or the bandwidth allocated to relays.",
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": ""
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Background and Motivation",
15
+ "text": "Recent innovations have made it possible for vehicles to play new roles in urban environments, extending their traditional transportation role [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. Vehicles will participate in various road safety and efficiency applications by communicating with neighboring vehicles, pedestrians, traffic lights, and Internet-of-Things (IoT) devices [2 ###reference_b2###, 4 ###reference_b4###]. Advanced vehicles and their sensors provide ways to improve not only their own safety, but also that of others such as pedestrians [5 ###reference_b5###, 6 ###reference_b6###]. This innovative use of vehicles requires reliable communications among network entities such as vehicles, base stations, smart sensors, and pedestrians [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nVehicular networks featuring reliable and high capacity links can be achieved by base stations close to roads, namely RSUs. Connected to the core network through backhaul, RSUs will host advanced V2X applications [5 ###reference_b5###, 9 ###reference_b9###]. As the number of network users increases and vehicular networks have more services, some network users relying only on RSUs may experience limited coverage because of data congestion, load unbalancing, signal attenuation, and high interference.\nTo fight against these limitations, various technologies have been developed and among that, this paper focuses on the use of vehicular relays [10 ###reference_b10###, 11 ###reference_b11###]. Specifically, RSU-operated vehicular relays will reshape the topology of vehicular networks to increase the reliability and throughput of network [10 ###reference_b10###, 11 ###reference_b11###]. For instance, users in dense areas can communicate each other via relays, avoiding extra delays occurring at RSUs; or relays can forward important messages to the users far away from RSUs. Fig. 1 ###reference_### illustrates such an example.\n###figure_1### Focusing on the network topology and the geometric interaction between network elements, this paper studies the fundamental performance of a spatially correlated two-tiered heterogeneous vehicular network with RSUs and relays operated through them. (Fig. 1 ###reference_###.) In particular, to describe the spatial interaction between RSUs, relays, and users all at the same time, we employ a stochastic geometry framework [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. In particular, many studies used analytic models based on the Poisson point processes [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###] where network elements can be well captured as spatially independent components. Recently, Cox point processes has been employed to describe the locations of spatially correlated network elements such as roads, vehicles, and pedestrians. 
Specifically, the Cox models were extensively used in various papers including [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###] to analyze the basic network performance with vehicle transmitters and vehicle receivers.\nContinuing this line of work, this paper also employs Cox point processes to describe the locations of RSUs, users, and vehicle relays, all within the same road infrastructure. It is worth noting that, due to this practical representation of spatially correlated network elements\u2014RSUs, vehicular relays, and users, the network performance improved by vehicle relays can be fairly and accurately analyzed. To the best of the authors\u2019 knowledge, no prior work has addressed a system-level analysis of a vehicular network with vehicle relays, especially by emphasizing the network topology produced by RSUs, vehicle relays, and network vehicle users, all of which must be located on the common road infrastructure."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Theoretical Contributions",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "1.2.1",
25
+ "parent_section_id": "1.2",
26
+ "section_name": "I-B1 Modeling of a spatially correlated two-tier heterogeneous vehicular network",
27
+ "text": "In vehicular networks, network elements e.g., RSUs, vehicles transceivers, and pedestrians are all close to roads. In this paper, we focus on this geometric characteristic by modeling a road layout first as a Poisson line process and then distributing RSUs, vehicular relays, and users as Poisson point processes conditional on the line process. By constructing all elements conditionally on roads, we account for the fact that RSUs, relays, and users are located on roads and nowhere else. The proposed network modeling technique allows one to identify the geometric interaction of a two-tier heterogeneous vehicular network, especially between various communication links such as RSU-to-relay links, relay-to-user links, and RSU-to-user links. In contrast to our previous work [23 ###reference_b23###, 22 ###reference_b22###] where only a single set of vehicle transmitters is considered as a Cox point process, this work considers two sets of transmitters modeled as Cox point processes conditionally on a single road layout."
28
+ },
29
+ {
30
+ "section_id": "1.2.2",
31
+ "parent_section_id": "1.2",
32
+ "section_name": "I-B2 Association behavior of users and coverage probability",
33
+ "text": "Motivated by basic safety messages transmitted from network elements and received by nearby users [35 ###reference_b35###, 36 ###reference_b36###], we assume that network users are associated with their closest RSUs or closest relays. We derive the association probability as a function of relay density and RSU density. The obtained probability describes the fraction of users associated with RSUs or with relays, at any given time. We show that the association probability is not given by a simple linear function because of the spatial correlation between RSUs and RSU-operated relays. Assuming frequency resources are separated for operating relays and for serving network users, we evaluate the coverage probability of the typical user as an integral function. See II ###reference_### for detail."
34
+ },
35
+ {
36
+ "section_id": "1.2.3",
37
+ "parent_section_id": "1.2",
38
+ "section_name": "I-B3 Comprehensive analysis and design insights",
39
+ "text": "Taking into account the fact that relays are operated by RSUs and users are served by both relays and RSUs, we evaluate the effective throughput of the typical user in the proposed network. In particular, we get the user throughput formula leveraging (i) the throughputs of RSU-to-user links and relay-to-user links, respectively, (ii) the SIR distribution and throughput of RSU-to-relay links, (iii) the average number of network elements involved in the above links. Without ignoring the bottleneck resulting from the RSU-to-relay links, the throughput formula accurately describes the redistribution of the network payload achieved by spatially correlated relays in heterogeneous vehicular network architectures. In particular, we express the user throughput as a function of network parameters including frequency resources and densities of RSUs, relays, and users. As a result, it can be effectively used to design and build heterogeneous vehicular networks where spatially correlated network elements exist. For instance, leveraging the throughput expression, network operators can allocate frequency resources to various links to optimize the network performance for given densities of RSUs, relays, and network users."
40
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II System Model",
+ "text": "This section gives the spatial model for RSUs, relays, and users. We then discuss the propagation model, the user association principle, and the performance metrics."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Spatial Model for RSUs and Users",
+ "text": "To represent road geometries in urban areas, we assume that the road layout is modeled as an isotropic Poisson line process [14]. In the context of stochastic geometry, such a model has been widely accepted for its analytical tractability [21, 22, 24, 25, 26, 27, 28, 29]. Specifically, the Poisson line process is generated from a homogeneous Poisson point process on a cylinder set . Consider a Poisson point process of intensity on . Here (per km) is the mean number of road segments in a circle of radius km.\n\nEach of its points is mapped into a line on the Euclidean plane, where corresponds to the distance from the origin to the line and corresponds to the angle between the line and the -axis, measured in the counterclockwise direction [14].\n\nConditionally on each line, the locations of RSUs and network users are modeled as independent one-dimensional Poisson point processes and of intensities and , respectively, where . Here, is the mean number of RSUs on a road segment of km and is the mean number of users on a road segment of km.\n\nCollectively, the RSU point process and the user point process form Cox point processes constructed under an identical Poisson line process [23]. We have\nFigs. 2 \u2013 4 show the proposed network model having RSUs, vehicle relays, and network users, all located on the same road infrastructure. In Fig. 2, the road density is /km, which means that there are roads in a disk of radius km on average. In Fig. 4, we use , a very dense urban area with many roads. It is worth noting that RSUs, relays, and network users are all constrained by the common road infrastructure.\nIt is important to mention that we take the simplest approach to characterizing the spatially correlated components in a two-tier heterogeneous vehicular network. For instance, is a homogeneous Poisson point process of a constant intensity. Thus, we have an isotropic Poisson line process on the Euclidean plane. Nevertheless, one can change it by considering a Dirac delta measure across the angle of roads to create a Manhattan-like road layout. See [22]. A more realistic model obtained by changing the intensity measure of is left for future work."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Spatial Model for Relay and Reserved Spectrum",
+ "text": "We assume that vehicular relays are wirelessly connected to RSUs and that they serve network users [36, 10], as in Fig. 1. In the sequel, RSU-operated vehicular relays will be referred to as relays.\n\nSince relays are on roads too, we model the locations of relays as a Cox point process, denoted by . Specifically, conditional on each road created by the above Poisson line process , the locations of relays on each road follow a Poisson point process of intensity . Following the notation of Eqs. (1) and (2), we let\nIt is important to note that the RSU point process , the relay point process , and the user point process are all on the same line process . As a result, our approach captures the fact that RSUs, relays, and users are all on the very same road structure. Fig. 2 shows the spatial distributions of the RSUs, relays, and network users in the proposed network.\nTo operate relays, network operators can employ various approaches. To maintain the tractability of our work, we adopt the simplest assumption that the frequency resources for operating relays and the frequency resources for serving network users are separate. (See Fig. 1, where the links are shown.) This is partly motivated by the radio resource management technique in practice [36, 10], where the frequency resources can be autonomously taken by vehicles or scheduled by RSUs. Specifically, to communicate with relays, RSUs use the spectrum of bandwidth . On the other hand, to serve network users on roads, RSUs and relays use the spectrum of bandwidth . In other words, we have three different types of communication links: (i) RSU-to-user links, (ii) relay-to-user links, and (iii) RSU-to-relay links. Types (i) and (ii) use and Type (iii) uses .\nNote that we assume that and do not overlap and that , where is the total available bandwidth. Table I shows the communication links and their corresponding resources.\nIn practical cases, users may experience limited coverage because of interference or attenuation. In the proposed architecture, RSUs configure relays to forward their messages to network users. To ensure reliable reception of messages at their final destinations, we assume that the initial links of such relaying, namely RSU-to-relay communications, occupy a reserved spectrum of bandwidth . For the rest of the communication links in the proposed two-tier heterogeneous vehicular network, e.g., relay-to-user and RSU-to-user links, we assume that those links use a spectrum of bandwidth . Therefore, there is no co-channel interference between RSU-to-relay communications and the rest of the communications in the proposed architecture. Motivated by current standard implementations [36, 37, 10], we assume that RSU-to-user and relay-to-user links exist on the same spectrum and thus there is co-channel interference between them."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "II-C Relay and User Mobility",
+ "text": "In vehicular networks, RSUs do not move, while relays and network users move along the roads. We assume that relays and users move along the line they are located on and that they choose their speeds from a given distribution. Specifically, each relay independently selects its own speed on the interval uniformly at random. Each network user selects its own speed on the interval uniformly at random.\nOne can relax the above mobility assumption. An example is the case where relays and users on each road choose their own speeds from standard normal distributions. Based on the displacement theorem [38], the Poisson property of the relay and user point processes is preserved over time. Thus, the relay and user point processes are still given by time-invariant Cox point processes. This shows that the proposed mobility model and the corresponding analysis in this paper generalize to various mobility cases."
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "II-D Relay Association and User Association",
+ "text": "With regard to relays, we assume that relays are associated with their closest RSUs. The RSU-to-relay communication links are established between RSUs and relays; these relays then forward messages from RSUs to nearby users. See Fig. 1. Combined with the separate spectrum usage given in Section II-B, the nearest association is a basis for the reliable reception of forwarded messages at the final destinations.\n\nWith regard to users, we assume that each user is associated with its closest transmitter, namely either an RSU or a relay. This is based on practical use cases [7, 8, 5, 35] where network users are configured to connect with their nearest transmitters. The bottom figures of Fig. 2 show the user association map as the Voronoi tessellation, illustrated by solid blue lines. The centers of the Voronoi cells are the points of the transmitter point process, i.e., . The cells are the user association map. Users are connected to transmitters at their cell centers.\nAs the number of transmitters increases, the average size of the cells decreases, and so does the average number of users associated with each transmitter.\n[1] studied various user association techniques including the maximum average received signal power association and the nearest user association. In this paper, motivated by the various distance-critical safety V2X applications, we focus on the nearest user association principle. Nevertheless, the formulas and analysis given in this paper can be readily used to analyze the maximum average received signal power association simply by changing the coefficients of the transmit powers, exploiting techniques in [18, 39, 40]."
+ },
+ {
+ "section_id": "2.5",
+ "parent_section_id": "2",
+ "section_name": "II-E Propagation Model",
+ "text": "Consider a receiver located at a distance from its transmitter. In the proposed heterogeneous vehicular network, transmitters are either RSUs or relays. We assume rich scattering around the network users [41] and a power-law path loss function [35]. The received signal power at the receiver is assumed to be of the form\nwhere is the received signal powers at meter from an RSU or a relay, respectively. represents Rayleigh fading, modeled by an independent exponential random variable with mean one, and is the path loss over distance . We assume that the transmit powers for RSUs and relays are given by and , respectively.\n\nRegarding the path loss, we note that it shows different characteristics depending on the relative locations of transmitters and receivers, or more precisely, on whether the transceivers are on the same road or not [42]. For tractability, we use a simple path loss model where the path loss over a distance is\nwhere ."
+ },
+ {
+ "section_id": "2.6",
+ "parent_section_id": "2",
+ "section_name": "II-F Performance Metrics",
+ "text": "This paper analyzes the performance seen by the network users. We first derive the coverage probability of the typical user and then derive the coverage probability of the typical relay. Then, using both, we derive the user effective throughput."
+ },
83
+ {
84
+ "section_id": "2.6.1",
85
+ "parent_section_id": "2.6",
86
+ "section_name": "II-F1 User Coverage Probability",
87
+ "text": "To analyze the coverage probability of the typical user, we use the Palm distribution of the user point process, . This features a typical user at the origin. Therefore a line almost surely exists with a RSU point process and a relay point process on it [20 ###reference_b20###, 23 ###reference_b23###]. The coverage probability of the typical user is given by\nwhere is the transmit power of the association transmitter which could be either or depending on the user association. We denote by the ball of radius centered at the origin; we also write Here, is the SIR threshold. Based on the association principle of above, the association transmitter is given by\nHere, the association transmitter is selected out of the point processes: , and When the association transmitter is a RSU, we write . When the association transmitter is a relay, we write\nBased on the association, users can be divided into two types, namely those associated with RSUs and those with relays. We need to separately evaluate each type as follows:\nwhere the former denotes the coverage probability of the typical relay-associated user and the latter denotes that of the typical RSU-associated user."
88
+ },
89
+ {
90
+ "section_id": "2.6.2",
91
+ "parent_section_id": "2.6",
92
+ "section_name": "II-F2 Relay Coverage Probability",
93
+ "text": "To analyze the SIR of the typical relay, we consider the Palm distribution of the relay point process. The coverage probability of the typical relay, is given by\nwhere is the RSU closest to the typical relay located at the origin under the Palm distribution of the relay point process . Since RSU-to-relay communications are assumed to occur over a frequency bandwidth of , it is worth noting that the RSU-to-relay links do not interfere with RSU-to-user and relay-to-user links."
94
+ },
95
+ {
96
+ "section_id": "2.6.3",
97
+ "parent_section_id": "2.6",
98
+ "section_name": "II-F3 Throughput",
99
+ "text": "Using the coverage probabilities above, we derive the throughput of the typical user. In the proposed vehicular network where relays are operated by RSUs over a separate wireless resource the throughput is not just a simple function of the SIR distribution of the typical user. The precise definition of the user throughput will be given in Section V ###reference_###."
100
+ },
101
+ {
102
+ "section_id": "3",
103
+ "parent_section_id": null,
104
+ "section_name": "III Association Probability",
105
+ "text": "Each user has either a RSU association or a relay association, depending on its distances to RSUs and relays. Here, we study the probability that the typical user is associated with either an RSU or a relay. The association probability is derived under the Palm distribution of the user point process. The association probability also corresponds to the fraction of network users associated with RSUs and with relays, respectively.\n\nThe probability that the typical user is associated with an RSU is given by Eq. (10 ###reference_###)\nLikewise, the probability that the typical user is associated with a relay is\nSee [1 ###reference_b1###, Theorem 1].\n\u220e\n###figure_5### Fig. 5 ###reference_### shows that the derived association probability derived in Lemma 1 ###reference_ma1### matches the association probability numerically obtained by the Monte Carlo simulations. We use . We see that as the density of relays increases, the relay association probability increases. Since the user association is based on distance, it is possible that users are associated with RSUs or relays on different lines. We will use the derived association probability to evaluate the coverage probability of the typical user. In addition, we show that the association probability is not given by a simple ratio of densities. This contrasts to the association probability of users in the heterogeneous networks modeled by Poisson point processes [18 ###reference_b18###]. This occurs because of the spatial correlation between RSUs and relays.\nThe mean number of users associated to the typical RSU is and the mean number of users associated to the typical relay is The mean number of relays associated to the typical RSU is\nConsider a factor graph with an edge from each user to its association RSU or relay. From the mass transport principle [38 ###reference_b38###],\nwhere the left-hand side is the mean mass sent by the users to their association RSUs on the same lines, whereas the right-hand side is the mean mass received by the RSUs from their associated users on the same lines. is the spatial density of RSUs and is the mean number of same-line users associated to the typical RSU under the Palm distribution of . Similarly, considering users and their associated RSUs on different lines, we have\nwhere the left-hand side is the mean mass out of the users and the right-hand side is the mean mass received by the RSUs: is the mean number of different-line users associated to the typical RSU. As a result, the mean number of users per RSU is . Similarly, the mean number of users per relay is Finally, the mean number of relays per RSU is .\n\u220e\nThe above proposition is essential to address the impact of RSU-to-relay links to the system performance. We use the above expression in the derivation of the user throughput in Section V ###reference_###."
106
+ },
107
+ {
108
+ "section_id": "4",
109
+ "parent_section_id": null,
110
+ "section_name": "IV Coverage Probability of User and Relay",
111
+ "text": "In Section IV-A ###reference_###, we first evaluate the coverage probability of the typical user under the Palm distribution of the user point process, by leveraging the facts that the network users are connected with their closest RSUs or relays and that RSU-to-user links interfere with relay-to-user links and vice versa. Then, in Section IV-B ###reference_### we independently derive the coverage probability of the typical relay under the Palm distribution of the relay point process. The coverage probabilities of Sections IV-A ###reference_### and IV-B ###reference_### impacts the throughput of the network that we will see in Section V ###reference_###."
112
+ },
113
+ {
114
+ "section_id": "4.1",
115
+ "parent_section_id": "4",
116
+ "section_name": "IV-A Coverage Probability of the Typical User",
117
+ "text": "This section gives the coverage probability of the typical user. Note all RSUs and relays are assumed to have users to serve with high probability. We denote by the ratio of relay transmit power to the RSU transmit power, As in Section III ###reference_###, let be the event that the association transmitter and the typical user are on the same line. We denote by the event that the association transmitter is an RSU and by the event that the association transmitter is a relay.\n\nThe coverage probability of the typical user is\n given by Eq. (13 ###reference_###) \u2013 (16 ###reference_###), respectively where\nOn the other hand, we also have\n\nIn above, we derived the coverage probability of the typical user at the origin. Yet, the result is applicable to all the users in the network.\nIn Theorem 2 ###reference_orem2###, we analyze the SIR of the typical user located at the origin by using the Palm distribution of the user point process. Since the user point process is time invariant ergodic Poisson point process, the obtained formula corresponds to the spatial average of SIRs of all the users in the network [38 ###reference_b38###, 43 ###reference_b43###]. In other words, it corresponds to the statistic of the SIRs of all users in a large ball, at any given time. In addition, since the user point process is a time-invariant and ergodic Poisson point process, the coverage probability of the typical user coincides with the time average of the coverage probability of a specific user, obtained over a very long time [44 ###reference_b44###].\n###figure_6### ###figure_7### ###figure_8### Fig. 6 ###reference_### shows that the derived coverage probability of the typical user matches the numerical results obtained by Monte Carlo simulations, performed under various network parameters. In Figs. 7 ###reference_### \u2013 9 ###reference_###, we show only analytical results. Note that in the top figure of Fig. 7 ###reference_###, the SIR curve slightly changes as the density of the relays varies. In the low SIR regime, the SIR curve slightly decreases as we increase the number of relays. This is because, in the low SIR regime, users are more likely to be associated with transmitters on lines that are different from the ones of users, and the received signal powers from the association transmitters are moderately dominated by the interference from the other transmitters. On the other hand, in the high SIR regime, the SIR curve increases as the number of relays increases. This is because, in the high SIR regime, users are more likely to be associated with the transmitters on the lines that are the same as the ones of users, and therefore the received signal powers dominate the interference. Nevertheless, it is worthwhile to mention that increasing the relay density does not always increase the SIR curve in some range of parameters. Especially when the relay or RSU density is very high, transmitters and receivers may be very close to each other and thus the power-law path loss function of this paper should be replaced with a truncated version, e.g., , to account for near field effect. The analysis with a truncated power-law path loss model is left for future work. In the right picture of Fig. 7 ###reference_###, we increase the line density to show the change of the SIR curve. Although and are equal to one, the number of RSUs or relays or respectively increases as increases. 
Therefore, the increment of the interference dominates the increment of the received signal power and this explains the decrement of the SIR curve as increases.\n###figure_9### ###figure_10### In Fig. 9 ###reference_###, we increase the road density and the linear density of relays at the same rate. In both of pictures, increasing the road density decreases the SIR curve. It is important to mention that the SIR curve decrease in is less significant than the SIR curve decrease in By comparing the top figures of Fig. 7 ###reference_### and 9 ###reference_###, we see that the SIR curve decrease much clearer in Fig. 9 ###reference_### because, in general, the average number of transmitters per unit area is and thus the top figure of Fig. 9 ###reference_### has much more transmitters than the top figure of Fig. 7 ###reference_###, on average. We can conclude that the interference caused by relays is significant for dense urban areas where roads are densely distributed. Nevertheless, by comparing and , we see that the SIR curve decrease from to is about \u2013 % when the SIR threshold is between dB and dB. When the SIR threshold is not within this range, the decrease is between \u2013 %. From these observations, we conclude that despite some SIR decrease, relays are able to redistribute the users that are previously associated with RSUs. In the right picture of Fig. 9 ###reference_###, users are more likely to be associated with relays as the relay density increases. Especially, in the low SIR regime, users are associated with relays on different roads. Consequently, if the cross-road attenuation is not very significant, the received signal power from the association relays increases as we increase the number of relays and it compensates the interference from added relays to some extent. It is worthwhile to stress that such a behavior of the SIR curves exists as long as the density of RSUs or relays is not too high. For instance, a truncated path loss model should be used if transmitters and receivers are too close to each other.\nWhen , or , the coverage probability of the typical user is given by Eq. (20 ###reference_###) where\nand\nHaving in Theorem 2 ###reference_orem2### completes proof.\n\u220e"
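Since the section leans on Monte Carlo validation of the derived coverage probability (cf. Fig. 6), the following is a minimal Python sketch of such a simulation. The simplified Poisson-line-process construction, all densities, the power ratio, and the path-loss exponents below are our own illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

lam_l, mu_s, mu_r = 2e-3, 2e-3, 4e-3   # lines, RSUs, relays per metre (assumed)
alpha, beta = 2.5, 3.5                 # same-line / cross-line path-loss exponents
gamma = 1.0                            # relay-to-RSU transmit power ratio
R = 5_000.0                            # radius of the simulation window (m)

def one_sir_sample():
    # Lines are parameterised by their signed distance rho to the origin;
    # the typical user sits at the origin on an extra line through it.
    rho = rng.uniform(-R, R, rng.poisson(2 * R * lam_l))
    rho = np.append(rho, 0.0)
    power, dist, own = [], [], []
    for k, rh in enumerate(rho):
        for mu, p_tx in ((mu_s, 1.0), (mu_r, gamma)):
            t = rng.uniform(-R, R, rng.poisson(2 * R * mu))  # 1D Poisson points
            dist.extend(np.sqrt(t ** 2 + rh ** 2))
            power.extend([p_tx] * t.size)
            own.extend([k == rho.size - 1] * t.size)
    if not dist:
        return 0.0
    power, dist = np.asarray(power), np.asarray(dist)
    expo = np.where(own, alpha, beta)            # cross-line attenuation
    rx = power * rng.exponential(1.0, dist.size) * dist ** (-expo)
    j = np.argmax(power * dist ** (-expo))       # strongest-on-average association
    interference = rx.sum() - rx[j]
    return rx[j] / interference if interference > 0 else np.inf

sirs = np.array([one_sir_sample() for _ in range(2000)])
for tau_db in (-10, 0, 10, 20):
    tau = 10.0 ** (tau_db / 10.0)
    print(f"P(SIR > {tau_db:+3d} dB) ~ {np.mean(sirs > tau):.3f}")
```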
118
+ },
119
+ {
120
+ "section_id": "4.2",
121
+ "parent_section_id": "4",
122
+ "section_name": "IV-B Coverage Probability of the Typical Relay",
123
+ "text": "In practice, relays can serve users only when the relevant data are channeled through RSU-to-relay links. To evaluate the network user performance restricted by the RSU-to-relay links, this section evaluate the coverage probability of the typical relay.\nThe coverage probability of the typical relay is given by Eq. (23 ###reference_###) where\nand\nWe have the result by using techniques in the proof of Theorem 2 ###reference_orem2###.\n\u220e\nWe combine Theorem 1 ###reference_orem1###, Lemma 1 ###reference_ma1###, Theorems 2 ###reference_orem2###, and 3 ###reference_orem3### to derive the user throughput."
124
+ },
125
+ {
126
+ "section_id": "5",
127
+ "parent_section_id": null,
128
+ "section_name": "User Throughput",
129
+ "text": "In the proposed network, users are associated with either RSUs or relays.\nFirstly, the normalized achievable rate of RSU-associated users is defined by the mean achievable rate of the typical RSU-associated users, divided by the mean number of users per RSU. The normalized achievable rate of the RSU-associated user is given by\nThe normalized achievable rate is in a heuristic metric because it is given by the ratio of the achievable rate to the average number of users, not the exact number. However, the exact distribution of the Cox-Voronoi cell is unknown; and thus using the exact number of users per RSU is infeasible. Here, we leverage the mass transport principle to obtain the mean number of users in the typical RSU cell (Theorem 1 ###reference_orem1###) and use it to compute the normalized achievable rate of RSU-associated user.\nSecondly, the normalized achievable rate of relay-associated users is dictated by both the coverage probabilities of the RSU-to-relay and relay-to-user links. Using the coverage probabilities of the both links, the normalized achievable rate of the relay-associated user is defined by\nwhere is the bandwidth for the RSU-to-relay links and is the bandwidth for RSU-to-user and relay-to-user links.\nFinally, \nwhere and are given by Theorem 1 ###reference_ma1###.\nThe instantaneous SIRs of RSU-to-relays links do not directly dictate the user throughput. However, these links indirectly affect the user performance by restricting the amount of data available at relays. Consequently, the throughput of relay-associated users will be determined by (i) the throughput of RSU-to-relay links, (ii) the throughput of relay-to-user links, (iii) the bandwidths and and (iv) the number of relays per RSU and the number of users per relay.\nThe user throughput is given by\nwhere and are given by Theorems 1 ###reference_ma1### and 3 ###reference_orem3###, respectively. Using the functions in Theorem 2 ###reference_orem2###, the coverage probability of the RSU-associated typical user is given by Eq. (24 ###reference_###). Similarly, the coverage probability of the relay-associated typical user is given by Eq. (25 ###reference_###). Using Theorem 1 ###reference_orem1###, we have and\n\nThe coverage probabilities of the typical RSU-to-user link and of the typical relay-to-user link are obtained by leveraging Theorem 2 ###reference_orem2###, respectively.\nTo obtain ,, and we use Theorem 1 ###reference_orem1###. This completes the proof.\n\u220e\n###figure_11### Suppose and is sufficiently large. Then, the user throughput is\nOn the other hand, the user throughput without any relay is\nAs a result, based on Eqs. (26 ###reference_###) and (27 ###reference_###), the proposed network has a multiplicative gain in the user throughput given by\nTheorem 4 ###reference_orem4### shows the user throughput as a function of and the distributions of and It shows when is large, deploying relays will increase the user throughput of the network or the normalized achievable rate of user. In many cases where , the derived user throughput formula is useful to the understanding tradeoff relationship between network parameters and addressing design problems exist in heterogeneous vehicular networks. For instance, to maximize the user throughput, one can find the optimal density of relays for a given . Similarly, when the density of relays is given, one can use the user throughput formula to study the impact of .\nFig. 
11 ###reference_### shows the user throughput in Theorem 4 ###reference_orem4### where we use , , , , , and It shows that for the given network parameters, maximizes the user throughput of the proposed two-tier heterogeneous vehicular network. Note the maximum value of varies depending on the network variables such as and In practice, by exploiting Theorem 4 ###reference_orem4### network operators can easily find the optimal solution for with a marginally little computation cost."
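The design use described above amounts to a one-dimensional search over the relay density. Since the closed-form expressions of Theorem 4 are not reproduced here, `user_throughput` below is a hypothetical stand-in whose shape (relay offloading gain versus backhaul bandwidth sharing) is only an illustrative assumption; in practice one would plug in the formula of Theorem 4.

```python
import numpy as np

def user_throughput(mu_r, mu_s=2.0, mu_u=10.0, w1=10e6, w2=10e6):
    # Stand-in for Theorem 4 (assumed shape, for illustration only): relays
    # offload RSU users, but share the RSU-to-relay backhaul bandwidth w1.
    s = mu_r / (mu_r + mu_s)                        # relay-associated fraction
    rsu_rate = w2 * mu_s / ((1.0 - s) * mu_u)       # RSU bandwidth per RSU user
    backhaul = w1 * mu_s / max(mu_r, 1e-9)          # backhaul per relay
    relay_rate = min(backhaul, w2 * (mu_r + mu_s) / mu_u)  # access vs backhaul cap
    return (1.0 - s) * rsu_rate + s * relay_rate

grid = np.linspace(0.0, 20.0, 401)                  # candidate relay densities (/km)
best = grid[int(np.argmax([user_throughput(m) for m in grid]))]
print(f"throughput-maximising relay density ~ {best:.2f} per km")
```

With these made-up parameters the search returns an interior optimum (around 3.6 relays per km), mirroring the qualitative message of Fig. 11 that the optimum varies with the network variables.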
130
+ },
131
+ {
132
+ "section_id": "6",
133
+ "parent_section_id": null,
134
+ "section_name": "VI Conclusion and Future Work",
135
+ "text": "Using stochastic geometry, this paper proposes and analyzes a novel two-tier heterogeneous vehicular network architecture where RSUs and vehicular relays serve network users. By assuming such vehicular relays are operated by RSUs and users are associated with either RSUs or relays, we derive the association probability of the network users. We find that the association probability is a nonlinear function of the RSU and relay densities because RSUs, relays, and users are all on roads. Then, we derive the coverage probability of the typical user and then obtain the user throughput. In particular, the user throughput incorporates the fact that RSUs operate relays and that the throughput of relay-associated users is dictated by the SIRs of RSU-to-relay and relay-to-user links and the corresponding bandwidths for those links. The paper gives practical insights on designing heterogeneous vehicular networks with RSUs and vehicular relays. By presenting the formulas for SIR and user throughput as network parameters, one can easily identify the complex interactions occur at the network and use these formulas to enhance reliability or to increase throughput.\nThe present paper starts a new line of studies on heterogeneous vehicular networks. It provides a tractable model and a tool to analyzing the network performance. The analysis of this paper can be developed further by considering new and more practical components; for instance, the clustering of vehicles on roads can be represented by an independent cluster point process on roads. The analysis of the proposed two-tier heterogeneous vehicular network is applicable to the analysis of multi-tier vehicular networks where there are various types of network elements exist such as RSUs, relays, and IoT devices.\n[Proof of Theorem 2 ###reference_orem2###]\nUnder the Palm distribution of the user point process, , there exist a typical user at the origin and a line containing the typical user. Here, is a uniform random variable between and By the law of total probability, the coverage probability is given by\nwhere we can write and The former is the event that the line containing the association transmitter is , the line that contains the typical user.\nwhere and are the events that the typical user is associated with its closest RSU and with its closest relay, respectively. Then, with the interference seen by the typical user, we have\nwhere we express the probability as the conditional expectation w.r.t. . Then, we write it as a conditional expectation w.r.t. the nearest RSU. We have the conditional probability density function of the distance from the origin to the closest RSU.\nIn a similar way, we have\nIn Eq. (31 ###reference_###), is the conditional probability density function of the distance from the origin to the nearest relay. Furthermore, the integrands of Eqs. (30 ###reference_###) and (31 ###reference_###) are\nrespectively. We obtain\nfrom the independence of the Poisson point processes. 
The conditional Laplace transform of interference is given by\nwhere we use the Laplace transform of the exponential random variable and the fact that conditionally on the line process and conditionally on the association distance all RSUs and relays are at distances greater than Then, we have\nwhere we use the facts that RSU and relay point processes on different lines are conditionally independent and that the distances from the origin to any RSU points are given by where is the RSU Poisson point process on the real axis .\n\nOn the other hand, the probability density function in Eq. (30 ###reference_###) is given by\nwhere we use the facts that (i) and (ii) there is no point of within a disk of radius centered at the origin. The probability density function in Eq. (31 ###reference_###) is\nTo obtain the first part of Eq. (28 ###reference_###), we combine Eqs. (31 ###reference_###) \u2013 (36 ###reference_###).\nLet us now evaluate the second part of Eq. (28 ###reference_###). By the law of total probability, the expression is\nwhere and denote the events that the typical user is associated with the RSU or with the relay, respectively.\nLet denote the line of the association RSU transmitter. Then, by conditioning on on , and then , we can write the first part of Eq. (37 ###reference_###) as follows:\nwhere we write \n\n\nIn a similar way, the second part of Eq. (37 ###reference_###) is given by\nwhere we write \n\n\nBy using the fact that is an exponential random variable, the conditional probability of Eq. (38 ###reference_###) is given by expression (17 ###reference_###) where the distances from the origin to the points of the Poisson point process on line are represented by where is an unit orthogonal vector from the origin to the line and is an unit vector, orthogonal to the vector . Here, is the RSU point process on the -axis and is the relay point process on the -axis. By using the probability generating functional of the Poisson point process, we have\nwhere the above five terms of Eq. (40 ###reference_###) correspond to the Laplace transforms of the interference of (i) RSU plus relay on the line (ii) RSU on the lines closer than , (iii) relay on the lines closer than (iv) RSU on the lines further than , and (v) relay on the lines further than respectively.\nTo obtain the conditional probability density function of the distance from the origin to its closest RSU in Eq. (38 ###reference_###), we use the facts that (i) is the closest to the origin, (ii) , and (iii) all the other RSU or relay point processes have no point in the disk of radius . Therefore, using the void probability of the Poisson point process, the conditional probability density function of the distance from the origin to its closest RSU in Eq. (38 ###reference_###) is\nIn a similar way, the conditional probability density function of the distance from the origin to its closest relay in Eq. (39 ###reference_###) is given by\nFinally, we combine Eqs. (40 ###reference_###) and (41 ###reference_###) then integrate the result w.r.t. and then w.r.t. . First, to integrate w.r.t. we combine all the functions w.r.t. to get the expression (18 ###reference_###). Then, we combine Eqs. (40 ###reference_###), (41 ###reference_###), and (18 ###reference_###).\n\nSimilarly, to obtain the second part of Eq. (37 ###reference_###), we combine Eq. (40 ###reference_###) and (42 ###reference_###) and evaluate the functions w.r.t. to obtain Eq. (19 ###reference_###). Then, we combine the rest of Eq. 
(40 ###reference_###), (42 ###reference_###) and (19 ###reference_###) to complete the proof.\n\u220e"
136
+ }
137
+ ],
138
+ "appendix": [
139
+ {
140
+ "section_id": "Appendix x1",
141
+ "parent_section_id": null,
142
+ "section_name": "Acknowledgment",
143
+ "text": "The work of Chang-Sik Choi was supported in part by\nthe NRF-2021R1F1A1059666.\nThe work of Francois Baccelli was supported by the ERC NEMO\ngrant 788851 to INRIA."
144
+ }
145
+ ],
146
+ "tables": {
147
+ "1": {
148
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Spectrum usage</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.4.1.1\">Communication link types</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T1.3.4.1.2\">Bandwidth</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.2\">RSU-to-user links</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2\">relay-to-user links</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.2\">RSU-to-relay links</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.1\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
149
+ "capture": "TABLE I: Spectrum usage"
150
+ }
151
+ },
152
+ "image_paths": {
153
+ "1": {
154
+ "figure_path": "2204.12243v4_figure_1.png",
155
+ "caption": "Figure 1: Illustration of the proposed vehicular network with RSUs, relays, and users. Users may get messages directly from RSUs (right) or via relays (left).",
156
+ "url": "http://arxiv.org/html/2204.12243v4/x1.png"
157
+ },
158
+ "2": {
159
+ "figure_path": "2204.12243v4_figure_2.png",
160
+ "caption": "Figure 2: Illustration of the proposed network where \u03bbl=2/kmsubscript\ud835\udf06\ud835\udc592km\\lambda_{l}=2/\\text{km}italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT = 2 / km, \u03bcs=2/kmsubscript\ud835\udf07\ud835\udc602km\\mu_{s}=2/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 2 / km, \u03bcr=4/kmsubscript\ud835\udf07\ud835\udc5f4km\\mu_{r}=4/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 4 / km, and \u03bcu=10/kmsubscript\ud835\udf07\ud835\udc6210km\\mu_{u}=10/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 10 / km.",
161
+ "url": "http://arxiv.org/html/2204.12243v4/x2.png"
162
+ },
163
+ "3": {
164
+ "figure_path": "2204.12243v4_figure_3.png",
165
+ "caption": "Figure 3: Illustration of the proposed network where \u03bbl=5/kmsubscript\ud835\udf06\ud835\udc595km\\lambda_{l}=5/\\text{km}italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT = 5 / km, \u03bcs=2/kmsubscript\ud835\udf07\ud835\udc602km\\mu_{s}=2/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 2 / km, \u03bcr=2/kmsubscript\ud835\udf07\ud835\udc5f2km\\mu_{r}=2/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 2 / km, and \u03bcu=5/kmsubscript\ud835\udf07\ud835\udc625km\\mu_{u}=5/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 5 / km.",
166
+ "url": "http://arxiv.org/html/2204.12243v4/x3.png"
167
+ },
168
+ "4": {
169
+ "figure_path": "2204.12243v4_figure_4.png",
170
+ "caption": "Figure 4: Illustration of the proposed network where \u03bbl=10/kmsubscript\ud835\udf06\ud835\udc5910km\\lambda_{l}=10/\\text{km}italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT = 10 / km, \u03bcs=2/kmsubscript\ud835\udf07\ud835\udc602km\\mu_{s}=2/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 2 / km, \u03bcr=4/kmsubscript\ud835\udf07\ud835\udc5f4km\\mu_{r}=4/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 4 / km, and \u03bcu=10/kmsubscript\ud835\udf07\ud835\udc6210km\\mu_{u}=10/\\text{km}italic_\u03bc start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 10 / km.",
171
+ "url": "http://arxiv.org/html/2204.12243v4/x4.png"
172
+ },
173
+ "5": {
174
+ "figure_path": "2204.12243v4_figure_5.png",
175
+ "caption": "Figure 5: Illustration of the association probability of the typical user. The derived formula of Lemma 1 matches the simulation results. We use \u03bbl=2subscript\ud835\udf06\ud835\udc592\\lambda_{l}=2italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT = 2/km and \u03bcs=1subscript\ud835\udf07\ud835\udc601\\mu_{s}=1italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 1/km.",
176
+ "url": "http://arxiv.org/html/2204.12243v4/x5.png"
177
+ },
178
+ "6": {
179
+ "figure_path": "2204.12243v4_figure_6.png",
180
+ "caption": "Figure 6: The derived formula matches the simulation results. We use \u03b3=1\ud835\udefe1\\gamma=1italic_\u03b3 = 1, \u03b1=2.5\ud835\udefc2.5\\alpha=2.5italic_\u03b1 = 2.5 and \u03b2=3.5\ud835\udefd3.5\\beta=3.5italic_\u03b2 = 3.5. The units of \u03bbl,\u03bcs,subscript\ud835\udf06\ud835\udc59subscript\ud835\udf07\ud835\udc60\\lambda_{l},\\mu_{s},italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT , italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT , and \u03bcrsubscript\ud835\udf07\ud835\udc5f\\mu_{r}italic_\u03bc start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT are per kilometer.",
181
+ "url": "http://arxiv.org/html/2204.12243v4/x6.png"
182
+ },
183
+ "7": {
184
+ "figure_path": "2204.12243v4_figure_7.png",
185
+ "caption": "Figure 7: The illustration of the SIR coverage probability. Here, \u03bblsubscript\ud835\udf06\ud835\udc59\\lambda_{l}italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT and \u03bcssubscript\ud835\udf07\ud835\udc60\\mu_{s}italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT are fixed whereas \u03bcrsubscript\ud835\udf07\ud835\udc5f\\mu_{r}italic_\u03bc start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT varies.",
186
+ "url": "http://arxiv.org/html/2204.12243v4/x7.png"
187
+ },
188
+ "8": {
189
+ "figure_path": "2204.12243v4_figure_8.png",
190
+ "caption": "Figure 8: The illustration of the SIR coverage probability. Here, \u03bcssubscript\ud835\udf07\ud835\udc60\\mu_{s}italic_\u03bc start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT and \u03bcrsubscript\ud835\udf07\ud835\udc5f\\mu_{r}italic_\u03bc start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT are fixed whereas \u03bblsubscript\ud835\udf06\ud835\udc59\\lambda_{l}italic_\u03bb start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT varies.",
191
+ "url": "http://arxiv.org/html/2204.12243v4/x8.png"
192
+ },
193
+ "9": {
194
+ "figure_path": "2204.12243v4_figure_9.png",
195
+ "caption": "Figure 9: The SIR coverage probability. We use \u03b1\u2260\u03b2\ud835\udefc\ud835\udefd\\alpha\\neq\\betaitalic_\u03b1 \u2260 italic_\u03b2.",
196
+ "url": "http://arxiv.org/html/2204.12243v4/x9.png"
197
+ },
198
+ "10": {
199
+ "figure_path": "2204.12243v4_figure_10.png",
200
+ "caption": "Figure 10: The SIR coverage probability. We use \u03b1=\u03b2\ud835\udefc\ud835\udefd\\alpha=\\betaitalic_\u03b1 = italic_\u03b2.",
201
+ "url": "http://arxiv.org/html/2204.12243v4/x10.png"
202
+ },
203
+ "11": {
204
+ "figure_path": "2204.12243v4_figure_11.png",
205
+ "caption": "Figure 11: User throughput in Theorem 4.",
206
+ "url": "http://arxiv.org/html/2204.12243v4/x11.png"
207
+ }
208
+ },
209
+ "validation": true,
210
+ "references": [],
211
+ "url": "http://arxiv.org/html/2204.12243v4"
212
+ }
20240225/2207.02760v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2209.01410v2.json ADDED
@@ -0,0 +1,600 @@
1
+ {
2
+ "title": "Closed-Loop View of the Regulation of AI: Equal Impact across Repeated Interactions",
3
+ "abstract": "There has been much recent interest in the regulation of AI. We argue for a view based on civil-rights legislation, built on the notions of equal treatment and equal impact. In a closed-loop view of the AI system and its users, the equal treatment concerns one pass through the loop. Equal impact, in our view, concerns the long-run average behaviour across repeated interactions. In order to establish the existence of the average and its properties, one needs to study the ergodic properties of the closed-loop and, in particular, its unique stationary measure.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": " Introduction",
9
+ "text": "There has been considerable interest in the regulation of artificial intelligence (AI), recently.\nIt is increasingly recognised that so-called high-risk applications of AI, such as in human resources, retail banking, or within public schools, be it admissions or assessment, cannot be served by black-box AI systems with no human control [Bringas Colmenarejo et al., 2022 ###reference_bx10###], predominantly due to concerns for protected human rights.\nA great many reports and research have revealed the danger of AI systems violating fairness in predicting which areas need patrolling [Courtland, 2018 ###reference_bx16###], criminal-risk assessment [Angwin et al., 2016 ###reference_bx2###], discriminatory behavior in advertising and recruiting algorithms for people with disabilities [Nugent and Scott-Parker, 2021 ###reference_bx40###, Guo et al., 2020 ###reference_bx27###], search engine reinforcing racism [Noble, 2018 ###reference_bx39###]; and the threat of breaching privacy [Nguyen et al., 2021 ###reference_bx38###, Sun et al., 2020 ###reference_bx49###].\nTo cope with the challenges of AI, leading technology companies have issued AI principles of their own and developed software tools geared towards fairness and explainbility of AI, such as AIF360 [Bellamy et al., 2018 ###reference_bx5###] of IBM, SHAP [Lundberg and Lee, 2017 ###reference_bx33###] of Microsoft. In a broader context, it is not clear [Dobbe et al., 2021 ###reference_bx20###], however, how to phrase even the desiderata for the regulation of AI.\nHere, we suggest that the desiderata could be the same as in the Civil Rights Act of 1964 and much of the subsequent civil-right legislation world-wide: equal treatment and equal impact.\nAt the same time, we point out that these desiderata could be in conflict [Binns, 2020 ###reference_bx8###, Zhao and Gordon, 2019 ###reference_bx59###].\nThe Ricci v. DeStefano, 557 U.S. 557 (2009) labour law case has demonstrated the practical differences between them, where the city of New Haven has declined to promote city firefighters based on the same test, which, shows a disproportionate pass rate for a certain race, as to the fear of valiating Title VII of the Civil Right Act of 1964 [McGinley, 2011 ###reference_bx35###]. The use of the same test conducts the principle of equal treatment, while the disparate pass rates and possibly contrasting promotion results do not comply with equal impact.\nLet us illustrate the conflict with another example of a system that performs credit-risk estimation in a consumer-credit company.\nIn the US, this is regulated by the Equal Credit Opportunity Act of 1974, but the example applies equally well to other countries.\nImagine a situation where the the credit decision is uniform: everyone who has not defaulted on any loan is approved a credit up to $50000. Anyone else is declined credit. This is clearly the most \u201cequal treatment\u201d possible, in the spirit of non-discrimination \u201con the basis of race, color, religion, national origin, sex, marital status, age, receipt of public assistance\u201d, as mandated by the Equal Credit Opportunity Act.\nAt the same time, if one subgroup (defined by whichever protected attribute, e.g., race or the receipt of public assistance) is having a lower-than-average income, its default rate on the $50000 loan may be higher than that of the other subgroups. 
Over time, the subgroup with lower-than-average income will be regularly declined credit as a result of these defaults, in violation of the \u201cequal impact\u201d.\nOn the other hand, if the credit limit is, e.g., set at three times the annual salary, the subgroup with lower-than-average income will be offered lower credit limits, in violation of the \u201cequal treatment\u201d. The differentiated credit limits may make it possible for the same subgroup to repay the loans successfully, though, to develop a credit history, and eventually lead to a positive and \u201cequal impact\u201d.111\nWhile the Equal Credit Opportunity Act mandates that one must accurately describe the factors actually scored by a creditor, it does not suggest which of the above is preferable.\nSpecifically, it says \u201cif creditors know they must explain their decisions \u2026 they [will] effectively be discouraged from discriminatory practices\u201d.\n\nSee the penultimate section of this paper for further details of the application.\nOur original contribution then stems from the reinterpreting of the meaning of equal treatment and equal impact within a closed-loop view of the AI system.\nThere, an AI system produces information, which is communicated to the users, who respond to the information. The aggregate actions of the users are observed and serve as an input to further uses of the AI system.\nEqual treatment concerns a single run of this closed-loop, while equal impact concerns long-run properties of this closed-loop.\nThe closed-loop view of the AI system addresses several important shortcomings of the presently proposed systems:\nit very clearly distinguishes equal impact from equal treatment;\nit allows for a stochastic response of the users to the information produced by the AI system, rather than assuming it is deterministic;\nit explicitly models the \u201cconcept drift\u201d and retraining of the AI system over time, inherent in practical AI systems, but ignored by most analyses of AI systems.\nIn terms of technical results, we formalise the notions above, present one condition that is necessary for the equal impact of an AI system, and illustrate the notions on a credit-risk use case."
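To make the tension above concrete, the following toy simulation (entirely our own; the income distributions and the behavioural default model are made-up assumptions) contrasts the uniform limit with the income-scaled one.

```python
import numpy as np

rng = np.random.default_rng(1)
incomes = {"group A": rng.lognormal(11.0, 0.4, 10_000),   # higher incomes (assumed)
           "group B": rng.lognormal(10.6, 0.4, 10_000)}   # lower incomes (assumed)

def defaults(income, limit):
    # Assumed behavioural model: default risk grows with the loan-to-income
    # ratio, capped at 10%; purely illustrative.
    p = 0.10 / (1.0 + np.exp(-(limit / income - 2.0)))
    return rng.uniform(size=income.size) < p

for name, inc in incomes.items():
    uniform = defaults(inc, 50_000.0).mean()     # "equal treatment" limit
    scaled = defaults(inc, 3.0 * inc).mean()     # income-scaled limit
    print(f"{name}: uniform-limit default {uniform:.2%}, "
          f"income-scaled default {scaled:.2%}")
```

Under the uniform limit the two groups default at visibly different rates, while the income-scaled limit equalises the default rates across groups, matching the "equal impact" reading of the example.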
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Regulation of AI",
21
+ "text": "While there is a long history of research on the interface of AI and law [Bench-Capon et al., 2012 ###reference_bx6###, Narayanan, 2018 ###reference_bx36###, Berente et al., 2021 ###reference_bx7###, e.g.], much recent interest [Smuha, 2021b ###reference_bx48###, Petit and De Cooman, 2021 ###reference_bx42###, e.g.] has been sparked by the plans to introduce AI regulation within the legal system.\nBy investigating the self-regulation of leading AI companies from both the USA and Europe, [de Laat, 2021 ###reference_bx18###] appeal for future practices and governmental regulation.\nArguably, the European Commission regulates AI already: Article 22.1 of the General Data Protection Regulation (GDPR) is sometimes interpreted as prohibiting fully automated decisions with legal effect or \u201csimilarly significant effect\u201d.\nThere is much discussion regarding the AI Act [Veale and Borgesius, 2021 ###reference_bx51###] and regulatory landscape [Bringas Colmenarejo et al., 2022 ###reference_bx10###, Vokinger and Gasser, 2021 ###reference_bx53###] in the Europe Union, and the potential extensions of the regulatory framework in the USA [Chae, 2020 ###reference_bx13###].\nThe EU Artificial Intelligence Regulation Proposal, sugguests use of \u201cfeedback loops\u201d that perform the detection of biased outputs and the repeated introduction of appropriate methods of bias mitigation. 222Article 15 of this Proposal emphasises that \u201cHigh-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (\u2018feedback loops\u2019) are duly addressed with appropriate mitigation measures.\u201d\nWithin the recent discussions, a fair amount of attention focuses on the question of defining AI [Schuett et al., 2019 ###reference_bx45###] \u2013 or whether one should like to regulate the use of any algorithm [Schuett, 2019 ###reference_bx44###, Ellul, 2022 ###reference_bx23###] \u2013 and defining high-risk uses of AI.\nOne would also like to distinguish [Smuha, 2021a ###reference_bx47###] between the harm of the individual and the society.\nFurther, in high-risk applications of AI, the automated decision-making AI systems are bound to be fair while formalisation of fairness definitions has been a long-standing debate. From the prospectives of fair outcomes, group fairness, such as demographic parity [Calder et al., 2009 ###reference_bx11###], equal opportunity [Hardt et al., 2016 ###reference_bx29###], requests people from protected groups to be given the same treatment as others, while individual fairness requests \u201csimilar people to be treated similarly\u201d [Petersen et al., 2021 ###reference_bx41###, Dwork et al., 2012 ###reference_bx22###]. On the other hand, casual fairness [Chiappa, 2019 ###reference_bx15###, Kusner et al., 2017 ###reference_bx30###] asks for a fair decision process, such that protected attributes are not direct causes of decisions, or only through certain causal paths.\nSome recent works have extended to defining fairness in specific contexts, using users\u2019 feedback [Wen et al., 2021 ###reference_bx56###, D\u2019Amour et al., 2020 ###reference_bx17###, Awasthi et al., 2020 ###reference_bx3###].\nIn contrast, we distinguish between the treatment within a single interaction with the AI system and the impact of repeated interactions with the AI system. 
Further, we propose a closed-loop framework that repeatly increases fairness, using aggregated feedback or users\u2019 responses."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Control Theory",
27
+ "text": "Our approach is rooted in the closed-loop view of feedback control, but with several important differences.\nClassic control often focuses on regulating a single system.\nThe system achieves the required behaviour most efficiently given the restrictions imposed by the challenge and the available resources. Even in areas where large-scale coupled systems are studied, the behaviour of all system components is analyzed and developed. On the other hand, in artificial intelligence, it is not the behaviour of individual users that is of interest. Rather than that, the variable of interest is the aggregate impact of the acts of a large number of users.\nExamples of this kind of analysis include demand management for shared resources such as water and electricity, and the provision of medical care. De-synchronization alleviates the supply strain, and collective effects quantify the supply\u2019s quality. On the other hand, limits on the needed level of service for persons vary according to the application area.\nSecond, classical control, in general, is concerned with the control of systems with fixed dimensions. On the other hand, artificial intelligence often regulates and affects the behaviour of large-scale populations. Even the system\u2019s dimensions may be unpredictable and variable in such settings, emphasizing the critical requirement for scale-free management of extremely large-scale systems. Except in the case of passive control design, scale-free control for large systems is a largely unexplored issue in the classical control field.\nThirdly, in classical control, the controlled system\u2019s mathematical description does not change in response to control signals. This underlying concept is challenging to realize in artificial intelligence. By and large, models can only approximate the dynamics of the actual systems. This is not an issue as long as there is an appreciation for the possibility of reality and model deviating from one other. However, models in artificial intelligence are not easily derived from first principles; instead, they are empirical, i.e., based on data gathered from measurements of existing processes. Additionally, controlled studies cannot gather empirical data across a variety of operating points but must be obtained directly from the system.\nAn effort to enhance the processes above, for example, by sending information to the users involved, establishes a feedback loop that did not exist earlier. This change in the underlying process may invalidate the empirical model since there were no data available to represent the dynamic influence of such feedback during the model\u2019s development. Frequently, offered solutions ignore this feedback loop. This latter aspect necessitates a far more extensive examination of prediction and optimisation under feedback than has hitherto been the case.\nFourth, data sets are often gathered in a closed-loop fashion like Figure 1 ###reference_###. That is, public data sets often contain information about decision-makers. Developing models of large-scale feedback systems is a crucial hurdle to development in applying certain control methods to artificial intelligence. In dealing with such impacts, artificial intelligence researchers may have a lot to learn from economic and control theory.\nFinally, and perhaps most significantly, a fundamental distinction between classical control and our approach is the need to investigate the influence of control signals on the statistical features of the populations under control. 
Given that we are often dealing with service delivery, these statistical features should be stationary and predictable, necessitating ergodic control design."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Control of Multi-user Dynamical Systems",
33
+ "text": "Perhaps the closest to our work within control theory are multi-user dynamical systems over networks.\nThere, the principal concern is the design of distributed protocols that provide consensus or synchronisation of states of all users [Blondel et al., 2005 ###reference_bx9###, Nedic and Ozdaglar, 2009 ###reference_bx37###]. (The states might indicate vehicle directions or locations, estimations of sensor readings in a sensor network, oscillation frequencies, and each user\u2019s trust opinion, among other things.) To achieve synchronised behaviour in multi-user systems, all systems must agree on the values of these quantities.\nStudying their interactions and collective behaviours under the effect of the information flow permitted by the communication network is critical for networked cooperative dynamical systems.\nThis communication network may be seen as a graph with directed edges or connections corresponding to the information travelling between the systems. The systems are portrayed as nodes on the graph and are sometimes referred to as users. In communication networks, information flows exclusively between the graph\u2019s close neighbours. However, if a network is linked, this locally sent information eventually reaches every user in the graph.\nIn cooperative control systems based on graphs, there are fascinating interactions between the dynamics of the individual users and the communication graph\u2019s topology.\nThe graph topology may severely constrain the performance of the users\u2019 control rules. To be precise, in cooperative control on graphs, all control protocols must be distributed so that each user\u2019s control rule is limited to knowledge about its near neighbours in the network topology. If sufficient attention is not taken while constructing the local user control rules, the dynamics of the individual users may be stable, but the graph\u2019s networked systems may display undesired behaviours. Due to the communication constraints imposed by graph topologies, complex and fascinating behaviours are seen in multi-user systems on graphs that are not found in single-user, centralised, or decentralised feedback control systems.\nThe ideas of distributed cooperative control are used in [Lewis et al., 2013 ###reference_bx32###] to construct optimal and adaptive control systems for multi-user dynamics on graphs. The requirement complicates these designs that all control and parameter tweaking methods must be dispersed in the network to rely on just their near neighbours.\n[Lewis et al., 2013 ###reference_bx32###] analysed discrete-time systems and demonstrate that an additional condition between the local user dynamics and the graph topology must be met to ensure global synchronization when the local optimum design is used. Global optimization of collective group movements is more challenging than locally optimizing each user\u2019s motion. A typical issue in optimum decentralized control is that global optimization problems often demand knowledge from all users, which distributed controllers cannot access since they can only utilize information from closest neighbours. Further, they demonstrate, globally optimum distributed form controls may not exist on a particular graph. To achieve globally optimum performance when employing distributed protocols that rely only on local user information in the graph, the global performance index must be chosen to depend on graph features, notably the graph Laplacian matrix. 
They also establish distinct global optimality for which distributed control solutions are always possible on sufficiently linked networks. There, they examine multi-user graphical games and demonstrate that a Nash equilibrium results when each user optimizes its local performance index. For more results on these direction we refer [Shamma, 2008 ###reference_bx46###, Wang et al., 2017 ###reference_bx55###, Wang et al., 2021 ###reference_bx54###, Yu et al., 2017 ###reference_bx58###, Chen et al., 2019 ###reference_bx14###]."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III A Closed-Loop View of AI Systems",
39
+ "text": "Let us consider a closed-loop model based on the following constraints:\nUsers get information from the AI System, but are not required to take action based on the AI System\u2019s outputs. It will be convenient to encode user\u2019s reaction to the output probabilistically.\nThe AI System does not necessarily monitor individual user\u2019s actions (\u201cprofiling\u201d), but rather some aggregate or otherwise filtered version.\nThe users do not communicate with one another, or only in response to information broadcast by the central authority.\n###figure_1### Ultimately, the repeated uses of an AI system can be seen as the closed-loop of Figure 1 ###reference_###.\nThe AI System produces some outputs at time , e.g., lending decisions in financial services, matches in a two-sided market, or suggestions in a decision-support system.\nThe output is taken up by users of the system, who\nhave some states , internal to them, where .\nThe users take some action, which can be modelled as a probability function of the output and the private state, over the certain user-specific sets of actions.\nThe action of user at time is then a random variable.\nIn the remainder, we will assume are scalars, but generalisations are easy to obtain.\nThe aggregate of the actions at time is then also a random variable.\nThe AI System may not have access to either , , but perhaps only or some filtered version.\nThe filter may accumulate the data, for instance, before filtering out anomalies."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "IV Equal Treatment",
45
+ "text": "Equal treatment very clearly examines the AI system\u2019s treatment of its users and the influence on the microscopic qualities over the short run.\nFor each user , we require that\nthe system provides the same information to all users ,\nthat there exists a constant such that\nwhere this constant is independent of initial conditions.\nFor each user within a class that is defined by non-protected attributes, we require that\nthe system provides the same information to all users within the class;\nthat there exists a constant such that\nwhere this constant is independent of initial conditions.\nNotice that there is a sufficiently large overlap of the classes that are defined by non-protected attributes such that the definition reduces to the unconditional equal treatment."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Equal Impact",
51
+ "text": "Equal impact very clearly examines the AI system\u2019s influence on the user population\u2019s microscopic qualities over the long run. One may desire, for example, that each user obtains a fair portion of the resource on average over time, or, at a far more fundamental level, that the average allocation of the resource to each user over time is a stable number that is predictable and independent of beginning circumstances.\nTo model equal impact, we construct requirements that ensure ergodicity: the presence of a single invariant measure to which the system is statistically drawn regardless of the starting circumstances.\nFor each user , we require that\nthere exists a constant such that\nwhere this latter limit is independent of initial conditions;\nall the coincide.\nFor each user within a class that is defined by non-protected attributes,\nthere exists a constant such that\nwhere this latter limit is independent of the initial conditions.\nFurthermore, we require that all the coincide."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "VI Guarantee Properties",
57
+ "text": "Proving that there is a unique invariant measure is not necessarily an easy undertaking. Even well-known AI systems do not always result in feedback systems that exhibit equal impact.\nUnder the assumptions of continuity of the closed-loop model, the work on iterated function systems [Elton, 1987 ###reference_bx24###, Barnsley et al., 1989 ###reference_bx4###, Diaconis and Freedman, 1999 ###reference_bx19###], which are a class of stochastic dynamical systems arising from the multi-user interactions, makes it possible to obtain strong stability guarantees for such stochastic systems under the assumptions of continuity of the closed-loop model.\nThe following are shown in the work [Fioravanti et al., 2019 ###reference_bx25###]:\nEven if regulation is accomplished by controlling the behaviour of ensembles of users, feedback control with integral action has the potential to disrupt the closed-loop system\u2019s ergodic features. This discovery is significant because ergodic behaviour is necessary for supporting economic contracts and ensuring the existence of attributes such as fairness. Thus, from a practical standpoint, the finding is one of the system\u2019s critical features and is not only theoretically interesting.\nA few particular instances are given to demonstrate the loss of ergodicity in seemingly innocuous situations.\nFor particular population types and filters, stable control action always results in ergodic behaviour. It was particularly shown for linear and non-linear systems with both real-valued and finite-set actions.\nFinally, a minor contribution was made to demonstrate how the results from the study of iterated function systems might be used in designing controllers for specific types of dynamic systems.\nIn this paper, we have to relax the continuity assumptions, however. Indeed, the classification problems involve discrete sets such as the \u201ccredit denied\u201d or \u201ccredit approved\u201d, which cannot be easily modelled with continuous fuctions. So in this case, stochastic, user-specific response to the feedback signal can be modelled\nby user-specific and signal-specific probability distributions over the certain user-specific sets of actions\nwhere can be seen as the space of user\u2019s private state space . Assume that the set of possible resource demands of user is\n, where in the case that is finite we denote\nIn the general case, we assume there are state transition maps\nfor user and output maps\nfor each user .\nThe evolution of the states and the corresponding demands then satisfy:\nwhere the choice of user \u2019s response at time is governed by probability functions\nrespectively. Specifically, for each user , for all and for all signal we have that:\nThen, one can prove that when the graph is strongly connected, there\nexists an invariant measure for the feedback loop. If in\naddition, the adjacency matrix of the graph is primitive,\nthen the invariant measure is attractive and the system\nis uniquely ergodic.\nFor linear systems, this is a direct consequence of (Werner, 2004) and the observation that the necessary contractivity properties follow from the internal asymptotic stability of controller and filter.\nFor non-linear systems, similar results can be obtained using [Marecek et al., pear ###reference_bx34###, Theorem 2]. See also [Ghosh et al., 2021 ###reference_bx26###] and the Supplementary information."
58
+ },
59
+ {
60
+ "section_id": "7",
61
+ "parent_section_id": null,
62
+ "section_name": "VII Numerical Illustrations",
63
+ "text": "Credit scoring refers to the process of lenders, usually financial institutions, measuring the creditworthiness of a person or a small business, usually derived from its historical default.\nIn USA, Equal Credit Opportunity Act (ECOA) and the part of the law that defines its authority and scope, known as Regulation B, require statements of specific reasons for adverse credit decisions, where it would be difficult, yet impossible to comply if complex algorithms or \u201cblack-box\u201d models are used. Instead, scorecards are commonly adopted in practice, due to their good explainability, while alternatively, counterfactual explanations [Dutta et al., 2022 ###reference_bx21###, Verma et al., 2020 ###reference_bx52###] work as an explainer of \u201cblack-box\u201d models to guide an applicant on the easiest improvement that could change the model outcome.\nTable I ###reference_### displays a simple scorecard.\nAlthough Table I ###reference_### might seem fair at first sight, income is a factor closely related to protected attributes, e.g., race. Figure 2 ###reference_### displays the 2020 annual income distribution of households by race, including \u201cBLACK ALONE\u201d (blue), \u201cWHITE ALONE\u201d (pink) and \u201cASIAN ALONE\u201d (green), in the USA, sourced from Table A-2. Households by Total Money Income, Race, and Hispanic Origin of Householder: 1967 to 2020 (Table A-2), from US Census Bureau 333See https://www.census.gov/data/tables/2021/demo/income-poverty/p60-273.html ###reference_mo/income-poverty/p60-273.html###. The green bar on the index \u201cover 200\u201d implied that a larger share (almost 20%) of \u201cASIAN ALONE\u201d households makes more than in 2020.\nOn the other hand, the income of most \u201cBLACK ALONE\u201d households is less than .\nThis figure casts doubt on the equal treatment using the scorecard in Table I ###reference_###, because races with generally lower incomes would receive a lower credit score.\nIf a lender tries to maintain similar credit distributions across different races, the results may not be as expected in the long run, as low-income households might end up defaulting or even not be able to apply for another mortgage ever after, thus hurting their long-term credit history.\n###figure_2### Our notion of equal impact in the context of credit scoring would equalise the long-term average default rate across races or across individuals, such that low-income households can keep better credit history.\nRecall Figure 1 ###reference_### from the perspective of credit scoring.\nGiven the goal of equal impact, at each time step, the income\n is internal to the user (user), while her income code is visible to a lender, where is an indicator function that maps the input to one if is satisfied and all other values to zero.\nThe lender would use the AI system, i.e., logistic regression in our case, to build a scorecard and reveal a credit decision (e.g., approval or denial of a mortgage transaction) to user at time .\nNote that the scorecard only gives a credit score, but, based on a cut-off score, the lender is able to reach a credit decision.\nConfidential to cilent , her state at time is determined by her income and, in turn, influences the repayment action.\nIts repayment action is modelled as a Gaussian conditional independence model [Tang et al., 2021 ###reference_bx50###, Leitao and Ortiz-Gracia, 2020 ###reference_bx31###, Rutkowski and Tarca, 2015 ###reference_bx43###]. 
Afterwards, the filter calculates the average default rates of each user, using historical repayment actions for .\nThe average default rates, along with the income code of users, would be used as training data for the AI system, and further, new credit decisions are made again using logistic regression.\nFor the numerical experiments, we use the real-world data from Table A-2, which gives the number of households and income distribution by year and race.\nWe consider a period from 2002 to 2020, with a year being a time step, because in 2002 the Annual Social and Economic Supplement (ASEC) of the Current Population Survey (CPS) started to allow households to report their race from more diverse options.\nLet be a set that includes 3 races: \u201cBLACK ALONE\u201d, \u201cWHITE ALONE\u201d and \u201cASIAN ALONE\u201d.\nIn the beginning of 2002 (time ), we generate users (households), whose races are sampled from with a distribution of .\nNotice that the distribution is the ratio of the number of households of the three races in 2002 in Table A-2.\nThe generated user set is then divided into 3 subsets according to race, denoted by , for .\nFurther, following the income distribution of the year and race , we sample the income of user at time .\nFor simplicity, let denote that user is offered a 3.5-times-income mortgage at time .\nAssuming that the annual mortgage rate and the basic living cost are 2.16% per annum and , we use the Gaussian conditional independence model [Rutkowski and Tarca, 2015 ###reference_bx43###] to generate the repayment actions.\nSuppose that the state measures the portion of income left after deduction of living cost and mortgage interest:\nThe binary repayment action (1 for repaid) is defined by (11 ###reference_###).\nwhere user would not make a repayment if no mortgage is offered or if her income cannot cover the basic living cost plus mortgage interest. Otherwise, the repayment action follows a Bernoulli distribution with , where is the cumulative distribution function of the standard normal distribution.\nFurthermore,\nwe define default as a mortgage offered but not repaid, i.e., .\nWe introduce the average default rate for user and the race-wise version for race at time , as defined in (12 ###reference_###):\nwhere denotes the number of users of race .\nWith the goal of equal impact, we wish to equalise the outcome of credit scoring among individuals in the long run, such that\nand that all coincide and all coincide.\n###figure_3### ###figure_4### ###figure_5### For the year of 2002-2003 (time 0 & 1), no scorecard is used and we assume all users are given the approval of the mortgage, e.g., , for and .\nThus, we obtain the initilisation of average default rates, i.e., and .\nAfterwards, for time , a scorecard is built, whose parameters are trained from a logistic model, with independent variables being , ADR and the dependent variable being .\nAlthough, the scorecard can vary in time steps, we use the same cut-off score 0.4 to decide each user\u2019s credit decision (0 for denial and 1 for approval). 
Using our notation, the example of Table I ###reference_### would be rewritten as\nWe define a trial as the simulation of generating 1000 users () and repeating the closed-loop for the period 2002-2020.\nIn our numerical experiments, five trials are conducted, with each trial using a new batch of 1000 users.\nFor consistency with Figure 2 ###reference_###, the races \u201cBLACK ALONE\u201d, \u201cWHITE ALONE\u201d, and \u201cASIAN ALONE\u201d are represented by blue, pink, and green colours, respectively.\nIn Figure 5 ###reference_###, we show the race-wise performance in five trials.\nGiven a certain race , the sequence of for one trial forms a time series. Across all five trials, the mean value and one standard deviation could be calculated from the five time series.\nWe denote the mean value of the time series across five trials by a solid curve and one standard deviation by error shades, with the corresponding race distinguished by colour.\nIn Figures 5 ###reference_### and 5 ###reference_###, we show the user-wise performance in five trials.\nSimilarly, given a certain user , the sequence of for one trial is a time series.\nFrom the five trials, and all users in , time series.\nIn Figure 5 ###reference_###, the time series are visualised directly, with their races distinguished by colours.\nIn Figure 5 ###reference_###, the race information of the users are erased, as we intend to present the distribution of the time series by grey shades.\nNote that darker shades denote higher density of ADR at the certain time step.\nRecalling the goal of equal impact in (13 ###reference_###), we would like to see these time series converge (weakly to the same distribution).\nFrom Figure 5 ###reference_###-5 ###reference_###, we do observe that all time series, aggregated by race or not, are dwindling to a similar level."
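A condensed sketch of this simulation pipeline follows. Incomes come from a placeholder lognormal instead of the Table A-2 histograms and are held fixed across years, and the per-step logistic regression is replaced by the fixed Table I scorecard with the 0.4 cut-off, so the numbers are illustrative only; the repayment model follows Eqs. (10)–(12).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
income = rng.lognormal(mean=11.0, sigma=0.6, size=n) / 1000.0  # in $k (assumed)
living_cost, rate = 25.0, 0.0216       # $25k living cost (assumed), 2.16% p.a.
adr = np.zeros(n)                      # average default rate per user

def credit_decision(income_code, adr):
    score = -8.17 * adr + 5.77 * income_code   # Table I coefficients
    return score > 0.4                         # fixed cut-off score

for t in range(2, 21):                         # yearly steps after initialisation
    income_code = (income >= 50.0).astype(float)   # indicator of income >= $50k
    offered = credit_decision(income_code, adr)    # 3.5x-income mortgage offer
    s = (income - living_cost - 3.5 * income * rate) / income   # state, Eq. (10)
    can_pay = offered & (s > 0)
    repaid = can_pay & (rng.uniform(size=n) < norm.cdf(s))      # Eq. (11)
    default = offered & ~repaid                # offered but not repaid
    adr = ((t - 1) * adr + default) / t        # running ADR, cf. Eq. (12)

print(f"final ADR: mean {adr.mean():.3f}, std {adr.std():.3f}")
```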
64
+ },
65
+ {
66
+ "section_id": "8",
67
+ "parent_section_id": null,
68
+ "section_name": "VIII Conclusions",
69
+ "text": "We have presented a novel, closed-loop view of the impact of AI systems.\nOn the example in consumer-credit approvals, we showcase, that equal impact is possible while preserving equal treatment conditional on a non-protected attribute of income.\nAn important question for further work is how to impose constraints on the equality of impact [Celis et al., 2019 ###reference_bx12###].\nAnother important question asks whether the coupling arguments of Hairer et al. [Hairer et al., 2011 ###reference_bx28###] could make it possible to show certain contrapositive statements, suggesting when such guarantees are impossible to provide."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix 1",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix A Markov Systems",
77
+ "text": "A Markov system (see Figure 6 ###reference_###) is a family where consisting of edges of a finite directed (multi) graph with are vertices and is also possible, indicates the initial vertex of each edge and indicates the terminal vertex of each edge, is a partition of the metric space into non-empty Borel subsets, is a family of Borel-measurable maps on the metric space such that\nand is a family of Borel measurable maps on with the property for all and \u2004 for all . A Markov system is called irreducible or aperiodic if its directed graph is irreducible or aperiodic. A Markov system is called contractive with contraction factor if its probability functions satisfy the following average contractivity condition, ,\nThe Markov system defined above determines a Markov operator on the space of bounded Borel measurable functions on , which is denoted by ,\nand the adjoint of is denoted by acts on the space of Borel probability measures as\nA Borel probability measure is said to be an invariant probability measure for the Markov system if it is a stationary distribution of the associated Markov process i.e.\nA Borel probability measure is called attractive for the contractive Markov system iff\n###figure_6###"
78
+ },
79
+ {
80
+ "section_id": "Appendix 2",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix B Incremental Stability",
83
+ "text": "Incremental stability is a well-established concept to describe the asymptotic property of differences between any two solutions. One can utilise the concept of incremental input-to-state stability, which is defined as follows:\nA function is is said to be of class if it is continuous, increasing and . It is of class if, in addition, it is proper, i.e., unbounded.\nA continuous function is said to be of class , if for all fixed the function is of class and for all fixed , the function is is non-increasing\nand tends to zero as .\nLet denote the set of all input functions \nSuppose is continuous, then the discrete-time non-linear dynamical system\nis called (globally) incrementally input-to-state-stable (incrementally ISS), if there exist and such that\nfor any pair of inputs and any pair of initial condition :"
84
+ }
85
+ ],
86
+ "tables": {
87
+ "1": {
88
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S7.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T1.3.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.3.4.1.1\" style=\"padding-bottom:0.86108pt;\">Factor</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.3.4.1.2\" style=\"padding-bottom:0.86108pt;\">Code</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.3.4.1.3\" style=\"padding-bottom:0.86108pt;\">Description</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.3.4.1.4\" style=\"padding-bottom:0.86108pt;\">Score</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T1.1.1.2\">History</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T1.1.1.3\">-</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T1.1.1.1\">\n Average Default Rate</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T1.1.1.4\">-8.17</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S7.T1.2.2.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S7.T1.2.2.2.1\">Income</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.2.2.3\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.2.2.4\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.3.3.2\" style=\"padding-bottom:0.43057pt;\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.3.3.1\" style=\"padding-bottom:0.43057pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.3.3.3\" style=\"padding-bottom:0.43057pt;\">+5.77</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>A simple scorecard for existing users. For example, a user with annual income and an average default rate would be given a score of .</figcaption>\n</figure>",
89
+ "capture": "TABLE I: A simple scorecard for existing users. For example, a user with annual income and an average default rate would be given a score of ."
90
+ }
91
+ },
92
+ "image_paths": {
93
+ "1": {
94
+ "figure_path": "2209.01410v2_figure_1.png",
95
+ "caption": "Figure 1: A closed-loop model of an AI system and its interactions with the users: the AI system provides some outputs, e.g., scorecards in credit scoring, matches in a matching market, or suggestions in a decision-support system. Users observe the outputs and take action in response. With some delay, their actions in response to the outputs are utilized in retraining the AI System.",
96
+ "url": "http://arxiv.org/html/2209.01410v2/x1.png"
97
+ },
98
+ "2": {
99
+ "figure_path": "2209.01410v2_figure_2.png",
100
+ "caption": "Figure 2: The 2020 annual income distribution of \u201cBLACK ALONE\u201d, \u201cWHITE ALONE\u201d and \u201cASIAN ALONE\u201d households in USA, with three races distinguished by colours. Data are sourced from Table A-2 of the Current Population Survey (CPS) of US Census Bureau.",
101
+ "url": "http://arxiv.org/html/2209.01410v2/x2.png"
102
+ },
103
+ "3(a)": {
104
+ "figure_path": "2209.01410v2_figure_3(a).png",
105
+ "caption": "Figure 3: Solid curves depict the mean value of time series {ADRs\u2062(k)}k\u2208[N]subscriptsubscriptADR\ud835\udc60\ud835\udc58\ud835\udc58delimited-[]\ud835\udc41\\{\\textrm{ADR}_{s}(k)\\}_{k\\in[N]}{ ADR start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ( italic_k ) } start_POSTSUBSCRIPT italic_k \u2208 [ italic_N ] end_POSTSUBSCRIPT, across five trials, with race information distinguished by colour. Error shades display mean \u00b1plus-or-minus\\pm\u00b1 one standard deviation.",
106
+ "url": "http://arxiv.org/html/2209.01410v2/x3.png"
107
+ },
108
+ "3(b)": {
109
+ "figure_path": "2209.01410v2_figure_3(b).png",
110
+ "caption": "Figure 3: Solid curves depict the mean value of time series {ADRs\u2062(k)}k\u2208[N]subscriptsubscriptADR\ud835\udc60\ud835\udc58\ud835\udc58delimited-[]\ud835\udc41\\{\\textrm{ADR}_{s}(k)\\}_{k\\in[N]}{ ADR start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ( italic_k ) } start_POSTSUBSCRIPT italic_k \u2208 [ italic_N ] end_POSTSUBSCRIPT, across five trials, with race information distinguished by colour. Error shades display mean \u00b1plus-or-minus\\pm\u00b1 one standard deviation.",
111
+ "url": "http://arxiv.org/html/2209.01410v2/x4.png"
112
+ },
113
+ "3(c)": {
114
+ "figure_path": "2209.01410v2_figure_3(c).png",
115
+ "caption": "Figure 3: Solid curves depict the mean value of time series {ADRs\u2062(k)}k\u2208[N]subscriptsubscriptADR\ud835\udc60\ud835\udc58\ud835\udc58delimited-[]\ud835\udc41\\{\\textrm{ADR}_{s}(k)\\}_{k\\in[N]}{ ADR start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ( italic_k ) } start_POSTSUBSCRIPT italic_k \u2208 [ italic_N ] end_POSTSUBSCRIPT, across five trials, with race information distinguished by colour. Error shades display mean \u00b1plus-or-minus\\pm\u00b1 one standard deviation.",
116
+ "url": "http://arxiv.org/html/2209.01410v2/x5.png"
117
+ },
118
+ "4": {
119
+ "figure_path": "2209.01410v2_figure_4.png",
120
+ "caption": "Figure 6: A Markov system [Werner, 2004]",
121
+ "url": "http://arxiv.org/html/2209.01410v2/extracted/5430169/ms.png"
122
+ }
123
+ },
124
+ "validation": true,
125
+ "references": [
126
+ {
127
+ "1": {
128
+ "title": "A Lyapunov approach to incremental stability properties.",
129
+ "author": "Angeli, D. (2002).",
130
+ "venue": "IEEE Transactions on Automatic Control, 47(3):410\u2013421.",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "2": {
136
+ "title": "Machine bias.",
137
+ "author": "Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016).",
138
+ "venue": "In Ethics of Data and Analytics, pages 254\u2013264. Auerbach\nPublications.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "3": {
144
+ "title": "Beyond individual and group fairness.",
145
+ "author": "Awasthi, P., Cortes, C., Mansour, Y., and Mohri, M. (2020).",
146
+ "venue": "arXiv preprint arXiv:2008.09490, abs/2008.09490.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "4": {
152
+ "title": "Recurrent iterated function systems.",
153
+ "author": "Barnsley, M. F., Elton, J. H., and Hardin, D. P. (1989).",
154
+ "venue": "Constructive approximation, 5(1):3\u201331.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "5": {
160
+ "title": "AI Fairness 360: An extensible toolkit for detecting,\nunderstanding, and mitigating unwanted algorithmic bias.",
161
+ "author": "Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K.,\nLohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy,\nK. N., Richards, J., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., and\nZhang, Y. (2018).",
162
+ "venue": null,
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "6": {
168
+ "title": "A history of ai and law in 50 papers: 25 years of the international\nconference on ai and law.",
169
+ "author": "Bench-Capon, T., Araszkiewicz, M., Ashley, K., Atkinson, K., Bex, F., Borges,\nF., Bourcier, D., Bourgine, P., Conrad, J. G., Francesconi, E., et al.\n(2012).",
170
+ "venue": "Artificial Intelligence and Law, 20(3):215\u2013319.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "7": {
176
+ "title": "Managing artificial intelligence.",
177
+ "author": "Berente, N., Gu, B., Recker, J., and Santhanam, R. (2021).",
178
+ "venue": "MIS quarterly, 45(3):1433\u20131450.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "8": {
184
+ "title": "On the apparent conflict between individual and group fairness.",
185
+ "author": "Binns, R. (2020).",
186
+ "venue": "In Proceedings of the 2020 conference on fairness,\naccountability, and transparency, pages 514\u2013524.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "9": {
192
+ "title": "Convergence in multiagent coordination, consensus, and flocking.",
193
+ "author": "Blondel, V. D., Hendrickx, J. M., Olshevsky, A., and Tsitsiklis, J. N. (2005).",
194
+ "venue": "In Proceedings of the 44th IEEE Conference on Decision and\nControl, pages 2996\u20133000. IEEE.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "10": {
200
+ "title": "Fairness in agreement with european values: An interdisciplinary\nperspective on ai regulation.",
201
+ "author": "Bringas Colmenarejo, A., Nannini, L., Rieger, A., Scott, K. M., Zhao, X.,\nPatro, G. K., Kasneci, G., and Kinder-Kurlanda, K. (2022).",
202
+ "venue": "In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics,\nand Society, pages 107\u2013118.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "11": {
208
+ "title": "An experimental study of the relationship between online engagement\nand advertising effectiveness.",
209
+ "author": "Calder, B. J., Malthouse, E. C., and Schaedel, U. (2009).",
210
+ "venue": "Journal of interactive marketing, 23(4):321\u2013331.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "12": {
216
+ "title": "Classification with fairness constraints: A meta-algorithm with\nprovable guarantees.",
217
+ "author": "Celis, L. E., Huang, L., Keswani, V., and Vishnoi, N. K. (2019).",
218
+ "venue": "In Proceedings of the conference on fairness, accountability,\nand transparency, pages 319\u2013328.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "13": {
224
+ "title": "Us ai regulation guide: legislative overview and practical\nconsiderations.",
225
+ "author": "Chae, Y. (2020).",
226
+ "venue": "The Journal of Robotics, Artificial Intelligence & Law, 3.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "14": {
232
+ "title": "On the control of multi-agent systems: A survey.",
233
+ "author": "Chen, F., Ren, W., et al. (2019).",
234
+ "venue": "Foundations and Trends\u00ae in Systems and Control,\n6(4):339\u2013499.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "15": {
240
+ "title": "Path-specific counterfactual fairness.",
241
+ "author": "Chiappa, S. (2019).",
242
+ "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 33, pages 7801\u20137808.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "16": {
248
+ "title": "The bias detectives.",
249
+ "author": "Courtland, R. (2018).",
250
+ "venue": "Nature, 558(7710):357\u2013360.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "17": {
256
+ "title": "Fairness is not static: deeper understanding of long term fairness\nvia simulation studies.",
257
+ "author": "D\u2019Amour, A., Srinivasan, H., Atwood, J., Baljekar, P., Sculley, D., and\nHalpern, Y. (2020).",
258
+ "venue": "In Proceedings of the 2020 Conference on Fairness,\nAccountability, and Transparency, pages 525\u2013534.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "18": {
264
+ "title": "Companies committed to responsible ai: From principles towards\nimplementation and regulation?",
265
+ "author": "de Laat, P. B. (2021).",
266
+ "venue": "Philosophy & technology, 34(4):1135\u20131193.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "19": {
272
+ "title": "Iterated random functions.",
273
+ "author": "Diaconis, P. and Freedman, D. (1999).",
274
+ "venue": "SIAM Review, 41(1):45\u201376.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "20": {
280
+ "title": "Hard choices in artificial intelligence.",
281
+ "author": "Dobbe, R., Gilbert, T. K., and Mintz, Y. (2021).",
282
+ "venue": "Artificial Intelligence, 300:103555.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "21": {
288
+ "title": "Robust counterfactual explanations for tree-based ensembles.",
289
+ "author": "Dutta, S., Long, J., Mishra, S., Tilli, C., and Magazzeni, D. (2022).",
290
+ "venue": "In International Conference on Machine Learning, pages\n5742\u20135756. PMLR.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "22": {
296
+ "title": "Fairness through awareness.",
297
+ "author": "Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. (2012).",
298
+ "venue": "In Proceedings of the 3rd innovations in theoretical computer\nscience conference, pages 214\u2013226.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "23": {
304
+ "title": "Should we regulate artificial intelligence or some uses of software?",
305
+ "author": "Ellul, J. (2022).",
306
+ "venue": "Discover Artificial Intelligence, 2(1):1\u20136.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "24": {
312
+ "title": "An ergodic theorem for iterated maps.",
313
+ "author": "Elton, J. H. (1987).",
314
+ "venue": "Ergodic Theory and Dynamical Systems, 7(04):481\u2013488.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "25": {
320
+ "title": "On the ergodic control of ensembles.",
321
+ "author": "Fioravanti, A. R., Marecek, J., Shorten, R. N., Souza, M., and Wirth, F.\n(2019).",
322
+ "venue": "Automatica, 108:108483.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "26": {
328
+ "title": "On the ergodic control of ensembles in the presence of non-linear\nfilters.",
329
+ "author": "Ghosh, R., Kungurtsev, V., Marecek, J., and Shorten, R. N. (2021).",
330
+ "venue": "arXiv preprint arXiv:2112.06767.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "27": {
336
+ "title": "Toward fairness in ai for people with disabilities sbg@ a research\nroadmap.",
337
+ "author": "Guo, A., Kamar, E., Vaughan, J. W., Wallach, H., and Morris, M. R. (2020).",
338
+ "venue": "ACM SIGACCESS Accessibility and Computing, (125):1\u20131.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "28": {
344
+ "title": "Asymptotic coupling and a general form of harris\u2019 theorem with applications to stochastic delay equations.",
345
+ "author": "Hairer, M., Mattingly, J. C., and Scheutzow, M. (2011).",
346
+ "venue": "Probability theory and related fields, 149(1-2):223\u2013259.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "29": {
352
+ "title": "Equality of opportunity in supervised learning.",
353
+ "author": "Hardt, M., Price, E., and Srebro, N. (2016).",
354
+ "venue": "In Advances in neural information processing systems, pages\n3315\u20133323.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "30": {
360
+ "title": "Counterfactual fairness.",
361
+ "author": "Kusner, M. J., Loftus, J., Russell, C., and Silva, R. (2017).",
362
+ "venue": "In Advances in Neural Information Processing Systems, pages\n4066\u20134076.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "31": {
368
+ "title": "Model-free computation of risk contributions in credit portfolios.",
369
+ "author": "Leitao, \u00c1. and Ortiz-Gracia, L. (2020).",
370
+ "venue": "Applied Mathematics and Computation, 382:125351.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "32": {
376
+ "title": "Cooperative control of multi-agent systems: optimal and adaptive\ndesign approaches.",
377
+ "author": "Lewis, F. L., Zhang, H., Hengster-Movric, K., and Das, A. (2013).",
378
+ "venue": "Springer Science & Business Media.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "33": {
384
+ "title": "A unified approach to interpreting model predictions.",
385
+ "author": "Lundberg, S. M. and Lee, S.-I. (2017).",
386
+ "venue": "In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R.,\nVishwanathan, S., and Garnett, R., editors, Advances in Neural\nInformation Processing Systems 30, pages 4765\u20134774. Curran Associates, Inc.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "34": {
392
+ "title": "Predictability and fairness in load aggregation and operations of\nvirtual power plants.",
393
+ "author": "Marecek, J., Roubalik, M., Ghosh, R., Shorten, R. N., and Wirth, F. (to\nappear).",
394
+ "venue": "Automatica.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "35": {
400
+ "title": "Ricci v. destefano: Diluting disparate impact and redefining\ndisparate treatment.",
401
+ "author": "McGinley, A. C. (2011).",
402
+ "venue": "Nev. LJ, 12:626.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "36": {
408
+ "title": "Translation tutorial: 21 fairness definitions and their politics.",
409
+ "author": "Narayanan, A. (2018).",
410
+ "venue": "In Proc. Conf. Fairness Accountability Transp., New York, USA,\nvolume 1170, page 3.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "37": {
416
+ "title": "Distributed subgradient methods for multi-agent optimization.",
417
+ "author": "Nedic, A. and Ozdaglar, A. (2009).",
418
+ "venue": "IEEE Transactions on Automatic Control, 54(1):48\u201361.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "38": {
424
+ "title": "Security and privacy for 6g: A survey on prospective technologies and\nchallenges.",
425
+ "author": "Nguyen, V.-L., Lin, P.-C., Cheng, B.-C., Hwang, R.-H., and Lin, Y.-D. (2021).",
426
+ "venue": "IEEE Communications Surveys & Tutorials, 23(4):2384\u20132428.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "39": {
432
+ "title": "Algorithms of oppression.",
433
+ "author": "Noble, S. U. (2018).",
434
+ "venue": "In Algorithms of Oppression. New York University Press.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "40": {
440
+ "title": "Recruitment ai has a disability problem: anticipating and mitigating\nunfair automated hiring decisions.",
441
+ "author": "Nugent, S. and Scott-Parker, S. (2021).",
442
+ "venue": null,
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "41": {
448
+ "title": "Post-processing for individual fairness.",
449
+ "author": "Petersen, F., Mukherjee, D., Sun, Y., and Yurochkin, M. (2021).",
450
+ "venue": "Advances in Neural Information Processing Systems,\n34:25944\u201325955.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "42": {
456
+ "title": "Models of Law and Regulation for AI.",
457
+ "author": "Petit, N. and De Cooman, J. (2021).",
458
+ "venue": "Routledge.",
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "43": {
464
+ "title": "Regulatory capital modeling for credit risk.",
465
+ "author": "Rutkowski, M. and Tarca, S. (2015).",
466
+ "venue": "International Journal of Theoretical and Applied Finance,\n18(05):1550034.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "44": {
472
+ "title": "Defining the scope of ai regulations.",
473
+ "author": "Schuett, J. (2019).",
474
+ "venue": "arXiv preprint arXiv:1909.01095.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "45": {
480
+ "title": "A legal definition of ai.",
481
+ "author": "Schuett, J. et al. (2019).",
482
+ "venue": "arXiv preprint arXiv:1909.01095.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "46": {
488
+ "title": "Cooperative control of distributed multi-agent systems.",
489
+ "author": "Shamma, J. (2008).",
490
+ "venue": "John Wiley & Sons.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "47": {
496
+ "title": "Beyond the individual: governing AI\u2019s societal harm",
497
+ "author": "Smuha, N. A. (2021a).",
498
+ "venue": "Internet Policy Review, 10(3).",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "48": {
504
+ "title": "From a \u2018race to AI\u2019 to a \u2018race to AI regulation\u2019: regulatory competition for artificial intelligence.",
505
+ "author": "Smuha, N. A. (2021b).",
506
+ "venue": "Law, Innovation and Technology, 13(1):57\u201384.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "49": {
512
+ "title": "When machine learning meets privacy in 6g: A survey.",
513
+ "author": "Sun, Y., Liu, J., Wang, J., Cao, Y., and Kato, N. (2020).",
514
+ "venue": "IEEE Communications Surveys & Tutorials, 22(4):2694\u20132724.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "50": {
520
+ "title": "Quantum computation for pricing the collateralized debt obligations.",
521
+ "author": "Tang, H., Pal, A., Wang, T.-Y., Qiao, L.-F., Gao, J., and Jin, X.-M. (2021).",
522
+ "venue": "Quantum Engineering, 3(4):e84.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "51": {
528
+ "title": "Demystifying the Draft EU Artificial Intelligence Act-Analysing the good, the bad, and the unclear elements of the proposed approach.",
529
+ "author": "Veale, M. and Borgesius, F. Z. (2021).",
530
+ "venue": "Computer Law Review International, 22(4):97\u2013112.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "52": {
536
+ "title": "Counterfactual explanations for machine learning: A review.",
537
+ "author": "Verma, S., Dickerson, J., and Hines, K. (2020).",
538
+ "venue": "arXiv preprint arXiv:2010.10596.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "53": {
544
+ "title": "Regulating ai in medicine in the united states and europe.",
545
+ "author": "Vokinger, K. N. and Gasser, U. (2021).",
546
+ "venue": "Nature machine intelligence, 3(9):738\u2013739.",
547
+ "url": null
548
+ }
549
+ },
550
+ {
551
+ "54": {
552
+ "title": "Robust Cooperative Control of Multi-Agent Systems: A Prediction\nand Observation Prospective.",
553
+ "author": "Wang, C., Zuo, Z., Wang, J., and Ding, Z. (2021).",
554
+ "venue": "CRC Press.",
555
+ "url": null
556
+ }
557
+ },
558
+ {
559
+ "55": {
560
+ "title": "Cooperative control of multi-agent systems: Theory and applications.",
561
+ "author": "Wang, Y., Garcia, E., Casbeer, D., and Zhang, F. (2017).",
562
+ "venue": ".",
563
+ "url": null
564
+ }
565
+ },
566
+ {
567
+ "56": {
568
+ "title": "Algorithms for fairness in sequential decision making.",
569
+ "author": "Wen, M., Bastani, O., and Topcu, U. (2021).",
570
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, pages 1144\u20131152. PMLR.",
571
+ "url": null
572
+ }
573
+ },
574
+ {
575
+ "57": {
576
+ "title": "Ergodic theorem for contractive markov systems.",
577
+ "author": "Werner, I. (2004).",
578
+ "venue": "Nonlinearity, 17(6):2303.",
579
+ "url": null
580
+ }
581
+ },
582
+ {
583
+ "58": {
584
+ "title": "Distributed cooperative control of multi-agent systems.",
585
+ "author": "Yu, W., Wen, G., Chen, G., and Cao, J. (2017).",
586
+ "venue": "John Wiley & Sons.",
587
+ "url": null
588
+ }
589
+ },
590
+ {
591
+ "59": {
592
+ "title": "Inherent tradeoffs in learning fair representations.",
593
+ "author": "Zhao, H. and Gordon, G. (2019).",
594
+ "venue": "Advances in neural information processing systems, 32.",
595
+ "url": null
596
+ }
597
+ }
598
+ ],
599
+ "url": "http://arxiv.org/html/2209.01410v2"
600
+ }
20240225/2209.05946v2.json ADDED
@@ -0,0 +1,107 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "OmDet: Large-scale Vision-Language Multi-dataset Pre-training with Multimodal Detection Network",
3
+ "abstract": "The advancement of object detection (OD) in open-vocabulary and open-world scenarios is a critical challenge in computer vision. This work introduces OmDet, a novel language-aware object detection architecture, and an innovative training mechanism that harnesses continual learning and multi-dataset vision-language pre-training. Leveraging natural language as a universal knowledge representation, OmDet accumulates a \u201dvisual vocabulary\u201d from diverse datasets, unifying the task as a language-conditioned detection framework. Our multimodal detection network (MDN) overcomes the challenges of multi-dataset joint training and generalizes to numerous training datasets without manual label taxonomy merging. We demonstrate superior performance of OmDet over strong baselines in object detection in the wild, open-vocabulary detection, and phrase grounding, achieving state-of-the-art results. Ablation studies reveal the impact of scaling the pre-training visual vocabulary, indicating a promising direction for further expansion to larger datasets. The effectiveness of our deep fusion approach is underscored by its ability to learn jointly from multiple datasets, enhancing performance through knowledge sharing.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Object detection (OD) is one of the monumental tasks in computer vision (CV). Classical OD research has been focusing on improving the detector network to achieve higher accuracy with lower latency (Ren et al., 2015 ###reference_b43###; Redmon et al., 2016 ###reference_b42###; Liu et al., 2016 ###reference_b30###; Zhou et al., 2019 ###reference_b61###). In the last decade, numerous novel OD architectures have been developed, including well-known two-stage methods, e.g., Faster-RCNN (Ren et al., 2015 ###reference_b43###), one-stage methods, e.g., Yolo, SSD, and CenterNet (Redmon et al., 2016 ###reference_b42###; Liu et al., 2016 ###reference_b30###; Zhou et al., 2019 ###reference_b61###), and recent end-to-end methods e.g., DETR and Sparse-RCNN (Carion et al., 2020 ###reference_b1###; Sun et al., 2021 ###reference_b47###). However, although the state-of-the-art OD model can achieve over 60 AP on COCO (Lin et al., 2014 ###reference_b29###), classical OD systems all share one major limitation, i.e., they cannot generalize to object types that are not included in the pre-defined label set, such as 80 classes from COCO. This weakness limits OD such that it can only be applied to domains with known targets, and cannot be used in more challenging open-world domains, such as robotics, augmented reality, and embodied agents, that demand the system to detect any type of objects on the fly as the human users request.\nOn the other hand, Vision-Language Pre-training (VLP) has rapidly progressed (Li et al., 2020 ###reference_b26###; Radford et al., 2021a ###reference_b40###; Li et al., 2021 ###reference_b23###; Kim et al., 2021 ###reference_b21###), thanks to the emergence of multimodal transformers (Vaswani et al., 2017 ###reference_b48###) and the availability of large paired image-text corpora (Sharma et al., 2018 ###reference_b46###; Changpinyo et al., 2021 ###reference_b2###). By learning image-to-text matching from massive multimodal datasets, many proposed VLP models have helped to achieve the state-of-the-art performance of a variety of downstream multimodal tasks, ranging from visual QA (Lu et al., 2019 ###reference_b35###), cross-modal retrieval (Lu et al., 2021 ###reference_b36###) to explainable evaluation (Zhao et al., 2022 ###reference_b57###).\nRecently, an emerging line of research is to exploit VLP models to upgrade OD models to solve the more challenging open-vocabulary setting, where a detector can generalize to new visual concepts with zero/few-shot adaption (Gu et al., 2021 ###reference_b15###; Kamath et al., 2021 ###reference_b20###; Li et al., 2022b ###reference_b25###; Minderer et al., 2022 ###reference_b38###). Some of the VLP-based methods exploit large-scale visual grounding datasets for pretraining (Kamath et al., 2021 ###reference_b20###) and some of the work combines class-agnostic region proposal network (RPN) with a zero-shot image-text classifier respectively for localization and classification (Zhong et al., 2022 ###reference_b58###).\nUnlike previous VLP-based methods that utilize one large vision-language corpus for pretraining, this paper explores a continual learning approach, i.e., can a detector learn from many OD datasets with increasing total visual vocabulary and eventually achieve the open-vocabulary detection capabilities?. This approach is appealing for several reasons: (1) it opens the possibility of lifelong learning since one can improve a detector\u2019s zero/few-shot performance by feeding it with new datasets. 
(2) it is cost-effective, since creating many small domain-specific datasets is much cheaper than creating a single large-vocabulary dataset (Gupta et al., 2019).\nOn the other hand, joint training from multiple OD datasets with different labels faces two key technical challenges: (1) taxonomy conflict: each OD dataset is annotated with its pre-defined labels and a classic detector uses a fixed Softmax layer to classify object types (Ren et al., 2015). Such a design forbids the possibility of learning from different label sets or dynamically adapting to new classes. (2) fore/background inconsistency: since the label sets differ, an object proposal may be considered foreground in dataset A, while it is considered background in dataset B. For example, an object \u201ccat\u201d is annotated in dataset A, but not in dataset B. Our study shows that this greatly hurts the multi-dataset performance of classic detectors, since the RPN head is confused by the conflicting ground truth.\nTo address the above challenges, this work proposes a novel vision-language model, OmDet, for open-vocabulary object detection and phrase grounding. The main architectural novelty of OmDet is its latent query-centric fusion module, which combines information from visual and text features, and its training mechanism, which can easily accumulate knowledge from OD/grounding datasets from various domains. Two versions of OmDet are pre-trained: OmDet V1, which is purely pre-trained on a large number of OD datasets (more than 100 domains), and OmDet V2, which is additionally pre-trained on visual grounding data (Kamath et al., 2021).\nThe proposed method is evaluated on three downstream tasks: object detection in the wild (ODinW) (Li et al., 2022a), open-vocabulary detection, and phrase grounding (Plummer et al., 2015). Results show that OmDet is able to outperform all prior art, including the powerful GLIP (Li et al., 2022b), which is pre-trained on much larger datasets. Moreover, comprehensive model analysis is conducted to better understand the strengths and limitations of OmDet. We conduct a controlled study on joint training from four diverse datasets (COCO, Pascal VOC, and Wider Face/Pedestrian), and results show that our method is not only able to learn from all datasets without suffering from label and localization conflicts, but also achieves stronger performance than single-dataset detectors due to knowledge sharing among tasks. Also, we show that accumulating multiple datasets to expand to large-vocabulary OD learning is an effective method to boost OmDet\u2019s zero/few-shot ability as well as its parameter-efficient training performance (e.g., prompt tuning).\nIn summary, the contributions of this paper are fourfold:\nWe present OmDet, a novel language-aware OD architecture with a Multimodal Detection Network (MDN) that can learn from any number of OD and grounding datasets.\nExperiments show OmDet\u2019s state-of-the-art performance on well-known ODinW, open-vocabulary detection and phrase grounding benchmarks.\nExperiments confirm the effectiveness of the proposed multi-dataset training in solving the label-difference and fore/background-inconsistency challenges.\nExperiments show that by scaling up the visual vocabulary size via multi-dataset training, one can improve zero/few-shot and parameter-efficient fine-tuning."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Vision-Language Pre-training",
21
+ "text": "One of the most studied topics of VLP is to pre-train massive image-text pair data. Recent advances in self-supervised learning have enabled models to learn rich representations from large-scale unlabeled data.\nFor example, CLIP (Radford et al., 2021a ###reference_b40###) learns to predict which text matches which image, resulting in a versatile model that can perform well on various vision tasks without task-specific supervision. ALIGN (Li et al., 2021 ###reference_b23###) further scales up CLIP by using a noisy dataset of over one billion image alt-text pairs. However, these models mainly focus on vision-based tasks and neglect the interaction between multiple modalities during pre-training. To address this limitation, several studies propose to learn joint multi-modal representations of image content and natural language for vision+language tasks (such as VQA and visual reasoning). Among them, OSCAR (Li et al., 2020 ###reference_b26###), UNITER (Chen et al., 2020 ###reference_b3###) and VILLA (Gan et al., 2020 ###reference_b12###) adopt a two-stage approach: they first use an object detector (e.g., Faster R-CNN (Zhang et al., 2021 ###reference_b55###)) to extract vision features, then they apply a multi-layer transformer (Vaswani et al., 2017 ###reference_b48###) to the concatenation of the visual features and text features to learn joint embeddings.\nSome studies propose to model visual input without relying on pre-trained object detectors. For instance, SOHO (Huang et al., 2021 ###reference_b18###) uses a visual dictionary to extract compact image features from a whole image, which enables 10 times faster inference time than region-based methods. Similarly, ViLT (Kim et al., 2021 ###reference_b21###) employs a vision transformer (Dosovitskiy et al., 2020 ###reference_b9###) to capture long-range dependencies over a sequence of fixed-size non-overlapping image patches, without using convolutional visual features."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Object Detection",
27
+ "text": "Objection detection, one of the predominant tasks in computer vision, aims to detect bounding boxes and classes of object instances. It has significantly evolved through the contributions of massive research in recent years. There are two major categories of detectors: two-stage and one-stage methods. Two-stage methods consist of a region proposal network (RPN) and a region-wise classifier. Classic models include R-CNN (Girshick et al., 2014 ###reference_b14###), Fast R-CNN (Girshick, 2015 ###reference_b13###) and Faster R-CNN (Ren et al., 2015 ###reference_b43###). One-stage methods eliminate the RPN stage and directly make final object predictions on the visual feature maps. Well-known systems include SSD (Liu et al., 2016 ###reference_b30###), Yolo (Redmon et al., 2016 ###reference_b42###) and RetinaNet (Lin et al., 2017b ###reference_b28###). Recently, end-to-end detectors such as DETR (Carion et al., 2020 ###reference_b1###) have proposed to formulate the object detection task as a set prediction task. However, objection detection is often formulated as a closed-set problem with fixed and predefined classes and cannot handle object detection in the wild. To overcome the closed-set limitation, more realistic scenarios such as Multi-Dataset Object Detection (MDOD) and Open-Vocabulary Object Detection (OVOD) have attracted lots of attention.\nMulti-Dataset Object Detection: MDOD focuses on increasing detectable object classes by training a single detector using multiple datasets. Traditional closed-set object detection demands training detectors on datasets with full annotations, and adding a new dataset means costly extra human annotations. Research on MDOD attempts to bypass the closed-set limitation, where a single detector is able to incrementally add object classes by adding new datasets with new classes. Yao et al., (Yao et al., 2020 ###reference_b53###) proposes an MDOD framework with a preprocessed hybrid dataset and a dataset-aware focal loss. (Zhao et al., 2020 ###reference_b56###) designs a conflict-free loss to avoid the ambiguity between positive and negative samples. Detection Hub(Meng et al., 2022 ###reference_b37###) unifies multiple datasets with a query-based object detector with natural language embedding.\nOpen-Vocabulary Object Detection: OVOD, a more ambitious goal beyond the closed-set problem, refers to the capability of only training on annotated datasets and generalizing to unseen novel classes. Recently, OVOD has made such progress with the utilization of multi-modal vision-language pre-trained models (Li et al., 2022b ###reference_b25###)(Zhou et al., 2022b ###reference_b60###)(Kamath et al., 2021 ###reference_b20###). RegionCLIP(Zhong et al., 2022 ###reference_b58###) generates pseudo-labels for region-text pairs from caption datasets to perform regional vision-language pre-training and transfer to OVOD. ViLD(Gu et al., 2021 ###reference_b15###) proposed a two-stage open-vocabulary detector, which distills embeddings from teacher model CLIP (Radford et al., 2021b ###reference_b41###) or ALIGN (Jia et al., 2021 ###reference_b19###). With inspiration from CoOp (Zhou et al., 2022a ###reference_b59###), DetPro (Du et al., 2022 ###reference_b10###) introduces a technique to learn continuous prompt embedding that improves the performance of ViLD. 
OWL-ViT (Minderer et al., 2022) transfers the pre-trained image-text model to object detection by adding downstream detection heads and fine-tuning on OD datasets.\nObject Detection as Grounding: Phrase grounding refers to the process of identifying the relationship between individual phrases within a sentence and specific objects or regions depicted in an image (Kamath et al., 2021; Deng et al., 2021). GLIP (Li et al., 2022b) proposed that object detection can be viewed as a special case of phrase grounding. The authors of GLIP concatenate object types as a single string and ask the model to ground objects to word spans. This setup enables unified modeling between phrase grounding and object detection, and the resulting system achieves strong performance in long-tail object detection and zero-shot detection.\nUnlike previous grounding-based methods, the proposed method is designed to learn from an arbitrary number of object detection (OD) datasets, and does not necessarily need to train on grounding data. This ability is valuable for real-world scenarios, e.g., creating a multi-task OD model that simultaneously learns from many independent OD datasets."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Our Approach",
33
+ "text": "###figure_1### Before getting into the details of the proposed system, we first define the problem formulation. OmDet is designed for language-conditioned detection. Let be a large vocabulary of object types that OmDet can potentially detect. A task is a set of object types that the model should detect in its forward path, where . Note that the size of can be dynamic ranging from 1 to , where is the maximum supported number of object types in a single inference run. For the visual grounding setting, is the query sentence that contains word tokens. Meanwhile, Let be a set of natural language labels. In the object detection case, . For the grounding cases, is the set of entities that appeared in caption . Then given an input image , a task , and a label set , the model is expected to detect all objects mentioned in from . Since and are not fixed, an ideal model can dynamically adapt its detection targets conditioned on the task."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Model Architecture",
39
+ "text": "Following the above design principle, OmDet is introduced, a task-conditioned detection network that can learn from infinite combinations of tasks. It is composed of a vision backbone, a task encoder, a label encoder, and a multimodal detection network. The overall structure is illustrated in Fig1 ###reference_###. The following will describe each component in detail.\nVision Backbone\nStarting from the initial image (with 3 color channels), let the vision encoder be a conventional Convolutional Neural Network (CNN) (Liu et al., 2022 ###reference_b33###) or Vision Transformer (e.g. Swin Transformer (Liu et al., 2021 ###reference_b32###)). The vision encoder generates a lower-resolution visual feature map at each output layer. Then Feature Pyramid Network (FPN) (Lin et al., 2017a ###reference_b27###) is used to aggregate information from top to bottom and output a set of visual feature maps .\nTask Encoder and Label Encoder\nThe term \u201dtask\u201d refers to a natural language query designed to expand various text-aware vision tasks; (e.g., \u201dDetect objects: {the specified list of objects that we aim to identify}\u201d) The term \u2019label\u2019 refers to the language phrase output that is intended for detection purposes. The task set is set of natural language words. Then a task encoder or a label encoder is a transformer model that encodes the task set as a natural language sentence, and outputs a set of contextual word embeddings, i.e. and , where is the contextual word embedding dimension size. We use pre-trained transformer-based language models, e.g. CLIP (Radford et al., 2021a ###reference_b40###) to initialize the task and label encoders.\n###figure_2### Multimodal Detection Network\nThe Multimodal Detection Network (MDN) is a core component of OmDet. Different from early work only fuse language and vision information in late stage Gu et al. (2021 ###reference_b15###), we deploy deep fusion to combine information from the image and current task early on, in order to achieve strong performance. We are inspired by the Sparse-RCNN (Sun et al., 2021 ###reference_b47###) network design and developed an iterative query-based fusion mechanism that fuses text features and visual features into latent queries. Figure 3 ###reference_### illustrates the differences between our method versus prior art.\n###figure_3### Let be a fixed small set of learnable proposal features. The denotes the number of proposal features. It is a set of high-dimensional (e.g., ) latent features that capture the rich information of a potential instance, by combining data from the vision backbone and contextual task embedding from the task encoder. Also, let be a set of learnable one-to-one proposal boxes assigned to each feature. Then given the FPN output and task/label encoder output, the initial MDN operates as the following:\nwhere is the task embedding at iteration and is the label embedding. Note that MDN can be stacked to iterative refine its output the same as Sparse-RCNN, with the key difference that is fused with the proposal feature before the Dynamic Convolution layer and also is also iteratively updated at each run of MDN block. This enables the network to learn to adjust the task embedding and the proposal embedding jointly and adapt both object localization and classification heads conditioned on the given task. 
Figure 2 shows the process by which MDN first combines information between latent queries and language embeddings via MHSA, and then infuses visual features with DynamicConv. Note that we can easily adapt MDN to other query-based detectors such as DETR (Carion et al., 2020), in which the DynamicConv operation is replaced by a CrossAttention module.\nWith the utilization of deep fusion between image features and task embeddings in MDN, the challenge of fore/background inconsistency is solved. Other models (Zhou et al., 2022b; Minderer et al., 2022) try to solve the fore/background inconsistency by training a perfect RPN to find all possible objects, which is hard to achieve. Our method applies deep fusion at an early stage to help the model be conscious of fore/background according to the task embedding, and therefore properly switches fore/background among different tasks. To handle the taxonomy conflict, the label encoder is applied to get the text embedding of the target label, and the label embedding is then passed to the classification stage to eliminate naming differences. The taxonomy conflict is solved by projecting the target label into an embedding space, since the same object under different names will be close in that space."
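The MDN block described above can be sketched in PyTorch as follows. This is a hedged, simplified rendering under our own assumptions: the layer sizes, the ordering of the fusion and self-attention steps, and the elided DynamicConv details follow Sparse-RCNN conventions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class MDNBlock(nn.Module):
    """One (simplified) Multimodal Detection Network block."""
    def __init__(self, d=256, heads=8):
        super().__init__()
        self.task_fusion = nn.MultiheadAttention(d, heads, batch_first=True)
        self.query_self_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.box_head = nn.Linear(d, 4)  # per-query box refinement (toy head)

    def forward(self, queries, task_emb, label_emb, roi_feats=None):
        # (1) Fuse latent proposal features with contextual task embeddings (MHSA).
        q, _ = self.task_fusion(queries, task_emb, task_emb)
        # (2) Self-attention among the now task-aware proposal features.
        q, _ = self.query_self_attn(q, q, q)
        # (3) DynamicConv with per-box ROI features would go here (elided; see
        #     Sparse-RCNN). In a DETR-style variant, this step is replaced by
        #     cross-attention onto the visual feature map.
        # (4) Classify by similarity to label embeddings; refine boxes with a head.
        logits = q @ label_emb.transpose(-1, -2)   # (B, num_queries, num_labels)
        boxes = self.box_head(q)
        return q, boxes, logits

# Assumed shapes: queries (B, 300, 256), task_emb (B, T, 256), label_emb (B, L, 256).
```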
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Model Training",
45
+ "text": "Set Prediction Loss Given the proposed model, it uses set prediction loss (Carion et al., 2020 ###reference_b1###) on the fixed-size set of predictions of classification and box coordinates. Set-based loss produces an optimal bipartite matching between predictions and ground truth objects using the Hungarian algorithm. The matching cost is defined as follows:\nHere is focal loss (Lin et al., 2017b ###reference_b28###) of predicted classifications and ground truth category labels, and are L1 loss and generalized IoU loss (Carion et al., 2020 ###reference_b1###) between normalized center coordinates and height and width of predicted boxes and ground truth box, respectively. , and are coefficients of each component. The training loss is the same as the matching cost except that only performed on matched pairs. The final loss is the sum of all pairs normalized by the number of objects inside the training batch.\nTask-Sampling Strategy\nFor object detection datasets, in order to simulate a diverse set of tasks for meta-learning during training and also enforce the model to condition its output on a given task, a novel task sampling strategy is used during training.\nLet the max size of a given task be , for an image from a dataset in the mini-batch, we first sample with a uniform distribution.\nLet the number of unique object types in be , if , then only a random subset of object types are kept and the extra annotations are removed for this mini-batch. If , then additional negative object types are randomly selected from the vocabulary of dataset .\nThe model is trained with the above-sampled task and ground truth annotations.\nWith the above method, each image in every mini-batch will have a different set of tasks to learn from. When we learn from a large-vocabulary object detection dataset, e.g., LVIS, which contains 1200 unique object types, the unique combination of task size is . If , then it produces 1.34E43 possibilities, a quite large number. Experiments show that the proposed training strategy serves the purpose well, and yields models that perform task-conditioned object detection.\nFor learning from phrase grounding dataset, the task is simply the corresponding caption of the image. The label set is the set of entities that appeared in the caption. However, since there are only a few entities in each caption, learning of becomes too easy. Therefore, we randomly select from other entities in the dataset to create a label set up to classes to increase the difficulty of learning. This method is proven to be effective in improving performance on phrase grounding in later experiments."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Comparison to Grounding-based Method",
51
+ "text": "Our proposed architecture, the Multimodal Detection Network, has several strengths over traditional approaches that directly fuse text and vision features. Instead, our model fuses latent queries with text features, leading to the following advantages:\nDeep fusion for any query-based OD: early VLP work, e.g., ViLD (Gu et al., 2021 ###reference_b15###) and Detic (Zhou et al., 2022b ###reference_b60###), use shallow fusion for object detection, i.e. use text embedding only for classification, which cannot solve fore/background conflicts. Meanwhile, prior deep fusion models, e.g., MDETR (Kamath et al., 2021 ###reference_b20###) and GLIP (Li et al., 2022b ###reference_b25###)), use specialized cross-attention architecture to fuse the text and visual features. Our method can be applied to any query-based OD architecture, e.g. DETR, Sparse-RCNN, without the need for model change.\nInference speed and performance: visual grounding MDETR (Kamath et al., 2021 ###reference_b20###) and TransVG (Deng et al., 2021 ###reference_b7###) models encode one class at a time for OD and suffer from slow inferences speed, e.g. 10s/image for MDETR. Also, MDETR uses a transformer to fuse images with text, which cannot scale up to multi-scale features due to the complexity of self-attention. Our method deals with fixed-size latent queries, which are independent of visual features. Thus, our method is able to predict many classes with significant speed up with on-par or better performance."
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Implementation Details",
63
+ "text": "We implement OmDet with the following settings:\nFor text embeddings, CLIP-B/16 text encoder (Radford et al., 2021b ###reference_b41###) is used throughout the study. We did not use the prompt template as used in study (Gu et al., 2021 ###reference_b15###), i.e. encoding object names in a template a photo of {}. This is because preliminary studies show no major difference between using versus not using the prompt template. Furthermore, the preliminary study also suggests there are no significant differences between using single-modal language models, e.g. BERT (Devlin et al., 2018 ###reference_b8###) and RoBERTa (Liu et al., 2019 ###reference_b31###), versus multimodal-language models e.g. CLIP. We suspect this is because object detection does not involve complex language understanding.\nThe task and label encoders share the same text encoders. On top of the text encoder, two independent Transformers layers (Vaswani et al., 2017 ###reference_b48###) are used to further dedicated encoding for task input and label input. Study shows that the set encoding is able to improve OmDet\u2019s performance.\nFor visual backbones, both Swin Transformers (Liu et al., 2021 ###reference_b32###) and ConvNeXt (Liu et al., 2022 ###reference_b33###) are used in the experiments. A standard FPN (Lin et al., 2017a ###reference_b27###) is used to extract a four-level feature map form the visual encoders. Both backbones are pre-trained on ImageNet 21K data (Ridnik et al., 2021 ###reference_b44###). Preliminary studies found that ConvNeXt usually performs on par or better than Swin Transformers. Therefore, we use ConvNeXt as the default choice.\nLastly, the MDN network utilizes MHSA to fuse information from visual input and text input to latent queries. We equip MDN with 300 latent queries and we use ROIAlignV2 (He et al., 2017 ###reference_b17###) as the ROI Pooler to extract region features from the visual backbone. 6 sequential MDN blocks are cascaded to create the final bounding boxes and classification prediction."
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Large-scale Pre-training",
69
+ "text": "Two versions of large-scale pre-training are conducted.\nLarge-scale OD Pre-training (OmDet V1): in this setting, we accumulate a large number (104) of object detection datasets for pre-training to show that OmDet is able to accumulate knowledge from many OD datasets without suffering from fore/background and label inconsistency challenges. Pre-training datasets include COCO (Lin et al., 2014 ###reference_b29###), Object365 (Shao et al., 2019 ###reference_b45###), LVIS (Gupta et al., 2019 ###reference_b16###), PhraseCut (Wu et al., 2020 ###reference_b49###) and Roboflow 100 (Ciaglia et al., 2022 ###reference_b4###). Data details are described in Table 1 ###reference_###\nLarge-scale OD & Grounding Pre-training (OmDet V2): In the second version, we exclude any images related to COCO and LVIS datasets from pre-training since we will test zero-shot performance on these two datasets. In addition to large-scale OD multi-dataset pre-training, OmDet is able to horizontally expand to the non-OD type of training data. Specifically, we include the GoldG grounding dataset curated by Kamath et al. (2021 ###reference_b20###), which includes 1.3M pairs of image-caption data with grounded entities. Data details are described in Table 2 ###reference_###\nModel Training: For OmDet models, the initial learning rate is 5e-5 and it decays at 70 and 90 of total iteration steps by 0.1. ConvNeXt Base backbone is used with a 6-layer MDN head. The batch size is 40 and the maximum number of detections per image is 300 and K is set to 80. All of the proposed models are pre-trained for 36 epochs using NVIDIA A100 GPU cluster and then fine-tuned on the downstream data."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {
74
+ "1": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.1.2\"><span class=\"ltx_text\" id=\"S4.T1.1.1.1.2.1\" style=\"color:#000000;\"># Classes</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.3\"><span class=\"ltx_text\" id=\"S4.T1.1.1.1.3.1\" style=\"color:#000000;\"># Images</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.4\"><span class=\"ltx_text\" id=\"S4.T1.1.1.1.4.1\" style=\"color:#000000;\">Federated</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.1.1\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.1.1\" style=\"color:#000000;\">COCO</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.2\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.2.1\" style=\"color:#000000;\">80</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.2.1.3\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.3.1\" style=\"color:#000000;\">100K</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.2.1.4\"><span class=\"ltx_text\" id=\"S4.T1.1.2.1.4.1\" style=\"color:#000000;\">No</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.3.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.1.1\" style=\"color:#000000;\">O365</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.3.2.2\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.2.1\" style=\"color:#000000;\">365</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.3.2.3\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.3.1\" style=\"color:#000000;\">2M</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.3.2.4\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.4.1\" style=\"color:#000000;\">No</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.4.3.1\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.1.1\" style=\"color:#000000;\">LVIS</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.4.3.2\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.2.1\" style=\"color:#000000;\">1203</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.4.3.3\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.3.1\" style=\"color:#000000;\">100K</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.4.3.4\"><span class=\"ltx_text\" id=\"S4.T1.1.4.3.4.1\" style=\"color:#000000;\">Yes</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.5.4.1\"><span class=\"ltx_text\" id=\"S4.T1.1.5.4.1.1\" style=\"color:#000000;\">PhraseCut</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.5.4.2\"><span class=\"ltx_text\" id=\"S4.T1.1.5.4.2.1\" style=\"color:#000000;\">3013</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.5.4.3\"><span class=\"ltx_text\" 
id=\"S4.T1.1.5.4.3.1\" style=\"color:#000000;\">70K</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.5.4.4\"><span class=\"ltx_text\" id=\"S4.T1.1.5.4.4.1\" style=\"color:#000000;\">Yes</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.1.6.5.1\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.1.1\" style=\"color:#000000;\">RF100</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T1.1.6.5.2\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.2.1\" style=\"color:#000000;\">829</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.1.6.5.3\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.3.1\" style=\"color:#000000;\">224K</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.1.6.5.4\"><span class=\"ltx_text\" id=\"S4.T1.1.6.5.4.1\" style=\"color:#000000;\">Yes</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"color:#000000;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Pre-train data used in large-scale OD pre-training, resulting in OmDetV1.</figcaption>\n</figure>",
76
+ "capture": "Table 1: Pre-train data used in large-scale OD pre-training, resulting in OmDetV1."
77
+ },
78
+ "2": {
79
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T2.1.1.1.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.2.1\" style=\"color:#000000;\"># Classes</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.3.1\" style=\"color:#000000;\"># Images</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1.4\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.4.1\" style=\"color:#000000;\">Type</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.1\"><span class=\"ltx_text\" id=\"S4.T2.1.2.1.1.1\" style=\"color:#000000;\">O365</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.2.1.2\"><span class=\"ltx_text\" id=\"S4.T2.1.2.1.2.1\" style=\"color:#000000;\">365</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.1.2.1.3\"><span class=\"ltx_text\" id=\"S4.T2.1.2.1.3.1\" style=\"color:#000000;\">2M</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.1.2.1.4\"><span class=\"ltx_text\" id=\"S4.T2.1.2.1.4.1\" style=\"color:#000000;\">OD</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.3.2.1\"><span class=\"ltx_text\" id=\"S4.T2.1.3.2.1.1\" style=\"color:#000000;\">PhraseCut</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.3.2.2\"><span class=\"ltx_text\" id=\"S4.T2.1.3.2.2.1\" style=\"color:#000000;\">3013</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.1.3.2.3\"><span class=\"ltx_text\" id=\"S4.T2.1.3.2.3.1\" style=\"color:#000000;\">70K</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.1.3.2.4\"><span class=\"ltx_text\" id=\"S4.T2.1.3.2.4.1\" style=\"color:#000000;\">OD</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.4.3.1\"><span class=\"ltx_text\" id=\"S4.T2.1.4.3.1.1\" style=\"color:#000000;\">RF100</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.4.3.2\"><span class=\"ltx_text\" id=\"S4.T2.1.4.3.2.1\" style=\"color:#000000;\">829</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.1.4.3.3\"><span class=\"ltx_text\" id=\"S4.T2.1.4.3.3.1\" style=\"color:#000000;\">224K</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.1.4.3.4\"><span class=\"ltx_text\" id=\"S4.T2.1.4.3.4.1\" style=\"color:#000000;\">OD</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T2.1.5.4.1\"><span class=\"ltx_text\" id=\"S4.T2.1.5.4.1.1\" style=\"color:#000000;\">GoldG</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.1.5.4.2\"><span class=\"ltx_text\" id=\"S4.T2.1.5.4.2.1\" style=\"color:#000000;\">1.3M</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.1.5.4.3\"><span 
class=\"ltx_text\" id=\"S4.T2.1.5.4.3.1\" style=\"color:#000000;\">100K</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.1.5.4.4\"><span class=\"ltx_text\" id=\"S4.T2.1.5.4.4.1\" style=\"color:#000000;\">Ground</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"color:#000000;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Pre-train data used in large-scale OD &amp; Ground pre-training, resulting in OmDetV2.</figcaption>\n</figure>",
80
+ "capture": "Table 2: Pre-train data used in large-scale OD & Ground pre-training, resulting in OmDetV2."
81
+ },
82
+ "3": {
83
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.SS2.2\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;color:#000000;\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Baseline models and their training setup.</figcaption><div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.SS2.2.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.SS2.2.3.1.1.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.1.1.1.1\" style=\"font-size:90%;color:#000000;\">Models</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.1.1.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.1.1.2.1\" style=\"font-size:90%;color:#000000;\">Fusion</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.1.1.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.1.1.3.1\" style=\"font-size:90%;color:#000000;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.1.1.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.1.1.4.1\" style=\"font-size:90%;color:#000000;\">#Param</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.1.1.5\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.1.1.5.1\" style=\"font-size:90%;color:#000000;\">Pretrain Data</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.SS2.2.3.2.2.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.2.2.1.1\" style=\"font-size:90%;color:#000000;\">DyHead-T</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.2.2.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.2.2.2.1\" style=\"font-size:90%;color:#000000;\">-</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.2.2.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.2.2.3.1\" style=\"font-size:90%;color:#000000;\">Swin-T</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.2.2.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.2.2.4.1\" style=\"font-size:90%;color:#000000;\">30M</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.2.2.5\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.2.2.5.1\" style=\"font-size:90%;color:#000000;\">-</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.SS2.2.3.3.3.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.3.3.1.1\" style=\"font-size:90%;color:#000000;\">DINO-T</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.3.3.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.3.3.2.1\" style=\"font-size:90%;color:#000000;\">-</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.3.3.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.3.3.3.1\" style=\"font-size:90%;color:#000000;\">Swin-T</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.3.3.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.3.3.4.1\" style=\"font-size:90%;color:#000000;\">50M</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.3.3.5\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.3.3.5.1\" style=\"font-size:90%;color:#000000;\">O365</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.SS2.2.3.4.4.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.4.4.1.1\" 
style=\"font-size:90%;color:#000000;\">MDETR</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.4.4.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.4.4.2.1\" style=\"font-size:90%;color:#000000;\">Deep</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.4.4.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.4.4.3.1\" style=\"font-size:90%;color:#000000;\">RN101</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.4.4.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.4.4.4.1\" style=\"font-size:90%;color:#000000;\">185M</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.4.4.5\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.4.4.5.1\" style=\"font-size:90%;color:#000000;\">GoldG</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.SS2.2.3.5.5.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.5.5.1.1\" style=\"font-size:90%;color:#000000;\">Detic-T</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.5.5.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.5.5.2.1\" style=\"font-size:90%;color:#000000;\">Shallow</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.5.5.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.5.5.3.1\" style=\"font-size:90%;color:#000000;\">ConvX-T</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.5.5.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.5.5.4.1\" style=\"font-size:90%;color:#000000;\">138M</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.5.5.5\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.5.5.5.1\" style=\"font-size:90%;color:#000000;\">COCO,LVIS,IN-21K</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.SS2.2.3.6.6.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.6.6.1.1\" style=\"font-size:90%;color:#000000;\">GLIP-T</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.6.6.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.6.6.2.1\" style=\"font-size:90%;color:#000000;\">Deep</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.6.6.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.6.6.3.1\" style=\"font-size:90%;color:#000000;\">Swin-T</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.6.6.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.6.6.4.1\" style=\"font-size:90%;color:#000000;\">231M</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.6.6.5\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.6.6.5.1\" style=\"font-size:90%;color:#000000;\">O365,GoldG,Cap4M</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.SS2.2.3.7.7.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.7.7.1.1\" style=\"font-size:90%;color:#000000;\">OmDetV1-T</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.7.7.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.7.7.2.1\" style=\"font-size:90%;color:#000000;\">Deep Latent</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.7.7.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.7.7.3.1\" style=\"font-size:90%;color:#000000;\">ConvX-T</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.7.7.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.7.7.4.1\" style=\"font-size:90%;color:#000000;\">180M</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" 
id=\"S4.SS2.2.3.7.7.5\">\n<span class=\"ltx_text\" id=\"S4.SS2.2.3.7.7.5.1\" style=\"font-size:90%;color:#000000;\">COCO,LVIS,O365,</span><span class=\"ltx_text\" id=\"S4.SS2.2.3.7.7.5.2\" style=\"font-size:90%;color:#000000;\">\nPhraseCut, RF100</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.SS2.2.3.8.8.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.8.8.1.1\" style=\"font-size:90%;color:#000000;\">OmDetV1-B</span></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.8.8.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.8.8.2.1\" style=\"font-size:90%;color:#000000;\">Deep Latent</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.8.8.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.8.8.3.1\" style=\"font-size:90%;color:#000000;\">ConvX-B</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.SS2.2.3.8.8.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.8.8.4.1\" style=\"font-size:90%;color:#000000;\">240M</span></td>\n<td class=\"ltx_td\" id=\"S4.SS2.2.3.8.8.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.SS2.2.3.9.9.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.9.9.1.1\" style=\"font-size:90%;color:#000000;\">OmDetV2-T</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.9.9.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.9.9.2.1\" style=\"font-size:90%;color:#000000;\">Deep Latent</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.9.9.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.9.9.3.1\" style=\"font-size:90%;color:#000000;\">ConvX-T</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.9.9.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.9.9.4.1\" style=\"font-size:90%;color:#000000;\">180M</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.SS2.2.3.9.9.5\">\n<span class=\"ltx_text\" id=\"S4.SS2.2.3.9.9.5.1\" style=\"font-size:90%;color:#000000;\">O365, GoldG</span><span class=\"ltx_text\" id=\"S4.SS2.2.3.9.9.5.2\" style=\"font-size:90%;color:#000000;\">\nPhraseCut, RF100</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS2.2.3.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.SS2.2.3.10.10.1\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.10.10.1.1\" style=\"font-size:90%;color:#000000;\">OmDetV2-B</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.SS2.2.3.10.10.2\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.10.10.2.1\" style=\"font-size:90%;color:#000000;\">Deep Latent</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.SS2.2.3.10.10.3\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.10.10.3.1\" style=\"font-size:90%;color:#000000;\">ConvX-B</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.SS2.2.3.10.10.4\"><span class=\"ltx_text\" id=\"S4.SS2.2.3.10.10.4.1\" style=\"font-size:90%;color:#000000;\">240M</span></td>\n<td class=\"ltx_td ltx_border_b\" id=\"S4.SS2.2.3.10.10.5\"></td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S4.SS2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS2.2.2.2\" style=\"font-size:90%;color:#000000;\">Model Training:<span class=\"ltx_text ltx_font_medium\" id=\"S4.SS2.2.2.2.2\"> For OmDet models, the initial learning rate is 5e-5 and it decays at 70 and 90 of total iteration steps by 0.1. 
ConvNeXt Base backbone is used with a 6-layer MDN head. The batch size is 40 and the maximum number of detections per image is 300 and K is set to 80. All of the proposed models are pre-trained for 36 epochs using NVIDIA A100 GPU cluster and then fine-tuned on the downstream data.</span></span></p>\n</div>\n</div>\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;color:#000000;\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Baseline models and their training setup.</figcaption>\n</figure>",
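The training recipe quoted in Table 3 can be written as a one-line schedule. The sketch below is illustrative rather than the authors' code; it assumes that "decays at 70 and 90 of total iteration steps by 0.1" means a step decay by a factor of 0.1 at 70% and 90% of the step budget (the percent signs appear to have been lost in extraction).

def omdet_step_lr(step, total_steps, base_lr=5e-5, gamma=0.1,
                  milestones=(0.7, 0.9)):
    # Multiply the learning rate by `gamma` once the current step passes
    # each milestone fraction of the total iteration budget.
    lr = base_lr
    for m in milestones:
        if step >= m * total_steps:
            lr *= gamma
    return lr

# Example: omdet_step_lr(80, 100) -> 5e-06, omdet_step_lr(95, 100) -> 5e-07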
84
+ "capture": "Table 3: Baseline models and their training setup."
85
+ }
86
+ },
87
+ "image_paths": {
88
+ "1": {
89
+ "figure_path": "2209.05946v2_figure_1.png",
90
+ "caption": "Figure 1: Overview of OmDet Architecture. The proposed Multimodal Detection Network iteratively fuses vision and language features into latent queries for object detection.",
91
+ "url": "http://arxiv.org/html/2209.05946v2/extracted/5430798/figures/overall.png"
92
+ },
93
+ "2": {
94
+ "figure_path": "2209.05946v2_figure_2.png",
95
+ "caption": "Figure 2: Network architecture for the Multimodal Detection Network (MDN), simplified here for illustration purposes.",
96
+ "url": "http://arxiv.org/html/2209.05946v2/extracted/5430798/figures/mdn.png"
97
+ },
98
+ "3": {
99
+ "figure_path": "2209.05946v2_figure_3.png",
100
+ "caption": "Figure 3: Comparison with other frameworks. (a) Shallow fusion that only utilizes text information for object classification.\n(b) Deep fusion that fuses visual and text in the backbone before entering the object detection head.\n(c) Deep latent fusion (ours) utilizes latent queries to fuse multimodal information, enabling adaption to any query-based OD architecture.",
101
+ "url": "http://arxiv.org/html/2209.05946v2/extracted/5430798/figures/diff.png"
102
+ }
103
+ },
104
+ "validation": true,
105
+ "references": [],
106
+ "url": "http://arxiv.org/html/2209.05946v2"
107
+ }
20240225/2210.10544v3.json ADDED
@@ -0,0 +1,159 @@
1
+ {
2
+ "title": "Subtractive random forests Luc Devroye is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) G. Lugosi acknowledges the support of Ayudas Fundaci\u00f3n BBVA a Proyectos de Investigaci\u00f3n Cient\u00edfica 2021 and the Spanish Ministry of Economy and Competitiveness grant PID2022-138268NB-I00, financed by MCIN/AEI/10.13039/501100011033, FSE+MTM2015-67304-P, and FEDER, EU.",
3
+ "abstract": "Motivated by online recommendation systems, we study a family of\nrandom forests. The vertices of the forest are labeled by integers.\nEach non-positive integer is the root of a tree. Vertices\nlabeled by positive integers are attached sequentially such\nthat the parent of vertex is , where the are i.i.d. random variables taking values in .\nWe study several characteristics of the resulting random forest. In\nparticular, we establish\nbounds for the expected tree sizes, the number of trees in the\nforest, the number of leaves, the maximum degree, and the height of\nthe forest.\nWe show that for all distributions of the , the forest\ncontains at most one infinite tree, almost surely.\nIf , then there\nis a unique infinite tree and the total size of the remaining trees is\nfinite, with finite expected value if .\nIf then almost surely all trees are finite.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In some online recommendation systems a user receives recommendations\nof certain topics that are selected sequentially, based on the past\ninterest of the user. At each time instance, the system chooses a\ntopic by selecting a random time length, subtracts this length from\nthe current date and recommends the same\ntopic that was recommended in the past at that time. Initially there is an\ninfinite pool of topics. The random time lengths are assumed to be\nindependent and identically distributed.\nThe goal of this paper is to study the long-term behavior of such\nrecommendation systems. We suggest a model for such a system\nthat allows us to understand many of the most important properties.\nFor example, we show that if the expected subtracted time length has\nfinite expectation, then, after a random time, the system will\nrecommend the same topic forever. When the expectation is infinite,\nall topics are recommended only a finite number of times.\nThe system is best understood by studying properties of random\nforests that we coin subtractive random forest (SuRF).\nEvery tree in the forest corresponds to a topic and vertices are\nattached sequentially, following a subtractive attachment rule.\nTo define the mathematical model,\nwe consider sequential\nrandom coloring of the positive integers as follows.\nLet be independent, identically distributed random variables, taking values\nin the set of positive integers . Define for all nonpositive integers\n.\nWe assign colors to\nthe positive integers by the recursion\nThis process naturally defines a random forest whose vertex set is .\nEach is the root of a tree in the forest.\nThe tree rooted at consists of the vertices corresponding to all such that .\nMoreover, there is an edge between vertices if and only if .\nFigure 1 ###reference_###.\nIn other words, trees of the forest are obtained by sequentially\nattaching vertices corresponding to the positive\nintegers. Denote the tree rooted at at time\n by (i.e., the tree rooted at containing vertices\nwith index at most ).\nInitially, all trees of the forest contain a single vertex: .\nAt time , vertex is added to the tree rooted at such that\n attaches by an edge to vertex . All other trees remain unchanged,\nthat is, for all .\nDefine as the random (possibly infinite) tree\nrooted at obtained at the \u201cend\u201d of the random attachment process.\nWe study the behavior of the resulting forest. The following random variables are of\nparticular interest :\nIntroduce the notation\nfor , where is a random variable distributed as the\n.\nAnother key characteristic of the distribution of is\nNote that is nondecreasing in and is bounded if and\nonly if .\nFinally, let denote the probability that vertex belongs to the tree rooted at .\nThen satisfies the recursion\nfor , with .\n###figure_1### The paper is organized as follows. In Section 2 ###reference_### we\nstudy whether the trees of the forest are finite or infinite. We show\nthat it is the finiteness of the expectation of that characterizes the behavior of\nthe forest in this respect.\nIn particular, if , then,\nalmost surely,\nall trees of the forest are finite. On the other hand, if\n, then the forest has a unique infinite tree and the\ntotal number of non-root vertices in finite trees is finite almost surely.\nIn Section 3 ###reference_### the expected size of the trees \nis studied at time . 
It is shown that when has full\nsupport, the expected size of\neach tree at time of the forest tends to infinity, regardless\nof the distribution. The expected tree sizes are sublinear in if and only if .\nWe also study various parameters of the random trees of the forest. In\nSections 5 ###reference_###, 6 ###reference_###, and 7 ###reference_###\nwe derive results on the number of leaves, vertex degrees, and the height\nof the forest, respectively."
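To make the attachment rule concrete, here is a minimal Python sketch of the process; it is ours, not the authors' code. It assumes, consistently with the abstract, that vertex i attaches to i - Z_i and inherits the root label ("color") of its parent, and that the extraction-damaged recursion for p_i, the probability that vertex i belongs to the tree rooted at 0, reads p_i = sum_j P(Z = j) p_{i-j} with p_0 = 1 and p_k = 0 for k < 0.

import random
from collections import Counter

def simulate_forest(n, sample_z):
    # color[i] = root label of the tree containing vertex i; the roots are
    # the non-positive integers, and vertex i attaches to i - Z_i.
    color = {}
    for i in range(1, n + 1):
        parent = i - sample_z()
        color[i] = parent if parent <= 0 else color[parent]
    return color

def p_in_tree_zero(n, pmf):
    # pmf[j] = P(Z = j); renewal-style recursion for p_i = P(vertex i in T_0).
    p = {0: 1.0}
    for i in range(1, n + 1):
        p[i] = sum(q * p[i - j] for j, q in pmf.items() if j <= i)
    return p

def sample_geometric(p=0.5):
    # P(Z = k) = p * (1 - p)**(k - 1), so E[Z] = 1/p < infinity (Theorem 1 regime).
    k = 1
    while random.random() >= p:
        k += 1
    return k

def sample_heavy():
    # P(Z >= k) = 1/k, hence E[Z] = infinity (Theorem 3 regime).
    return int(1.0 / (1.0 - random.random()))

color = simulate_forest(10_000, sample_geometric)
sizes = Counter(color.values())
print(len(sizes), "trees; largest:", sizes.most_common(1))

In the E[Z]-finite run above one tree quickly dominates (the color sequence freezes after a random index), whereas with sample_heavy the simulation keeps opening new small trees, matching the dichotomy described in Section 2.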
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related work",
15
+ "text": "The random subtractive process studied here was examined\nindependently, in quite different contexts,\nby\nHammond and Sheffield [9 ###reference_b9###]\nand\nBaccelli and Sodre [3 ###reference_b3###],\nBaccelli, Haji-Mirsadeghi, and Khezeli [2 ###reference_b2###],\nand\nBaccelli, Haji-Mirsadeghi, and Khaniha [1 ###reference_b1###].\nThese papers consider an extension of the\nancestral lineage process to the set of integers defined as follows: let\n be i.i.d. random variables taking positive\ninteger values. This naturally defines a random graph with vertex set \nsuch that vertices with are connected by an edge\nif and only if .\nIf we define the graph as the\nsubgraph of obtained by removing all edges for which\n, then is exactly the subtractive\nrandom forest studied in this paper.\nIt is shown in [9 ###reference_b9###] \u2014 and also in [1 ###reference_b1###] \u2014\nthat if , then almost surely has a unique\nconnected component, whereas if , then\nalmost surely has infinitely many\nconnected components. Hammond and Sheffield are only interested in the latter (extremely heavy-tailed) case.\nThey use the resulting coloring of the integers to define a random walk that converges to fractional Brownian motion.\nSee also Igelbrink and Wakolbinger [10 ###reference_b10###] for further results\non the urn model of Hammond and Sheffield.\nThe paper of Baccelli and Sodre [3 ###reference_b3###] considers the case when\n. They show that in this case the graph has a\nunique doubly infinite path. This implies that the\nsubtractive random forest contains a unique infinite tree. This fact is\nalso implied by Theorem 1 ###reference_orem1### below. The fact that\nall trees of the forest become extinct when (part (ii)\nof our Theorem 3 ###reference_orem3###) is implicit in Proposition 4.8 of\n[1 ###reference_b1###]. In that paper, the graph is referred to as\na Renewal Eternal Family Forest (or Renewal Eternal\nFamily Tree when it has a single connected component).\nThe long-range seed bank model of Blath, Jochen, Gonz\u00e1lez, Kurt, and Spano [4 ###reference_b4###] is also\nbased on a similar subtractive process.\nAnother closely related model is studied by Chierichetti, Kumar, and Tomkins [6 ###reference_b6###]. In their model, the\nnonnegative integers are colored by a finite number of colors and the subtractive process is not stationary. Given a sequence of\npositive \u201cweights\u201d , the colors are assigned to the positive integers sequentially such that the color\nof is the same as the color of where the distribution of is given by .\n(The process is initialized by assigning fixed colors to the first few positive integers.)\nChierichetti, Kumar, and Tomkins are mostly interested in the existence of the limiting empirical distribution of the colors."
16
+ },
17
+ {
18
+ "section_id": "2",
19
+ "parent_section_id": null,
20
+ "section_name": "Survival and extinction",
21
+ "text": "This section is dedicated to the question whether the trees of the\nsubtractive random forest are finite or infinite. The main results\nshow a sharp contrast in the behavior of the limiting random forest\ndepending on the tail behavior of the random variable .\nWhen has a light tail such that , then a single\ntree survives and the total number of non-root vertices of all the remaining trees is finite, almost\nsurely.\nThis is in sharp contrast to what happens when is heavy-tailed:\nWhen , then all trees become extinct, that is, every\ntree in the forest is finite."
22
+ },
23
+ {
24
+ "section_id": "2.1",
25
+ "parent_section_id": "2",
26
+ "section_name": ": a single infinite tree",
27
+ "text": "First we consider the case when .\nWe show that, almost surely, the forest contains a\nsingle infinite tree.\nMoreover, the\ntotal number of non-root vertices in all other trees is an almost surely finite random variable.\nIn other words, the sequence of \u201ccolors\u201d becomes constant\nafter a random index.\nLet and assume that .\nThen there exists a positive random variable \nwith and a (random) index such that for all ,\nwith probability one.\nProof. \nDefine a\nMarkov chain with state space by\nthe recursion and, for ,\nThus, defines the length of a block of consecutive vertices such that each vertex in the\nblock (apart from the first one) is linked to a previous vertex in the\nsame block. In particular, all vertices in the block belong to the\nsame tree.\nNote first that for any ,\nSince the events are nested, by continuity of measure we\nhave that\nHence, with positive probability, for all .\n(Note that is positive by assumption.)\nSince\n is a Markov chain, this implies that,\nwith probability one, the set is finite, which implies\nthe theorem.\nWe may take as the (random) index after which\nthe sequence is a constant.\nNote that the assumption may be somewhat weakened. However,\nsome condition is necessary to avoid periodicity. For example, if the distribution\nof is concentrated on the set of even integers, then the assertion of Theorem 1 ###reference_orem1###\ncannot hold.\nThe next result shows that the random index has a finite\nexpectation if and only if has a finite second moment.\nLet and assume that . Consider random index\n defined in the proof of Theorem 1 ###reference_orem1###.\nThen if and only if .\nIn particular, if , then the total number of\nvertices outside of the unique infinite tree has finite expectation.\nProof. \nConsider the Markov chain defined in the proof of Theorem\n1 ###reference_orem1###. For , let denote the number of times the Markov chain visits state\n. The key observation is that we may write\nSince , this implies\nNext, notice that , , and similarly,\n.\n\nBy convention, we write .\n\nIt follows from (2.1 ###reference_###) that is stochastically dominated by a geometric\nrandom variable and therefore .\nThus,\nAs noted in the proof of Theorem 1 ###reference_orem1###, for all\n,\nwhere is a\npositive constant,\nand therefore\nSince , the theorem follows."
28
+ },
29
+ {
30
+ "section_id": "2.2",
31
+ "parent_section_id": "2",
32
+ "section_name": ": extinction of all trees",
33
+ "text": "In this section we show that when has infinite expectation, then every tree of the forest becomes\nextinct, almost surely. In other words, with probability one, there is\nno infinite tree in the random forest. This is in sharp contrast with\nthe case when , studied in Section 2.1 ###reference_###.\nRecall that for , denotes the size of tree \nrooted at vertex .\nA set of vertices \nforms a maximal infinite path\nif , for all , ,\nand .\nIf , then, with probability one, there exists a\nunique integer such that and\nthe forest contains a unique maximal infinite path. Moreover,\nIf for all and , then\nProof. \nWe naturally extend the notation to positive integers \nso that is the subtree of the random forest rooted at\n. Similarly, denotes the number of vertices in this\nsubtree.\nIn Proposition 6 ###reference_position6### below we show that, regardless\nof the distribution of , there is no vertex of infinite degree,\nalmost surely.\nThis implies that the probability that the tree rooted at is infinite\nequals\nwhere we used the union bound\nand the fact that the events and are\nindependent since the latter only depends on the random variables .\nSince for all ,\nthe right-hand side of (2.2 ###reference_###) equals , that is, the inequality in\n(2.2 ###reference_###) cannot be strict. This means that the events\n for are disjoint (up\nto a zero-measure set).\nIn particular, almost surely, there are no two maximal infinite paths\nmeeting at vertex .\nBy countable additivity, this also implies that\nIn particular, with probability one, all maximal infinite paths in\nthe forest are disjoint.\nSimilarly to (2.2 ###reference_###), for all ,\nHence, the expected number of trees in the forest that contain\ninfinitely many vertices equals\nIf , then by Theorem 1 ###reference_orem1###, the\nexpectation on the left-hand side equals one.\nThis implies part (i) of Theorem 3 ###reference_orem3###.\nIt remains to prove part (ii), so assume that .\nSuppose first that the left-hand side of (2.3 ###reference_###) is finite.\nThen we must have .\nBut then for all ,\nwhich implies the statement.\nFinally, assume that . This implies that with positive probability, there\nare at least two infinite trees in the forest. However, as we show\nbelow, almost surely there is at most one infinite tree in the forest.\nHence, this case is impossible, completing the proof.\nIt remains to prove that for any ,\nFor , denote by the event that is a vertex in connected to \nby an edge (i.e., is a level node in the tree ). Then by the union bound,\nThe key observation is that for all and ,\nand therefore\nHowever, as shown above, each term of the sum on the right-hand side equals zero, which concludes the proof."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Expected tree sizes",
39
+ "text": "In this section we study the expected size of the trees of the random\nforest. In particular, we show that in all cases (if has full\nsupport), the expected size of\neach tree at time of the forest converges to infinity as . The rate of growth is sublinear if and only if .\nDenote the expected\nsize of the tree rooted at by .\n(expected tree sizes.)\nFor every , the expected size of the tree\nrooted at \nsatisfies\nHence, for all distributions of , we have .\nThe sequence is subadditive, that\nis, for all , \n(where we define ).\nFor every ,\nIf , then\nAlso, for all distributions of and for all \nand ,\nProof. \nFor , let denote the number of vertices at path distance \nin the tree rooted at .\nThen\nHence, , proving\n for the tree rooted at .\nIn order to relate expected tree sizes rooted at different vertices\n, we may consider subtrees rooted at vertices .\nTo this end, let denote the subtree of the forest rooted\nat at time and let be its size.\nThen the size of the tree rooted at \nsatisfies\nNoting that is independent of and that\n has the same distribution as , we obtain\nthe identity\nLet be the least positive integer such that is strictly positive.\nThe identity above implies that , and\ntherefore for all ,\nproving the first assertion of the theorem.\nUsing the fact that for all ,\nwe obtain (3.2 ###reference_###).\nTaking in the equality above, we obtain the following recursion\nfor the expected size of the tree rooted at , at time :\nWe may use the reursive formula to prove subadditivity of\nthe sequence . We proceed by induction.\n holds trivially for all when . Let . Suppose now that the inequality holds for all \nand . Then by (3.3 ###reference_###),\nproving .\nNext we show that if then .\nTo this end, observe that by (3.3 ###reference_###), we have\nThus,\nHence,\nIt remains to prove (3.1 ###reference_###).\n(Note that (3.1 ###reference_###) implies that is\nbounded away from zero for all and therefore\nif for some \nthen .)\nTo this end, let\nbe the generating function of the sequence , where \nis a complex variable.\nUsing the recursion (3.3 ###reference_###), we see that\nwhere is the generating function of\nthe sequence .\nThus, we have\nRecall that we assume here that . Since when\n, we have\n(3.1 ###reference_###) now follows from Corollary VI.1 of Flajolet and\nSedgewick [8 ###reference_b8###].\n(profile of the -tree.)\nNote that we proved in passing that, regardless of the distribution,\nfor all , the number of vertices in the\ntree that are at path distance from the root satisfies .\nThe\nsequence is often called the profile of the tree\n.\n(expected vs. actual size.)\nWhile Proposition 1 ###reference_position1### summarizes the properties of the expected\ntree sizes , it is worth emphasizing that the random variables\n behave very differently. For example, when , then we know from\nTheorem 3 ###reference_orem3### that for each , ,\nwhile, by Proposition 1 ###reference_position1###, . Also note that, for all distributions of ,\nfor each , is strictly positive and therefore does\nnot concentrate."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "The number of trees of the forest",
45
+ "text": "In this section we study the number\n\nof trees in the random forest that have at least one vertex \nattached to the root . In the motivating topic recommendation problem,\nthis random variable describes the number of topics that are\nrecommended by time .\nWe show that the expected number of trees goes to infinity as if and only\nif . Moreover, in probability, where\n.\nNote that it follows from Theorem 3 ###reference_orem3### that if , then\n almost surely.\nIn order to understand the behavior of , we first study the\nrandom variable\nNote that when , then vertex connects directly to the\nroot .\nHence, is the number of vertices in the forest at depth\n (i.e., at graph distance from the root of the tree containing the vertex).\nEquivalently, is the sum\nof the degrees of the roots of all trees in the forest at time .\n(number of trees.)\nThe random variables and satisfy the following:\n;\nIf , then ;\nconverges, in distribution, to a standard normal random variable.\nand if , then\n.\nIf , then in probability.\nFor all ,\nand\nProof. \nNote that is a sum of independent Bernoulli random variables\nand\nTo prove (ii), we may use Lyapunov\u2019s central limit theorem.\nIndeed,\nIf , then . (This simply\nfollows from the fact that .)\nIn order to use Lyapunov\u2019s central limit theorem, it suffices that\nThis follows from\nIn order to prove (iii), observe that for each ,\nand therefore\nwhere the last assertion follows from the fact that as \nand that when .\nPart (iv) simply follows from (ii), (iii), and Markov\u2019s inequality. Indeed,\n and for every ,\nThe exponential inequalities of (v) follow from the fact that the\ncollection of indicator random variables\n is negatively associated\n(Dubhashi and Ranjan [7 ###reference_b7###, Proposition 11]).\nThis implies that the collection of indicators\nis also negatively associated ([7 ###reference_b7###, Proposition 7]).\nHence, by [7 ###reference_b7###, Proposition 5], the tail probabilities of\nthe sum satisfy\nthe Chernoff bounds for the corresponding sum of independent random\nvariables.\nThe inequalities of (v) are two well-known examples of the Chernoff\nbound (see, e.g., [5 ###reference_b5###])."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Number of leaves",
51
+ "text": "Let denote the number of leaves of the tree rooted at , at\ntime . That is, is the number of vertices such\nthat and no vertex is attached to it.\nRecall that is the probability that vertex \nbelongs to the tree rooted at and \nis the expected size of the tree .\nThe following proposition shows that the expected number of leaves is\nproportional to the expected number of vertices in the tree.\n(number of leaves.)\nDenote .\nIf , then there exists a constant such that\nProof. \nLet .\nSince the event is independent of the event that\nno vertex is attached to , we may write\nThe sequence is monotone decreasing and,\nusing that for ,\nThus, there exists such that for all\n, there exists such\nthat whenever\n.\nBut then\nTo see why , note that\n\n(expected vs. actual random number of leaves.)\nJust like the size of the trees, the number of leaves is\nnot concentrated around its expectation. Indeed, when ,\n is an almost surely bounded sequence of random variables, whereas\n by Propositions 1 ###reference_position1### and 3 ###reference_position3###.\n(number of leaves when )\nProposition 1 ###reference_position1### is only concerned with the case .\nWhen has finite expectation, then the number of leaves of the tree rooted at \ndepends on whether the tree survives or not. Recall that the events\n\nand \nboth have positive probability.\nIt is easy to see that, conditioned on the event , the ratio almost surely converges to\n. On the other hand, conditioned on the event ,\n converges to a nontrivial random variable taking values in ."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "Degrees",
57
+ "text": "The outdegree of a vertex is the number of vertices\nattached to it at time , that is,\nWe also write\nfor the degree of vertex in the random forest at the end of the\nattachment process.\nNote that for all root vertices , and ,\nwhile for all other vertices ,\n and .\nFirst we show that the degrees among all root vertices is a\ntight sequence of random variables under general conditions, with the possible\nexception of some extremely heavy-tailed distributions.\n(maximum root degree.)\nIf the distribution of is such that there exists\n such that , then\nthe root degrees form a tight sequence of random\nvariables. In particular, for all , we have\nAs an example, consider a distribution with polynomially decaying tail\nsuch that for some .\nThen , and then for any , we have\n. However, if decreases much\nslower, for example, if , then the\nproposition does not guarantee tightness of the root degrees.\nProof. We have\nwhich proves the claim.\nNext we show that the maximum degree of any vertex grows at\nmost as the maximum of independent Poisson random variables\nthat is well known (and easily seen) to grow as .\n(maximum degree.)\nFor every , with probability tending to ,\nProof. \nThe proof once again follows from a simple application of the Chernoff\nbound for sums of independent Bernoulli random variables: for any ,\nwhich converges to if \nfor any fixed .\n(all degrees are finite.)\nWith probability , for all .\nProof. \nBounding as in the proof of Proposition\n4 ###reference_position4###, we see that, for every \nand ,\nHence, by the Borel-Cantelli lemma,\n for all\nbut finitely many values of , almost surely. This implies that,\nalmost surely, \nfor all .\nSimilarly, by taking (say) in (6.1 ###reference_###), it follows from the Borel-Cantelli lemma\nthat, almost surely, for all\nbut finitely many values of . This implies that \nfor all ,\nwith probability one.\n(asymptotic distribution of the out-degree.)\nAs argued above, the asymptotic degree of vertex may be\nrepresented as a sum of independent Bernoulli random variables\nFor example, for all , the are discrete random variables with the same distribution,\nsatisfying and ."
58
+ },
59
+ {
60
+ "section_id": "7",
61
+ "parent_section_id": null,
62
+ "section_name": "The height of the random forest",
63
+ "text": "In this section we study the expected height of the random forest.\nThe height\n of the forest, at time , is the length of the longest\npath of any vertex to the root of its tree.\nIn Proposition 7 ###reference_position7### we derive an upper bound for .\nThe upper bound\nimplies that the expected height is sublinear whenever\n for some .\nIn Proposition 8 ###reference_position8### we show that the expected height \nof the tree rooted at vertex goes to infinity, regardless of the distribution of .\nOf course, this implies that . As a corollary, we\nalso show that for all distributions, almost\nsurely. This is to be contrasted with the fact that when , is almost surely bounded (just like the height of\nany tree in the forest).\n(upper bound for the expected height of the forest.)\nFor all distributions of , we have\nProof. \nThe path length of a vertex to the root of its tree exceeds if\nand only if\nThus, if are i.i.d. with the same distribution as\n,\nThis implies that\nUsing the fact that for , we obtain the\nannounced inequality.\n(lower bound for the expected height of the forest.)\nFor all distributions of , the expected height of the tree rooted at vertex \nsatisfies\nProof. \nSince is an increasing sequence of random variables, we may define\n (that may be infinite). By the monotone convergence\ntheorem, it suffices to prove that , or equivalently, that\nDenote by the event that vertex is connected to vertex via a path\nof length , that is,\nand define .\nIntroducing the random variable\nnote that\nIn order to derive a lower bound for , note first that\nwhere are i.i.d. with the same distribution as\n.\nBy the Paley-Zygmund inequality,\nIn the argument below we show that . Substituting\ninto the inequality above, we obtain\nconcluding the proof of .\nHence, it remains to derive the announced upper bound for the second moment of .\nFirst note that for any and\nThen\nas desired.\n(almost sure lower bound for the height of the forest.)\nFor all distributions of , almost surely.\nProof. \nFor , the statement follows from Theorem 1 ###reference_orem1###\nand Proposition 6 ###reference_position6###\nso we may assume that\n has infinite expectation.\nSince by Proposition 8 ###reference_position8### the\nexpected height of the tree rooted at has infinite expectation, it\nfollows that the distribution of has unbounded support.\nSince by Theorem 3 ###reference_orem3### the tree rooted at becomes\nextinct almost surely, the random variable denoting the index of\nthe last vertex that belongs to is almost surely finite. Let\n denote the height of the -tree. Now we may define\n such that is the last vertex that belongs to the\ntree , and let denote the height of this tree.\nBy continuing recursively, we obtain a sequence of\ni.i.d. random variables distributed as . Moreover,\n, proving the statement.\nAcknowledgements. We are grateful to an anonymous referee for suggestions that\nlead to a much improved version of the paper."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {},
68
+ "image_paths": {
69
+ "1": {
70
+ "figure_path": "2210.10544v3_figure_1.png",
71
+ "caption": "Figure 1: An example of the subtractive attachment process (up to time\n5555) and the\nresulting forest. Here Z1=3subscript\ud835\udc4d13Z_{1}=3italic_Z start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 3, Z2=2subscript\ud835\udc4d22Z_{2}=2italic_Z start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 2, Z3=1subscript\ud835\udc4d31Z_{3}=1italic_Z start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT = 1, Z4=6subscript\ud835\udc4d46Z_{4}=6italic_Z start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT = 6, and Z5=3subscript\ud835\udc4d53Z_{5}=3italic_Z start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT = 3.",
72
+ "url": "http://arxiv.org/html/2210.10544v3/x1.png"
73
+ }
74
+ },
75
+ "validation": true,
76
+ "references": [
77
+ {
78
+ "1": {
79
+ "title": "Coupling from the past for the null recurrent Markov chain.",
80
+ "author": "Baccelli, F., Haji-Mirsadeghi, M.-O., and Khaniha, S. (2022).",
81
+ "venue": "arXiv preprint arXiv:2203.13585.",
82
+ "url": null
83
+ }
84
+ },
85
+ {
86
+ "2": {
87
+ "title": "Eternal family trees and dynamics on unimodular random graphs.",
88
+ "author": "Baccelli, F., Haji-Mirsadeghi, M.-O., and Khezeli, A. (2018).",
89
+ "venue": "Unimodularity in randomly generated graphs, 719:85\u2013127.",
90
+ "url": null
91
+ }
92
+ },
93
+ {
94
+ "3": {
95
+ "title": "Renewal processes, population dynamics, and unimodular trees.",
96
+ "author": "Baccelli, F. and Sodre, A. (2019).",
97
+ "venue": "Journal of Applied Probability, 56(2):339\u2013357.",
98
+ "url": null
99
+ }
100
+ },
101
+ {
102
+ "4": {
103
+ "title": "The ancestral process of long-range seed bank models.",
104
+ "author": "Blath, J., Gonz\u00e1lez Casanova, A., Kurt, N., and Spano, D. (2013).",
105
+ "venue": "Journal of Applied Probability, 50(3):741\u2013759.",
106
+ "url": null
107
+ }
108
+ },
109
+ {
110
+ "5": {
111
+ "title": "Concentration Inequalities: A Nonasymptotic Theory of\nIndependence.",
112
+ "author": "Boucheron, S., Lugosi, G., and Massart, P. (2013).",
113
+ "venue": "Oxford University Press.",
114
+ "url": null
115
+ }
116
+ },
117
+ {
118
+ "6": {
119
+ "title": "Asymptotic behavior of sequence models.",
120
+ "author": "Chierichetti, F., Kumar, R., and Tomkins, A. (2020).",
121
+ "venue": "In Proceedings of The Web Conference 2020, pages 2824\u20132830.",
122
+ "url": null
123
+ }
124
+ },
125
+ {
126
+ "7": {
127
+ "title": "Balls and bins: a study in negative dependence.",
128
+ "author": "Dubhashi, D. and Ranjan, D. (1998).",
129
+ "venue": "Random Structures & Algorithms, 13(2):99\u2013124.",
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "8": {
135
+ "title": "Analytic Combinatorics.",
136
+ "author": "Flajolet, P. and Sedgewick, R. (2009).",
137
+ "venue": "Cambridge University Press.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "9": {
143
+ "title": "Power law P\u00f3lya\u2019s urn and fractional Brownian motion.",
144
+ "author": "Hammond, A. and Sheffield, S. (2013).",
145
+ "venue": "Probability Theory and Related Fields, 157(3):691\u2013719.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "10": {
151
+ "title": "Asymptotic gaussianity via coalescence probabilites in the\nHammond-Sheffield urn.",
152
+ "author": "Igelbrink, J. L. and Wakolbinger, A. (2022).",
153
+ "venue": "arXiv e-prints, pages arXiv\u20132201.",
154
+ "url": null
155
+ }
156
+ }
157
+ ],
158
+ "url": "http://arxiv.org/html/2210.10544v3"
159
+ }
20240225/2211.07843v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2211.11338v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2212.11920v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2301.08807v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2302.12491v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2303.05445v4.json ADDED
@@ -0,0 +1,716 @@
1
+ {
2
+ "title": "Flooding with Absorption: An Efficient Protocol for Heterogeneous Bandits over Complex Networks",
3
+ "abstract": "Multi-armed bandits are extensively used to model sequential decision-making, making them ubiquitous in many real-life applications such as online recommender systems and wireless networking. We consider a multi-agent setting where each agent solves their own bandit instance endowed with a different set of arms. Their goal is to minimize their group regret while collaborating via some communication protocol over a given network. Previous literature on this problem only considered arm heterogeneity and networked agents separately. In this work, we introduce a setting that encompasses both features. For this novel setting, we first provide a rigorous regret analysis for a standard flooding protocol combined with the classic UCB policy. Then, to mitigate the issue of high communication costs incurred by flooding in complex networks, we propose a new protocol called Flooding with Absorption (FwA). We provide a theoretical analysis of the resulting regret bound and discuss the advantages of using FwA over flooding. Lastly, we experimentally verify on various scenarios, including dynamic networks, that FwA leads to significantly lower communication costs despite minimal regret performance loss compared to other network protocols.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Exploration-exploitation dilemmas form the basis of many real-life decision-making tasks [51 ###reference_b51###, 41 ###reference_b41###].\nIn fact, the trade-off between making a choice to either stay with a current action or explore new possibilities appears as a feature in a variety of well-known applications [6 ###reference_b6###, 52 ###reference_b52###].\nAs a result, the multi-armed bandit (MAB) problem, which is designed to reflect dilemmas of this kind, has been intensely studied in a wide range of scenarios [9 ###reference_b9###, 37 ###reference_b37###, 39 ###reference_b39###].\nIn the baseline setting, an agent must make sequential decisions by choosing from a set of possible actions (the \u201carms\u201d of the bandit). In the setting of stochastic MABs, each arm gives rewards following an unknown probability distribution [9 ###reference_b9###].\nHere, the goal is to minimize the cumulative regret over some timespan , i.e., the difference between the accumulated reward and the reward that arises from choosing only the best arm.\nTo reach this goal, the agent must balance exploring new actions and choosing already tested ones [2 ###reference_b2###, 37 ###reference_b37###, 9 ###reference_b9###].\nFor applications involving large-scale decentralized decision-making [38 ###reference_b38###], such as online advertising, search/recommender systems, and wireless channel allocation, collaborative multi-agent multi-armed bandits are a natural modeling choice [1 ###reference_b1###, 58 ###reference_b58###, 43 ###reference_b43###, 44 ###reference_b44###, 12 ###reference_b12###, 3 ###reference_b3###, 32 ###reference_b32###, 67 ###reference_b67###, 42 ###reference_b42###]. In this setting, each agent plays their own bandit instance and communicates some information to others to minimize the group regret.\nOne common assumption in the literature is that agents share the same set of arms [34 ###reference_b34###, 22 ###reference_b22###, 65 ###reference_b65###]. However, arm homogeneity does not hold in many large-scale systems (e.g., contextual recommender systems), where agents often have heterogeneous arm sets of available actions [68 ###reference_b68###, 15 ###reference_b15###].\nFor instance, in a distributed recommender system scenario, arms might correspond to the contents shown to users, such as movies, and rewards to user opinions; then, depending on one\u2019s location, the set of available movies for each agent may be different due to external constraints such as copyright issues.\nIn this case, it would be desirable for the service provider if all the individual systems with partially overlapping contents collaborate with one another to minimize the group regret.\nAnother common assumption is that agents are connected by a complete network, where agents can directly communicate with every other agent [10 ###reference_b10###, 68 ###reference_b68###].\nHowever, in real large-scale systems,\nagents are usually connected via a multi-hop communication network, where only adjacent nodes can exchange messages. To disseminate information to agents at larger distances here, agents need to forward messages. 
This is typically done by means of a flooding protocol [22 ###reference_b22###, 49 ###reference_b49###, 34 ###reference_b34###, 65 ###reference_b65###] or gossiping protocol [59 ###reference_b59###, 56 ###reference_b56###, 14 ###reference_b14###, 63 ###reference_b63###, 15 ###reference_b15###]; that is, a received message is forwarded to all neighbors or only one randomly selected neighbor, respectively.\nSuch a modeling assumption is ubiquitous in many real-life scenarios involving mobile/vehicular networks or social networks, among others.\nContributions.\nTo the best of our knowledge, the setting of collaborative, heterogeneous multi-agent multi-armed bandits communicating over a general network has not been investigated yet. Yet, this setting arises in a wide range of real-life applications, e.g. wireless channel allocation, where not all nodes on an underlying network can access the same channels. For such scenarios, one can neither assume fully connected or particularly regular communication topologies, nor homogeneous arm sets.\nDistinct from previous work on multi-agent bandits, we here aim at efficient network protocol design for this novel setting; specifically, we want to design a simple alternative to flooding that achieves low communication complexity while retaining minimal loss in regret. Note that our research objective forces us to consider the bandit instances and the communication protocol in an integrated fashion, which is in stark contrast to the approaches used both in the existing bandit and networking literature, and thus contributes to the novelty of our approach.\nTo address the significant issue of exploding communication complexity of flooding in our setting, we introduce a new lightweight communication protocol for complex networks, called Flooding with Absorption (FwA). Its design principle is inherently coupled with the given bandit instances.\nWe provide theoretical and experimental results showing that this protocol is highly communication-efficient on a wide range of complex networks, yet induces minimal regret performance loss for complex network topologies, even compared to standard flooding.\nUsing FwA can also help avoid heavy individual link congestion in complex networks.\nAn important practical advantage of our protocol is that it is fully agnostic to the network structure, and can therefore be deployed on dynamically changing networks without any need for fine-tuning.\n###figure_1###"
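To ground the communication-cost comparison, the following Python sketch (ours; not the paper's implementation) counts point-to-point messages for hop-limited flooding on an arbitrary graph. The absorb flag encodes one plausible reading of "absorption" -- an agent that can play the reported arm itself consumes the message instead of forwarding it; the authoritative FwA rule is the one defined later in the paper.

from collections import deque

def broadcast_cost(graph, arm_sets, origin, arm, ttl, absorb=False):
    # graph: dict node -> list of neighbours; arm_sets: dict node -> set of
    # arms; ttl: hop limit. Returns the number of messages sent when
    # `origin` broadcasts one report about `arm`.
    sent = 0
    seen = {origin}
    frontier = deque([(origin, ttl)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == 0:
            continue
        for nb in graph[node]:
            sent += 1                         # one copy over this link
            if nb in seen:
                continue                      # duplicate, dropped on arrival
            seen.add(nb)
            if absorb and arm in arm_sets[nb]:
                continue                      # absorbed: used, not forwarded
            frontier.append((nb, hops - 1))
    return sent

On a path 0-1-2-3 with ttl=3, where only nodes 0 and 2 hold the arm, the absorbing variant stops the report at node 2 (3 messages), while plain flooding pushes a copy over every remaining link within the hop limit (5 messages).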
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "System Model",
+ "text": "We now describe our setting of collaborative111We remark that we do not consider any collisions [46 ###reference_b46###, 40 ###reference_b40###, 65 ###reference_b65###] where two neighbors pulling the same arm do not affect their observed rewards in any way. Rather, we focus on the collaborative setting where the agents are encouraged to cooperate with one another by sharing their own observations. heterogeneous multi-agent multi-armed stochastic bandits over a general communication network; see Figure 1 ###reference_### for an illustration.\nWe assume that there are agents connected by an undirected graph , with .\nWe denote by be the neighborhood of in not including , and by the induced subgraph for .\nAlso, for an integer , the -th order graph power of , denoted as , is defined as the graph on such that iff , where is the length of the shortest path in connecting and .\nEach agent has access to a finite set of arms with cardinality that they can pull; let be the total set of arms of cardinality .\nFollowing [39 ###reference_b39###], let be a set of -sub-Gaussian distributions, and let be a function mapping each (reward) distribution to its mean.\nEach arm is associated with an unknown reward distribution .\nFor simplicity, let .\nWe note that is independent of the agents\u2019 identities, i.e., each agent , regardless of their arm set , faces the same distribution of rewards for the same arm (whenever contains ), and receives an i.i.d. reward from this distribution upon pulling this arm.\nWe denote by the best local arm for agent that satisfies for all , and set .\nThe main challenge in the regret analysis is that even for the same arm , the suboptimality gap may be different across agents containing .\nThe execution of all agents proceeds in a sequence of synchronous rounds .\nIn each round , all agents simultaneously (i) pull some arm, (ii) compute and send a message to their neighbors, and (iii) receive and process all messages from their neighbors.\nFrom the perspective of agents, let us denote by the set of agents having action , and let be the set of agents containing as a suboptimal arm, i.e., .\nAs done in the classic work on regret minimization in collaborative multi-agent bandits [34 ###reference_b34###, 68 ###reference_b68###, 49 ###reference_b49###], our goal is to minimize the expected group regret at time horizon , , defined as:\nwhere is the agent-specific gap of arm and is the number of times agent plays the arm up to time .\nWe note that due to the two sources of heterogeneity in our setting, the benefit one receives from collaboration differs;\nfurthermore, the same arm may be optimal for some agents but suboptimal for others.\nOf course, even so, we expect that agents communicating and collaborating should lead to a speed-up (in terms of the group regret) compared to the baseline without any information sharing.\nThe question is then how much speed-up one could get from collaboration under our heterogeneous system model. An additional prominent issue for realistic applications and, in particular, complex networks concerns the communication complexity of the information-sharing protocol used. Designing communication protocols with low communication complexity, defined as the number of messages sent and forwarded, is of paramount importance to not overshadow the improvement in regret due to collaboration."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Flooding",
+ "text": "As is common in much of the previous literature, we will focus on agents that individually run the classic upper-confidence bound (UCB) policy [2 ###reference_b2###, 34 ###reference_b34###, 68 ###reference_b68###] under which each agent pulls the arm associated with the maximum of the so-called UCB index, which is the sum of empirical reward (up to time ) and an additional bonus term.\nAgents can, and should, take advantage of the distributed setting by communicating pulled arms and received rewards amongst each other over the underlying communication network.\nThis is not straightforward to implement or analyze, given that the agents\u2019 arm sets are all different, and thus, even when broadcasting over multiple hops by flooding the network is available, the speed-up gained from this is not immediate.\nIn this section, we review (and reformulate) the standard flooding protocol for our setting and discuss its properties."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Flooding",
+ "text": "Flooding (broadcasting) lets each agent send all messages to all its neighbors in every round, with the number of times the message is forwarded limited by the time-to-live (TTL) [13 ###reference_b13###, 60 ###reference_b60###, 64 ###reference_b64###, 55 ###reference_b55###].\nTo account for potential loops in the network and avoid a broadcast storm [61 ###reference_b61###, 66 ###reference_b66###], we explicitly use a sequence number-controlled flooding (SNCF) variant.\nThe pseudocode of Flooding along with each agent\u2019s UCB algorithms is presented in Algorithm 1 ###reference_thm1### with .\nFlooding proceeds as follows: In each round , each agent pulls an arm that has the highest upper confidence bound (line 8).\nNote that is the number of pulls of arm available to agent by time , and is the estimate of made by agent at time .\nIn both estimates, agent uses all observations available to by time , including the messages relayed to them.\nHaving received the corresponding reward from pulling arm , agent creates a message\nand pushes it to , its current queue of messages to be sent (line 11).\nAfter UCB has been completed, each agent sends and receives messages to and from its neighbors (lines 14-21).\nOur message consists of the following components:\n and are the arm pulled by agent and the reward received at time , respectively.\n is a hash value of the originating agent , the arm pulled, and the obtained reward that acts as a unique identifier of the message.\nOur protocol uses to control flooding by avoiding routing loops that can lead to broadcast storms and improper bias in the reward estimations (the estimation protocol is shown in line 3 of Algorithm 2 ###reference_thm2###).\nEach agent keeps track of the hash values of messages that they have seen until time via a queue222The queue operations used in the algorithm (, , ) are defined as usual [21 ###reference_b21###]. of size , denoted as .\nIf an already-seen message comes in (line 18), that message is deleted on arrival (line 22).\nThe memory length of is the worst-case space complexity that arises from keeping track of all messages from all agents for the last time steps, as all messages can be forwarded at most times.\n is the agent that last forwarded the message; if the receiver \npasses on , they replace with and forward to all neighbors except the originator (line 16).\nThis prevents messages from echoing after one hop.\n keeps track of the remaining life span of the message (TTL), which is initialized to .\nIt is decayed by every time a message is forwarded, and the message is discarded when TTL reaches . We note that is equivalent to Instant Reward Sharing (IRS) [34 ###reference_b34###], where each agent only sends its message to its neighbors, and any message containing arm that is sent to agents not containing becomes void.\nWe assume that the nodes have no knowledge of the network topology, or who their neighbors are, which is a realistic assumption in complex networks, and wireless networks.\nThis is in contrast to some of the previous works, e.g., [49 ###reference_b49###], which assumes that each agent knows its neighborhood in , which essentially bypasses any issues regarding communication complexity. Specifically, this also abstracts away any difficulties arising from delayed messages traveling along different paths."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Group Regret Analyses of Flooding",
+ "text": "The regret bound of stochastic bandits generally depends on a problem-dependent quantity [2 ###reference_b2###] that quantifies the difficulty of learning. For instance, for a single-agent multi-armed stochastic bandit, the regret bound scales as , where is the gap between the mean rewards of best arm and the suboptimal arm .\nThe intuition is that if the mean rewards are similar, one would need a much tighter confidence interval to identify the optimal arm, forcing one to pull many more times, precisely inversely proportional to its reward gap.\nThis is known to be asymptotically optimal [37 ###reference_b37###].\nIn our setting, we would expect our problem-dependent quantity to depend on both the underlying network topology and arm distribution. To see this, given a graph , we first recall some graph-theoretic quantities [7 ###reference_b7###]:\nThe clique covering number of , denoted as , is the smallest size of a partition of such that each part induces a clique.\nAny such partition (not necessarily minimum) is called a clique cover.\nThe independence number of , denoted as , is the maximum size of a subset of that induces no edges.\nWe now define our problem-dependent quantity as follows:\nwhere is the agent-specific suboptimality gap of arm , and is taken over all possible clique covers of .\nIf , we set .\nWe now present the nonasymptotic regret upper bound for Flooding:\nAlgorithm 1 ###reference_thm1### with , , , and achieves the group regret upper bound\nwhere\nThe complete proof, deferred to Appendix B ###reference_###, uses a clique covering argument and an Abel transformation.\nSee Appendix C.1 ###reference_### for a discussion of the main technical challenges when proving the regret bound for our setting, compared to previously considered settings.\nBy choosing as the minimum clique cover for each in the definition of , a simplified, asymptotic regret bound can be deduced:\nWhen ,\nwhere .\nNote that\n is the suboptimality gap introduced in Yang et al. 
[68 ###reference_b68###], where they studied the setting of heterogeneous bandits on a fully-connected network.\nWhen is a fully connected network, , and we recover their regret bound, with an improved constant in .\nSince our general setting also applies to the restricted cases considered in the previous literature, we can compare our regret bounds to existing ones.\nIn the same setting without collaboration, the group regret scales as ; thus when is considered to be constant, the regret always scales linearly in .\nCompared to this, depending on the network, the regret bound of Flooding scales with the clique covering number of subgraphs of , which is usually strictly less than , and in some cases, even sublinear.\nSimilarly, our regret bounds and our problem-dependent difficulty quantity also generalize previous literature on collaborative multi-agent multi-armed bandits.\nWhen the network is a clique, we recover the regret bound presented in [68 ###reference_b68###] with matching dependency and an improved leading coefficient333Theorem 2 of [68 ###reference_b68###] requires , and thus with proper scaling, it can be seen that our coefficient is while their coefficient is ..\nIn the homogeneous agents setting with a general network topology [34 ###reference_b34###, 49 ###reference_b49###], reduces to , where is the suboptimality gap as defined in [34 ###reference_b34###], satisfying for all .\nAs is independent of the arm , we have shown that our successfully generalizes the suboptimality gap of [34 ###reference_b34###].\nWhen , we have that as , which results in the same regret bound as in [34 ###reference_b34###].\nWhen (IRS), it can be observed that IRS and FwA coincide, yet our bound is a bit worse compared to [34 ###reference_b34###], whose bound depends on .\nWe believe that such a gap is an inherent artifact of our proof, and we leave closing this gap to future work. Also, we should remark that the difference between the two regret bounds for IRS is generally small, as the gap depends on the covering gap , which is known to be small for many classes of graphs, and zero for perfect graphs [27 ###reference_b27###]; see [28 ###reference_b28###, 53 ###reference_b53###] for some recent advances."
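Since the quantity above involves minimum clique covers, which are NP-hard to compute exactly, a practical sketch can upper-bound them by greedily coloring complements (a proper coloring of the complement of H is a clique cover of H); this mirrors the greedy approximation the authors use in Section 6.4. The function names and the exact form of the summand follow the reconstruction above and should be treated as assumptions.

```python
import networkx as nx

def greedy_clique_cover(H):
    """Clique cover of H from a greedy coloring of its complement."""
    coloring = nx.coloring.greedy_color(nx.complement(nx.Graph(H)))
    cover = {}
    for v, c in coloring.items():
        cover.setdefault(c, []).append(v)
    return list(cover.values())

def delta_flooding_upper_bound(G, gamma, agents_with_arm, gap):
    """Greedy upper bound on the reconstructed delta^Flooding above;
    gap[(v, a)] is the agent-specific suboptimality gap of arm a at v."""
    G_pow = nx.power(G, gamma)   # collaboration graph induced by flooding
    total = 0.0
    for a, agents in agents_with_arm.items():
        sub = [v for v in agents if gap[(v, a)] > 0]   # agents where a is suboptimal
        if sub:
            for C in greedy_clique_cover(G_pow.subgraph(sub)):
                total += 1.0 / min(gap[(v, a)] for v in C)
    return total
```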
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Drawbacks of Flooding",
+ "text": "Flooding leads to optimal information dissemination, and thus significantly improves the group regret.\nYet, it is very expensive in terms of communication complexity, defined as the cumulative number of messages sent by all agents [68 ###reference_b68###].\nIndeed, for , the worst-case communication complexity is , which is attained when every message created by every agent up to time is being passed around at every edge. This can quickly congest the network.\nOne na\u00efve way of controlling the communication complexity is to set the TTL, , to a low enough value. However, in our setting, the trade-off between communication complexity and group regret is not trivial due to the arm heterogeneity; for instance, IRS [34 ###reference_b34###, 49 ###reference_b49###], i.e., , has a lower message complexity but often does not result in good regret guarantees, as immediate neighbors may not share any arms. On the other hand, (uniform) gossiping algorithms for bandit problems [14 ###reference_b14###, 56 ###reference_b56###, 57 ###reference_b57###] suffer from large latencies on networks with sparse links [29 ###reference_b29###, 11 ###reference_b11###].\nIt is thus desirable to have a simple communication protocol with good regret guarantees when combined with the UCB policy (compared to Flooding) and low communication complexity. With this, we can target more complex network structures commonly found in real-life applications.\nFor such settings, we introduce a new protocol that interpolates between the communication-efficient nature of IRS and the regret of Flooding by using the intrinsic heterogeneity of the system caused by network topology and arm distributions."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "A New Efficient & Effective Protocol on Complex Networks: Flooding with Absorption",
+ "text": "In this section, we propose a new approach, which we call Flooding with Absorption (FwA) (Figure 2 ###reference_###), whose pseudocode is shown in Algorithm 1 ###reference_thm1### with . In contrast to Flooding, once a message (some copy of it to be precise) containing arm reaches an agent whose arm set includes the arm , the agent absorbs that message, i.e., does not forward it any further.\nAdditionally, as in Flooding, we retain the TTL , meaning that if a message originating at time has not found an absorbing agent until , it gets discarded.\nIt is also dropped if the message hits a \u201cdead end,\u201d i.e., a leaf node.\nThis seemingly small difference to Flooding is critical in ensuring low communication complexity, as it prevents messages from circulating for too long.\nWe note that FwA is somewhat reminiscent of the well-studied replication-based epidemic- and other controlled flooding algorithms [62 ###reference_b62###, 55 ###reference_b55###, 45 ###reference_b45###, 24 ###reference_b24###], which were designed for various networking applications, e.g., ad-hoc networks.\nOur FwA protocol distinguishes itself by using the inherent heterogeneity of agents without any explicit tuning or need for solving NP-hard combinatorial problems [45 ###reference_b45###].\nFurthermore, the goal of FwA is to disseminate information, which is generated at each timestep, to nodes that may benefit from it for its learning, not to route packets from an arbitrary source to an arbitrary destination. Despite some outward protocol similarity, it hence also differs from classic P2P systems such as Gnutella [47 ###reference_b47###, 48 ###reference_b48###], where there is no such intrinsic correlation between sender and receiver. In the bandit setting, FwA is advantageous because the sender and final receiver share the arm in question.\nEach agent must have a sufficiently large memory buffer to store the messages to be sent in the next round and previously seen message identifiers.\nAs all messages expire after rounds, this memory requirement is at most .\nAlso, we note that the communication complexity of FwA ranges from to , depending on the underlying network topology and the arm distribution.\n###figure_2###"
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Group Regret Bound of FwA",
+ "text": "As FwA is algorithmically similar to Flooding, their regret bounds are also somewhat similar.\nTo formalize this, we first consider a graph , and let be a multi-coloring with overlap allowed, i.e., it may be that for .\nLet and be such that .\nA path (of length ) is said to -free if , where\nFor and , we define the -non-blocking graph power of -th order of , denoted as , as a graph on with the edge set such that iff there exists an -free path from to in of length at most .\nDef. 4.3 ###reference_theorem3### is similar to color-avoiding percolation (CAP) in statistical physics [35 ###reference_b35###, 36 ###reference_b36###, 33 ###reference_b33###], albeit there are several differences.\nWe consider a multi-coloring , while CAP is only studied for a single color per vertex.\nAlso, CAP only considered the criticality of the connectivity, while in our case, the performance of our algorithm depends on specific graph invariants (e.g., chromatic number) of color-dependent subgraphs.\nDefining the suboptimality gap for FwA as\nwhere is over all possible clique covers of , it is easy to see that the following theorem holds:\nWith , Theorem 3.2 ###reference_theorem2### holds with replaced by .\nSimilarly, with an appropriate choice of the clique cover, we have the following simplified asymptotic regret bound:\nWith the same assumption as in Theorem 3.2 ###reference_theorem2###, we have\nwhere .\nAs is always a subgraph of , it can be easily seen that the regret upper-bound of Flooding is always better than that of FwA.\nBut as we will demonstrate later, at the price of slightly worse regret, FwA obtains significantly better communication complexity than Flooding.\nCorollary 3.3 ###reference_theorem3### and 4.6 ###reference_theorem6### imply that the gap in the asymptotic regret upper-bounds of Flooding and FwA roughly scales with , where .\nTo get some intuition, we consider two extreme cases.\nFirst, suppose that the arms are so heterogeneous that no agents of distance at most share arm .\nIn this case, we have , and .\nNow suppose that all agents have the same arm set, i.e., for all , in which case FwA is equivalent to IRS, i.e., messages do not get forwarded beyond direct neighbors.\nHence, we have that for all and , i.e., .\nThus, for small \u2019s and with large average path length [5 ###reference_b5###] between agents containing , is small."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Advantages of Flooding with Absorption",
+ "text": "We now informally argue the advantages of using Flooding with Absorption over other protocols such as Flooding or IRS when we run it on complex network topologies.\nInterpolation between IRS and Flooding.\nFwA naturally interpolates between IRS and Flooding in terms of information propagation, which is advantageous on complex networks that are not particularly regular, e.g., those with both dense and sparse regions.\nIn dense parts of the network, where many nodes share arms, FwA is closer to IRS: a message containing the shared arm and its reward gets absorbed quickly.\nOn the other hand, in regions of the network where the arm that a particular node pulled is rare, FwA acts like Flooding with , thereby ensuring that agents at a larger distance get information that is relevant.\nWe additionally note that setting the TTL to larger values in FwA will always be less costly than doing so in Flooding, as the probability of congestion is much smaller.\nComparable Regret Guarantees. As FwA acts as a mix of IRS and Flooding, its regret should be bounded by IRS (where messages get absorbed in just one step) from above, and Flooding (where messages are not absorbed until the TTL expires) from below.\nThe combination of Theorems 3.2 ###reference_theorem2### and 4.5 ###reference_theorem5### gives us an expression for the gap between the regret upper bounds of Flooding and FwA.\nFrom this, we can conclude that for the regret gap between FwA and Flooding to be small, either the graph is so sparse that the average path length [5 ###reference_b5###] between agents containing is large, or the graph is dense but the arm distribution is sparse enough such that the same property holds.\nWe emphasize that although the gap may be nonzero, the exploding communication complexity of Flooding demonstrates a clear trade-off between performance and communication complexity.\nOn the other hand, it is also expected that FwA will outperform (uniform) gossiping in terms of the regret. In fact, in networks with sparse links connecting very dense network regions, the probability that a gossiping protocol hits the sparse link before the TTL expires can be arbitrarily small.\nCommunication Efficiency.\nHaving messages absorbed by agents that can profit from its information implies that the FwA protocol completely falls back to the baseline Flooding algorithm only in the case of a network where particular arms are very rare.\nThis means that if there is a ball of radius in the network in which two agents share an arm, the communication complexity of FwA will already be lower than that of Flooding, .\nIn networks of high density and few arms, the communication complexity of FwA will be close to that of IRS, i.e., lowered by a factor . 
Moreover, due to the arm-dependent absorptions, we expect that FwA will result in much lesser number of messages sent across each individual link per round.\nHence, FwA has the advantage of being able to mitigate network overload and heavy link congestion without much overhead or the need to resort to gossiping protocols, which is particularly salient for applications such as large-scale and wireless networks.\nNo tuning requirements.\nFwA also has an important practical advantage: it has no tunable parameters beyond the TTL , which means it is close to network agnostic.\nThis is in contrast to protocols like probabilistic flooding [54 ###reference_b54###], which stops message propagation with some constant probability .\nWhile such protocols can also reduce communication complexity to a scalable degree, the \u201coptimal\u201d stopping probability is highly instance-dependent, making them quite hard to use in unknown, real-life networks, in particular, dynamically changing networks.\nFwA, on the other hand, can effectively deal with such instances, which we verify in Section 6 ###reference_###."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Regret Lower Bound",
+ "text": "We consider a decentralized444see Appendix A of [22 ###reference_b22###] for the precise measure-theoretic definition of decentralized policies. policy , where is the agent-wise policy followed by agent , possibly affected by other policies and the history.\nFor the regret lower bound, we consider a rather general class of policies satisfying the following property, which has been widely adapted in bandit literature [37 ###reference_b37###, 34 ###reference_b34###, 22 ###reference_b22###]:\nis said to be individually consistent if, for any agent and any , we have that , where .\nOne obtains the following regret lower bound:\nFor any individually consistent policy ,\nwhere is the set of -sub-Gaussian distributions with mean , and .\nWhen , we obtain:\nThe proof is immediate from the classic change-of-measure argument for cooperative multi-agent bandit setting [22 ###reference_b22###, 68 ###reference_b68###].\nNote that this asymptotic lower bound matches our asymptotic regret upper bound for both Flooding and FwA (Theorem 4.5 ###reference_theorem5###, Cor. 4.6 ###reference_theorem6###) up to some graph topology and arm distribution-dependent constants; see Appendix C.2 ###reference_### for a more detailed discussion."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Experimental Results",
+ "text": "In this section, we experimentally compare FwA to several existing algorithmic solutions.\nThe experiments were conducted on three random graph models with nodes: the Erd\u0151s-R\u00e9nyi model (ER) [23 ###reference_b23###, 26 ###reference_b26###], the Barab\u00e1si-Albert model (BA) [4 ###reference_b4###], and the stochastic block model (SBM) [31 ###reference_b31###].\nWe use the following hyperparameters when generating the random graphs: for ER, the edge probability is set to ; for BA, the preferential attachment constant is set to ; for SBM, we consider 4 clusters, each with nodes, with intracluster and intercluster edge probabilities set to and , respectively.\nWe set the total number of arms to , and the number of arms per agent to be .\nWe sample sets of size as arm sets for all the agents, uniformly at random.\nThe arm rewards follow Gaussian distributions, with the corresponding means uniformly sampled from and fixed variance .\nFor all experiments, the overall arm distribution among the agents and the reward distributions are fixed.\nWe compare the baseline UCB with no cooperation between agents (baseline), Flooding, Probabilistic Flooding (Prob. Flooding) [54 ###reference_b54###], (uniform) Gossip, IRS, and our FwA.\nFor the Gossip algorithm, we assume that each agent forwards messages to only one random neighbor at a time.\nFor Prob. Flooding, assuming that the learner has no prior knowledge of the communication network, we fix to provide a fairer comparison.\nWe again emphasize that FwA does not require any tuning of hyperparameters other than the TTL.\nAll experiments are repeated times, with time horizon .\nWe set TTL to be , as our network instances\u2019 diameters are for ER, BA, and SBM, respectively, and we want our to be strictly lower than all three values for meaningful results.\nAll codes were written in Python, and we made heavy use of the NetworkX package [30 ###reference_b30###].\n###figure_3###"
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "Baseline comparison: Group Regret and Communication Complexity",
+ "text": "We first compare the group regret and the communication complexity over time, i.e., the (cumulative) total number of messages sent, for all considered protocols.\nThe results are shown in Figure 3 ###reference_###.\nAs expected, Flooding achieves the best regret out of the tested protocols, but its communication complexity is the worst.\nDespite this, the important observation is that our FwA achieves second-best regret, arguably close to that achieved by Flooding, with a significantly reduced communication complexity.\nMoreover, when compared to Prob. Flooding, FwA exhibits a better tradeoff between regret and communication: with a similar communication complexity, FwA achieves a better regret, or at least at par (for the BA model).\nWe also experimented with Prob. Flooding of other stopping probabilities (not shown here) and observed that they tend to show worse trade-offs between regret and communication complexity.\nThis shows that our proposed FwA protocol is a viable alternative to Flooding if one needs reduced communication complexity and good regret, uniformly across various network topologies.\n###figure_4###"
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "Link congestion",
+ "text": "In a setting where new messages are constantly produced by every agent (as each pulls an arm at each time step), one of the potential issues is link congestion caused by a large number of messages passing through bottleneck links.\nThis can lead to significantly decreased performance and undesirable latency effects - messages may be queued with limited memory in reliable link protocols, or automatically discarded in non-reliable link protocols once more messages are being sent than the link can handle.\nIn Figure 4 ###reference_###, we visualize the number of messages sent over a particular link, again with TTL .\nWe chose a random link for ER and BA. For SBM, we chose a \u201csparse\u201d link that connects two dense clusters.\nOut of all the considered protocols, FwA results in the largest reduction of messages per round while providing good regret; specifically, for ER, BA, and SBM, FwA provides about reduction compared to Flooding, respectively.\nWe note that the reduction is larger even compared to Prob. Flooding - even though Prob. Flooding can occasionally have slightly better overall communication complexity (at worse regret).\nThis implies that our protocol exhibits significant benefits regarding individual network link congestion, which would help us avoid latency effects in real-life network applications.\nOne interesting observation is that FwA produces a spike in the number of messages in the early phase for all network topologies.\nThis is due to the design of the UCB algorithm; in the early phase, most of the agents are exploring the arms, and thus the messages are somewhat \u201cdiverse\u201d.\nBut as soon as the agents identify potentially best arms, the arm indices of the messages start to stabilize. They become less diverse, implying that from then on, absorption occurs more frequently under FwA."
+ },
+ {
+ "section_id": "6.3",
+ "parent_section_id": "6",
+ "section_name": "Dynamic networks",
+ "text": "###figure_5### The advantage of our protocol is especially pronounced when we consider dynamically changing networks, specifically, edge-Markovian networks [20 ###reference_b20###, 18 ###reference_b18###, 19 ###reference_b19###, 17 ###reference_b17###], where the network evolves as follows:\nstarting from an arbitrary initial graph , for , is stochastically determined as\nand are often referred to as edge birth rate and edge death rate, respectively.\nWhen , it is well-known that, regardless of the initial graph , the process converges to (stationary) Erd\u0151s-R\u00e9nyi graph .\nFor our experiments, we start from the baseline ER graph and set . We plot an example trajectory in Figure 5 ###reference_###a.\nThe results are shown in Figure 5 ###reference_###b.\nObserve how well our FwA protocol matches the regret of Flooding, with strictly better communication complexity. This trend is consistent across all considered networks, showing that FwA is the most effective out of the considered protocols in dynamic networks.\nSimilarly to the static case, we experimented with Prob. Flooding of other stopping probabilities (not shown here) and observed that they tend to show worse trade-offs between regret and communication complexity.\nThis suggests that the arm-dependent absorption mechanism of FwA implicitly regularizes the communication complexity in an efficient manner in dynamic networks, while minimizing the loss in regret."
+ },
+ {
+ "section_id": "6.4",
+ "parent_section_id": "6",
+ "section_name": "Tightness of Theoretical Regret Upper Bounds",
+ "text": "###figure_6### ###figure_7### Recall that the theoretically derived regret bounds of Flooding and FwA (Theorem 3.2 ###reference_theorem2### and 4.5 ###reference_theorem5###) depend on both network topology and arm distribution.\nFlooding scales with and FwA scales with .\nTo show that the theoretical bounds are tight and match well with practice, we perform an ablation study by varying the underlying edge density and seeing whether the aforementioned quantities scale well with the actual regrets.\nWe consider of varying edge density and compare estimated \u2019s and regrets of Flooding and FwA under the same setting as in previous experiments.\nComputing \u2019s requires computing the chromatic numbers, which is NP-hard.\nWe thus approximate it with the size of a greedy coloring of the considered graph; for Erd\u0151s-R\u00e9nyi graphs, the greedy coloring asymptotically results in twice the true chromatic number [25 ###reference_b25###].\nIn Figure 6 ###reference_###, we scatter plotted along with a best linear fit for Flooding and FwA.\nIndeed, it can be seen that the relationship is almost linear, with high , showing that our regret bounds indeed reflect the regrets in practice.\nThere are some deviations from linearity, which we believe is due to small horizon length and inaccuracy in estimating ."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this work, we described a novel setting for distributed multi-armed bandits, where agents communicate on an underlying network and do not all share the same arm set. We assume that each agent runs a UCB algorithm to identify their local best arm, and communicates the information they receive to their neighbors to minimize cumulative group regret. To deal with the very large communication complexity that however arises from using Flooding in our setting, we then introduced a new communication protocol for complex networks, Flooding with Absorption (FwA). With FwA, agents forward information only if it pertains to an arm they themselves do not include in their arm set, whereas they absorb a message that gives information about one of their own arms.\nWe provided theoretical upper and lower regret bounds and showed experimentally that FwA incurs only minimal group regret performance loss compared to Flooding and even Probabilistic Flooding, while leading to a significantly improved communication complexity. In particular, we showed that FwA can reduce link congestion, which significantly improves upon simple heuristics such as probabilistic flooding. Our protocol is fully network-agnostic and hence does not need any fine-tuning, while still making use of the inherent heterogeneity of the problem instance. This makes it a very suitable choice for dynamically changing networks, or those that suffer from occasional message loss.\nWe believe that our work highlights the importance of integrating network topology and action heterogeneity in the design of distributed bandit algorithms, and provides an efficient way to connect bandit learning and network protocols."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Notations",
+ "text": "Communication network\nSet of vertices in the graph, i.e., agents\nInduced subgraph over\nThe number of agents\nSet of agents having arm\nSet of agents having arm as the suboptimal arm\nThe set of all arms in the network\nThe set of arms agent has access to\nThe set of (locally) suboptimal arms agent has access to\nThe best local arm for agent\nReward of best local arm for agent\nAverage reward that agent has computed for arm at time\nNumber of observations of arm available to agent at time\nNumber of times agent pulls arm up to time\nNumber of times all agents pull arm up to time\nTime-to-live of a message sent by an agent\nGraph power of of -th order: graph over such that when , is an edge\nTime horizon\nCumulative group regret over the time horizon\nAgent-specific suboptimality gap"
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Proof of Theorem 3.2 \u2013 Regret Upper Bound",
+ "text": "The basic proof idea is to consider , which is equivalent to the collaboration network via flooding with TTL .\nThen we consider a clique covering of , and upper bound the group regret of the full collaboration by that in which collaboration happens only intra-clique.\nThe deterministic delays between agents in each clique are precisely their distance, measured in the original communication network .\nWe proceed similarly to [68 ###reference_b68###] while filling in some missing details.\nFor each and (i.e., contains as a (locally) suboptimal arm), define , which can be rewritten as\nThus, this implies that the following event holds:\ni.e. .\nNext, for , define\ni.e., is the first time at which agent , and its neighbors, observe arm at least times.\nDenoting , we have that [8 ###reference_b8###]\nThen the regret can be rewritten as follows [8 ###reference_b8###]:\n(a) is bounded using clique covering and Abel transformation:\nFor each with , we have that\n(b) is bounded as follows:\nThe following hold for each and locally suboptimal :\nCombining both lemmas gives the desired statement.\nLet be a (vertex-disjoint) clique cover of , and let .\nLet be such that .\nWe fist have that .\nFor each ,\nHere, follows from the simple decomposition of :\n(b) follows from the following reasoning: first, we have that as at time , any remaining agent should have at least observations of , i.e., is already reached.\nThus, we have that\nand\nRecall that the group regret for our clique is , where for simplicity we denote .\nThe important observation is that to make the worst-case regret upper-bound, we must \u201callocate\u201d the most number of pulls to the arms with the largest gap,\ni.e.,\nwhere follows from the Abel transformation.\nThus,\nWe start by noting that\nWe consider first.\nFor simplicity, denote .\nThe initialization phase of Algorithm 1 ###reference_thm1### implies that\nDenote , , and .\nAlso, with a slight abuse of notation, here let us denote to be the reward received by agent when she pulls arm for the -th time.\nThen,\nwhere is i.i.d. -subGaussian random variable with .\nBefore moving forward, we recall a maximal-type concentration result:\nLet be a sequence of independent -subGaussian random variables, and let .\nThen, for any ,\nUsing the above concentration result as well as the peeling argument on a geometric grid [8 ###reference_b8###, 34 ###reference_b34###], we have that for any with ,\nOne can show the same for .\n(Note how both of them do not depend on any graph theoretical quantities).\nThen,\nwhere we\u2019ve set for all \u2019s.\nFollowing [8 ###reference_b8###], we choose .\nThen,"
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "Appendix C Additional Discussions on the Theoretical Results",
+ "text": "One might ask whether it is possible to use a star decomposition-type argument for our setting similar to [34 ###reference_b34###], which could give us an improvement from to .\nWe believe this is not possible, and the reason is as follows.\nWe note that the main technical challenge is that unlike the homogeneous settings [50 ###reference_b50###, 34 ###reference_b34###], the agents and the arms are intertwined: In a homogeneous setting, one could rewrite the regret as\ni.e., the regret can be decomposed such that one only needs to upper bound the number of visitations of each agent .\nThus with a star decomposition, for each star, it can be easily seen that the number of visitations of the leaf agents is precisely that of the center agent, allowing for us to further decompose to sum of over all center agents .\nThis is at the core of dealing with more general homogeneous settings such as when there is a communication network [34 ###reference_b34###], possibly with faults [50 ###reference_b50###].\nSuch a decomposition is not possible when the agents are heterogeneous, as the maximal suboptimality gap of , , is agent-dependent.\nTo deal with such heterogeneity in the full information sharing (fully-connected graph) setting, Yang et al. [68 ###reference_b68###] first ordered the agents according to , then bounded the cumulative number of times suboptimal arm is visited by agents via the design of UCB algorithm.\nThen, based on the intuition that the worst-case regret bound occurs when the arm with the highest is visited at its maximum, the final regret bound is derived via the Abel transformation.\nThe fact that all agents have access to all other agents\u2019 information is crucial in this proof idea.\nThis can be seen from a very simple example; consider a situation in which agent has a very difficult problem (very small ) but has lots of connections, and agent has a somewhat easy problem but has few connections.\nIn this case, from the collaboration, it may be that learns faster than , which can impact the ordering of the agents, which in turn impacts the whole Abel transformation-based argument.\nIn a sense, our proof combines these two ideas.\nWe start with a clique decomposition, instead of the star decomposition, of the graph in order to upper-bound the group regret with the sum of regrets of each clique.\nThen, we apply the Abel transformation-type argument to each clique.\nLastly, we remark that our results can also be easily extended to the setting where the agents asynchronously pull the arms, i.e., each agent pulls arms at every round, with .\nIt would be an interesting future direction if we could further reduce the communication cost in the asynchronous case based on ideas from recent works [16 ###reference_b16###, 69 ###reference_b69###, 68 ###reference_b68###].\nIn order to match the lower bound to the upper bound in terms of graph topology and arm distribution-dependent constant as well, there are two avenues, both of which are inspired by Kolla et al. 
[34 ###reference_b34###], who consider the homogeneous setting with a general graph.\nOn the one hand, it might be valuable to consider a Follow-Your-Leader-type policy, which would tighten the upper bound to match our lower bound.\nHowever, it is unclear how to choose the leaders in our heterogeneous setting, let alone how to ensure that followers get relevant information on the arms.\nAnother way is to consider a more restricted class of policies, namely NAIC (non-altruistic & individually consistent) policies, which would tighten the lower bound to match our upper bound.\nExtending such notion to our setting of heterogeneous bandits over a graph while taking the communication protocol into account, e.g., whether we use the entire graph (Flooding) or we use part of the graph (FwA), and seeing whether we can match the lower bound up to the derived upper bounds, is another interesting future direction.\nOn a separate note, deriving a minimax lower bound, as done in Madhushani et al. [49 ###reference_b49###], for our setting is also an interesting future direction."
+ }
+ ],
+ "tables": {},
+ "image_paths": {
+ "1": {
+ "figure_path": "2303.05445v4_figure_1.png",
+ "caption": "Figure 1: Communication network and arm heterogeneity.",
+ "url": "http://arxiv.org/html/2303.05445v4/x1.png"
+ },
+ "2": {
+ "figure_path": "2303.05445v4_figure_2.png",
+ "caption": "Figure 2: Flooding with Absorption (FwA).\na, An agent (v1subscript\ud835\udc631v_{1}italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT) pulls one of its arms (a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT).\nb, v1subscript\ud835\udc631v_{1}italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT sends a message m\ud835\udc5amitalic_m to its neighbors, with a TTL \u03b3\ud835\udefe\\gammaitalic_\u03b3.\nc, Since one receiver of the message (v4subscript\ud835\udc634v_{4}italic_v start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT) does not have a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT in its arm set, they forward m\ud835\udc5amitalic_m to their neighbors except the originator v1subscript\ud835\udc631v_{1}italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. The other receiver (v2subscript\ud835\udc632v_{2}italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT) has arm a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT in their arm set, and thus it absorbs m\ud835\udc5amitalic_m.",
+ "url": "http://arxiv.org/html/2303.05445v4/x2.png"
+ },
+ "3": {
+ "figure_path": "2303.05445v4_figure_3.png",
+ "caption": "Figure 3: Comparing group regret and (cumulative) communication complexity across different topologies and protocols. Note that FwA gives a good trade-off between regret and communication complexity.",
+ "url": "http://arxiv.org/html/2303.05445v4/x3.png"
+ },
+ "4": {
+ "figure_path": "2303.05445v4_figure_4.png",
+ "caption": "Figure 4: FwA significantly decreases congestion on sparse network links. We find that, in comparison with other protocols, FwA results in a reduced number of messages sent over such a sparse link (highlighted in the networks).",
+ "url": "http://arxiv.org/html/2303.05445v4/"
+ },
+ "5": {
+ "figure_path": "2303.05445v4_figure_5.png",
+ "caption": "Figure 5: Comparing group regret and (cumulative) communication complexity in a dynamic network setting. Note that FwA achieves the same regret as Flooding, with a much lower cumulative communication complexity.",
+ "url": "http://arxiv.org/html/2303.05445v4/x5.png"
+ },
+ "6(a)": {
+ "figure_path": "2303.05445v4_figure_6(a).png",
+ "caption": "(a) Regret of Flooding vs. \u03b4F\u2062l\u2062o\u2062o\u2062d\u2062i\u2062n\u2062gsuperscript\ud835\udeff\ud835\udc39\ud835\udc59\ud835\udc5c\ud835\udc5c\ud835\udc51\ud835\udc56\ud835\udc5b\ud835\udc54\\delta^{Flooding}italic_\u03b4 start_POSTSUPERSCRIPT italic_F italic_l italic_o italic_o italic_d italic_i italic_n italic_g end_POSTSUPERSCRIPT. Orange line is the best linear fit (R2=0.9439superscript\ud835\udc4520.9439R^{2}=0.9439italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT = 0.9439).\nFigure 6: Experimental Results on \u03b4\ud835\udeff\\deltaitalic_\u03b4. Note that there is a strong linear correlation between the estimated \u03b4F\u2062l\u2062o\u2062o\u2062d\u2062i\u2062n\u2062g,\u03b4F\u2062w\u2062Asuperscript\ud835\udeff\ud835\udc39\ud835\udc59\ud835\udc5c\ud835\udc5c\ud835\udc51\ud835\udc56\ud835\udc5b\ud835\udc54superscript\ud835\udeff\ud835\udc39\ud835\udc64\ud835\udc34\\delta^{Flooding},\\delta^{FwA}italic_\u03b4 start_POSTSUPERSCRIPT italic_F italic_l italic_o italic_o italic_d italic_i italic_n italic_g end_POSTSUPERSCRIPT , italic_\u03b4 start_POSTSUPERSCRIPT italic_F italic_w italic_A end_POSTSUPERSCRIPT and the final resulting regrets of Flooding, FwA, respectively.",
+ "url": "http://arxiv.org/html/2303.05445v4/x6.png"
+ },
+ "6(b)": {
+ "figure_path": "2303.05445v4_figure_6(b).png",
+ "caption": "(b) Regret of FwA vs. \u03b4F\u2062w\u2062Asuperscript\ud835\udeff\ud835\udc39\ud835\udc64\ud835\udc34\\delta^{FwA}italic_\u03b4 start_POSTSUPERSCRIPT italic_F italic_w italic_A end_POSTSUPERSCRIPT. Orange line is the best linear fit (R2=0.9813superscript\ud835\udc4520.9813R^{2}=0.9813italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT = 0.9813).\nFigure 6: Experimental Results on \u03b4\ud835\udeff\\deltaitalic_\u03b4. Note that there is a strong linear correlation between the estimated \u03b4F\u2062l\u2062o\u2062o\u2062d\u2062i\u2062n\u2062g,\u03b4F\u2062w\u2062Asuperscript\ud835\udeff\ud835\udc39\ud835\udc59\ud835\udc5c\ud835\udc5c\ud835\udc51\ud835\udc56\ud835\udc5b\ud835\udc54superscript\ud835\udeff\ud835\udc39\ud835\udc64\ud835\udc34\\delta^{Flooding},\\delta^{FwA}italic_\u03b4 start_POSTSUPERSCRIPT italic_F italic_l italic_o italic_o italic_d italic_i italic_n italic_g end_POSTSUPERSCRIPT , italic_\u03b4 start_POSTSUPERSCRIPT italic_F italic_w italic_A end_POSTSUPERSCRIPT and the final resulting regrets of Flooding, FwA, respectively.",
+ "url": "http://arxiv.org/html/2303.05445v4/x7.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
163
+ "1": {
164
+ "title": "Opportunistic Spectrum Access with Multiple Users: Learning under\nCompetition.",
165
+ "author": "Animashree Anandkumar, Nithin Michael, and Ao Tang.",
166
+ "venue": "In 2010 Proceedings IEEE INFOCOM, pages 1\u20139, 2010.",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "2": {
172
+ "title": "Finite-time Analysis of the Multiarmed Bandit Problem.",
173
+ "author": "Peter Auer, Nicol\u00f2 Cesa-Bianchi, and Paul Fischer.",
174
+ "venue": "Machine Learning, 47(2):235\u2013256, 2002.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "3": {
180
+ "title": "Multi-user lax communications: A multi-armed bandit approach.",
181
+ "author": "Orly Avner and Shie Mannor.",
182
+ "venue": "In IEEE INFOCOM 2016 - The 35th Annual IEEE International\nConference on Computer Communications, pages 1\u20139, 2016.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "4": {
188
+ "title": "Emergence of Scaling in Random Networks.",
189
+ "author": "Albert-L\u00e1szl\u00f3 Barab\u00e1si and R\u00e9ka Albert.",
190
+ "venue": "Science, 286(5439):509\u2013512, 1999.",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "5": {
196
+ "title": "Statistical mechanics of complex networks.",
197
+ "author": "Albert-L\u00e1szl\u00f3 Barab\u00e1si and R\u00e9ka Albert.",
198
+ "venue": "Reviews of Modern Physics, 74(1):47\u201397, 2002.",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "6": {
204
+ "title": "The Exploration-Exploitation Dilemma: A Multidisciplinary\nFramework.",
205
+ "author": "Oded Berger-Tal, Jonathan Nathan, Ehud Meron, and David Saltz.",
206
+ "venue": "PLOS ONE, 9:1\u20138, 04 2014.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "7": {
212
+ "title": "Modern Graph Theory, volume 184 of Graduate Texts in\nMathematics.",
213
+ "author": "B\u00e9la Bollob\u00e1s.",
214
+ "venue": "Springer, 2002.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "8": {
220
+ "title": "Bandits Games and Clustering Foundations.",
221
+ "author": "S\u00e9bastien Bubeck.",
222
+ "venue": "PhD thesis, INRIA Nord Europe, June 2010.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "9": {
228
+ "title": "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit\nProblems.",
229
+ "author": "S\u00e9bastien Bubeck and Nicol\u00f2 Cesa-Bianchi.",
230
+ "venue": "Foundations and Trends\u00ae in Machine Learning, 5(1):1\u2013122,\n2012.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "10": {
236
+ "title": "Information sharing in distributed stochastic bandits.",
237
+ "author": "Swapna Buccapatnam, Jian Tan, and Li Zhang.",
238
+ "venue": "In 2015 IEEE Conference on Computer Communications (INFOCOM),\npages 2605\u20132613, 2015.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "11": {
244
+ "title": "Global Computation in a Poorly Connected World: Fast Rumor Spreading\nwith No Dependence on Conductance.",
245
+ "author": "Keren Censor-Hillel, Bernhard Haeupler, Jonathan Kelner, and Petar Maymounkov.",
246
+ "venue": "In Proceedings of the Forty-Fourth Annual ACM Symposium on\nTheory of Computing, STOC \u201912, page 961\u2013970. Association for Computing\nMachinery, 2012.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "12": {
252
+ "title": "Cooperative Online Learning: Keeping your Neighbors Updated.",
253
+ "author": "Nicol\u00f2 Cesa-Bianchi, Tommaso Cesari, and Claire Monteleoni.",
254
+ "venue": "In Proceedings of the 31st International Conference on\nAlgorithmic Learning Theory, volume 117 of Proceedings of Machine\nLearning Research, pages 234\u2013250. PMLR, 08 Feb\u201311 Feb 2020.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "13": {
260
+ "title": "Controlled Flooding Search in a Large Network.",
261
+ "author": "Nicholas B Chang and Mingyan Liu.",
262
+ "venue": "IEEE/ACM Transactions on Networking, 15(2):436\u2013449, 2007.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "14": {
268
+ "title": "The Gossiping Insert-Eliminate Algorithm for Multi-Agent Bandits.",
269
+ "author": "Ronshee Chawla, Abishek Sankararaman, Ayalvadi Ganesh, and Sanjay Shakkottai.",
270
+ "venue": "In Proceedings of the Twenty Third International Conference on\nArtificial Intelligence and Statistics, volume 108 of Proceedings of\nMachine Learning Research, pages 3471\u20133481. PMLR, 26\u201328 Aug 2020.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "15": {
276
+ "title": "Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits.",
277
+ "author": "Ronshee Chawla, Daniel Vial, Sanjay Shakkottai, and R. Srikant.",
278
+ "venue": "In Proceedings of the 40th International Conference on Machine\nLearning, volume 202 of Proceedings of Machine Learning Research,\npages 4189\u20134217. PMLR, 23\u201329 Jul 2023.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "16": {
284
+ "title": "On-Demand Communication for Asynchronous Multi-Agent Bandits.",
285
+ "author": "Yu-Zhen Janice Chen, Lin Yang, Xuchuang Wang, Xutong Liu, Mohammad Hajiesmaili,\nJohn C. S. Lui, and Don Towsley.",
286
+ "venue": "In Proceedings of The 26th International Conference on\nArtificial Intelligence and Statistics, volume 206 of Proceedings of\nMachine Learning Research, pages 3903\u20133930. PMLR, 25\u201327 Apr 2023.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "17": {
292
+ "title": "Rumor Spreading in Random Evolving Graphs.",
293
+ "author": "Andrea Clementi, Pierluigi Crescenzi, Carola Doerr, Pierre Fraigniaud,\nFrancesco Pasquale, and Riccardo Silvestri.",
294
+ "venue": "Random Structures & Algorithms, 48(2):290\u2013312, 2016.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "18": {
300
+ "title": "Information Spreading in Stationary Markovian Evolving Graphs.",
301
+ "author": "Andrea Clementi, Angelo Monti, Francesco Pasquale, and Riccardo Silvestri.",
302
+ "venue": "IEEE Transactions on Parallel and Distributed Systems,\n22(9):1425\u20131432, 2011.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "19": {
308
+ "title": "Information spreading in dynamic graphs.",
309
+ "author": "Andrea Clementi, Riccardo Silvestri, and Luca Trevisan.",
310
+ "venue": "Distributed Computing, 28(1):55\u201373, Feb 2015.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "20": {
316
+ "title": "Flooding Time of Edge-Markovian Evolving Graphs.",
317
+ "author": "Andrea E. F. Clementi, Claudio Macci, Angelo Monti, Francesco Pasquale, and\nRiccardo Silvestri.",
318
+ "venue": "SIAM Journal on Discrete Mathematics, 24(4):1694\u20131712, 2010.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "21": {
324
+ "title": "Introduction to Algorithms.",
325
+ "author": "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.",
326
+ "venue": "The MIT Press, 4 edition, 2022.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "22": {
332
+ "title": "Cooperative Multi-Agent Bandits with Heavy Tails.",
333
+ "author": "Abhimanyu Dubey and Alex Pentland.",
334
+ "venue": "In Proceedings of the 37th International Conference on Machine\nLearning, volume 119 of Proceedings of Machine Learning Research,\npages 2730\u20132739. PMLR, 13\u201318 Jul 2020.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "23": {
340
+ "title": "On random graphs, I.",
341
+ "author": "Paul Erd\u0151s and Alfr\u00e9d R\u00e9nyi.",
342
+ "venue": "Publicationes Mathematicae Debrecen, 6:1290\u2013297, 1959.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "24": {
348
+ "title": "Epidemic Information Dissemination in Distributed Systems.",
349
+ "author": "P.T. Eugster, R. Guerraoui, A.-M. Kermarrec, and L. Massouli\u00e9.",
350
+ "venue": "Computer, 37(5):60\u201367, 2004.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "25": {
356
+ "title": "Algorithmic Theory of Random Graphs.",
357
+ "author": "Alan Frieze and Colin McDiarmid.",
358
+ "venue": "Random Structures & Algorithms, 10(1\u20132):5\u201342, feb 1997.",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "26": {
364
+ "title": "Random Graphs.",
365
+ "author": "E. N. Gilbert.",
366
+ "venue": "The Annals of Mathematical Statistics, 30(4):1141 \u2013 1144,\n1959.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "27": {
372
+ "title": "Problems from the world surrounding perfect graphs.",
373
+ "author": "A. Gy\u00e1rf\u00e1s.",
374
+ "venue": "Applicationes Mathematicae, 19(3-4):413\u2013441, 1987.",
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "28": {
380
+ "title": "The chromatic gap and its extremes.",
381
+ "author": "Andr\u00e1s Gy\u00e1rf\u00e1s, Andr\u00e1s Seb\u0151, and Nicolas Trotignon.",
382
+ "venue": "Journal of Combinatorial Theory, Series B, 102(5):1155\u20131178,\n2012.",
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "29": {
388
+ "title": "Simple, Fast and Deterministic Gossip and Rumor Spreading.",
389
+ "author": "Bernhard Haeupler.",
390
+ "venue": "Journal of the ACM, 62(6), dec 2015.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "30": {
396
+ "title": "Exploring Network Structure, Dynamics, and Function using NetworkX.",
397
+ "author": "Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart.",
398
+ "venue": "In Proceedings of the 7th Python in Science Conference, pages\n11 \u2013 15, 2008.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "31": {
404
+ "title": "Stochastic blockmodels: First steps.",
405
+ "author": "Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt.",
406
+ "venue": "Social Networks, 5(2):109\u2013137, 1983.",
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "32": {
412
+ "title": "Cooperative Motion Generation in a Distributed Network of Redundant\nRobot Manipulators With Noises.",
413
+ "author": "Long Jin, Shuai Li, Lin Xiao, Rongbo Lu, and Bolin Liao.",
414
+ "venue": "IEEE Transactions on Systems, Man, and Cybernetics: Systems,\n48(10):1715\u20131724, 2018.",
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "33": {
420
+ "title": "Bond and site color-avoiding percolation in scale-free networks.",
421
+ "author": "Andrea Kadovi\u0107, Sebastian M. Krause, Guido Caldarelli, and Vinko Zlatic.",
422
+ "venue": "Physical Review E, 98:062308, Dec 2018.",
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "34": {
428
+ "title": "Collaborative Learning of Stochastic Bandits Over a Social Network.",
429
+ "author": "Ravi Kumar Kolla, Krishna Jagannathan, and Aditya Gopalan.",
430
+ "venue": "IEEE/ACM Transactions on Networking, 26(4):1782\u20131795, 2018.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "35": {
436
+ "title": "Hidden Connectivity in Networks with Vulnerable Classes of Nodes.",
437
+ "author": "Sebastian M. Krause, Michael M. Danziger, and Vinko Zlati\u0107.",
438
+ "venue": "Phys. Rev. X, 6:041022, Oct 2016.",
439
+ "url": null
440
+ }
441
+ },
442
+ {
443
+ "36": {
444
+ "title": "Color-avoiding percolation.",
445
+ "author": "Sebastian M. Krause, Michael M. Danziger, and Vinko Zlati\u0107.",
446
+ "venue": "Physical Review E, 96:022313, Aug 2017.",
447
+ "url": null
448
+ }
449
+ },
450
+ {
451
+ "37": {
452
+ "title": "Asymptotically efficient adaptive allocation rules.",
453
+ "author": "T.L Lai and Herbert Robbins.",
454
+ "venue": "Advances in Applied Mathematics, 6(1):4\u201322, 1985.",
455
+ "url": null
456
+ }
457
+ },
458
+ {
459
+ "38": {
460
+ "title": "Distributed cooperative decision making in multi-agent multi-armed\nbandits.",
461
+ "author": "Peter Landgren, Vaibhav Srivastava, and Naomi Ehrich Leonard.",
462
+ "venue": "Automatica, 125:109445, 2021.",
463
+ "url": null
464
+ }
465
+ },
466
+ {
467
+ "39": {
468
+ "title": "Bandit algorithms.",
469
+ "author": "Tor Lattimore and Csaba Szepesv\u00e1ri.",
470
+ "venue": "Cambridge University Press, 2020.",
471
+ "url": null
472
+ }
473
+ },
474
+ {
475
+ "40": {
476
+ "title": "Spectrum bandit optimization.",
477
+ "author": "Marc Lelarge, Alexandre Prouti\u00e8re, and M. Sadegh Talebi.",
478
+ "venue": "In 2013 IEEE Information Theory Workshop (ITW), pages 1\u20135,\n2013.",
479
+ "url": null
480
+ }
481
+ },
482
+ {
483
+ "41": {
484
+ "title": "The myopia of learning.",
485
+ "author": "Daniel A Levinthal and James G March.",
486
+ "venue": "Strategic management journal, 14(S2):95\u2013112, 1993.",
487
+ "url": null
488
+ }
489
+ },
490
+ {
491
+ "42": {
492
+ "title": "Multi-Armed-Bandit-Based Spectrum Scheduling Algorithms in Wireless\nNetworks: A Survey.",
493
+ "author": "Feng Li, Dongxiao Yu, Huan Yang, Jiguo Yu, Holger Karl, and Xiuzhen Cheng.",
494
+ "venue": "IEEE Wireless Communications, 27(1):24\u201330, 2020.",
495
+ "url": null
496
+ }
497
+ },
498
+ {
499
+ "43": {
500
+ "title": "A Contextual-Bandit Approach to Personalized News Article\nRecommendation.",
501
+ "author": "Lihong Li, Wei Chu, John Langford, and Robert E. Schapire.",
502
+ "venue": "In Proceedings of the 19th International Conference on World\nWide Web, WWW \u201910, page 661\u2013670. Association for Computing Machinery,\n2010.",
503
+ "url": null
504
+ }
505
+ },
506
+ {
507
+ "44": {
508
+ "title": "Cooperative Distributed Source Seeking by Multiple Robots:\nAlgorithms and Experiments.",
509
+ "author": "Shuai Li, Ruofan Kong, and Yi Guo.",
510
+ "venue": "IEEE/ASME Transactions on Mechatronics, 19(6):1810\u20131820, 2014.",
511
+ "url": null
512
+ }
513
+ },
514
+ {
515
+ "45": {
516
+ "title": "Flooding in wireless ad hoc networks.",
517
+ "author": "H Lim and C Kim.",
518
+ "venue": "Computer Communications, 24(3):353\u2013363, 2001.",
519
+ "url": null
520
+ }
521
+ },
522
+ {
523
+ "46": {
524
+ "title": "Distributed Learning in Multi-Armed Bandit With Multiple Players.",
525
+ "author": "Keqin Liu and Qing Zhao.",
526
+ "venue": "IEEE Transactions on Signal Processing, 58(11):5667\u20135681,\n2010.",
527
+ "url": null
528
+ }
529
+ },
530
+ {
531
+ "47": {
532
+ "title": "Search and Replication in Unstructured Peer-to-Peer Networks.",
533
+ "author": "Qin Lv, Pei Cao, Edith Cohen, Kai Li, and Scott Shenker.",
534
+ "venue": "In Proceedings of the 16th International Conference on\nSupercomputing, ICS \u201902, page 84\u201395, New York, NY, USA, 2002. Association\nfor Computing Machinery.",
535
+ "url": null
536
+ }
537
+ },
538
+ {
539
+ "48": {
540
+ "title": "Can Heterogeneity Make Gnutella Scalable?",
541
+ "author": "Qin Lv, Sylvia Ratnasamy, and Scott Shenker.",
542
+ "venue": "In Revised Papers from the First International Workshop on\nPeer-to-Peer Systems, IPTPS \u201901, page 94\u2013103, Berlin, Heidelberg, 2002.\nSpringer-Verlag.",
543
+ "url": null
544
+ }
545
+ },
546
+ {
547
+ "49": {
548
+ "title": "One More Step Towards Reality: Cooperative Bandits with Imperfect\nCommunication.",
549
+ "author": "Udari Madhushani, Abhimanyu Dubey, Naomi Leonard, and Alex Pentland.",
550
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 34, pages 7813\u20137824. Curran Associates, Inc., 2021.",
551
+ "url": null
552
+ }
553
+ },
554
+ {
555
+ "50": {
556
+ "title": "Distributed Bandits: Probabilistic Communication on d-regular\nGraphs.",
557
+ "author": "Udari Madhushani and Naomi Ehrich Leonard.",
558
+ "venue": "In 2021 European Control Conference (ECC), pages 830\u2013835,\n2021.",
559
+ "url": null
560
+ }
561
+ },
562
+ {
563
+ "51": {
564
+ "title": "Exploration and Exploitation in Organizational Learning.",
565
+ "author": "James G. March.",
566
+ "venue": "Organization Science, 2(1):71\u201387, 1991.",
567
+ "url": null
568
+ }
569
+ },
570
+ {
571
+ "52": {
572
+ "title": "Unpacking the exploration\u2013exploitation tradeoff: A synthesis of\nhuman and animal literatures.",
573
+ "author": "Katja Mehlhorn, Ben R Newell, Peter M Todd, Michael D Lee, Kate Morgan,\nVictoria A Braithwaite, Daniel Hausmann, Klaus Fiedler, and Cleotilde\nGonzalez.",
574
+ "venue": "Decision, 2(3):191\u2013215, 2015.",
575
+ "url": null
576
+ }
577
+ },
578
+ {
579
+ "53": {
580
+ "title": "Bounded clique cover of some sparse graphs.",
581
+ "author": "Andrea Munaro.",
582
+ "venue": "Discrete Mathematics, 340(9):2208\u20132216, 2017.",
583
+ "url": null
584
+ }
585
+ },
586
+ {
587
+ "54": {
588
+ "title": "Probabilistic Flooding Performance Analysis Exploiting Graph Spectra\nProperties.",
589
+ "author": "Konstantinos Oikonomou, George Koufoudakis, Sonia A\u00efssa, and Ioannis\nStavrakakis.",
590
+ "venue": "IEEE/ACM Transactions on Networking, 31(1):133\u2013146, 2023.",
591
+ "url": null
592
+ }
593
+ },
594
+ {
595
+ "55": {
596
+ "title": "Controlled flooding in wireless ad-hoc networks.",
597
+ "author": "Ashikur Rahman, Wlodek Olesinski, and Pawel Gburzynski.",
598
+ "venue": "In International Workshop on Wireless Ad-Hoc Networks, 2004.,\npages 73\u201378. IEEE, 2004.",
599
+ "url": null
600
+ }
601
+ },
602
+ {
603
+ "56": {
604
+ "title": "Social Learning in Multi Agent Multi Armed Bandits.",
605
+ "author": "Abishek Sankararaman, Ayalvadi Ganesh, and Sanjay Shakkottai.",
606
+ "venue": "Proceedings of the ACM on Measurement and Analysis of Computing\nSystems, 3(3), Dec 2019.",
607
+ "url": null
608
+ }
609
+ },
610
+ {
611
+ "57": {
612
+ "title": "Gossip Algorithms.",
613
+ "author": "Devavrat Shah.",
614
+ "venue": "Foundations and Trends\u00ae in Networking, 3(1):1\u2013125, 2009.",
615
+ "url": null
616
+ }
617
+ },
618
+ {
619
+ "58": {
620
+ "title": "Foraging behavior of interacting robots with virtual pheromone.",
621
+ "author": "K. Sugawara, T. Kazama, and T. Watanabe.",
622
+ "venue": "In 2004 IEEE/RSJ International Conference on Intelligent Robots\nand Systems (IROS), volume 3, pages 3074\u20133079 vol.3, 2004.",
623
+ "url": null
624
+ }
625
+ },
626
+ {
627
+ "59": {
628
+ "title": "Gossip-based distributed stochastic bandit algorithms.",
629
+ "author": "Balazs Szorenyi, Robert Busa-Fekete, Istvan Hegedus, Robert Ormandi, Mark\nJelasity, and Balazs Kegl.",
630
+ "venue": "In Proceedings of the 30th International Conference on Machine\nLearning, volume 28 of Proceedings of Machine Learning Research, pages\n19\u201327. PMLR, 17\u201319 Jun 2013.",
631
+ "url": null
632
+ }
633
+ },
634
+ {
635
+ "60": {
636
+ "title": "Computer Networks.",
637
+ "author": "Andrew S. Tanenbaum, Nick Feamster, and David Wetherall.",
638
+ "venue": "Pearson, 6 edition, 2021.",
639
+ "url": null
640
+ }
641
+ },
642
+ {
643
+ "61": {
644
+ "title": "The Broadcast Storm Problem in a Mobile Ad Hoc Network.",
645
+ "author": "Yu-Chee Tseng, Sze-Yao Ni, Yuh-Shyan Chen, and Jang-Ping Sheu.",
646
+ "venue": "Wireless Networks, 8(2):153\u2013167, Mar 2002.",
647
+ "url": null
648
+ }
649
+ },
650
+ {
651
+ "62": {
652
+ "title": "Epidemic Routing for Partially-Connected Ad Hoc Networks.",
653
+ "author": "Amin Vahdat and David Becker.",
654
+ "venue": "Technical Report CS-200006, Duke University, 2000.",
655
+ "url": null
656
+ }
657
+ },
658
+ {
659
+ "63": {
660
+ "title": "Robust Multi-Agent Multi-Armed Bandits.",
661
+ "author": "Daniel Vial, Sanjay Shakkottai, and R. Srikant.",
662
+ "venue": "In Proceedings of the Twenty-Second International Symposium on\nTheory, Algorithmic Foundations, and Protocol Design for Mobile Networks and\nMobile Computing, MobiHoc \u201921, page 161\u2013170. Association for Computing\nMachinery, 2021.",
663
+ "url": null
664
+ }
665
+ },
666
+ {
667
+ "64": {
668
+ "title": "Hop limited flooding over dynamic networks.",
669
+ "author": "Milan Vojnovi\u0107 and Alexandre Prouti\u00e8re.",
670
+ "venue": "In 2011 Proceedings IEEE INFOCOM, pages 685\u2013693, 2011.",
671
+ "url": null
672
+ }
673
+ },
674
+ {
675
+ "65": {
676
+ "title": "Optimal Algorithms for Multiplayer Multi-Armed Bandits.",
677
+ "author": "Po-An Wang, Alexandre Prouti\u00e8re, Kaito Ariu, Yassir Jedra, and Alessio\nRusso.",
678
+ "venue": "In Proceedings of the Twenty Third International Conference on\nArtificial Intelligence and Statistics, volume 108 of Proceedings of\nMachine Learning Research, pages 4120\u20134129. PMLR, 26\u201328 Aug 2020.",
679
+ "url": null
680
+ }
681
+ },
682
+ {
683
+ "66": {
684
+ "title": "Broadcast storm mitigation techniques in vehicular ad hoc networks.",
685
+ "author": "N. Wisitpongphan, O.K. Tonguz, J.S. Parikh, P. Mudalige, F. Bai, and\nV. Sadekar.",
686
+ "venue": "IEEE Wireless Communications, 14(6):84\u201394, 2007.",
687
+ "url": null
688
+ }
689
+ },
690
+ {
691
+ "67": {
692
+ "title": "Multi-Armed Bandit-Based Client Scheduling for Federated Learning.",
693
+ "author": "Wenchao Xia, Tony Q. S. Quek, Kun Guo, Wanli Wen, Howard H. Yang, and Hongbo\nZhu.",
694
+ "venue": "IEEE Transactions on Wireless Communications,\n19(11):7108\u20137123, 2020.",
695
+ "url": null
696
+ }
697
+ },
698
+ {
699
+ "68": {
700
+ "title": "Distributed Bandits with Heterogeneous Agents.",
701
+ "author": "Lin Yang, Yu-Zhen Janice Chen, Mohammad H. Hajiemaili, John C. S. Lui, and Don\nTowsley.",
702
+ "venue": "In IEEE INFOCOM 2022 - IEEE Conference on Computer\nCommunications, pages 200\u2013209, 2022.",
703
+ "url": null
704
+ }
705
+ },
706
+ {
707
+ "69": {
708
+ "title": "Cooperative Stochastic Bandits with Asynchronous Agents and\nConstrained Feedback.",
709
+ "author": "Lin Yang, Yu-Zhen Janice Chen, Stephen Pasteris, Mohammad H. Hajiesmaili, John\nC. S. Lui, and Don Towsley.",
710
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 34, pages 8885\u20138897. Curran Associates, Inc., 2021.",
711
+ "url": null
712
+ }
713
+ }
714
+ ],
715
+ "url": "http://arxiv.org/html/2303.05445v4"
716
+ }
20240225/2303.06440v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2303.15702v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2304.03516v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2305.02759v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2305.11854v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2305.15196v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2305.16882v2.json ADDED
@@ -0,0 +1,66 @@
+ {
+ "title": "Link Residual Closeness of Harary Graphs",
+ "abstract": "The study of networks characteristics is an important subject in different fields, like\nmath, chemistry, transportation, social network analysis etc.\nThe residual closeness is one of the most sensitive measure\nof graphs\u2019 vulnerability. In this article we calculate\nthe link residual closeness of Harary graphs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "One important characteristic of networks is their robustness, studied in many different\nfields of the science.\nOne of the most sensitive measures of network\u2019s vulnerability is residual closeness,\nintroduced in [1] - Dangalchev\nproposed to measure the closeness of a graph after removing a vertex or a link (edge).\nThe definition for the closeness of a simple undirected graph, introduced in [1], is:\nIn the above formula, is the standard distance between vertices and .\nThe advantages of the above definition are that it can be used\nfor not connected graphs and it is convenient for creating formulae for graph operations.\nLet and be a pair of connected vertices of graph and graph be the graph, constructed by removing link from graph .\nLet be the distance between vertices\n and in graph .\nUsing the above formula, with distances instead of , we can calculate the closeness of graph .\nThe link residual closeness R is defined in [1] as:\nIf we remove a vertex, instead of a link, we can define vertex residual closeness.\nThe vertex residual closeness is more important for the social network analysis, while the link residual closeness is studied in transportation, utility networks, etc.\nIn this article we will\nconsider only the link residual closeness.\nTo find the difference between the closeness and the residual closeness we have to compare distances and .\nHarary graphs are introduced in [2] by F. Harary as\ngraphs that are -connected, having vertices with the least number of edges.\nThe notation for Harary graphs,\nwhere is used in West [3].\nA simple construction of Harary graphs is:\nLet us place vertices\nin a circle and name them .\nIn case of even, every vertex is connected to\nnearest vertices in each direction.\nIn case of odd and even, \nis created by connecting every vertex to the nearest vertices in each direction\nand to the diametrically opposite vertex (adding links ).\nIn these two cases there is an automorphism\nbetween any two vertices.\nIn case of odd and odd, the Harary graph is created by\nconnecting every vertex to the nearest vertices in each direction\nand for vertices are added links .\nThis way every vertex is connected to other vertices, except for vertex , which is connected to vertices: in addition to the\n links to the neighbors, there are 2 more links - and .\nThe relative impact of a failure of a link can be seen in normalized residual closeness\nNR ([1]) of graph : .\nIn this article we will calculate the difference between the closeness and\nthe link residual closeness of Harary graphs.\nThe closeness and the vertex residual closeness of some Harary graphs are given in [4].\nWe can determine the link residual closeness using the results of this article and the closeness from [4]. Throughout this article we will use the term \u201cresidual closeness\u201d instead of \u201clink residual closeness\u201d.\nMore information on closeness, residual closeness, and additional closeness can be found in [5-25]."
10
+ },
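The definitions above translate directly into a short computation. Below is a minimal Python sketch (an editorial illustration, assuming networkx; graph sizes are arbitrary) of Dangalchev's closeness and the link residual closeness; since every vertex of a Harary graph with even degree is joined to its k nearest neighbours in each direction, that case is built as a circulant graph.

import networkx as nx

def closeness(G):
    # Dangalchev closeness: sum of 2^(-d(u, v)) over ordered pairs;
    # unreachable pairs contribute 2^(-infinity) = 0.
    return sum(2.0 ** -d
               for _, dists in nx.all_pairs_shortest_path_length(G)
               for d in dists.values() if d > 0)

def link_residual_closeness(G):
    # minimum closeness over all single-link removals
    best = float("inf")
    for u, v in list(G.edges()):
        G.remove_edge(u, v)
        best = min(best, closeness(G))
        G.add_edge(u, v)
    return best

# H_{2k,n}: every vertex joined to its k nearest neighbours in each
# direction, i.e. the circulant graph C_n(1, ..., k).
H = nx.circulant_graph(12, [1, 2])          # H_{4,12}
print(closeness(H), link_residual_closeness(H))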
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "Graph is cycle graph .\nAfter deleting any link of we receive path graph .\nUsing formulae for closenesses of cycle graphs (given in [4]) and path graphs (in [1]) we can prove:\nThe residual closeness of Harary graph is:\nThe formulae for closeness of cycle graphs, given in [4] are:\nThe formula for closeness of path graphs, given in [1] is:\nReplacing in the last formula with and , and subtracting it from the upper two formulae, we prove the theorem."
+ },
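Since deleting any link of the cycle leaves a path, this theorem can be sanity-checked numerically; a small self-contained sketch under the same assumptions as above:

import networkx as nx

def closeness(G):
    return sum(2.0 ** -d
               for _, dists in nx.all_pairs_shortest_path_length(G)
               for d in dists.values() if d > 0)

n = 10
cycle, path = nx.cycle_graph(n), nx.path_graph(n)
# residual closeness of the cycle = closeness of the path P_n
removed = min(closeness(nx.restricted_view(cycle, [], [e]))
              for e in cycle.edges())
assert abs(removed - closeness(path)) < 1e-12
print(closeness(cycle) - removed)   # closeness minus residual closeness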
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "We will consider all cases where . In graph vertex is connected to vertices ,\u2026, as well as to ,\u2026,. Because of the automorphism between any two vertices of the graph we will consider only deleting links starting from vertex .\nBy deleting link , distance is changed from to . The new distance is . The same is the change of the distances (from to ) when deleting links ,\u2026,, because . No other distances are changed when . Every change of a distance should be counted twice, e.g. for distance and for distance .\nIn this case the difference between the closeness and the residual closeness is and:\nDeleting links ,\u2026, cannot result in any changes between different vertices.\nFor example, if is a path with the shortest distance between vertices and , where , then the same distance is given by path .\nWhen deleting link will change, in addition to distance , also distance from 2 to 3. The same will be the change for distance .\nDeleting any other link will not have bigger change in closenesses.\nThe new difference is . The same () is the difference when .\nWhen deleting link will change additionally distance from 3 to 4. The same will be the change for 2 other distances: and .\nThe new difference . The same () is the difference when .\nUsing the floor function , where is the integer part of the division of by , we can prove:\nThe residual closeness of Harary graph is:\nwhere and .\nIn general:\nin Appendix A is proven formula (1):\nDividing formula (1) by 2 and adding 1 we receive: , which proves the theorem."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "There is an automorphism between any two vertices of graph - instead of deleting link we will delete link ; instead of deleting link we will delete link ;\nis a complete graph and . No other distances are changed.\nWe have and:\nFor we have to consider 2 cases.\nCase 1 - Deleting link :\nDistance is changed from 1 to 3:\nThis is the only changed distance. For example: and .\nThe difference in closenesses is: .\nCase 2 - Deleting link :\nA) Distance is changed from 1 to 3:\nThis is true when .\nWhen this is the only changed distance, hence:\nB) When two more distances are changed from 2 to 3. Distance: . The same is the situation with distance , hence:\nC) When , distance is changed from 2 to 4:\nThe same is the situation with distance .\nWhen these are the only changed distances and:\nD) In general, when distance is changed from to :\nor the closeness is changed with .\nThe same is true for other distances:\n,,\u2026,.\nThe difference in closenesses is:\nThe residual closeness is:\nE) Distance , when , is changed from to :\nThe closeness is changed with .\nThe same is the situation with the other distances:\n, ,\u2026,.\nThe difference and the residual closeness are:\nWe can prove now:\nThe residual closeness of Harary graph is:\nFrom:\nwe receive:\nMultiplying formula (1) by gives:\nFrom we receive:\nwhich are exactly the formulae for the residual closeness of ."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "Graph is a complete graph and deleting any link will result in a change of the distance from to :\n and .\nGraph also has plenty of links and by deleting any link, only one distance is changed from to :\n.\nFor the bigger graphs we have to consider 3 cases:\nCase 1 - Deleting link :\nDistance is always changed from to :\n. No other distances are changed.\nCase 2 - Deleting link :\nWhen distance is changed from to :\nor the change is .\nNo other distances are changed.\nCase 3 - Deleting link :\nBy deleting link , distance is changed from to and this is the only changed distance when .\nHence we receive and :\nWhen , other distances start changing. Not only is changed from 1 to 2, but also and are changed from 2 to 3.\nThe residual closeness is:\nThe difference between the closeness and the residual closeness, when , is also .\nNow we can prove:\nThe residual closeness of Harary graph is:\nwhere and .\nWhen , not only the previous distances are changed, but new distances are changed from to :\n, ,\u2026, .\nThe difference between the closeness and the residual closeness is:\nThe residual closeness is:\nThe difference in closenesses is the same for .\nDividing formula (1) by 2 we receive:\nFor the difference we receive:\nwhich proves the theorem."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "We will follow the previous section.\nWhen , by deleting any link, the distance is changed from to : .\nWhen , by deleting link , distance is changed from to and . This is the biggest decrement for in this range.\nWhen , by deleting link , distance is changed from 1 to 2. Also and are changed from 2 to 3. No other distances are changed when and the decrement is: .\nIn general, when and , by deleting link , not only the previous distances are changed, but new distances (,\u2026, ) are changed from to .\nThe difference is:\nSimilarly to Theorem 4 we can prove:\nThe residual closeness of Harary graph is:\nwhere , , and .\nThe difference is:\nDividing formula (1) by 2 we receive:\nUsing , we receive:\nwhich proves the theorem."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "All vertices are connected to 3 other vertices, only vertex is connected to 4 vertices: , , , and .\nDeleting any vertex of graph changes only this distance from to and the difference between the closeness and the residual closeness is .\nWhen we have to consider 4 cases.\nCase 1 - Deleting link :\nDistance is changed from to :\nWhen , this is the only changed distance and the difference in closenesses is 0.5.\nWhen , deleting link does not supply the residual closeness.\nCase 2 - Deleting link :\nWhen , distance is changed from to :\nThis is the only changed distance and the difference in closenesses is 0.75.\nCase 3 - Deleting link :\nWhen , distance is changed from to :\nDistance is changed from to when :\n. The residual closeness, when , is:\nThe only cases when deleting link supplies the residual closeness are .\nCase 4 - Deleting link :\nA) Distance is changed from to :\nWhen this is the only changed distance.\nThe difference is less than the difference in case 3: .\nB) When distance is changed from to :\nWhen distance is changed from to :\nWhen distance is also changed from to :\nThese are the only changed distances when and:\nC) When , two of the changed (from 2 to 3) distances in subcase B have bigger changes (from 2 to 4).\nDistance is changed from to :\nDistance is also changed from to :\nThese are the only changes when and:\nD) In general, when , new distances ,,\u2026, are changed from to , e.g. from path to path .\nWhen another distances ,,\u2026, are changed from to . e.g. from path to path . The distance between vertices and is equal to .\nThese are the only new changes when and:\nE) When the distances ,,\u2026, from subcase D are changed from to , e.g. from path to path .\nThese are the only new changes when and:\nNow we can prove:\nThe residual closeness of Harary graph is:\nwhere .\nFormula (1) for , divided by 2, becomes:\nFormula (1) for divided by 2, becomes:\nAdding both equations we receive:\nThe first items of the sum for are not added in the formula above.\nTo determine linear component L (the first items of the sum) we use:\nor . Then the difference becomes:\nFor the next difference we receive:\nwhich proves the theorem."
+ },
+ {
+ "section_id": "8",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "A) Deleting any link of graph changes distance from to .\nThe same is the situation with graph .\nHence:\nB) For graph , deleting link changes distance from to .\nDeleting a link connecting nodes with closer numbers, like or , changes the distance from to . The same change in the distance (from to ) is caused by deleting link . Hence:\nC) For graph , deleting link changes distance from to and distances and from to :\nD) For graph , , deleting link changes distance from to : . Distance is changed from to : . The same is for distance .\nDistance is also changed from to : . The same is for distance .\nNo other distance is changed when and:\nE) In general, when , , deleting link of graph , in addition to the previous changed distances, distances are changed from to . The change in closeness is:\nWe can prove now:\nThe residual closeness of Harary graph is:\nwhere and .\nFormula (4) is the same as formula (2) from Theorem 6. Using formula (3) from Theorem 6, we determine linear component :\nor . Then the difference becomes:\nwhich proves the theorem."
+ },
+ {
+ "section_id": "9",
+ "parent_section_id": null,
+ "section_name": "Residual closeness of",
+ "text": "We will consider the cases similar to .\nThe differences in closenesses of Harary graphs for the smaller numbers are: , when ; , when ; and , when .\nWhen , deleting link of graph changes distance from to and more distances (, , , and ) from to . The difference in closenesses is:\nIn general, when , deleting link of graph , in addition to the previous changed distances, new distances are changed from to . The difference in closenesses is:\nWe can prove now:\nThe residual closeness of Harary graph is:\nwhere and .\nFormula (5) is the same as formulae (2) and (4).\nSimilarly to the proof of Theorem 6 we have:\nwhere is a linear component, corresponding to the first terms of the sum of .\nUsing , we determine :\nor . Then the difference becomes:\nwhich proves the theorem."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {},
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2305.16882v2"
+ }
20240225/2306.02031v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2307.00014v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2307.00743v4.json ADDED
@@ -0,0 +1,208 @@
+ {
+ "title": "Joint Power Allocation and Beamforming for Active IRS-aided Directional Modulation Network",
+ "abstract": "To boost the secrecy rate (SR) of the conventional directional modulation (DM) network and overcome the double fading effect of the cascaded channels of passive intelligent reflecting surface (IRS), a novel active IRS-assisted DM system with a power adjusting strategy between transmitter and active IRS is proposed in this paper. Then, a joint optimization of maximizing the SR is cast by alternately optimizing the power allocation (PA) factors, transmit beamforming, receive beamforming, and reflect beamforming at IRS, subject to the power constraint at IRS. To tackle the formulated non-convex optimization problem, a high-performance scheme of maximizing SR based on successive convex approximation (SCA) and Schur complement (Max-SR-SS) is proposed, where the derivative operation is employed to optimize the PA factors, the generalized Rayleigh-Ritz theorem is adopted to derive the receive beamforming, and the SCA strategy is utilized to design the transmit beamforming and phase shift matrix of IRS. To reduce the high complexity, a low-complexity scheme, named maximizing SR based on equal amplitude reflecting (EAR) and majorization-minimization (MM) (Max-SR-EM), is developed, where the EAR and MM methods are adopted to derive the amplitude and phase of the IRS phase shift matrix, respectively. In particular, when the receivers are single antenna, a scheme of maximizing SR based on alternating optimization (Max-SR-AO) is proposed, where the PA factors, transmit and reflect beamforming are derived by the fractional programming (FP) and SCA algorithms. Simulation results show that with the same power constraint, the SR gains achieved by the proposed schemes outperform those of the fixed PA and passive IRS schemes.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "The broadcast nature of wireless communication makes the confidential message vulnerable to eavesdropping by the illegal users, leading to security issues of confidential message leakage. Directional modulation (DM), as an advanced and promising physical layer security technology, has attracted the research interest of a wide range of researchers [1, 2, 3, 4, 5].\nDM provides security via directive and is suitable for the line-of-sight (LoS) channels such as millimeter wave, unmanned aerial vehicle, intelligent transportation, maritime communication, and satellite communication [6, 7]. The main ideas of DM are as follows: in the LoS channel, DM transmits confidential message to legitimate user along the desired direction via beamforming vector, and interferes with illegal user eavesdropping by sending artificial noise (AN) in the undesired direction, hence enhancing the secure performance of the system [8]. So far, the research for DM technology is mainly focused on the radio frequency frontend and baseband.\nTo enhance the secrecy rate (SR) of the DM network with an eavesdropper, in [9], in accordance with the convex optimization method, a sparse array of DM was synthesized, and the proposed approach achieved better flexibility in terms of controlling security performance and power efficiency. A DM network with hybrid active and passive eavesdroppers was considered in [10], and a scheme, which used frequency division array with assisted AN technique at the transmitter to achieve secure transmission with angle-range dependence, was proposed.\nUnlike the single legitimate user networks above, the authors in [11] investigated a multi-legitimate user DM network and designed a security-enhancing symbol-level precoding vector, which outperformed the benchmark method in terms of both the power efficiency and security enhancement.\nThe multi-beam DM networks were investigated in [12] and [13], and a generalized synthesis method and an AN-aided zero-forcing synthesis method were proposed by the former and the latter to enhance the system performance, respectively. However, the above mentioned works mainly focus on the scenario where the legitimate user and the eavesdropper have different directions. To ensure secure transmission of the system when the eavesdropper was in the same direction as the legitimate user, the secure precise wireless transmission DM systems were investigated in [14] and [15], which sent confidential message to a specific direction and distance to ensure the secure wireless transmission.\nWith the development of wireless communication, the demand for networks increases dramatically [16]. Using a large number of active devices will lead to serious energy consumption problems; fortunately, the emergence of intelligent reflecting surface (IRS) provides a novel paradigm to overcome this problem. IRS is a planar array of large numbers of passive electromagnetic elements, each of which is capable of independently adjusting the amplitude and phase of the incident signal [17, 18, 19]. Thanks to this ability, the signal strength at the receiver can be significantly enhanced by properly tuning the reflected signal. Recently, various wireless communication scenarios assisted by IRS have been extensively investigated, including the multicell communications [16], unmanned aerial vehicles communications [20], simultaneous wireless information and power transfer (SWIPT) network [21], non-orthogonal multiple access network [22], and wireless-powered communication network [23].\nGiven the advantages of IRS in wireless communication, in recent years, the IRS-assisted DM network has also been investigated. With the help of IRS, the DM can overcome the limitation of being able to transmit only one confidential bit stream and significantly enhance the SR performance. In [24], an IRS-aided DM system was considered, and two confidential bit streams were transmitted from Alice to Bob at the same time. Based on the system model of [24], in [25], to enhance the SR performance, two low-complexity algorithms were proposed to jointly design the transmit and reflect beamforming vectors of the IRS-assisted DM network.\nAn IRS-aided DM network equipped with single antenna for both legitimate user and eavesdropper was investigated in [26], and the SR closed-form expression was derived. Moreover, the authors in [27] proposed two beamforming algorithms to enhance the SR in the DM network aided by IRS, and they achieved about 30 percent SR gains over no IRS and random phase shift IRS schemes.\nThe above works showed that the passive IRS can boost the SR performance of the conventional DM network.\nHowever, the \u201cdouble fading\u201d effect that accompanies passive IRS is inevitable, which is caused by the fact that the signal reflected through the IRS needs to pass through the transmitter-to-IRS and IRS-to-receiver cascade links [28, 29, 30]. To overcome this physical limitation, an emerging IRS structure, named active IRS, has been proposed. Unlike the passive IRS, which can only adjust the phase of the incident signal, active IRS integrates active reflection-type amplifiers that can simultaneously tune the amplitude and phase of incident signals. Hence the \u201cdouble fading\u201d effect of the cascaded link can be effectively attenuated, enabling better performance than passive IRS [28]. Notice that although the active IRS can both amplify and reflect incident signals, it is fundamentally different from a full-duplex amplify-and-forward relay. Active IRS does not require radio frequency (RF) chains, has no signal processing capability, and has lower hardware cost [31]. Moreover, the relay takes two time slots to accomplish the transmission of one signal, whereas active IRS only requires one time slot.\nSimilar to passive IRS, in recent years, researchers have investigated various wireless communication scenarios with the help of active IRS [32]. For example, to maximize the rate of IRS-aided downlink/uplink communication system, the placement of the active IRS was investigated in [33], which revealed that the system rate was optimal when the active IRS was placed close to the receiver. An active IRS-assisted single input multiple output network was considered in [34], and an alternating optimization approach was proposed to obtain the IRS reflecting coefficient matrix and received beamforming, which achieved better performance compared to the passive IRS-assisted network with the same power budget. An active IRS-aided SWIPT network was proposed in [35], an alternating iteration method was employed to maximize the weighted sum rate, and the high-performance gain was achieved. The above works presented the benefits of the active IRS for wireless network performance gains.\nMotivated by the discussions above, to further enhance the SR performance of the passive IRS-assisted DM system, an active IRS-assisted DM network with an eavesdropper is considered in this paper. Given that the beamforming and AN powers of the base station (BS) and IRS power are subject to the system\u2019s total power constraint, to investigate the impact of the power allocation (PA) among them and beamforming optimization on the system performance, we focus on maximizing the SR by jointly deriving the PA factors, transmit beamforming, receive beamforming, and reflect beamforming at the active IRS.\nTo the best of the authors\u2019 knowledge, this is the first work to investigate PA between BS and IRS in the active IRS-assisted secure wireless network. The main contributions of this paper are summarized as follows.\nTo enhance the SR performance of the conventional DM system, a novel DM network with the introduction of active IRS is proposed in this paper. Particularly, a PA strategy is proposed to adjust the power fraction between BS and active IRS to further harvest the rate performance gain achieved by active IRS, which does not exist in a passive IRS-aided network. Then, an active IRS-aided DM system with PA is presented. Finally, we formulate an SR maximization problem by jointly optimizing the PA factors, transmit beamforming, receive beamforming, and the IRS phase shift matrix for the active IRS-aided secure DM system in the presence of an eavesdropper, subject to the power constraint at IRS. By optimizing the PA between BS and IRS as well as beamforming, the SR of the system is significantly boosted.\nTo tackle the formulated non-convex maximum SR optimization problem in which the five variables are coupled with each other, a high-performance alternating optimization scheme, called maximizing SR based on successive convex approximation (SCA) and Schur complement (Max-SR-SS), is proposed. In this scheme, the derivative operation is employed to calculate the optimal PA factor of the confidential message and the PA factor of power allocated to the BS, the transmit and receive beamforming are derived by the SCA method and the generalized Rayleigh-Ritz theorem, respectively, and the phase shift matrix of IRS is calculated by the SCA and Schur complement methods. Moreover, a low-complexity scheme, named maximizing SR based on equal amplitude reflecting (EAR) and majorization-minimization (MM) (Max-SR-EM), is proposed to address the formulated problem, where the EAR and MM strategies are adopted to obtain the amplitude and phase of the IRS phase shift matrix, respectively.\nIn particular, when the receivers are equipped with single antenna, the optimization problem can be simplified and there is no receive beamforming. To tackle the problem, a scheme of maximizing SR based on alternating optimization (Max-SR-AO) is proposed, where the PA factors, transmit beamforming, and phase shift matrix of IRS are designed by the fractional programming (FP) and SCA algorithms. From the simulation results, it is clear that with the same power, the SRs harvested by the proposed three schemes are higher than those of the benchmark schemes. In addition, when the number of phase shift elements tends to large-scale, the gap in terms of SR between the Max-SR-SS and Max-SR-EM schemes is trivial.\nThe remainder of this paper is organized as follows. We describe the system model of active IRS-assisted DM network and formulate the maximum SR problem in Section II.\nSection III introduces the proposed Max-SR-SS and Max-SR-EM schemes.\nThe proposed Max-SR-AO scheme is described in Section IV. The numerical simulation results and conclusions are provided in Section V and Section VI, respectively.\nNotations: In this work, the scalars, vectors and matrices are marked in lowercase, boldface lowercase, and uppercase letters, respectively. Symbols , , , , Tr, , , , , and refer to the transpose, conjugate, conjugate transpose, partial derivative, trace, pseudo-inverse, maximum eigenvalue, real part, diagonal, and block diagonal matrix operations, respectively. The sign stands for the scalar\u2019s absolute value or the matrix\u2019s determinant. The notations and mean the identity matrix of and complex-valued matrix space of , respectively."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II system model",
+ "text": "As illustrated in Fig. 1, we investigate an active IRS-assisted secure DM network, where the BS (Alice) sends confidential message to the legitimate user (Bob) with the assistance of active IRS, while sending AN to the eavesdropper (Eve) to reduce the risk of confidential information being intercepted by Eve. There are , , and antennas at Alice, Bob, and Eve, respectively. There are reflection elements on the active IRS with tunable amplitude and phase. In this paper, it is assumed that the active IRS reflects signal only once and there exist line-of-sight channels. Moreover, all channel state information is assumed to be available owing to the channel estimation.\nThe transmit signal at Alice is expressed as\nwhere stands for the total power, and refer to the PA parameters of the confidential message and AN, means the PA factor of the total power allocated to the BS, and refer to the beamforming vector and confidential message intended for Bob, they satisfy and , respectively, and represent the projection matrix and vector of AN, they meet and , respectively.\nGiven the existence of path loss, the received signal at Bob is formulated as\nwhere refers to the receive beamforming, and stand for the path loss parameters of Alice-to-Bob and IRS-to-Bob channels, respectively, means the equivalent path loss parameter of Alice-to-IRS and IRS-to-Bob channels, and refer to the reflection coefficient matrix and vector of the active IRS, , and are the amplitude and phase of -th reflecting element, respectively. and mean the complex additive white Gaussian noise (AWGN) at IRS and at Bob, respectively, , , and denote the Alice-to-Bob, IRS-to-Bob, and Alice-to-IRS channels, respectively. It is assumed that for simplicity, and the normalized steering vector is\nwhere\nrepresents the direction angle of the signal departure or arrival, stands for the antenna index, indicates the distance between adjacent transmitting antennas, and refers to the wavelength.\nSimilarly, the received signal at Eve is cast as\nwhere denotes the receive beamforming, and stand for the path loss parameters of Alice-to-Eve and IRS-to-Eve channels, respectively, means the equivalent path loss parameter of Alice-to-IRS and IRS-to-Eve channels, represents the AWGN at Eve that satisfies the distribution , and refer to the Alice-to-Eve and IRS-to-Eve channels, respectively.\nIt is assumed that AN is transmitted to Eve for jamming eavesdropping only and does not impact Bob; based on the criterion of null-space projection, should meet\nLet us define an equivalent virtual channel matrix of confidential message as follows\nThen, can be designed as\nAt this point, (II) and (II) can be rewritten as\nand\nrespectively.\nBased on (II) and (II), the achievable rates at Bob and Eve are given by\nand\nrespectively, where\nDue to the fact that Alice and Bob cannot capture Eve\u2019s received beamforming in general, an upper bound of (14) can be obtained by\nThe detailed derivation is available in the Appendix.\nAt this point, the lower bound of SR for the system is expressed as\nMoreover, the transmitted power at active IRS can be formulated as follows\nIn this paper, we maximize the SR by jointly deriving the PA factors and , transmit beamforming v, receive beamforming , and active IRS phase shift matrix . The overall optimization problem is formulated as follows\nwhere means the amplification gain threshold of the active IRS elements, and refers to the maximum transmit power of the active IRS. It is obvious that this optimization problem has a non-convex objective function and constraints, and the optimization variables are highly coupled with each other, which makes it a challenge to address directly in general. Hence, the alternating iteration strategy is taken into account for solving this optimization problem in what follows."
+ },
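As a concrete illustration of the normalized steering vector used in the system model (the exact phase term was lost in extraction, so the Psi expression below is an assumed, commonly used convention for a uniform linear array with element spacing d and wavelength lam):

import numpy as np

def steering_vector(theta, N, d=0.5, lam=1.0):
    # assumed Psi_theta(n) for an N-element uniform linear array
    n = np.arange(1, N + 1)
    psi = (n - (N + 1) / 2) * d / lam * np.cos(theta)
    return np.exp(1j * 2 * np.pi * psi) / np.sqrt(N)

h = steering_vector(np.pi / 4, N=8)
print(np.linalg.norm(h))   # normalized: 1.0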
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Proposed Max-SR-SS and Max-SR-EM schemes",
+ "text": "In this section, to streamline the solution of the problem, we aim at maximizing SR and decompose the problem (II) into five subproblems. In what follows, the parameters , , v, , and are sequentially optimized by fixing the other variables."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Optimization of the PA factor",
+ "text": "In this subsection, the transmit beamforming v, receive beamforming , and IRS phase shift matrix are given. We re-arrange the IRS power constraint (19e) as\nFor the sake of simplicity, let us define\nThen, (13) can be degenerated to\nLet us define\nand based on\n(II) can be reformulated as\nDue to the presence of the inverse operation, the Sherman-Morrison theorem is taken into account for the simplification, i.e.,\nthen, we have\nand (26) becomes\nLet us define\n.\nThen, (II) can be recast as\nrespectively.\nIn what follows, we handle the optimization of the PA parameters and successively.\nDefining\nGiven , in accordance with (II), (22), and (III-A), the optimization problem with respect to can be simplified as follows\nwhere\n.\nThen, (III-A) can be reformulated as\nwhere . Given that the denominator , we can obtain that the objective function of problem (III-A) is continuous and differentiable in the interval . Then, taking its partial derivative and setting it equal to 0 yields\nwhich can be simplified as"
+ },
+ {
+ "section_id": "3.1.1",
+ "parent_section_id": "3.1",
+ "section_name": "III-A1 When",
+ "text": "the equation (III-A) is a quadratic. Let us define\nif , based on the formula for the roots of a quadratic function, we can get its roots as"
+ },
+ {
+ "section_id": "3.1.2",
+ "parent_section_id": "3.1",
+ "section_name": "III-A2 When",
+ "text": "(III-A) can be degraded to\nwhich yields\nNext, we judge whether these candidate solutions of are in the interval . Finally, the optimal value of can be obtained by comparing the values of at endpoints and candidate solutions.\nThe detailed procedures for deriving the PA factor are shown in Algorithm 1.\nIf , then compare the values of , , , and .\nIf and , then compare the values of , , and .\nIf and , then compare the values of , , and .\nIf , then compare the values of and .\nIf , then compare the values of , , and .\nIf , then compare the values of and ."
+ },
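The case analysis of Algorithm 1 amounts to comparing the objective at the interval endpoints and at any stationary points falling inside the interval. A generic sketch of that logic (f and the coefficients a2, a1, a0 are placeholders standing in for the paper's expressions):

import numpy as np

def best_pa_factor(f, a2, a1, a0, lo, hi):
    # stationary points: roots of a2*x^2 + a1*x + a0 = 0 (quadratic case)
    # or of a1*x + a0 = 0 when the leading coefficient vanishes
    if abs(a2) > 1e-12:
        disc = a1 * a1 - 4 * a2 * a0
        roots = [(-a1 + s * np.sqrt(disc)) / (2 * a2) for s in (1, -1)] if disc >= 0 else []
    else:
        roots = [-a0 / a1] if abs(a1) > 1e-12 else []
    # keep candidates inside the feasible interval, add the endpoints,
    # and pick the maximizer of the objective f
    candidates = [lo, hi] + [r for r in roots if lo < r < hi]
    return max(candidates, key=f)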
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Optimization of the PA factor",
+ "text": "With v, , and fixed, given that the optimal has been found in the previous subsection, we transfer the focus to solving for . Let us define\nIn accordance with (22) and (III-A), by neglecting the constant terms, the optimization problem with respect to can be simplified as follows\nwhere\n.\nFurther simplification yields\nwhere . Due to the fact that the denominator , we can obtain that the objective function of problem (III-B) is continuous and differentiable in the interval . Then, taking its partial derivative and setting it equal to 0 yields\nwhich yields"
+ },
+ {
+ "section_id": "3.2.1",
+ "parent_section_id": "3.2",
+ "section_name": "III-B1 When",
+ "text": "the equation (III-B) is a quadratic. Let us define\nif , based on the formula for the roots of a quadratic function, we can get its roots as"
+ },
+ {
+ "section_id": "3.2.2",
+ "parent_section_id": "3.2",
+ "section_name": "III-B2 When",
+ "text": "(III-B) can be recast as\nwe have\nNext, an analysis similar to solving for needs to be performed, and we omit the procedure for the sake of avoiding repetition."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Optimization of the transmit beamforming vector v",
+ "text": "Given , , , and , we reformulate the IRS power constraint (19e) as follows\nIgnoring the constant term, (II) can be re-arranged as the optimization problem with respect to v as follows\nwhere\nGiven that the objective function value in (III-C) is insensitive to the scaling of v, we relax the equality constraint to [24]. Then, in accordance with the first-order Taylor approximation, we have\nThen, the problem (III-C) can be recast as\nwhere stands for the given vector.\nThis is a convex optimization problem that can be tackled directly with a convex optimization toolbox (e.g., CVX [36])."
+ },
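A minimal CVXPY sketch of one such SCA iteration, with randomly generated placeholder matrices in place of the paper's channel-dependent quantities (A plays the role of the Hermitian objective matrix, C a factor of the IRS power constraint, and v_k the previous iterate):

import cvxpy as cp
import numpy as np

N = 8
rng = np.random.default_rng(0)
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = M @ M.conj().T                          # assumed Hermitian PSD matrix
C = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
v_k = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v_k /= np.linalg.norm(v_k)                  # previous SCA iterate
P_irs = 10.0

v = cp.Variable(N, complex=True)
# linearized (first-order Taylor) surrogate of the quadratic objective
objective = cp.Maximize(cp.real(v_k.conj() @ A @ v))
constraints = [cp.norm(v, 2) <= 1,          # relaxed norm constraint
               cp.sum_squares(C @ v) <= P_irs]
cp.Problem(objective, constraints).solve()
print(np.linalg.norm(v.value))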
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "III-D Optimization of the receive beamforming vector",
+ "text": "With , , v, and fixed, the optimization problem with respect to can be re-arranged as\nwhere\nIn accordance with the generalized Rayleigh-Ritz theorem, the optimal is given by the eigenvector corresponding to the largest eigenvalue of ."
+ },
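In code, this step is a generalized Hermitian eigenproblem; a sketch using SciPy, where A and B stand in for the numerator and denominator matrices of the Rayleigh quotient (B is assumed Hermitian positive definite):

import numpy as np
from scipy.linalg import eigh

def max_rayleigh_ritz(A, B):
    # solves max_u (u^H A u) / (u^H B u) via the generalized
    # eigenproblem A u = lambda B u; eigh returns eigenvalues ascending
    eigvals, eigvecs = eigh(A, B)
    u = eigvecs[:, -1]          # eigenvector of the largest eigenvalue
    return u / np.linalg.norm(u)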
+ {
+ "section_id": "3.5",
+ "parent_section_id": "3",
+ "section_name": "III-E Optimization of the IRS phase shift matrix",
+ "text": "In the previous sections, the PA factors and , transmit beamforming v, and receive beamforming have been optimized. In this section, we turn our focus to the optimization of the IRS phase shift matrix . In what follows, two strategies for optimizing by fixing the variables , , v, and will be proposed."
+ },
+ {
+ "section_id": "3.5.1",
+ "parent_section_id": "3.5",
+ "section_name": "III-E1 Max-SR-SS algorithm",
+ "text": "First, we transform the power constraint (19e) into a constraint on . Based on the fact that for , (19e) can be re-arranged as follows\nGiven the inverse operation in (II), it is difficult to tackle the optimization problem (II) directly. Hence, to transform in (II) into a tractable form, let us define\nThen, we introduce a slack variable , which meets\nIn accordance with the nature of the Schur complement, we can obtain\nAccording to the first-order Taylor approximation of at feasible point , we have\nThen, (63) can be rewritten as\nAt this point, the optimization problem with respect to can be recast as\nThe objective function of the problem (III-E1) is the difference of two logarithmic functions and is non-convex. To address this problem, let us define\nThen, we have\nBased on the first-order Taylor approximation of , i.e., and the result in [37], for fixed points , after neglecting the constant entries, (III-E1) can be recast as\nwhere stands for the value obtained at the previous iteration of . It is noted that the problem (III-E1) is convex, which can be solved directly with a convex optimization toolbox."
+ },
+ {
+ "section_id": "3.5.2",
+ "parent_section_id": "3.5",
+ "section_name": "III-E2 Max-SR-EM algorithm",
+ "text": "In the previous subsection, a Max-SR-SS scheme has been proposed to optimize the IRS phase shift matrix , which has a high computational complexity. To reduce the complexity, a Max-SR-EM scheme with lower complexity is proposed in this section. Given that consists of amplitude and phase, we will derive by solving for them separately in the following.\nFirstly, the derivation of the magnitude is taken into account. For the sake of derivation, we assume that in (II) always holds and the amplitude of each IRS phase shift element is the same, noted as , and . Then, we have . Based on the IRS power constraint (19e)\nand the fact that it is optimal when the equality holds, i.e.,\nwhich yields the amplitude\nIn the following, we focus on finding the phase matrix . Let us define\nThen, (13) and (II) can be rewritten as\nand\nrespectively.\nNext, we perform a transformation of . By (75) and the fact that for fixed points and , one obtains\nwhere and\nWith the majorization-minimization (MM) algorithm in [38], i.e.,\nwhere , (79) can be recast as\nNext, we transform in (II) into a form that is tractable to solve. Based on the fact that for and , one has\nThen, we have\nTo simplify the first term of (III-E2), based on\nwe have\nwhere means the solution obtained at the previous iteration of J. By utilizing (80), one has\nTo make the second term of (III-E2) tractable, according to (70), we can obtain\nwhere is the solution obtained at the previous iteration. Based on the first-order Taylor series expansion, we have\nAt this point, combined with (III-E2), after neglecting the constant term, the optimization problem with respect to can be recast as\nwhere\nThen, the optimal solution of can be obtained directly by"
+ },
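The closed-form update at the end of this subsection is of the standard unit-modulus type; a sketch (q is a placeholder for the fixed vector collecting the terms of the final surrogate problem):

import numpy as np

def mm_phase_update(q):
    # maximizer of Re(u^H q) over unit-modulus u: u_i = exp(j * arg(q_i))
    return np.exp(1j * np.angle(q))

q = np.array([1 + 1j, -2j, 3.0])
u = mm_phase_update(q)
print(np.real(u.conj() @ q))   # equals sum(|q_i|), the maximum value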
+ {
+ "section_id": "3.6",
+ "parent_section_id": "3",
+ "section_name": "III-F Overall scheme and complexity analysis",
+ "text": "Up to now, we have completed the derivation of the PA factors and , transmit beamforming v, receive beamforming , and IRS phase shift matrix . To make the process of this scheme clearer, we summarize the entire proposed schemes below.\nThe iterative idea of the proposed Max-SR-SS scheme is as follows: (1) the PA factors and , transmit beamforming v, receive beamforming , and IRS phase shift matrix are initialized to feasible solutions; (2) given , v, , and , use Algorithm 1 to update ; (3) with , v, , and fixed, solve (III-B) to update ; (4) given , , , and , solve (III-C) to obtain v; (5) with , , v, and fixed, solve (III-D) to yield ; (6) given , , v, and , solve (III-E1) to yield , and . The five variables are updated alternately until the termination condition is realized, i.e., , where and refer to the iteration number and convergence accuracy, respectively.\nThe overall procedure of the proposed Max-SR-EM scheme is listed below: (1) the PA factors and , transmit beamforming v, receive beamforming , and IRS phase shift matrix are initialized to feasible solutions; (2) given , v, , and , is computed by Algorithm 1; (3) with , v, , and fixed, is updated by (III-B); (4) given , , , and , v is updated by (III-C); (5) fixing , , v, and , is derived via the generalized Rayleigh-Ritz theorem; (6) given , , v, and , solve (III-E2) to obtain , solve (91) to find , and . The alternating iteration is repeated until the termination condition is met.\nThe obtained solutions in the Max-SR-SS and Max-SR-EM schemes are sub-optimal, and the objective value sequence obtained in each iteration of the alternating optimization method is non-decreasing. Specifically, it follows\nwhere , , , and are due to the update of , , v, , and , respectively. Moreover, has a finite upper bound owing to the limited power constraint. Therefore, the convergence of the proposed schemes can be guaranteed.\nNext, we calculate the computational complexity of the two proposed schemes.\n1) For the Max-SR-SS scheme, the overall computational complexity is floating-point operations (FLOPs), where refers to the maximum number of alternating iterations, stands for the given accuracy tolerance of CVX.\n2) For the Max-SR-EM scheme, the whole computational complexity is FLOPs, where represents the maximum number of alternating iterations.\nIt is not difficult to find that the computational complexity of the two proposed schemes can be listed in decreasing order as ."
+ },
95
+ {
96
+ "section_id": "4",
97
+ "parent_section_id": null,
98
+ "section_name": "IV Proposed Max-SR-AO scheme",
99
+ "text": "In this section, we consider a special situation of problem (II ###reference_###), i.e., both of Bob and Eve are equipped with single antenna. At this point, the channels , , , are degenerated to , , , , respectively, and the receive beamforming is not done. Then, the receive signal (II ###reference_###) and (II ###reference_###) can be degenerated to\nand\nrespectively. Correspondingly, the achievable rates at Bob and Eve are respectively given by\nand\nIn the absence of receive beamforming, the optimization problem (II ###reference_###) can be recast as\nIn what follows, the alternating iteration strategy is taken into account for solving the variables , , v, and ."
100
+ },
101
+ {
102
+ "section_id": "4.1",
103
+ "parent_section_id": "4",
104
+ "section_name": "IV-A Optimization of the PA factor",
105
+ "text": "In this subsection, the beamforming vector v and IRS phase shift matrix are given for the sake of simplicity.\nLet us define\n\n\n\n\n\nThen, (95 ###reference_###) and (IV ###reference_###) can be transformed into\nand\nrespectively. The objective function of the optimization problem (IV ###reference_###) can be degenerated as\nIn what follows, we handle the optimization of the PA parameters and successively.\nGiven , in accordance with (IV ###reference_###) and (IV-A ###reference_###), the optimization problem with respect to can be simplified as follows\nwhich can be re-arrange as\nwhere\n\n\n\n\n\n\n\nIt can be found that this problem is non-convex. Notice that this is a FP problem, and the denominator of (102a ###reference_2.1###) is .\nTo transform (IV-A ###reference_2###) into a convex optimization problem, based on the Dinkelbach\u2019s transform in [39 ###reference_b39###], we introduce a auxiliary parameter and recast the problem (IV-A ###reference_2###) as follows\nThe optimal solution can be obtained by taking the root of . At this point, the optimization problem (IV-A ###reference_3###) is convex, and we can address it by CVX directly."
106
+ },
107
+ {
108
+ "section_id": "4.2",
109
+ "parent_section_id": "4",
110
+ "section_name": "IV-B Optimization of the PA factor",
111
+ "text": "Fixed, v and , we transfer the focus to solving for .\nIn accordance with (IV ###reference_###) and (IV-A ###reference_###), by neglecting the constant terms, the optimization problem with respect to can be simplified as follows\nwhich yields\nwhere\n\n\n\n\n\n\n\nIt is noticed that , and this is a non-convex fractional optimization problem, in accordance with the FP method, we introduce a auxiliary parameter and recast the problem (IV-B ###reference_5###) as\nThe optimal solution to this problem is the root of . However, the problem (IV-B ###reference_6###) is still non-convex and requires further transformation.\nWith the first-order Taylor approximation of at feasible point , i.e.,\n, (IV-B ###reference_6###) can be converted to\nwhich is a convex optimization problem and can be addressed directly by the convex optimizing toolbox."
112
+ },
113
+ {
114
+ "section_id": "4.3",
115
+ "parent_section_id": "4",
116
+ "section_name": "IV-C Optimization of the beamforming vector v",
117
+ "text": "Given , , and with ignoring the constant term, (IV ###reference_###) can be reformulated as the optimization problem with respect to v as follows\nwhere\nBased on (53 ###reference_###) and relaxed the constraint to , the problem (IV-C ###reference_8###) can be recast as\nIt can be found that this is a convex optimization problem that can be tackled directly with convex optimizing toolbox."
118
+ },
119
+ {
120
+ "section_id": "4.4",
121
+ "parent_section_id": "4",
122
+ "section_name": "IV-D Optimization of the IRS phase shift matrix",
123
+ "text": "In this subsection, we turn our target to optimize with given , , and v. For the sake of derivation, let us define\nThen, the achievable rates (95 ###reference_###) and (IV ###reference_###) can be rewritten as\nand\nrespectively.\nIn addition, the power constraint (19e ###reference_.5###) can be re-arranged as follows\nAt this point, the optimization problem with respect to is given by\nThis problem is non-convex and further transformation is required. According to (III-E2 ###reference_g###) and (70 ###reference_###), by omitting the constant term, the optimization problem (IV-D ###reference_4###) can be degenerated to\nwhere\n, \n\n\n\n, , , , and mean the solutions obtained at the previous iteration.\nThen, the optimization problem (IV-D ###reference_5###) degenerate towards the following problem\nwhere\nand stands for the solution obtained at the previous iteration.\nIt is noted that the problem (IV-D ###reference_6###) is convex, which can be derived directly with CVX.\n###figure_2### ###figure_3### ###figure_4###"
124
+ },
125
+ {
126
+ "section_id": "4.5",
127
+ "parent_section_id": "4",
128
+ "section_name": "IV-E Overall scheme and complexity analysis",
129
+ "text": "So far, we have completed the derivation of the PA factors and , beamforming vector v, and IRS phase shift matrix . To make the procedure of this scheme clearer, we summarize the whole proposed Max-SR-AO algorithm below. (1) Initialize , , v, and to feasible solutions; (2) fixing , v, and , solve (IV-A ###reference_3###) to update ; (3) given , v, and , solve (IV-B ###reference_7###) to update ; (4) fix , , and , optimize (IV-C ###reference_1###) to update v; (5) given , , and v, solve (IV-D ###reference_6###) to update , and . Optimize the four variables alternately until the termination condition is satisfied.\nIn this scheme, the objective value sequence obtained in each iteration of the alternate optimization strategy is non-decreasing, and has a finite upper bound since the limited power constraint. Therefore, the convergence of the proposed Max-SR-AO scheme can be guaranteed.\nThe computational complexity of the overall Max-SR-AO algorithm is \nFLOPs, where means the maximum number of alternating iterations, and mean the iterative numbers of the subproblems (IV-C ###reference_1###) and (IV-D ###reference_6###), respectively."
130
+ },
131
+ {
132
+ "section_id": "5",
133
+ "parent_section_id": null,
134
+ "section_name": "Simulation Results",
135
+ "text": "To verify the performance of the proposed three maximum SR schemes, we perform the simulation comparison in this section. Unless otherwise noted, the parameters of the simulation are listed as follows: dBm, , , , m, m, , , , dBm. The path loss model is modeled as [40 ###reference_b40###], where and stand for the wavelength and reference distance, respectively. For the sake of convenience, we set . The convergence accuracy of the iterative scheme is set to be .\nTo evaluate the performance of the proposed schemes, the passive IRS scheme (i.e., GAI algorithm) in [24 ###reference_b24###], passive IRS scheme in [26 ###reference_b26###], passive IRS scheme (i.e., Algorithm 1) in [27 ###reference_b27###], and several benchmark schemes are applied for comparison at the same power, and these benchmark schemes are listed as follows.\n1) Benchmark scheme I: Set the PA factor , we only optimize the remaining variables alternatively.\n2) Benchmark scheme II: Fixing the PA factor , we only have to alternately optimize the rest variables.\n3) Benchmark scheme III: Both the PA factors and are fixed at 0.5, i.e., , and only the residual variables need to be optimized alternately.\n4) No-IRS: Set all the active IRS related channel vectors and matrix to zero vectors and zero matrix, then, we only have to optimize the remaining variables alternatively."
136
+ },
137
+ {
138
+ "section_id": "5.1",
139
+ "parent_section_id": "5",
140
+ "section_name": "Bob and Eve are equipped with multiple antennas",
141
+ "text": "###figure_5### ###figure_6### ###figure_7### Firstly, we show the convergence of both the proposed alternating optimization schemes in Fig. 4 ###reference_###, where the number of phase shift elements of IRS . It can be seen from the figure that the SRs of both proposed schemes increase rapidly with the number of iterations and finally converge to a value after a finite number of iterations. And the convergence speed of the proposed Max-SR-SS scheme is slightly faster than that of the proposed Max-SR-EM scheme. In addition, the SRs of both proposed schemes increase with the increases of , and the SR of the proposed Max-SR-SS scheme is slightly better than that of the proposed Max-SR-EM scheme, regardless of the values of . Combined with the previous analysis of the computational complexity of both, it can be found that the low-complexity of the latter is achieved at the price of some performance loss. As a result, the proposed Max-SR-EM scheme strikes a good balance between computational complexity and SR performance.\nFig. 4 ###reference_### plots the curves of the SR versus the number of active IRS phase shift elements of the proposed two schemes and benchmark schemes. Observing this figure, it can be found that the SRs of both the proposed schemes and benchmark schemes gradually increase with the increases of , they have a decreasing order in terms of SR performance: proposed Max-SR-SS, proposed Max-SR-EM, benchmark scheme I, benchmark scheme II, benchmark scheme III, passive IRS [24 ###reference_b24###], and no IRS. The SR difference between the two proposed schemes is trivial with the increases of , and they make significant SR performance enhancements over the five benchmark schemes at the same total power budget. For example, when , the SR performance enhancements achieved by both the proposed schemes over the benchmark scheme I, benchmark scheme II, benchmark scheme III, passive IRS [24 ###reference_b24###], and no IRS are above , , , , and , respectively. These further explain the motivation for investigating the active IRS, PA, and beamforming algorithms.\nFig. 4 ###reference_### depicts the curves of the SR versus the total power ranging from 10dBm to 35dBm. From this figure, we can learn that the SRs of two proposed schemes and five benchmark schemes increase with the increases of , and the ordering of their achieved SRs is similar to that of Fig. 4 ###reference_###. The difference in SR performance between proposed Max-SR-SS scheme and benchmark scheme I is slightly less than that between it and benchmark scheme II, which means that optimizing the confidential message PA factor has a more significant performance enhancement for the system compared to optimizing the base station PA factor in this paper. Compared to the benchmark schemes of no IRS and passive IRS [24 ###reference_b24###], the SRs achieved by the both proposed schemes and the remaining benchmark schemes are remarkable, with the latter being more than one times higher than the former. This is because active IRS elements equipped with power amplifiers enable more SR performance gain. Moreover, the gap between the SRs of the two proposed schemes is trivial when dBm.\nFig. 7 ###reference_### demonstrates the curves of the SR versus the noise ratio ranging from 1 to 3.5, where and remains constant, i.e., the increase of is equivalent to that of the noise power at the active IRS. 
This figure shows that apart from the scheme of no IRS, the SRs of two proposed schemes and the benchmark schemes I III decrease gradually with the increases of . This is due to the fact that the active IRS helps to transmit the confidential information to Bob and also reflects the noise generated at the IRS to him. When increases, the noise received by Bob also increases, which leads to a decrease in the SR performance for all schemes apart from the no IRS scheme. Taking Max-SR-SS scheme as an example, the SR at and are above 8% and 13% lower than those at , respectively."
142
+ },
143
+ {
144
+ "section_id": "5.2",
145
+ "parent_section_id": "5",
146
+ "section_name": "Bob and Eve are equipped with single antenna",
147
+ "text": "Fig. 7 ###reference_### shows the SR versus the number of iterations of the proposed Max-SR-AO scheme. It can be seen from this figure that regardless of the value of , the proposed Max-SR-AO scheme takes about four iterations to converge the SR ceiling. Fig. 7 ###reference_### plots the SR versus the number of the IRS phase shift elements. It can be found that similar to the scenario where both Bob and Eve are equipped with multiple antennas, the SR performance of the proposed Max-SR-AO scheme is slightly better than that of the fixed PA schemes and significantly better than that of the passive IRS [27 ###reference_b27###], passive IRS [26 ###reference_b26###], and no IRS schemes.\nTo investigate the impact of the Bob\u2019s location on SR performance, with fixed positions of Alice, IRS, and Eve, we assume that Bob moves only along the straight line (i.e., the line connecting Alice and Bob) for simplicity of analysis. At this point, the Bob\u2019s location only depends on the distance of Alice-to-Bob link. As gradually increases, Bob first moves closer to the IRS, reaches a peak and then moves away from it. The diagram of Bob\u2019s detailed movement as shown in Fig. 8 ###reference_###.\n###figure_8### Based on the model of Bob\u2019s position movement in Fig. 8 ###reference_###, Fig. 9 ###reference_### presents the curves of the SR versus the distance ranging from 80m to 130m, where . It reveals that as Bob\u2019s position moves away from Alice along and closer to the IRS, the SR of the no-IRS scheme gradually decreases with the increase of . For the proposed Max-SR-AO scheme, first, when Bob is positioned between Alice and IRS and away from them, its energy received from Alice gradually decreases and its SRs gradually decreases with increasing . Then, as Bob moves away from Alice and closer to the IRS, their energy received from the IRS gradually increases and their SRs gradually increase and reach a peak when Bob is at the bottom of the IRS. Finally, with Bob moving away from Alice and IRS, their energy from Alice and IRS gradually decreases and the SRs gradually decrease. Moreover, there are similar SR performance tendencies for passive IRS [26 ###reference_b26###], and passive IRS [27 ###reference_b27###]. After the peak, the gap of SRs gained by the proposed schemes and passive IRS schemes increases gradually with . Furthermore, the proposed scheme has better SRs performance than the benchmark schemes I III regardless of the value of , which highlights the significance of optimizing the PA factors.\n###figure_9###"
148
+ },
149
+ {
150
+ "section_id": "6",
151
+ "parent_section_id": null,
152
+ "section_name": "VI Conclusion",
153
+ "text": "In this paper, we made an investigation of active IRS-aided DM network and focused on adjusting the PA between IRS and Alice to improve the SR performance. To the best of our knowledge, such a PA has not been investigated the optimization of the PA factors, transmit and receive beamforming, and phase shift matrix of IRS in the active IRS-assisted DM network. Firstly, to maximize SR with AN only interfering with Eve, the projection matrix of AN was designed based on the criterion of null-space projection. Then, to address the formulated maximum SR optimization problem, two alternating iteration schemes, namely Max-SR-SS and Max-SR-EM, were proposed. The former with a high-performance employed the derivative operation, SCA, and generalized Rayleigh-Rize methods to find the optimal PA factors, transmit and receive beamforming, and IRS phase shift matrix. While the latter with a low-complexity got the closed-form expression of the IRS phase shift matrix by the criteria of EAR and MM. Moreover, a special case of receivers equipped with single antenna was considered, and a Max-SR-AO scheme was proposed to address the problem. Simulation results showed that the SR of the DM network was dramatically enhanced with the help of active IRS compared to the passive IRS scheme, and the proposed joint PA and beamforming schemes have made an obvious SR enhancement over the schemes with fixed PA."
154
+ }
155
+ ],
156
+ "appendix": [],
157
+ "tables": {},
158
+ "image_paths": {
159
+ "1": {
160
+ "figure_path": "2307.00743v4_figure_1.png",
161
+ "caption": "Figure 1: System diagram of active IRS-assisted DM network.",
162
+ "url": "http://arxiv.org/html/2307.00743v4/x1.png"
163
+ },
164
+ "2(a)": {
165
+ "figure_path": "2307.00743v4_figure_2(a).png",
166
+ "caption": "Figure 2: Convergence of proposed schemes.",
167
+ "url": "http://arxiv.org/html/2307.00743v4/x2.png"
168
+ },
169
+ "2(b)": {
170
+ "figure_path": "2307.00743v4_figure_2(b).png",
171
+ "caption": "Figure 2: Convergence of proposed schemes.",
172
+ "url": "http://arxiv.org/html/2307.00743v4/x3.png"
173
+ },
174
+ "2(c)": {
175
+ "figure_path": "2307.00743v4_figure_2(c).png",
176
+ "caption": "Figure 2: Convergence of proposed schemes.",
177
+ "url": "http://arxiv.org/html/2307.00743v4/x4.png"
178
+ },
179
+ "3(a)": {
180
+ "figure_path": "2307.00743v4_figure_3(a).png",
181
+ "caption": "Figure 5: SR versus the noise ratio \u03b7\ud835\udf02\\etaitalic_\u03b7.",
182
+ "url": "http://arxiv.org/html/2307.00743v4/x5.png"
183
+ },
184
+ "3(b)": {
185
+ "figure_path": "2307.00743v4_figure_3(b).png",
186
+ "caption": "Figure 5: SR versus the noise ratio \u03b7\ud835\udf02\\etaitalic_\u03b7.",
187
+ "url": "http://arxiv.org/html/2307.00743v4/x6.png"
188
+ },
189
+ "3(c)": {
190
+ "figure_path": "2307.00743v4_figure_3(c).png",
191
+ "caption": "Figure 5: SR versus the noise ratio \u03b7\ud835\udf02\\etaitalic_\u03b7.",
192
+ "url": "http://arxiv.org/html/2307.00743v4/x7.png"
193
+ },
194
+ "4": {
195
+ "figure_path": "2307.00743v4_figure_4.png",
196
+ "caption": "Figure 8: Diagram of Bob\u2019s movement.",
197
+ "url": "http://arxiv.org/html/2307.00743v4/x8.png"
198
+ },
199
+ "5": {
200
+ "figure_path": "2307.00743v4_figure_5.png",
201
+ "caption": "Figure 9: SR versus the distance between Alice and Bob da\u2062bsubscript\ud835\udc51\ud835\udc4e\ud835\udc4fd_{ab}italic_d start_POSTSUBSCRIPT italic_a italic_b end_POSTSUBSCRIPT.",
202
+ "url": "http://arxiv.org/html/2307.00743v4/x9.png"
203
+ }
204
+ },
205
+ "validation": true,
206
+ "references": [],
207
+ "url": "http://arxiv.org/html/2307.00743v4"
208
+ }
20240225/2307.12856v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2308.06013v2.json ADDED
@@ -0,0 +1,211 @@
1
+ {
2
+ "title": "Large Language Models for Telecom: Forthcoming Impact on the Industry",
3
+ "abstract": "Large Language Models (LLMs), AI-driven models that can achieve general-purpose language understanding and generation, have emerged as a transformative force, revolutionizing fields well beyond Natural Language Processing (NLP) and garnering unprecedented attention. As LLM technology continues to progress, the telecom industry is facing the prospect of its impact on its landscape. To elucidate these\nimplications, we delve into the inner workings of LLMs, providing insights into their current capabilities and limitations. We also examine the use cases that can be readily implemented in the telecom industry, streamlining tasks, such as anomalies resolutions and technical specifications comprehension, which currently hinder operational efficiency and demand significant manpower and expertise. Furthermore, we uncover essential research directions that deal with the distinctive challenges of utilizing the LLMs within the telecom domain. Addressing them represents a significant stride towards fully harnessing the potential of LLMs and unlocking their capabilities to the fullest extent within the telecom domain.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large Language Models ###reference_id8### have revolutionized Natural Language Processing ###reference_.id12### (NLP ###reference_.id12###) and Artificial Intelligence ###reference_d2### (AI ###reference_d2###), propelling text generation, comprehension, and interaction to unprecedented levels of sophistication.\nThe history of LLMs ###reference_id8### can be traced back to the early developments in Machine Learning ###reference_.id10### (ML ###reference_.id10###) and NLP ###reference_.id12###, which encompassed the emergence of statistical language models and the advancements in neural networks. However, it was the rise of transformer architectures [1 ###reference_b1###], which paved the way for the development of language models capable of processing and generating vast amounts of text. Among the notable advancements in this domain, OpenAI\u2019s Generative Pre-trained Transformer ###reference_id6### (GPT ###reference_id6###) series and open-source LLMs like LLaMA and its successor LLaMA2 have garnered significant attention [2 ###reference_b2###].\nSpecifically, they have surpassed earlier models in terms of scale and capability, empowering human-like language understanding and generation.\nThanks to their language understanding capabilities, LLMs ###reference_id8### have the potential to revolutionize diverse domains [3 ###reference_b3###], surpassing traditional NLP ###reference_.id12### applications like machine translation and sentiment analysis. In fact, through domain-specific data, they can excel in tasks related to that particular domain. For instance, in medicine, LLMs ###reference_id8### may play a crucial role in encoding clinical knowledge and supporting medical decision-making processes. Similarly, researchers in finance have investigated how LLMs ###reference_id8### can provide insights into market trends and assist in risk analysis. Also, educational organizations have recently developed an LLM-based virtual tutor\nand classroom assistant.\nAlthough LLMs ###reference_id8### have already demonstrated their potential in various fields, their application in the telecom industry has been relatively scarce.\nHowever, this situation is changing as more researchers are beginning to explore the capabilities of LLMs ###reference_id8### in this domain. For instance, a Bidirectional Encoder Representations from Transformers ###reference_id3### (BERT ###reference_id3###)-like language model was adapted to the telecom domain [4 ###reference_b4###] to test its ability to answer a small, manually curated dataset of telecom questions. In another work, language models such as BERT ###reference_id3### and GPT ###reference_id6###-2 were leveraged to classify working groups within the Third Generation Partnership Project ###reference_d1### (3GPP ###reference_d1###) based on analysis of technical specifications [5 ###reference_b5###]. Moreover, the potential of LLMs ###reference_id8### in facilitating Field-Programmable Gate Array development within wireless systems was highlighted in [6 ###reference_b6###]. Additionally, the authors in [7 ###reference_b7###] provided a vision where LLMs ###reference_id8###, along with multi-modal data (e.g., images), can significantly contribute to the development of Radio Access Network ###reference_.id14### (RAN ###reference_.id14###) technologies such as beamforming and localization. 
In this future, by combining different data types like text and visuals, LLMs ###reference_id8### can assist in optimizing and improving RAN ###reference_.id14### functionalities.\nIn parallel to the work initiated by the research community, telecom ecosystem industries offer the first products based on LLM ###reference_id8### technologies. Huawei has released Pangu, an LLM ###reference_id8### that has been tested in mining, government, vehicles, weather, and RD applications. Qualcomm has released an AI engine to support up to 10 billion parameters of generative AI ###reference_d2### models on mobile handsets, allowing AI ###reference_d2### assistant with NLP ###reference_.id12### capabilities and image generations based on Stable Diffusion. Moreover, Google has introduced generative AI ###reference_d2### capabilities in its cloud platform to offer Mobile Network Operators ###reference_.id11### the opportunity to integrate NLP ###reference_.id12### functionalities in applications such as root cause analysis, information retrieval in legal documents, and conversational chatbot for customer experience improvement.\nIn light of these applications, a fundamental question arises regarding the immediate and future impact of LLMs ###reference_id8### on the telecom industry. In this article, we aim to answer this question by providing a view of LLMs ###reference_id8### and their impeding influence on the industry. Our objective is to demystify their current abilities, highlight their existing limitations, and showcase several use cases in the telecom industry where they can provide substantial assistance today. Additionally, we highlight the telecom data within the industry that can be harnessed to leverage the capabilities of LLMs ###reference_id8###. Moreover, we shed light on the technical difficulties that arise in implementing these use cases and outline the research directions that need to be pursued to fully harness the potential of LLMs ###reference_id8###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Demystifying LLMs",
15
+ "text": "To explore the potential of LLMs ###reference_id8### in the telecom industry, it is essential to begin by gaining an understanding of their intrinsic behavior. To do so, we delve into the intricacies of LLMs ###reference_id8### architecture and training, exploring their capabilities as well as their limitations."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Fundamentals of LLMs",
21
+ "text": "###figure_1### LLMs ###reference_id8### are Deep Learning ###reference_id4### (DL ###reference_id4###) models with the ability to process information and demonstrate human-like text generation capabilities.\nTypically, LLMs ###reference_id8### utilize transformer-based architectures, where self-attention plays a pivotal role [1 ###reference_b1###]. In self-attention, each word in an input sequence attends to all the other words, calculating attention scores that signify the importance of each word relative to the others. This mechanism allows to effectively capture long-range dependencies and grasp the contextual usage of each word. Interested readers can refer to the paper in [1 ###reference_b1###] for\na mathematical description of the self-attention mechanism.\nIn Fig 1 ###reference_###, a high-level illustration of an LLM ###reference_id8### is presented, along with the accompanying self-attention mechanism. Another essential component in the transformer architecture is multi-head attention, which expands upon the concept of self-attention. Often, a sequence element needs to attend to multiple distinct aspects, and relying on a single attention mechanism alone is inadequate to accomplish this objective. The multi-head attention provides the flexibility by enabling the model to attend to different aspects of the input, capturing diverse patterns and dependencies within the input sequence. This capability allows the model to learn complex interactions between words and comprehensively understand the input.\nIn addition, LLMs ###reference_id8### undergo extensive pretraining on vast amounts of text to acquire an understanding of the statistical properties inherent in the language at hand. During this phase, the models are mainly trained with data crawled from the internet, which provides them with diverse\nlinguistic information. The primary goal of this pretraining is to enable the model to predict the next word in a sentence based on the preceding words.\nThrough this process, the model captures both syntactic and semantic relationships, thereby enhancing its grasp of contextual nuances. Due to the range of corpora used during training and the large number of model parameters involved, LLMs ###reference_id8### can develop a comprehensive understanding of grammar, reasoning abilities, and even comprehend intricate language structures.\nAlthough the pretrained LLM ###reference_id8### has a comprehensive understanding of the statistical properties within the language, it needs specific domain knowledge to be applied to industrial processes.\nTo achieve this, the pretrained LLM ###reference_id8###\u2019s parameters, including attention blocks, are fine-tuned using domain-specific datasets and similar training techniques employed during the pretraining phase. Through this procedure, referred to as knowledge fine-tuning, the LLM ###reference_id8### can adapt the learned representations, denoted to as embeddings, from the pretraining phase to better align with the intricacies of the specific domain.\nIn addition, researchers have designed prompt engineering solutions, such as chain-of-thought (CoT) prompting and Retrieval Augmented Generation ###reference_.id16### (RAG ###reference_.id16###), to enhance the capability of LLMs on a wide range of tasks. This topic and related open challenges are further discussed in Sec. IV-A ###reference_###."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B LLMs Functionalities",
27
+ "text": "The LLM\u2019s potential shines through its three core competencies: an extensive understanding of the intricacies of language, cross-disciplinary knowledge, and the emerging ability to reason, albeit less developed than the former two. While we discuss three distinct functionalities: semantic comprehension, intelligent knowledge retrieval, and orchestration capabilities, it is important to note their inherent overlap in practical applications, as highlighted in Section III."
28
+ },
29
+ {
30
+ "section_id": "2.2.1",
31
+ "parent_section_id": "2.2",
32
+ "section_name": "II-B1 Semantic abilities",
33
+ "text": "LLMs develop an internal representation of textual data in the form of real-valued vectors called embeddings. This representation conveniently encapsulates the input text\u2019s semantics, syntax, and contextual interpretation. These embeddings provide a simplified representation of textual data suitable for algorithmic procedures and data analysis. For example, a large city\u2019s telecom network generates millions of daily trouble tickets. Many disruptions are symptomatic of the same core issues; however, due to compartmentalization within the network infrastructure, there is no automated system for categorizing these tickets. By converting them into embeddings from a domain-specific LLM ###reference_id8###, clustering algorithms like K-Means can effectively group the tickets, potentially tying them back to singular faults."
34
+ },
35
+ {
36
+ "section_id": "2.2.2",
37
+ "parent_section_id": "2.2",
38
+ "section_name": "II-B2 Intelligent access to knowledge",
39
+ "text": "By understanding the specific intention conveyed through the prompt, an LLM ###reference_id8### can effectively apply its knowledge base to craft a response tailored to the user\u2019s needs. LLMs ###reference_id8### can process and comprehend intricate information, such as the content within standard documents, discern patterns, and infer logical conclusions from the given inputs. LLMs ###reference_id8### thus transition from passive language processors to active and intelligent agents, functioning as assistants or co-pilots that enhance professionals\u2019 productivity. For instance, in an operations and maintenance scenario, an operator faced with a trouble ticket may benefit from the model\u2019s ability to summarize the issue automatically, suggest possible solutions, and even draft a template email for field engineers to act upon, requiring only the operator\u2019s review and approval."
40
+ },
41
+ {
42
+ "section_id": "2.2.3",
43
+ "parent_section_id": "2.2",
44
+ "section_name": "II-B3 LLMs as orchestrators",
45
+ "text": "LLMs can utilize their reasoning to deconstruct complex tasks into manageable subtasks and deploy suitable (external) tools for each. They manage workflows by identifying the most appropriate tool for each segmented operation. Take, for instance, a task such as forecasting the next day\u2019s energy consumption for a Base Station ###reference_id9### (BS ###reference_id9###) undergoing hardware upgrades. Various tools are accessible, including data collection from available features and ML ###reference_.id10### model training. The LLM can formulate a two-phase strategy: predict the traffic load and estimate the energy consumption for a specified load and hardware. It chooses the relevant ML ###reference_.id10### model for each subtask and indicates the data needed to train it. After devising the strategy, the LLM orchestrates available tools to collect the relevant data and train the ML ###reference_.id10### models."
46
+ },
47
+ {
48
+ "section_id": "2.3",
49
+ "parent_section_id": "2",
50
+ "section_name": "II-C LLMs Limitations",
51
+ "text": "###figure_2### Given the structure and functionalities of LLMs ###reference_id8###, certain limitations become apparent. It is crucial to shed light on these shortcomings to utilize and interpret content generated by LLMs ###reference_id8###. The following are noteworthy flaws associated with them:"
52
+ },
53
+ {
54
+ "section_id": "2.3.1",
55
+ "parent_section_id": "2.3",
56
+ "section_name": "II-C1 Hallucinations and Fabrications",
57
+ "text": "One of the key concerns with LLMs ###reference_id8### is their tendency to generate hallucinations or fabrications. LLMs ###reference_id8### rely on statistical patterns and associations learned from vast text data during training. Consequently, they may produce responses that abide to these patterns, but are incorrect or nonexistent [8 ###reference_b8###]."
58
+ },
59
+ {
60
+ "section_id": "2.3.2",
61
+ "parent_section_id": "2.3",
62
+ "section_name": "II-C2 Limited Explainability",
63
+ "text": "The complex architecture and massive number of parameters in these models render it difficult to trace the decision-making process. In fact, LLMs ###reference_id8### lack transparency in terms of the specific features or patterns they rely on to generate responses. This opacity hinders the ability to understand why a particular answer or response was chosen over others. This limited explainability raises concerns, especially in domains where transparency and accountability are crucial."
64
+ },
65
+ {
66
+ "section_id": "2.3.3",
67
+ "parent_section_id": "2.3",
68
+ "section_name": "II-C3 Computational Complexity",
69
+ "text": "LLMs ###reference_id8### may consist of millions or even billions of parameters, making them resource-intensive to train and deploy. Even after training, running inference with LLMs ###reference_id8### can be computationally demanding. Generating responses with these models involves complex computations across multiple layers, which can strain available resources, especially for real-time applications."
70
+ },
71
+ {
72
+ "section_id": "2.3.4",
73
+ "parent_section_id": "2.3",
74
+ "section_name": "II-C4 Sensitivity to Updates",
75
+ "text": "LLMs ###reference_id8### display sensitivity to adjustments in their parameters, leading to unforeseen variations in outputs and behaviors. A compelling illustration of this phenomenon can be found in [9 ###reference_b9###], which showcased how the performance and behavior of both GPT-3.5 and GPT-4 underwent dramatic shifts over time: in March 2023, GPT-4 excelled at identifying prime numbers, but by June 2023, it faltered in handling the same questions. This inconsistency serves as a clear illustration of the susceptibility of LLMs ###reference_id8### to updates and alterations introduced to the model."
76
+ },
77
+ {
78
+ "section_id": "2.3.5",
79
+ "parent_section_id": "2.3",
80
+ "section_name": "II-C5 Output Inconsistency",
81
+ "text": "This is a phenomenon that arises when the output generated by the model fails to fully align with the user\u2019s intent or the desired task, even when the prompt explicitly specifies the required output [10 ###reference_b10###]. This is illustrated in Fig. 2 ###reference_###, where GPT-3.5 was tested to answer a simple constrained maximization question. The LLM ###reference_id8### provided a wrong response with a given probability. Importantly, the error probability was observed to increase with the dimension of the problem.\nThis can hamper the applicability of LLMs ###reference_id8### in areas such as telecom system optimization. Therefore, addressing this limitation becomes of utmost importance."
82
+ },
83
+ {
84
+ "section_id": "3",
85
+ "parent_section_id": null,
86
+ "section_name": "III Potential LLM Applications In The Telecom Industry",
87
+ "text": "With the understanding of how LLMs ###reference_id8### function, their capabilities, and their limitations, we can now delve into the applications that can have a large impact on the telecom industry."
88
+ },
89
+ {
90
+ "section_id": "3.1",
91
+ "parent_section_id": "3",
92
+ "section_name": "III-A Network Anomalies Resolution",
93
+ "text": "###figure_3### Solving anomalies in the mobile network is a tedious task. With a vast infrastructure spanning across large geographical areas, maintaining and monitoring the BSs ###reference_id9### is challenging. Each BS ###reference_id9### is susceptible to a wide array of issues, including hardware malfunctions, software glitches, and environmental factors. For this reason, rectifying these anomalies necessitates extensive expertise, as arriving at appropriate solutions demands significant investments of manpower, meticulous analysis, and troubleshooting efforts. Leveraging LLMs ###reference_id8### can enhance the capabilities of MNOs ###reference_.id11### in addressing these challenges and enable more efficient troubleshooting. Particularly, MNOs ###reference_.id11### have at their disposal a rich repository of tickets accumulated over time from dealing with network anomalies. These tickets capture real-world scenarios, encompassing diverse problems and equipment malfunctions. An illustrative example of such a ticket is shown in Fig. 3 ###reference_###. By utilizing this repository with product manuals as training data, the LLM ###reference_id8### can be fine-tuned to comprehend the intricacies of network issues and grasp the unique context of anomaly resolution. Consequently, the LLM ###reference_id8### becomes an anomaly-solving tool for telecommunications professionals, furnishing them with diagnoses of network issues and their corresponding solutions. Furthermore, leveraging the time-stamped data from the tickets, the LLM ###reference_id8### can estimate the duration required to address network faults, accounting for the product type, hardware specificities, and the attributes of the involved BSs ###reference_id9###. As a result, the LLM ###reference_id8### becomes an asset for the MNO ###reference_.id11###, enhancing the efficiency and effectiveness of resolving network problems."
94
+ },
95
+ {
96
+ "section_id": "3.2",
97
+ "parent_section_id": "3",
98
+ "section_name": "III-B 3GPP Specifications Comprehension",
99
+ "text": "The 3GPP ###reference_d1### produces the specifications that define cellular telecommunications\ntechnologies, including radio access, core network and service capabilities.\n3GPP ###reference_d1### documents are known for their elaborateness, encompassing many details and specifications. Due to the sheer volume of these documents, keeping track of all the specificities, especially in the context of new releases, can be daunting and time-consuming. For engineers attempting to implement technologies and features in the product, this challenge becomes even more apparent, as they must invest considerable time in searching for relevant information within the extensive documentation. LLMs ###reference_id8### offer a resolution to this problem, providing promising solutions for engineers grappling with 3GPP ###reference_d1### documents. Through fine-tuning to the 3GPP ###reference_d1### documents and incorporating all relevant reports, these models can become adept at processing the vast 3GPP ###reference_d1### standard knowledge. Then, a significant application of these fine-tuned LLMs ###reference_id8### revolves around the development of interactive chatbots tailored for answering 3GPP ###reference_d1### standards queries. These chatbots, built upon the fine-tuned LLMs ###reference_id8###, empower engineers to streamline their research processes, saving valuable time and facilitating more efficient and accurate implementations of 3GPP ###reference_d1### standards."
100
+ },
101
+ {
102
+ "section_id": "3.3",
103
+ "parent_section_id": "3",
104
+ "section_name": "III-C Network Modeling",
105
+ "text": "The optimization of mobile networks is a complex task that requires multiple models for capturing different Key Performance Indicators ###reference_id7### of the network and the interactions between various network configuration parameters. Such optimization often relies on white-box models, where interactions between multiple features are mathematically formulated to ensure explainability. Developing such models requires expert engineers with deep domain knowledge to identify relevant information and relationships driving the interactions between features. Leveraging LLMs ###reference_id8### can support the development of these models.\nTo better clarify this aspect, we provide an explanatory example. Let us consider a simple scenario with a network composed of 90 single-carrier BSs ###reference_id9###. We used GPT-3.5 as the LLM ###reference_id8###. The LLM ###reference_id8### was provided with a list of 12 data features, such as BS ###reference_id9### location, frequency, and load, and tasked to select the relevant features for creating a model to estimate energy consumption based on the selected features. Additionally, we asked the LLM ###reference_id8### to provide a mathematical formula capturing the relationship between inputs and outputs and a script to fit the model on a dataset containing real network data.\nGPT-3.5 successfully identified the 5 relevant inputs among the provided features, while discarding the irrelevant ones. Notably, this was achieved solely on the basis of its knowledge, without using any data samples. The model provided by GPT-3.5 consisted of a weighted sum of the selected features for regression.\n###figure_4### ###figure_5### Fig. 4 ###reference_### shows the real energy consumption measured by the BSs ###reference_id9### at different downlink loads. The real data reveal three different trends, corresponding to the three different configurations of maximum transmit powers in the considered network.\nFig. 4 ###reference_###a shows the estimations performed by the model provided by GPT-3.5, which achieved a relative error of 7.8%. The estimations produced by this model resulted in a single average trend, as the selected inputs were summed, overlooking the relationship between the load and the maximum transmit power: in fact, these two terms should be multiplied and not summed. To address this limitation, we provided contextual data related to the dynamics driving the energy consumption of a generic BS ###reference_id9###. By leveraging this, GPT-3.5 produced a different model where the two terms were multiplied instead of summed, significantly reducing the error to 3%. The improved model correctly captures all three trends (Fig. 4 ###reference_###b), highlighting the importance of providing telecom-related context.\n###figure_6### Fig. 5 ###reference_### illustrates the average hourly energy consumption in the selected network and the estimates performed by the two models provided by GPT-3.5 (i.e., with and without context).\nTo provide a basis for comparison, we present the estimations from two alternative models: i) a naive model and ii) an expert-designed ML ###reference_.id10### model [11 ###reference_b11###].\nThe naive model estimates the energy consumption in a given hour by averaging the energy consumption measured at the same hour of the day in the previous week. 
While simple, this model lacks knowledge of the telecom field and consequently yields an error rate of 12%.\nOn the other hand, the expert-designed ML ###reference_.id10### model employs a ML ###reference_.id10### model designed to handle more intricate scenarios, such as multi-carrier BSs ###reference_id9### utilizing multiple energy-saving features. In this simplistic setup, the expert-designed ML ###reference_.id10### model achieves a relative error of 2.3%.\nSignificantly, GPT-3.5 capitalized on its knowledge to develop a model that surpassed the limitations of the naive approach, realizing a 75% improvement in accuracy, closely approaching the performance of the expert-designed ML ###reference_.id10### model.\nAs a final point, it is crucial to highlight that the choice of the LLM employed for a task significantly influences the quality of the achieved solution.\nTo illustrate this, we conducted the same experiment using LLaMA-70B as the LLM ###reference_id8###. In this case, LLaMA identified additional input features compared to those selected by GPT-3.5, including location, and the year of production of the BS ###reference_id9###.\nThe model proposed by LLaMA took the form of a weighted sum of the chosen features, similar to the approach proposed by GPT-3.5, resulting in a similar error rate of 7.6%.\nHowever, akin to GPT-3.5, the LLaMA model struggled to recognize the relationship between load and maximum transmit power. In contrast to GPT-3.5, though, LLaMA was unable to rectify this issue even when provided with additional contextual information."
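The gap between the two model shapes discussed above (additive versus load-times-power) can be reproduced with ordinary least squares; the sketch below is our own illustration on synthetic data, not the article's script:

```python
# Hedged sketch contrasting the two model shapes discussed above:
#   additive:        E ~ w0 + w1*load + w2*p_max
#   multiplicative:  E ~ w0 + w1*(load * p_max)
# Data here are synthetic; in the article the fit used real network data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
load = rng.uniform(0, 1, n)                    # downlink load
p_max = rng.choice([20.0, 40.0, 60.0], n)      # three transmit-power configs
energy = 150 + 8.0 * load * p_max + rng.normal(0, 5, n)  # ground truth

ones = np.ones(n)
X_add = np.column_stack([ones, load, p_max])
X_mul = np.column_stack([ones, load * p_max])
for X in (X_add, X_mul):
    w, *_ = np.linalg.lstsq(X, energy, rcond=None)
    rel_err = np.mean(np.abs(X @ w - energy) / energy)
    print(round(100 * rel_err, 1), "%")        # multiplicative fits better
```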
106
+ },
107
+ {
108
+ "section_id": "4",
109
+ "parent_section_id": null,
110
+ "section_name": "IV Open Research Directions",
111
+ "text": "Extending upon the previously discussed limitations of LLMs ###reference_id8### and their forthcoming use cases in the telecom industry, a set of open research directions presents itself. These avenues of investigation are crucial to unlock the full potential of LLMs ###reference_id8### in the telecom industry and harness their capabilities to the utmost extent."
112
+ },
113
+ {
114
+ "section_id": "4.1",
115
+ "parent_section_id": "4",
116
+ "section_name": "IV-A Telecom Foundation Model",
117
+ "text": "While the most advanced foundation models exhibit a reasonable grasp of the telecommunications theory, they fall short on practical implementation knowledge [12 ###reference_b12###].\nBesides, our findings, illustrated in Fig. 4 ###reference_###, have demonstrated the performance gap between a context-aware LLM ###reference_id8### and a generic counterpart, shedding light on the necessity of a specialized telecom foundation model. This is further validated in [12 ###reference_b12###]. Such a specialized model should leverage standards, white papers, research literature, and even exclusive proprietary materials or synthetic datasets produced through simulators like digital twins.\nThree approaches are available to integrate further knowledge into a language model: full model training, fine-tuning, and RAG ###reference_.id16###. Full model training achieves a profound understanding of the additional knowledge at the expense of substantial energy and complexity costs. Fine-tuning offers a pragmatic balance, enabling model specialization via training a minimal number of parameters using methods like low-rank adaptation ###reference_.id15### (LoRA ###reference_.id15###). Meanwhile, RAG ###reference_.id16### is the most convenient solution. It is cost-efficient and does not require access to the model weights. It incorporates external knowledge using a more surface-level comprehension by querying a database for context to append to the prompt, which may limit the depth of understanding.\nIt is worth mentioning that some lines of work investigate non-language-based telecom foundational models. Notably, graph-based foundational models could natively capture the natural topology of telecom networks. Such approaches remain in an early exploratory phase."
118
+ },
119
+ {
120
+ "section_id": "4.2",
121
+ "parent_section_id": "4",
122
+ "section_name": "IV-B Benchmarking LLMs for Telecom",
123
+ "text": "In the last years researchers have proposed a number of tests to evaluate LLM ###reference_id8### capabilities in terms of NLP ###reference_.id12###, e.g., text understanding and reasoning. Recent LLMs ###reference_id8### are already close to human-level performance on several of these tests such that HellaSwag, a test of commonsense inference, and GLUE/SuperGLUE, which evaluate LLM ###reference_id8### linguistic understanding. MMLU, instead, evaluates LLMs ###reference_id8###\u2019 multitask accuracy and capabilities across a broad range of subjects, and show that top-performing LLMs ###reference_id8### have still significant room for improvement before achieving expert-level accuracy across specialized tasks. In all these tests, accuracy on\nmultiple-choice questions is computed to provide a simple to determine and understand evaluation.\nSome researchers have suggested that the future of NLP ###reference_.id12### evaluation should focus on text generation: however, while some metrics exist for testing these capabilities such as BLEU and perplexity, text generation is notoriously difficult to assess and still lacks a standard evaluation methodology. Although ML researchers have mainly focusing on NLP ###reference_.id12### capabilities of LLMs ###reference_id8###, the\nsuccess of LLMs is the telecom industry depends on benchmark datasets designed to assess their proficiency in this specific domain.\nThese datasets are expected to play a pivotal role in determining the optimal architectural design for LLMs ###reference_id8### and guiding the pretraining procedure in the development of telecom foundational models. The framework in [12 ###reference_b12###] proposes a multiple-choice question dataset to simply evaluate the accuracy of telecom knowledge of LLMs ###reference_id8###; future works will need to extend this framework and allow the evaluation of LLMs ###reference_id8### across specialized telecom tasks such as those discussed in Sec. III ###reference_###."
124
+ },
125
+ {
126
+ "section_id": "4.3",
127
+ "parent_section_id": "4",
128
+ "section_name": "IV-C LLMs Compression",
129
+ "text": "As highlighted in Section II-C ###reference_###, LLMs ###reference_id8### can be comprised of billions of parameters and require powerful devices to be trained and inferred.\nThis limitation becomes relevant in critical scenarios where LLMs ###reference_id8### need to be deployed in edge devices with limited storage and computational capabilities.\nAs a result, it is imperative to address the substantial size of LLMs ###reference_id8### and develop compression techniques [13 ###reference_b13###], which can reduce their size while retaining their knowledge of the telecom domain. Pruning, quantization, and knowledge distillation are the three most popular model compression techniques for DL ###reference_id4### models. Today, researchers believe that quantization outperforms pruning in most of LLM ###reference_id8### architectures. Then,\ndue to the large costs of training LLMs, post-training quantization, where weights and activation tensors are encoded with a low-level of precision, e.g., 8-bit or 4bit instead of 16-bit, is the main adopted scheme. Indeed most of the open source LLMs ###reference_id8### offer quantized versions of larger models. In addition, knowledge distillation is currently explored to develop compact LLMs ###reference_id8### that can run on devices with limited resources. To conclude, compression methods reduce memory and computational resource usage but can degrade LLM ###reference_id8### performance, and thus accuracy pre- and post compression has to be evaluated to analyse pros and cons of the existing and future techniques."
130
+ },
131
+ {
132
+ "section_id": "4.4",
133
+ "parent_section_id": "4",
134
+ "section_name": "IV-D Privacy Considerations",
135
+ "text": "Adapting LLMs ###reference_id8### to address specific telecom-related tasks may require the use of datasets containing sensitive user information. In light of this, it becomes imperative to implement measures to protect privacy when handling such data. Included among these measures are data anonymization and aggregation, effectively removing personally identifiable information to protect individual privacy. The incorporation of techniques such as differential privacy is essential to ensure that these models remain impervious to leaking sensitive information during queries. Additionally, the development of smaller LLMs ###reference_id8### that can run on edge devices will further enhance the end user\u2019s privacy."
136
+ },
137
+ {
138
+ "section_id": "4.5",
139
+ "parent_section_id": "4",
140
+ "section_name": "IV-E Behavior Alignment",
141
+ "text": "Solving the problem of output inconsistency is essential to enable the adoption of LLMs ###reference_id8### in the telecom industry, especially in accuracy-critical areas. \n\nIt has been shown that grounding LLMs with use-case-specific external tools, such as querying external knowledge with RAG, reduces hallucinations [14 ###reference_b14###]. Besides, it is crucial to incorporate mechanisms and metrics to assess the model\u2019s prediction confidence. Such mechanisms enable the identification of uncertain cases, triggering additional verification from humans in the loop. In order to measure prediction confidence, methods include using the LLM\u2019s internal evaluation of the likelihood of the output, generating multiple responses to a single query to assess consistency, or using one LLM ###reference_id8### to review and refine the output of another.\n\nAdditionally, rigorous testing of LLMs ###reference_id8### against adversarial inputs and scenarios can help to reveal vulnerabilities and guide the development of reliable models. Finally, understanding prompt engineering is necessary, given that well-designed queries and instructions play a crucial role in shaping the model\u2019s behavior and ensuring accurate outputs."
142
+ },
143
+ {
144
+ "section_id": "4.6",
145
+ "parent_section_id": "4",
146
+ "section_name": "IV-F LLMs Explainability",
147
+ "text": "The need for explainability in LLMs ###reference_id8### within the telecom industry is paramount due to stakeholder concerns regarding trust and reliance on ML ###reference_.id10### outputs, especially considering their limitations previously discussed in Section II-C ###reference_###. The adoption of LLMs ###reference_id8### for critical operations require a clear understanding of how and why specific outputs are generated. This necessitates the incorporation of explainability techniques such as referencing, where LLMs ###reference_id8### can provide sources or justifications for their responses. Additionally, explicitly integrating explainability objectives into the training process is crucial for this purpose."
148
+ },
149
+ {
150
+ "section_id": "4.7",
151
+ "parent_section_id": "4",
152
+ "section_name": "IV-G Real-time Context",
153
+ "text": "By design, LLMs ###reference_id8### are trained offline on large corpora of data and, therefore, are not aware of new findings that may be accessible through search engines. Consequently, prompting these LLMs ###reference_id8### can lead to potentially outdated answers, especially considering that the telecom industry continuously evolves with releases of new technical specifications. One approach to address this issue is to enable LLMs ###reference_id8### to access external tools. For instance, allowing LLMs ###reference_id8### to access the internet through dedicated channels, as OpenAI has done with ChatGPT. However, this approach confines the quality of LLM ###reference_id8### generation to the outcomes derived from search queries. A more fundamental strategy is to create data pipelines to gather new relevant telecom knowledge. This knowledge can then be utilized by either augmenting queries through RAG ###reference_.id16### approaches (e.g., as done by Grok, the LLM developed by XAI, with tweets) or by conducting additional training of the LLM ###reference_id8### to refine its parametric knowledge. The latter approach introduces various research possibilities, such as identifying the optimal frequency for updating the LLM ###reference_id8###\u2019s parametric knowledge and developing efficient methodologies for model updates to integrate the new material."
154
+ },
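A minimal sketch of the RAG-style query augmentation described in this section; retrieve and query_llm are placeholder callables (a real pipeline would retrieve from an index over, e.g., recent technical specifications). This is an illustrative assumption-laden example, not the paper's implementation.

def answer_with_rag(question, retrieve, query_llm, top_k=3):
    # Fetch up-to-date passages so the answer is not limited to the
    # model's (possibly outdated) parametric knowledge.
    passages = retrieve(question, top_k=top_k)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return query_llm(prompt)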
155
+ {
156
+ "section_id": "4.8",
157
+ "parent_section_id": "4",
158
+ "section_name": "IV-H Sustainability and Environmental Impact",
159
+ "text": "Given their large parameter count, LLMs ###reference_id8### pose a substantial environmental concern in terms of carbon footprint. To mitigate these challenges, prioritizing smaller and more efficient models (e.g., Phi-2 that compete with larger models) is recommended.\nFurthermore, incorporating efficient implementations of attention mechanisms and overall model architecture can substantially alleviate computational demands during both training and inference. For instance, adopting the FlashAttention mechanism [15 ###reference_b15###] or employing the mixture of experts architecture, as demonstrated by models like Mixtral, offers promising avenues for reducing computational loads. From another perspective, tackling the sustainability challenge also involves the development of KPIs and regulations that effectively measure, evaluate, and compare the environmental footprint of different LLMs ###reference_id8###."
160
+ },
161
+ {
162
+ "section_id": "4.9",
163
+ "parent_section_id": "4",
164
+ "section_name": "IV-I LLMs as Orchestrators",
165
+ "text": "Leveraging even further the reasoning capabilities of the LLMs, an open research direction involves transitioning from a strict parametric knowledge framework to a different paradigm where LLMs serve as orchestrators, as introduced in Section II ###reference_###. In this scenario, LLMs are granted access to fine-grained blocks, such as code interpreters, optimizers, signal processing blocks, and network models. Their role then shifts to translating user prompts into actionable steps by leveraging both their knowledge and harnessing the accessible blocks. In this context, the research avenues revolve around defining these fine-grained blocks and ensuring seamless integration between LLMs and these blocks to unlock their potential."
166
+ },
167
+ {
168
+ "section_id": "5",
169
+ "parent_section_id": null,
170
+ "section_name": "Conclusions",
171
+ "text": "In this article, we have delved into the inner workings of LLMs, shedding light on their current capabilities and limitations. Additionally, we explored various use cases of LLMs that can be promptly leveraged within the industry using the available data at vendors\u2019 disposal. Furthermore, we discussed the specific open research directions tailored to the peculiarities of the telecom domain, which must be addressed to fully harness the potential of LLMs. As the technology behind LLMs continues to evolve, the telecom industry is poised to seize the opportunity and leverage these advancements to enhance operational efficiency within the sector."
172
+ }
173
+ ],
174
+ "appendix": [],
175
+ "tables": {},
176
+ "image_paths": {
177
+ "1": {
178
+ "figure_path": "2308.06013v2_figure_1.png",
179
+ "caption": "Figure 1: \nA high-level overview of LLMs.",
180
+ "url": "http://arxiv.org/html/2308.06013v2/x1.png"
181
+ },
182
+ "2": {
183
+ "figure_path": "2308.06013v2_figure_2.png",
184
+ "caption": "Figure 2: \nAn illustration of an LLM output inconsistency. GPT-3.5 was provided a vector reporting the strengths of N beams. It was tasked with selecting the beam with the highest strength, while instructed to avoid a particular beam, which was always set as the strongest.",
185
+ "url": "http://arxiv.org/html/2308.06013v2/x2.png"
186
+ },
187
+ "3": {
188
+ "figure_path": "2308.06013v2_figure_3.png",
189
+ "caption": "Figure 3: An example of a network anomaly troubleshooting ticket. Information related to the anomaly is automatically generated by the system (in blue). Input regarding the dispatch and the resolution of the anomaly is provided by the engineer (in orange).",
190
+ "url": "http://arxiv.org/html/2308.06013v2/extracted/5429922/Figures/troublesh2.png"
191
+ },
192
+ "4(a)": {
193
+ "figure_path": "2308.06013v2_figure_4(a).png",
194
+ "caption": "(a)\nFigure 4: Normalized energy consumption measured at different downlink loads and estimated by the model provided by (a) GPT-3.5, and (b) GPT-3.5 with context.",
195
+ "url": "http://arxiv.org/html/2308.06013v2/x3.png"
196
+ },
197
+ "4(b)": {
198
+ "figure_path": "2308.06013v2_figure_4(b).png",
199
+ "caption": "(b)\nFigure 4: Normalized energy consumption measured at different downlink loads and estimated by the model provided by (a) GPT-3.5, and (b) GPT-3.5 with context.",
200
+ "url": "http://arxiv.org/html/2308.06013v2/x4.png"
201
+ },
202
+ "5": {
203
+ "figure_path": "2308.06013v2_figure_5.png",
204
+ "caption": "Figure 5: Normalized hourly energy consumption in the network - Actual measurements (in black) and estimations from various models.",
205
+ "url": "http://arxiv.org/html/2308.06013v2/x5.png"
206
+ }
207
+ },
208
+ "validation": true,
209
+ "references": [],
210
+ "url": "http://arxiv.org/html/2308.06013v2"
211
+ }
20240225/2308.10385v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2309.04332v2.json ADDED
@@ -0,0 +1,414 @@
1
+ {
2
+ "title": "Graph Neural Networks Use Graphs When They Shouldn\u2019t",
3
+ "abstract": "Predictions over graphs play a crucial role in various domains, including social networks and medicine.\nGraph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data.\nAlthough a graph-structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it.\nWhile GNNs have the ability to ignore the graph-structure in such cases, it is not clear that they will.\nIn this work, we show that GNNs actually tend to overfit the given graph-structure. Namely, they use it even when a better solution can be obtained by ignoring it.\nWe analyze the implicit bias of gradient-descent learning of GNNs and prove that when the ground truth function does not use the graphs, GNNs are not guaranteed to learn a solution that ignores the graph, even with infinite data.\nWe examine this phenomenon with respect to different graph distributions and find that regular graphs are more robust to this overfitting.\nWe also prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent.\nFinally, based on our empirical and theoretical findings, we demonstrate on real-data how regular graphs can be leveraged to reduce graph overfitting and enhance performance.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Graph labeling problems arise in many domains, from social networks to molecular biology.\nIn these settings, the goal is to label a graph or its nodes given information about the graph. The information for each graph instance is typically provided in the form of the graph-structure (i.e., its adjacency matrix) as well as the features of its nodes.\nGraph Neural Networks (GNNs) (Kipf & Welling, 2017b ###reference_b13###; Gilmer et al., 2017 ###reference_b8###; Veli\u010dkovi\u0107 et al., 2018 ###reference_b23###; Hamilton et al., 2017 ###reference_b10###) have emerged as the leading approach for such tasks. The fundamental idea behind GNNs is to use neural-networks that combine the node features with the graph-structure, in order to obtain useful graph representations. This combination is done in an iterative manner, which can capture complex properties of the graph and its node features.\nAlthough graph-structures are provided as input to the GNN, in some cases the best solution can be obtained by ignoring them. This may be due to these graph-structures being non-informative for the predictive task at hand. For instance, some molecular properties such as the molar mass (i.e., weight) depend solely on the constituent atoms (node features), and not on the molecular structure.\nAnother case is when the provided graph-structure does contain valuable information for the task, but the GNN cannot effectively exploit it. In such cases, better test accuracy may be achieved by ignoring the graph-structure.\nIn other cases, the node features alone carry most of the information and the graph-structure conveys just a small added value. For example, assume that node features contain the zipcode of a user. Then the user\u2019s income is highly predictable by that feature, and their social structure will add little accuracy of this prediction.\nAG:I don\u2019t think this last point (starting with \"in other cases\" is needed. Remove if space needed.\nMotivated by this observation, we ask a core question in GNN learning: will GNNs work well in cases where it is better to ignore the graph-structure or will they overfit the graph-structure, resulting in reduced test accuracy?\nAnswering this question has several far-reaching practical implications. To illustrate, if GNNs lack the ability to discern when to disregard the graph, then providing a graph can actually hurt the performance of GNNs, and thus one must carefully re-think which graphs to provide a GNN.\nOn the other hand, if GNNs easily reject the structure when they fail to exploit it, then practitioners should attempt to provide a graph, even if their domain knowledge and expertise suggest that there is only a small chance it is informative.\nWe consider the common setting of over-parameterized GNNs. Namely, when the number of parameters the GNN uses is larger than the size of the training data. This is a very common case in deep-learning, where the learned model can fit any training data. Previous studies showed that despite over-parameterization, models learned using Gradient Descent (GD) often generalize well. 
Hence, it was suggested that the learning algorithm exhibits an implicit bias (e.g., low parameter norm) to avoid spurious models that happen to fit the training data (e.g., Zhang et al., 2017 ###reference_b28###; Lyu & Li, 2020 ###reference_b15###; Gunasekar et al., 2018 ###reference_b9###; Soudry et al., 2017 ###reference_b22###).\nOur focus is thus on the implicit bias of GNN learning, and specifically whether GNNs are biased towards using or not using the graph-structure. If the implicit bias is towards \u201csimple models\u201d that do not use the graph-structure when possible, then one would expect GNNs to be oblivious to the graph-structure when it is not informative. Our first empirical finding is that this is actually not the case. Namely, GNNs tend to not ignore the graph, and their performance is highly dependent on the provided graph-structure. Specifically, there are graph-structures that result in models with low test accuracy.\nNext, we ask which properties of the learned graph distribution affect the GNN\u2019s ability to ignore the graph.\nWe empirically show that graphs that are regular result in more resilient GNNs.\nWe then analyze the implicit bias of learning GNNs with gradient descent and prove that despite the ground truth function being \u201csimple\u201d in the sense that it does not use the graph, GNNs are not guaranteed to learn a solution that ignores the graph, even with infinite data.\nWe prove that as a result of their implicit bias, GNNs may fail to extrapolate. We then prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent, and provide a sufficient condition for extrapolation when learning on regular graphs.\nFinally, we empirically examine on real-world datasets if the properties of regular graphs are also beneficial in cases where the graph should not necessarily be ignored. We show that modifying the input graph to be \u201cmore regular\u201d can indeed improve performance in practice.\nWe note that we focus on the implicit bias of GNNs, i.e., what GNNs actually do when the graph should be ignored. Understanding this bias can also shed light on the phenomenon of entanglement (Liu et al., 2020 ###reference_b14###; Seddik et al., 2022 ###reference_b19###; Chen et al., 2020 ###reference_b4###), i.e., the intricate interplay between the graph structure and the node features.\nThe main contributions of this work are: (1) We show that GNNs tend to overfit the graph-structure when it should be ignored.\n(2) We evaluate the graph-structure overfitting phenomenon with respect to different graph distributions and find that the best performance is obtained for regular graphs.\n(3) We theoretically analyze the implicit bias of learning GNNs, and show that when trained on regular graphs, they converge to unique solutions that are more robust to graph-structure overfitting.\n(4) We show empirically that transforming GNN input graphs into more regular ones can mitigate the GNNs\u2019 tendency to overfit, and improve performance."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "GNNs Overfit the Graph-Structure",
15
+ "text": "In this section, we present an empirical evaluation showing that GNNs tend to overfit the graph-structure, thus hurting their generalization accuracy. Graph overfitting refers to any case where the GNN uses the graph when it is preferable to ignore it (e.g., because it is non-informative for the task)."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Preliminaries",
21
+ "text": "A graph example is a tuple . is an adjacency matrix representing the graph-structure. Each node is assigned a feature vector , and all the feature vectors are stacked to a feature matrix , where is the number of nodes in . The set of neighbors of node is denoted by . We denote the number of samples in a dataset by .\nWe focus on the common class of Message-Passing Neural Networks (Morris et al., 2021 ###reference_b17###).\nIn these networks, at each layer, each node updates its representation as follows:\nwhere . The initial representation of node is its feature vector .\nThe final node representations obtained in the last layer, can then be used for downstream tasks such as node or graph labeling.\nWe focus on graph labeling tasks, where a graph representation vector is obtained by combining all the node representations, e.g. by summation.\nThis is then followed by a linear transformation matrix that provides the final classification/regression output (referred to as a readout layer).\nFor the sake of presentation, we drop the superscript in cases of one-layer GNNs.\nFor binary classification, we assume the label is the sign of the output of the network.\nWe refer to as the root-weights of layer and to as the topological-weights of layer .\nA natural way for GNNs to ignore the graph-structure is by zeroing the topological-weights in every layer.\nWe say that a function is graph-less\nif , i.e., the function does not use the graph-structure, and is practically a set function.\nIt is important to note that some GNNs, e.g., Kipf & Welling (2017a ###reference_b12###), do not possess the ability to ignore the graph-structure as the root and topological weights are the same. We therefore focus on the most general GNN type that does have the ability to ignore the graph (Gilmer et al., 2017 ###reference_b8###).\nIn the Appendix, we extend our empirical evaluation to multiple GNN variations, including Graph Attention Network (Veli\u010dkovi\u0107 et al., 2018 ###reference_b23###; Brody et al., 2022 ###reference_b3###), Graph Transformer (Shi et al., 2021 ###reference_b21###) and Graph Isomorphism Network (Xu et al., 2019 ###reference_b24###), and to node classification, which show similar trends.\n###table_1###"
22
+ },
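A minimal PyTorch sketch of the message-passing update in Equation 1, with separate root weights (W1) and topological weights (W2). This is an illustrative dense-adjacency implementation with a ReLU non-linearity, not the authors' code; it makes concrete how zeroing the topological weights turns the layer graph-less.

import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.root = nn.Linear(in_dim, out_dim, bias=False)  # W1 (root weights)
        self.topo = nn.Linear(in_dim, out_dim, bias=False)  # W2 (topological weights)

    def forward(self, adj, h):
        # adj: (n, n) adjacency matrix; h: (n, d) node representations.
        neighbor_sum = adj @ h  # sum of representations over N(v)
        return torch.relu(self.root(h) + self.topo(neighbor_sum))

# Zeroing the topological weights makes the layer graph-less: the output
# then no longer depends on adj.
# layer.topo.weight.data.zero_()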
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Evidence for Graph Overfitting",
27
+ "text": "Our goal is to examine what happens when GNNs learn over graphs that should be ignored, either because they are non-informative for the task, or because the GNN fails to exploit their information.\nTo that end, we conducted experiments on three datasets.\nThis is a binary classification synthetic task, with a graph-less ground truth function. To generate the label, we use a teacher GNN that simply sums the node features, and applies a linear readout to produce a scalar.\nThe data contains non-informative graph-structures which are drawn from the GNP graph distribution (Erd\u00f6s & R\u00e9nyi, 1959 ###reference_b6###), where the edges are sampled i.i.d with probability (we used ).\n\n\nProteins and Enzymes\nThese are two classification tasks on real-world molecular data (Morris et al., 2020 ###reference_b16###).\nIn Errica et al. (2022 ###reference_b7###) the authors reported on a thorough GNNs comparison, that the best accuracy on these datasets is achieved when the graph-structure is omitted.\nWe note that with a fixed architecture, the solution learned by a GNN trained on empty graphs is always realizable by the same GNN trained on non-empty graphs. This is a straight-forward argument and it is explained in the Appendix for the sake of completeness.\nTherefore, with a fixed architecture, better performances that are achieved when learning over empty graphs indicates that it was better for the GNN to ignore the graph, and it could, but it didn\u2019t.\nErrica et al. (2022 ###reference_b7###) used a different model for the empty graphs, which was not an instance of the other compared GNNs. Therefore, their results does not imply that the compared GNNs overfitted the graph-structure, as the superiority of the model trained on empty graphs may be due to its architecture.\nIn our experiments, we use a fixed architecture to ensure that a discrepancy in perforamance implies graph overfitting.\nOn each of the three datasets, we trained the same GNN twice: once on the given graph-structures in the data (), and once when the graph-structure is replaced with an empty graph and only the node features are given for training ().\nThis difference between these setups shows the effect of providing the graph-structure.\nThe GNNs architecture is fixed and the learning hyper-parameters are tuned on a validation set for the Sum task, and -fold cross-validation for Protein and Enzymes. We report test errors averaged over runs with random seeds on a separate holdout test set. More information can be found in the Appendix.\nTable 1 ###reference_### shows the results of the experiments. In the three tasks, achieves higher accuracy than .\nThis suggests that made use of the graphs, although a better result, i.e., the one learned by , could be obtained by ignoring them. This graph overfitting led to lower test accuracy."
28
+ },
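The graph-less teacher of the Sum task can be written in a few lines. Below is an illustrative NumPy sketch; the feature dimension (32) and the Gaussian readout are assumptions for demonstration, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 32                              # assumed feature dimension
readout = rng.standard_normal(d)    # teacher readout, sampled once

def teacher_label(X):
    # Sum the node features, apply the linear readout, take the sign.
    # The adjacency matrix is never consulted: the teacher is graph-less.
    return 1 if readout @ X.sum(axis=0) > 0 else -1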
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "How Graph-Structure Affects Overfitting",
33
+ "text": "The previous section showed that in the Sum task, where the given graph-structures are non-informative and should be ignored, the GNN overfits them instead.\nHere we further study how this phenomenon is affected by the specific graph-structure provided to the GNN. Thus, we repeat the setup of the Sum task but with different graph distributions.\nWe used the Sum task described in Section 2.2 ###reference_###.\nWe created four different datasets from this baseline, by sampling graph-structures from different graph distributions. The set of node feature vectors remains the same across all the datasets, and thus the datasets differ only in their graph-structures.\nThe graph distributions we used are: -regular graphs (Regular) where all the nodes have the same degree , star-graph (Star) where the only connections are between one specific node and all other nodes, the Erd\u00f6s-R\u00e9nyi graph distribution (GNP) (Erd\u00f6s & R\u00e9nyi, 1959 ###reference_b6###), where the edges are sampled i.i.d with probability , and the preferential attachment model (BA) (Barabasi & Albert, 1999 ###reference_b2###), where the graph is built by incrementally adding new nodes and connecting them to existing nodes with probability proportional to the degrees of the existing nodes.\nThe GNN model is as in the Sum task in the previous section.\nOn each dataset, we varied the training set size and evaluated test errors on runs with random seeds.\nMore information can be found in the Appendix.\nFor the sake of presentation, we present the results on one instance from each distribution: Regular with , GNP with and BA with .\nAdditional results with more distribution parameters are given in the Appendix and show similar trends.\nRecall that the datasets differ only by the edges and share the same set of nodes and features.\nTherefore, had the GNN ignored the graph-structures, we would expect to see similar performance for all datasets.\nAs shown in Figure LABEL:figure:learning_curves_big, the performance largely differs between different graph distributions, which indicates the GNN overfits the graphs rather than ignores them.\nTo further understand what the GNN learns in these cases, we evaluate the ratio between the norms of the topological and root weights. Results are shown in Figure LABEL:figure:norm_ratio_big.\nIt can be seen that for all the graphs except the empty graphs, the ratio is larger than , indicating that there is more norm on the topological weights than on the root weights. Specifically, the graph-structure is not ignored. In the case of empty graphs, the topological weights are not trained, and the ratio is due to initialization. We also present the norms of the root and topological weights separately in the Appendix.\nFigure LABEL:Figure:sample_complexity_and_norms suggests that some graph distributions are more robust to graph-structure overfitting. The GNN trained on regular graphs performs best across all training set sizes.\nThe good performance on regular graphs would seem to suggest that it learns to use low topological weights. However as Figure LABEL:figure:norm_ratio_big shows, the opposite is actually true.\nThis may seem counter-intuitive, but in the next section we theoretically show how this comes about."
34
+ },
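The four graph distributions used in this section can be sampled with networkx as sketched below. The graph size n = 50 and the distribution parameters are illustrative choices (parameter values also appearing in Table 2), not necessarily the exact experimental settings.

import networkx as nx

n = 50                                        # illustrative graph size
regular = nx.random_regular_graph(10, n)      # Regular (r = 10)
star    = nx.star_graph(n - 1)                # Star (one hub connected to all)
gnp     = nx.gnp_random_graph(n, p=0.5)       # GNP / Erdos-Renyi
ba      = nx.barabasi_albert_graph(n, m=3)    # BA preferential attachment

# Only the edge sets differ; attaching the same feature matrix X to each
# structure reproduces the controlled comparison used in this section.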
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Theoretical Analysis",
39
+ "text": "In the previous section, we saw that GNNs tend to overfit the graph-structure when it should be ignored.\nWe now turn to a theoretical analysis that sheds light on what GNNs learn when the ground truth teacher is graph-less.\nFor the sake of clarity, we state all theorems for a one-layer GNN with sum-pooling, no readout, and output dimension . For simplicity, we also assume no bias term in our analysis. All the proofs and extensions can be found in the Appendix."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Implicit bias of Gradient Descent for GNNs",
45
+ "text": "Let denote a training set of labeled graphs. Each instance in is a triplet , where is a stacked feature matrix of the node feature vectors, is the adjacency matrix, and is the class label (we consider binary classification).\nTo examine the solutions learned by GNNs,\nwe utilize Theorem 4 from Gunasekar et al. (2018 ###reference_b9###). This theorem states that homogeneous neural networks trained with GD on linearly separable data converge to a KKT point of a max-margin problem.\nTranslating this theorem to the GNN in our formulation, we get that gradient-based training will converge to the solution of the following problem:\nEquation 2 ###reference_### can be viewed as a max-margin problem in space, where the input vector is .\nTherefore, the graph input can be viewed as the sum of the node feature vectors concatenated with their weighted sum, according to the node degrees.\nWhen trained on -regular graphs, Equation 2 ###reference_### can be written as:\nThis can be viewed as a max-margin problem in where the input vector is . So the GNN is a linear classifier on the sum of the node features, but the regularizer is not the norm of the weights, because of the factor.\nThe next theorem shows that when a GNN is trained using\nGD on regular graphs, the learned root and topological weights are aligned.\nLet be a set of linearly separable -regular graph examples. A GNN trained with GD that fits perfectly converges to a solution such that . Specifically, the root weights and topological weights are aligned.\nWe prove Lemma 3.1 ###reference_theorem1### in the Appendix by analyzing the KKT conditions for first order stationary points of Equation 3 ###reference_###.\nThe next section will use this result to explain why regular graphs are better for learning graph-less teachers."
46
+ },
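The reduction behind Equation 2 is easy to verify numerically: a one-layer linear GNN with sum-pooling is an ordinary linear classifier over the feature map below. This is an illustrative NumPy sketch, not the authors' code.

import numpy as np

def margin_features(A, X):
    # Sum of node features, concatenated with their degree-weighted sum.
    deg = A.sum(axis=1)
    return np.concatenate([X.sum(axis=0), (deg[:, None] * X).sum(axis=0)])

# A one-layer linear GNN with weights (w1, w2) satisfies
#   output(A, X) == np.concatenate([w1, w2]) @ margin_features(A, X),
# so gradient descent implicitly solves a max-margin problem over these inputs.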
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Extrapolation with graph-less teachers",
51
+ "text": "In this section, we analyze what\nhappens when GNNs learn from training data generated by\na graph-less model (we refer to this as a graph-less teacher).\nAs we saw empirically in Section 2.2 ###reference_###, these learned models will sometimes\ngeneralize badly on test data. We begin with a theorem\nthat proves that such bad cases indeed exist. The theorem\nconsiders the extrapolation case, where the train and test distribution of\ngraphs is not the same, but is labeled by the same graph-less\nteacher. Had the GNN learned a graph-less model, it would\nhave had the same train and test performance (for infinite\ndata). However, we show this is not the case, indicating that GNNs can overfit the graph structure arbitrarily badly. In other words, they do not extrapolate.\nLet be a graph-less teacher. There exist graph distributions and , with node features drawn from the same fixed distribution, such that when learning a linear GNN with GD over infinite data drawn from and labeled with , the test error on lableled with will be . Namely, the model will fail to extrapolate.\nThe setting in the above result is that a graph-less ground truth teacher is learned using graphs from . Ideally, we would have liked GD to \u201cignore\u201d the graphs, so that the output of the learned model would not change when changing the support to . However, our result shows that when the graph distribution is changed to , performance is poor. This is in line with our empirical observations. The key idea in proving the result is to set such that it puts weights on isolated nodes, and thus exposes the fact that the learned function does not simply sum all nodes, as the graph-less solution does.\nDespite Theorem 3.2 ###reference_theorem2### showing GNNs may fail to extrapolate, the following result shows that GNNs are guaranteed to extrapolate within the family of regular distributions.\n\\commentAG:this also needs infinitely many samples. Otherwise of course you can fail to also interpolate. Need to change wording here. See proposal below\nAG:\nProposal for formulation:\nLet be a distribution over r-regular graphs and be a distribution over node features. Assume a training set of infinite size sampled from and and labeled with a graph-less teacher. Denote the learned model by .\nAssume that test examples are sampled from , a distribution over r\u2019-regular graphs, and . Then will have zero test error,\nLet be a distribution over r-regular graphs and be a distribution over node features. Assume a training set of infinite size sampled from and and labeled with a graph-less teacher. Denote the model learned with GD by .\nAssume that test examples are sampled from , a distribution over r\u2019-regular graphs, and . Then will have zero test error.\nLet be a set of linearly separable graph examples drawn from a distribution over -regular graphs, with binary labels. Assume that the teacher is graph-less. Then a GNN that fits perfectly will extrapolate to any distribution over -regular graphs for all values of .\nTo prove Theorem 3.3 ###reference_theorem3###, we utilize Equation 3 ###reference_### and Lemma 3.1 ###reference_theorem1###, and show that the direction of the weight vector used by the GNN does not change when the regularity degree is changed.\nIt was previously shown in Yehudai et al. (2020 ###reference_b26###) that when there is a certain discrepancy between the train and test distributions, GNNs may fail to extrapolate. 
The argument extends to our case, and therefore learning without GD could fail to generalize.\nWe next show that when learning without GD, Theorem 3.3 ###reference_theorem3### does not hold. In other words, there are solutions that fit the training set perfectly and will fail to extrapolate to any regular graph with a regularity degree different from the one of the training set. In the proof, we construct such a solution.\nLet be a set of examples drawn from an -regular graphs distribution and labeled with a graph-less teacher. Then there is a GNN that will fit perfectly and will produce the wrong label for when its graphs are changed to -regular, .\nIt was previously shown in Yehudai et al. (2020 ###reference_b26###) that when there is a certain discrepancy between the train and test distributions, GNNs may fail to extrapolate. Lemma 3.4 ###reference_theorem4### extends the setting of the result from Yehudai et al. (2020 ###reference_b26###).\nLet be a set of linearly separable graph examples drawn from an -regular graphs distribution, with binary labels. Assume that the ground truth function is graph-less.AG:let\u2019s avoid the linearly separable stuff. Just say that is labeled by a graph-less teacher. Then there is a GNN that will fit perfectly and will produce the wrong label for any -regular graphs with .AG:Not clear what distributions over features we are talking about here and what is the train and test distribution.\nAG:Alternative:\nLet be a set of examples drawn from an -regular graphs distribution and labeled with a graph-less teacher. Then there is a GNN that will fit perfectly and will produce the wrong label for when its graphs are changed to -regular.\n###figure_1###"
52
+ },
53
+ {
54
+ "section_id": "3.2.1",
55
+ "parent_section_id": "3.2",
56
+ "section_name": "3.2.1 Characterizing Extrapolation Accuracy",
57
+ "text": "Theorems 3.2 ###reference_theorem2### and 3.3 ###reference_theorem3### show extreme cases of good and bad extrapolation. We next examine what determines the extrapolation accuracy.\nFirst, we empirically observe that GNNs trained on regular graphs exhibit good extrapolation to other non-regular graph distributions as well, as presented in Table 2 ###reference_###.\nFor example, a GNN trained on -regular graphs, generalizes perfectly to GNP graphs, and there is a decrease in performance when tested on star-graphs. The training setup and more information on the graphs can be found in the Appendix.\nNext, we present a sufficient condition for extrapolation and empirically show on these test sets that indeed when the GNN successfully extrapolates, this sufficient condition holds.\nWe utilize Lemma 3.1 ###reference_theorem1### and write the GNN trained on -regular graphs in a new form acting on a test graph as:\nwhere , for any .\nThis notation shows that applying to a graph is equivalent to applying it to an -regular graph plus applying it to another -graph that depends on .\nUsing this notation, the following Theorem provides a sufficient condition for extrapolation. For simplicity, we state the results as extrapolation to the same training set, with modified graphs.\nAG:There\u2019s an issue here. It seems like you are assuming that\nthis model correctly classifies all r-regular graphs, not just on the training set. e.g., assume that the training set is of size . I added a revised version of this and changed the proof accordingly.\nAG:I suggest adding: For simplicity, we state the results as extrapolation to the same training set, with modified graphs.\n###table_2### Let be a set of -regular graphs examples, labeled with a graph-less teacher . Let denote a GNN trained with GD on .\nNow assume an instance has been modified to a different graph such that there exists an where:\nThen .\nLet be a set of -regular graphs examples, labeled with a graph-less teacher . Then a GNN that fits perfectly will correctly classify a test graph labeled by if there exists an such that\nAG:\nAlternative:\nLet be a set of -regular graphs examples, labeled with a graph-less teacher . 
Let denote a GNN trained with GD on .\nNow assume an instance in has been modified to a different graph such that there exists an where:\nThen .\nLet be a GNN that perfectly fits a training set of -regular graphs.\nGiven a test graph from some graph distribution, if there exists an such that\nWhere , then will classify correctly.\nTheorem 3.6 ###reference_theorem6### suggests that applying the GNN to graphs that are \u201ccloser\u201d to regular graphs, i.e., have smaller , results in better extrapolation.\nTo prove it, we show that when these conditions hold, the extrapolation is guaranteed from Theorem 3.3 ###reference_theorem3###.\nNext, we empirically show that indeed all the samples that were classified correctly in Table 2 ###reference_### satisfy this condition of Theorem 3.6 ###reference_theorem6###.\nFigure 1 ###reference_### presents histograms of the values of the ratio in Theorem 3.6 ###reference_theorem6### for every example that is correctly classified, over the test examples presented in Table 2 ###reference_###.\nWe do not include regular graphs in the histograms, because extrapolation withing regular graphs is guaranteed from Theorem 3.6 ###reference_theorem6###.\nThe ratio is computed for the that minimizes the denominator of the ratio.\nIndeed, all the ratios are less than 1, and therefore the sufficient condition holds. These results demonstrate that indeed \u201ccloseness to a regular\u201d graph is an important determinant in extrapolation accuracy."
58
+ },
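The sufficient-condition ratio of Theorem 3.6 can be evaluated as sketched below. This illustrative version simply minimizes the ratio over a grid of candidate r' values and assumes the regular component is non-zero; it is not the authors' evaluation script.

import numpy as np

def condition_ratio(A, X, w1, w2, r_candidates):
    # Ratio of the epsilon-component to the regular component of the
    # GNN output; a value below 1 certifies correct classification.
    s, deg = X.sum(axis=0), A.sum(axis=1)
    ratios = []
    for r in r_candidates:
        eps = deg - r
        num = abs(w2 @ (eps[:, None] * X).sum(axis=0))
        den = abs((w1 + r * w2) @ s)
        ratios.append(num / den)
    return min(ratios)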
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Are Regular Graphs Better\nwhen Graphs are Useful?",
63
+ "text": "In the previous sections, we showed that regular graphs exhibit robustness to the tendency of GNNs to overfit non-informative graphs that should be completely ignored.\nIn this section, we examine if regular graphs are also beneficial in scenarios when the graph may be informative. We perform an empirical evaluation on real-world data, where we do not know in advance if the graph is indeed informative or not. We compare the performance of the same method, when trained on the original graph, and on the same graph when transformed to be \u201cmore regular\".111The code is available on https://github.com/mayabechlerspeicher/Graph_Neural_Networks_Overfit_Graphs ###reference_ph_Neural_Networks_Overfit_Graphs###"
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Datasets",
69
+ "text": "We used graph datasets, including two large-scale datasets, which greatly differ in their average graph size and density, the number of node features, and the number of classes.\nEnzymes, D&D, Proteins, NCI1 (Shervashidze et al., 2011 ###reference_b20###) are datasets of chemical compounds where the goal is to classify each compound into one of several classes. \nIMDB-B, IMDB-M, Collab, Reddit-B, Reddit-5k (Yanardag & Vishwanathan, 2015 ###reference_b25###) are social network datasets.\nmol-hiv, mol-pcba (Hu et al., 2020 ###reference_b11###) are large-scale datasets of molecular property prediction.\nMore information on the datasets and their statistics can be found in the Appendix.\nFor each model and for each task, we evaluate the model twice: on the original graph provided in the dataset (Original Graph) and on the original graph with the COV reduced (R-COV).\nBecause different graphs have different COVs, we set COV to a fixed percentage of the original average COV of each dataset separately. The percentage is a hyprparameter, and we tested the values .\nWe also include as a baseline the performance when the graph-structure is omitted (Empty Graphs) which is equivalent to using DeepSets (Zaheer et al., 2018 ###reference_b27###).\nFor all the datasets except mol-hiv and mol-pcba we used -fold nested cross validation with the splits and protocol of Errica et al. (2022 ###reference_b7###).\nThe final reported result on these datasets is an average of runs (-folds and random seeds).\nThe mol-hiv and mol-pcba datasets have pre-defined train-validation-test splits and metrics Hu et al. (2020 ###reference_b11###). The metric of mol-hiv is the test AUC averaged over runs with random seeds. The metric of mol-pcba the metric is the averaged precision (AP) over its tasks.\nAdditional details and the hyper-parameters are provided in the Appendix.\nAcross all datasets and all models, reducing the COV of the graphs improves generalization.\nParticularly intriguing outcomes are obtained in the PROTEINS and IMDB-M datasets. Within these two datasets, superior performance is attained when learning over empty graphs in comparison to the provided graphs. Nonetheless, reducing the COV improves performance also with respect to the empty graphs. This observation suggests that the structural information inherent in the data is indeed informative, yet the GNN fails to exploit it correctly as it is.\nAs we see consistent improvement when the COV is reduced, we further examined if this improvement is monotone with respect to the COV reduction. We evaluated the Proteins dataset with an increasing percentage of COV reduction, up to the full graph.\nIndeed as shown in Figure 2 ###reference_###, the performance keeps improving as the COV is reduced. This is in alignment with the results of Alon & Yahav (2021 ###reference_b1###) where a full-graph was used in the last layer of the network to allow better information flow between nodes of long distance. Note that in our case we also distinguish the original edges with the added edges using edge features, and allow the network to ignore the added edges. Clearly, using a full graph comes with a computational cost, a problem that also arises when using full-graph transformers.\nOur results suggested that improvement in generalization can be achieved also without the cost of using the full graph. Practically, one can limit the percentage of reduced COV according to their computation limit in advance.\n###figure_2###"
70
+ },
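Assuming COV denotes the coefficient of variation of the node degrees (consistent with a regular graph being the zero-COV extreme), the quantity being reduced can be computed as below. This is an illustrative sketch only; the exact R-COV edge-addition procedure is not reproduced here.

import numpy as np
import networkx as nx

def degree_cov(G):
    # Coefficient of variation (std / mean) of the degree sequence.
    # A d-regular graph has COV 0; a lower COV means "more regular".
    deg = np.array([d for _, d in G.degree()])
    return deg.std() / deg.mean()

print(degree_cov(nx.barabasi_albert_graph(100, 3)))  # skewed degrees, high COV
print(degree_cov(nx.random_regular_graph(4, 100)))   # exactly 0.0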
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Practical Implications",
75
+ "text": "In practice, possible graph structures are typically determined based on domain knowledge, and it is common to explore multiple possible structures.\nIn some cases, a natural graph-structure inherently exists within the data, such as in social networks, where the network connections naturally define the graph layout. Nevertheless, it is usually not clear in advance if these graph layouts are informative for the task, or if the GNN will manage to exploit them.\nThe fact that certain layouts may provide valuable information for the task while others might not, and this distinction isn\u2019t clear beforehand, was the driving question for our research.\nIndeed we found that the definition of the graph-structure, typically determined by users, emerges as a pivotal factor in performance outcomes due to the tendency of GNNs to overfit the provided graph.\nThis revelation opens up a fascinating avenue for further research into the significance of topological information during the training of GNNs. Understanding how GNNs respond to different structural layouts and why certain graph-structures are more effective than others could significantly impact the way we design and train these models."
76
+ },
77
+ {
78
+ "section_id": "6",
79
+ "parent_section_id": null,
80
+ "section_name": "Future Work",
81
+ "text": "We believe this work opens up many new avenues for exploration.\nOne simple takeaway from our paper is to always try learning a model over empty graphs as well, i.e., using DeepSets (Zaheer et al., 2018 ###reference_b27###).\nWhen the graph is known to have little contribution to the task, regularizing the topological weights may be useful.\nThe main difficulty is finding ways to improve the GNN\u2019s ability to exploit useful information from the graph if it exists and ignore it otherwise, without prior knowledge.\nWhile we show in Section 4 ###reference_### that reducing the graph\u2019s COV can enhance performance, there may be other ways to mitigate the graph overfitting.\nIn recent years many methods were introduced to mitigate different phenomena that limit GNNs performance (Rong et al., 2019 ###reference_b18###; Alon & Yahav, 2021 ###reference_b1###). It is interesting to examine whether these methods are useful in mitigating graph overfitting.\nAnother interesting avenue for future research is analyzing the implicit bias of non-linear GNNs, including Graph Attention and Transformers."
82
+ },
83
+ {
84
+ "section_id": "7",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "In this study, we showed that although GNNs have the ability to disregard the provided graph when needed, they don\u2019t. Instead, GNNs tend to overfit the graph-structures, which results in reduced performance.\nWe theoretically analyzed the implicit bias of gradient-descent learning of GNNs, and proved that even with infinite data, GNNs are not guaranteed to learn a solution that ignores the graph, when the graph should be ignored.\nWe showed that regular graphs are more robust to graph overfitting, and provided a theoretical explanation and extrapolation results for this setting.\nOur study shows that in some cases, the graph structure hurts the performance of GNNs, and therefore graph selection is of great importance, as well as having a model that can ignore the graph when needed."
88
+ },
89
+ {
90
+ "section_id": "8",
91
+ "parent_section_id": null,
92
+ "section_name": "Acknowledgements",
93
+ "text": "This work was supported by a grant from the Tel Aviv University Center for AI and Data Science (TAD) and by the Israeli Science Foundation research grant 1186/18."
94
+ }
95
+ ],
96
+ "appendix": [
97
+ {
98
+ "section_id": "Appendix 1",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix A Proofs and Extensions",
101
+ "text": "All our analysis assumes that train and test data are labeled via some graph-less teacher. Namely, a function that classifies a graph instance based only on its features and not the graph. We let be defined via a weight vector as follows:\nFor the sake of simplicity, we assume that all the hidden states are of dimension .\nWe denote the number of vertices with , the number of samples with , and denote for a set of node feature vectors .\n is the feature vector of node in the graph sample .\n\\commentAG:you may also want to define the graphless teacher here since you refer to it in many places. You can say that we assume this teacher throughout the proofs.\nCan be like:\nAll our analysis assumes that train and test data are labeled via some graph-less teacher. Namely a function that classifies a graph instances based only on its features adn not the graph. We let be defined via a weight vector as follows:\nFor the sake of simplicity, we begin by proving the simplest case of a GNN with one layer and no readout. Then we extend the proof to the case of readout and multiple layers.\nIn -regular graphs, for all nodes . Therefore Equation 2 ###reference_### can be written as:\nNow writing the KKT stationarity condition:\nTherefore . \u220e\nHere we prove Theorem 3.2 by showing providing distributions P1 and P2 such that GNNs trained on graphs from P1, will fail to extrapolate to graphs from P2. We consider the case where P1 is a distribution over r-regular graphs and P2 is a distribution over star graphs with a random center node. The key intuition in our proof is that learning with P1 will learn a model that averages over nodes. But when testing it on P2, mostly the center node will have to determine the label, and this will typically result in an error. We will take the graph size to to simplify the analysis, but results for finite graphs with high probability can be obtained using standard concentration results.\n\\comment\nAG:This isn\u2019t clearly mapped to the theorem. I don\u2019t see I don\u2019t see a test error and I don\u2019t see \u2026\ne.g. you can write Here we prove Theorem 3.2 by showing providing distributions P1 and P2 such that GNNs trained on graphs from P1, will fail to extrapolate to graphs from P2. We consider the case where P1 is a distribution over r-regular graphs and P2 is a distribution over star graphs with a random center node. The key intuition in our proof is that learning with P1 will learn a model that averages over nodes. But when testing it on P2, mostly the center node will have determine the label, and this will typically result in an error. We will take the graph size to to simplify the analysis, but results for finite graph with high probability can be obtained using standard concentration results.\nLet be a graph-less function, .\nWe assume that labels the training graphs, and graphs are drawn from , namely a distribution over r-regular graphs.\nLet be the learned function when trained with infinite data on -regular graphs, with node features with dimension drawn from .\nThen , where and (following Lemma 3.1 ###reference_theorem1###) are the learned parameters.\nWe now proceed to show that extrapolation to fails in this case.\nLet be a star graph, with features of dimension drawn from , and assume w.l.o.g. 
that the center node of the star has index .\nThen applying the (learned on ) to this can be written as\nWe will first show that when the number of vertices grows to infinity, the sign is determined by the first (central) node.\nDenote\nWe will show that the correlation coefficient between and goes to as the number of vertices approaches infinity.\nIt holds that\nAs and are fully correlated, they have the same sign.\nWe will now show that when approaches infinity, the probability that will have a different sign from is , and therefore conclude that the error on is as specified in the theorem.\nWe will do so by showing that the correlation coefficient between and converges to .\nWe conclude that a model trained on will fail to extrapolate to .\n\u220e\nWe consider the case of a graph-less teacher as in (5 ###reference_###) We wish to show that if the training data consists of infinitely many samples from a distribution over r-regular graphs, then the learned model will extrapolate perfectly to a distribution over r\u2019-regular graphs. We assume the same feature distribution in all cases.\n\\commentAG:this is not a new assumption. You can\u2019t make assumptions in the middle of the proof. The theorem states that the training data is separated by the learned model. Or it should say if it doesn\u2019t.\nAG:Again this needs to be mapped to the thing you want to prove. Explain why what you do proves it.AG:Say something like: We consider the case of a graph-less teacher as in (5 ###reference_###) We wish to show that if the training data consists of infinitely many samples from a distribution over r-regular graphs, then the learned model will extrapolate perfectly to a distribution over r\u2019-regular graphs. We assume the same feature distribution in all cases.\nLet be a graph dataset with features drawn from a distribution and graphs drawn from an -regular graph distribution .\nAssume that the label generator function of , denoted by , is generated by a teacher GNN with , i.e., for all , .\n\\commentas follows from Equation 3 ###reference_###.\nLet be a minimizer of Equation 3 ###reference_### on the training distribution. Then has perfect accuracy on the support of the training distribution (ie it is equal to the graph-less teacher there).\nLet be in the support of the training distribution. Then:\n(*) Follows from Theorem 3.1 by substituting .\n(**) Follows from the fact that the direction of and is the same. (***) Follows from the fact that is equal to on the training distribution.\n\\commentAG:you didn\u2019t specify this assumption. Need to be clearer here about it.\nNow let be an -regular graph example, with features drawn from .\nFollowing Equation 2, we get that:\n(***) Follows from the assumption that the features are drawn from .\nWe thus have that all instances drawn from the test distribution of r\u2019-regular graphs are classified correctly, and therefore we have perfect extrapolation in this case.\n\u220e\\comment\nWe saw that a linear GNN trained on a distribution with regular graphs of degree and labeled via a linear function generalizes to any . Here we show that there exist \u201cbad solutions\u201d that solve the in-distribution problem (i.e r-regular graphs) but do not generalize to r\u2019-regular graphs. We note that if one does not train with GD, one may learn these bad solutions, and generalize poorly. 
This highlights the importance of training with GD because of its implicit bias in this case.\nLet be a graph dataset labeled by a graph-less teacher of\n\\commenta linearly separableAG:I dont\u2019 think you should say linearly separable. You should say it is labeled by a graph-less teacher.\n-regular graph dataset with being the node features and .\nWe will now show that there exists a GNN with parameters that fits perfectly and fails to generalize for any graph with regularity degree .\nLet be some classifier\\commentAG:not clear? Where is this coming from? Is this the graph-less teacher? with unit margin obtained on S, i.e., and .\n\\commentAG:why not say ? Why do you need both?\nSet which implies that:\nLet be an -regular graph, with .\nLet then .\nThen a GNN with parameters and fits with accuracy , but\nTherefore the above GNN will have an error of on all r\u2019-regular graphs, and fail to extrapolate.\nThe result for can be shown similarly.\nThe result also applies when \n\u220e\nLet be a graph-less function as in (5 ###reference_###), and\n be a GNN minimizing Equation 3 ###reference_###, on a training set of -regular graph examples. Assume we have modified an example in from to .\nLet . Let and let .\nThen using Equation 2 ###reference_###:\nWhere (*) follows from Theorem 3.1.\nNow assume there exists an such that:\nTherefore the -component is small with respect to the regular component, and can be dropped below, because it doesn\u2019t change the sign.\nWhere follows from Theorem 3.3."
102
+ },
103
+ {
104
+ "section_id": "Appendix 2",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix B Additional Experimental Results",
107
+ "text": "The validation of Theorem 3.1 is presented in Figure 3 ###reference_###. We plot the ratio between the topological weights and root weights, during the training of linear GNNs with one or two layers, with readout. The GNNs are trained on regular graphs with different regularity degrees. In all cases, the ratio converges to the regularity degree, as guaranteed by Theorem 3.1.\n###figure_3### In Section 2.2 we presented an empirical evaluation of the model from Gilmer et al. (2017 ###reference_b8###) as described in Equation 1 ###reference_###. Here we provide the results of the same evaluation with more GNNs. All models show similar trends as presented in the main paper.\nThe results are shown in Figures 4 ###reference_###\n(GIN (Xu et al., 2019 ###reference_b24###)), 5 ###reference_### (GAT (Veli\u010dkovi\u0107 et al., 2018 ###reference_b23###)), 6 ###reference_### (Graph Transformer (Shi et al., 2021 ###reference_b21###)) and 7 ###reference_### (GraphConv with Normalized Neighbor Aggregation (Gilmer et al., 2017 ###reference_b8###)).\n###figure_4### ###figure_5### ###figure_6### ###figure_7### We evaluated the learning curve in a teacher-student setup of a graph classification task, where the teacher is graph-less GNN.\nThe teacher readout is sampled once from to generate the train, validation and test labels. The training graph is over nodes and the validation and test graphs are over nodes. Each node is assigned with a feature vector in sampled i.i.d from .\nFigure 8 ###reference_### shows that also in this case, although the teacher does not use the graph, giving the model different graphs affects generalization. Therefore also in this case, the GNN overfits the given graph although it should be ignored.\n###figure_8### In Section 2.3 for the sake of presentation, we presented only one curve from each distribution. Figure 9 ###reference_### presents the learning curve of all the distributions we tested, with multiple parameters for each distribution.\nAdditionally, in Figure 10 ###reference_### we present the weights norms of the root and topological weights separately, for the curves presented in the main paper.\n###figure_9### ###figure_10###"
108
+ },
109
+ {
110
+ "section_id": "Appendix 3",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix C Additional Experimental Details",
113
+ "text": "The teacher readout is sampled once from and used for all the graphs.\nAll graphs have nodes, and each node is assigned with a feature vector in sampled i.i.d from .\nFor the Sum task, we used a -layer \u201cstudent\" GNN following the teacher model, with readout and ReLU activations. For the PROTEINS and ENZYMES tasks, we used -layers.\nWe evaluated the learning curve with an increasing amount of samples.\nWe note that the GNN has a total of 16,000 parameters, and thus it is overparameterized and can fit the training data with perfect accuracy.\nWe trained a one-layer linear GNN with readout on -regular graphs over nodes.\nWe then applied it to a test sets presented in Table 2 ###reference_###. Each test set contains graph examples and each graph has . All the test sets share the same node features and differ in the graph structure, which is drawn from different graph distributions.\n###table_3### The dataset statistics are summarized in Table 4 ###reference_###.\nIMDB-B & IMDB-M (Yanardag & Vishwanathan, 2015 ###reference_b25###) are movie collaboration datasets. Each graph is derived from a genre, and the task is to predict this genre from the graph. Nodes represent actors/actresses and edges connect them if they have appeared in the same movie.\nProteins, D&D &Enzymes (Shervashidze et al., 2011 ###reference_b20###; Dobson & Doig, 2003 ###reference_b5###) are datasets of chemical compounds. The goal in the first two datasets is to predict whether a compound is an enzyme or not, and the goal in the last datasets is to classify the type of an enzyme among classes.\nNCI1 (Shervashidze et al., 2011 ###reference_b20###) is a datasets of chemical compounds. Vertices and edges represent atoms and the chemical bonds between them. The graphs are divided into two classes according to their ability to suppress or inhibit tumor growth.\nCollab (Morris et al., 2020 ###reference_b16###) is a scientific collaboration dataset. A graph corresponds to a researcher\u2019s ego network, i.e., the researcher and their collaborators are nodes and an edge indicates collaboration between two researchers. A researcher\u2019s ego network has three possible labels, which are the fields that the researcher belongs to.\nReddit-B, Reddit-5k (Morris et al., 2020 ###reference_b16###) are datasets of Reddit posts from the month of September 2014, with binary and multiclass labels, respectively. The node label is the community, or \u201csubreddit\", that a post belongs to. large communities have been sampled to build a post-to-post graph, connecting posts if the same user comments on both.\nmol-hiv, mol-pcba (Hu et al., 2020 ###reference_b11###) are large-scale datasets of molecular property prediction.\nFollowing Errica et al. (2022 ###reference_b7###), we added a feature of the node degrees for datasets which have no node features at all.\nAll GNNs use ReLU activations with layers and hidden channels. They were trained with Adam optimizer over epochs and early on the validation loss with a patient of steps, eight Decay of , learning rate in }, dropout rate in , and a train batch size of .\nThe preserved COV is among {80%, 50%}.\nAll the experiments\u2019 code including the random seeds generator is provided in the code Appendix\nIn Section 4 ###reference_### when evaluating graphs with reduced COV, we add edge features to differ between the original and added edges. We adapt each neighbor\u2019s aggregation component to process this edge information in a non-linear way."
114
+ }
115
+ ],
116
+ "tables": {
117
+ "1": {
118
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.18.5.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S2.T1.8.4\" style=\"font-size:90%;\">The accuracy of a fixed GNN architecture, trained once on the given graphs in the data (GNN) and once on the same data where the graph-structure is omitted (), i.e., on empty graphs. The solution of is realizable by , and the only difference between the runs is the given graph-structures. This suggests that the decreased performance of is due to graph-structure overfitting.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T1.16\">\n<tr class=\"ltx_tr\" id=\"S2.T1.16.9\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S2.T1.16.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.16.9.2\">Sum</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.16.9.3\">Proteins</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.16.9.4\">Enzymes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.12.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.9.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.10.2.2\">94.5 0.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.11.3.3\">67.4 1.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.12.4.4\">55.2 3.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.16.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S2.T1.13.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.14.6.2\">97.5 0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.15.7.3\">74.1 2.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.16.8.4\">64.1 5.7</td>\n</tr>\n</table>\n</figure>",
119
+ "capture": "Table 1: The accuracy of a fixed GNN architecture, trained once on the given graphs in the data (GNN) and once on the same data where the graph-structure is omitted (), i.e., on empty graphs. The solution of is realizable by , and the only difference between the runs is the given graph-structures. This suggests that the decreased performance of is due to graph-structure overfitting."
120
+ },
121
+ "2": {
122
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T2.12.2.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S3.T2.2.1\" style=\"font-size:90%;\">Accuracy of a GNN trained on -regular graphs and tested on different distribution shifts. The GNN extrapolates perfectly to regular graph distributions, as guaranteed by Theorem\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.04332v2#S3.Thmtheorem3\" title=\"Theorem 3.3 (Extrapolation within regular distributions). \u2023 3.2 Extrapolation with graph-less teachers \u2023 3 Theoretical Analysis \u2023 Graph Neural Networks Use Graphs When They Shouldn\u2019t\"><span class=\"ltx_text ltx_ref_tag\">3.3</span></a>.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.10\">\n<tr class=\"ltx_tr\" id=\"S3.T2.10.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T2.10.9.1\">Test distribution</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.10.9.2\">Accuracy</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.3.1.2\">Regular (r=10)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.1.1\">100 0.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.4.2.2\">Regular (r=15)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.2.1\">100 0.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.5.3.2\">GNP (p=0.2)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.3.1\">100 0.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.6.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.6.4.2\">GNP (p=0.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.6.4.1\">100 0.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.7.5.2\">GNP (p=0.8)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.5.1\">100 0.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.8.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.8.6.2\">BA (m=3)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.8.6.1\">98.0 1.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.9.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.9.7.2\">BA (m=15)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.9.7.1\">93.2 0.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.10.8.2\">Star Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.10.8.1\">75.9 1.1</td>\n</tr>\n</table>\n</figure>",
123
+ "capture": "Table 2: Accuracy of a GNN trained on -regular graphs and tested on different distribution shifts. The GNN extrapolates perfectly to regular graph distributions, as guaranteed by Theorem\u00a03.3."
124
+ },
125
+ "3": {
126
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.3.2\" style=\"font-size:90%;\">Performance of different GNNs when trained on the original graphs versus when the COV of the graphs is reduced. The best model is in bold and with an underline in cases where the p-value &lt; 0.05 using the Wilcoxon signed-rank test.</span></figcaption><div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_flex_size_1 ltx_align_middle\" id=\"S4.T3.4\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T3.4.1.1\">Model</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T3.4.1.2\">Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.4.1.3\">Proteins</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.4.1.4\">NCI1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.4.1.5\">Enzymes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.4.1.6\">D&amp;D</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.4.1.7\">mol-hiv</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.4.1.8\">mol-pcba</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.2.1\">DeepSet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.2.2\">Empty Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.3\">74.1 \u00b1 2.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.4\">72.8 \u00b1 2.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.5\">64.2 \u00b1 3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.6\">77.5 \u00b1 2.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.7\">69.5 \u00b1 2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.8\">15.0 \u00b1 0.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.3.1.1\">GraphConv</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.3.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.3.3\">73.1 \u00b1 1.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.3.4\">76.5 \u00b1 1.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.3.5\">58.2 \u00b1 2.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.3.6\">72.5 \u00b1 1.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.3.7\">78.2 \u00b1 3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.3.8\">20.5 \u00b1 0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.4.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.2.1.1\">75.5 \u00b1 1.8</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.3\"><span 
class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.4.4.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.3.1.1\">80.1 \u00b1 0.9</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.4.4.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.1.1\">61.0 \u00b1 1.5</span></span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.5.1\">74.8 \u00b1 2.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.6\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.4.6.1\">80.9 \u00b1 1.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.7\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.4.4.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.7.1.1\">22.8 \u00b1 0.5</span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.5.1.1\">GIN</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.5.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.3\">72.2 \u00b1 2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.4\">79.2 \u00b1 1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.5\">58.9 \u00b1 1.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.6\">74.5 \u00b1 2.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.7\">77.0 \u00b1 1.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.5.8\">21.1 \u00b1 0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.6.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.2\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.6.2.1\">74.8 \u00b1 2.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.6.3.1\">80.0 \u00b1 1.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.6.4.1\">59.7 \u00b1 1.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.6.5.1\">75.7 \u00b1 3.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.6.6.1\">77.9 \u00b1 1.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.6.7.1\">21.5 \u00b1 0.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.7.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.7.1.1\">GATv2</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.7.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.7.3\">73.5 \u00b1 2.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.7.4\">80.4 \u00b1 1.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.7.5\">59.9 \u00b1 2.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.7.6\">70.6 \u00b1 4.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.7.7\">78.7 \u00b1 2.5</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T3.4.7.8\">23.5 \u00b1 0.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.8.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.8.2.1\">76.5 \u00b1 2.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.3\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.8.3.1\">83.0 \u00b1 1.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.4\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.8.4.1\">63.9 \u00b1 3.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.8.5.1\">73.9 \u00b1 1.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.6\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.8.6.1\">80.9 \u00b1 2.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.8.7\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.8.7.1\">24.3 \u00b1 0.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S4.T3.4.9.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.9.1.1\">GraphTransformer</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.9.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.9.3\">73.9 \u00b1 1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.9.4\">80.5 \u00b1 1.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.9.5\">60.9 \u00b1 2.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.9.6\">74.1 \u00b1 1.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.9.7\">80.5 \u00b1 2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.9.8\">29.1 \u00b1 0.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.4.10.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.10.2\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.10.2.1\">76.7 \u00b1 1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.10.3\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.10.3.1\">83.1 \u00b1 1.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.10.4\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.10.4.1\">64.0 \u00b1 1.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.10.5\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.10.5.1\">77.1 \u00b1 1.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.10.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.10.6.1\">82.4 \u00b1 1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.10.7\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.4.10.7.1\">30.5 \u00b1 0.2</span></td>\n</tr>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_flex_size_1 ltx_align_middle\" id=\"S4.T3.5\">\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" 
id=\"S4.T3.5.1.1\">Model</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T3.5.1.2\">Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.5.1.3\">IMDB-B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.5.1.4\">IMDB-M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.5.1.5\">Collab</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.5.1.6\">Reddit-B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.5.1.7\">Reddit-5k</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.2.1\">DeepSet</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.2.2\">Empty Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.2.3\">70.0 \u00b1 3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.2.4\">48.2 \u00b1 2.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.2.5\">71.2 \u00b1 1.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.2.6\">80.9 \u00b1 2.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.2.7\">52.1 \u00b1 1.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.5.3.1.1\">GraphConv</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.3.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.3.3\">69.6 \u00b1 1.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.3.4\">47.5 \u00b1 1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.3.5\">73.5 \u00b1 1.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.3.6\">83.2 \u00b1 1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.3.7\">50.0 \u00b1 2.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.5.4.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.4.2\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.4.2.1\">72.9 \u00b1 0.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.4.3\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.4.3.1\">50.0 \u00b1 1.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.4.4.1\">74.2 \u00b1 2.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.4.5\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.4.5.1\">87.0 \u00b1 1.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.4.6\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.4.6.1\">52.5 \u00b1 1.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.5.5.1.1\">GIN</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.5.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.3\">70.1 \u00b1 2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.4\">48.1 \u00b1 2.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.5\">75.3 \u00b1 2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.6\">89.1 \u00b1 
2.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.7\">56.1 \u00b1 1.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.5.6.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.6.2\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.6.2.1\">71.3 \u00b1 1.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.6.3.1\">48.5 \u00b1 1.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.6.4\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.6.4.1\">77.2 \u00b1 2.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.6.5.1\">91.0 \u00b1 1.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.6.6.1\">56.7 \u00b1 0.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.7.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.5.7.1.1\">GATv2</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.7.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.7.3\">72.8 \u00b1 0.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.7.4\">48.4 \u00b1 2.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.7.5\">73.9 \u00b1 1.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.7.6\">90.0 \u00b1 1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.7.7\">56.4 \u00b1 1.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.5.8.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.8.2\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.8.2.1\">75.8 \u00b1 1.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.8.3\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.8.3.1\">50.8 \u00b1 1.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.8.4.1\">75.1 \u00b1 1.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.8.5\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.8.5.1\">92.1 \u00b1 0.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.8.6.1\">57.0 \u00b1 0.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S4.T3.5.9.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.5.9.1.1\">GraphTransformer</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.5.9.2\">Original Graph</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.9.3\">73.1 \u00b1 1.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.9.4\">49.0 \u00b1 1.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.9.5\">73.8 \u00b1 1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.9.6\">90.6 \u00b1 1.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.9.7\">51.4 \u00b1 1.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" 
id=\"S4.T3.5.10.1\">Original Graph + R-COV</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.5.10.2\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.10.2.1\">76.1 \u00b1 2.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.5.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.3.1\">51.1 \u00b1 2.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.5.10.4\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.10.4.1\">76.0 \u00b1 1.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.5.10.5\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.10.5.1\">92.3 \u00b1 1.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.5.10.6\"><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"S4.T3.5.10.6.1\">56.0 \u00b1 1.2</span></td>\n</tr>\n</table>\n</div>\n</div>\n</figure>",
127
+ "capture": "Table 3: Performance of different GNNs when trained on the original graphs versus when the COV of the graphs is reduced. The best model is in bold and with an underline in cases where the p-value < 0.05 using the Wilcoxon signed-rank test."
128
+ },
129
+ "4": {
130
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A3.T4.2.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"A3.T4.3.2\" style=\"font-size:90%;\">Statistics of the real-world datasets used in our evaluation.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A3.T4.4\">\n<tr class=\"ltx_tr\" id=\"A3.T4.4.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.1.1\">Dataset</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.1.2\"># Graphs</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.1.3\">Avg # Nodes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.1.4\">Avg # Edges</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.1.5\"># Node Features</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.1.6\"># Classes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T4.4.2.1\">Proteins</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T4.4.2.2\">1113</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T4.4.2.3\">39.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T4.4.2.4\">72.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T4.4.2.5\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T4.4.2.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.3.1\">NCI1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.3.2\">4110</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.3.3\">29.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.3.4\">32.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.3.5\">37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.3.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.4.1\">Enzymes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.4.2\">600</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.4.3\">32.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.4.4\">62.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.4.5\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.4.6\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.5.1\">D&amp; D</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.5.2\">1178</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.5.3\">284.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.5.4\">715.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.5.5\">89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.5.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.6.1\">IMDB-B</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.6.2\">1000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.6.3\">19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.6.4\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.6.5\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.6.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.7.1\">IMDB-M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.7.2\">1500</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.7.3\">13</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"A3.T4.4.7.4\">65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.7.5\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.7.6\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.8.1\">Collab</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.8.2\">5000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.8.3\">74.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.8.4\">2457.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.8.5\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.8.6\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.9.1\">Reddit-B</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.9.2\">2000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.9.3\">429.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.9.4\">497.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.9.5\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.9.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.10.1\">Reddit-5k</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.10.2\">4999</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.10.3\">508.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.10.4\">594.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.10.5\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.10.6\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T4.4.11.1\">mol-hiv</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.11.2\">41,127</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.11.3\">25.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.11.4\">27.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.11.5\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T4.4.11.6\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T4.4.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A3.T4.4.12.1\">mol-pcba</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T4.4.12.2\">437,929</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T4.4.12.3\">26.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T4.4.12.4\">28.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T4.4.12.5\">9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T4.4.12.6\">2 (128 tasks)</td>\n</tr>\n</table>\n</figure>",
131
+ "capture": "Table 4: Statistics of the real-world datasets used in our evaluation."
132
+ }
133
+ },
134
+ "image_paths": {
135
+ "1": {
136
+ "figure_path": "2309.04332v2_figure_1.png",
137
+ "caption": "Figure 1: The ratios histogram for test examples that are correctly classified in the extrapolation evaluation presented in Table 2. The condition in Theorem 3.6 is met for all the correctly classified examples.",
138
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/extrapolation_ratio.png"
139
+ },
140
+ "2": {
141
+ "figure_path": "2309.04332v2_figure_2.png",
142
+ "caption": "Figure 2: Accuracy and error bars of the Proteins datasets as the COV reduces. The performance is monotonically improving.",
143
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/prot_rcov.png"
144
+ },
145
+ "3": {
146
+ "figure_path": "2309.04332v2_figure_3.png",
147
+ "caption": "Figure 3: An empirical validation of Theorem 3.1. The ratio between the topological and root weights is equal to the regularity degree of the graphs. V is the number of nodes in each graph, and r is the regularity degree.",
148
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/regular_ratio.png"
149
+ },
150
+ "4": {
151
+ "figure_path": "2309.04332v2_figure_4.png",
152
+ "caption": "Figure 4: Evaluation of the GIN (Xu et al., 2019) model on the Sum task where the graph should be ignored, as described in Section 2.2 in the main paper.",
153
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/gin_plot.jpeg"
154
+ },
155
+ "5": {
156
+ "figure_path": "2309.04332v2_figure_5.png",
157
+ "caption": "Figure 5: Evaluation of the GAT (Veli\u010dkovi\u0107 et al., 2018) model on the Sum task where the graph should be ignored, as described in Section 2.2 in the main paper.",
158
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/gat_plot.jpeg"
159
+ },
160
+ "6": {
161
+ "figure_path": "2309.04332v2_figure_6.png",
162
+ "caption": "Figure 6: Evaluation of the Graph Transformer (Shi et al., 2021) model on the Sum task where the graph should be ignored, as described in Section 2.2 in the main paper.",
163
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/transormer_plot.jpeg"
164
+ },
165
+ "7": {
166
+ "figure_path": "2309.04332v2_figure_7.png",
167
+ "caption": "Figure 7: Evaluation of the same model presented in Equation 1 (Gilmer et al., 2017) with normalized neighbor aggregation on the Sum task where the graph should be ignored, as described in Section 2.2 in the main paper.",
168
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/mean_agg_plot.jpeg"
169
+ },
170
+ "8": {
171
+ "figure_path": "2309.04332v2_figure_8.png",
172
+ "caption": "Figure 8: Evaluation of the same model presented in Equation 1 (Gilmer et al., 2017) on the Sum task for node classification where the graph should be ignored.",
173
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/node_task.png"
174
+ },
175
+ "9": {
176
+ "figure_path": "2309.04332v2_figure_9.png",
177
+ "caption": "Figure 9: The learning curves of the same GNN model trained on graphs that have the same node features and only differ in\ntheir graph-structure. The label is computed via a graphless teacher. If GNNs were to ignore the non-informative graph-structure they were given, similar performance should\nhave been observed for all graph distributions. Among the different distributions, regular graphs exhibit the best performance.",
178
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/all_learning_curves.png"
179
+ },
180
+ "10": {
181
+ "figure_path": "2309.04332v2_figure_10.png",
182
+ "caption": "Figure 10: The weights norm of the topological (dashed) and the root (smooth) weights along the same runs. On the empty graphs, the topological\nweights are not trained and the ratio is 0 due to initialization.",
183
+ "url": "http://arxiv.org/html/2309.04332v2/extracted/5430783/images/weight_norms_non_linear.png"
184
+ }
185
+ },
186
+ "validation": true,
187
+ "references": [
188
+ {
189
+ "1": {
190
+ "title": "On the bottleneck of graph neural networks and its practical implications, 2021.",
191
+ "author": "Alon, U. and Yahav, E.",
192
+ "venue": null,
193
+ "url": null
194
+ }
195
+ },
196
+ {
197
+ "2": {
198
+ "title": "Emergence of scaling in random networks.",
199
+ "author": "Barabasi, A.-L. and Albert, R.",
200
+ "venue": "Science, 286(5439):509\u2013512, 1999.",
201
+ "url": null
202
+ }
203
+ },
204
+ {
205
+ "3": {
206
+ "title": "How attentive are graph attention networks?, 2022.",
207
+ "author": "Brody, S., Alon, U., and Yahav, E.",
208
+ "venue": null,
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "4": {
214
+ "title": "On graph neural networks versus graph-augmented mlps, 2020.",
215
+ "author": "Chen, L., Chen, Z., and Bruna, J.",
216
+ "venue": null,
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "5": {
222
+ "title": "Distinguishing enzyme structures from non-enzymes without alignments.",
223
+ "author": "Dobson, P. D. and Doig, A. J.",
224
+ "venue": "Journal of molecular biology, 330 4:771\u201383, 2003.",
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "6": {
230
+ "title": "On random graphs i.",
231
+ "author": "Erd\u00f6s, P. and R\u00e9nyi, A.",
232
+ "venue": "Publicationes Mathematicae Debrecen, 6:290, 1959.",
233
+ "url": null
234
+ }
235
+ },
236
+ {
237
+ "7": {
238
+ "title": "A fair comparison of graph neural networks for graph classification, 2022.",
239
+ "author": "Errica, F., Podda, M., Bacciu, D., and Micheli, A.",
240
+ "venue": null,
241
+ "url": null
242
+ }
243
+ },
244
+ {
245
+ "8": {
246
+ "title": "Neural message passing for quantum chemistry, 2017.",
247
+ "author": "Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E.",
248
+ "venue": null,
249
+ "url": null
250
+ }
251
+ },
252
+ {
253
+ "9": {
254
+ "title": "Implicit bias of gradient descent on linear convolutional networks.",
255
+ "author": "Gunasekar, S., Lee, J. D., Soudry, D., and Srebro, N.",
256
+ "venue": "In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.",
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "10": {
262
+ "title": "Inductive representation learning on large graphs, 2017.",
263
+ "author": "Hamilton, W. L., Ying, R., and Leskovec, J.",
264
+ "venue": "URL https://arxiv.org/abs/1706.02216.",
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "11": {
270
+ "title": "Open graph benchmark: Datasets for machine learning on graphs, 2020.",
271
+ "author": "Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J.",
272
+ "venue": "URL https://arxiv.org/abs/2005.00687.",
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "12": {
278
+ "title": "Semi-supervised classification with graph convolutional networks, 2017a.",
279
+ "author": "Kipf, T. N. and Welling, M.",
280
+ "venue": null,
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "13": {
286
+ "title": "Semi-supervised classification with graph convolutional networks.",
287
+ "author": "Kipf, T. N. and Welling, M.",
288
+ "venue": "In International Conference on Learning Representations, 2017b.",
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "14": {
294
+ "title": "Towards deeper graph neural networks.",
295
+ "author": "Liu, M., Gao, H., and Ji, S.",
296
+ "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD \u201920, pp. 338\u2013348, New York, NY, USA, 2020. Association for Computing Machinery.",
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "15": {
302
+ "title": "Gradient descent maximizes the margin of homogeneous neural networks, 2020.",
303
+ "author": "Lyu, K. and Li, J.",
304
+ "venue": null,
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "16": {
310
+ "title": "Tudataset: A collection of benchmark datasets for learning with graphs.",
311
+ "author": "Morris, C., Kriege, N. M., Bause, F., Kersting, K., Mutzel, P., and Neumann, M.",
312
+ "venue": "In ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020), 2020.",
313
+ "url": null
314
+ }
315
+ },
316
+ {
317
+ "17": {
318
+ "title": "Weisfeiler and leman go neural: Higher-order graph neural networks, 2021.",
319
+ "author": "Morris, C., Ritzert, M., Fey, M., Hamilton, W. L., Lenssen, J. E., Rattan, G., and Grohe, M.",
320
+ "venue": null,
321
+ "url": null
322
+ }
323
+ },
324
+ {
325
+ "18": {
326
+ "title": "Dropedge: Towards deep graph convolutional networks on node classification.",
327
+ "author": "Rong, Y., bing Huang, W., Xu, T., and Huang, J.",
328
+ "venue": "In International Conference on Learning Representations, 2019.",
329
+ "url": null
330
+ }
331
+ },
332
+ {
333
+ "19": {
334
+ "title": "Node feature kernels increase graph convolutional network robustness, 2022.",
335
+ "author": "Seddik, M. E. A., Wu, C., Lutzeyer, J. F., and Vazirgiannis, M.",
336
+ "venue": null,
337
+ "url": null
338
+ }
339
+ },
340
+ {
341
+ "20": {
342
+ "title": "Weisfeiler-lehman graph kernels.",
343
+ "author": "Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., and Borgwardt, K. M.",
344
+ "venue": "J. Mach. Learn. Res., 12:2539\u20132561, 2011.",
345
+ "url": null
346
+ }
347
+ },
348
+ {
349
+ "21": {
350
+ "title": "Masked label prediction: Unified message passing model for semi-supervised classification, 2021.",
351
+ "author": "Shi, Y., Huang, Z., Feng, S., Zhong, H., Wang, W., and Sun, Y.",
352
+ "venue": null,
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "22": {
358
+ "title": "The implicit bias of gradient descent on separable data, 2017.",
359
+ "author": "Soudry, D., Hoffer, E., Nacson, M. S., Gunasekar, S., and Srebro, N.",
360
+ "venue": "URL https://arxiv.org/abs/1710.10345.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "23": {
366
+ "title": "Graph attention networks.",
367
+ "author": "Veli\u010dkovi\u0107, P., Cucurull, G., Casanova, A., Romero, A., Li\u00f2, P., and Bengio, Y.",
368
+ "venue": "In International Conference on Learning Representations, 2018.",
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "24": {
374
+ "title": "How powerful are graph neural networks?",
375
+ "author": "Xu, K., Hu, W., Leskovec, J., and Jegelka, S.",
376
+ "venue": "In International Conference on Learning Representations, 2019.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "25": {
382
+ "title": "Deep graph kernels.",
383
+ "author": "Yanardag, P. and Vishwanathan, S.",
384
+ "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD \u201915, pp. 1365\u20131374, New York, NY, USA, 2015. Association for Computing Machinery.",
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "26": {
390
+ "title": "On size generalization in graph neural networks.",
391
+ "author": "Yehudai, G., Fetaya, E., Meirom, E. A., Chechik, G., and Maron, H.",
392
+ "venue": "CoRR, abs/2010.08853, 2020.",
393
+ "url": null
394
+ }
395
+ },
396
+ {
397
+ "27": {
398
+ "title": "Deep sets, 2018.",
399
+ "author": "Zaheer, M., Kottur, S., Ravanbakhsh, S., Poczos, B., Salakhutdinov, R., and Smola, A.",
400
+ "venue": null,
401
+ "url": null
402
+ }
403
+ },
404
+ {
405
+ "28": {
406
+ "title": "Understanding deep learning requires rethinking generalization, 2017.",
407
+ "author": "Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O.",
408
+ "venue": null,
409
+ "url": null
410
+ }
411
+ }
412
+ ],
413
+ "url": "http://arxiv.org/html/2309.04332v2"
414
+ }
20240225/2309.16354v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.00386v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.09017v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.10640v2.json ADDED
@@ -0,0 +1,643 @@
 
1
+ {
2
+ "title": "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts",
3
+ "abstract": "Diffusion-based generative models have significantly advanced text-to-image generation but encounter challenges when processing lengthy and intricate text prompts describing complex scenes with multiple objects. While excelling in generating images from short, single-object descriptions, these models often struggle to faithfully capture all the nuanced details within longer and more elaborate textual inputs. In response, we present a novel approach leveraging Large Language Models (LLMs) to extract critical components from text prompts, including bounding box coordinates for foreground objects, detailed textual descriptions for individual objects, and a succinct background context. These components form the foundation of our layout-to-image generation model, which operates in two phases. The initial Global Scene Generation utilizes object layouts and background context to create an initial scene but often falls short in faithfully representing object characteristics as specified in the prompts. To address this limitation, we introduce an Iterative Refinement Scheme that iteratively evaluates and refines box-level content to align them with their textual descriptions, recomposing objects as needed to ensure consistency. Our evaluation on complex prompts featuring multiple objects demonstrates a substantial improvement in recall compared to baseline diffusion models. This is further validated by a user study, underscoring the efficacy of our approach in generating coherent and detailed scenes from intricate textual inputs. Our code is available at https://github.com/hananshafi/llmblueprint.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Modern generative diffusion models, e.gRombach et al. (2022 ###reference_b40###); Ho et al. (2020 ###reference_b18###); Saharia et al. (2022 ###reference_b42###); Ruiz et al. (2023 ###reference_b41###), provided a massive leap forward in the problem of text-to-image generation and have emerged as powerful tools for creating diverse images and graphics from plain text prompts. Their success can be attributed to several factors, including the availability of internet-scale multi-modal datasets, increased computational resources, and the scaling up of model parameters. These models are trained using shorter prompts and especially excel at generating images of one prominent foreground object. However, as the description length and the number of objects in the scene increase, modern diffusion models tend to ignore parts of the prompt often leading to critical omissions, misrepresentations, or the generation of objects that do not align with the nuanced details described in the prompts. Fig. 1 ###reference_### shows a scenario where existing state-of-the-art text-to-image diffusion models struggle to follow all the details. This failure can partly be ascribed to the diffusion model\u2019s CLIP text encoder (Radford et al., 2021 ###reference_b36###) which can only process the first text tokens, effectively truncating longer prompts and potentially omitting critical details. Indeed, a single prompt describing a complex scene can span far beyond these token limits, making it a challenge for existing models to process and translate long prompts comprehensively.\nRecent efforts (Epstein et al., 2023 ###reference_b8###; Kang et al., 2023 ###reference_b19###) have been dedicated to improving the capabilities of pre-trained diffusion models to faithfully follow the intricate details within text prompts. These works predominantly revolve around aspects such as object count (e.g. \u201c2 oranges and 4 apples on the table\u201d), and/or capturing spatial relationships among objects (e.g. \u201can orange on the left of an apple\u201d). In the context of longer and more complex prompts, these models still tend to struggle to generate coherent images that faithfully reflect the complexity of the text prompts, especially when tasked with the placement of object instances at considerable spatial separations, often falling short of comprehensively capturing all instances of objects as intended. More recently layout-based diffusion models (Feng et al., 2023 ###reference_b11###; Li et al., 2023 ###reference_b26###; Yang et al., 2023b ###reference_b52###) have proven to be effective in capturing the count and spatial characteristics of the objects in the prompt. Such models first generate bounding boxes of all the objects\nand then condition the diffusion model jointly on the bounding boxes and the text prompt to generate the final image. While effective in the case of small prompts, these models still struggle when presented with long text descriptions that feature multiple diverse objects and hence fail to generate the desired output (See Fig. 1 ###reference_###).\nTo address these challenges, our approach seeks to improve text-to-image generation from lengthy prompts. We introduce a framework that divides image generation into two phases: generating a global scene, followed by iterative refinement of individual object representations. We exploit LLMs to break down long prompts into smaller components organized in a data structure that we call Scene Blueprint. 
This allows us to generate the image in a step-wise manner.\nOur framework ensures that the final image faithfully adheres to the details specified in lengthy and complex text prompts.\nWe evaluate our framework on challenging prompts containing 3 to 10 unique foreground objects in varied scenes. Our results showcase a significant improvement in recall (85%) compared to the baseline Feng et al. (2023 ###reference_b11###) (69%), a +16% improvement.\nWe also include a user study that demonstrates that our proposed method consistently produces coherent images that closely align with their respective textual descriptions, whereas existing approaches struggle to effectively handle longer text prompts (see Fig. 4 ###reference_###).\nIn summary, our main contributions are:\nIterative image generation from long prompts: We introduce a two-phase framework for generating images from long textual descriptions, ensuring a faithful representation of details.\nScene Blueprints using LLMs: We propose Scene Blueprints as a structured scene representation encompassing scene layout and object descriptions that enable a coherent step-wise generation of images from complex and lengthy prompts.\nState-of-the-art results: We present quantitative and qualitative results showcasing the effectiveness of our method in terms of adherence to textual descriptions, demonstrating its applicability and superiority in text-to-image synthesis from lengthy prompts."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "Text-to-Image Diffusion.\nOver the years, Generative Adversarial Networks (GAN) (Goodfellow et al., 2014 ###reference_b13###) have been the default choice for image synthesis (Brock et al., 2018 ###reference_b4###; Reed et al., 2016 ###reference_b39###; Xu et al., 2018 ###reference_b50###; Zhang et al., 2017 ###reference_b55###; 2021 ###reference_b57###; Tao et al., 2022 ###reference_b49###; Zhang et al., 2018a ###reference_b56###; Karras et al., 2019 ###reference_b20###). However, more recently, the focus has shifted towards text conditioned autoregressive models (Ding et al., 2021 ###reference_b7###; Gafni et al., 2022 ###reference_b12###; Ramesh et al., 2021 ###reference_b37###; Yu et al., 2022 ###reference_b54###) and diffusion models (Rombach et al., 2022 ###reference_b40###; Gu et al., 2022 ###reference_b14###; Nichol et al., 2021 ###reference_b32###; Ramesh et al., 2022 ###reference_b38###; Saharia et al., 2022 ###reference_b42###) which have exhibited impressive capabilities in producing high-quality images while avoiding the training challenges, such as instability and mode collapse, commonly associated with GANs (Arjovsky et al., 2017 ###reference_b1###; Gulrajani et al., 2017 ###reference_b15###; Kodali et al., 2017 ###reference_b23###). In particular, diffusion models are trained on large-scale multi-modal data and are capable of generating high-resolution images conditioned on text input. Nevertheless, effectively conveying all the nuances of an image solely from a text prompt can present a considerable hurdle.\nRecent studies have demonstrated the effectiveness of classifier-free guidance (Ho & Salimans, 2022 ###reference_b17###) in\nimproving the faithfulness of the generations in relation to\nthe input prompt. However, all these approaches are designed to accept shorter text prompts, but they tend to fail in scenarios where the prompt describing a scene is longer. In contrast, our proposed approach generates images from longer text prompts, offering an efficient solution to address this challenge.\nLayout-to-Image Generation.\nGenerating images from layouts either in the form of labeled bounding boxes or semantic maps was recently explored in (Sun & Wu, 2019 ###reference_b47###; Sylvain et al., 2021 ###reference_b48###; Yang et al., 2022 ###reference_b53###; Fan et al., 2023 ###reference_b10###; Zhao et al., 2019 ###reference_b59###; Park et al., 2019 ###reference_b34###). Critically, these layout to image generation methods are only conditioned on bounding boxes and\nare closed-set, i.e., they can only generate limited localized\nvisual concepts observed in the training set. With the inception of large multi-modal foundational models such as CLIP (Radford et al., 2021 ###reference_b36###), it has now been possible to generate images in an open-set fashion. Diffusion-based generative models can be conditioned on multiple inputs, however, they have been shown to struggle in following the exact object count and spatial locations in the text prompts (Chen et al., 2023 ###reference_b5###; Kang et al., 2023 ###reference_b19###). More recently layout conditioned diffusion models have been proposed to solve this problem (Chen et al., 2023 ###reference_b5###; Li et al., 2023 ###reference_b26###; Yang et al., 2023b ###reference_b52###; Phung et al., 2023 ###reference_b35###). Chen et al. 
(2023 ###reference_b5###) manipulates the\ncross-attention layers that the model uses to interface textual and visual information and steers the reconstruction in\nthe desired user-specified layout. GLIGEN (Li et al., 2023 ###reference_b26###) uses\na gated self-attention layer that enables additional inputs\n(e.g., bounding boxes) to be processed. ReCo (Yang et al., 2023b ###reference_b52###) achieves\nlayout control through regional tokens encoded as part of\nthe text prompt.\n Zheng et al. (2023 ###reference_b60###) introduce LayoutDiffusion which treats each patch of the image as a special object for multimodal fusion of layout and image and generates\nimages with both high quality and diversity while\nmaintaining precise control over the position and size\nof multiple objects.\nIn addition to this, there have been few works on LLM-based layout generation (Feng et al., 2023 ###reference_b11###; Lian et al., 2023 ###reference_b27###). These works exploit the LLMs\u2019 abilities to reason over\nnumerical and spatial concepts in text conditions (Li et al., 2022a ###reference_b24###). Building upon these works, we extend LLMs\u2019 powerful generalization and reasoning capabilities to extract layouts, background information, and foreground object descriptions from longer text prompts.\nDiffusion Based Image Editing and Composition.\nDiffusion-based image editing has received overwhelming attention due to its ability to condition on multiple modalities. Recent works utilize text-based image editing using diffusion models to perform region modification. DiffusionCLIP (Kim et al., 2022 ###reference_b21###) uses diffusion models for text-driven global multi-attribute image manipulation on varied domains. Liu et al. (2023 ###reference_b29###) provides both text and semantic guidance for global image manipulation. In addition, GLIDE (Nichol et al., 2021 ###reference_b32###) trains a diffusion model for text-to-image synthesis, as well as local image editing using text guidance. Image composition refers to a form of image manipulation where a foreground reference object is affixed onto a designated source image. A naive way to blend a foreground object on a background image may result in an unrealistic composition. However, more recent works (Avrahami et al., 2021 ###reference_b2###; Yang et al., 2023a ###reference_b51###; Ho & Salimans, 2022 ###reference_b17###; Lu et al., 2023 ###reference_b30###) use diffusion models to overcome the challenges posed due to fusion inconsistency and semantic disharmony for efficient image composition. Avrahami et al. (2021 ###reference_b2###) takes the target region mask and simply blends the noised version of the input image with local text-guided diffusion latent. Yang et al. (2023a ###reference_b51###) trains a diffusion model to blend an exemplar image on the source image at the position specified by an arbitrary shape mask and leverages the classifier-free guidance (Ho & Salimans, 2022 ###reference_b17###) to increase the similarity to the exemplar image. Lu et al. (2023 ###reference_b30###) introduces TF-ICON which leverages off-the-shelf diffusion models to perform cross-domain image-guided composition without requiring additional training, fine-tuning, or optimization.\nBased on these works, we build an iterative refining scheme, which performs region-based composition at the layout level, utilizing a given mask shape and modifies each layout based on the object characteristics guided by a multi-modal loss signal.\n###figure_1###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Methodology",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Preliminaries on Diffusion Models.",
+ "text": "Diffusion models are generative models that learn the data distribution of complex datasets. They consist of a forward diffusion process and a reverse diffusion process. During the forward process, noise is added to the input data point for steps, until the resulting vector is almost distributed according to a standard Gaussian distribution. Each step in the forward process is a Gaussian transition , where is a fixed or learned variance schedule. The resulting latent variable can be expressed as:\nwhere The reverse process is parametrized by another Gaussian transition . can be decomposed into the linear combination of and a noise approximation model , which is trained so that for any pair and any sample of ,\nAfter training , different works (Song et al., 2020a ###reference_b43###; Song & Ermon, 2019 ###reference_b44###; Song et al., 2020b ###reference_b45###) study different approximations of the unknown to perform sampling. In our work, we utilize the denoising diffusion implicit model (DDIM) introduced by Song et al. (2020a ###reference_b43###) to predict the clean data point."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Overview",
+ "text": "Our core objective is to generate images from long textual descriptions, ensuring that the resulting images faithfully represent the intricate details outlined in the input text. We generate the output image in a multi-step manner; generating an initial image that serves as a template at a global level, followed by a box-level refinement phase that serves as a corrective procedure.\n1) Global scene generation: We begin by decomposing lengthy text prompts into \u201cScene Blueprints.\u201d Scene Blueprints provide a structured representation of the scene containing: object bounding boxes in image space, detailed text descriptions for each box, and a background text prompt. We use an LLM to extract the Scene Blueprint from the given long prompt and use layout-conditioned text-to-image models to generate the initial image. Unlike prior works, we support additional user control on the box layouts by generating multiple proposal blueprints and providing an option to smoothly interpolate between the candidate layouts. This interpolation not only facilitates (optional) user control but also helps mitigate any potential errors introduced by the LLM. 2) Box-level content refinement: In the second phase, we iterate through all the boxes and evaluate and refine their content in terms of quality and adherence to the prompt. We use a multi-modal guidance procedure that maximizes a quality score for each box.\n###figure_2###"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Global Scene Generation",
+ "text": "Textual representation of a scene contains various characteristics that provide information about objects, their spatial properties, semantics, attributes, and more. To coherently capture these properties, we employ an off-the-shelf pre-trained large language model (LLM)(OpenAI, 2021 ###reference_b33###). We instruct the LLM with an appropriately engineered prompt (see supplementary Sec. A.4 ###reference_###) to generate a Scene Blueprint containing the following three components:\nLayout: Bounding box coordinates for each object -\nObject description: Description associated with each object -\nBackground Prompt: A general prompt describing the overall essence of the scene.\nThe layout and the background prompt generated by the LLM are then used to condition the diffusion model to generate an image. We follow recent work by Lian et al. (2023 ###reference_b27###) which generates the image from the layouts in two steps, 1) generating masked latent inversion for each object bounding box, and 2) composing the latent\ninversion as well as generating the corresponding background from the background prompt.\nLayouts Interpolation and Noise Correction.\nWhile LLMs have advanced spatial reasoning abilities, we observed that they struggle to model the spatial positions of the objects when presented with longer descriptions, often resulting in abnormal relative sizes of the objects and unnatural placement of object boxes in the scene (Sec.4.1 ###reference_###). A naive approach to resolve these issues is to fine-tune an LLM on handcrafted data of text descriptions and bounding boxes. However, this approach requires extensive resources in terms of human annotators and compute requirements, and risks catastrophic forgetting (Luo et al., 2023 ###reference_b31###).\nOn the other hand, correcting these errors manually by adjusting the boxes is also a daunting task and defeats the purpose of using an LLM to extract layouts.\nTo address these challenges, we propose a simple yet effective solution - layout interpolation. Instead of generating only one proposal layout, we query the LLM to generate layouts.\nSubsequently, we employ linear interpolation to compute the coordinates for each object\u2019s final bounding box, denoted as , where for any coordinate , Interpolation. The interpolation function recursively updates each coordinate such that at any iteration of bounding boxes, , where is the interpolation factor which controls the influence of the individual bounding boxes on the final interpolated box (see Fig. 3 ###reference_###).\nAs the long complex prompts may result in a large number of boxes, the images generated by the layout guidance tend to have color noise and small artifacts (See Fig. 8 ###reference_### in supplementary). Therefore, we optionally perform an image-to-image translation step using stable diffusion (Rombach et al., 2022 ###reference_b40###), resulting in a cleaner image while preserving the semantics. We refer to this image as .\nDespite conditioning on layout, we observe that the diffusion model is unable to generate all scene objects effectively. It struggles to compose multiple diverse objects having varied spatial and semantic properties in one shot. Consequently, often has missing objects or fails to represent them accurately in accordance with their descriptions. Therefore, we employ a box-level refinement strategy that evaluates and refines the content within each bounding box of the layout in terms of quality and adherence to the prompt."
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "Box-level refinement",
+ "text": "Current diffusion models have certain limitations in terms of composing multiple objects in the image when presented with longer text prompts, a problem that is still unexplored. To overcome this issue and ensure faithful generation of all the objects, we introduce an Iterative Refinement Scheme (IRS). Our proposed IRS works at the bounding box level and ensures the corresponding object at each bounding box is characterized by its properties given in the textual description. To achieve this, we iterate across each object\u2019s bounding box in and compare the visual characteristics of the object with its corresponding description extracted from the text prompt. Consider an object with its bounding box and its textual characteristics denoted by , we use CLIP score (Hessel et al., 2021 ###reference_b16###) as a metric to get the similarity between the object and its description such that, . If the CLIP score is below a certain threshold, we modify the content of the bounding box such that it follows its corresponding description.\nAny reasonable modification of the content within a bounding box must improve its fidelity and adherence to the prompt. Since diffusion models, e.g. stable diffusion (Rombach et al., 2022 ###reference_b40###), are already\ngood at generating high-fidelity images from shorter prompts, we exploit this ability and follow a paint-by-example approach. We generate a new object for the designated box by passing the object description to a Text-to-Image stable diffusion model. The generated image acts as a reference content for the bounding box .\nWe then use a pretrained image composition model (Yang et al., 2023a ###reference_b51###) conditioned on the reference image , mask (extracted from the bounding box), and source image to compose the reference object at the designated position on the source image specified by the mask. To ensure the composition generates the object that follows the properties described in the text prompt as closely as possible, we guide the sampling process by an external multi-modal loss signal.\nBox-level Multi-modal Guidance.\nGiven the initial image , a reference image , a guiding text prompt and a binary mask that marks the region of interest in the image corresponding to bounding box , our goal is to generate a modified image , s.t. the content of the region is consistent with the prototype image and adheres to the text description , while the complementary area remains as close as possible to the source image, i.e., , where is the element-wise multiplication. Dhariwal & Nichol (2021 ###reference_b6###) use a classifier trained on\nnoisy images to guide generation towards a target class. In our case, we use an external function in the form of CLIP to guide the generation in order to adhere to the prototype image and text description . However,\nCLIP is trained on noise-free data samples, we estimate a clean image from each noisy latent during the denoising diffusion process via Eqn. 1 ###reference_### as follows,\nOur CLIP based multimodal guidance loss can then be expressed as,\nwhere denotes the cosine similarity loss and is a hyperparameter. The first part of the equation measures the cosine loss between the composed object at the region specified by the mask and its corresponding text characteristics . The second part of the equation measures the cosine loss between the composed object and its corresponding prototype . A similar approach using CLIP text-based guidance is used in Avrahami et al. 
(2021 ###reference_b2###) for region-based modification. However, in contrast, we also include a CLIP image-based guidance to steer the generation toward the prototype image to account for the fine-grained details that may not be captured in the text description.\nIn order to confine the modification within the given bounding box, we optionally employ a background preservation loss \nwhich is a summation of the L2 norm of the pixel-wise differences and Learned Perceptual Image Patch Similarity metric Zhang et al. (2018b ###reference_b58###) between and \nThe final diffusion guidance loss is thus the weighted sum of and given as,\nThe gradient of the resultant loss is used to steer the sampling process to produce an object at the bounding box which follows the properties of prototype and description . Additionally, refer to the supplementray section A.8 ###reference_### to understand how guidance loss influences the generation process and algorithm A.1 ###reference_### for a full end-to-end pipeline."
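The box-level guidance loss of Eqns. 4-5 could be assembled as sketched below, assuming a differentiable CLIP wrapper exposing `encode_image` (as in the official CLIP package, with normalization omitted for brevity), precomputed text and prototype features, and an LPIPS callable (e.g., from the `lpips` package); the weights `lam` and `gamma` mirror the lambda and gamma hyperparameters above and are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def cosine_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """1 - cosine similarity between (batched) feature vectors."""
    return 1.0 - F.cosine_similarity(a, b, dim=-1).mean()

def guidance_loss(x0_hat, mask, text_feat, proto_feat, src_img, clip_model,
                  lpips_fn, lam=1.0, gamma=1.0):
    """Multi-modal guidance: CLIP adherence inside the box + background preservation.

    x0_hat:     clean-image estimate from the current denoising step (Eqn. 3)
    mask:       binary box mask with the same spatial size as x0_hat
    text_feat:  CLIP text features of the object description d_j
    proto_feat: CLIP image features of the reference prototype r_j
    """
    boxed = x0_hat * mask
    img_feat = clip_model.encode_image(F.interpolate(boxed, size=(224, 224), mode="bilinear"))
    l_clip = cosine_loss(img_feat, text_feat) + lam * cosine_loss(img_feat, proto_feat)
    # Optional background preservation: L2 + LPIPS outside the mask (Sec. 3.4).
    bg_hat, bg_src = x0_hat * (1 - mask), src_img * (1 - mask)
    l_bg = F.mse_loss(bg_hat, bg_src) + lpips_fn(bg_hat, bg_src).mean()
    return l_clip + gamma * l_bg
```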
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": "###figure_3### Settings: Our framework uses a combination of several components. For acquiring the long text descriptions, we ask ChatGPT to generate scenes on various themes. In addition to this, we also use the textual descriptions from some COCO (Lin et al., 2014 ###reference_b28###) and PASCAL (Everingham et al., 2010 ###reference_b9###) images by querying an image captioning model (Zhu et al., 2023 ###reference_b61###) to generate a detailed description spanning 80-100 words. For extracting layouts, bounding boxes, and background prompt, we make use of ChatGPT completion api (OpenAI, 2021 ###reference_b33###) with an appropriate instruction template (See Supplementary Sec. A.4 ###reference_###). We generate 3 layouts for each text prompt and interpolate them to a single layout to account for the spatial location correctness. To avoid layout overlap, we push the boxes away from each other until there is minimal contact wherever feasible. For base layout-to-image generation, we use the work of Lian et al. (2023 ###reference_b27###) and scale it for longer text prompts. We use 20 diffusion steps at this point. For box refinement, we use the pre-trained image composition model of Yang et al. (2023a ###reference_b51###) which conditions on a reference image. For each box refinement, we use 50 diffusion steps. For implementation, we use Pytorch 2.0. Finally, our entire pipeline runs on a single Nvidia A100 40GB GPU.\nQuantitative Results:\nOur work stands as a first effort to address the challenge of generating images from extensive text prompts. As discussed in Section 1 ###reference_###, current diffusion-based text-to-image generation methods typically utilize the CLIP tokenizer to condition the diffusion model for image generation. However, this approach can lead to inconsistent images when confronted with lengthy text prompts with intricate details. To the best of our knowledge, there is currently no established metric for assessing the performance of diffusion models in handling lengthy text descriptions. Hence we propose to use the Prompt Adherence Recall (PAR) score to quantify adherence to the prompt defined as Mean of object presence over all objects over all prompts where we use an off-the-shelf object detector (Li et al., 2022b ###reference_b25###) to check if the object is actually present in the generated image such that object presence is if present and otherwise. We achieve a PAR score of which is significantly better than Stable Diffusion (49%), GLIGEN (57%) and LayoutGPT (69%).\n###figure_4### We also conducted an extensive user study to assess the effectiveness of our method in comparison to four established baseline approaches:\nGLIGEN (Li et al., 2023 ###reference_b26###), LayoutGPT (Feng et al., 2023 ###reference_b11###) and LLM-Grounded Diffusion (Lian et al., 2023 ###reference_b27###). To mitigate any bias, participants were instructed to select one image from a pair of images randomly selected from two distinct approaches. Their goal was to choose the image that most accurately represented the provided textual descriptions regarding spatial arrangement, object characteristics, and overall scene dynamics. The outcomes of the user study are presented in Fig. 4 ###reference_###. Our findings demonstrate that, on average, our proposed method consistently produces coherent images that closely align with their respective textual descriptions, whereas existing approaches struggle to effectively handle longer text prompts. 
For a detailed procedure and user-study results on fidelity, refer to Sec. A.9 ###reference_### of the supplementary.\n###figure_5### ###figure_6### Qualitative Analysis:\nFig. 5 ###reference_### presents a qualitative assessment of our method in comparison to established state-of-the-art methods. The text descriptions include specific phrases denoted by underlined italics, conveying information about objects, their attributes, and spatial arrangements. Notably, the red text beneath each image highlights instances of missing objects, the purple text indicates spatial inaccuracies, and the black text identifies elements of implausibility or distortion. From the figure, the Stable Diffusion baseline (column 1) frequently falls short in incorporating certain objects from the prompt due to its inability to efficiently handle lengthy text prompts. In some instances (rows 2 and 4), the generated images exhibit unrealistic features. Layout-based approaches (columns 2, 3, and 4) also encounter difficulties in fully capturing the nuances of the text prompt, resulting in occasional omissions of objects and instances of deformation (column 3, row 5). In contrast, our approach excels by accurately capturing all objects from the text, including their intricate details and precise spatial positions. Refer to Secs. A.3 ###reference_### and A.5 ###reference_### for further results and comparisons with additional baselines."
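The Prompt Adherence Recall (PAR) score introduced in the quantitative results above reduces to a simple recall computation; the sketch below illustrates it, assuming a `detect` callable that returns the set of object names found in an image (the detector interface is an assumption; the paper uses an off-the-shelf detector, Li et al., 2022b).

```python
def par_score(samples, detect):
    """Prompt Adherence Recall: fraction of prompted objects the detector finds.

    samples: list of (generated_image, expected_object_names) pairs
    detect:  callable(image) -> set of detected object names (assumed interface)
    """
    hits, total = 0, 0
    for image, expected in samples:
        found = detect(image)
        hits += sum(1 for obj in expected if obj in found)  # presence is 1 or 0
        total += len(expected)
    return 100.0 * hits / max(total, 1)  # reported as a percentage
```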
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Ablations",
+ "text": "Effect of Layout Interpolation.\nWe query the LLM to generate multiple layouts from the input text prompt and then employ linear interpolation to merge them into a single layout. However, for lengthy prompts, LLMs can occasionally generate layouts with random object placements, resulting in unnatural images. As shown in Fig. 6 ###reference_###, the first two columns depict images without layout interpolation, while the last column shows the interpolated image. The underlined phrases in the text prompt indicate object spatial characteristics. In contrast, the last column demonstrates the improved result with interpolation, aligning nearly every object with its textual spatial description.\nEffect of Guidance\nThe external guidance in the form of CLIP multi-modal loss used in the refinement stage steers sampling of the specific box proposal towards its corresponding description and reference prototype. We present a visual illustration of this phenomenon in Fig. 7 ###reference_###. As seen from the figure, the properties of the cat get more aligned with the prototype image and text description in the presence of a guidance signal."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this work, we identified the limitations of prior text-to-image models in handling complex and lengthy text prompts. In response, we introduced a framework involving a data structure (Scene Blueprint) and a multi-step procedure involving global scene generation followed by an iterative refinement scheme to generate images that faithfully adhere to the details in such lengthy prompts. Our framework offers a promising solution for accurate and diverse image synthesis from complex text inputs, bridging a critical gap in text-to-image synthesis capabilities.\nWhile we presented a simple interpolation technique to combine various bounding box proposals, we maintained fixed box layouts for the second phase. A promising avenue for future research lies in exploring dynamic adjustments of boxes within the iterative refinement loop. Another area warranting further examination pertains to the handling of overlapping boxes. While we currently address this challenge by sorting boxes by size prior to the box-level refinement phase, there is an opportunity to explore more advanced techniques for managing overlaps.\nAdditionally, our current approach to box-level refinement treats each object in isolation, overlooking the relationships that exist among objects within a scene. A compelling avenue for future research is to incorporate and leverage these object relationships, with the aim of achieving more comprehensive and contextually aware image generation."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Appendix",
+ "text": "In Sec. 3 ###reference_###, we describe our proposed approach LLM Blueprint. Our method uses an LLM to first generate bounding box layouts and descriptions pertaining to each object in the description, along with a background prompt that describes the overall scene in the text. We use specific instruct prompts to do so (see Appendix.A.4 ###reference_###). Due to possibly inconsistent box positions, we query the LLM to generate layouts giving bounding boxes for each object in the layout. We then linearly interpolate the coordinates to obtain one final bounding box layout denoted as . We use a layout-to-image generation model to obtain an initial image. However, as described in Sec.3.3 ###reference_### of the main paper, we apply an additional noise correction step by passing the generated image to an image-to-image diffusion model while retaining the semantics of the scene. We call the image generated until this point as Global Scene Generation and denote it as . However, the image generated after the first phase still has inconsistencies in generating objects as per their characteristics. To overcome this issue and ensure faithful generation of all the objects, we introduce an Iterative Refinement Scheme (IRS). Our proposed method works at the bounding box level and ensures the corresponding object at each bounding box is characterized by its properties given in the textual description. We achieve this by looping across each bounding box and iteratively modifying the regions based on a CLIP metric. In other words, we generate a new object for the designated box by passing the object description to a text-to-image stable diffusion model . The generated image acts as a reference content for the bounding box . We further create a binary mask from the bounding box coordinates. We then use an image composition model conditioned on the reference image , mask and source image to compose the reference object at the designated position on the source image specified by the mask . To ensure the composition generates the object that exactly follows the properties described in the text prompt, we use an external function in the form of CLIP to guide the generation in order to adhere to the prototype image and text description . However, since our prototype is in the input space and CLIP is trained on real-world data samples, we estimate a clean image from each noisy latent during the denoising diffusion process via Eqn. 1 ###reference_### (main paper). We optionally add a background preservation loss \nto avoid the composition process affecting the background. We provide a pseudo code of our algorithm in Algorithm 1 ###reference_###.\nAs discussed in Sec. 3.3 ###reference_###, the generated image after the first phase sometimes suffers from noise or artifacts. We do an optional noise correction step after Global scene generation, which essentially removes the unwanted noise and artifacts. Specifically, we utilize the image-to-image translation method of stable diffusion Rombach et al. (2022 ###reference_b40###) which instead of starting from random noise, starts from an input image, adds noise to it and then denoises it in the reverse process. The idea is to enhance the quality of the image while maintaining its semantics. We notice that this process removes the unwanted noise and artifacts present in the image (See Figure 8 ###reference_###).\n###figure_7### We provide further qualitative comparisons of our approach with state-of-the-art baselines in Fig. 
9 ###reference_###.\n###figure_8### Our approach utilizes LLMs\u2019 advanced spatial and reasoning capabilities to derive layouts, object descriptions, and background prompt from a long textual description. For extracting layouts and background prompts, we use the prompt designed by Lian et al. (2023 ###reference_b27###) and scale it to work on longer textual descriptions. For extracting object descriptions, we designed our own unique prompt. Below are the instruct prompts utilized in our work.\nInstruct prompt for extracting layouts and background prompt. \n \nYou are an intelligent bounding box generator. I will provide you with a caption for a photo, image, a detailed scene, or a painting. Your task is to generate the bounding boxes for the objects mentioned in the caption, along with a background prompt describing the scene. The images are of size 512x512. The top-left corner has coordinates [0, 0]. The bottom-right corner has coordinates [512, 512]. The bounding boxes should not overlap or go beyond the image boundaries. Each bounding box should be in the format of (object name, [top-left x coordinate, top-left y coordinate, box width, box height]) and include exactly one object (i.e., start the object name with \"a\" or \"an\" if possible). Do not put objects that are already provided in the bounding boxes into the background prompt. Do not include non-existing or excluded objects in the background prompt. If needed, you can make reasonable guesses. Please refer to the example below for the desired format.\nCaption: In the quiet countryside, a red farmhouse stands with an old-fashioned charm. Nearby, a weathered picket fence surrounds a garden of wildflowers. An antique tractor, though worn, rests as a reminder of hard work. A scarecrow watches over fields of swaying crops. The air carries the scent of earth and hay. Set against rolling hills, this farmhouse tells a story of connection to the land and its traditions \nObjects: [(\u2019a red farmhouse\u2019, [105, 228, 302, 245]), (\u2019a weathered picket fence\u2019, [4, 385, 504, 112]), (\u2019an antique tractor\u2019, [28, 382, 157, 72]), (\u2019a scarecrow\u2019, [368, 271, 66, 156]) ]\nBackground prompt: A realistic image of a quiet countryside with rolling hills\nCaption: A realistic image of landscape scene depicting a green car parking on the left of a blue truck, with a red air balloon and a bird in the sky \nObjects: [(\u2019a green car\u2019, [21, 181, 211, 159]), (\u2019a blue truck\u2019, [269, 181, 209, 160]), (\u2019a red air balloon\u2019, [66, 8, 145, 135]), (\u2019a bird\u2019, [296, 42, 143, 100])]\nBackground prompt: A realistic image of a landscape scene\nInstruct prompt for extracting object descriptions. \n \nYou are an intelligent description extractor.\nI will give you a list of the objects and a corresponding text prompt.\nFor each object, extract its respective description or details\nmentioned in the text prompt. The description should strictly contain fine details\nabout the object and must not contain information regarding location or abstract details\nabout the object. The description must also\ncontain the name of the object being described. For objects that do not have concrete\ndescriptions mentioned, return the object itself in that case. The output should be a Python dictionary\nwith the key as object and the value as description. The description should start with \u2019A realistic photo of\nobject\u2019 followed by its characteristics. 
Sort the entries as per objects that are spatially\nbehind (background) followed by objects that are spatially ahead (foreground).\nFor instance object \"a garden view\" should precede the \"table\". Make an intelligent guess if possible.\nHere are some examples:\nlist of objects: [a Golden Retriever,a white cat,a wooden table,a vase of vibrant flowers,a sleek modern television]\ntext prompt: In a cozy living room, a heartwarming scene unfolds. A friendly and affectionate Golden Retriever with a soft, golden-furred coat rests contently on a plush rug, its warm eyes filled with joy. Nearby, a graceful and elegant white cat stretches leisurely, showcasing its pristine and fluffy fur. A sturdy wooden table with polished edges stands gracefully in the center, adorned with a vase of vibrant flowers adding a touch of freshness. On the wall, a sleek modern television stands ready to provide entertainment. The ambiance is warm, inviting and filled with a sense of companionship and relaxation.\noutput: {a sleek modern television: A realistic photo of a sleek modern television.,\na wooden table: A realistic photo of a sturdy wooden table with polished edges.,\nvase of vibrant flowers: A realistic photo of a vase of vibrant flowers adding a touch of freshness.,\na Golden Retriever: \u2019A realistic photo of a friendly and affectionate Golden Retriever with a\nsoft, golden-furred coat and its warm eyes filled with joy.,\na white cat: \u2019A realistic photo of a graceful and elegant white cat stretches leisurely, showcasing its pristine and\nfluffy fur.}\nAs recommended, we provide a visual comparison of our method with DeepFloyd (StabilityAI, 2023 ###reference_b46###) and DenseDiffusion (Kim et al., 2023 ###reference_b22###) in Fig. 10 ###reference_###. As seen from the figure, DeepFloyd, despite a strong T5 text encoder, struggles to generate coherent images with complex compositional prompts. The same is true for DenseDiffusion. We conclude that our scene blueprint augmented with iterative refinement is necessary for generating coherent images from complex compositional prompts.\n###figure_9### We further present a quantitative comparison, in terms of Prompt Adherence Recall (PAR) (see Sec. 4 ###reference_### of the main paper), of our approach with all the baselines in Table 1 ###reference_###. We also report the average inference time for each approach. As seen from the table, DeepFloyd, with a PAR score of 60%, is highly inefficient as it takes around 8 min to generate a 256x256 image from a long textual prompt. We notice that while other approaches are slightly more efficient in time, they report a lower PAR score, thus rendering them ineffective on complex compositional prompts. Our approach, with an average inference time of around 3 min (including blueprint generation and the iterative refinement process), has the highest PAR score of 85%, validating its effectiveness. Therefore, a discernible trade-off emerges between addressing complexity and ensuring the faithful reproduction of images.\n###figure_10### ###figure_11### We present an analysis of the effect of the number of layouts on the final generated image in Fig. 11 ###reference_###. Consistent with the findings of Li et al. (2022b ###reference_b25###), the layouts generated by the LLM (ChatGPT) align with the textual prompts most of the time. The interpolation of multiple layouts (K>1) produces coherent images preserving the spatial relationships between the objects, i.e., the dog is always towards the left while the cat is on the right. 
However, in extreme cases with only one layout, such as in Fig. 12 ###reference_###, ChatGPT can sometimes generate spatially incorrect boxes, such as those for the dog and cat, leading to missing objects or incorrect spatial positions of the objects in the final generated image (Fig. 6 ###reference_### of the main paper).\nConsistent with the findings of Li et al. (2022b ###reference_b25###), we observed that proprietary LLMs such as ChatGPT are exceptionally good at following the object positions given in the textual prompt. We conducted an analysis in Table 2 ###reference_### with ChatGPT to verify its effectiveness on well-defined and ambiguous prompts (where the position of some objects is unclear). Specifically, we prompted ChatGPT to generate bounding boxes of a cat and a dog with three different prompts: \u201cA living room with a cat and a dog sitting on each side\u201d, \u201cA living room with a cat sitting towards right and a dog sitting towards left\u201d, and \u201cA living room with a dog sitting towards right and a cat sitting towards left\u201d. Our analysis reveals that, on average, 60% of the time for the ambiguous prompt \u201cA living room with a cat and a dog sitting on each side\u201d, ChatGPT generates a bounding box on the right for the cat and on the left for the dog. For the other two unambiguous prompts, ChatGPT generates correct locations of the bounding boxes for the cat and dog. This shows that ChatGPT works exceptionally well for unambiguous prompts with clearly defined spatial relationships. For the ambiguous prompt, we notice an inherent bias inside ChatGPT, which leads to it generating the dog on the left and the cat on the right. To account for such errors and to provide a meaningful fix for this inherent bias, we provide a hyperparameter $\\eta$ in the interpolation, which can be controlled to adjust the bounding box of each object (see Fig. 3 ###reference_### in the main paper for a visual illustration of the effect of the parameter $\\eta$).\nKindly note that the guidance function enables diffusion models to be controlled by arbitrary guidance modalities without the need to retrain any specific components (Bansal et al., 2023 ###reference_b3###). Since ours is a training-free approach, to ensure the objects at each bounding box location follow their corresponding description in the long textual prompt, we employ the CLIP-based loss at each step of the sampling process (Eq. 5 ###reference_###, main paper).\nDuring sampling, we first compute the gradient of this loss with respect to the estimated clean sample $\\hat{x}_0$ from Eq. 3 ###reference_### of the main paper and denote it as $g_t = \\nabla_{\\hat{x}_0}\\mathcal{L}$. Using this gradient, we update the noise such that the updated noise at time step $t$ is expressed as,\n$\\hat{\\epsilon}_t = \\epsilon_\\theta(x_t, t) + \\sqrt{1-\\bar{\\alpha}_t}\\,g_t.$\nBased on the above-updated noise, the sampling step is given as,\n$x_{t-1}^{fg} = \\sqrt{\\bar{\\alpha}_{t-1}}\\,\\hat{x}_0 + \\sqrt{1-\\bar{\\alpha}_{t-1}}\\,\\hat{\\epsilon}_t,$\nwhere $x_{t-1}^{fg}$ represents the latent variable of the image containing the specific foreground object being refined. This is combined with the unaltered background to get the final output latent as shown below,\n$x_{t-1} = m_j \\odot x_{t-1}^{fg} + (1-m_j) \\odot x_{t-1}^{bg},$\nwhere $m_j$ represents the mask for the object being refined and $x_{t-1}^{bg}$ is the correspondingly noised latent of the source image. The process is repeated across the time steps and the final latent is decoded to get the image with the refined object. Further, please refer to Algorithm A1 in Appendix A.1 ###reference_### for the full end-to-end pipeline.\nDetails of user study. 
The subjects for the human study were recruited from a pool of academic students with backgrounds in computer science and engineering.\nTo ensure an unbiased evaluation and maintain objectivity, the subjects participating in the evaluation remained anonymous and had no conflicts of interest with the authors. Furthermore, we prioritized user confidentiality by refraining from collecting personal details such as names and addresses.\nThe entire survey process was anonymized, ensuring that there was no leakage of data to any user. Following a 2-AFC (two-alternative forced choice) design, each participant was tasked with selecting the image with the highest fidelity out of a pair of images. Each pair was randomly sampled from a pool containing images from the baselines as well as our approach. This design mitigates biases that can originate from user-curated surveys. The pool of images was shuffled before each sampling, and the system randomly flipped the positions of the images to remove user-specific position bias in case any pair of images was repeated. Additionally, a 2-second delay between questions was introduced to facilitate more thoughtful decision-making.\nFollowing the above procedure, we present an extensive user study on the fidelity of generated images in Fig. 13 ###reference_###. For this study, we compared our method with Stable Diffusion, GLIGEN, LayoutGPT, and LLM-Grounded Diffusion.\nThe system yielded approximately 90 responses from 15 subjects, with each user answering an average of 6 questions. From Fig. 13 ###reference_###, it is clear that our approach compares favorably against the baselines in terms of image fidelity while maintaining the highest alignment with the text.\n###figure_12###"
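To make the guided sampling update of Sec. A.8 concrete, the sketch below shows one plausible DDIM step combining the gradient-updated noise with the masked foreground/background compositing; the gradient scaling and the function signatures are assumptions for illustration, not the paper's released code.

```python
import torch

@torch.enable_grad()
def guided_ddim_step(x_t, t, eps_model, alpha_bar, mask, x_bg_prev, loss_fn):
    """One guided DDIM step (a sketch of Sec. A.8): shift the predicted noise by the
    gradient of the guidance loss, then composite the refined foreground latent
    with the unaltered background latent via the box mask. Assumes t >= 1."""
    x_t = x_t.detach().requires_grad_(True)
    a_t, a_prev = alpha_bar[t], alpha_bar[t - 1]
    eps = eps_model(x_t, t)
    # Clean-sample estimate (Eq. 3 of the main paper).
    x0_hat = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    loss = loss_fn(x0_hat)  # multi-modal guidance loss, Eqns. 4-5
    grad = torch.autograd.grad(loss, x0_hat)[0]
    eps_hat = eps + (1 - a_t).sqrt() * grad  # guidance-updated noise (assumed scaling)
    # Deterministic DDIM transition using the updated noise.
    x_fg = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_hat
    # Composite the refined foreground with the correspondingly noised background latent.
    return (mask * x_fg + (1 - mask) * x_bg_prev).detach()
```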
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"A1.T1.4.1\">Quantitative comparison of Our approach with baselines in terms of PAR score and average inference time.</span> Our approach has the best PAR score while having relatively decent efficiency in terms of inference time.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A1.T1.2\" style=\"width:298.1pt;height:150.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(24.6pt,-12.5pt) scale(1.1976425964223,1.1976425964223) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A1.T1.2.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T1.2.2.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T1.1.1.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">PAR score (%) \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T1.2.2.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Inference time (min.) \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T1.2.2.3.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Stable Diffusion</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T1.2.2.3.1.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T1.2.2.3.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.18</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T1.2.2.4.2.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">GLIGEN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.4.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.4.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T1.2.2.5.3.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">LayoutGPT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.5.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">69</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.5.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T1.2.2.6.4.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DenseDiffusion</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.6.4.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.6.4.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">2.50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T1.2.2.7.5.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DeepFloyd</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T1.2.2.7.5.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">60</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"A1.T1.2.2.7.5.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">8.33</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2.8.6\" style=\"background-color:#E5E5E5;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"A1.T1.2.2.8.6.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"A1.T1.2.2.8.6.1.1\" style=\"background-color:#E5E5E5;\">Ours</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T1.2.2.8.6.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T1.2.2.8.6.2.1\" style=\"background-color:#E5E5E5;\">85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T1.2.2.8.6.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T1.2.2.8.6.3.1\" style=\"background-color:#E5E5E5;\">3.16</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 1: Quantitative comparison of Our approach with baselines in terms of PAR score and average inference time. Our approach has the best PAR score while having relatively decent efficiency in terms of inference time."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.2.1\">Effectiveness of ChatGPT in modeling spatial relationships.</span> We observe that ChatGPT perfectly follows the object positions given prompt with clearly defined object positions. While as it shows inherent bias for the prompts with ambiguous object poistions.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A1.T2.3\" style=\"width:397.5pt;height:104.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-41.0pt,10.8pt) scale(0.829147666250314,0.829147666250314) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A1.T2.3.1.1.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Prompt</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.3.1.1.1.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">Object</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.3.1.1.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">right</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.3.1.1.1.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">left</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.3.1.1.1.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">arbitrary position</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T2.3.1.2.2.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">A living room with <span class=\"ltx_text ltx_framed_underline\" id=\"A1.T2.3.1.2.2.1.1\">a cat and a dog sitting one on each side</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.1.2.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">dog</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.1.2.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.1.2.2.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.1.2.2.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.3.3\">\n<td class=\"ltx_td\" id=\"A1.T2.3.1.3.3.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.3.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">cat</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.3.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.3.3.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.3.3.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T2.3.1.4.4.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">A living room with <span class=\"ltx_text ltx_framed_underline\" id=\"A1.T2.3.1.4.4.1.1\">a cat sitting towards right</span> and <span class=\"ltx_text ltx_framed_underline\" id=\"A1.T2.3.1.4.4.1.2\">a dog sitting towards left</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"A1.T2.3.1.4.4.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">dog</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.4.4.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.4.4.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.4.4.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.5.5\">\n<td class=\"ltx_td\" id=\"A1.T2.3.1.5.5.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.5.5.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">cat</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.5.5.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.5.5.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.5.5.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T2.3.1.6.6.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">A living room with <span class=\"ltx_text ltx_framed_underline\" id=\"A1.T2.3.1.6.6.1.1\">a dog sitting towards right</span> and <span class=\"ltx_text ltx_framed_underline\" id=\"A1.T2.3.1.6.6.1.2\">a cat sitting towards left</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.6.6.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">dog</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.6.6.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.6.6.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.1.6.6.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.3.1.7.7\">\n<td class=\"ltx_td ltx_border_bb\" id=\"A1.T2.3.1.7.7.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.1.7.7.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">cat</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.1.7.7.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.1.7.7.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.3.1.7.7.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">0</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 2: Effectiveness of ChatGPT in modeling spatial relationships. We observe that ChatGPT perfectly follows the object positions given prompt with clearly defined object positions. While as it shows inherent bias for the prompts with ambiguous object poistions."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2310.10640v2_figure_1.png",
+ "caption": "Figure 1: Current state-of-the-art text-to-image models (Columns 1-4) face challenges when dealing with lengthy and detailed text prompts, resulting in the exclusion of objects and fine-grained details. Our approach (Column 5) adeptly encompasses all the objects described, preserving their intricate features and spatial characteristics as outlined in the two white boxes.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/intro_image_arxiv.png"
+ },
+ "2": {
+ "figure_path": "2310.10640v2_figure_2.png",
+ "caption": "Figure 2: Global Scene Generation: Our proposed approach takes a long text prompt describing a complex scene and leverages an LLM to generate k\ud835\udc58kitalic_k layouts which are then interpolated to a single layout, ensuring the spatial accuracy of object placement. Along with the layouts, we also query an LLM to generate object descriptions along with a concise background prompt summarizing the scene\u2019s essence.\nA Layout-to-Image model is employed which transforms the layout into an initial image. Iterative Refinement Scheme: The content of each box proposal is refined using a diffusion model conditioned on a box mask, a (generated) reference image for the box, and the source image, guided by a multi-modal signal.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/iclr_main_figure_revised.png"
+ },
+ "3": {
+ "figure_path": "2310.10640v2_figure_3.png",
+ "caption": "Figure 3: Effect of interpolation factor \u03b7\ud835\udf02\\etaitalic_\u03b7: We interpolate the k\ud835\udc58kitalic_k bounding boxes for each object and control the interpolation by the factor \u03b7\ud835\udf02\\etaitalic_\u03b7. We visualize the change in the bounding box location of \u201da white cat\u201d highlighted in the text for different \u03b7\ud835\udf02\\etaitalic_\u03b7 values from 0.1 to 0.9 with increments of 0.1. Best viewed in zoom.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/interpolation_diagram_arxiv.png"
+ },
+ "4": {
+ "figure_path": "2310.10640v2_figure_4.png",
+ "caption": "Figure 4: User study. A majority of users picked our method compared to prior works when presented with a 2-AFC task of selecting the image that adheres to the given prompt the most.",
+ "url": "http://arxiv.org/html/2310.10640v2/x1.png"
+ },
+ "5": {
+ "figure_path": "2310.10640v2_figure_5.png",
+ "caption": "Figure 5: Qualitative comparisons: We compare our image generation method to state-of-the-art baselines, including those using layouts. The underlined text in the text prompts represents the objects, their characteristics, and spatial properties. Red text highlights missing objects, purple signifies inaccuracies in object positioning, and black text points out implausible or deformed elements. Baseline methods often omit objects and struggle with spatial accuracy (first four columns), while our approach excels in capturing all objects and preserving spatial attributes (last column).",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/qualitative_samples_arxiv.png"
+ },
+ "6": {
+ "figure_path": "2310.10640v2_figure_6.png",
+ "caption": "Figure 6: Effect of Layout Interpolation: Our layout interpolation method (last column) significantly improves object spatial positioning compared to non-interpolated cases (first two columns). Best viewed in zoom.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/figure_layout_ablation_arxiv.png"
+ },
+ "7": {
+ "figure_path": "2310.10640v2_figure_7.png",
+ "caption": "Figure 7: Effect of Guidance: Without guidance signal, the composed image does not follow the properties corresponding to its description and visual appearance. In contrast, the one with the guidance (right) adheres to the visual prototype and description.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/figure_guidance_arxiv.png"
+ },
+ "8": {
+ "figure_path": "2310.10640v2_figure_8.png",
+ "caption": "Figure 8: Noise Correction: The noise correction strategy removes noise and certain redundant artifacts from the image that are unnecessary.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/noise_corrected.png"
+ },
+ "9": {
+ "figure_path": "2310.10640v2_figure_9.png",
+ "caption": "Figure 9: Qualitative comparisons: We provide further qualitative comparisons to our approach against the state-of-the-art baselines. The underlined text in the text prompts represents the objects, their characteristics, and spatial properties. Baseline methods often omit objects and struggle with spatial accuracy (first four columns), while our approach excels in capturing all objects and preserving spatial attributes (last column).",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/supplementary_qualitative_arxiv.png"
+ },
+ "10": {
+ "figure_path": "2310.10640v2_figure_10.png",
+ "caption": "Figure 10: We provide qualitative comparisons of our approach against DeepFloyd and DenseDiffusion. The underlined text in the text prompts represents the objects, their characteristics, and spatial properties. Baseline methods often omit objects and struggle with spatial accuracy (first 2 columns), while our approach (last column) excels in capturing all objects and preserving spatial attributes. The text in black below images (if present) shows the unrealistic nature of image, red text enlists the missing objects in the image and pink text refers to location misalignment of the object.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/rebuttal_vis_comparison.png"
+ },
+ "11": {
+ "figure_path": "2310.10640v2_figure_11.png",
+ "caption": "Figure 11: Effect of number of layouts on final generated image. We notice that final generated image is coherent in all the cases and aligns well with the textual prompt in terms of object attributes and spatial positions. Note that the position of cat and dog are well defined from the text i.e. cat is on the right of dog.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/iclr_interpolation_effect.png"
+ },
+ "12": {
+ "figure_path": "2310.10640v2_figure_12.png",
+ "caption": "Figure 12: Effect of Interpolation: While ChatGPT is good at generating layouts from textual prompts, however, in few cases it can generate misaligned layouts (2nd row), leading to images missing certain objects such as cat (last row). We notice that the position of cat and dog are not well defined from the textual prompt.",
+ "url": "http://arxiv.org/html/2310.10640v2/extracted/5360552/images/iclr_rebuttal_extreme_case.png"
+ },
+ "13": {
+ "figure_path": "2310.10640v2_figure_13.png",
+ "caption": "Figure 13: User study on quality. A majority of users picked our method compared to prior works when presented with a 2-AFC task of selecting the image with highest fidelity.",
+ "url": "http://arxiv.org/html/2310.10640v2/x2.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Wasserstein generative adversarial networks.",
+ "author": "Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou.",
+ "venue": "In International conference on machine learning, pp. 214\u2013223. PMLR, 2017.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Blended diffusion for text-driven editing of natural images.",
+ "author": "Omri Avrahami, Dani Lischinski, and Ohad Fried.",
+ "venue": "2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18187\u201318197, 2021.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Universal guidance for diffusion models.",
+ "author": "Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 843\u2013852, 2023.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Large scale gan training for high fidelity natural image synthesis.",
+ "author": "Andrew Brock, Jeff Donahue, and Karen Simonyan.",
+ "venue": "arXiv preprint arXiv:1809.11096, 2018.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Training-free layout control with cross-attention guidance.",
+ "author": "Minghao Chen, Iro Laina, and Andrea Vedaldi.",
+ "venue": "arXiv preprint arXiv:2304.03373, 2023.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Diffusion models beat gans on image synthesis.",
+ "author": "Prafulla Dhariwal and Alexander Nichol.",
+ "venue": "Advances in neural information processing systems, 34:8780\u20138794, 2021.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Cogview: Mastering text-to-image generation via transformers.",
+ "author": "Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al.",
+ "venue": "Advances in Neural Information Processing Systems, 34:19822\u201319835, 2021.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Diffusion self-guidance for controllable image generation.",
+ "author": "Dave Epstein, Allan Jabri, Ben Poole, Alexei A Efros, and Aleksander Holynski.",
+ "venue": "arXiv preprint arXiv:2306.00986, 2023.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "The pascal visual object classes (voc) challenge.",
+ "author": "M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman.",
+ "venue": "International Journal of Computer Vision, 88(2):303\u2013338, June 2010.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Frido: Feature pyramid diffusion for complex scene image synthesis.",
+ "author": "Wan-Cyuan Fan, Yen-Chun Chen, DongDong Chen, Yu Cheng, Lu Yuan, and Yu-Chiang Frank Wang.",
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 579\u2013587, 2023.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Layoutgpt: Compositional visual planning and generation with large language models.",
+ "author": "Weixi Feng, Wanrong Zhu, Tsu-jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, and William Yang Wang.",
+ "venue": "arXiv preprint arXiv:2305.15393, 2023.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Make-a-scene: Scene-based text-to-image generation with human priors.",
+ "author": "Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman.",
+ "venue": "In European Conference on Computer Vision, pp. 89\u2013106. Springer, 2022.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Generative adversarial nets.",
+ "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.",
+ "venue": "Advances in neural information processing systems, 27, 2014.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Vector quantized diffusion model for text-to-image synthesis.",
+ "author": "Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10696\u201310706, 2022.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Improved training of wasserstein gans.",
+ "author": "Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville.",
+ "venue": "Advances in neural information processing systems, 30, 2017.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Clipscore: A reference-free evaluation metric for image captioning.",
+ "author": "Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi.",
+ "venue": "arXiv preprint arXiv:2104.08718, 2021.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Classifier-free diffusion guidance.",
+ "author": "Jonathan Ho and Tim Salimans.",
+ "venue": "arXiv preprint arXiv:2207.12598, 2022.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Denoising diffusion probabilistic models.",
+ "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.",
+ "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Counting guidance for high fidelity text-to-image synthesis.",
+ "author": "Wonjun Kang, Kevin Galim, and Hyung Il Koo.",
+ "venue": "arXiv preprint arXiv:2306.17567, 2023.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "A style-based generator architecture for generative adversarial networks.",
+ "author": "Tero Karras, Samuli Laine, and Timo Aila.",
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401\u20134410, 2019.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Diffusionclip: Text-guided diffusion models for robust image manipulation.",
+ "author": "Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2426\u20132435, 2022.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Dense text-to-image generation with attention modulation.",
+ "author": "Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, and Jun-Yan Zhu.",
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7701\u20137711, 2023.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "23": {
331
+ "title": "On convergence and stability of gans.",
332
+ "author": "Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira.",
333
+ "venue": "arXiv preprint arXiv:1705.07215, 2017.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "24": {
339
+ "title": "Does unsupervised grammar induction need pixels?",
340
+ "author": "Boyi Li, Rodolfo Corona, Karttikeya Mangalam, Catherine Chen, Daniel Flaherty, Serge Belongie, Kilian Q. Weinberger, Jitendra Malik, Trevor Darrell, and Dan Klein.",
341
+ "venue": "arXiv preprint arXiv:2212.10564, 2022a.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "25": {
347
+ "title": "Grounded language-image pre-training, 2022b.",
348
+ "author": "Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao.",
349
+ "venue": null,
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "26": {
355
+ "title": "Gligen: Open-set grounded text-to-image generation.",
356
+ "author": "Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee.",
357
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22511\u201322521, 2023.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "27": {
363
+ "title": "Llm-grounded diffusion: Enhancing prompt understanding of text-to-image diffusion models with large language models.",
364
+ "author": "Long Lian, Boyi Li, Adam Yala, and Trevor Darrell.",
365
+ "venue": "arXiv preprint arXiv:2305.13655, 2023.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "28": {
371
+ "title": "Microsoft coco: Common objects in context.",
372
+ "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.",
373
+ "venue": "In Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pp. 740\u2013755. Springer, 2014.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "29": {
379
+ "title": "More control for free! image synthesis with semantic diffusion guidance.",
380
+ "author": "Xingchao Liu, Dahun Hwang Park, Samaneh Azadi, Guandao Zhang, Armen Chopikyan, Yizhe Hu, \u2026, and Trevor Darrell.",
381
+ "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 289\u2013299, 2023.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "30": {
387
+ "title": "Tf-icon: Diffusion-based training-free cross-domain image composition.",
388
+ "author": "Shilin Lu, Yanzhu Liu, and Adams Wai-Kin Kong.",
389
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "31": {
395
+ "title": "An empirical study of catastrophic forgetting in large language models during continual fine-tuning.",
396
+ "author": "Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang.",
397
+ "venue": "arXiv preprint arXiv:2308.08747, 2023.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "32": {
403
+ "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models.",
404
+ "author": "Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen.",
405
+ "venue": "arXiv preprint arXiv:2112.10741, 2021.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "33": {
411
+ "title": "Chatgpt: A large-scale generative model for conversations.",
412
+ "author": "OpenAI.",
413
+ "venue": "2021.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "34": {
419
+ "title": "Semantic image synthesis with spatially-adaptive normalization.",
420
+ "author": "Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu.",
421
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2337\u20132346, 2019.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "35": {
427
+ "title": "Grounded text-to-image synthesis with attention refocusing.",
428
+ "author": "Quynh Phung, Songwei Ge, and Jia-Bin Huang.",
429
+ "venue": "arXiv preprint arXiv:2306.05427, 2023.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "36": {
435
+ "title": "Learning transferable visual models from natural language supervision.",
436
+ "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.",
437
+ "venue": "In International conference on machine learning, pp. 8748\u20138763. PMLR, 2021.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "37": {
443
+ "title": "Zero-shot text-to-image generation.",
444
+ "author": "Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever.",
445
+ "venue": "In International Conference on Machine Learning, pp. 8821\u20138831. PMLR, 2021.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "38": {
451
+ "title": "Hierarchical text-conditional image generation with clip latents.",
452
+ "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
453
+ "venue": "arXiv preprint arXiv:2204.06125, 1(2):3, 2022.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "39": {
459
+ "title": "Generative adversarial text to image synthesis.",
460
+ "author": "Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.",
461
+ "venue": "In International conference on machine learning, pp. 1060\u20131069. PMLR, 2016.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "40": {
467
+ "title": "High-resolution image synthesis with latent diffusion models.",
468
+ "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.",
469
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684\u201310695, 2022.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "41": {
475
+ "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation.",
476
+ "author": "Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman.",
477
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22500\u201322510, 2023.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "42": {
483
+ "title": "Photorealistic text-to-image diffusion models with deep language understanding.",
484
+ "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.",
485
+ "venue": "Advances in Neural Information Processing Systems, 35:36479\u201336494, 2022.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "43": {
491
+ "title": "Denoising diffusion implicit models.",
492
+ "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.",
493
+ "venue": "arXiv preprint arXiv:2010.02502, 2020a.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "44": {
499
+ "title": "Generative modeling by estimating gradients of the data distribution.",
500
+ "author": "Yang Song and Stefano Ermon.",
501
+ "venue": "Advances in neural information processing systems, 32, 2019.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "45": {
507
+ "title": "Score-based generative modeling through stochastic differential equations.",
508
+ "author": "Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole.",
509
+ "venue": "arXiv preprint arXiv:2011.13456, 2020b.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "46": {
515
+ "title": "Deepfloyd if, 2023.",
516
+ "author": "StabilityAI.",
517
+ "venue": null,
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "47": {
523
+ "title": "Image synthesis from reconfigurable layout and style.",
524
+ "author": "Wei Sun and Tianfu Wu.",
525
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10531\u201310540, 2019.",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "48": {
531
+ "title": "Object-centric image generation from layouts.",
532
+ "author": "Tristan Sylvain, Pengchuan Zhang, Yoshua Bengio, R Devon Hjelm, and Shikhar Sharma.",
533
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 2647\u20132655, 2021.",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "49": {
539
+ "title": "Df-gan: A simple and effective baseline for text-to-image synthesis.",
540
+ "author": "Ming Tao, Hao Tang, Fei Wu, Xiao-Yuan Jing, Bing-Kun Bao, and Changsheng Xu.",
541
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16515\u201316525, 2022.",
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "50": {
547
+ "title": "Attngan: Fine-grained text to image generation with attentional generative adversarial networks.",
548
+ "author": "Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He.",
549
+ "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1316\u20131324, 2018.",
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "51": {
555
+ "title": "Paint by example: Exemplar-based image editing with diffusion models.",
556
+ "author": "Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, and Fang Wen.",
557
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18381\u201318391, 2023a.",
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "52": {
563
+ "title": "Reco: Region-controlled text-to-image generation.",
564
+ "author": "Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Linjie Li, Kevin Lin, Chenfei Wu, Nan Duan, Zicheng Liu, Ce Liu, Michael Zeng, et al.",
565
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14246\u201314255, 2023b.",
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "53": {
571
+ "title": "Modeling image composition for complex scene generation.",
572
+ "author": "Zuopeng Yang, Daqing Liu, Chaoyue Wang, Jie Yang, and Dacheng Tao.",
573
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7764\u20137773, 2022.",
574
+ "url": null
575
+ }
576
+ },
577
+ {
578
+ "54": {
579
+ "title": "Scaling autoregressive models for content-rich text-to-image generation.",
580
+ "author": "Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al.",
581
+ "venue": "arXiv preprint arXiv:2206.10789, 2(3):5, 2022.",
582
+ "url": null
583
+ }
584
+ },
585
+ {
586
+ "55": {
587
+ "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks.",
588
+ "author": "Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas.",
589
+ "venue": "In Proceedings of the IEEE international conference on computer vision, pp. 5907\u20135915, 2017.",
590
+ "url": null
591
+ }
592
+ },
593
+ {
594
+ "56": {
595
+ "title": "Stackgan++: Realistic image synthesis with stacked generative adversarial networks.",
596
+ "author": "Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas.",
597
+ "venue": "IEEE transactions on pattern analysis and machine intelligence, 41(8):1947\u20131962, 2018a.",
598
+ "url": null
599
+ }
600
+ },
601
+ {
602
+ "57": {
603
+ "title": "Cross-modal contrastive learning for text-to-image generation.",
604
+ "author": "Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang.",
605
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 833\u2013842, 2021.",
606
+ "url": null
607
+ }
608
+ },
609
+ {
610
+ "58": {
611
+ "title": "The unreasonable effectiveness of deep features as a perceptual metric, 2018b.",
612
+ "author": "Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang.",
613
+ "venue": null,
614
+ "url": null
615
+ }
616
+ },
617
+ {
618
+ "59": {
619
+ "title": "Image generation from layout.",
620
+ "author": "Bo Zhao, Lili Meng, Weidong Yin, and Leonid Sigal.",
621
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8584\u20138593, 2019.",
622
+ "url": null
623
+ }
624
+ },
625
+ {
626
+ "60": {
627
+ "title": "Layoutdiffusion: Controllable diffusion model for layout-to-image generation.",
628
+ "author": "Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li.",
629
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 22490\u201322499, June 2023.",
630
+ "url": null
631
+ }
632
+ },
633
+ {
634
+ "61": {
635
+ "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models.",
636
+ "author": "Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny.",
637
+ "venue": "arXiv preprint arXiv:2304.10592, 2023.",
638
+ "url": null
639
+ }
640
+ }
641
+ ],
642
+ "url": "http://arxiv.org/html/2310.10640v2"
643
+ }
20240225/2310.12934v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.14592v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.15213v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.18285v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2310.18306v3.json ADDED
@@ -0,0 +1,443 @@
1
+ {
2
+ "title": "Supervised and Penalized Baseline Correction",
3
+ "abstract": "Spectroscopic measurements can show distorted spectral shapes arising\nfrom a mixture of absorbing and scattering contributions. These\ndistortions (or baselines) often manifest themselves as non-constant\noffsets or low-frequency oscillations. As a result, these baselines\ncan adversely affect analytical and quantitative results. Baseline\ncorrection is an umbrella term where one applies pre-processing\nmethods to obtain baseline spectra (the unwanted distortions) and\nthen remove the distortions by differencing. However, current\nstate-of-the art baseline correction methods do not utilize analyte\nconcentrations even if they are available, or even if they\ncontribute significantly to the observed spectral variability. We\nexamine a class of state-of-the-art methods (penalized baseline\ncorrection) and modify them such that they can accommodate a priori\nanalyte concentrations such that prediction can be enhanced.\nPerformance will be assessed on two near infra-red data sets across\nboth classical penalized baseline correction methods (without\nanalyte information) and modified penalized baseline correction\nmethods (leveraging analyte information).",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Spectroscopic measurements, e.g., those obtained from near infrared\n(NIR) instrumentation, often show distorted spectral shapes arising\nfrom a mixture of absorbing and scattering contributions. NIR\nspectral scattering is caused by differences in path length due to\nphysical artifacts where light ballistically deviates from a straight\nline into one or multiple paths with no absorption. Spectrally, this\nscattering typically manifests itself as undulating alterations,\ni.e., non-constant offsets and low frequency curves; see [1 ###reference_bx1###]\nfor a catalogue of spectral distortions due to scattering. These\nscattering distortions can adversely affect qualitative or\nquantitative analytical results. The phrase baseline\ncorrection refers to pre-processing methods that\nremove the physical artifacts in spectra due to scattering.\nAs a consequence of baseline removal, subsequent chemical\ninterpretation and quantitative analyses is then more\nvalid and applicable.\nHistorically, a common method for baseline correction is to\nfit a quadratic or higher-order polynomial function to each\nspectrum and then use the difference between the spectrum and the\nfitted function as the corrected spectrum\n[2 ###reference_bx2###, 3 ###reference_bx3###, 4 ###reference_bx4###, 5 ###reference_bx5###].\nFor example, Multiplicative Scatter Correction (MSC) is\none such procedure: it corrects each measured spectrum\nusing fitted coefficients (slope and intercept) of a reference\nspectrum [6 ###reference_bx6###]. (The reference spectrum is usually just\nthe average spectrum of the calibration set.) There are\nextensions to MSC (e.g., Extended MSC) that include\nfirst-order and/or second-order polynomial fitting to the\nreference spectrum and wavelength axis [7 ###reference_bx7###, 8 ###reference_bx8###].\nAlternatively, baseline removal can also be achieved via\nderivative spectra (i.e., a scaled version of the first or\nsecond derivative of the original spectra). Differentiation\nremoves low-frequency components (e.g., the\nsecond derivative removes constant and linear baselines).\nHowever, differentiation also introduces several problems.\nThe numerical derivative can amplify noise and requires\nsmoothing beforehand, with the final results being highly\ndependent on the parameters of the smoothing algorithm.\nSavitzky-Golay (SG) filtering, based on local least-squares\nfitting of the data by polynomials, is perhaps the most\nwell-known method in chemometrics for smoothing and\ncomputing derivatives on noisy data [9 ###reference_bx9###].\nAlthough SG is a common technique for baseline removal,\nSG filtering can unnecessarily reduce the\nsignal-to-noise ratio, and is prone to artifacts at the\nend of the wavelength range [10 ###reference_bx10###].\nHence, derivative-based baseline removal often amounts\nto a balancing act\u2014it must be smooth enough to\n\u201cclean up\u201d unwanted noise, but not so much as to remove\nimportant spectral gradients.\nOur particular interest is in the class of derivative smoothers\nthat has its roots in the penalized least squares approach of Eilers\n[11 ###reference_bx11###]. 
Later penalized variants extended the Eilers\napproach by using weighted least squares generalizations\nthat iteratively updated the baseline for a given\nspectrum [12 ###reference_bx12###, 13 ###reference_bx13###, 14 ###reference_bx14###].\nHowever, what is peculiar about these state-of-the-art penalized baseline\ncorrection methods is the following observation: they do not\nconsider analyte concentrations across samples.\nThis is curious because strongly absorbing or scattering\nanalytes, possibly distinct from the response variable or\nanalyte of interest, can dominate or strongly influence the observed\nspectral variability.111An earlier\npaper [3 ###reference_bx3###] did consider analyte concentrations via a\ndifferent class of smoothing, but its regime of applicability was\nquite restrictive: a mixture of solvents in which the concentrations\nof all component species\u2014other than the analyte of interest\u2014is\nknown.\nFor example, biological samples contain\nconsiderable moisture content, and water absorbance often dominates\nthe observed spectral variability across multiple bands in the NIR\nspectra. However, this moisture information is not considered for\nbaseline correction purposes even when reference measurements for\nmoisture are available. In short, current baseline correction\nmethods are unsupervised in that they are agnostic with respect\nto analyte concentrations.\nWe propose how current penalized baseline correction methods can\nbe modified to accommodate reference measurements associated with\nstrongly absorbing or strongly scattering analytes. We call our\nproposed approach Supervised Penalized Baseline Correction\n(SPBC). In Section 2 ###reference_###, we discuss current methods of\npenalized baseline correction. In Section 3 ###reference_###, we\npropose a modification that can accommodate reference measurements.\nSection 4 ###reference_### describes the data sets and the performance\nmetrics used for assessment, and details the procedure for\nselecting tuning parameters. Section 5 ###reference_### evaluates\nperformance on a suite of baseline correction tasks using two\nNIR data sets. Section 6 ###reference_### states the the conclusion\nand suggestions for future work."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Penalized Baseline Correction",
15
+ "text": "The approach discussed here relies on penalized least squares\n(or Tikhonov regularization in mathematical parlance) and borrows\nheavily from the algorithmic machinery in [11 ###reference_bx11###]. We will\nuse the phrase Penalized Baseline Correction (PBC) to\ncollectively refer to the spectroscopic baseline correction\napproaches discussed by Paul Eilers in [11 ###reference_bx11###] and later variants\ndiscussed in Section 2.2 ###reference_###."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Single Spectrum Formulation of Eilers",
21
+ "text": "Suppose indicates a spectrum from a sample and \ndenotes the baseline correction vector to be fitted or solved for.\nThe misfit between and can be expressed as\n. However, we want to be\nsmooth, and as a result, the roughness can be controlled by\nintroducing a penalty term such that we seek to minimize the\nfollowing function [11 ###reference_bx11###]\nwhere the matrix is termed the discrete smoothing operator\n[15 ###reference_bx15###]. The matrix typically takes on one of two\nforms\u2014 or \u2014where the matrices\nare scaled approximations to the first and second derivative\noperators. In the case of the first derivative operator\nwhere , one can express the\ntwo-norm penalty in Eq.(1 ###reference_###) as\n.\nBy setting the gradient of in Eq.(1 ###reference_###)\nequal to ,\nwe arrive at the linear system:\nWhen , then ; but this would be a\nnon-sensical choice since the baseline-corrected spectra\nwould be . Hence, small values of\n (i.e., ) are not recommended."
22
+ },
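As a minimal Python sketch of the single-spectrum approach just described (assuming the Whittaker-style system (I + \u03bbD^T D) z = y with a first-difference D; the function name `eilers_baseline` and the default `lam` are illustrative, not from the source):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def eilers_baseline(y, lam=1e4):
    """Fit a smooth baseline z to one spectrum y by solving (I + lam*D'D) z = y."""
    p = len(y)
    # first-difference operator D1: each row applies a [-1, 1] stencil
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(p - 1, p))
    A = sparse.eye(p) + lam * (D.T @ D)
    return spsolve(A.tocsc(), np.asarray(y, dtype=float))
```

The baseline-corrected spectrum is then `y - eilers_baseline(y, lam)`; larger `lam` yields smoother baselines.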
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Weighted Variants",
27
+ "text": "To introduce flexibility, one can weight the misfit term\n in Eq.(1 ###reference_###)\nwith a diagonal matrix\n\ncontaining non-negative weight entries:\nSubsequent PBC variants of [12 ###reference_bx12###, 13 ###reference_bx13###, 14 ###reference_bx14###]\n(known as ASLS, AIRPLS\u2009 and ARPLS, respectively) go much\nfurther and construct a separate weight matrix for each\nsample . Moreover, each sample-specific weight\nmatrix is also iteratively updated such that the normal\nequations in Eq.(4 ###reference_###) become\n###figure_1### where and correspond to the\n sample and \niteration, respectively. Likewise, the baseline vector\n\ndenotes the baseline-corrected spectrum\nconstructed for the sample\n at\nthe iteration. The \ndiagonal weight matrix is expressed as\n.\nFor example, AIRPLS\u2009 updates the \ndiagonal weight (associated with the \nwavelength) in the following fashion:\nASLS\u2009 and ARPLS\u2009 use different mechanisms to update the\ndiagonal weight entries in .\nFigure 1 ###reference_### illustrates the sequence of\nbaseline correction using AIRPLS: the original spectra, the\nbaselines, and the baseline-corrected spectra on the\ncookie data set (see Section\n4.1.1 ###reference_.SSS1### for a description of this data set).\nThe left-most subplot displays the spectra where the\ncolored lines indicate the level of water concentration\u2014as\ndisplayed in the colorbar to the immediate right.\n(With respect to baseline correction,\nwater is the analyte of interest to be discussed later in this paper.)\nThe middle two subplots display the baseline spectra constructed\nfrom and , and the right-most two subplots\ndisplay the baseline-corrected spectra for and\n.\nThis figure\nhighlights the basic question: for regression purposes, is\nit better to use the original spectra or the\nbaseline-corrected spectra\n( or )?\nThe key observation is the following for the variant PBC\napproaches: whereas the Eilers approach applies the same baseline\ncorrection procedure to each of the spectra \n(via pre-multiplication by ,\nthe weighted PBC variants perform different but simultaneous\nbaseline corrections in parallel."
28
+ },
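A sketch of the iteratively reweighted loop described above, following the weight update popularized in the airPLS literature [13]; the stopping tolerance, `max_iter`, and default `lam` are illustrative choices:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def airpls_baseline(y, lam=1e4, max_iter=15):
    """Solve (W + lam*D'D) z = W y, updating the diagonal weights W each pass."""
    y = np.asarray(y, dtype=float)
    p = len(y)
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(p - 1, p))
    P = lam * (D.T @ D)
    w = np.ones(p)
    z = y.copy()
    for t in range(1, max_iter + 1):
        W = sparse.diags(w)
        z = spsolve((W + P).tocsc(), w * y)
        d = y - z
        neg = d < 0
        dssn = np.abs(d[neg].sum())
        if dssn < 1e-3 * np.abs(y).sum():   # residual criterion from the airPLS literature
            break
        w[~neg] = 0.0                       # points above the baseline are treated as peaks
        w[neg] = np.exp(t * np.abs(d[neg]) / dssn)
    return z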
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Multiple Spectrum Formulation",
33
+ "text": "Instead of operating on one spectrum at a time, we\nextend Eq.(1 ###reference_###) and Eq.(4 ###reference_###)\nto accommodate an entire matrix of spectra \nand an entire matrix of baselines where\n is the baseline associated with .\nThis can be accomplished using the\nFrobenius norm:\nThe Frobenius norm of an \nmatrix is expressed as\n\nand can be thought of as a two-norm on the \u201cflattened version\u201d\nof where the flattened vector now\nhas size . Setting the gradient of\nEq.(5 ###reference_###) equal to zero (in addition\nto its weighted equivalent in Eq.(4 ###reference_###)),\nwe obtain the subsequent normal equations\n[16 ###reference_bx16###]:\nThe equations in Eq.(6 ###reference_###)\nare essentially the same as in\nEqs.(3 ###reference_###,4 ###reference_###)\nbut the coefficient matrices\n and\n are\napplied to all baseline spectra simultaneously as opposed to one\nspectrum at a time. Note that in\nEqs.(3 ###reference_###,4 ###reference_###), the\nspectra and are column vectors while the\ncollective spectra in and are aligned\nrow-wise. To maintain alignment consistency with\nEqs.(3 ###reference_###,4 ###reference_###), one could\nrewrite the equations in a column-wise format, e.g.,\n\nand\n."
34
+ },
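Because the coefficient matrix in the unweighted normal equations does not depend on the sample, one factorization can be reused across all rows of X; a sketch (variable names illustrative):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def eilers_baseline_batch(X, lam=1e4):
    """Solve (I + lam*D'D) B^T = X^T for all n spectra at once."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(p - 1, p))
    A = (sparse.eye(p) + lam * (D.T @ D)).tocsc()
    lu = splu(A)            # factor once
    B = lu.solve(X.T).T     # reuse the factorization for every spectrum
    return X - B            # baseline-corrected spectra
```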
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Supervised Penalized Baseline Correction",
39
+ "text": "In Sections 2.1 ###reference_### and 2.3 ###reference_###, only the\nmatrix is used to construct the baseline matrix\n. However, the approach in Section 2.3 ###reference_###\ncan be modified to accommodate a priori analyte\ninformation. The forthcoming supervised PBC approaches will\nbe denoted by the acronym SPBC. The first SPBC approach\nis based on Nonlinear Iterative Partial Least Squares (NIPALS)\nand will be denoted as SPBCN.\nThe second approach is based on Inverse Least Squares (ILS)\nand will be denoted as\nSPBCI."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "NIPALS framework of SPBCN",
45
+ "text": "Let the vector denote an analyte\nthat will be used to construct the baseline. Here, we extend the Eilers\napproached via the NIPALS outer-product approach\nNote that the Eilers approach of Eq.(5 ###reference_###) and the NIPALS\nextension in Eq.(7 ###reference_###) are functionally equivalent with \nbeing swapped out for with in Eq.(7 ###reference_###). In effect,\nSPBCN\u2009 baseline-corrects the residual or deflated matrix \ninstead of . (When , SPBCN\u2009 reduces to the\nEilers approach.) Since Eq.(7 ###reference_###) is now a function of two\nvariables , we set the gradients of \u2014separately\nwith respect to and \u2014equal to zero and obtain:\nThe above equations can now be solved via alternating least squares (ALS):\nsolve for in the step, plug in\nthe resultant in the equations associated with\n and solve for .\nThe pseudocode for this ALS approach is given in\nAlgorithm 1 ###reference_###. The most computationally intensive\nstep in the pseudocode occurs in Step 4, i.e.,\nsolve\n.\nIn the classical PBC approach of Eilers in Eq.(1 ###reference_###),\nsparse matrix linear libraries coupled with\nCholesky factorization was used to efficiently\nsolve the linear system.\nHowever, a much faster numerical implementation can be performed,\nparticularly in the case of ; see Section\nC ###reference_3### of the Supplement.\nFigure 2 ###reference_### gives an example of the SPBCN-based\nbaseline correction process for the cookie data set\nwhere water concentrations in are\nused to construct the baselines. Compared to\nFigure 1 ###reference_### where baseline correction\nvia AIRPLS\u2009 was performed, the\nbaseline-corrected spectra via SPBC exhibit a more sequential\narrangement of spectra as a function of water concentration\u2014as\nabsorbance increases for a particular wavelength, the spectral\nvalues increase in concentration.\n###figure_2###"
46
+ },
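A minimal sketch of the alternation in Algorithm 1, assuming the deflation model X \u2248 y_a c^T + B with a smoothness penalty on the rows of B; the closed-form updates below are one plausible reading of the garbled equations, and `y_a`, `c`, `B` are illustrative names:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def spbc_nipals(X, y_a, lam=1e4, n_iter=25):
    """Alternate between the analyte loading c and the smooth baseline B."""
    X = np.asarray(X, dtype=float)
    y_a = np.asarray(y_a, dtype=float)
    n, p = X.shape
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(p - 1, p))
    lu = splu((sparse.eye(p) + lam * (D.T @ D)).tocsc())
    B = np.zeros((n, p))
    for _ in range(n_iter):
        c = (X - B).T @ y_a / (y_a @ y_a)   # least-squares loading given B
        R = X - np.outer(y_a, c)            # deflate by the analyte direction
        B = lu.solve(R.T).T                 # smooth baseline of the deflated matrix
    return X - B                            # supervised baseline-corrected spectra
```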
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "ILS framework of SPBCI",
51
+ "text": "Instead of the outer product approach of SPBCN, we can employ an\ninner product approach via ILS to extend Eq.(5 ###reference_###):\nHere, we are trying to relate the baseline corrected spectra\n to the analyte concentrations in \nvia the regression vector .\nAs with Eq.(7 ###reference_###), we set of the gradients,\nseparately with respect to and ,\nequal to zero and obtain:\nThe pseudocode for SPBCI\u2009 via ALS is given in Algorithm 2.\nThe most computationally intensive\nsteps in the pseudocode occurs in Steps 2 and 4, i.e., solve\n for \nand\n,\nrespectively. See Section C ###reference_3### in the Supplement for\ndetails on how these steps were numerically implemented."
52
+ },
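Likewise, a sketch of the Algorithm 2 alternation, assuming the objective ||y_a - (X - B)w||^2 + \u03bb||B D^T||_F^2; setting the gradient with respect to B to zero then gives B(ww^T + \u03bbD^T D) = (Xw - y_a)w^T. Names and defaults are illustrative:

```python
import numpy as np
from scipy import sparse

def spbc_ils(X, y_a, lam=1e4, n_iter=25):
    """Alternate between the regression vector w and the smooth baseline B."""
    X = np.asarray(X, dtype=float)
    y_a = np.asarray(y_a, dtype=float)
    n, p = X.shape
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(p - 1, p))
    P = lam * (D.T @ D).toarray()
    B = np.zeros((n, p))
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(X - B, y_a, rcond=None)  # update w given B
        M = np.outer(w, w) + P                           # symmetric coefficient matrix
        rhs = np.outer(X @ w - y_a, w)                   # rank-one right-hand side
        B = np.linalg.solve(M, rhs.T).T                  # solve B M = rhs
    return X - B
```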
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Sample Dependence",
57
+ "text": "In the Eilers approach where\n,\nthe baseline correction procedure for each spectrum\n is the same, i.e., pre-multiplication by\n.\nIn the weighted variants (e.g., ASLS\u2009 AIRPLS\u2009 or ARPLS),\n,\nand as a result, the baseline correction procedure is the not the\nsame for each spectrum. However, like the Eilers approach,\nbaseline correction for any one spectrum can be done\nin parallel, (i.e., the baseline correction done one spectrum\ndoes not depend on the baseline correction done an another\nspectrum). Baseline correction for SPBC approaches, on the other\nhand, cannot be done one spectrum at a time. They must be done\nin batch fashion. Thus, the baseline for each sample encodes\ninformation (on analyte concentration) across the entire\ncalibration set which is in contrast to previous approaches."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experimental Methods",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Data Sets",
69
+ "text": "We will examine two near infrared (NIR) data sets:\nthe milk and cookie data\nsets. The NIR spectra for these data sets are displayed in Figure\n3 ###reference_###.\n###figure_3###"
70
+ },
71
+ {
72
+ "section_id": "4.1.1",
73
+ "parent_section_id": "4.1",
74
+ "section_name": "4.1.1 Cookie Data Set",
75
+ "text": "The cookie data set contains measurements from quantitative NIR\nspectroscopy [17 ###reference_bx17###]. The intent of using this data set is to\ntest the feasibility of NIR spectroscopy to measure the\ncomposition of biscuit dough pieces. There are four analytes\nunder consideration: fat, sucrose, flour, and water. The\ncalculated percentages of these four ingredients represent the\nfour response variables. There are 72 samples in total: 40 samples in\nthe calibration set (with sample 23 being an outlier) and 32\nsamples in the separate prediction or validation set (with\nexample 21 considered as an outlier). An NIR reflectance\nspectrum is available for each dough piece. The spectral data\nconsist of 700 points measured from 1100 to 2498 nanometers (nm)\nin intervals of 2nm. In this data set, sucrose will be the\nresponse variable () to be predicted, while fat, water\nand flour will each separately be the analyte that will\nbe used to construct the baselines."
76
+ },
77
+ {
78
+ "section_id": "4.1.2",
79
+ "parent_section_id": "4.1",
80
+ "section_name": "4.1.2 Milk Data Set",
81
+ "text": "The milk data set consists of 298 samples measured across three\nseparate Microelectromechanical System (MEMS) NIR spectrometers\nin transmission mode [18 ###reference_bx18###]. The three spectrometers are\ndenoted in this paper as NIR-TM1, NIR-TM2 and NIR-TM3. The\nspectrum for each milk sample is an average of 20 replicates.\nNIR-TM1, NIR-TM2 and NIR-TM3 span 1100-1400nm, 1550-1950nm and\n2000-2450nm, respectively, with an interval of 2nm. There are\nsix primary analytes under consideration: fat, lactose, protein,\nurea, solute and dry matter.\nWe will focus on instruments NIR-TM2 and NIR-TM3.\nIn this data set, fat will be the analyte () that will\nbe used to construct the baselines. Lactose, protein, urea,\nsolute and dry matter will each separately be the response\nvariable or analyte to be predicted."
82
+ },
83
+ {
84
+ "section_id": "4.2",
85
+ "parent_section_id": "4",
86
+ "section_name": "Schemes involving the availability of",
87
+ "text": "The SPBC implementation depends on how much\ninformation associated with the analyte is\navailable. Data-wise, we will use the triplet\n.\nThe matrix denotes the spectra to be\nbaseline corrected, the vector \ncorresponds to the analyte that will be used for baseline\ncorrection, and the vector corresponds\nto the response variable or analyte whose concentrations we\nwant to predict. We will split the data into three parts:\nthe calibration (or training), tuning, and validation (or\ntest) sets, which will be denoted by the subscripts\n1, t and 2, i.e.,\n,\n and\n.\nThe tuning set will be aside and will be exclusively used to\nestimate the number of PLS latent dimensions. See Section\n4.4 ###reference_### for a more detailed explanation of how the data\nis partitioned. Ultimately, our goal is to enhance the\nprediction of by utilizing baseline corrected\nspectra constructed from\n and\n.\n(The symbol \u201c:=\u201d typically denotes that the left-hand side\nis defined as the expression on the right-hand side.) The\nprediction of proceeds in two steps to be\ndescribed next."
88
+ },
89
+ {
90
+ "section_id": "4.2.1",
91
+ "parent_section_id": "4.2",
92
+ "section_name": "4.2.1 Full and Partial Schemes",
93
+ "text": "We use\n and\n\nin algorithms 1 ###reference_### or 2 ###reference_### to\nobtain , and then split it into two parts\n and \ncorresponding to the calibration and validation sets\nsuch that . Computing\n and requires and ,\nrespectively.\nThe full scheme assumes\nthat we have full access to both \nand .\n(Suppose the reference measurements for both \nand are inexpensive and/or easy to obtain with respect\nto laboratory effort and time, then\n and\n\nwill be the inputs into Algorithms 1 and 2). The\npartial scheme assumes that we have full access to\n but only partial access to\n, that is we have knowledge of but\nnot .\nWithout access to , however, we will need reliable\napproximations or estimates to act as numerical proxies.\nInstead of using\n, we can use a combined set\nof known references and prediction estimates\n such that .\nIn short, for the partial scheme, we use \nto construct the baselines instead of .\nCompared to the partial scheme, we can expect the construction\nof the baseline spectra for the full scheme to be qualitatively\nbetter since known references are used.\nHence, the performance of the partial scheme will be highly\ndependent on the accuracy and precision associated with the\nestimates in .\nThe construction of the estimates in proceeds as follows\nfor each data partition. 80% of the samples are randomly sampled\nfrom the calibration set . The calibration\nmodel is then applied to and a prediction estimate\n is obtained. Another 80% of the samples\nare randomly sampled from the calibration set, the\nsubsequent model is then applied to and another prediction\nestimate is obtained. This process is\nrepeated for a total of 25 times such that we obtain the following\ncollection of estimates\n.\nThe prediction estimates outside the \u201cTukey interval\u201d (or\n) are removed and the\nremaining estimates are averaged to yield the final estimate for\n."
94
+ },
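A sketch of the proxy construction for the partial scheme: 25 random 80% refits, predictions pooled, estimates outside the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR) discarded, and the survivors averaged. The `fit` callable (returning an object with `.predict`) is a placeholder for, e.g., a PLS training routine:

```python
import numpy as np

def tukey_trimmed_estimates(fit, X1, y1, X2, n_boot=25, frac=0.8, seed=0):
    """Average bootstrap predictions that fall inside the Tukey fences."""
    rng = np.random.default_rng(seed)
    n = len(y1)
    preds = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=int(frac * n), replace=False)  # random 80% refit
        preds.append(fit(X1[idx], y1[idx]).predict(X2))
    P = np.vstack(preds)                                        # (n_boot, n_validation)
    q1, q3 = np.percentile(P, [25, 75], axis=0)
    iqr = q3 - q1
    keep = (P >= q1 - 1.5 * iqr) & (P <= q3 + 1.5 * iqr)
    return np.nanmean(np.where(keep, P, np.nan), axis=0)       # final proxy estimates
```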
95
+ {
96
+ "section_id": "4.2.2",
97
+ "parent_section_id": "4.2",
98
+ "section_name": "4.2.2 Build calibration model and predict",
99
+ "text": "Once has been obtained, we\nbaseline-correct the calibration and validation sets\nwhereby\n\nand\n,\nrespectively. We mean-center the calibration set\nand solve for\n using, for example, Partial Least Squares (PLS) regression.\nFinally, we then predict via"
100
+ },
101
+ {
102
+ "section_id": "4.3",
103
+ "parent_section_id": "4",
104
+ "section_name": "Baseline Correction Methods Examined",
105
+ "text": "We will examine several classes of penalized smoothing methods:\n1) no background correction (just using the original spectra\nwithout pre-processing);\n2) the original PBC approach of Eilers in Section 2.1 ###reference_###;\n3) a PBC smoothing variant of Section 2.2 ###reference_###; and\n3) the SPBC methods introduced in Section 3 ###reference_###.\nWe outline them below:\nNONE:\nHere, no background correction is applied. However, from\na background correction point of view, \nand the baseline corrected spectra is simply\n.\nNONE\u2009 then serves as the benchmark by which the other\nbaseline correction methods are intended to outperform.\nEILERS:\nThis refers to the construction of the baseline spectra\n by the original PBC approach of Eilers in Section\n2.1 ###reference_###.\nAIRPLS:\nThe baseline spectra are constructed via Adaptive\nIteratively Reweighted Penalized Least Squares [13 ###reference_bx13###].\nWith respect to the other PBC variants mentioned in Section\n2.2 ###reference_### (ASLS\u2009 and ARPLS) that use weighted least\nsquares, we observed that these variants performed\nqualitatively the same as AIRPLS. As a result, and for ease\nof illustration, we use AIRPLS\u2009 as the canonical PBC\nvariant.\nSPBC:\nThe SPBC methods construct the baseline spectra \nby accommodating analyte information. The SPBC approaches\ncan be subdivided by approach (inverse least squares versus\nNIPALS) and by scheme (full versus partial):\nSPBCI:F:\nInverse least squares coupled with the full scheme.\nSPBCI:P:\nInverse least squares coupled with the partial scheme.\nSPBCN:F:\nNIPALS coupled with the full scheme.\nSPBCN:P:\nNIPALS coupled with the partial scheme.\nWe also explored the smoothing approaches of Savitsky-Golay\n(SG) and Extended Multiplicative Scatter Correction (EMSC)\n[9 ###reference_bx9###, 6 ###reference_bx6###, 7 ###reference_bx7###]. Here, our version of EMSC\u2009 utilizes\na \u201cplain vanilla\u201d approach that accounts for wavelength\ndependencies where the fitting coefficients\n were modeled as\nHere, is the reference spectrum and\n\nis the vector of wavelengths. Although we have knowledge of\nthe concentrations of many analytes, we do not assume that we\nhave enough knowledge across the major chemical constituents\n(analytes and interferents) in the milk and cookie data sets;\nhence the rationale for employing the basic EMSC approach accounting\nonly for wavelength dependencies. We found that SG\u2009 and EMSC\u2009\nwere inferior to AIRPLS\u2009 in all instances (and in the case of\nSG, we even tried to optimize for frame length, or moving\nwindow width). As a consequence, and as was the case with\nASLS\u2009 and ARPLS, we also do not display performance results\nfor SG\u2009 and EMSC."
106
+ },
107
+ {
108
+ "section_id": "4.4",
109
+ "parent_section_id": "4",
110
+ "section_name": "Data Partitions and Assessment Metrics",
111
+ "text": "To ensure that performance results are not anecdotal to one\nparticular split of the data, we assess the performance across\n200 splits of the data. Each partition of the samples\nrandomly shuffles the data and splits it into three sets:\n45% (calibration), 5% (tuning) and 50% (validation or\ntesting). The first 45% of the samples will be used to build\nthe calibration model. The next 5% of the samples belong to\nthe tuning set. The prediction of on the tuning set\nsamples will be used to select the PLS latent dimension that\nwill subsequently be applied to the validation set.\nAside from the tuning set, we split the samples into two\nsets of triplets: the\ncalibration triplet \u2014derived\nfrom the 45% block of samples\u2014and the validation triplet\n\u2014derived from the 50% block\nof samples. Note that the SPBC partial\nscheme uses the validation triplet\n\nwhere is a proxy or prediction estimate for .\nTo assess the performance for the partition\nor data split, we use two metrics: MARD and the coefficient of\ndetermination (). MARD is an acronym for Mean Absolute\nRelative Difference, and is computed as the mean value of the\nabsolute relative difference (ARD) between prediction estimates\nand reference measurements. For example, MARD for the validation\nset would be computed as follows: the predictions and reference\nmeasurements for the partition are defined as\n\nand\n,\nrespectively, and\nTo compute MARD for the tuning set, one would instead replace\n and with\n\nand\n,\nrespectively. MARD basically functions as an aggregate percent\nrelative error measure across a set of samples. The coefficient\nof determination metric derives from the line-of-best-fit in the\nscatter diagram associated with the coordinates\nbetween the reference measurements and prediction estimates.\nThe coefficient of determination for the \npartition will be denoted as . We then create\nboxplots from the collection of and\n measures across the partitions\n. Instead of the traditional boxplots\nwhere the inter-quartile range is the middle 50% of the data,\nwe modify our boxplots to show the middle 80% where the edges\nof the \u201cbox\u201d correspond to the 10% and 90% percentiles.\nMoreover, no outliers are displayed; instead the whiskers extend\nto the min and max of the data."
112
+ },
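The MARD metric described above reduces to a one-liner; a sketch, with R2 read as the squared correlation of the reference-versus-prediction scatter (one common interpretation of the line-of-best-fit description):

```python
import numpy as np

def mard(y_ref, y_pred):
    """Mean Absolute Relative Difference: aggregate percent relative error."""
    y_ref = np.asarray(y_ref, dtype=float)
    return float(np.mean(np.abs(np.asarray(y_pred) - y_ref) / np.abs(y_ref)))

def r2_scatter(y_ref, y_pred):
    """Squared Pearson correlation of the reference-vs-prediction scatter."""
    return float(np.corrcoef(y_ref, y_pred)[0, 1] ** 2)
```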
113
+ {
114
+ "section_id": "4.5",
115
+ "parent_section_id": "4",
116
+ "section_name": "Selection of values",
117
+ "text": "In the penalized methods associated with Eilers, the PBC\nvariants such as AIRPLS, and the SPBC approaches, the\nvalue of is the tuning parameter of interest. The\nsimplicity of the Eilers approach, i.e.,\n,\nyields insight on what a\nreasonable choice should be.\nWhen is small (),\nthen and the\nbaseline corrected spectra will essentially\nbe small-amplitude noise around the zero matrix. Hence, small\nvalues of are not warranted.\nThe solution of\n\nis equivalent to a sum involving the loading\nvectors of the derivative operator \u2014see\nEq.(12 ###reference_2###) in the Supplement.\nThe filter factors \nin Eq.(12 ###reference_2###) can only\ndamp or filter the corresponding loading vector\n when is sufficiently large, i.e.\n(). As result, we will assess performance across\nfour penalty values: ."
118
+ },
119
+ {
120
+ "section_id": "4.6",
121
+ "parent_section_id": "4",
122
+ "section_name": "Selection of the latent dimension",
123
+ "text": "As mentioned in Section 4.2 ###reference_###, the calibration model\nrequired for predicting in Step 2 in the full and\npartial schemes in Section 4.2 ###reference_### will be done using\nPartial Least Squares (PLS). To select the PLS latent dimension,\nwe use an approach based on metric ranking.\nBased upon the predictions on the tuning set, let\u2019s consider the\nMARD and R2 values across PLS latent dimensions .\nThe latent dimension with the lowest MARD value gets a rank of 1;\nthe latent dimension with the second lowest MARD value gets a rank\nof 2; and so on. Similarly, the latent dimension with the highest\nR2 value gets a rank of 1; the latent dimension with the second\nhighest R2 value gets a rank of 2; and so on. Let\n\nand\n\ncorrespond to the integer-based rankings associated with MARD and\nR2, respectively, across PLS latent dimensions .\nHence, each latent dimension is associated with a pair of\nranks , and we can treat this pair as\n- and -coordinates. The PLS latent dimension whose\ncoordinates is closest to the origin\n\u2014using the Euclidean distance\n\u2014is deemed the optimal\nPLS latent dimension."
124
+ },
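A sketch of the rank-based selection rule just described: rank candidate latent dimensions by MARD (ascending) and by R2 (descending), then pick the dimension whose rank pair lies closest to the origin in Euclidean distance; the function name is illustrative:

```python
import numpy as np

def pick_latent_dimension(mard_vals, r2_vals):
    """Return the 1-indexed PLS dimension minimizing the rank-pair distance."""
    rank_mard = np.argsort(np.argsort(np.asarray(mard_vals))) + 1  # 1 = lowest MARD
    rank_r2 = np.argsort(np.argsort(-np.asarray(r2_vals))) + 1     # 1 = highest R2
    return int(np.argmin(np.hypot(rank_mard, rank_r2))) + 1
```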
125
+ {
126
+ "section_id": "5",
127
+ "parent_section_id": null,
128
+ "section_name": "Performance",
129
+ "text": "In this section, we examine performance for both the Milk and\nCookie data sets. A collection of MARD and R2 values across\n200 data partitions will be used to assess performance."
130
+ },
131
+ {
132
+ "section_id": "5.1",
133
+ "parent_section_id": "5",
134
+ "section_name": "Milk data set performance",
135
+ "text": "For the Milk data set, fat will be the analyte used\n(in tandem with the spectra ) to construct the baseline\nspectra . Prediction will first be assessed on urea.\nPerformance will then be examined for all of the\nother analytes in order of their correlation strength with fat."
136
+ },
137
+ {
138
+ "section_id": "5.1.1",
139
+ "parent_section_id": "5.1",
140
+ "section_name": "5.1.1 Fat () and Urea ()",
141
+ "text": "###figure_4### ###figure_5### We first examine performance where and correspond\nto fat and urea, respectively. Figure (4 ###reference_###)\ndisplays the summary MARD and R2 boxplot performance across all six\nbaseline correction methods in addition to NONE. The first and\nsecond columns correspond to the first and second derivative\nmatrices, while the first and second rows are associated with MARD\nand R2, respectively. Aside from NONE, each method has four\nboxplots associated with it (all with the same color), and from\nleft-to-right, these intra-method boxplots correspond to\n. We want to note several archetypal\npatterns of behavior:\nThe partial SPBC schemes exhibit poor performance across\nall values, and are always non-superior to NONE.\nWith respect to intra-method performance, the performance\nassociated with , on average, is always non-superior\nto the boxplots associated with .\nThis is especially the case with MARD but less so with R2.\nThe above performance trends hold not just for urea, but also\ngeneralize across different analytes and data sets examined in\nthis paper. As a result, and for ease of visualization, we will\nheretofore focus on as well exclude\nthe partial SPBC schemes from subsequent consideration. Figure\n5 ###reference_### displays the resulting reduced\nset of boxplots, and it is clear that only SPBCI:F\u2009 and SPBCN:F\u2009\nare superior to the other methods. Compared to NONE, the PBC\napproaches of EILERS\u2009 and AIRPLS\u2009 exhibit non-inferior\nperformance with respect to MARD, but marginally superior R2\nperformance."
142
+ },
143
+ {
144
+ "section_id": "5.1.2",
145
+ "parent_section_id": "5.1",
146
+ "section_name": "5.1.2 Impact of correlation between and",
147
+ "text": "In this section, we now compare fat () with all the other\npossible analytes (that ones that we want to predict) in\norder of correlation coefficient magnitude\u2014see Table\n1 ###reference_###.\nFigures 6 ###reference_### and 7 ###reference_###\ndisplay MARD and R2 performance across all of these analyte pairs\nfor instruments NIR-TM3 and NIR-TM2, respectively. For the SPBC\napproaches, we observe that MARD and R2 performance improves as the\ncorrelation coefficient magnitude increases. The improved\nperformance with increasing can be explained by examining\nSteps 3 and 4 in Algorithms 1 ###reference_### and 2 ###reference_###\n(for simplicity of notation, we will drop the subscript \nand denote as ):\nBy its very construction, the baseline spectra is correlated\nwith , and the baseline-corrected spectra \nwill likewise be correlated with . If has a strong\ncorrelation with , then the calibration model built from\n should yield an improved\nprediction for . This also explains why the partial\nschemes performed poorly compared to the full scheme. In the partial\nschemes, we obtain estimates for by building a calibration\nmodel from and subsequently predicting\n from . We hope that the prediction will\nbe accurate and precise but there is no expectation that the\nprediction estimates will also preserve correlation.\nIn effect, the correlation between and \nhas been degraded in the partial schemes.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15###"
148
+ },
149
+ {
150
+ "section_id": "5.2",
151
+ "parent_section_id": "5",
152
+ "section_name": "Cookie Performance",
153
+ "text": "The cookie data set allows us to explore the construction of\nbaselines using various analytes as the correlation coefficient\nmagnitude between and increases. Figure\n8 ###reference_### displays performance for three\npairs of analytes involving sucrose with an increasing degree\nof correlation coefficient magnitude.\n\nfat\nwater\nflour\n\n-0.1581\n-0.6860\n-0.9424\nSince the response variable sucrose () is fixed, the\nperformance for NONE\u2009 and the PBC methods of EILERS\u2009 and\nAIRPLS\u2009 do not change since the construction of the baselines\nare purely unsupervised\u2014they do not take the analyte \ninto account. As expected, the SPBC performance does change\n(as was the case with the Milk data sets) and this performance\nimproves as the correlation coefficient magnitude increases.\nFor sucrose () and fat (), the analyte pair with\nthe lowest correlation coefficient magnitude, none of the baseline\ncorrection methods outperform NONE. With respect to sucrose\n() and water (), the performance is similar to\nwhat we observed with the milk data set, i.e., SPBCI:F\u2009 and\nSPBCN:F\u2009 exhibit superior performance compared to NONE,\u2009\nEILERS\u2009 and AIRPLS. As with the milk data sets, the analytes\nwith the strongest correlation between and yield\nthe best performance, particularly with respect to R2.\n###figure_16### ###figure_17### ###figure_18###"
154
+ },
155
+ {
156
+ "section_id": "6",
157
+ "parent_section_id": null,
158
+ "section_name": "Conclusion and Future Work",
159
+ "text": "The SPBC approaches provide a simple extension for estimating\nbaselines that incorporate a priori analyte information. There\nare two metaparameters ( and latent dimension) that are\nrelatively easy to tune, e.g., MARD and R2 performance were\nobserved to be qualitatively invariant across meaningful\nvalues of (). SPBC via the full scheme\nprovides useful baseline-corrected signals that outperform\ntraditional state-of-the-art penalized baseline algorithms\nsuch as AIRPLS.\nWith respect to the Eilers approach in the case of\n, we have developed even faster\nimplementations (see Supplement) than Cholesky factorizations.\nIn particular, the computation of the singular values and\nloading vectors of using closed-form analytical\nformulas are novel in chemometrics. These fast implementations\nhave been socketed into the alternating least squares framework\nof SPBC. Moreover, the filter factor representations discussed\nin the Supplement allow one to apply SPBC across multiple values\nof simultaneously.\nIn this paper, SPBC has only been applied to NIR data sets.\nWe would like to see if this approach can be applied to other\nspectroscopic modalities such as Raman spectra, fluorescence\nspectra, NMR signals, etc. The SPBC methods only had superior\nperformance for the full scheme, and not for the partial scheme.\nWe seek to develop alternative partial schemes where better\nestimates for can be obtained. Alternative schemes\ncould include semi-supervised learning where the training data\n and are used to compute\n (as opposed to just using the ).\nImprovements in partial scheme development will allow for more\nmeaningful use-case scenarios and will lead to more widespread\nadoption. We have applied SPBC using only one analyte for\n. However, multiple analytes can be accommodated\ninto a matrix such that\nStep 3 in Algorithms 1 ###reference_### and 2 ###reference_### can\nbe rewritten as and\n, respectively. Moreover, one is\nnot necessarily restricted to (or ) being\ncontinuously valued reference measurements. These reference\nmeasurements could be categorical, and the regression framework\nemployed here in this paper could be extended to classification\nalgorithms."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix x1",
+ "parent_section_id": null,
+ "section_name": "Supplement: Numerical Considerations",
+ "text": "There are many instances when the most straightforward solution\nof a linear system may not be the most efficient. For example,\nthe numerical solution to the linear system\n\nsuggested by [11 ###reference_bx11###] uses Cholesky factorization on the\ncoefficient matrix via sparse matrix\nlibraries since is a tridiagonal or\npentadiagonal matrix if or ,\nrespectively. While computationally sound, there are other\nimplementations that are more efficient. Given that these\nsystem of linear equations are embedded in an alternative least\nsquares loop in Algorithms 1 ###reference_### and 2 ###reference_###,\ndetails involving computational speedup are warranted.\nSuppose the reduced Singular Value Decomposition (SVD)\nof yields\n where and \nare orthonormal and\n, .\nThe full SVD of similarly yields\nwhere and are the orthonormal nullspace\nvectors of and , respectively.\nWe will only be interested in since\n in Eq.(3 ###reference_###).\nFortunately, the nullspace \nof is well characterized [15 ###reference_bx15###]:\n\nand\n\nwhere\n.\nUsing classical Gram-Schmidt orthogonalization, we obtain\nthe orthonormal columns of :\nsuch that and for\n and , respectively.\nAs a result, we can express the Eliers solution in\nEq.(3 ###reference_###) as\nWhen , then\n\nwhere is the average\nvalue across the entries of . As a result, the\nbaseline spectrum can be expressed as a\nlinear combination of the loading vectors of :\nThe second term in\nEq.(12 ###reference_2###) is the fixed or unregularized\ncomponent of the solution since the component does\nnot depend on . The diagonal matrix is\nanalogous to the filter factor matrix associated with\nstandard Tikhonov regularization or ridge regression\n[15 ###reference_bx15###]. The contribution of\nthe singular vector is damped or \u201cfiltered\u201d\nby its corresponding filter factor .\nAs , ,\nand the solution approaches .\nAt the other extreme, as , the\nfirst term in Eq.(12 ###reference_2###)\nshrinks toward zero and approaches the\nunregularized component .\nThe SVD-based solution in Eq.(12 ###reference_2###) also has\nthe appealing aspect in that the solution can be vectorized\nacross multiple values of .\nNext we will discuss how the loading vectors and\nsingular values of can be computed without the\nneed of the SVD.\nOne can exploit the tridiagonal structure of\n\nto compute the singular values and loading vectors without\nthe need of the SVD. We first note that matrix is of\na tridiagonal form\nwhere , and . The near Toeplitz-like\nstructure (Topelitz matrices are banded matrices with\nconstant diagonal elements) of Eq.(14 ###reference_4###) allows the\nsingular values and loading vectors\n of \nto be analytically constructed using symbolic\ncalculus[19 ###reference_bx19###]:\nThis exploitation of the near-Toeplitz structure\nof is novel in baseline correction.\nTo illustrate the eigenstructure of the derivative operator, we\ncompute the analytical-based SVD of\n\nwhere such that\n.\nFigure 8(a) ###reference_.sf1### shows the loading vectors \nin \nwhile Figure 8(b) ###reference_.sf2### displays the square of the\nsingular values . The last\nloading vector actually corresponds to the\nnullspace vector .\nFilter 8(c) ###reference_.sf3### displays the value of the filter\nfactors for\n\u2014each\ncolored curve\ncorresponds to a different value of . The loading\nvector and singular value associated\nwith each index value of has its own color: as \nincreases in value, the colors vary from blue (high frequency)\nto red (low frequency). 
Compared to most matrices, the loading\nvectors of (and as well) are unusual in\nthat the number of sign changes (the number of times \ncrosses the -axis) decreases as increases.\nThe filter factor curves indicate that the terms in\nEq.(13 ###reference_3###) associated with high frequency\nloading vectors (the blues and greens) are easily\ndamped by moderately large values of , whereas\nthe low frequency loading vectors are preserved except for\nthe largest values of .\n###figure_19### ###figure_20### ###figure_21### For SPBCN, Step 4 of Algorithm 1 ###reference_###, i.e.,\n\n(where )\nis the computational bottleneck of the alternating least squares\nprocedure. Its solution is the same as Eq.(12 ###reference_2###)\nexcept that the matrix is replaced with :\nFor SPBCI,\nlet be the reduced Singular\nValue Decomposition (SVD) of where and \nare orthonormal and\n\nwhere is the rank . Similarly, let\nbe the full SVD where \nis the nullspace of . In Step 2 of Algorithm\n2 ###reference_###, the linear system\n\nwhere \ncan then be rewritten as\nIn this case, the coefficient matrix \non the left-hand-size is constant, and as a result, the\nsolution can be expressed using the basis vectors in\n and . Due to the high correlation\nof spectra in , instead of solving\n\nin Step 2 of Algorithm 2 ###reference_###, we will instead solve\n\nvia ridge regression.222The ridge parameter will be\nintentionally chosen to be small to ensure numerical stability.\nWe will not try to optimize as a tuning parameter.\nAs result, the solution in Step 2 can be written as\nIf has full column rank, i.e., and ,\nthen will empty and the solution can be written as\n.\nSince the ridge regression occurs within an alternative least\nsquares loop, it is prudent to\ncompute the SVD of once at the very beginning of the loop, and\nthen re-use the pre-computed SVD components (the singular\nvalues in , and the loading vectors in \nand the nullspace vectors in ) over-and-over again."
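A minimal NumPy sketch of the filter-factor form of Eq.(12), vectorized across several values of lambda as described above (illustrative; the function and variable names are assumptions, not the authors' implementation):

import numpy as np

def eilers_filter_factors(x, lams, d=1):
    """Whittaker/Eilers solution (I + lam^2 D'D)^{-1} x via filter factors.

    x    : (m,) signal/spectrum
    lams : iterable of penalty values lambda
    d    : derivative order (1 -> tridiagonal D'D, 2 -> pentadiagonal)
    Returns an (m, len(lams)) array, one smoothed/baseline column per lambda.
    """
    m = len(x)
    D = np.diff(np.eye(m), n=d, axis=0)            # (m-d) x m difference operator
    _, s, Vt = np.linalg.svd(D, full_matrices=True)
    s2 = np.concatenate([s**2, np.zeros(d)])       # pad the d nullspace directions
    c = Vt @ x                                     # coordinates in the loading basis
    lams = np.atleast_1d(np.asarray(lams, dtype=float))
    F = 1.0 / (1.0 + np.outer(lams**2, s2))        # filter factors f_j = 1/(1+lam^2 s_j^2)
    return Vt.T @ (F * c).T                        # nullspace components pass unfiltered

# For d=1 the numerical singular values agree with the closed form 2*sin(j*pi/(2m)):
# np.allclose(sorted(np.linalg.svd(np.diff(np.eye(40), axis=0))[1]),
#             2*np.sin(np.arange(1, 40)*np.pi/(2*40)))

Because F is built for all penalties at once, a single pass evaluates the baseline for, say, lambda = {1, 10, 100, 1000}, which is the vectorization across multiple lambda values that the Supplement refers to.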
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.4\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"5\" id=\"S5.T1.3.3.3\">\n<span class=\"ltx_text\" id=\"S5.T1.3.3.3.1\" style=\"font-size:90%;\">Correlation Coefficient (</span><span class=\"ltx_text\" id=\"S5.T1.3.3.3.2\" style=\"font-size:90%;\">) of\n</span><span class=\"ltx_text\" id=\"S5.T1.3.3.3.3\" style=\"font-size:90%;\"> with fat (</span><span class=\"ltx_text\" id=\"S5.T1.3.3.3.4\" style=\"font-size:90%;\">)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.2\"><span class=\"ltx_text\" id=\"S5.T1.4.4.2.1\" style=\"font-size:90%;\">lactose</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.3\"><span class=\"ltx_text\" id=\"S5.T1.4.4.3.1\" style=\"font-size:90%;\">protein</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.4\"><span class=\"ltx_text\" id=\"S5.T1.4.4.4.1\" style=\"font-size:90%;\">urea</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.5\"><span class=\"ltx_text\" id=\"S5.T1.4.4.5.1\" style=\"font-size:90%;\">solute</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.6\"><span class=\"ltx_text\" id=\"S5.T1.4.4.6.1\" style=\"font-size:90%;\">dry matter</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.2\"><span class=\"ltx_text\" id=\"S5.T1.5.5.2.1\" style=\"font-size:90%;\">0.1883</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.3\"><span class=\"ltx_text\" id=\"S5.T1.5.5.3.1\" style=\"font-size:90%;\">-0.4305</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.4\"><span class=\"ltx_text\" id=\"S5.T1.5.5.4.1\" style=\"font-size:90%;\">-0.5480</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.5\"><span class=\"ltx_text\" id=\"S5.T1.5.5.5.1\" style=\"font-size:90%;\">0.7771</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.6\"><span class=\"ltx_text\" id=\"S5.T1.5.5.6.1\" style=\"font-size:90%;\">0.9985</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Milk data set: The correlation coefficient\nwith fat and each of the other analytes.</figcaption>\n</figure>",
+ "capture": "Table 1: Milk data set: The correlation coefficient between fat and each of the other analytes."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.4\" style=\"width:17.1pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T2.3.3.3\">\n<span class=\"ltx_text\" id=\"S5.T2.3.3.3.1\" style=\"font-size:90%;\">Correlation Coefficient\n(</span><span class=\"ltx_text\" id=\"S5.T2.3.3.3.2\" style=\"font-size:90%;\">) of </span><span class=\"ltx_text\" id=\"S5.T2.3.3.3.3\" style=\"font-size:90%;\"> with sucrose </span><span class=\"ltx_text\" id=\"S5.T2.3.3.3.4\" style=\"font-size:90%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.1\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.4.4.1.1.1\"></p>\n</th>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.2\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.4.4.2.1\"><span class=\"ltx_text\" id=\"S5.T2.4.4.2.1.1\" style=\"font-size:90%;\">fat</span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.3\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.4.4.3.1\"><span class=\"ltx_text\" id=\"S5.T2.4.4.3.1.1\" style=\"font-size:90%;\">water</span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.4\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.4.4.4.1\"><span class=\"ltx_text\" id=\"S5.T2.4.4.4.1.1\" style=\"font-size:90%;\">flour</span></p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.5.5.1\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.5.5.1.1.1\"></p>\n</th>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.5.5.2\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.5.5.2.1\"><span class=\"ltx_text\" id=\"S5.T2.5.5.2.1.1\" style=\"font-size:90%;\">-0.1581</span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.5.5.3\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.5.5.3.1\"><span class=\"ltx_text\" id=\"S5.T2.5.5.3.1.1\" style=\"font-size:90%;\">-0.6860</span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.5.5.4\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T2.5.5.4.1\"><span class=\"ltx_text\" id=\"S5.T2.5.5.4.1.1\" style=\"font-size:90%;\">-0.9424</span></p>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Cookie data set: The correlation\ncoefficient between sucrose and each of the other\nanalytes.</figcaption>\n</figure>",
+ "capture": "Table 2: Cookie data set: The correlation coefficient between sucrose and each of the other analytes."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2310.18306v3_figure_1.png",
183
+ "caption": "Figure 1: For the cookie data set, we\ndisplay the spectra (left subplot),\nAIRPLS\u2009 baselines with \u03bb=100\ud835\udf06100\\lambda=100italic_\u03bb = 100 via \ud835\udc031subscript\ud835\udc031\\mathbf{D}_{1}bold_D start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT\nand \ud835\udc032subscript\ud835\udc032\\mathbf{D}_{2}bold_D start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT (middle subplots), and the corresponding\nbaseline-corrected spectra (right subplots).",
184
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Cookie-AIRPLS-Baselines.jpg"
185
+ },
186
+ "2": {
187
+ "figure_path": "2310.18306v3_figure_2.png",
188
+ "caption": "Figure 2: Spectra, baseline spectra and baseline-corrected spectra for\nthe first and second derivative operators\n(\ud835\udc031subscript\ud835\udc031\\mathbf{D}_{1}bold_D start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \ud835\udc032subscript\ud835\udc032\\mathbf{D}_{2}bold_D start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT) when using SPBCN.",
189
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Cookie-SPBC-Baselines.jpg"
190
+ },
191
+ "3": {
192
+ "figure_path": "2310.18306v3_figure_3.png",
193
+ "caption": "Figure 3: Spectra for the milk data set\n(instruments NIR-TM2 and NIR-TM3 in transmission mode) and\nthe cookie data set (in absorbance mode) on the far right.",
194
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/spectra_datasets.png"
195
+ },
+ "4": {
+ "figure_path": "2310.18306v3_figure_4.png",
+ "caption": "Figure 4: Urea ($\\mathbf{y}$) and Fat ($\\mathbf{a}$). Performance across baseline correction methods, and across 200 data splits. The first and second columns correspond to the first and second derivative operators, respectively, while the first and second rows correspond to MARD and R2, respectively. Aside from NONE, each of the four boxplots associated with the same color corresponds (from left-to-right) to $\\lambda=\\{1,10,100,1000\\}$.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Urea.jpg"
+ },
+ "5": {
+ "figure_path": "2310.18306v3_figure_5.png",
+ "caption": "Figure 5: Fat ($\\mathbf{a}$) and Urea ($\\mathbf{y}$). Condensed performance display across NONE, EILERS, AIRPLS, SPBCI:F and SPBCN:F for $\\lambda=\\{10,100,1000\\}$ and across 200 data splits. The first and second subplots on the left correspond to MARD while the third and fourth subplots correspond to R2. The first and third columns correspond to the first derivative operator, while the second and fourth columns correspond to the second derivative operator.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Urea-Condensed.jpg"
+ },
+ "6(a)": {
+ "figure_path": "2310.18306v3_figure_6(a).png",
+ "caption": "(a) Performance for fat ($\\mathbf{a}$) and lactose ($\\mathbf{y}$).\nFigure 6: MARD and R2 performance for the Milk data set using instrument NIR-TM3. Description-wise, this figure has the same format as Figure 5. The correlations between fat and the other analytes are shown in Table 1.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Lactose-Condensed.jpg"
+ },
+ "6(b)": {
+ "figure_path": "2310.18306v3_figure_6(b).png",
+ "caption": "(b) Performance for fat ($\\mathbf{a}$) and protein ($\\mathbf{y}$).\nFigure 6: MARD and R2 performance for the Milk data set using instrument NIR-TM3. Description-wise, this figure has the same format as Figure 5. The correlations between fat and the other analytes are shown in Table 1.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Protein-Condensed.jpg"
+ },
+ "6(c)": {
+ "figure_path": "2310.18306v3_figure_6(c).png",
+ "caption": "(c) Performance for fat ($\\mathbf{a}$) and urea ($\\mathbf{y}$).\nFigure 6: MARD and R2 performance for the Milk data set using instrument NIR-TM3. Description-wise, this figure has the same format as Figure 5. The correlations between fat and the other analytes are shown in Table 1.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Urea-Condensed.jpg"
+ },
+ "6(d)": {
+ "figure_path": "2310.18306v3_figure_6(d).png",
+ "caption": "(d) Performance for fat ($\\mathbf{a}$) and solute ($\\mathbf{y}$).\nFigure 6: MARD and R2 performance for the Milk data set using instrument NIR-TM3. Description-wise, this figure has the same format as Figure 5. The correlations between fat and the other analytes are shown in Table 1.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Solute-Condensed.jpg"
+ },
+ "6(e)": {
+ "figure_path": "2310.18306v3_figure_6(e).png",
+ "caption": "(e) Performance for fat ($\\mathbf{a}$) and dry matter ($\\mathbf{y}$).\nFigure 6: MARD and R2 performance for the Milk data set using instrument NIR-TM3. Description-wise, this figure has the same format as Figure 5. The correlations between fat and the other analytes are shown in Table 1.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk-Fat-Drymatter-Condensed.jpg"
+ },
+ "7(a)": {
+ "figure_path": "2310.18306v3_figure_7(a).png",
+ "caption": "(a) Performance for fat ($\\mathbf{a}$) and lactose ($\\mathbf{y}$).\nFigure 7: The display is the same as Figure 6 except that the performance corresponds to instrument NIR-TM2.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk2-Fat-Lactose-Condensed.jpg"
+ },
+ "7(b)": {
+ "figure_path": "2310.18306v3_figure_7(b).png",
+ "caption": "(b) Performance for fat ($\\mathbf{a}$) and protein ($\\mathbf{y}$).\nFigure 7: The display is the same as Figure 6 except that the performance corresponds to instrument NIR-TM2.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk2-Fat-Protein-Condensed.jpg"
+ },
+ "7(c)": {
+ "figure_path": "2310.18306v3_figure_7(c).png",
+ "caption": "(c) Performance for fat ($\\mathbf{a}$) and urea ($\\mathbf{y}$).\nFigure 7: The display is the same as Figure 6 except that the performance corresponds to instrument NIR-TM2.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk2-Fat-Urea-Condensed.jpg"
+ },
+ "7(d)": {
+ "figure_path": "2310.18306v3_figure_7(d).png",
+ "caption": "(d) Performance for fat ($\\mathbf{a}$) and solute ($\\mathbf{y}$).\nFigure 7: The display is the same as Figure 6 except that the performance corresponds to instrument NIR-TM2.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk2-Fat-Solute-Condensed.jpg"
+ },
+ "7(e)": {
+ "figure_path": "2310.18306v3_figure_7(e).png",
+ "caption": "(e) Performance for fat ($\\mathbf{a}$) and dry matter ($\\mathbf{y}$).\nFigure 7: The display is the same as Figure 6 except that the performance corresponds to instrument NIR-TM2.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Milk2-Fat-Drymatter-Condensed.jpg"
+ },
+ "8(a)": {
+ "figure_path": "2310.18306v3_figure_8(a).png",
+ "caption": "(a) Performance for sucrose ($\\mathbf{y}$) and fat ($\\mathbf{a}$).\nFigure 8: The display is the same as Figure 6 except that the performance corresponds to the Cookie data set and its associated analytes.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Cookie-Fat-Sucrose-Condensed.jpg"
+ },
+ "8(b)": {
+ "figure_path": "2310.18306v3_figure_8(b).png",
+ "caption": "(b) Performance for sucrose ($\\mathbf{y}$) and water ($\\mathbf{a}$).\nFigure 8: The display is the same as Figure 6 except that the performance corresponds to the Cookie data set and its associated analytes.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Cookie-Water-Sucrose-Condensed.jpg"
+ },
+ "8(c)": {
+ "figure_path": "2310.18306v3_figure_8(c).png",
+ "caption": "(c) Performance for sucrose ($\\mathbf{y}$) and flour ($\\mathbf{a}$).\nFigure 8: The display is the same as Figure 6 except that the performance corresponds to the Cookie data set and its associated analytes.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/Cookie-Flour-Sucrose-Condensed.jpg"
+ },
+ "9(a)": {
+ "figure_path": "2310.18306v3_figure_9(a).png",
+ "caption": "(a) The loading vectors $\\mathbf{v}_{:j}=[v_{1j},v_{2j},\\ldots,v_{40j}]^{T}$ associated with $\\mathbf{D}_{1}\\in\\mathbb{R}^{39\\times 40}$ are displayed. For each subplot, the $y$-axis corresponds to the value of $v_{ij}$ while the $x$-axis corresponds to $i=\\{1,2,\\ldots,40\\}$.\nFigure 9: The loading vectors, singular values and filter factors are displayed for a first derivative operator matrix $\\mathbf{D}_{1}\\in\\mathbb{R}^{39\\times 40}$.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/EigenD1.jpg"
+ },
+ "9(b)": {
+ "figure_path": "2310.18306v3_figure_9(b).png",
+ "caption": "(b) The squared singular values $s_{j}^{2}$ plotted as a function of index $j$.\nFigure 9: The loading vectors, singular values and filter factors are displayed for a first derivative operator matrix $\\mathbf{D}_{1}\\in\\mathbb{R}^{39\\times 40}$.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/SingularD1.jpg"
+ },
+ "9(c)": {
+ "figure_path": "2310.18306v3_figure_9(c).png",
+ "caption": "(c) The filter factors $f_{j}=1/(1+\\lambda^{2}s_{j}^{2})$ plotted as a function of $s_{j}^{2}$.\nFigure 9: The loading vectors, singular values and filter factors are displayed for a first derivative operator matrix $\\mathbf{D}_{1}\\in\\mathbb{R}^{39\\times 40}$.",
+ "url": "http://arxiv.org/html/2310.18306v3/extracted/5430765/FilterD1.jpg"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cMinimising contributions from scattering in infrared spectra\nby means of an integrating sphere\u201d",
+ "author": "A. Dazzi, A. Deniset-Besseau and P. Lasch",
+ "venue": "In Analyst 138, 2013, pp. 4191\u20134201",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cBaseline subtraction using robust local regression\nestimation\u201d",
+ "author": "A.F. Ruckstuhl, M.P. Jacobson, R.W. Field and J.A. Dodd",
+ "venue": "In Journal of Quantitative Spectroscopy & Radiative\nTransfer 68, 2001, pp. 179\u2013193",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cCorrection for Nonlinear Fluctuating Background in\nMonovariable Analytical Systems\u201d",
+ "author": "I. Schecter",
+ "venue": "In Analytical Chemistry 67, 1995, pp. 2580\u20132585",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cBackground removal from spectra by designing and minimising a\nnon-quadratic cost function\u201d",
+ "author": "V. Mazet et al.",
+ "venue": "In Chemometrics and Intelligent Laboratory Systems 76, 2005, pp. 121\u2013133",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cBackground removal from spectra by designing and minimising a\nnon-quadratic cost function\u201d",
+ "author": "V. Mazet et al.",
+ "venue": "In Chemometrics and Intelligent Laboratory Systems 76, 2005, pp. 121\u2013133",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cLinearization and scatter-correction for near-infrared\nreflectance spectra of meat\u201d",
+ "author": "P. Geladi, D. MacDougall and H. Martens",
+ "venue": "In Applied Spectroscopy 39, 1985, pp. 491\u2013500",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "\u201cExtended multiplicative signal correction and spectral\ninterference subtraction: new preprocessing methods for near infrared\nspectroscopy\u201d",
+ "author": "H. Martens and E. Stark",
+ "venue": "In J Pharm Biomed Anal. 9, 1991, pp. 625\u2013635",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cStudy of the scattering effects on NIR data for the\nprediction of ash content using EMSC correction factors\u201d",
+ "author": "M. Mancini, G. Toscano and \u00c5. Rinnan",
+ "venue": "In J Pharm Biomed Anal. 9, 1991, pp. 625\u2013635",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cSmoothing and differentiation of data by simplified least\nsquares procedures\u201d",
+ "author": "A. Savitzky and M.J.E. Golay",
+ "venue": "In Analytical Chemistry 36, 1964, pp. 1627\u20131639",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cWhy and How Savitzky-Golay Filters Should Be Replaced\u201d",
+ "author": "M. Schmid, D. Rath and U. Diebold",
+ "venue": "In ACS Measurement Science 2, 2022, pp. 185\u2013196",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cA perfect smoother\u201d",
+ "author": "P.H.C. Eilers",
+ "venue": "In Anal Chem 75, 2003, pp. 3631\u20133636",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cBaseline Correction with Asymmetric Least Squares Smoothing\u201d",
+ "author": "P.H.C. Eilers and H.F.M. Boelens",
+ "venue": "In Report (Leiden University Medical Centre), 2005",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "\u201cBaseline correction using adaptive iteratively reweighted\npenalized least squares\u201d",
+ "author": "Z.M. Zhang, S. Chen and Y.Z. Liang",
+ "venue": "In Analyst 135, 2010, pp. 1138\u20131146",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cBaseline correction using asymmetrically reweighted penalized\nleast squares smoothing\u201d",
+ "author": "S.-J. Baek, A. Park, Y.-J. Ahn and J. Choo",
+ "venue": "In Analyst 140, 2015, pp. 250\u2013257",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cRank-Deficient and Discrete Ill-Posed Problems\u201d",
+ "author": "P.C. Hansen",
+ "venue": "Society for Industrial and Applied Mathematics, 1998",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cThe Matrix Cookbook\u201d, 2012",
+ "author": "K.B. Petersen and M.S. Pedersen",
+ "venue": "Technical University of Denmark",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cApplication of Near-Infrared Reflectance Spectroscopy to\nCompositional Analysis of Biscuits and Biscuit Dough\u201d",
+ "author": "B.G. Osborne, T. Fearn, A.R. Miller and S. Douglas",
+ "venue": "In Journal of the Science of Food and Agriculture 35, 1984, pp. 99\u2013105",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cEvaluation of MEMS NIR Spectrometers for On-Farm Analysis of\nRaw Milk Composition\u201d",
+ "author": "S. Uusitalo et al.",
+ "venue": "In Foods 10, 2021",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cEigenvalues of several tridiagonal matrices\u201d",
+ "author": "W.-C. Yueh",
+ "venue": "In Applied Mathematics E-Notes 5, 2005, pp. 66\u201374",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2310.18306v3"
+ }
20240225/2311.01270v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2311.05462v2.json ADDED
@@ -0,0 +1,88 @@
+ {
+ "title": "ChatGPT and Other Large Language Models for Cybersecurity of Smart Grid Applications A. Zaboli and J. Hong are with the Department of Electrical and Computer Engineering, University of Michigan \u2013 Dearborn, Dearborn, MI 48128, USA. S. L. Choi is with the Power Systems Engineering Center, National Renewable Energy Laboratory (NREL), Golden, CO 80401, USA. T.-J. Song is with the Department of Urban Engineering, Chungbuk National University, Cheongju 28644, South Korea.",
+ "abstract": "Cybersecurity breaches targeting electrical substations constitute a significant threat to the integrity of the power grid, necessitating comprehensive defense and mitigation strategies. Any anomaly in information and communication technology (ICT) should be detected for secure communications between devices in digital substations. This paper proposes large language models (LLMs), e.g., ChatGPT, for the cybersecurity of IEC 61850-based communications. Multi-cast messages such as generic object oriented system events (GOOSE) and sampled values (SV) are used for case studies. The proposed LLM-based cybersecurity framework includes, for the first time, data pre-processing of communication systems and human-in-the-loop (HITL) training (considering the cybersecurity guidelines recommended by humans). The results show a comparative analysis of detected anomaly data carried out based on the performance evaluation metrics for different LLMs. A hardware-in-the-loop (HIL) testbed is used to generate and extract dataset of IEC 61850 communications.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Digital substations serve as crucial elements within modern power systems, characterized by their escalating complexity and integration.\nGOOSE and SV are instrumental in facilitating rapid and dependable communication in the context of digital substations. Nevertheless, the open architecture intrinsic to these protocols makes them vulnerable to cyberattacks.\nThe focal point of scholarly endeavors is the refinement and implementation of complex algorithms tailored for the contemporaneous oversight and scrutiny of network traffic [1 ###reference_b1###].\nIntrusion detection system (IDS)-based machine learning (ML) methods have been the foundation for detecting and mitigating anomalies in GOOSE and SV messages. While these methods offer precision and are data-driven, they come with a significant challenge. Every time a new attack pattern emerges, the models need to be re-trained. This necessity for re-training consumes time and resources and leaves the system vulnerable during the interim periods when the new threats are not yet incorporated into the model\u2019s knowledge base [2 ###reference_b2###].\nOn the other hand, LLMs such as ChatGPT 4.0 offer a more dynamic and adaptable approach. Unlike ML models, LLMs are designed to understand context, allowing them to recognize and respond to novel threats even if they have not been explicitly trained in them. This contextual understanding minimizes the efforts required in the face of evolving cyber threats. Instead of frequent re-training sessions, LLMs can interpret and adapt to new information, providing a more resilient and efficient solution for anomaly detection in digital substations [3 ###reference_b3###, 4 ###reference_b4###].\nIn the area of cybersecurity for digital substations, LLMs can play a pivotal role in anomaly detection, enhancing the security layers. These models can analyze vast datasets, identify patterns, and detect anomalies indicative of potential cyberattacks. These models are designed to investigate through extensive data, including GOOSE and SV messages, to effectively distinguish regular patterns from irregularities [5 ###reference_b5###]. The incorporation of artificial intelligence (AI) aids into real-time monitoring is crucial to accelerate responses to security breaches.\nA unique deep learning-based system tailored for detecting cyberattacks on protective relays was developed based on extensive real-world datasets [6 ###reference_b6###]. GOOSE and SV messages are vulnerable to replay and message injection attacks, involving the re-transmission of unaltered messages or the transmission of fake, malicious ones. These attacks disrupt system operations either by replaying old messages or by injecting new, deceptive messages that mimic legitimate behavior [7 ###reference_b7###, 8 ###reference_b8###]. However, the diversity and complexity of cyberattacks necessitate advanced detection mechanisms. 
Also, balancing the model\u2019s sensitivity to detect minor anomalies while avoiding false positives (FPs) is crucial.\nIn [9 ###reference_b9###], a novel unsupervised learning approach for an IDS of GOOSE messages is suggested based on a combination of autoencoders and clustering techniques for efficient detection.\nAccording to literature surveys, challenges in the applicability of ML models in IDSs can include ensuring the reliability and robustness of the model in real-time power grids considering new cyberattacks, a trade-off between complexity and accuracy due to large datasets, and the adaptability of the ML model to evolving cyberattacks and changing the substation infrastructure. Furthermore, a re-training process is required for new cyberattacks; however, LLMs can handle these challenges effectively and reduce the processing time.\nThis paper proposes for the first time the employment of LLMs based on HITL interactions to detect anomalies in GOOSE and SV datasets for cybersecurity considerations in substations. Hence, this paper focuses on the cybersecurity of multicast messages, and we will focus on other protocols in substations in the future. This paper suggests human recommendations for data pre-processing for these communication protocols. This process minimizes efforts (unlike applying ML methods) when encountering new cyberattacks (or anomalies). It does not affect the model\u2019s complexity/precision and is faster to implement. Moreover, this paper makes a comparison between LLMs (i.e., ChatGPT 4.0 [10 ###reference_b10###], Anthropic\u2019s Claude 2 [11 ###reference_b11###], and Google Bard/PaLM 2 [12 ###reference_b12###]) to evaluate their performance. The actual datasets for GOOSE and SV packets are extracted from the HIL testbed. The main contributions of this paper can be summarized as follows:\nThis paper proposes the usage of different LLMs in the cybersecurity of digital substations in terms of performance evaluation metrics.\nLLM-based HITL is considered an IDS to detect abnormal data in IEC 61850 communication protocols.\nA conversion of the IDS algorithm to text is employed for training datasets to detect anomalies in LLMs.\nThe remainder of the paper is organized as follows: Section II states a representation of the cybersecurity of digital substations using LLMs. The proposed HITL technique, along with the feature extraction and analysis of datasets, are mentioned in Section III. Section IV presents the results and discussion of the evaluation metrics according to different levels of training. Finally, this paper is deduced in Section V."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Cybersecurity of Digital Substations Using Large Language Models",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Cybersecurity of Digital Substations",
+ "text": "A cyber-physical power system testbed serves as an instrumental platform for studying the causal relationships associated with cyber intrusions, the robustness of power systems, and the dependability of applications in a realistic environment. Within such a real-time HIL testbed, all constituent elements, encompassing hardware, software, communication mechanisms, and emulators, are coordinated in alignment with the global positioning system (GPS). The real-time dynamics pertinent to communication and information processing become imperative in the context of analyzing cyber intrusions, detection mechanisms, and mitigation strategies [13 ###reference_b13###]. As seen in Fig. 1 ###reference_###, the testbed consists of protective intelligent electronic devices (IEDs), software-defined networking (SDN) switches, a satellite-synchronized clock, a merging unit, a supervisory control and data acquisition (SCADA) system, a real-time digital simulator, and the amplifier.\n\n###figure_1### The distributed management system (DMS) SCADA system can get measurements and issue a control command via DNP3 communication.\nVarious IEDs are implemented, including the merging unit IED and protective IEDs. These IEDs possess the proficiency to transmit control commands (e.g., GOOSE messages) pertinent to a circuit breaker (CB). Conversely, a CB IED (modeled in a real-time simulator) is specifically configured to subscribe to GOOSE messages and publish the status (open or closed) to protective IEDs. The merging unit IED has the ability to forward digital current and voltage values (i.e., SV), taking into account the amplifier from the digital simulator, to the protective IED. Furthermore, the proposed HITL LLM-based IDS is engineered to identify anomalies and potential security threats within the substation automation framework and maintains a connection to SDN switches [13 ###reference_b13###].\nThe purpose of this paper is to demonstrate an IDS considering the LLM-based HITL process. Hence, the GOOSE and SV packets are extracted from the HIL testbed for further analysis in different LLMs considering the human recommendations that are described in the subsequent section."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Large Language Model-Based Human-in-the-loop Process",
+ "text": "Generative AI (GenAI) models, constructed through deep neural network methods, are designed to discern patterns and structures from extensive training datasets, subsequently producing similar content. The capabilities of GenAI encompass the creation of diverse content types, ranging from text and images to sounds and various data forms. The introduction of ChatGPT has markedly influenced the broader AI/ML domain, exemplifying GenAI\u2019s potential to resonate with the wider populace and altering prevailing conceptions of AI/ML. The technological sector is actively pursuing the refinement of advanced LLMs aimed at simulating authentic human interactions, as evidenced by innovations (e.g., Microsoft\u2019s GPT and Google Bard/PaLM 2). Over the past year, GenAI has strengthened its presence as a prevalent online tool [5 ###reference_b5###].\nLLMs and GenAI systems present considerable opportunities to augment productivity and operational efficiency. However, their application, especially in sectors characterized by high risk and stringent regulations, brings about notable challenges. A potential strategy to mitigate risks is adopting the HITL process, as illustrated in Fig. 1 ###reference_### by the HITL LLM box. Incorporating human interactions during training, validation, and testing stages can expedite the learning process and improve the confidence level of outputs. Initially, individuals can explain the execution of specific tasks and subsequently offer insights into the model\u2019s efficacy. This involvement can be manifested in modifying the model\u2019s results. Drawing insights from a fusion of human demonstrations and assessments has proven to surpass the efficiency and speed of ML methods. The HITL paradigm becomes indispensable when confronted with constraints (e.g., when data presents anomalies or lacks comprehensiveness), leading to uncertainties about the model\u2019s capability to address all scenarios. Moreover, consistent human oversight and verification are useful, especially when inaccuracies in model predictions could have severe consequences [14 ###reference_b14###]. In the proposed model, there are human recommendations to improve the model efficiency based on GOOSE and SV message features. Thus, this method is helpful in minimizing the trials by entering new data into the normal dataset and avoiding the re-training process. Also, the adaptability and robustness of models can be improved quickly.\nAllowing a language model unrestricted access to data pertaining to critical infrastructure necessitates meticulous study, given the significant security and privacy implications. Implementing accurate access controls and encryption and authentication protocols is imperative to mitigate unauthorized data access.\nIt is vital for human specialists to exercise continuous supervision and assess the outputs of the model, ensuring the validity and dependability of AI-facilitated cybersecurity methodologies.\nIn addition, there exists the potential for LLMs to unintentionally disclose confidential information during engagements, especially if they lack appropriate training or protection [5 ###reference_b5###].\nThe cybersecurity of LLMs is out of scope for this research, and the purpose is solely to employ LLMs as tools for detecting anomalies in communication messages."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III IEC 61850-based Communication Datasets and Human Recommendations Process",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A GOOSE and SV Datasets",
+ "text": "The GOOSE and SV packets are extracted from the HIL testbed. The \u201c.pcap\u201d means that these packets are captured using Wireshark (a network packet analyzer), as seen in Fig. 2 ###reference_###.\n\n###figure_2### As shown, there are 10 data points for GOOSE packets based on the extracted features. The SV datasets follow the same procedures, with the 7 most important features as dataset columns. \u201cTime\u201d shows the time at which the packet is actually sent, and the format of this feature is based on hour, minute, and second (including microsecond level). The features \u201cDM\u201d and \u201cSM\u201d refer to the destination and source media access control (MAC) addresses, respectively. This specific \u201cDM\u201d address (01 00 03) of GOOSE messages shows the target devices (sent to the device that subscribes to this MAC address). Also, the \u201cSM\u201d address of GOOSE messages is 27 34 31 which shows the sender\u2019s IED. The indicators for GOOSE and SV are shown as \u201ctype,\u201d which is 88 b8 and 88 ba, respectively. The \u201cAPPID\u201d values for GOOSE and SV communications are 3 and 40, respectively. Also, \u201cdatSet\u201d and \u201cgoID\u201d are assigned based on \u201cDM\u201d and indicate the dataset name and GOOSE identification, respectively. Based on Fig. 2 ###reference_###, there is a GOOSE block reference (\u201cgocbRef\u201d) that indicates the name of the GOOSE in the \u201cgoosePdu.\u201d \u201cstNum\u201d and \u201csqNum\u201d express the state and sequence numbers in GOOSE communications, respectively. Furthermore, two data types (\u201cdata1\u201d and \u201cdata2\u201d) are considered based on GOOSE packets. In the SV dataset, there are \u201csvID\u201d and \u201csmpCnt\u201d in the \u201csavPdu,\u201d which indicate SV identification and sample count number. A large number of datasets have been used to train the GOOSE and SV communications to check the performance evaluation of LLMs."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Human Recommendations for Intrusion Detection Systems",
+ "text": "According to the given datasets, a series of human recommendations based on the violations in the GOOSE and SV datasets can be described.\nSome different attacks and errors were considered, such as the data injection (DI) attack, the denial-of-service (DoS) attack, the system problem, and the replay (RE) attack for GOOSE and SV communications. These attacks can be described as follows. A failure to satisfy at least one recommendation leads to the relevant attack. Regarding the DoS attack for SV, the sample/cycle is and the frequency is Hz, so there are a total of samples per second. If is calculated, microseconds can be achieved. Hence, the normal time for a DoS attack should be around this time. The process can be done for GOOSE messages as well. The \u201cheartbeat\u201d of GOOSE packets refers to a regular, periodic message sent over the network to indicate the status of the system. The heartbeat (e.g., seconds) ensures continuous monitoring and quick detection of any changes or failures in the system. The frequency of these heartbeat messages can vary depending on the configuration and requirements of the specific substation system. Typically, GOOSE messages are sent at intervals ranging from a few milliseconds to several seconds. Regarding SV packets, a heartbeat indicates the operational status or health of a system. The exact frequency or interval of the heartbeat for SV packets can vary depending on the specific implementation and requirements of the digital substation system.\nAttacks/errors on GOOSE datasets\n- DI: If data has the same \u201cDM\u201d and \u201cSM,\u201d \u201csqNum\u201d should be increased every time.\n- DI: If there is any change in \u201cdata1\u201d or \u201cdata2,\u201d \u201cstNum\u201d should be increased by 1 and \u201csqNum\u201d should be reset to 0.\n- DI: If data has the same \u201cDM\u201d and \u201cSM,\u201d once \u201cstNum\u201d is increased, it cannot go back to smaller numbers.\n- DoS: There are up to 10 packets (rows) within 10 ms.\n- System Problem: There should be a packet (dataset) within 10 s.\n- RE: If there is any change in \u201cdata1\u201d or \u201cdata2,\u201d \u201cstNum\u201d should be increased 1 and \u201csqNum\u201d should be reset to 0.\nAttacks/errors on SV datasets\n- DI: The range of \u201csmpCnt\u201d is from 0 to 4799.\n- DI: Once the \u201csmpCnt\u201d is increased, it should be increased up to 4799 and then reset to 0.\n- DI: \u201csmpCnt\u201d cannot be decreased until it reaches 4799 and resets to 0.\n- DoS: A normal time interval should be around 208 ms.\n- DoS: There are up to 12 packets within 2.083 ms.\n- System Problem: \u201csmpCnt\u201d should be increased every time by 1.\nThe recommended considerations are applied to datasets to train LLMs. This helps to improve the accuracy of the pre-trained model based on the ML model, even though there is new data. The purpose is to show the performance evaluation based on datasets generated at three different levels, including a dataset without training (without human recommendations), with partial training (recommendations of DI and DoS attacks), and with full training. Then, the performance evaluation metrics of different LLMs are compared. This process assists in minimizing the trials for re-training ML models and the adaptability of the model in cases where there is new data. The next section presents the performance evaluation results, considering the HITL process in different LLMs."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Results and Discussion",
+ "text": "This section presents a comparison of results between LLMs at different levels. Precision is a preferable performance metric, denoting the rate at which an IDS correctly detects anomalies. However, relying solely on precision metric might be misleading when evaluating anomaly detection techniques, especially in scenarios characterized by significant differences between FPs and false negatives (FNs). Therefore, this section presents the performance analysis of different LLMs considering the HITL for the GOOSE and SV datasets. The fundamental performance metrics for anomaly detection analysis are described and discussed in this section.\nA description of evaluation metrics based on the detection of anomaly data in GOOSE and SV communication protocols, along with the results based on case studies, is shown in Table I ###reference_###. Due to the limitations of LLMs and computational speed, this paper focuses on online detection, not real-time intrusion detection.\n###table_1### IEC 61850-based Communication\nLLMs\nDescription\nA ratio of correct GOOSE anomalies that were\n\ncorrectly identified (also, named recall).\nA ratio of normal GOOSE data that were\n\nwrongly identified as anomalies.\nA ratio of correct GOOSE anomalies that\n\nthe system failed to detect.\nMeasures accuracy of detected\n\nGOOSE anomalies.\nProvides a trade-off between precision\n\nand recall.\nIEC 61850-based Communication\nLLMs\nDescription\nA ratio of correct SV anomalies that were\n\ncorrectly identified (also, named recall).\nA ratio of normal SV data that were\n\nwrongly identified as anomalies.\nA ratio of correct SV anomalies that\n\nthe system failed to detect.\nMeasures accuracy of detected\n\nSV anomalies.\nProvides a trade-off between precision\n\nand recall."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Case Studies: GOOSE and SV Anomaly Detection",
+ "text": "This section presents the results of the performance evaluation metrics for different LLMs based on the training levels. The formulations of the performance assessment are given in the previous part, including true positive rate (TPR), false positive rate (FPR), false negative rate (FNR), precision, and F1-score metrics. A comparison of anomaly detection results considering the different LLMs (i.e., ChatGPT 4.0, Anthropic\u2019s Claude 2, and Google Bard/PaLM 2) with the HITL process is presented in this table. The results show that ChatGPT 4.0 outperforms the two other LLMs in both case studies for anomaly detection as an IDS. A higher TPR indicates a better model, as it is able to identify more of the actual positives. It also shows the detection rate of anomalies, where ChatGPT 4.0 has values of 98.18% and 96.67% for detection of anomalies in GOOSE and SV messages, respectively. These percentages are the highest rates in comparison with other LLMs at full training levels. It happened for all other training levels as well. Lower FPR and FNR indicate a superior model, as it is less possible to misclassify positive and negative values, respectively. It occurs for ChatGPT 4.0 considering FPR and FNR in both communications. At full training levels, these values are less than which represents a good performance of this LLM. Also, Claude 2 shows great performance in the detection of normal SV data that was wrongly detected as anomalies. The precision metric represents the accuracy of anomalies detection in GOOSE and SV communications. The precision values for ChatGPT 4.0 and Claude 2 are in comparison with Google Bard () in the SV dataset. F1-score is a harmonic mean of precision and recall, which means that it gives equal importance to both the ability of the algorithm to identify true anomalies and its ability to avoid FPs. This metric shows the highest value based on ChatGPT 4.0. The impact of the HITL process can be observed at different training levels in Table I ###reference_###. A portion of the human recommendations are considered for the partial training. Therefore, better performance at different rates, precisions, and F1-scores can be perceived by applying the HITL process. All human recommendations based on the defined attacks/errors are considered at the full training level.\nTo recap, ChatGPT 4.0 served as the best LLM in comparison with Anthropic\u2019s Claude 2 and Google Bard/PaLM 2 in all rates and measurements. However, there are challenges in using LLMs based on the HITL process in cybersecurity studies on digital substations. Cybersecurity anomalies entail a level of complexity that may exceed AI\u2019s contextual discernment capabilities. The necessity for LLMs to process sensitive data introduces data privacy and security considerations. AI\u2019s enhancement in cybersecurity is hindered by the need for continuous data input, reflecting the dynamic nature of the field. The integrity of anomaly detection in AI is dependent on its training data, with potential inaccuracies manifesting as FPs or FNs. Hence, task-oriented dialogues (ToD) and fine-tuning are posited to enhance anomaly detection accuracy through the provision of structured interactive patterns that augment LLMs\u2019 effectiveness in cybersecurity-specific responses. They enable more targeted and context-aware queries, thereby refining the decision-making process. 
Additionally, they promote an improved feedback mechanism where human experts can iteratively refine AI performance on designated tasks, thereby optimizing its learning trajectory over time.\nRule-based detection systems, known for their effectiveness in identifying known threats, often demonstrate superior results in specific scenarios. However, LLMs bring a unique advantage to the field of anomaly detection. Unlike rule-based systems that rely on predefined criteria, LLMs possess the capability to identify unexpected or novel attacks, a critical feature in the constantly evolving landscape of cybersecurity threats. This ability to detect anomalies that deviate from known patterns or behaviors allows LLMs to address a broader range of potential attacks. Consequently, integrating LLMs into anomaly detection efforts can significantly reduce the manual labor and complexity involved in continuously updating and maintaining rule-based systems, especially in environments where new and unforeseen attack vectors are a constant challenge."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Conclusion",
63
+ "text": "This paper proposes the use of LLMs based on the HITL process for cybersecurity in substations, as evaluated by various performance metrics. LLMs are employed as IDSs to identify anomalies in communication protocols. An IDS algorithm is converted to text to train datasets for anomaly detection. In comparison, ChatGPT 4.0 outperformed the two other LLMs in all metrics. This LLM demonstrated better precision and performance at different levels of training. These models have privacy issues regarding confidential data. Thus, using the ToD and fine-tuning are necessary to enhance the accuracy of LLMs. In the future, it will be the intention to consider other LLMs with ToD and fine-tuning processes with more attacks and errors to improve the LLMs\u2019 efficiency, along with analyses on all multicast messages in digital substations."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>A comparison of detection results (without, partial and full terms show the levels of training process).</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.1.2.1.1\"></span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.1.2\"> <span class=\"ltx_text\" id=\"S4.T1.1.1.2.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.1.2.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.1.2.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.2.1.2.1.1.1.1\">IEC 61850-based Communication</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.1.2.1.2.2\"></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"9\" id=\"S4.T1.1.1.3\" style=\"background-color:#FFFFC7;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.3.1\" style=\"color:#000000;background-color:#FFFFC7;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.3.1.1\">GOOSE</span><span class=\"ltx_text\" id=\"S4.T1.1.1.3.1.2\" style=\"color:black;background-color:#FFFFC7;\"></span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.1\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.2.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.2.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.2.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.2.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.2.2.1.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.2.2.1.2.1.1.1.1\">LLMs</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.2.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.2.3\">ChatGPT 4.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.2.4\">Anthropic\u2019s Claude 2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.2.5\">Google Bard/PaLM 2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.1\">\n<span class=\"ltx_text\" id=\"S4.T1.1.3.1.1\"></span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.3.1.2\"> <span class=\"ltx_text\" id=\"S4.T1.1.3.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.3.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.3.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.1.2.1.1.1.1\">Metrics</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.3.1.2.2\"></span></span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.3.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.3.2.1.1\"></span><span 
class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.3.2.1.2\"> <span class=\"ltx_text\" id=\"S4.T1.1.3.2.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.3.2.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.3.2.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.1.2.1.1.1.1\">Description</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.3.2.1.2.2\"></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3\">without</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.4\">partial</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.5\">full</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.6\">without</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.7\">partial</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.8\">full</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.9\">without</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.10\">partial</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.11\">full</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.4.1.1\">TPR</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.4.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.4.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.4.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.4.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.4.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.1.2.1.1.1\">A ratio of correct GOOSE anomalies that were</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.4.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.1.2.1.2.1\">correctly identified (also, named recall).</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.4.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.3\">78.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4\">85.45%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.5\">98.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.6\">78.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.7\">83.64%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.8\">89.09%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.9\">74.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.10\">81.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.11\">89.1%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.5.1.1\">FPR</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S4.T1.1.5.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.5.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.5.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.5.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.5.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.2.1.2.1.1.1\">A ratio of normal GOOSE data that were</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.5.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.2.1.2.1.2.1\">wrongly identified as anomalies.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.5.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.3\">48%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.4\">32%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.5\">4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.6\">56%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.7\">44%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.8\">32%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.9\">56%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.10\">40%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.5.11\">20%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.6.1.1\">FNR</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.6.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.6.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.6.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.6.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.6.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.2.1.2.1.1.1\">A ratio of correct GOOSE anomalies that</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.6.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.2.1.2.1.2.1\">the system failed to detect.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.6.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.3\">21.82%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.4\">14.55%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.5\">1.82%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.6\">21.82%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.7\">16.36%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.8\">10.91%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.9\">25.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.10\">18.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.6.11\">10.9%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.7.1.1\">Precision</span></td>\n<td 
class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.7.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.7.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.7.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.7.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.7.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.2.1.2.1.1.1\">Measures accuracy of detected</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.7.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.7.2.1.2.1.2.1\">GOOSE anomalies.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.7.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.3\">78.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.4\">85.45%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.5\">98.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.6\">75.43%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.7\">80.7%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.8\">85.96%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.9\">74.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.10\">81.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.7.11\">90.7%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.8.1.1\">F1-Score</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.8.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.8.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.8.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.8.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.8.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.2.1.2.1.1.1\">Provides a trade-off between precision</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.8.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.8.2.1.2.1.2.1\">and recall.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.8.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.3\">78.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.4\">85.45%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.5\">98.18%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.6\">76.78%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.7\">82.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.8\">87.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.9\">74.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.10\">81.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.8.11\">90.7%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.9\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" 
id=\"S4.T1.1.9.1\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.9.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.9.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.9.2.1.1\"></span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.2.1.2\"> <span class=\"ltx_text\" id=\"S4.T1.1.9.2.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.9.2.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.9.2.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.9.2.1.2.1.1.1.1\">IEC 61850-based Communication</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.9.2.1.2.2\"></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"9\" id=\"S4.T1.1.9.3\" style=\"background-color:#FFFFC7;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.9.3.1\" style=\"background-color:#FFFFC7;\">SV<span class=\"ltx_text ltx_font_medium\" id=\"S4.T1.1.9.3.1.1\"></span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.10\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.10.1\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.10.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.10.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.10.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.10.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.10.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.10.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.10.2.1.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.10.2.1.2.1.1.1.1\">LLMs</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.10.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.10.3\">ChatGPT 4.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.10.4\">Anthropic\u2019s Claude 2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.10.5\">Google Bard/PaLM 2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.1\">\n<span class=\"ltx_text\" id=\"S4.T1.1.11.1.1\"></span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.11.1.2\"> <span class=\"ltx_text\" id=\"S4.T1.1.11.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.11.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.11.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.11.1.2.1.1.1.1\">Metrics</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.11.1.2.2\"></span></span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.11.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.11.2.1.1\"></span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.11.2.1.2\"> <span class=\"ltx_text\" id=\"S4.T1.1.11.2.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.11.2.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.11.2.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.11.2.1.2.1.1.1.1\">Description</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.11.2.1.2.2\"></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.3\">without</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.4\">partial</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.5\">full</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.6\">without</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.7\">partial</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.8\">full</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.9\">without</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.10\">partial</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.11.11\">full</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.12.1.1\">TPR</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.12.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.12.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.12.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.12.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.12.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.12.2.1.2.1.1.1\">A ratio of correct SV anomalies that were</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.12.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.12.2.1.2.1.2.1\">correctly identified (also, named recall).</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.12.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.3\">70%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.4\">95%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.5\">96.67%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.6\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.7\">70%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.8\">88.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.9\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.10\">63.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.12.11\">81.6%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.13.1.1\">FPR</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.13.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.13.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.13.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.13.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.13.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.13.2.1.2.1.1.1\">A ratio of normal SV data that were</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.13.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.13.2.1.2.1.2.1\">wrongly identified as 
anomalies.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.13.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.3\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.4\">15%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.5\">0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.6\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.7\">20%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.8\">0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.9\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.10\">40%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.13.11\">25%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.14.1.1\">FNR</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.14.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.14.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.14.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.14.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.14.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.14.2.1.2.1.1.1\">A ratio of correct SV anomalies that</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.14.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.14.2.1.2.1.2.1\">the system failed to detect.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.14.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.3\">30%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.4\">5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.5\">3.33%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.6\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.7\">30%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.8\">11.67%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.9\">50%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.10\">36.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.14.11\">18.34%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.15.1.1\">Precision</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.15.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.15.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.15.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.15.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.15.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.15.2.1.2.1.1.1\">Measures accuracy of 
detected</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.15.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.15.2.1.2.1.2.1\">SV anomalies.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.15.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.3\">80.77%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.4\">95%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.5\">100%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.6\">75%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.7\">91.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.8\">100%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.9\">75%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.10\">82.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.15.11\">91.7%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.1.16.1.1\">F1-Score</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.2\" style=\"width:156.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.16.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.16.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T1.1.16.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.16.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.16.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.16.2.1.2.1.1.1\">Provides a trade-off between precision</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.1.16.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.16.2.1.2.1.2.1\">and recall.</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T1.1.16.2.1.3\"></span></p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.3\">75%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.4\">95%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.5\">98.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.6\">60%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.7\">79.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.8\">93.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.9\">60%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.10\">71.7%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.16.11\">85.9%</td>\n</tr>\n</table>\n</figure>",
70
+ "capture": "TABLE I: A comparison of detection results (without, partial and full terms show the levels of training process)."
71
+ }
72
+ },
73
+ "image_paths": {
74
+ "1": {
75
+ "figure_path": "2311.05462v2_figure_1.png",
76
+ "caption": "Figure 1: HIL Testbed considering the IDS with human recommendations.",
77
+ "url": "http://arxiv.org/html/2311.05462v2/extracted/5430665/images/Figure1.jpg"
78
+ },
79
+ "2": {
80
+ "figure_path": "2311.05462v2_figure_2.png",
81
+ "caption": "Figure 2: A pre-processing step based on the feature extraction for a log of GOOSE message (actual data from an HIL testbed).",
82
+ "url": "http://arxiv.org/html/2311.05462v2/extracted/5430665/images/Pre-processing_GOOSE.png"
83
+ }
84
+ },
85
+ "validation": true,
86
+ "references": [],
87
+ "url": "http://arxiv.org/html/2311.05462v2"
88
+ }
20240225/2311.06056v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2311.06918v3.json ADDED
@@ -0,0 +1,121 @@
1
+ {
2
+ "title": "Resource-Aware Hierarchical Federated Learning for Video Caching in Wireless Networks",
3
+ "abstract": "Video caching can significantly improve backhaul traffic congestion by locally storing the popular content that users frequently request. A privacy-preserving method is desirable to learn how users\u2019 demands change over time.\nAs such, this paper proposes a novel resource-aware hierarchical federated learning (RawHFL) solution to predict users\u2019 future content requests under the realistic assumptions that content requests are sporadic and users\u2019 datasets can only be updated based on the requested content\u2019s information.\nConsidering a partial client participation case, we first derive the upper bound of the global gradient norm that depends on the clients\u2019 local training rounds and the successful reception of their accumulated gradients over the wireless links.\nUnder delay, energy and radio resource constraints, we then optimize client selection and their local rounds and central processing unit (CPU) frequencies to minimize a weighted utility function that facilitates RawHFL\u2019s convergence in an energy-efficient way.\nOur simulation results show that the proposed solution significantly outperforms the considered baselines in terms of prediction accuracy and total energy expenditure.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Video streaming is the dominant source of data traffic, and out of video views occur on wireless devices [1 ###reference_b1###].\nAs such, content caching [2 ###reference_b2###] can become an integral part of modern wireless networks since it can save the backhaul transmission bandwidth and reduce network congestion by ignoring repetitive extractions of the same few popular videos the users repeatedly request from the far-away cloud.\nTwo major design components of an efficient video caching platform are content placement and content delivery [3 ###reference_b3###].\nKnowing which content the users will request in the near future can crucially help the provider in the content placement phase.\nHowever, it is often challenging to predict content popularity as it changes rapidly.\nBesides, many users may have their individual preferences for specific types of content that are not necessarily globally popular.\nWhile machine learning (ML) may accurately predict content popularity or user-specific content demand, the need for immense number of data samples for training the ML model poses challenges in wireless video caching platforms.\nA further challenge arises from requirements for privacy and/or protection of business secrets.\nThe user equipment (UE) makes a content request using its serving base station (BS) to the content service provider (CSP) in wireless networks.\nOn the one hand, the UE and the CSP do not reveal the exact content ID/information to the BS, the former because they want to protect their privacy, and the latter because CSP and the (wireless) internet service provider (ISP) operating the BS are often competitors.\nOn the other hand, the spatial information of the UEs is only known to the ISPs, which does not want to convey it to the CSP.\nAs such, privacy-preserving coordination among the UEs, ISPs and CSP is required.\nTherefore, distributed and privacy-preserving federated learning (FL) [4 ###reference_b4###] is ideal for video caching in wireless networks.\nSome existing literature [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###] acknowledged the need for privacy protection and proposed FL-based solutions for content caching.\nIn [5 ###reference_b5###], Qiao et al. proposed a FL solution for content caching in resource-constrained wireless networks using two separate deep reinforcement learning (DRL) agents to find a subset of clients and their local training rounds.\nWang et al. also leveraged a similar strategy [6 ###reference_b6###], where users and edge servers used two separate DRL agents to learn computation offloading and content placement strategies, respectively.\nBoth [5 ###reference_b5###] and [6 ###reference_b6###] considered federated aggregation of the DRL agents.\nJiang et al. used an offline individual content preference learning, followed by an adaptive context space partitioning and FL-based popularity prediction algorithm in [7 ###reference_b7###].\nLi et al. developed an attention-weighted FL algorithm for device-to-device wireless networks where they partition the UEs into groups based on mobility and social behaviors in [8 ###reference_b8###].\nKhanal et al. 
proposed a self-attention-based FL algorithm for content caching in vehicular networks in [9 ###reference_b9###], where the moving self-driving cars, roadside units and macro BS collaborate to train the model.\nWhile the above studies recognized the privacy concern in content caching, the cooperation among the three entities, i.e., UE, BS and CSP, was not addressed.\nBesides, the above studies assumed that the client\u2019s training dataset is readily available (we use the terms UE and client interchangeably throughout the paper).\nIn reality, a UE\u2019s content requests are sporadic, and its dataset only contains the requested content\u2019s information.\nMoreover, since practical networks have hierarchical architectures where the client only communicates with its immediate upper tier, i.e., the BS [10 ###reference_b10###], a hierarchical federated learning (HFL) solution is needed.\nMotivated by the above facts, we propose a novel resource-aware hierarchical federated learning (RawHFL) algorithm for predicting clients\u2019 future content requests.\nWe consider that UEs\u2019 requests arrive based on their own requirements, and their datasets can only be updated with the requested content\u2019s information.\nIncorporating the well-known system and data heterogeneity, we derive a convergence analysis revealing that the global gradient norm depends on the successful reception of the clients\u2019 trained accumulated gradients and on their local training rounds.\nAs such, we jointly optimize client selection and the clients\u2019 local training rounds and CPU frequencies to minimize a weighted utility function that facilitates RawHFL\u2019s convergence and minimizes energy expense under delay, energy and radio resource constraints.\nOur extensive simulation results validate that the proposed solution outperforms existing baselines in terms of test accuracy and energy expense.\n(The notation table that appears here in the original layout is summarized as Table I ###reference_###.)"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II System Model",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Network Model",
21
+ "text": "We consider a cache-enabled wireless network, consisting of distributed UEs, BSs and a CSP.\nDenote the UE set and the BS set by .\nBesides, denote the UEs that are associated to BS by such that .\nThe system operates in discrete time slot , where the duration between two slots is seconds.\nThe CSP has a fixed content catalog, denoted by , where is the content set of genre .\nBesides, denote the total number of content in the catalog by .\nEach BS is equipped with an edge server (ES)\nthat has a limited cache storage and computation capability, and is under the control of the CSP.\nConsequently, the BS has no information about what content is stored in the ES.\nTo keep its operational information private, the CSP assigns temporal tag IDs to the original content ID that it shares with the BS.\nThe mappings between the actual content IDs and the tagged IDs are only known to the CSP and the UEs.\nMoreover, the CSP can periodically change these mappings to prevent the BS from learning the actual content information.\nNote that since each BS is equipped with a distinct ES, we use the same notation to represent the ES of the BS for brevity.\nThe network has Hz bandwidth allocated for performing the FL task,\nwhich is further divided into orthogonal physical resource blocks.\nThe frequency reuse factor is , i.e., each BS utilizes the same pRB set.\nBesides, we assume that the BSs collaborate to find node associations and pRB allocations such that inter-cell interference is fully mitigated.\nA list of the important notations used in this paper is summarized in Table I ###reference_###."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Content Request Model",
27
+ "text": "This work assumes that each UE has a small initial historical raw dataset, denoted by .\nDuring slot , a UE may request a content from the CSP with probability , which can be chosen according to the UE\u2019s activity level.\nGiven that the UE is active, we use the binary indicator function to denote which content the UE requests in that slot.\nThe UE stores the requested content\u2019s information in its local raw dataset that evolves as follows\nwhere and are the feature and label vectors, respectively, for the requested content.\nAs such, our dataset acquisition mimics the natural data sensing in real-world applications, where the dataset sizes are time-varying [11 ###reference_b11###, 12 ###reference_b12###].\nBesides, each UE follows a popularity-preference tradeoff in its content request model.\nMore specifically, the UEs have their own independent genre preferences.\nDenote UE \u2019s preference for genre by such that and .\nWe model the genre preference using symmetric Dirichlet distribution , where the is the concentration parameter that controls the skewness [13 ###reference_b13###].\nInitially, the UE requests the most popular content from the selected genre .\nFor the subsequent request, it requests the most similar content222We consider each content\u2019s distinctive feature set and calculate the cosine similarity of the content within the same genre. to the previously requested content in the same genre with probability and the most popular content of a different genre with probability ."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Hierarchical Federated Learning: Preliminaries",
33
+ "text": "The central server wants to train an ML model, parameterized by , to predict the future content requests of the UEs.\nWhile the UEs will not reveal their content preferences, they are eager to participate in the model training and share their predictions with the ES.\nWithout loss of generality, during slot , each UE processes its local raw dataset and prepares a processed dataset , where is the (processed) training samples and is the total training samples.\nUsing their processed datasets, each UE wants to minimize the following loss function.\nwhere is the loss associated with the data sample.\nIn HFL [14 ###reference_b14###], the immediate upper tier of the clients, i.e., the ES, wishes to minimize\nwhere is the weight of the client in BS and .\nBesides, the upper tier of the ESs, i.e., the central server, aims at minimizing the following global loss function.\nwhere is the weight of the ES of the BS at the central server and .\nMoreover, due to the dynamic changes in the local datasets, the optimal global model is not necessarily stationary [12 ###reference_b12###].\nAs such, the central server seeks a sequence of \u2019s, where"
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Resource-Aware Hierarchical Federated Learning: Algorithm and Convergence",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A Resource-Aware Hierarchical Federated Learning Model",
45
+ "text": "Similar to general HFL [14 ###reference_b14###, 15 ###reference_b15###, 10 ###reference_b10###], in our proposed RawHFL, the clients, the\nES and the central server perform local, edge and global rounds.\nThe nodes in each tier perform their local training before sending their updated models to their respective upper levels.\nBesides, due to resource constraints, we consider that each ES selects only a subset of the clients to participate in model training.\nLet denote the selected client set of the ES of BS during the edge round of the global round.\nDenote the client\u2019s participation by\nIn each edge round, the ES sends its model to its BS.\nThe BS then broadcasts333Similar to existing studies [12 ###reference_b12###, 13 ###reference_b13###], we ignore the downlink transmission time as the BS can use higher transmission power and entire bandwidth. its available model to all , who synchronize their local models as\nThe client takes a stochastic gradient descent (SGD) step to minimize (2 ###reference_###) and updates its model as\nwhere is the learning rate.\nParticularly, each client has seconds and Joules of time and energy budgets to spend in each edge round .\nAs such, each client performs , where , SGD rounds, which can be different for different UEs.\nTherefore, we calculate the local computation time as\nwhere , , , and are the number of mini-batches, batch size, number of CPU cycle to compute -bit data, data sample size in bits and the CPU frequency.\nBesides, the energy expense for performing these SGD rounds is [5 ###reference_b5###, 15 ###reference_b15###, 12 ###reference_b12###]\nwhere is the effective capacitance of the CPU chip.\nAfter finishing local training, each client offloads its accumulated gradient to the BS, which then forwards it to its ES.\nThis accumulated gradient incurs a wireless payload size of bits [13 ###reference_b13###], where is the floating point precision (FPP).\nThe required time to offload is\nwhere is the pRB size and is the signal-to-noise-ratio (SNR), which is calculated as444Since practical networks now offer enough diversity against small-scale fading, we dropped the small-scale fading channel factor .\nwhere is the uplink transmission power of the client.\nBesides, and are the path loss and log-Normal shadowing.\nFurthermore, is the variance of the circularly symmetric zero-mean Gaussian distributed random noise.\nMoreover, the required energy expense to offload is calculated as\nNote that due to the subset client selection, each ES minimizes , where .\nDuring the edge aggregation time, each ES updates its edge model using the \u2019s from all of its selected clients as\nwhere is a binary indicator that defines whether is received at the BS during the aggregation time and is defined as\nEach BS then broadcasts their respective ES\u2019 updated model to their selected clients, and the clients perform local training and offload back the accumulated gradients.\nThis process repeats for edge rounds, after which the ESs send their updated models to the central server that aggregates the received edge models as\nNote that the global loss of RawHFL is , which may differ from (3 ###reference_###) if .\nThe central server then sends this updated model to the ESs, who perform their edge rounds following the above process.\nAlgorithm 1 ###reference_### summarizes these steps."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B Convergence of RawHFL",
51
+ "text": "We make the following standard assumptions [14 ###reference_b14###, 15 ###reference_b15###, 10 ###reference_b10###]\nThe loss functions are -smooth.\nThe mini-batch gradients are unbiased. The variance of the gradients is bounded, i.e., .\nThe stochastic gradients in different local epochs, client selection and accumulated gradient offloading in edge rounds are independent.\nThe divergence between the two interconnected tiers\u2019 loss functions is bounded. For all , and ,\nwhere .\nSuppose the above assumptions hold.\nWhen , the average global gradient norm is upper-bounded as\nwhere the expectations depend on clients\u2019 randomly selected mini-batches and \u2019s.\nBesides, , , and .\nWe start with the aggregation rule (16 ###reference_###) and the -smoothness assumption, i.e., \nThen, we derive the upper bounds of the inner-product and norm terms, which gives the following when .\nTo that end, assuming , we derive the upper bounds of the last term of (III-B ###reference_###), and similarly for the second last term using .\nFinally, we plug those terms in (III-B ###reference_###) and do some algebraic manipulations to reach (1 ###reference_###).\n\u220e\nIn (1 ###reference_###), the first term captures the changes in the global loss function in consecutive rounds.\nThe following terms with come from the variance of the gradients.\nBesides, the following terms with and stem from the bounded divergence assumptions of the loss functions.\nFinally, the last term appears from the wireless links between the UEs and the BSs.\nMoreover, when the accumulated gradients are received successfully, i.e., all , the last term becomes .\nWhen , and , the bound in Theorem 1 ###reference_orem1### boils down to"
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "IV RawHFL: Joint Problem Formulation and Solutions",
57
+ "text": "Theorem 1 ###reference_orem1###, Corollary 1 ###reference_ollary1### and Remark 1 ###reference_ark1### show that the controllable terms in the convergence bound are the \u2019s and the \u2019s.\nBesides, \u2019s are intertwined with the FL parameters (\u2019s and \u2019s) and wireless factors that influence (12 ###reference_###).\nFurthermore, we assume that the accumulated gradients is transmitted as a single wireless packet.\nThe BS can successfully decode the gradients without errors555This assumption is reasonable as wireless networks use hybrid automatic repeat request and error correction mechanisms [16 ###reference_b16###, Chap. ]. if it receives the packet within the deadline .\nAs such, we denote .\nThe above facts inspire us to solve the following optimization problem to jointly select the clients and find their CPU frequencies and local iterations at the beginning of each edge round .\nwhere and are constraints for the client selection.\nConstraints and ensure that the client selects its local iteration and CPU frequency within the upper bounds.\nBesides, and are enforced to satisfy the deadline and energy constraints.\nNote that we assume the clients share their system information with their associated BS.\nThe BSs cooperate and solve (IV ###reference_###) centrally.\nThen, each BS conveys the optimized parameters to their selected clients.\nIt is also worth noting that we may equivalently minimize .\nIntuitively, this objective function should select the clients to optimize the weighted combination of their \u2019s.\nHowever, both (IV ###reference_###) and this equivalent objective function do not guarantee energy efficiency.\nBesides, based on our dataset acquisition and content request model, it is reasonable to consider is at least as long as the duration of the video content.\nTherefore, we seek an energy-efficient solution and consider the following weighted combination of the \u2019s and energy expense of the clients as our objective function.\nwhere and .\nHowever, the optimization problem is still a binary mixed-integer nonlinear problem and is NP-hard.\nTherefore, we first relax the integer constraint on , and then define a new variable to replace the multiplication of binary and continuous variables.\nBesides, we replace the binary client selection variable as\nTo that end, we re-write the objective function as\nwhere and is a positive constant that acts as a penalty.\nBesides, using first-order Taylor series, we approximate the last quadratic term and rewrite the objective function as\nwhere . Besides, , and are some initial feasible points.\nFurthermore, we approximate the non-convex computation time as follows\nWe thus transform the original optimization problem as\nwhere the constraints are taken for the same reasons as in (IV ###reference_###).\nNote that problem (IV ###reference_###) belongs to the class of \u201cdifference of convex programming\u201d problems and can be solved iteratively using existing tools such as CVX [17 ###reference_b17###].\nOur proposed iterative solution is summarized in Algorithm 2 ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Simulation Results and Discussions",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "5.1",
67
+ "parent_section_id": "5",
68
+ "section_name": "Simulation Setting",
69
+ "text": "To show the effectiveness of the proposed approach, we present simulation results from a system with the following settings. We consider and .\nThe coverage radius of the BS is meters and each BS has UEs.\nBesides, , , , , , , , , [15 ###reference_b15###], [14 ###reference_b14###], , , and seconds.\nThe activity levels \u2019s and probability of requesting similar content from the same genre \u2019s are drawn uniformly randomly from and , respectively.\nThe genre preferences \u2019s are generated using distribution.\nThe \u2019s, \u2019s, \u2019s and \u2019s are randomly drawn from cycles, GHz, Joules and dBm, respectively.\nFurthermore, kHz and carrier frequency is GHz.\nThe path losses and line-of-sight probabilities are modeled based on the urban macro model as listed in [18 ###reference_b18###, Section ].\nThe number of pRBs is varied based on \u2019s.\nMoreover, we have used a fully connected (FC) neural network666Our solution is general and can easily be extended to accommodate other neural networks, such as recurrent neural networks or transformers. that has the following architecture: .\nFinally, the clients use a sliding window technique, where, in slot , each UE processes its dataset so that the feature vector is the previously requested content\u2019s information and the label is the currently requested content\u2019s label.\n###figure_1### ###figure_2### ###figure_3###"
70
+ },
71
+ {
72
+ "section_id": "5.2",
73
+ "parent_section_id": "5",
74
+ "section_name": "Performance Analysis",
75
+ "text": "We first observe the convergence performance of the proposed algorithm with respect to different \u2019s in Fig. 3 ###reference_###.\nIntuitively, when the selected client set\u2019s size is small, the global model is trained on fewer data samples.\nUnder severe data heterogeneity, this may lead to poor performance if the clients are not selected appropriately.\nBesides, based on the objective function in (IV ###reference_###), the proposed solution proactively selects the clients that minimize the weighted utility function.\nTherefore, it is expected that RawHFL may take more global rounds to convergence when the is small.\nOur simulation results also show similar trends in Fig. 3 ###reference_###.\nWe observe that the test loss and test accuracy performances improve when increases.\nFor example, with , the test accuracy drops about when is around , while the test accuracy reaches a plateau when is about with .\nHowever, while a larger may help RawHFL convergence faster, the bandwidth and energy costs also increase.\nMore specifically, when increases, (IV ###reference_###) must choose the defined number of clients so that the utility function is minimized.\nAs such, it may select some clients with higher energy expenses.\nOur simulation results also validate this intuition.\nFig. 3 ###reference_### shows the cumulative distribution function (CDF) of the total energy expenses during each edge round for different client set sizes.\nFor example, the clients spend no more than Joules of energy about , , and of the edge rounds, when , , and , respectively."
76
+ },
77
+ {
78
+ "section_id": "5.3",
79
+ "parent_section_id": "5",
80
+ "section_name": "Performance Comparisons",
81
+ "text": "We next show performance comparisons with some baselines.\nTo our best knowledge, no existing baseline exactly considers our system design and uses HFL for video caching.\nAs such, we modify the traditional hierarchical federated averaging (H-FedAvg) algorithm [14 ###reference_b14###] for comparison.\nIn the modification, termed H-FedAvg-M, we find the smallest number of local rounds that all clients can train their local models without violating their constraints.\nIn the second modification, termed H-FedAvg-M, we drop the straggler who cannot even perform a single local round and find the least number of local iterations that the rest of the remaining clients can perform without violating their constraints.\nIn the third modification, termed H-FedAvg-UB, we consider the upper bound of H-FedAvg [14 ###reference_b14###], where each client can perform local rounds without constraints.\nWe assume for these baselines.\nFinally, we consider a naive popularity-based Top-Popular baseline.\nIn Fig. 3 ###reference_###, we show the Top-M accuracy comparison of our proposed solution with these baselines.\nWhile a higher value of M should increase the accuracy, the baselines in constrained cases are expected to perform worse.\nSince all clients perform the same number of local rounds in H-FedAvg-M, some ES may fail to train the model in some edge rounds as some clients may not have sufficient battery powers to offload their trained models due to poor channel conditions.\nH-FedAvg-M should work better than H-FedAvg-M as we drop these stragglers in H-FedAvg-M.\nBesides, H-FedAvg-UB is the ideal case upper bound.\nFurthermore, since the clients request content following a popularity-preference tradeoff, the Top-Popular baseline is expected to perform worse.\nOur simulation results in Fig. 3 ###reference_###, where the horizontal and the vertical lines show the mean and standard deviation of the test accuracies across all clients, also validate these trends.\nParticularly, our solution yields about , and higher (Top-) test accuracies than H-FedAvg-M, H-FedAvg-M and Top-Popular baselines, respectively.\nWhile RawHFL achieves nearly identical test accuracies compared to the H-FedAvg-UB baseline, our solution is energy efficient.\nWe list the energy expenses of these baselines in Table II ###reference_###, which clearly indicates that our proposed solution outperforms these baselines."
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "VI Conclusion",
87
+ "text": "We proposed a privacy-preserving RawHFL solution for video caching under a realistic content request model and real-time data sensing mechanism.\nBased on our convergence analysis and content request model, we optimized client selections, local training rounds and CPU frequencies jointly to minimize a weighted utility function that facilitated faster convergence energy-efficiently.\nMoreover, our results suggest a tradeoff between the number of participating clients that facilitates faster convergence and the corresponding resource expenses.\nAcknowledgments: The authors thank Dr. Min-Seok Choi, Omer Gokalp Serbetci and Yijing Zhang for the helpful discussions."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {
92
+ "1": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S1.T1.119.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S1.T1.120.2\" style=\"font-size:90%;\">Summary of important variables</span></figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.117\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.117.118.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.117.118.1.1\" style=\"width:37.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_center ltx_align_top\" id=\"S1.T1.117.118.1.1.1\">Parameter</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.117.118.1.2\" style=\"width:184.9pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_center ltx_align_top\" id=\"S1.T1.117.118.1.2.1\">Definitions</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.2.2.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.2.2.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.3.3.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.3.3.3.1.1\">User , all user set</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.5.5.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.5.5.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.6.6.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.6.6.3.1.1\">base station , all BS set</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.9.9\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.8.8.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.8.8.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.9.9.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.9.9.3.1.1\"> SGD round, upper bound for local SGD round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.12.12\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.11.11.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.11.11.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.12.12.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.12.12.3.1.1\"> edge round, total edge round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.15.15\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.14.14.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.14.14.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.15.15.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.15.15.3.1.1\"> global round, total global round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.17.17\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.16.16.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S1.T1.16.16.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.17.17.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.17.17.2.1.1\"> discrete slot at which UE may request content</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.23.23\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.19.19.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.19.19.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.23.23.6\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.23.23.6.4.4\">BS \u2019s UE set; selected UE/client set of BS during edge round of global round </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.27.27\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.24.24.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.24.24.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.27.27.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.27.27.4.3.3\">UE/client \u2019s local SGD round during edge round of global round </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.30.30\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.29.29.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.29.29.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.30.30.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.30.30.3.1.1\">Genre ; total genres</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.36.36\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.33.33.3\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.33.33.3.3.3\">, , </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.36.36.6\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.36.36.6.3.3\"> content of genre ; all content set in genre ; entire content catalog</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.39.39\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.38.38.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.38.38.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.39.39.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.39.39.3.1.1\">Total content in genre ; total content in the catalog</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.43.43\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.42.42.3\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.42.42.3.3.3\">, , </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.43.43.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.43.43.4.1.1\"> pRB; total pRBs; pRB set</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.45.45\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.44.44.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.44.44.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.45.45.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S1.T1.45.45.2.1.1\">UE \u2019s initial historical dataset</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.47.47\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.46.46.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.46.46.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.47.47.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.47.47.2.1.1\">UE \u2019s probability of being active (making a content request)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.51.51\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.48.48.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.48.48.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.51.51.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.51.51.4.3.3\">Binary indicator function that defines whether requests content during slot </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.54.54\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.52.52.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.52.52.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.54.54.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.54.54.3.2.2\">UE \u2019s preference to genre </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.55.55\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.55.55.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.55.55.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.55.55.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.55.55.2.1\">Dirichlet distribution\u2019s concentration parameter for the genre preference</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.59.59\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.57.57.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.57.57.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.59.59.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.59.59.4.2.2\">UE \u2019s processed dataset; total samples in UE \u2019s processed dataset</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.64.64\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.62.62.3\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.62.62.3.3.3\">; , </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.64.64.5\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.64.64.5.2.2\">UE \u2019s loss function; BS \u2019s loss function; global loss function</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.75.75\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.67.67.3\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.67.67.3.3.3\">, , </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.75.75.11\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.75.75.11.8.8\">UE/Client \u2019s local model during SGD round of edge round of 
global round ; BS \u2019s edge model during edge round of global round ; global model during round </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.80.80\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.76.76.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.76.76.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.80.80.5\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.80.80.5.4.4\">UE \u2019s gradient during local round of edge round of global round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.84.84\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.81.81.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.81.81.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.84.84.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.84.84.4.3.3\">UE \u2019s accumulated gradients during edge round of global round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.85.85\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.85.85.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.85.85.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.85.85.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.85.85.2.1\">Learning rate</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.89.89\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.87.87.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.87.87.2.2.2\">, </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.89.89.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.89.89.4.2.2\">UE \u2019s trained model/accumulated gradients\u2019 weight; BS \u2019s model/accumulated gradients\u2019 weight</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.95.95\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.91.91.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.91.91.2.2.2\">; </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.95.95.6\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.95.95.6.4.4\">Binary indicator function to define whether is selected in edge round of global round ; success probability of </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.100.100\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.97.97.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.97.97.2.2.2\">; </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.100.100.5\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.100.100.5.3.3\">UE \u2019s local model training time and energy overheads during edge round of global round </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.105.105\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.102.102.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.102.102.2.2.2\">; </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.105.105.5\" style=\"width:184.9pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S1.T1.105.105.5.3.3\">UE \u2019s accumulated gradient offloading time and energy overheads during edge round of global round </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.106.106\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.106.106.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.106.106.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.106.106.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.106.106.2.1\">Deadline threshold to finish one edge round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.108.108\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.107.107.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.107.107.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.108.108.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.108.108.2.1.1\">Transmission power of </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.110.110\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.109.109.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.109.109.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.110.110.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.110.110.2.1.1\">Energy budget of for each edge round</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.114.114\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.112.112.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.112.112.2.2.2\">; </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.114.114.4\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.114.114.4.2.2\">Binary indicator function to define whether accumulated gradient of is received by the <abbr class=\"ltx_glossaryref ltx_centering\" title=\"base station\"><span class=\"ltx_text ltx_glossary_short\">BS</span></abbr>; success probability of </p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.116.116\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.116.116.2\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.116.116.2.2.2\">; </p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S1.T1.116.116.3\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.116.116.3.1\">Floating point precision; client\u2019s uplink payload size for the accumulated gradients</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.117.117\">\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.117.117.1\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.117.117.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"S1.T1.117.117.2\" style=\"width:184.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S1.T1.117.117.2.1\">Utility function for the joint optimization problem</p>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
94
+ "capture": "TABLE I: Summary of important variables"
95
+ },
96
+ "2": {
97
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.29.4.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S5.T2.6.3\" style=\"font-size:90%;\">Performance comparison: , , s</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.27\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.27.22.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.27.22.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.27.22.1.1.1\">FL Algorithm</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.27.22.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.27.22.1.2.1\">Test Accuracy</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.27.22.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.27.22.1.3.1\">Energy Expense [J]</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.9.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.7.1.1\">\n<abbr class=\"ltx_glossaryref\" title=\"resource-aware hierarchical federated learning\"><span class=\"ltx_text ltx_glossary_short\">RawHFL</span></abbr>-\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.8.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.9.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.10.4.1\">\n<abbr class=\"ltx_glossaryref\" title=\"resource-aware hierarchical federated learning\"><span class=\"ltx_text ltx_glossary_short\">RawHFL</span></abbr>-\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.11.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.12.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.15.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.13.7.1\">\n<abbr class=\"ltx_glossaryref\" title=\"resource-aware hierarchical federated learning\"><span class=\"ltx_text ltx_glossary_short\">RawHFL</span></abbr>-\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.14.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.15.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.18.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.16.10.1\">\n<abbr class=\"ltx_glossaryref\" title=\"resource-aware hierarchical federated learning\"><span class=\"ltx_text ltx_glossary_short\">RawHFL</span></abbr>-\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.17.11.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.18.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.21.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.19.13.1\">H-FedAvg-M\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.20.14.2\"></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_t\" id=\"S5.T2.21.15.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.24.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.22.16.1\">H-FedAvg-M\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.23.17.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.24.18.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.26.20\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.26.20.3\">H-FedAvg-UB <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.06918v3#bib.bib14\" title=\"\">14</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.25.19.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.26.20.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.27.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.27.21.2\">Top-Popular</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.27.21.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.27.21.3\">N/A</td>\n</tr>\n</tbody>\n</table>\n</figure>",
98
+ "capture": "TABLE II: Performance comparison: , , s"
99
+ }
100
+ },
101
+ "image_paths": {
102
+ "1(a)": {
103
+ "figure_path": "2311.06918v3_figure_1(a).png",
104
+ "caption": "Figure 1: Global round vs average test loss and accuracy",
105
+ "url": "http://arxiv.org/html/2311.06918v3/x1.png"
106
+ },
107
+ "1(b)": {
108
+ "figure_path": "2311.06918v3_figure_1(b).png",
109
+ "caption": "Figure 1: Global round vs average test loss and accuracy",
110
+ "url": "http://arxiv.org/html/2311.06918v3/x2.png"
111
+ },
112
+ "1(c)": {
113
+ "figure_path": "2311.06918v3_figure_1(c).png",
114
+ "caption": "Figure 1: Global round vs average test loss and accuracy",
115
+ "url": "http://arxiv.org/html/2311.06918v3/x3.png"
116
+ }
117
+ },
118
+ "validation": true,
119
+ "references": [],
120
+ "url": "http://arxiv.org/html/2311.06918v3"
121
+ }
20240225/2311.07829v2.json ADDED
@@ -0,0 +1,163 @@
1
+ {
2
+ "title": "A Coding Scheme for Unresponsive and Byzantine Server Resilient Quantum \ud835\udc4b-Secure \ud835\udc47-Private Information Retrieval",
3
+ "abstract": "Building on recent constructions of Quantum Cross Subspace Alignment (QCSA) codes, this work develops a coding scheme for QEBXSTPIR, i.e., classical private information retrieval with -secure storage and -private queries, over a quantum multiple access channel, that is resilient to any set of up to erased servers (equivalently known as unresponsive servers, or stragglers) together with any set of up to Byzantine servers. The scheme is accordingly labeled QEBCSA, with the \u2018E\u2019 and \u2018B\u2019 indicating resilience to erased and Byzantine servers respectively. The QEBCSA code structure may be broadly useful for problems such as quantum coded secure distributed computation, where security, straggler resilience, and distributed superdense coding gains are simultaneously required. The -security property is further exploited to improve the communication rate when -error decoding is allowed.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Recent interest in entanglement assisted computation over quantum multiple access (QMAC) networks adds fundamentally novel dimensions to the rapidly expanding theory of distributed communication and computation, beyond its classical cornerstones such as secret-sharing[1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###], distributed storage, private information retrieval (PIR)[5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###], coded distributed computation and computation networks [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. Ideas from these diverse perspectives are encapsulated in various specialized coding structures \u2014 Reed-Solomon (RS) codes[16 ###reference_b16###], Cross Subspace Alignment (CSA) [17 ###reference_b17###], Lagrange Coded Computing [18 ###reference_b18###], CSS codes[19 ###reference_b19###, 20 ###reference_b20###], and the recently developed -sum box abstraction[21 ###reference_b21###], to name a few. Developing new schemes to assimilate the specialized coding structures as much as possible is essential for a unified theory that can facilitate a broader array of applications. This work represents such an endeavor, with the goal of developing a coding scheme for QEBXSTPIR[17 ###reference_b17###], i.e., quantum -secure -private information retrieval that is also resilient to erased servers (equivalently referred to as unresponsive servers or stragglers) and Byzantine servers.\nIn the QEBXSTPIR [17 ###reference_b17###] setting there are servers equipped in advance (independent of the classical data) with entangled quantum digits (qudits). classical messages (files, datasets) are distributed among the servers in an -secure fashion, so that even if any servers collude they learn nothing about the messages. A user wishes to efficiently retrieve one of the messages by querying the servers -privately, so that even if any servers collude they learn nothing about which message is desired by the user. Each server manipulates its qudits locally based on the user\u2019s queries and the messages available to that server. The qudits are then sent as answers from the servers to the user. Here we allow up to stragglers, i.e., any servers may be unresponsive, equivalently their answers may be lost over the QMAC, and Byzantine servers whose answers are subject to arbitrary errors. The lost qudits are erasures in the sense that while the user\u2019s queries are sent without knowledge of which servers may turn out to be stragglers, once the user receives the qudits in response, it knows which servers\u2019 answers were erased, i.e, which servers did not respond. The identities of the Byzantine servers are not directly revealed to the user from the answers. This corresponds to general errors in the context of error correcting codes where the position of error is unknown. 
Unresponsive and Byzantine server resilience means that we require that regardless of which servers are erased, and which servers are Byzantine, the coding scheme must allow the user to recover its desired message by measuring the qudits that it is able to receive.\nOur solution centers around the CSA coding scheme that was originally introduced in the setting of XSTPIR, i.e., PIR with -secure storage and -private queries [22 ###reference_b22###], and subsequently shown to be applicable to a number of classical variants of PIR, coded computing and private read-write designs for federated submodel learning [23 ###reference_b23###]. The classical CSA scheme was generalized to a QCSA scheme (quantum CSA scheme) for XSTPIR over the QMAC in [8 ###reference_b8###], and resilience to eavesdroppers was explored in [9 ###reference_b9###]. The main challenge noted in [8 ###reference_b8###] for future generalizations is to achieve resilience to erasures and Byzantine servers, which is our goal in this work.\nNote that QEBXSTPIR reduces to QEXSTPIR when , and QBXSTPIR when . In terms of erasure-resilience, recent work in [4 ###reference_b4###] explores secret-sharing jointly with symmetric -private Quantum PIR. Secret sharing and erasure-resilience are related because in both cases the goal is to recover the desired information from a subset of answers. Indeed, if we ignore the -secure storage constraint, then the approach in [4 ###reference_b4###] should yield an erasure resilient QTPIR scheme. However, the scheme is based on random coding which is not directly compatible with -security. Connections to [4 ###reference_b4###] are further elaborated in Remark 5 ###reference_ark5### later in this work. Moreover, erasure resilience of QCSA code structure is particularly important due to the broad applications of CSA codes, e.g., to coded distributed computation (CDC) [24 ###reference_b24###]. A study of the resilience against Byzantine servers under zero-error criterion is initiated most recently in [11 ###reference_b11###]. While certain details of the coding scheme of [11 ###reference_b11###] are unclear to us111To the best of our understanding, [11 ###reference_b11###] appears to achieve rate even in certain settings with Byzantine servers. Let us note that even if the identities of the Byzantine servers are revealed to the user (reduced to erasures), qudits can at most carry classical dits of information according to the Holevo bound [25 ###reference_b25###] which limits the rate to which is strictly less than . For example, under the setting , , from [11 ###reference_b11###] it seems that rate is achievable, while the Holevo bound indicates that the rate of any scheme cannot exceed . let us note that under the zero-error condition, a Byzantine server is as harmful as unresponsive servers. On the other hand, we show in this paper that under an -error formulation a Byzantine server can be only as harmful as unresponsive server, by harnessing the -security property of the PIR scheme.\nIn this paper we first explore the QEXSTPIR problem. In order to achieve -security and -erasure resilience simultaneously, we explicitly construct an -sum box [21 ###reference_b21###] based on classical erasure-resilient CSA codes. As explained in [21 ###reference_b21###] the -sum box is specified by matrices. The choice of is subject to a strong self-orthogonality constraint, but can be relatively unstructured as it only needs to be a complement of . 
For our QECSA code design, we let the Vandermonde part of the CSA code (which carries undesired noise and interference terms) determine . The Vandermonde structure is compatible with GRS codes which possess the required duality properties. Erasure resilience is then guaranteed by replacing sufficiently many of the Cauchy dimensions (that would otherwise be used to send desired information) with the standard basis vectors in the matrix to allow recovery from arbitrary errors, and using the fact that the ability to recover from arbitrary errors on selected qudits also guarantees recovery from erasures of those qudits.\nQBXSTPIR and QEBXSTPIR schemes that enable perfect recovery of the desired message by the user are then proposed. These schemes are a combination of the QEXSTPIR scheme along with combinatorial decoding arguments. Similar to the reasoning that a general error is as harmful as erasures in error correcting codes, in our scheme a Byzantine server is as bad as unresponsive servers. That is to say, a QEXSTPIR scheme with is applied to the QBXSTPIR setting. In a nutshell, the user first guesses a realization of the non-Byzantine servers. For every servers from these servers, the user pretends that those servers together with the remaining potentially-Byzantine servers are erased, and decodes accordingly. If all the decoding results agree then the guess must be correct and the decoding is successful.\nHowever, in an -error setting, we show that a Byzantine server is only as harmful as unresponsive server. A QEXSTPIR scheme with is applied to solve a QBXSTPIR problem under -error criterion. In a nutshell, the user first decodes with the qudits from any servers that it assumes to be non-Byzantine. If the assumption is incorrect, then we will show that the user can detect the decoding error with high probability. This is possible due to the -security condition. Since any or fewer servers know nothing about the messages, it is possible to design a test that reveals to the user if the decoding result is a valid message, such that any Byzantine servers who do not know the messages cannot make the erroneous decoding result pass the test with high probability. Similar ideas are used in the authentication capacity of adversarial channels [26 ###reference_b26###], distributed storage system with adversarial attacks [27 ###reference_b27###], and communication-efficient secret sharing with malicious adversary [28 ###reference_b28###]. For example, suppose a message , appended with some hash function of itself is transmitted to a receiver as the tuple . An attacker introduces errors so that is received instead. If the errors are chosen independently of , i.e., with no knowledge of , then the manipulated hash will not be the hash of the manipulated message with high probability, i.e., . Meanwhile, since the size of the hash of a message can be negligible compared with the size of original message, it introduces almost no communication overhead to transmit a message together with its hash.\nNotation: For two integers , the set is denoted as . For compact notation, is denoted as . For a set , denotes its cardinality, and for any . For an matrix , denotes the submatrix of whose row indices are in and column indices in . or will be replaced by if they contain the entire rows or columns respectively. If is a vector, we simply write to denote the subvector of whose indices are in . denotes the vector subspace spanned by the columns of . 
For a length vector , denotes the diagonal matrix whose diagonal elements are entries of . denotes the all zero matrix whose dimension will be clear according to the contexts. is the identity matrix. represents the column of . outputs the cardinality of a set. For a length- vector , its symplectic weight .\nThroughout this paper, let be a finite field with order where is a prime power. Let be the field trace that is an -linear map from to . For quantum systems , let denote the composite quantum system of all the quantum systems. Let denote the dimension of the quantum system . If , we call it a -dimensional qudit. Let be the computational basis of a -dimensional qudit. For any , we define the general and operators as the operators such that when acting on a -dimensional qudit , and where ."
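The hash-appending idea sketched in the final paragraph above can be illustrated with a short example. The keyed polynomial hash over a prime field is an assumed stand-in for whatever validity test the actual scheme employs; an attacker who knows neither the message nor the key passes the test with probability at most about (message length)/p.

```python
# Illustrative sketch of detecting Byzantine manipulation via an appended hash.
# The keyed polynomial hash over GF(P) is an assumed stand-in for the scheme's
# actual validity test.
import random

P = 2**31 - 1  # a prime field size (assumed)

def poly_hash(msg, key):
    # h = sum_i msg[i] * key^(i+1) mod P
    h, k = 0, key
    for m in msg:
        h = (h + m * k) % P
        k = (k * key) % P
    return h

random.seed(1)
key = random.randrange(1, P)
msg = [random.randrange(P) for _ in range(8)]
tagged = msg + [poly_hash(msg, key)]

# Attacker flips a symbol without knowing msg or key.
corrupted = list(tagged)
corrupted[3] = (corrupted[3] + 12345) % P

received_msg, received_tag = corrupted[:-1], corrupted[-1]
print("tamper detected:", poly_hash(received_msg, key) != received_tag)
```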
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Problem Statement",
15
+ "text": "The messages, storage, queries are classical and defined as in [17 ###reference_b17###]. For completeness, let us restate the definitions here. Specifically, there are independent messages each of which is uniform over .\nThese messages are -securely shared among servers where server gets the share , such that any or fewer servers learn nothing about the messages, i.e.,\nA user wishes to retrieve the , message from the servers by sending the -private queries to the servers such that any or fewer servers learn nothing about , i.e.,\nThere is a random set of unresponsive servers and another random set of Byzantine servers. are disjoint, , and independent of the messages.\nNext let us formulate the classical setting which will serve as a baseline for comparison."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Classical Setting",
21
+ "text": "In the classical setting, there is a set of encoding functions . The unresponsive servers and Byzantine servers in generate their answers as an arbitrary number in . Each server generates the answer as a function of its storage and the query it received from the user, i.e.,\nThe servers then send their answers back to the user. There is a set of decoding functions . The user applies the decoding function to decode the desired message based on the answers, queries, and , i.e., the decoding result is\nNote that the user immediately knows which servers are unresponsive, but not which servers are Byzantine. Thus, the decoding scheme depends on but not on .\nThe probability of decoding error, given that the set of unresponsive servers is , the set of Byzantine servers is and the wrong answers , is\nThe rate of a classical EBXSTPIR scheme is defined as the number of desired message bits recovered per answer bit that is downloaded from the servers, i.e.,\nA rate is said to be -error achievable if there exists an EBXSTPIR scheme with rate greater than or equal to such that\nAn asymptotic rate is said to be -error achievable if there exists a sequence of EBXSTPIR schemes of rate greater than or equal to , where , such that"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Quantum Setting",
27
+ "text": "In the quantum setting, a composite quantum system is initialized in the state given by the density matrix that is independent of the messages and any randomness included in the PIR scheme. Subsystem has dimension and is distributed to server .\nLet denote the set of all unitary matrices and let denote all completely-positive and trace-preserving operations that are applicable to a -dimensional quantum system. There is a set of encoding functions . Any server applies an arbitrary quantum operation to its quantum subsystem while any server , that is neither unresponsive nor Byzantine, applies a unitary operator,\nwhich is a deterministic function of its storage and received query, to its own quantum subsystem.\nThe composite system is thus transformed into , with the resulting state , and sent to the user as such. Let be the partial state of quantum subsystem that is received by the user. A POVM specified by a set of operators is applied to so that the measurement result with probability\nFinally, there is a set of decoding functions . The user uses the decoding function to decode the desired message based on the answers, queries, and , i.e., the decoding result is\nAgain, the measurement and decoding depend on as the user can tell which servers are unresponsive after receiving answers from responsive servers.\nThe probability of decoding error, given the set of unresponsive servers is , the set of Byzantine servers is and the quantum operations , is\nThe rate of a QEBXSTPIR scheme is defined as the number of desired message bits recovered per qubit downloaded from the servers, i.e.,\nA rate is said to be -error achievable if there exists a QEBXSTPIR scheme with rate greater than or equal to such that\nAn asymptotic rate is said to be -error achievable if there exists a sequence of QEBXSTPIR schemes of rate greater than or equal to such that\nRecall that the main difference between the unresponsive and Byzantine servers is that the user knows which servers are unresponsive but not which servers are Byzantine immediately after receiving the answers from the servers. The (Q)EXSTPIR and (Q)BXSTPIR settings are special cases of (Q)EBXSTPIR, with only unresponsive servers () and only Byzantine servers () respectively."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Main Results",
33
+ "text": "The main results of this paper include a QEXSTPIR, QBXSTPIR, QEBXSTPIR and an -error QEBXSTPIR scheme. The QEXSTPIR scheme combines the classical EXSTPIR scheme [17 ###reference_b17###] and the -sum box [21 ###reference_b21###], and achieves a higher rate compared with the classical scheme. The QBXSTPIR scheme and QEBXSTPIR scheme are built upon the QEXSTPIR scheme combined with decoding under all possible sets of Byzantine errors. Both schemes outperform their classical counterparts. An -error QEBXSTPIR scheme where Byzantine servers are only as harmful as unresponsive servers is also proposed. We have the following theorems.\nFor quantum -secure -private information retrieval with servers out of which at most servers are unresponsive, the rate\nis achievable.\nThe achievability of the third regime is trivial since a -dimensional qudit can always be used to transmit a classical -ary symbol and the classical scheme in [17 ###reference_b17###] can be directly applied. The achievability of the first regime will be proved by the QEXSTPIR scheme presented in Section 5 ###reference_###.\nThe achievability of the second regime follows from a simple combination of the schemes for the first and third regimes. First of all, is always achievable by the classical scheme. For the achievability of , intuitively, when , one can always use the scheme that has more demanding privacy constraints, i.e., the scheme with -privacy such that and . The -secure PIR falls into the first regime and the rate can be calculated accordingly. Note that such a choice of needs to be even so that is an integer. The case when is odd will be addressed in Remark 6 ###reference_ark6###.\nIn the first regime, we note the rate of the quantum scheme is twice of the classical scheme, which matches the maximal superdense coding gain observed thus far in other quantum settings of PIR [7 ###reference_b7###, 6 ###reference_b6###, 21 ###reference_b21###], secret sharing [4 ###reference_b4###] or secure multi-party computation [29 ###reference_b29###, 10 ###reference_b10###].\nFor quantum -secure -private information retrieval with servers out of which at most servers are Byzantine, the rate\nis achievable.\nThe achievability of the first regime is proved in Section 6 ###reference_###. The achievability of the last two regimes can be argued similarly as in the proof of Theorem 1 ###reference_orem1###.\nFor quantum -secure -private information retrieval with servers out of which at most servers are unresponsive and servers are Byzantine, the rate\nis achievable.\nThe achievability of the first regime is proved in Section 7 ###reference_###. The achievability of the last two regimes can be argued similarly as in the proof of Theorem 1 ###reference_orem1###.\nFor quantum -secure -private information retrieval with servers out of which at most servers are unresponsive and servers are Byzantine, with -error, the asymptotic rate\nis achievable.\nThe achievability of the first regime is proved in Section 8 ###reference_###. It is not difficult to verify that the error detection scheme of Section 8 ###reference_### can also be applied to the classical scheme, thus the achievability of the last two regimes follows similarly as in the proof of Theorem 1 ###reference_orem1###."
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Building Blocks of the QPIR Schemes",
39
+ "text": "Let us introduce the building blocks of our QPIR schemes."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Cross Subspace Alignment (CSA) Code for EXSTPIR [17]",
45
+ "text": "The QEBXSTPIR scheme is mainly based on the QEXSTPIR scheme which is based on EXSTPIR. As an example, suppose . Let be distinct elements over (). Let every message be a scalar from . The storage at Server , is\nwhere is the collection of all the messages, is the random noise whose entries are uniform over to protect the security of the messages.\nThe query sent from the user to Server is\nwhere is the column of , used for choosing the entry of , and is the random noise whose entries are uniform over to protect the privacy of the desired message.\nThe answer from server is,\nwhere is the desired messages and are two interference symbols comprised of undesired information.\nThe collection of the answers from the servers can be represented as\nDue to the fact that any rows of the matrix in (34 ###reference_###) form an invertible submatrix according to [17 ###reference_b17###], the desired message can be recovered from any responses, by inverting the corresponding submatrix.\nIn general, for arbitrary with , in the EXSTPIR scheme of [17 ###reference_b17###], we have where\nThat is to say, the message\nconsists of symbols from .\nLet be distinct elements in . The answers from the servers are,\nwhere are symbols of desired messages, and . Since every answer is a symbol in we have . According to [17 ###reference_b17###], any rows of form an invertible submatrix. Thus even with unresponsive servers, the symbols of the desired message can be retrieved.\nSimilar to [4 ###reference_b4###], all the requested answers, whether received successfully or erased, are counted towards the download cost. Given the fact that symbols are retrieved with the download cost of symbols, the rate of the scheme is,\nLet us note that CSA codes are not necessary for the cases with as in the previous example. In such cases, RS code based schemes can also achieve the same rate. When , the CSA code based scheme in [22 ###reference_b22###] is the best-known scheme that achieves better interference alignment and thus higher rate. For details, see [22 ###reference_b22###, Section VI-A]."
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "-sum Box Abstraction as a MIMO MAC[21, 8]",
51
+ "text": "The functionality of an -sum box over a finite field , where a prime power, can be described by\nwhich can be regarded as a MIMO MAC over [21 ###reference_b21###, 8 ###reference_b8###]. The input vector to the MIMO MAC is , whose and entries for , i.e., are controlled by transmitter Tx-. The vector is the output obtained by the receiver. There are three key properties of an -sum box.\nCommunication cost: An -sum box is realized by distributing pre-entangled qudits , each of which is a quantum system with dimension , to the transmitters. Each Tx-, applies the quantum gates [21 ###reference_b21###, Section II] to its qudit according to its input and then sends its qudit to the receiver. The receiver measures the qudits to obtain . Thus, the communication cost is -dimensional qudits.\nTransfer function: According to [21 ###reference_b21###], must be of the form\nwhere , is invertible (has rank ), and is strongly self-orthogonal (S.S.O.). By S.S.O., we mean\nThe initial pre-entangled state of the qudits and the measurement depend on but is independent of the choice of . Different choices of just produces different labelings of the measurement result."
52
+ },
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "Discretization of Quantum Errors [30]",
57
+ "text": "Arbitrary errors, including erasures, on a subset of -dimensional qudits can be corrected if arbitrary errors on those qudits can be corrected [31 ###reference_b31###],[30 ###reference_b30###, Section II]. Throughout the QPIR schemes in this paper, the quantum subsystem owned and transmitted by a server will just be a -dimensional qudit. Thus, without loss of generality throughout this paper, instead of assuming that the qudits from the unresponsive servers are not received by the user, we assume that those qudits are subject to arbitrary errors and are then received by the user, who already knows which qudits are subjected to the arbitrary errors, but not the realizations of those errors. Similarly, instead of assuming that the Byzantine servers return arbitrary qudits as the answers, we assume that those qudits are subject to arbitrary errors. But which qudits are subjected to the errors is not known to the user as the user does not know which servers are Byzantine."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "QEXSTPIR Scheme for the First Regime",
63
+ "text": "This scheme is a combination of two instances of a modified CSA scheme over and over-the-air computation (decoding) through the underlying MIMO MAC of an -sum box over the same . Recall that in each instance of a CSA scheme each message has symbols where . Therefore, with two instances, we have symbols from ,\ni.e., for any ,\nwhere .\nThe servers play the role of the transmitters and the user plays the role of the receiver in the -sum box. Since in the -sum box every transmitter sends a -dimensional qudit to the receiver, here . The two instances of the modified classical scheme are the inputs to the -sum box."
64
+ },
65
+ {
66
+ "section_id": "5.1",
67
+ "parent_section_id": "5",
68
+ "section_name": "Two Instances of Modified Classical Scheme",
69
+ "text": "In the modified classical scheme, when generating answers from servers, we will substitute in (50 ###reference_###) with such that,\nThe specific form of is shown in (72 ###reference_###)\nThese answers can be generated by letting each server multiply its original answer generated according to (50 ###reference_###) by . For any subset , note that\nThus, any rows of form an invertible matrix due to the invertibility of and the invertibility of corresponding rows of .\nGiven distinct , let us choose so that\nand thus the GRS matrices defined in (72 ###reference_###) satisfy\naccording to [8 ###reference_b8###, 16 ###reference_b16###].\nThe first instance and the second instance of QEXSTPIR will be encoded by and respectively.\nSpecifically, putting the answers from the servers of the two instances together, we have,\nwhere the superscripts denote the indices of the two instances.\nFor compact notation, let us define,\nso that\nwhere the GC matrix is defined in (72 ###reference_###) and\nThus the answers in (81 ###reference_###) are written as,\nNote that we permute the columns of the matrix and the rows of the vectors accordingly in the RHS of (81 ###reference_###) to form (100 ###reference_0###). Thus the matrix in (100 ###reference_0###) has columns as so that both have columns. We have defined the left columns of the matrix in (100 ###reference_0###) to be while the remaining columns are denoted (which turns out to be the and part of the that specified the -sum box for over-the-air decoding)."
70
+ },
71
+ {
72
+ "section_id": "5.2",
73
+ "parent_section_id": "5",
74
+ "section_name": "Specification of",
75
+ "text": "We will apply different for different realizations of . Thus, it is better to define a set of matrices so that the user will set . Specifically, let specify the -sum box used for decoding over-the-air when the unresponsive servers are .\nFor ease of description of and further discussion, for any subset , let us define\nThe values of , left columns of , last columns of , for any are then specified as\nNote that, the first columns of remain the same across all . However, the last columns depend on .\nLet us then prove that for any , the matrices specify a valid -sum box. First of all, it can be easily shown that the chosen in (106 ###reference_6###) is S.S.O. as is simply implied by according to (75 ###reference_###) and (82 ###reference_###).\nNext, let us prove has rank , for any . On the one hand, since is just a permutation of , its rank is as the rank of each QCSA matrix is .\nOn the other hand, note that has rank , which suffices to prove the following lemma.\nNote that any rows of a QCSA matrix form an invertible submatrix, thus for any or , the column span of forms a MDS code with minimal weight of non-zero codewords being . Thus,\nMeanwhile, for the chosen in (112 ###reference_2###), it can be easily verified that\nsince only the entries of can be non-zero. Lemma 1 ###reference_ma1### is thus proved.\nThe dependence of the last columns of \u2019s on the unresponsive servers does not mean that the encoding operations at the transmitters depend on which servers are unresponsive. It does not mean the measurement of the quantum system depends on the unresponsive servers either. As mentioned in Section 4.2 ###reference_###, only impacts the choice of the user\u2019s \u2018representation\u2019 of the measurement result. Thus, the user can decide the form of after receiving the qudits from servers and determining which servers are unresponsive. Also, the user can find the representation of the measurement result in all choices of simultaneously with just one measurement. This is important because measurements of quantum systems are not reversible operations.\nConnections between the parameters of our construction and those in [4 ###reference_b4###] are noteworthy. The specified in (106 ###reference_6###), specified in (112 ###reference_2###), and specified in (109 ###reference_9###) correspond to the in [4 ###reference_b4###, Section V-C] respectively, which are the same as those in [4 ###reference_b4###, Thm. 2,Section V-A]. Note that in this paper correspond to in [4 ###reference_b4###, Thm. 2,Section V-A] respectively, and the fact that essesntially implies that in [4 ###reference_b4###]. Furthermore, note that the condition in [4 ###reference_b4###, Definition 10] is satisfied, which by itself means our construction could also yield a quantum private PIR scheme. Note, however, that it is the additional CSA structure of our code that enables -security."
76
+ },
77
+ {
78
+ "section_id": "5.3",
79
+ "parent_section_id": "5",
80
+ "section_name": "Decoding via the -sum Box",
81
+ "text": "Now let us specify the input to the -sum box.\nFirst, let server apply to its own qudit, where are specified in (81 ###reference_###) and (100 ###reference_0###).\nThen as mentioned in Section 4.3 ###reference_###, suppose the qudits from the unresponsive servers are subject to some errors and are then received by the user. Specifically, let , and for any , error operator is also applied to server \u2019s qudit. Thus, the operator\nis applied to server \u2019s qudit. Note that the equivalence holds up to a scalar according to [21 ###reference_b21###].\nLet the input . According to the above discussion, we have\nThus, the input to the -sum box,\nThe output of the underlying MIMO MAC of the -sum box specified by can then be written as\nNote that in total contain symbols of the desired messages and can be recovered by the user. Thus, the rate is achieved.\nThe scheme requires so that the corresponding GRS matrices have more than columns as required in (75 ###reference_###). Consider the second regime , where is odd. Though it is not possible to find an integer such that , one can find such that . This means that while constructing the two instances of classical scheme, we have security with privacy for the first instance, and security with privacy for the second instance. By such choice of , we have and , so the above scheme can be used. In the first instance, symbols of desired message are delivered, and in the second instance, symbols are delivered. Thus, in total symbols are delivered. The rate is thus achieved.\nIn this section, we always use the -sum box specified by for decoding. We discuss in Appendix A.1 ###reference_### what the output will be when the servers in a random set (whose realization is not known to the user) introduce errors to the answers while the -sum box specified by is utilized for decoding."
82
+ },
83
+ {
84
+ "section_id": "5.4",
85
+ "parent_section_id": "5",
86
+ "section_name": "Example:",
87
+ "text": "Let us present an example to clarify the details of the QECSA scheme. Let us consider the quantum version of the example in Section 4.1 ###reference_###. . Let be , i.e., integers modulo , and let\nAccording to (74 ###reference_###), entries of can be calculated as\nThe two instances of the classical scheme, can be written as\nAfter permutations and substitution of the corresponding values, this can be represented as\nIt is easily verified that the specified in (189 ###reference_9###) is S.S.O. Server then applies to its qudit. The initial state of the qudits shared to the servers can be determined by .\nSuppose server is the unresponsive server, and is also applied to its qudits, then after all\nis applied to server \u2019s qudit. The input to the -sum box is thus\nLet which is specified in (189 ###reference_9###), and let\nIt can be easily verified that the specified in (189 ###reference_9###) and (195 ###reference_5###) spans the entire dimensional space. The input can be further written as\nFrom the output of the -sum box, can be recovered. Thus, two desired symbols from are recovered with four -dimensional qudits downloaded. The rate is achieved."
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "QBXSTPIR Scheme for the First Regime",
93
+ "text": "Next, let us consider the case where there is no unresponsive server, i.e., , while, there are Byzantine servers. We will show that the QEXSTPIR scheme proposed above will also work for the QBXSTPIR setting with . Intuitively, it is analogous to the scenario that a classical error correcting code that is capable of correcting erasures (known-position errors) is capable of correcting general (unknown-position) errors. Note that the -sum box specified above functions similar to a stabilizer based quantum error correcting code so its capability of correcting errors introduced by Byzantine servers up to half the amount of the correctable erasures is not surprising.\nBefore converting the QEXSTPIR scheme to a QBXSTPIR scheme, let us start from an example where we are given a classical error correcting code which we know how to decode under erasures, and we would like to decode with general errors instead based on the erasure decoding scheme."
94
+ },
95
+ {
96
+ "section_id": "6.1",
97
+ "parent_section_id": "6",
98
+ "section_name": "Example: From Erasures to General Errors \u2013 An Classical Code",
99
+ "text": "Suppose we are given a classical code . By definition, there exists the encoding function . For any message , let\nbe its codeword. This code is able to correct erasures. Suppose for any erasure set , there exists the decoding function such that when its input is the two unerased symbols, the output is the encoded message, i.e.,\nNow, let us construct the decoding scheme for general error. The receiver will \u2018guess\u2019 the position of the error. Suppose the receiver guesses that the error is in , then, with the remaining codeword symbols, it considers each one of them, one at a time, to be erased together with , and decodes correspondingly, to get decoding results. The following observations are crucial.\nSince there can be at most one error, at least one of the three decoders , , , must produce the correct .\nIf the guess was correct, i.e., is in error, then all three decoders (because they do not rely on ) produce the correct , i.e.,\nTherefore, whenever the three decoders agree, the decoding is correct. Also, for the correct guess the three decoders must agree. Thus, by trying all possible guesses for the symbol in error, the user is guaranteed to decode the message correctly.\nThus, for the code in the example, the decoding procedure with general error whose position is not known can be summarized as: Guess the position of error and decode based on each pair of symbols of the remaining symbols. If the decoding results match, the decoding is successful. Otherwise, the guess is wrong. Guess another position, repeat the previous steps till the decoding results match."
100
+ },
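To make the guess-and-check procedure concrete, here is a minimal runnable sketch. It assumes a [4,2] MDS code over GF(7) (a hypothetical instantiation chosen for illustration, since the specific code parameters are elided in the text): any two codeword symbols determine the message, so each erasure decoder just inverts a 2x2 linear system.

```python
import itertools

Q = 7                                   # assumed field GF(7)
G = [[1, 1, 1, 1],                      # assumed 2x4 Vandermonde generator:
     [1, 2, 3, 4]]                      # any 2 columns are invertible (MDS)

def encode(m):
    return [(m[0] * G[0][j] + m[1] * G[1][j]) % Q for j in range(4)]

def decode_from_two(c, i, j):
    # Erasure decoder: the two unerased symbols i, j determine the message
    # via the 2x2 system [G0i G1i; G0j G1j] [m0; m1] = [c_i; c_j].
    g0i, g1i, g0j, g1j = G[0][i], G[1][i], G[0][j], G[1][j]
    det_inv = pow((g0i * g1j - g1i * g0j) % Q, Q - 2, Q)  # Fermat inverse
    m0 = ((g1j * c[i] - g1i * c[j]) * det_inv) % Q
    m1 = ((g0i * c[j] - g0j * c[i]) * det_inv) % Q
    return (m0, m1)

def decode_with_one_error(c):
    # Guess the error position; the three decoders that avoid the guessed
    # position agree if and only if the common answer is correct (given at
    # most one corrupted symbol), exactly as argued in the text.
    for guess in range(4):
        rest = [j for j in range(4) if j != guess]
        results = {decode_from_two(c, i, j)
                   for i, j in itertools.combinations(rest, 2)}
        if len(results) == 1:
            return results.pop()

c = encode((2, 3))
c[1] = (c[1] + 5) % Q                   # corrupt one symbol, position unknown
print(decode_with_one_error(c))         # (2, 3)
```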
101
+ {
102
+ "section_id": "6.2",
103
+ "parent_section_id": "6",
104
+ "section_name": "QBXSTPIR Based on QEXSTPIR with",
105
+ "text": "Now let us construct the QBXSTPIR scheme that is resilient to Byzantine servers based on the QEXSTPIR scheme proposed in Section 5 ###reference_### that tolerates unresponsive servers with .\nLet us briefly describe the way of constructing QBXSTPIR based on QEXSTPIR before we formally present it as an algorithm. Similar to the previous classical error correction example, the user first guesses a as the Byzantine servers. Then for any , the user pretends the servers in are unresponsive and applies the decoding scheme specified in Definition 3 ###reference_inition3### in Appendix A.1 ###reference_###. If all the decoding results agree with each other, the user claims the decoding result is correct. Otherwise, the user concludes that the guess was wrong and repeats the previous steps until the decoding results are the same. It is important to recall Remark 4 ###reference_ark4### from Section 5.2 ###reference_### in this regard, i.e., while the user iteratively applies different decoding schemes which utilize different \u2018representations\u2019 of the classical outcome of the measurement result, the measurement is only performed once.\nThe algorithmic characterization is as follows:"
106
+ },
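The algorithm listing itself does not survive in this extraction, but the prose above pins down its shape. The following is a hedged Python sketch of the guess-and-check loop; `decode_pretending_unresponsive` is a hypothetical stand-in for the Definition 3 decoder (assumed to return a hashable result), and the family of sets E tried for each guess T_hat (all supersets of size 2b, matching the intuition that one Byzantine server is as harmful as two unresponsive ones) is an assumption, since those details are stripped from the text.

```python
import itertools

def qbxstpir_decode(servers, b, decode_pretending_unresponsive):
    # Guess a candidate Byzantine set t_hat of size b; for every superset E of
    # size 2*b, decode while treating the servers in E as unresponsive.  If all
    # such decoders agree, the common result is correct; otherwise re-guess.
    for t_hat in itertools.combinations(servers, b):
        others = [s for s in servers if s not in t_hat]
        results = {decode_pretending_unresponsive(frozenset(t_hat) | frozenset(e))
                   for e in itertools.combinations(others, b)}
        if len(results) == 1:
            return results.pop()
    raise RuntimeError("no consistent guess; more than b Byzantine servers?")
```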
107
+ {
108
+ "section_id": "6.3",
109
+ "parent_section_id": "6",
110
+ "section_name": "Correctness and Rate of the QBXSTPIR Scheme",
111
+ "text": "Let us now prove the correctness of the output of Algorithm 1 ###reference_### in either case, whether the algorithm ends up with a correct guess () or not (). First of all,\nThis is simply true, as even if the guess is completely wrong (), the Byzantine servers will be contained in the remaining servers. Thus, the output of must be correct according to Proposition 1 ###reference_position1###.\nThus, suppose for some ,\nwhere is the set such that , the decoding result must be correct.\nSince here the QBXSTPIR scheme is built upon the QEXSTPIR scheme with , the rate can be calculated as"
112
+ },
113
+ {
114
+ "section_id": "7",
115
+ "parent_section_id": null,
116
+ "section_name": "QEBXSTPIR Scheme for the First Regime",
117
+ "text": "According to the zero-error scheme presented in the previous section, one Byzantine server is as harmful as two unresponsive servers. Let us now build a QEBXSTPIR scheme that is resilient to unresponsive servers and Byzantine servers, based on a QXSTPIR scheme that is resilient to unresponsive servers where . We directly give the scheme in the algorithmic form which is similar to Algorithm 1 ###reference_###.\n.\nThe key of the proof of the correctness is\nsimilar to (198 ###reference_8###). The remaining details are identical to the previous section.\nSince here the QEBXSTPIR scheme is built upon the QXSTPIR scheme with , the rate can be calculated as"
118
+ },
119
+ {
120
+ "section_id": "8",
121
+ "parent_section_id": null,
122
+ "section_name": "-error QEBXSTPIR for the First Regime",
123
+ "text": "Different from the result in sections 6 ###reference_### and 7 ###reference_###, under the \u201c-error\u201d setting, i.e., if the desired message only needs to be retrieved with negligible (but not exactly zero) probability of error, then with the scheme that we present in this section, each Byzantine server will be only as harmful as one unresponsive server. Let us start from QBXSTPIR setting, whose result can be immediately generalized to QEBXSTPIR setting.\nSuppose we are using a QEXSTPIR scheme to achieve -error QBXSTPIR with , under (approximately) the same rate. The user is able to decode with . Among all the decoding schemes, there exist some whose decoding results are guaranteed to be correct since all the Byzantine servers are treated as unresponsive according to Proposition 1 ###reference_position1###. The main challenge is this: for , is it possible to detect if the decoding result contains any errors introduced by unidentified Byzantine servers ( in Definition 3 ###reference_inition3###) with high probability?\nFortunately, the answer is yes, and the idea behind the solution is to design some tests that carry a very small communication overhead, such that Byzantine servers trying to introduce error to the decoding result, without knowledge of the messages (because of the -security constraint), cannot pass the tests and make the decoding result seem valid to the user, with any non-negligible probability.\nSuch an idea is used in many existing works including the authentication capacity of adversarial channels [26 ###reference_b26###], distributed storage system with adversarial attacks [27 ###reference_b27###], and communication-efficient secret sharing with malicious adversary [28 ###reference_b28###].\nThe pairwise hash function in [27 ###reference_b27###] can be applied here. Intuitively, for each message, we append to it some hash function of this message whose size is negligible compared with the size of each message. Byzantine servers who know nothing about the messages cannot manipulate a message and its hash function consistently with non-negligible probability. Thus, any decoding result that involves error introduced by Byzantine servers can be detected with high probability as the decoded hash will be different from the hash of manipulated message with high probability.\nBefore presenting the -error QBXSTPIR scheme, let us restate the pairwise hashing scheme [27 ###reference_b27###] and then show its capability of detecting the error in the decoding result.\nPartition Pairwise Hash [27 ###reference_b27###]:\nLet be a length column vector where divides that can be evenly partitioned into blocks as follows\nThe partition pairwise hash function is defined as\nwhose output is the pairwise inner-product of its blocks."
124
+ },
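To make the hash concrete, here is a minimal runnable sketch of the partition pairwise hash over a prime field, together with the detection check of the next subsection: the receiver recomputes the hash of the message part and compares it with the appended hash. The field size q = 11 and block count c = 3 are hypothetical choices for illustration.

```python
import itertools
import numpy as np

Q, C = 11, 3                                      # assumed field GF(11), 3 blocks

def partition_pairwise_hash(x):
    # Split x into C equal blocks and return all pairwise inner products
    # of the blocks (arithmetic over GF(Q)).
    blocks = np.split(np.asarray(x) % Q, C)
    return np.array([a @ b % Q for a, b in itertools.combinations(blocks, 2)])

rng = np.random.default_rng(0)
msg = rng.integers(0, Q, size=6)                  # message length divisible by C
sent = np.concatenate([msg, partition_pairwise_hash(msg)])

err = np.zeros_like(sent)
err[2] = 4                                        # error chosen independently of msg
recv = (sent + err) % Q

detected = not np.array_equal(partition_pairwise_hash(recv[:6]), recv[6:])
print(detected)                                   # True with high probability
```

Because the error is independent of the uniformly random message, the recomputed hash of the corrupted message part matches the (unaltered) appended hash only with small probability, which is exactly the bound stated in Lemma 2.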
125
+ {
126
+ "section_id": "8.1",
127
+ "parent_section_id": "8",
128
+ "section_name": "Error Detection Capability of Partition Pairwise Hash",
129
+ "text": "Suppose the message specified in (203 ###reference_3###) is uniform over and is encoded into\nand is sent by a transmitter.\nSuppose there is an error vector\nwhich is a function of a random variable Anci, i.e.,\nOr equivalently,\nwhere the exact form of is not important.\nSuppose is added to and the receiver gets\nRegarding the error detection capability, we have the following lemma.\nIf the random variable Anci is independent of , then for any realization of such that , the probability of the receiver who gets as specified in (229 ###reference_9###), not being able to detect that the realization of is not zero, can be bounded as,\nSee Appendix A.2 ###reference_###."
130
+ },
131
+ {
132
+ "section_id": "8.2",
133
+ "parent_section_id": "8",
134
+ "section_name": "Error Detection over Extension Fields",
135
+ "text": "Since the extension fields can be viewed as a -dimensional vector space over , the pairwise hash function in Definition 1 ###reference_inition1### can also be used for detecting if there exists error when transmitting an element from . Specifically, let us have the following definition\npartition error detection encoding function : The function takes a scalar as the input, converts it into a vector , appends after to form , and converts to a scalar in the extension field as the output.\nWe have the following lemma.\nLet be uniform over . Let . Let be a function of a random variable Anci, i.e., . If the random variable Anci is independent of , then for any realization of such that , the probability of the receiver who gets not being able to detect the realization of is not , can be bounded as\nThis lemma follows immediately from Lemma 2 ###reference_ma2### and the fact that the extension field can be viewed as a vector space over the base field."
136
+ },
137
+ {
138
+ "section_id": "8.3",
139
+ "parent_section_id": "8",
140
+ "section_name": "-error QBXSTPIR",
141
+ "text": "Now we are able to describe our -error QBXSTPIR scheme. Let there be messages. For any , let the message have two instances each of which consists of symbols. Specifically, the two instances of the message are,\nThe -error QBXSTPIR scheme will first encode each message symbol into\nLet the encoded message be\nThen a QEXSTPIR scheme with that works in will be applied to the encoded messages .\nThe decoding is based on trial-and-error. For any , user decodes with . From Definition 3 ###reference_inition3###, the decoding result can be easily verified as\nas we replaced the original messages with in the QEXSTPIR scheme.\nLet us then discuss conditioned on the event that for arbitrary . According to Proposition 2 ###reference_position2###, for all , and are then determined. Specifically, let the realizations of and be and respectively.\nAgain, according to Proposition 1 ###reference_position1###, the decoding result from involves no error. Let denote the set of all decoding schemes such that for any , the decoding result is not error free (i.e., ). Let us say, in the decoding result, symbol of the desired message\u2019s instance should be for some non-zero . Thus, according to Lemma 3 ###reference_ma3###, the probability of not being able to detect , when decoding with , can be bounded as\nby identifying as Anci and setting . The independence of and comes from the fact that is independent of messages and any or fewer servers cannot learn anything about the messages by assumption according to the problem statement.\nNote that the operations Byzantine servers applied to their systems are completely determined by , thus, the probability of decoding error, given\nwhere the last step comes from the fact that as involves no error (i.e., ).\nThus, as the size of the alphabet of message goes to infinity, goes to infinity since are constants, thus\nMeanwhile, the rate can be calculated as\nwhere the first fraction term comes from the fact that in every decoded symbol, fraction are \u2018real\u2019 message symbols while the remaining are hashes.\nThus\nis achievable."
142
+ },
143
+ {
144
+ "section_id": "9",
145
+ "parent_section_id": null,
146
+ "section_name": "Conclusion",
147
+ "text": "The QEBXSTPIR problem is studied where the main challenge is to find a coding structure that is compatible with -secure storage and -privacy (e.g., Cauchy-Vandermonde structures), erasure-resilience (random/generic code structures) and self-orthogonality requirements of quantum superdense coding protocols (e.g., CSS codes, -sum box). The new scheme, QECSA, builds on a recently developed QCSA scheme and while using maximal stabilizers leaves enough space for the error basis to allow arbitrary error correction, which also guarantees erasure correction. Since the construction is based on the -sum box abstraction the derivation in this work is accessible through classical arguments. The QECSA scheme that is erasure-resilient is then made Byzantine-server-resilient (QEBCSA) by introducing combinatorial techniques. It is shown that when decoding error is allowed, the communication efficiency can be improved by harnessing the -security property of the PIR scheme and appending hash functions of messages to the original messages. Promising future directions include applications to quantum distributed coded computation."
148
+ }
149
+ ],
150
+ "appendix": [
151
+ {
152
+ "section_id": "Appendix 1",
153
+ "parent_section_id": null,
154
+ "section_name": "Appendix A Appendix",
155
+ "text": "Let,\ndenote the indices of servers that introduce errors to the answers, and for any , let the error operator applied to the qudit by server be\nSimilar to (124 ###reference_4###), the input to the -sum box can be specified as\nFor arbitrary\nlet\nNow, the in (247 ###reference_7###) can be represented as\nwhere in (resp. ), for any , (resp. ), .\nSimilar to (131 ###reference_1###), can be further written as\nThus, decoding with the -sum box specified by , the output is\nLet us divide into parts whose lengths are the same as , respectively. Then we have\nRecall that the user is only interested in the desired message, i.e., . Let us introduce the following definition,\nDecoding result : For any , let be the first symbols of the output of the -sum box specified by , i.e.,\nIt can be easily verified from (265 ###reference_5###) that if , i.e., , then will be an all-zero vector. Thus we have the following proposition:\nFor any , if , , i.e., the desired message can be correctly recovered from .\nAlso, according to the form of in (265 ###reference_5###), the following proposition can be easily verified:\nGiven any , is just a function of .\nLet\nbe the realization of conditioned on such that .\nTo have an error that is not detectable, the hash of first entries of the received vector must be equal to the last entries of (as this is true for any received vector when there is no error).\nLet us then calculate the probability of non-zero error not being detectable in two cases. In the first case, let anci be any realization such that while there exists . Without loss of generality, let us assume . Under this case, . Since by assumption , the probability of non-zero error not being detectable is zero in this case.\nIn the second case, let anci be any realization such that at least one of is not the all-zero vector . Without loss of generality, let us assume . Then we have\nwhere the last step arises from the fact that are i.i.d. uniform over and is independent of Anci. The proof of being i.i.d. uniform can be established as follows: Since is uniform over , are i.i.d. uniform over . When combined with the condition that , become i.i.d. uniform over , resulting in the i.i.d. uniformity of . The independence follows from the independence of and Anci."
156
+ }
157
+ ],
158
+ "tables": {},
159
+ "image_paths": {},
160
+ "validation": true,
161
+ "references": [],
162
+ "url": "http://arxiv.org/html/2311.07829v2"
163
+ }
20240225/2311.09114v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2311.09522v2.json ADDED
@@ -0,0 +1,201 @@
1
+ {
2
+ "title": "REVersed Indexes \u2248 VALues in Wavelet Trees REVIVAL in Wavelet Trees",
3
+ "abstract": "Data Compression (or Source Encoding) constructs indexes, by encoding information using fewer bits (than the original representation for values), to reduce the size of data storage. Therefore, if any computation needs to be performed, the compressed data needs to be uncompressed first. Since a limit of lossless data compression is expected to exist (lower-bounded by [5]), it is expected to expand the functionalities of compressed data under lossless compression. To this end, Succinct Data Structures are proposed ([2]) and explored, which enable queries directly on compression.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Data Compression constructs indexes to reduce the elementary size of original values, and therefore the storage cost is saved. Lower-bounded by Shannon ([5 ###reference_bx5###]), a fundamental limit to lossless data compression exists, which is known as the entropy rate. For decades, an extensive amount of efforts are paid on the potential breakthrough of this limit.\nThis work classifies prior work into two types. One is to improve the efficiency of lossless compression; and the other is to expand the functionalities of lossless compression. The latter is the focus in this work. Succinct Data Structures, originally formalized by Jacobson ([2 ###reference_bx2###]), are proposed to expand the functionalities of lossless compression. These data structures use an amount of space that is \"close\" to the entropy rate, while still retaining the functionality for efficient query operations. Therefore, queries on data can be directly performed on compression.\nHowever, to perform any computation, it is still needed for decompression first in Succinct Data Structures. This work centers the focus on a particular Succinct Data Structure - Wavelet Tree ([1 ###reference_bx1###]). Wavelet Tree is used to store strings in compressed space. The definition of Wavelet Tree is achieved by recursively partitioning the alphabet (from the string) into pairs of subsets; the leaves correspond to individual symbols of the alphabet, and at each node a bitvector stores whether a symbol of the string belongs to one subset or the other.\nThis work first breaks the assumption of Wavelet Tree (or all lossless compression methods), by directly taking Leibniz Binary System ([3 ###reference_bx3###]) as the reference point (rather than character encoding such as ASCII codes). We find that: the encoding patterns of Leibniz Binary System ([3 ###reference_bx3###]) can be highly correlated with the formalization of Wavelet Tree ([1 ###reference_bx1###]), with only the bit reversal. Therefore we derive the discovery of Reversed Indexes Values, which is described in Section 1.1.1 ###reference_.SSS1###.\nThis work then rolls back to the formalization of Wavelet Tree (or all lossless compression methods), by refining the reference point back to the character encoding. We find that: the above approach can be generalized to character encoding, by (1) simply accounting for common subsequence(s) in bits; and (2) utilizing these subsequences as patterns, to recover indexes to values. We also discuss extensions to other scenarios (e.g., other data types). It is described in Section 1.1.2 ###reference_.SSS2###.\nThis work finally conjectures potential implications of the above ideas, by analyzing the benefits (and hypothesizing potential modifications to RAM model) for \u201cIndexes Values\" principle. This work considers two viewpoints by leveraging the classification from ([4 ###reference_bx4###]). We first view Wavelet Tree as a compression method, and the benefits can be directly derived. Then we view Wavelet Tree as a data structure, and the design space is discussed and we assume it deserves further investigations. 
It is described in Section 1.1.3 ###reference_.SSS3###.\nIn summary, the discovery of Reversed Indexes Values (REVIVAL) showcases the feasibility to bridge near-optimal lossless compression with the Leibniz Binary System, which makes the following three major contributions.\nThe bridge between near-optimal lossless compression and Leibniz Binary System enables Computation Directly on Compression.\nThe discovery expands the usability of Succinct Data Structures, and this delivers polymorphic functionalities within a single piece of the information.\nThe bridge motivates a revamp of RAM model to support the above idea, and demonstrates potential merits of fine-grained RAM operations.\nWhen is a revival needed? When carelessness and unconcern keep the people asleep.\nWe first give an overview of the results in this work. Then we summarize the techniques developed in this work."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Our Results",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "1.1.1",
19
+ "parent_section_id": "1.1",
20
+ "section_name": "1.1.1 Reversed Indexes Values in Wavelet Trees",
21
+ "text": "The discovery is initially made as Reversed Indexes Values, by extending the usage of Wavelet Tree from strings to integers. By doing so, an accidental connection is observed that: for integers within [0,2), there exists a Wavelet Tree that its compressed indexes can be equivalent in the Leibiniz Binary system ([3 ###reference_bx3###]), with only the bit reversal."
22
+ },
23
+ {
24
+ "section_id": "1.1.2",
25
+ "parent_section_id": "1.1",
26
+ "section_name": "1.1.2 Reversed Indexes Values in Wavelet Trees",
27
+ "text": "The discovery is then expanded as Reversed Indexes Values for other types of encoding, by applying (a) common subsequence(s) in bits. Such a supplementary method breeds two opportunities. First, it can allow a more flexible mapping for the range of the to-be-compressed data. This is done by (1) extracting (a) common subsequence(s) in bits as (a) pattern(s); and (2) applying Reversed Indexes Values in the rest of bits. Second, it can partially break the requirement of the consecutive range for the to-be-compressed data, since the shared common subsequence(s) can have positional variations."
28
+ },
29
+ {
30
+ "section_id": "1.1.3",
31
+ "parent_section_id": "1.1",
32
+ "section_name": "1.1.3 Implications from \u201cIndexes Values\" Principle",
33
+ "text": "Based on the above ideas, this work discusses potential benefits, and suggests a potential revamp of RAM model. Our discussion takes WT as a motivating example, and classifies our analysis into two cases. We first view WT as a compression method, and the benefits can be derived directly. Then we view WT as a data structure, and suggest potential modifications of RAM model: so that we can support both queries and computation directly on compression."
34
+ },
35
+ {
36
+ "section_id": "1.2",
37
+ "parent_section_id": "1",
38
+ "section_name": "Technique Outline",
39
+ "text": "Though the discovery of REVIVAL is empirical, there are the following techniques can be derived for further investigations of other instances in the \u201cIndexes Values\" principle.\nInput-bounded Range of Available Values: the first step for the usage of REVIVAL (or other techniques) requires the bounded range of all available values, as the preliminary knowledge. This is used to determine how REVIVAL shall play a role in this bounded range, and furthermore how the longest common subsequence in bits shall be extracted (if needed).\nConditional Partitions of the Alphabet: the second step for the usage of REVIVAL (or other techniques) requires a set of conditions to be determined, which are used for encoding all values. The key to decide these conditions shall be closely related to how these values are encoded, and they are expected to be closely correlated.\nExecutions on \u201cIndexes Values\" Principle: the final step for the usage of REVIVAL (or other techniques) requires the abstraction of the above information, for the fine-grained operations over the compressed sequence of bits. This can vary by leveraging different viewpoints, and how different functionalities shall be supproted."
40
+ },
41
+ {
42
+ "section_id": "2",
43
+ "parent_section_id": null,
44
+ "section_name": "Background",
45
+ "text": "Wavelet Tree (WT) is a Succinct Data Structure, that can support and operations efficiently, to store strings in compressed space ([1 ###reference_bx1###])."
46
+ },
47
+ {
48
+ "section_id": "2.1",
49
+ "parent_section_id": "2",
50
+ "section_name": "Wavelet Tree Definition",
51
+ "text": "A WT is a data structure that recursively partitions a stream of characters into two parts, until homogeneous data are left. The encoding scheme is dependent to the partition of alphabet (and its subsets). Figure 1 ###reference_### gives out an example of WT from the string \u201cabcdabcd\".\n###figure_1###"
52
+ },
53
+ {
54
+ "section_id": "2.2",
55
+ "parent_section_id": "2",
56
+ "section_name": "Bitmap from Wavelet Tree",
57
+ "text": "A bitmap from WT is to deliver the encoding results level by level, and every index can be viewed vertically (from top to down). Figure 2 ###reference_### gives out the demonstrated bitmap from Figure 1 ###reference_###.\n###figure_2###"
58
+ },
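To make the construction concrete, here is a minimal runnable sketch (my own illustration, not the paper's code) that builds the per-level bitvectors of a wavelet tree under a sorted-halves partition rule: at each node, the lower half of the (sorted) alphabet is encoded as 0 and the upper half as 1, and the bitvectors of all nodes at the same depth are concatenated into one level of the bitmap.

```python
def wavelet_levels(s):
    # Per-level bitvectors of a wavelet tree of s; sorted-halves partition:
    # at each node the lower half of the alphabet maps to 0, the upper to 1.
    levels, nodes = [], [(list(s), sorted(set(s)))]
    while any(len(alpha) > 1 for _, alpha in nodes):
        bits, nxt = [], []
        for seq, alpha in nodes:
            if len(alpha) <= 1:          # leaf: contributes no further bits
                nxt.append((seq, alpha))
                continue
            lo = set(alpha[: len(alpha) // 2])
            bits += ["0" if ch in lo else "1" for ch in seq]
            nxt.append(([c for c in seq if c in lo], alpha[: len(alpha) // 2]))
            nxt.append(([c for c in seq if c not in lo], alpha[len(alpha) // 2 :]))
        levels.append("".join(bits))
        nodes = nxt
    return levels

print(wavelet_levels("abcdabcd"))        # ['00110011', '01010101']
```

Note that at deeper levels the concatenation permutes positions, so reading an index "vertically" requires tracking where each symbol lands within its node.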
59
+ {
60
+ "section_id": "3",
61
+ "parent_section_id": null,
62
+ "section_name": "Reversed Indexes Values",
63
+ "text": "We first introduce Reversed Indexes Values. We make the discovery by changing the reference point from character encoding into Leibniz Binary System ([3 ###reference_bx3###])."
64
+ },
65
+ {
66
+ "section_id": "3.1",
67
+ "parent_section_id": "3",
68
+ "section_name": "Leibniz Binary System Made WT",
69
+ "text": "The \"Reversed Indexes Values\" is described hereby: for integers within , there exists a Wavelet Tree that its compressed indexes can be equivalent to the Leibniz Binary system ([3 ###reference_bx3###]), with only the bit reversal. Figure 3 ###reference_### gives an example to demonstrate the idea.\n###figure_3###"
70
+ },
71
+ {
72
+ "section_id": "3.2",
73
+ "parent_section_id": "3",
74
+ "section_name": "Generalization to integers within",
75
+ "text": "The above discovery can be easily generalized. This is because the encoding scheme from Leibniz Binary System ([3 ###reference_bx3###]) is a natural fit with the definition of WT. To obtain the above results in a generalized form (i.e. for integers within ), it is straightforward to have the following method derived:\nSort the alphabet, and then perform the partition for WT by putting smaller ones into a subset, and the rest into the other subset.\nThere are a few notes regarding the above generalization of Reversed Indexes Values.\nFirst, note that this may not be the only method to derive parts of the same results, but the above one is considered as the most general one. This is because the above rule is derived based on binary carry in Leibniz Binary System. More specialized forms are expected to be delivered also.\nSecond, the lower bound of the input data range has to remain as zero so that the proposed method can function. This is expected since WT encoding requires one of all elements from the alphabet to encoded as \u201c0\"s, and therefore enforces the inclusion of zero.\nThird, similar results may be derived if we change the overall range of the input data. However, the regulations to partition the alphabet can not be succinct enough, and the derived results can only be an approximation of Reversed Indexes Values. More instances, from slight changes of the input range, can be derived. We leave this part to the next section."
76
+ },
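The correspondence is easy to check in code. Below is a minimal sketch (my own, not the paper's) that computes the per-element wavelet-tree code of an integer under the sorted-halves rule above; under this convention the root-to-leaf bit string is the standard binary expansion, so whether a literal "bit reversal" is needed depends only on the order in which the levels of the bitmap are read (root-to-leaf vs. leaf-to-root).

```python
def wt_code(value, k):
    # Per-element wavelet-tree code of `value` on the alphabet [0, 2**k),
    # sorted-halves rule: the smaller half of the alphabet maps to bit 0.
    lo, hi, bits = 0, 2 ** k, []
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if value < mid:
            bits.append("0")
            hi = mid
        else:
            bits.append("1")
            lo = mid
    return "".join(bits)

for v in range(4):
    code = wt_code(v, 2)
    print(v, code, code[::-1], format(v, "02b"))
# 0 00 00 00 / 1 01 10 01 / 2 10 01 10 / 3 11 11 11
```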
77
+ {
78
+ "section_id": "4",
79
+ "parent_section_id": null,
80
+ "section_name": "Reversed Indexes Values",
81
+ "text": "With the strict definition of Reversed Indexes Values, we present a more general form called Reversed Indexes Values."
82
+ },
83
+ {
84
+ "section_id": "4.1",
85
+ "parent_section_id": "4",
86
+ "section_name": "Common Subsequence in Bits Made Reversed Indexes Values",
87
+ "text": "The \"Reversed Indexes Values\" is described hereby: when leveraging parts of the bit sequence via Reversed Indexes Values, one or (several patterns) patterns can be used to connect bitmap from WT with the exact values in different encoding schemes. Figure 4 ###reference_### gives an example to demonstrate the idea in ASCII encoding, and we elaborate more on this example.\n###figure_4### For in ASCII encoding, all values share the common bit subsequence \u201c10001\" in binary. Therefore, after following the principle of Reversed Indexes Values, we can obtain the bitmap and the reversed bits can be equalized with parts of the exact values. To fully recover the values, the only job is to supplement the common subsequence (i.e. \u201c10001\")."
88
+ },
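The following runnable sketch (illustrative, not from the paper) extracts the shared bit prefix of the 7-bit ASCII codes of 'D' through 'G' and shows that prepending it to the remaining two bits recovers the characters exactly:

```python
def shared_prefix_and_tails(chars, width=7):
    # 7-bit ASCII codes of the characters, split into the longest shared
    # bit prefix (the "pattern") and the per-character leftover bits.
    codes = [format(ord(c), f"0{width}b") for c in chars]
    prefix = ""
    for column in zip(*codes):
        if len(set(column)) > 1:
            break
        prefix += column[0]
    return prefix, [code[len(prefix):] for code in codes]

prefix, tails = shared_prefix_and_tails("DEFG")
print(prefix, tails)                              # 10001 ['00', '01', '10', '11']
print([chr(int(prefix + t, 2)) for t in tails])   # ['D', 'E', 'F', 'G']
```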
89
+ {
90
+ "section_id": "4.2",
91
+ "parent_section_id": "4",
92
+ "section_name": "Extensions in Reversed Indexes Values",
93
+ "text": "It is expected that Reversed Indexes Values can be extended in a variety of aspects.\nFirst, note that the usage of common subsequences can be generalized, since there can be several common subsequences within the input data range. Hence, the number of these subsequences determine the number of bit patterns, and these patterns are used to isolate so that the rest of bits can be used via Reversed Indexes Values.\nSecond, though the demonstrated example only covers character encoding in ASCII encoding, it is expected to be generalized to other types of value encoding. Particularly, for floating-point numbers, we assume dyadic scaling is more suitable one for the discovery.\nThird, the discussion so far does not cover the impacts of the sign system, but the inclusion of a sign system is expected not to impact the correctness of our method for Reversed Indexes Values, as long as the value of zero is included."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Implications from \u201cIndexes Values\" Principle",
99
+ "text": "We discuss benefits from (and potential modifications to) RAM, so that we can support the usage of Reversed Indexes Values (and their potential variants). The key to support the polymorphic functionalities is to enable the retrieval of these values efficiently. We leverage Reversed Indexes Values as an example: based on the usage of WT, we analyze (and provide potential modifications) using RAM for Reversed Indexes Values. We first view WT as a compression method; and then we view WT as a data structure."
100
+ },
101
+ {
102
+ "section_id": "5.1",
103
+ "parent_section_id": "5",
104
+ "section_name": "Viewpoint as a Compression Method",
105
+ "text": "When taking WT as a compression method, the benefits are straightforward and two-fold. First, every index is directly packed together, which reduces the overhead in terms of data transfer. Second, the processor only requires bit manipulation to conclude the de-compression (in Reversed Indexes Values, the bit reversal or constant values for recovering the values based on bit patterns): it saves costs in common de-compression (e.g., lookup costs)."
106
+ },
107
+ {
108
+ "section_id": "5.2",
109
+ "parent_section_id": "5",
110
+ "section_name": "Viewpoint as a Data Structure",
111
+ "text": "Taking WT as a data structure requires decent modifications to RAM for the usage of Reversed Indexes Values, and we conjecture that there are three parts. First, it requires dual-address modes for indexes and values respectively. Second, it requires level-oriented gather supports by using levels of WT as breakpoints, so that values can be retrieved. Third, though bit manipulation (and constant values for recovering the values based on bit patterns) can be performed within the processor, there can be benefits by integrating these functionalities in some cases (e.g., Processing-In-Memory Paradigm). We leave this to the future investigations."
112
+ },
113
+ {
114
+ "section_id": "6",
115
+ "parent_section_id": null,
116
+ "section_name": "Conclusions",
117
+ "text": "This work describes a discovery to bridge near-optimal lossless compression with Leibniz Binary System. It (1) makes Computation Directly on Compression feasible; and (2) enables polymorphic functionalities (i.e., efficient queries and computation) within a single piece of the information. This work also provides an initial analysis of the benefits from the method (and potentially other extensions), and suggests potential modifications. We conjecture that: with Reversed Indexes Values, everything old can be new now."
118
+ }
119
+ ],
120
+ "appendix": [
121
+ {
122
+ "section_id": "Appendix x1",
123
+ "parent_section_id": null,
124
+ "section_name": "Review Outcome from ACM STOC 2024",
125
+ "text": "This preprint was submitted to ACM STOC 2024, and rejected for the following reason:\nThis paper discusses Wavelet Trees and a manner for splitting characters in them based on binary notation. However, no new theorems are presented. Thus, the PC concluded the paper is not appropriate for the CS theory conference STOC."
126
+ },
127
+ {
128
+ "section_id": "Appendix x2",
129
+ "parent_section_id": null,
130
+ "section_name": "Personal Comments",
131
+ "text": "Hilarious. :-)\n###figure_5###"
132
+ }
133
+ ],
134
+ "tables": {},
135
+ "image_paths": {
136
+ "1": {
137
+ "figure_path": "2311.09522v2_figure_1.png",
138
+ "caption": "Figure 1: An example Wavelet Tree of the string \u201cabcdabcd\".",
139
+ "url": "http://arxiv.org/html/2311.09522v2/extracted/5430512/wavelet-tree.jpg"
140
+ },
141
+ "2": {
142
+ "figure_path": "2311.09522v2_figure_2.png",
143
+ "caption": "Figure 2: The corresponding bitmap from Figure 1.",
144
+ "url": "http://arxiv.org/html/2311.09522v2/extracted/5430512/wt-bitmap.jpg"
145
+ },
146
+ "3": {
147
+ "figure_path": "2311.09522v2_figure_3.png",
148
+ "caption": "Figure 3: An example of Reversed Indexes === Values using integers within [0,22)0superscript22[0,2^{2})[ 0 , 2 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ).",
149
+ "url": "http://arxiv.org/html/2311.09522v2/extracted/5430512/revival-full.jpg"
150
+ },
151
+ "4": {
152
+ "figure_path": "2311.09522v2_figure_4.png",
153
+ "caption": "Figure 4: An example of Reversed Indexes \u2248\\approx\u2248 Values using characters within [D,G]\ud835\udc37\ud835\udc3a[D,G][ italic_D , italic_G ] in ASCII encoding. The highlighted \u201c10001\" is the shared common subsequence of all characters in bits.",
154
+ "url": "http://arxiv.org/html/2311.09522v2/extracted/5430512/revival-test.jpg"
155
+ }
156
+ },
157
+ "validation": true,
158
+ "references": [
159
+ {
160
+ "1": {
161
+ "title": "\u201cHigh-order Entropy-compressed Text Indexes\u201d",
162
+ "author": "Roberto Grossi, Ankur Gupta and Jeffrey Scott Vitter",
163
+ "venue": "In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 12-14, 2003, Baltimore, Maryland, USA",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "2": {
169
+ "title": "\u201cSuccinct Static Data Structures\u201d AAI8918056",
170
+ "author": "Guy Joseph Jacobson",
171
+ "venue": "USA: Carnegie Mellon University, 1988",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "3": {
177
+ "title": "\u201cExplication de l\u2019arithmetique binaire, qui se sert des seuls caracteres O et I avec des remarques sur son utilite et sur ce qu\u2019elle donne le sens des anciennes figures chinoises de Fohy\u201d",
178
+ "author": "Gottfried Wilhelm Leibniz",
179
+ "venue": "In Memoires de l\u2019Acad\u00e9mie Royale des Science 3, 1703, pp. 85\u201389",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "4": {
185
+ "title": "\u201cWavelet trees for all\u201d",
186
+ "author": "Gonzalo Navarro",
187
+ "venue": "In J. Discrete Algorithms 25, 2014, pp. 2\u201320",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "5": {
193
+ "title": "\u201cA Mathematical Theory of Communication\u201d",
194
+ "author": "Claude Elwood Shannon",
195
+ "venue": "In The Bell System Technical Journal 27.3",
196
+ "url": null
197
+ }
198
+ }
199
+ ],
200
+ "url": "http://arxiv.org/html/2311.09522v2"
201
+ }
20240225/2311.14986v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2311.15443v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2312.07424v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240225/2312.10584v2.json ADDED
@@ -0,0 +1,423 @@
1
+ {
2
+ "title": "Policy Optimization in RLHF: The Impact of Out-of-preference Data",
3
+ "abstract": "Aligning agents with human preferences is important. This paper examines two classes of alignment methods. The first class operates without explicitly learning a reward model from preference data, with Direct Preference Optimization (Rafailov et al., 2023) emerging as a prominent method within this class. The second class involves methods that explicitly learn a reward function and utilize it to optimize policy on prompts-only data, with Proximal Policy Optimization (Schulman et al., 2017) standing out as a popular choice. Within this class, we investigate a notable approach that leverages a large amount of prompts, extending beyond those present in the preference dataset. Experiments demonstrate that this approach outperforms other methods on synthetic contextual bandits, which serve as mathematical models for alignment. Additionally, we provide an analysis of source errors in these optimization methods and draw connections with other related research areas, such as imitation learning and reinforcement learning. In essence, our research highlights the importance of integrating out-of-preference data, including the policy\u2019s responses to prompts from the preference dataset and new prompts, into the policy optimization.111A short version of this paper is presented at the tiny paper track of the 12th International Conference on Learning Representations (ICLR), 2024. Code is available at https://github.com/liziniu/policy_optimization.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Developing trustworthy agents requires alignment with human preferences (Russell and Norvig, 2010 ###reference_b26###). A standard practice involves providing a human preference dataset for the agent to learn from. According to utility theory (Fishburn et al., 1979 ###reference_b8###), preference is connected with a certain reward function. Currently, there are two kinds of alignment methods:\nThe first class of methods, referred to as the reward-model-free approach in this paper, does not explicitly learn a reward model but directly optimizes the language model from preference annotations. This class includes popular algorithms such as Direct Preference Optimization (DPO) (Rafailov et al., 2023 ###reference_b25###) and Identity Policy Optimization (IPO) (Azar et al., 2023 ###reference_b2###).\nThe second class of methods, referred to as the reward-model-based approaches in this paper, is exemplified by the so-called Reinforcement Learning from Human Feedback (RLHF) framework (Christiano et al., 2017 ###reference_b4###; Stiennon et al., 2020 ###reference_b29###; OpenAI, 2023 ###reference_b21###) framework, with Proximal Policy Optimization (PPO) (Schulman et al., 2017 ###reference_b28###) standing out as a popular choice. In particular, this class of methods trains a reward model from the preference data and subsequently optimizes the language model to improve its responses to prompts.\nWhile both approaches are able to improve performance by leveraging preference data, the superiority of one method over the other remains an open question, crucial for driving future advancements. We briefly explain the challenges in determining this superiority. First, we expect that the reward model, learned from preference data, possesses a certain generalization capability (i.e., through fine-tuning a powerful pre-trained neural network). To improve the language model, we require prompt-response pairs (i.e., input-output pairs) and their associated reward values (i.e., the supervision signals). This raises an important question: how should we select these prompt-response pairs? Notably, reward-model-free approaches utilize previously collected prompt-response pairs from the preference dataset, whereas reward-model-based approaches generate new responses to prompts using the language model, discarding the responses in the preference dataset. Which approach is more effective? And how do these data sources impact the generalization performance?\nWe explore the above questions by analyzing the errors in the language model\u2019s optimization under the framework of contextual bandits (Lattimore and Szepesv\u00e1ri, 2020 ###reference_b16###), which serve as mathematical models for alignment. This analysis helps us predict the algorithm\u2019s behaviors without the need for extensive experiments. Within this context, the language model is framed as a \u201cpolicy\u201d in broader terms. We demonstrate that the policy optimization in these methods corresponds to various forms of Monte Carlo approximations for maximizing the expected reward. Notably, the inclusion of out-of-preference data, which includes responses to prompts from the preference dataset and responses to new prompts, enhances the accuracy of the Monte Carlo approximation.\nTo validate the above ideas, we conduct experiments on contextual bandits with linear function approximation and neural function approximation, respectively. 
One main experiment, where we manually ensure that the policy shares the same good feature representation with the reward model (thus, they have the same representation power), shows that policy optimization with additional out-of-preference data still improves generalization performance. Other experiments also support this claim. Finally, we provide a discussion about this phenomenon with reference to other fields, such as imitation learning (Osa et al., 2018 ###reference_b22###) and reinforcement learning (Sutton and Barto, 2018 ###reference_b30###)."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Problem Formulation",
15
+ "text": "We consider the so-called contextual bandits (Langford and Zhang, 2007 ###reference_b15###; Lu et al., 2010 ###reference_b18###) formulation, which serves mathematical models for alignment. Let and be the state and action, respectively. We aim to obtain a decision policy that acts optimally in terms of reward maximization:\nwhere the symbol denotes the state distribution, and is the ground truth reward function. We omit the subscript in when the context is clear. For language models, the term \u201cstates\u201d refers to prompts, while \u201cactions\u201d denote responses. The language model functions as the decision-making policy. It is worth noting that terminologies may be used interchangeably.\nIn the context of alignment, the difficulty is that the reward function is unknown but only preferences over two actions are observed. Typically, the Bradley-Terry assumption (Bradley and Terry, 1952 ###reference_b3###) is used:\nwhere the symbol means that is more preferred compared with . Given a preference dataset , where is assumed without loss of generality, the reward learning objective, derived via maximum likelihood estimation, is\nwhere is the sigmoid function. This objective encourages the reward function to give a high score for the positively preferred data and a low score for the negative preferred data .\nLet be a reference policy model and be a hyper-parameter.\nIdeally, we may want to optimize the policy with this recovered reward function in population:\nHere, the Kullback\u2013Leibler (KL) penalty aims to mitigate the reward hacking and over-optimization issue (Gao et al., 2023 ###reference_b9###). We remark that, in practice, we do not know the distribution and typically employ Monte Carlo approximations. That is, we use finite samples to approximate the population distribution and its expectation. There are two kinds of practical approaches, which we elaborate on below. Please also see Figure 1 ###reference_###.\n###figure_1### ###figure_2### Reward-model-free Optimization Approaches: One direct idea is to use state-action pairs from the preference dataset for policy optimization:\nThis approach approximate the state and action distributions by finite samples from the preference dataset. By using tools from KL-regularized optimization (see e.g., (Vieillard et al., 2020 ###reference_b32###)), Rafailov et al. (2023 ###reference_b25###) showed that procedures in Equation 2 ###reference_### and Equation 3 ###reference_### could be integrated into a single objective:\nThe resultant algorithm is named Direct Preference Optimization (DPO). Since this approach does not require explicitly training a reward model, it is considered as reward-model-free optimization.\nReward-model-based Optimization Approaches:\nThe vanilla Reward-Model-Based Policy Optimization (RMB-PO) approach also leverages states from the preference dataset but samples actions from the policy model:\nWe consider the exact action expectation in the above formulation, and this expectation can be approximated by sampling multiple actions. This approximation error can be mitigated by computational power, and we do not consider this error in this paper.\nIn (Ouyang et al., 2022 ###reference_b23###; Touvron et al., 2023 ###reference_b31###), a variant of RMB-PO, refereed to as RMB-PO+ in this paper, further leverages a new, preference-free dataset :\nNote that the dataset is cheap to obtain and usually (Ouyang et al., 2022 ###reference_b23###). 
One particular example of such data in language model\u2019s application is the lmsys-chat-1m dataset (Zheng et al., 2023 ###reference_b37###), which has 1 million prompts from real users without preference annotations.\nWe note that there is no single learning objective for reward-model-based approaches. This is because the technique in (Rafailov et al., 2023 ###reference_b25###) requires that the reward and policy learning objectives have the same training distribution, a condition that is not met for reward-model-based approaches. In practice, policy optimization in Equation 5 ###reference_### and Equation 6 ###reference_### can be conducted by policy gradient methods such as PPO (Schulman et al., 2017 ###reference_b28###) and ReMax (Li et al., 2023 ###reference_b17###)."
16
+ },
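As a concrete reference, here is a minimal numpy sketch (my own illustration, not the paper's code) of the two preference-based losses above: the Bradley-Terry maximum-likelihood loss for a reward model, and the DPO loss that folds reward learning and KL-regularized policy optimization into a single objective on the preference pairs.

```python
import numpy as np

def logsigmoid(z):
    # Numerically stable log(sigmoid(z)).
    return -np.logaddexp(0.0, -z)

def reward_mle_loss(r_pos, r_neg):
    # Bradley-Terry MLE loss: r_pos / r_neg are the reward model's scores
    # on the preferred / dispreferred responses of each pair.
    return -np.mean(logsigmoid(r_pos - r_neg))

def dpo_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
    # DPO loss on the same pairs: the implicit reward of a response is
    # beta * (log pi(y|x) - log pi_ref(y|x)).
    margin = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
    return -np.mean(logsigmoid(margin))
```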
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Theoretical Analysis",
21
+ "text": "In this section, we present a preliminary analysis of errors in the optimization methods. At a high level, we identify three types of errors:\nthe reward evaluation error ;\nthe estimation error when using finite samples to calculate the expectation ;\nthe estimation error when using finite samples to calculate the expectation .\nThe first error primarily results from finite preference data and diminishes to zero as the preference data size increases indefinitely. This error exists in all optimization methods. Compared with DPO, RMB-PO aims to mitigate the second error, while RMB-PO+ further reduces the third error. We note that RMB-PO and RMB-PO+ do not increase the sample complexity of preference data but only incur additional computational steps. We present the error bound analysis below.\nDefine the reward evaluation error , the state distribution estimation error , and the action distribution estimation error . Here and denote the finite-sample estimations of expectation under the state and action distributions, respectively. Consider , then we have\nwhere is the evaluation performance of a policy .\nWe first consider the reward error:\nThen we consider the state distribution estimation error:\nCombining (7 ###reference_###) and (8 ###reference_###) proves the first result in Proposition 1 ###reference_p1###. For the second result, we may replace with in the above proof to obtain:\nThen we consider the action distribution estimation error:\nCombining (9 ###reference_###) and (10 ###reference_###) proves the second result in Proposition 1 ###reference_p1###.\n\u220e\nWe do not present the analysis for , as its analysis is similar to that for . The main difference lies in that the state distribution estimation error for is generally smaller than that for due to more samples. We note that our analysis is quite basic in the sense that we consider all errors in the supremum norm, and it would be interesting to explore a more tighter analysis with finite sample guarantee; see e.g., (Xiong et al., 2023 ###reference_b33###) for recent progress in this direction."
22
+ },
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "Experiments",
27
+ "text": "In this section, we conduct numerical experiments to validate the improvement of RMB-PO and RMB-PO+ by better stochastic approximation. All of our experiments are run with 10 different random seeds (2021-2030), and the averaged results are reported222We exclude the worst and best results to make a robust estimation of the performance.. Note that we set to be a policy with a uniform action distribution in all experiments and for all methods. Besides, we use a policy with a uniform action distribution to collect the preference data."
28
+ },
29
+ {
30
+ "section_id": "3.1",
31
+ "parent_section_id": "3",
32
+ "section_name": "Linear Bandit",
33
+ "text": "We study a linear bandit task, where we have , with denoting the feature representation and as the parameter. In this case, the reward learning optimization problem is convex, so we use CVXPY (Diamond and Boyd, 2016 ###reference_b5###) to find the solution . In particular, we use the feature map and the parameter as\nwhere and . A uniform distribution over is studied. For the policy, we consider the parameterization\nwith and both in . In this case, the policy optimization problem is a non-convex problem, but the gradient domination condition holds (Agarwal et al., 2021 ###reference_b1###). We use the gradient ascent method with the AdaGrad optimizer (Duchi et al., 2011 ###reference_b7###) (a step size of 0.1 is used).\n###figure_3### ###figure_4### We examine two scenarios. In the first scenario, there is no feature mismatch between the reward and policy models, i.e., . In the second, we use a different feature map for policy:\nWe believe that in scenarios where , RMB-PO approaches could exhibit more promising performance than RMF-PO approaches. This is because, in such cases, the policy and reward models may align well by learning from preference data. However, in out-of-preference-distribution scenarios, they may extrapolate and generalize quite differently due to mismatches in representations. Nevertheless, RMB-PO approaches could use out-of-preference-distribution data to mitigate these mismatches and tend to perform well. The case where will be revisited in later neural bandit experiments, where the policy model and reward model typically utilize distinct architectures and learn distinct representations.\nIn our experiments, we set the size of preference data to be and the size of preference-free data to be , resulting in training accuracy of the reward model ranging from 60% to 80% over 10 experiments. We display the optimality gap (the smaller, the better) in Figure 3 ###reference_### and Figure 3 ###reference_###, where is the evaluation performance of a policy , i.e., (in our experiments, we use 5000 sampled states to approximate this expectation).\nFrom Figure 3 ###reference_###, we see that even though the policy model is provided with a good feature (e.g., in Figure 3 ###reference_###), RMB-PO methods can benefit from out-of-preference data. In the case where in Figure 3 ###reference_###, we find that RMB-PO+ is better than RMB-PO by leveraging additional preference-free data. Thus, we believe it is crucial to learn the optimal action (as inferred by the reward model) on out-of-preference data, even when the two models share the same good feature.\nTo gain a better understanding, we also visualize the learned policy distribution in the setting; see Figure 4 ###reference_###. To observe the training distribution coverage, we plot the states from the preference dataset. Additional states used in RMB-PO+ almost cover the entire state space but are not shown for readability reasons. From the reported curves, we observe that DPO aligns well with the optimal policy in the regions covered by preference data, and RMB-PO(+) methods tend to perform better than DPO in the out-of-distribution regime not covered by the preference data.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### Following the same setup, we provide ablation studies regarding the size of preference-free data used in RMB-PO+. See the results in Figure 6 ###reference_### and Figure 6 ###reference_###. We find that the previous conclusions still hold true.\n###figure_9### ###figure_10###"
34
+ },
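For concreteness, here is a minimal numpy sketch of the Monte Carlo RMB-PO objective (an illustration under assumptions, not the paper's code; `phi` and `r_hat` are hypothetical stand-ins for the policy features and the learned reward): the expected learned reward of a softmax policy minus a KL penalty to a uniform reference policy, averaged over a batch of states.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def rmb_po_objective(theta, states, phi, r_hat, beta):
    # Monte Carlo estimate of E_s[ E_{a~pi}[ r_hat(s,a) ] - beta * KL(pi || pi_ref) ]
    # with a uniform reference policy pi_ref; phi(s) is a (num_actions x dim)
    # feature matrix and r_hat(s) a vector of learned rewards, one per action.
    total = 0.0
    for s in states:
        pi = softmax(phi(s) @ theta)          # softmax-linear policy
        kl = np.sum(pi * (np.log(pi + 1e-12) - np.log(1.0 / len(pi))))
        total += pi @ r_hat(s) - beta * kl
    return total / len(states)
```

RMB-PO evaluates this objective on the preference-dataset states; RMB-PO+ additionally averages over the preference-free states, which is the only difference between the two.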
35
+ {
36
+ "section_id": "3.2",
37
+ "parent_section_id": "3",
38
+ "section_name": "Neural Bandit",
39
+ "text": "In this section, we study a neural bandit problem. Specifically, we study the case where , with being a fixed 1-hidden-layer multi-layer perceptron (MLP) neural network, having a hidden size of 64. For reward learning, we use a 2-hidden-layer MLP with a hidden size of 64, and the policy network is also a 2-hidden-layer MLP with a hidden size of 64. We consider a continuous state space and a discrete action space . The state distribution is uniform and one-hot feature representation for actions is used.\n###figure_11### ###figure_12### We note that, unlike in the linear bandit case where we could fix the feature representations of the reward and policy models to be the same, in this case, the feature representations of the reward and policy models are purely learned from the given data. The architectures of the reward and policy models are shown in Figure 7 ###reference_###. All neural networks are optimized using the Adam optimizer (Kingma and Ba, 2015 ###reference_b14###) with a step size of .\nWe run experiments with varying sizes of preference-free data while fixing the preference data size at . We report the results in Figure 8 ###reference_###. First, we observe that RMB-PO and RMB-PO+ significantly outperform DPO. Furthermore, simply using a preference-free data size that is twice as large already improves performance over RMB-PO, and further scaling does not help too much.\n###figure_13###"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Discussion",
+ "text": "Our research is related to imitation learning (Osa et al., 2018 ###reference_b22###), which aims to learn a policy from expert demonstrations. A popular approach to achieve this goal is through behavioral cloning (BC) (Pomerleau, 1991 ###reference_b24###), which trains a policy model by maximizing the likelihood of expert data. Note that the working mechanism of BC is quite similar to DPO, as in Equation 4 ###reference_###, where the likelihood of positively preferred actions is increased and that of negatively preferred actions is decreased:\nwhere is the expert dataset.\nGhasemipour et al. (2019 ###reference_b10###) showed that another class of imitation methods, known as adversarial imitation learning (AIL) methods, (such as GAIL (Ho and Ermon, 2016 ###reference_b12###)), usually performs better than BC. In particular, AIL methods leverage a recovered reward function to perform policy optimization on \u201cout-of-expert-data\u201d through online interaction, significantly improving performance. Following the formulation in (Xu et al., 2022 ###reference_b34###), the training objective of reward-model-based AIL can be re-formulated as\nwhere is the empirical state-action distribution estimated from , and is obtained from online interaction. For the optimization objective of AIL, it utilizes states beyond those in the expert dataset (reflected in the summation over all state-action pairs). We notice that Xu et al. (2022 ###reference_b34###) theoretically proved that AIL can outperform BC in terms of addressing the distribution shift issue with optimization on \u201cout-of-expert data\u201d. The idea of recovering a reward function and using it to perform extensive policy optimization is quite similar to the framework of RLHF.\nAdditionally, our research is related to transition-model-based reinforcement learning (RL) methods, where the goal is to find an optimal policy through interactions with environments. Many empirical successes suggest that transition-model-based approaches are superior in terms of sample complexity (Luo et al., 2019 ###reference_b20###; Janner et al., 2019 ###reference_b13###). We do not aim to present a detailed discussion since RL involves lots of concepts and notations. Instead, we would like to highlight that our findings align with the understanding that additional policy optimization on transition-model-generated data is helpful. We would like to refer readers to (Hafner et al., 2020 ###reference_b11###; Schrittwieser et al., 2020 ###reference_b27###; Yu et al., 2020 ###reference_b36###; Luo et al., 2023 ###reference_b19###) for the effect of data augmentation in transition-model-based RL methods.\nFinally, we note that compared with reward-model-free methods such as DPO (Rafailov et al., 2023 ###reference_b25###), reward-model-based policy optimization (RMB-PO) methods do not require extra preference annotation. For applications such as language models, training and storing a reward model has been shown to be highly efficient, as demonstrated in (Yao et al., 2023 ###reference_b35###). The primary challenge in RMB-PO lies in the huge action space during policy optimization. However, this issue can be effectively addressed by computationally efficient methods like those proposed by (Dong et al., 2023 ###reference_b6###; Li et al., 2023 ###reference_b17###). Notably, Li et al. 
(2023 ###reference_b17###) showed that optimizing the language model with prompts-only data can improve performance, a setting that cannot achieved by reward-model-free approaches such as DPO."
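To make the contrast above concrete, here is a minimal sketch (hypothetical tensor names) of the BC/DPO-style likelihood objective versus the reward-model-based objective; only the latter can consume states outside the expert or preference data.

import torch
import torch.nn.functional as F

def bc_loss(policy_logits, expert_actions):
    # Behavioral cloning: max E_{(s,a) ~ D_expert} log pi(a|s),
    # i.e., cross-entropy against the expert's actions.
    return F.cross_entropy(policy_logits, expert_actions)

def rmb_po_loss(policy_logits, r_hat):
    # Reward-model-based objective: max E_s sum_a pi(a|s) r_hat(s, a).
    # r_hat may be evaluated on ANY states, including out-of-preference ones.
    pi = F.softmax(policy_logits, dim=-1)
    return -(pi * r_hat.detach()).sum(dim=-1).mean()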
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We analyze the errors of policy optimization methods when learning from preferences for alignment. We also conduct experiments to validate our claims. Our results underscore the importance of optimizing policies on out-of-preference data and the power of using a reward model to provide supervision signals."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1(a)": {
+ "figure_path": "2312.10584v2_figure_1(a).png",
+ "caption": "(a) Illustration for reward-model-free approaches.\nFigure 1: Illustration for policy optimization methods. For reward-model-based approaches, the reward model learning procedure is not plotted for ease of presentation.",
+ "url": "http://arxiv.org/html/2312.10584v2/x1.png"
+ },
+ "1(b)": {
+ "figure_path": "2312.10584v2_figure_1(b).png",
+ "caption": "(b) Illustration for reward-model-based approaches.\nFigure 1: Illustration for policy optimization methods. For reward-model-based approaches, the reward model learning procedure is not plotted for ease of presentation.",
+ "url": "http://arxiv.org/html/2312.10584v2/x2.png"
+ },
+ "2(a)": {
+ "figure_path": "2312.10584v2_figure_2(a).png",
+ "caption": "Figure 2: Optimality gap with \u03d5\u03c0=\u03d5rsubscriptitalic-\u03d5\ud835\udf0bsubscriptitalic-\u03d5\ud835\udc5f\\phi_{\\pi}=\\phi_{r}italic_\u03d5 start_POSTSUBSCRIPT italic_\u03c0 end_POSTSUBSCRIPT = italic_\u03d5 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT.",
+ "url": "http://arxiv.org/html/2312.10584v2/x3.png"
+ },
+ "2(b)": {
+ "figure_path": "2312.10584v2_figure_2(b).png",
+ "caption": "Figure 2: Optimality gap with \u03d5\u03c0=\u03d5rsubscriptitalic-\u03d5\ud835\udf0bsubscriptitalic-\u03d5\ud835\udc5f\\phi_{\\pi}=\\phi_{r}italic_\u03d5 start_POSTSUBSCRIPT italic_\u03c0 end_POSTSUBSCRIPT = italic_\u03d5 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT.",
+ "url": "http://arxiv.org/html/2312.10584v2/x4.png"
+ },
+ "3(a)": {
+ "figure_path": "2312.10584v2_figure_3(a).png",
+ "caption": "(a) Action a0subscript\ud835\udc4e0a_{0}italic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.\nFigure 4: Probabilities of four actions a0subscript\ud835\udc4e0a_{0}italic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, a1subscript\ud835\udc4e1a_{1}italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and a3subscript\ud835\udc4e3a_{3}italic_a start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT. Results illustrate that RMB-PO(+) methods leverage out-of-preference data to better learn the policy distribution on out-of-distribution states and improve the generalization performance.",
+ "url": "http://arxiv.org/html/2312.10584v2/x5.png"
+ },
+ "3(b)": {
+ "figure_path": "2312.10584v2_figure_3(b).png",
+ "caption": "(b) Action a1subscript\ud835\udc4e1a_{1}italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT.\nFigure 4: Probabilities of four actions a0subscript\ud835\udc4e0a_{0}italic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, a1subscript\ud835\udc4e1a_{1}italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and a3subscript\ud835\udc4e3a_{3}italic_a start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT. Results illustrate that RMB-PO(+) methods leverage out-of-preference data to better learn the policy distribution on out-of-distribution states and improve the generalization performance.",
+ "url": "http://arxiv.org/html/2312.10584v2/x6.png"
+ },
+ "3(c)": {
+ "figure_path": "2312.10584v2_figure_3(c).png",
+ "caption": "(c) Action a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.\nFigure 4: Probabilities of four actions a0subscript\ud835\udc4e0a_{0}italic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, a1subscript\ud835\udc4e1a_{1}italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and a3subscript\ud835\udc4e3a_{3}italic_a start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT. Results illustrate that RMB-PO(+) methods leverage out-of-preference data to better learn the policy distribution on out-of-distribution states and improve the generalization performance.",
+ "url": "http://arxiv.org/html/2312.10584v2/x7.png"
+ },
+ "3(d)": {
+ "figure_path": "2312.10584v2_figure_3(d).png",
+ "caption": "(d) Action a3subscript\ud835\udc4e3a_{3}italic_a start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT.\nFigure 4: Probabilities of four actions a0subscript\ud835\udc4e0a_{0}italic_a start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, a1subscript\ud835\udc4e1a_{1}italic_a start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, a2subscript\ud835\udc4e2a_{2}italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and a3subscript\ud835\udc4e3a_{3}italic_a start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT. Results illustrate that RMB-PO(+) methods leverage out-of-preference data to better learn the policy distribution on out-of-distribution states and improve the generalization performance.",
+ "url": "http://arxiv.org/html/2312.10584v2/x8.png"
+ },
+ "4(a)": {
+ "figure_path": "2312.10584v2_figure_4(a).png",
+ "caption": "Figure 5: Optimality gap with \u03d5\u03c0=\u03d5rsubscriptitalic-\u03d5\ud835\udf0bsubscriptitalic-\u03d5\ud835\udc5f\\phi_{\\pi}=\\phi_{r}italic_\u03d5 start_POSTSUBSCRIPT italic_\u03c0 end_POSTSUBSCRIPT = italic_\u03d5 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT.",
+ "url": "http://arxiv.org/html/2312.10584v2/x9.png"
+ },
+ "4(b)": {
+ "figure_path": "2312.10584v2_figure_4(b).png",
+ "caption": "Figure 5: Optimality gap with \u03d5\u03c0=\u03d5rsubscriptitalic-\u03d5\ud835\udf0bsubscriptitalic-\u03d5\ud835\udc5f\\phi_{\\pi}=\\phi_{r}italic_\u03d5 start_POSTSUBSCRIPT italic_\u03c0 end_POSTSUBSCRIPT = italic_\u03d5 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT.",
+ "url": "http://arxiv.org/html/2312.10584v2/x10.png"
+ },
+ "5(a)": {
+ "figure_path": "2312.10584v2_figure_5(a).png",
+ "caption": "(a) Reward neural network.\nFigure 7: Architectures of the reward and policy models.",
+ "url": "http://arxiv.org/html/2312.10584v2/x11.png"
+ },
+ "5(b)": {
+ "figure_path": "2312.10584v2_figure_5(b).png",
+ "caption": "(b) Policy neural network.\nFigure 7: Architectures of the reward and policy models.",
+ "url": "http://arxiv.org/html/2312.10584v2/x12.png"
+ },
+ "6": {
+ "figure_path": "2312.10584v2_figure_6.png",
+ "caption": "Figure 8: Optimality gap of learned policies in the neural bandit task.",
+ "url": "http://arxiv.org/html/2312.10584v2/x13.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "On the theory of policy gradient methods: Optimality, approximation, and distribution shift.",
+ "author": "Alekh Agarwal, Sham M Kakade, Jason D Lee, and Gaurav Mahajan.",
+ "venue": "The Journal of Machine Learning Research, 22(1):4431\u20134506, 2021.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "A general theoretical paradigm to understand learning from human preferences.",
+ "author": "Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and R\u00e9mi Munos.",
+ "venue": "arXiv preprint arXiv:2310.12036, 2023.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Rank analysis of incomplete block designs: I. The method of paired comparisons.",
+ "author": "Ralph Allan Bradley and Milton E Terry.",
+ "venue": "Biometrika, 39(3/4):324\u2013345, 1952.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Deep reinforcement learning from human preferences.",
+ "author": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei.",
+ "venue": "Advances in Neural Information Processing Systems 30, pages 4299\u20134307, 2017.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "CVXPY: A Python-embedded modeling language for convex optimization.",
+ "author": "Steven Diamond and Stephen Boyd.",
+ "venue": "The Journal of Machine Learning Research, 17(1):2909\u20132913, 2016.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "RAFT: Reward ranked finetuning for generative foundation model alignment.",
+ "author": "Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang.",
+ "venue": "arXiv preprint arXiv:2304.06767, 2023.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Adaptive subgradient methods for online learning and stochastic optimization.",
+ "author": "John Duchi, Elad Hazan, and Yoram Singer.",
+ "venue": "Journal of Machine Learning Research, 12(7), 2011.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Utility theory for decision making.",
+ "author": "Peter C. Fishburn.",
+ "venue": "Krieger NY, 1979.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Scaling laws for reward model overoptimization.",
+ "author": "Leo Gao, John Schulman, and Jacob Hilton.",
+ "venue": "In Proceedings of the 40th International Conference on Machine Learning, pages 10835\u201310866, 2023.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "A divergence minimization perspective on imitation learning methods.",
+ "author": "Seyed Kamyar Seyed Ghasemipour, Richard S. Zemel, and Shixiang Gu.",
+ "venue": "In Proceedings of the 3rd Conference on Robot Learning, pages 1259\u20131277, 2019.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Dream to control: Learning behaviors by latent imagination.",
+ "author": "Danijar Hafner, Timothy P. Lillicrap, Jimmy Ba, and Mohammad Norouzi.",
+ "venue": "In Proceedings of the 8th International Conference on Learning Representations, 2020.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Generative adversarial imitation learning.",
+ "author": "Jonathan Ho and Stefano Ermon.",
+ "venue": "In Advances in Neural Information Processing Systems 29, pages 4565\u20134573, 2016.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "When to trust your model: Model-based policy optimization.",
+ "author": "Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine.",
+ "venue": "In Advances in Neural Information Processing Systems 32, pages 12498\u201312509, 2019.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Adam: A method for stochastic optimization.",
+ "author": "Diederik P. Kingma and Jimmy Ba.",
+ "venue": "In Proceedings of the 3rd International Conference on Learning Representations, 2015.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "The epoch-greedy algorithm for multi-armed bandits with side information.",
+ "author": "John Langford and Tong Zhang.",
+ "venue": "Advances in Neural Information Processing Systems 20, 2007.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Bandit Algorithms.",
+ "author": "Tor Lattimore and Csaba Szepesv\u00e1ri.",
+ "venue": "Cambridge University Press, 2020.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "ReMax: A simple, effective, and efficient method for aligning large language models.",
+ "author": "Ziniu Li, Tian Xu, Yushun Zhang, Yang Yu, Ruoyu Sun, and Zhi-Quan Luo.",
+ "venue": "arXiv preprint arXiv:2310.10505, 2023.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Contextual multi-armed bandits.",
+ "author": "Tyler Lu, D\u00e1vid P\u00e1l, and Martin P\u00e1l.",
+ "venue": "In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pages 485\u2013492, 2010.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Reward-consistent dynamics models are strongly generalizable for offline reinforcement learning.",
+ "author": "Fan-Ming Luo, Tian Xu, Xingchen Cao, and Yang Yu.",
+ "venue": "arXiv preprint arXiv:2310.05422, 2023.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees.",
+ "author": "Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, and Tengyu Ma.",
+ "venue": "In Proceedings of the 7th International Conference on Learning Representations, 2019.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "GPT-4 technical report.",
+ "author": "OpenAI.",
+ "venue": "arXiv preprint arXiv:2303.08774, 2023.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "An algorithmic perspective on imitation learning.",
+ "author": "Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, and Jan Peters.",
+ "venue": "Foundations and Trends in Robotics, 7(1-2):1\u2013179, 2018.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Training language models to follow instructions with human feedback.",
+ "author": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al.",
+ "venue": "Advances in Neural Information Processing Systems 35, pages 27730\u201327744, 2022.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "Efficient training of artificial neural networks for autonomous navigation.",
+ "author": "Dean Pomerleau.",
+ "venue": "Neural Computation, 3(1):88\u201397, 1991.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "Direct preference optimization: Your language model is secretly a reward model.",
+ "author": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn.",
+ "venue": "arXiv preprint arXiv:2305.18290, 2023.",
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "Artificial Intelligence: A Modern Approach.",
+ "author": "Stuart J Russell and Peter Norvig.",
+ "venue": "Prentice Hall, 2010.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "Mastering atari, go, chess and shogi by planning with a learned model.",
+ "author": "Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al.",
+ "venue": "Nature, 588(7839):604\u2013609, 2020.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "Proximal policy optimization algorithms.",
+ "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.",
+ "venue": "arXiv preprint arXiv:1707.06347, 2017.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Learning to summarize with human feedback.",
+ "author": "Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano.",
+ "venue": "Advances in Neural Information Processing Systems, 33:3008\u20133021, 2020.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "Reinforcement Learning: An Introduction.",
+ "author": "Richard S Sutton and Andrew G Barto.",
+ "venue": "MIT Press, 2018.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "Llama 2: Open foundation and fine-tuned chat models.",
+ "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.",
+ "venue": "arXiv preprint arXiv:2307.09288, 2023.",
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "Leverage the average: An analysis of KL regularization in reinforcement learning.",
+ "author": "Nino Vieillard, Tadashi Kozuno, Bruno Scherrer, Olivier Pietquin, R\u00e9mi Munos, and Matthieu Geist.",
+ "venue": "Advances in Neural Information Processing Systems, 33:12163\u201312174, 2020.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "Gibbs sampling from human feedback: A provable KL-constrained framework for RLHF.",
+ "author": "Wei Xiong, Hanze Dong, Chenlu Ye, Han Zhong, Nan Jiang, and Tong Zhang.",
+ "venue": "arXiv preprint arXiv:2312.11456, 2023.",
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "Understanding adversarial imitation learning in small sample regime: A stage-coupled analysis.",
+ "author": "Tian Xu, Ziniu Li, Yang Yu, and Zhi-Quan Luo.",
+ "venue": "arXiv preprint arXiv:2208.01899, 2022.",
+ "url": null
+ }
+ },
+ {
+ "35": {
+ "title": "DeepSpeed-Chat: Easy, fast and affordable RLHF training of ChatGPT-like models at all scales.",
+ "author": "Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, et al.",
+ "venue": "arXiv preprint arXiv:2308.01320, 2023.",
+ "url": null
+ }
+ },
+ {
+ "36": {
+ "title": "MOPO: Model-based offline policy optimization.",
+ "author": "Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma.",
+ "venue": "Advances in Neural Information Processing Systems 33, pages 14129\u201314142, 2020.",
+ "url": null
+ }
+ },
+ {
+ "37": {
+ "title": "LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset.",
+ "author": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric Xing, et al.",
+ "venue": "arXiv preprint arXiv:2309.11998, 2023.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2312.10584v2"
+ }