diff --git "a/Subarxiv2023.json" "b/Subarxiv2023.json" new file mode 100644--- /dev/null +++ "b/Subarxiv2023.json" @@ -0,0 +1,23536 @@ +[ + { + "node_id": 0, + "label": 16, + "text": "Title: LMEye: An Interactive Perception Network for Large Language Models\nAbstract: Training a Large Visual Language Model (LVLM) from scratch, like GPT-4, is resource-intensive. Our paper presents a play-and-plug module for Large Language Models (LLMs), namely Interactive Perception Network (IPN), aiming to achieve a LVLM by incorporating the image understanding capability into LLMs. Previous methods incorporate visual information into LLMs with a simple visual mapping network, where the image feature is projected into the embedding space of LLMs via a linear layer. Such mapping network projects the image feature once yet does not consider the interaction between the image and the human input query. Hence, the obtained visual information with no connections with human intention may be inadequate for LLMs to make intention-following responses, which we term as static visual information. IPN addresses this issue by allowing the LLM to request the desired visual information aligned with various human instructions, which we term as the dynamic interaction between the LLM and visual information. Specifically, IPN consists of a simple visual mapping network to provide the basic perception of an image for LLMs. It also contains additional modules responsible for acquiring requests from LLMs, performing request-based visual information interaction, and transmitting the resulting interacted visual information to LLMs, respectively. In this way, LLMs act to understand the human query, deliver the corresponding request to the request-based visual information interaction module, and generate the response based on the interleaved multimodal information. We evaluate IPN through extensive experiments on multimodal question answering, reasoning, and so on, demonstrating that it significantly improves the zero-shot performance of LVLMs on various multimodal tasks compared to previous methods.", + "neighbors": [ + 57, + 183, + 319, + 618, + 704, + 754, + 887, + 1047, + 1052, + 1071, + 1148, + 1659, + 1668, + 1765, + 1810, + 1863, + 2036, + 2155 + ], + "mask": "Validation" + }, + { + "node_id": 1, + "label": 28, + "text": "Title: Density Devolution for Ordering Synthetic Channels\nAbstract: Constructing a polar code is all about selecting a subset of rows from a Kronecker power of $ \\left[ {\\begin{array}{c} {10} \\\\ {11} \\end{array}} \\right] $. It is known that, under successive cancellation decoder, some rows are Paretobetter than the other. For instance, whenever a user sees a substring 01 in the binary expansion of a row index and replaces it with 10, the user obtains a row index that is always more welcomed. We call this a \"rule\" and denote it by 10 \u227d 01. In present work, we first enumerate some rules over binary erasure channels such as 1001 \u227d 0110 and 10001 \u227d 01010 and 10101 \u227d 01110. We then summarize them using a \"rule of rules\": if 10a \u227d 01b is a rule, where a and b are arbitrary binary strings, then 100a \u227d 010b and 101a \u227d 011b are rules. This work\u2019s main contribution is using field theory, Galois theory, and numerical analysis to develop an algorithm that decides if a rule of rules is mathematically sound. We apply the algorithm to enumerate some rules of rules. Each rule of rule is capable of generating an infinite family of rules. 
For instance, 10c01 \u227d 01c10 for arbitrary binary string c can be generated. We found an application of 10c01 \u227d 01c10 that is related to integer partition and the dominance order therein.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 2, + "label": 24, + "text": "Title: SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems\nAbstract: A CF explainer identifies the minimum modifications in the input that would alter the model's output to its complement. In other words, a CF explainer computes the minimum modifications required to cross the model's decision boundary. Current deep generative CF models often work with user-selected features rather than focusing on the discriminative features of the black-box model. Consequently, such CF examples may not necessarily lie near the decision boundary, thereby contradicting the definition of CFs. To address this issue, we propose in this paper a novel approach that leverages saliency maps to generate more informative CF explanations. Source codes are available at: https://github.com/Amir-Samadi//Saliency_Aware_CF.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 3, + "label": 30, + "text": "Title: Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent\nAbstract: Large Language Models (LLMs) have demonstrated a remarkable ability to generalize zero-shot to various language-related tasks. This paper focuses on the study of exploring generative LLMs such as ChatGPT and GPT-4 for relevance ranking in Information Retrieval (IR). Surprisingly, our experiments reveal that properly instructed ChatGPT and GPT-4 can deliver competitive, even superior results than supervised methods on popular IR benchmarks. Notably, GPT-4 outperforms the fully fine-tuned monoT5-3B on MS MARCO by an average of 2.7 nDCG on TREC datasets, an average of 2.3 nDCG on eight BEIR datasets, and an average of 2.7 nDCG on ten low-resource languages Mr.TyDi. Subsequently, we delve into the potential for distilling the ranking capabilities of ChatGPT into a specialized model. Our small specialized model that trained on 10K ChatGPT generated data outperforms monoT5 trained on 400K annotated MS MARCO data on BEIR. The code to reproduce our results is available at www.github.com/sunnweiwei/RankGPT", + "neighbors": [ + 36, + 570, + 644, + 840, + 1001, + 1092, + 1613, + 1636, + 1678, + 1863, + 1915, + 2094 + ], + "mask": "Train" + }, + { + "node_id": 4, + "label": 23, + "text": "Title: CCRep: Learning Code Change Representations via Pre-Trained Code Model and Query Back\nAbstract: Representing code changes as numeric feature vectors, i.e., code change representations, is usually an essential step to automate many software engineering tasks related to code changes, e.g., commit message generation and just-in-time defect prediction. Intuitively, the quality of code change representations is crucial for the effectiveness of automated approaches. Prior work on code changes usually designs and evaluates code change representation approaches for a specific task, and little work has investigated code change encoders that can be used and jointly trained on various tasks. To fill this gap, this work proposes a novel Code Change Representation learning approach named CCRep, which can learn to encode code changes as feature vectors for diverse downstream tasks. 
Specifically, CCRep regards a code change as the combination of its before-change and after-change code, leverages a pre-trained code model to obtain high-quality contextual embeddings of code, and uses a novel mechanism named query back to extract and encode the changed code fragments and make them explicitly interact with the whole code change. To evaluate CCRep and demonstrate its applicability to diverse code-change-related tasks, we apply it to three tasks: commit message generation, patch correctness assessment, and just-in-time defect prediction. Experimental results show that CCRep outperforms the state-of-the-art techniques on each task.", + "neighbors": [ + 546 + ], + "mask": "Test" + }, + { + "node_id": 5, + "label": 28, + "text": "Title: Indexed Multiple Access with Reconfigurable Intelligent Surfaces: The Reflection Tuning Potential\nAbstract: Indexed modulation (IM) is an evolving technique that has become popular due to its ability of parallel data communication over distinct combinations of transmission entities. In this article, we first provide a comprehensive survey of IM-enabled multiple access (MA) techniques, emphasizing the shortcomings of existing non-indexed MA schemes. Theoretical comparisons are presented to show how the notion of indexing eliminates the limitations of non-indexed solutions. We also discuss the benefits that the utilization of a reconfigurable intelligent surface (RIS) can offer when deployed as an indexing entity. In particular, we propose an RIS-indexed multiple access (RIMA) transmission scheme that utilizes dynamic phase tuning to embed multi-user information over a single carrier. The performance of the proposed RIMA is assessed in light of simulation results that confirm its performance gains. The article further includes a list of relevant open technical issues and research directions.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 6, + "label": 24, + "text": "Title: Optimal Approximation and Learning Rates for Deep Convolutional Neural Networks\nAbstract: This paper focuses on approximation and learning performance analysis for deep convolutional neural networks with zero-padding and max-pooling. We prove that, to approximate $r$-smooth function, the approximation rates of deep convolutional neural networks with depth $L$ are of order $ (L^2/\\log L)^{-2r/d} $, which is optimal up to a logarithmic factor. Furthermore, we deduce almost optimal learning rates for implementing empirical risk minimization over deep convolutional neural networks.", + "neighbors": [ + 648 + ], + "mask": "Train" + }, + { + "node_id": 7, + "label": 24, + "text": "Title: Pushing the Accuracy-Group Robustness Frontier with Introspective Self-play\nAbstract: Standard empirical risk minimization (ERM) training can produce deep neural network (DNN) models that are accurate on average but under-perform in under-represented population subgroups, especially when there are imbalanced group distributions in the long-tailed training data. Therefore, approaches that improve the accuracy-group robustness trade-off frontier of a DNN model (i.e. improving worst-group accuracy without sacrificing average accuracy, or vice versa) is of crucial importance. Uncertainty-based active learning (AL) can potentially improve the frontier by preferentially sampling underrepresented subgroups to create a more balanced training dataset. 
However, the quality of uncertainty estimates from modern DNNs tend to degrade in the presence of spurious correlations and dataset bias, compromising the effectiveness of AL for sampling tail groups. In this work, we propose Introspective Self-play (ISP), a simple approach to improve the uncertainty estimation of a deep neural network under dataset bias, by adding an auxiliary introspection task requiring a model to predict the bias for each data point in addition to the label. We show that ISP provably improves the bias-awareness of the model representation and the resulting uncertainty estimates. On two real-world tabular and language tasks, ISP serves as a simple \"plug-in\" for AL model training, consistently improving both the tail-group sampling rate and the final accuracy-fairness trade-off frontier of popular AL methods.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 8, + "label": 24, + "text": "Title: PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs\nAbstract: While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. While PINNacle does not guarantee success in all real-world scenarios, it represents a significant contribution to the field by offering a robust, diverse, and comprehensive benchmark suite that will undoubtedly foster further research and development in PINNs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 9, + "label": 24, + "text": "Title: Empirically Validating Conformal Prediction on Modern Vision Architectures Under Distribution Shift and Long-tailed Data\nAbstract: Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. Yet, its performance is known to degrade under distribution shift and long-tailed class distributions, which are often present in real world applications. Here, we characterize the performance of several post-hoc and training-based conformal prediction methods under these settings, providing the first empirical evaluation on large-scale datasets and models. We show that across numerous conformal methods and neural network families, performance greatly degrades under distribution shifts violating safety guarantees. Similarly, we show that in long-tailed settings the guarantees are frequently violated on many classes.
Understanding the limitations of these methods is necessary for deployment in real world and safety-critical applications.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 10, + "label": 5, + "text": "Title: Cloud Services Enable Efficient AI-Guided Simulation Workflows across Heterogeneous Resources\nAbstract: Applications that fuse machine learning and simulation can benefit from the use of multiple computing resources, with, for example, simulation codes running on highly parallel supercomputers and AI training and inference tasks on specialized accelerators. Here, we present our experiences deploying two AI-guided simulation workflows across such heterogeneous systems. A unique aspect of our approach is our use of cloud-hosted management services to manage challenging aspects of cross-resource authentication and authorization, function-as-a-service (FaaS) function invocation, and data transfer. We show that these methods can achieve performance parity with systems that rely on direct connection between resources. We achieve parity by integrating the FaaS system and data transfer capabilities with a system that passes data by reference among managers and workers, and a user-configurable steering algorithm to hide data transfer latencies. We anticipate that this ease of use can enable routine use of heterogeneous resources in computational science.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 11, + "label": 30, + "text": "Title: The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only\nAbstract: Large language models are commonly trained on a mixture of filtered web data and curated high-quality corpora, such as social media conversations, books, or technical papers. This curation process is believed to be necessary to produce performant models with broad zero-shot generalization abilities. However, as larger models requiring pretraining on trillions of tokens are considered, it is unclear how scalable is curation and whether we will run out of unique high-quality data soon. At variance with previous beliefs, we show that properly filtered and deduplicated web data alone can lead to powerful models; even significantly outperforming models from the state-of-the-art trained on The Pile. Despite extensive filtering, the high-quality data we extract from the web is still plentiful, and we are able to obtain five trillion tokens from CommonCrawl. We publicly release an extract of 600 billion tokens from our RefinedWeb dataset, and 1.3/7.5B parameters language models trained on it.", + "neighbors": [ + 602, + 682, + 817, + 1052, + 1237, + 1546, + 1548, + 1556, + 1733, + 1950, + 2235, + 2257 + ], + "mask": "Validation" + }, + { + "node_id": 12, + "label": 25, + "text": "Title: RobustL2S: Speaker-Specific Lip-to-Speech Synthesis exploiting Self-Supervised Representations\nAbstract: Significant progress has been made in speaker dependent Lip-to-Speech synthesis, which aims to generate speech from silent videos of talking faces. Current state-of-the-art approaches primarily employ non-autoregressive sequence-to-sequence architectures to directly predict mel-spectrograms or audio waveforms from lip representations. We hypothesize that the direct mel-prediction hampers training/model efficiency due to the entanglement of speech content with ambient information and speaker characteristics. To this end, we propose RobustL2S, a modularized framework for Lip-to-Speech synthesis. 
First, a non-autoregressive sequence-to-sequence model maps self-supervised visual features to a representation of disentangled speech content. A vocoder then converts the speech features into raw waveforms. Extensive evaluations confirm the effectiveness of our setup, achieving state-of-the-art performance on the unconstrained Lip2Wav dataset and the constrained GRID and TCD-TIMIT datasets. Speech samples from RobustL2S can be found at https://neha-sherin.github.io/RobustL2S/", + "neighbors": [ + 1869 + ], + "mask": "Validation" + }, + { + "node_id": 13, + "label": 24, + "text": "Title: Text analysis using deep neural networks in digital humanities and information science\nAbstract: Combining computational technologies and humanities is an ongoing effort aimed at making resources such as texts, images, audio, video, and other artifacts digitally available, searchable, and analyzable. In recent years, deep neural networks (DNN) dominate the field of automatic text analysis and natural language processing (NLP), in some cases presenting a super\u2010human performance. DNNs are the state\u2010of\u2010the\u2010art machine learning algorithms solving many NLP tasks that are relevant for Digital Humanities (DH) research, such as spell checking, language detection, entity extraction, author detection, question answering, and other tasks. These supervised algorithms learn patterns from a large number of \u201cright\u201d and \u201cwrong\u201d examples and apply them to new examples. However, using DNNs for analyzing the text resources in DH research presents two main challenges: (un)availability of training data and a need for domain adaptation. This paper explores these challenges by analyzing multiple use\u2010cases of DH studies in recent literature and their possible solutions and lays out a practical decision model for DH experts for when and how to choose the appropriate deep learning approaches for their research. Moreover, in this paper, we aim to raise awareness of the benefits of utilizing deep learning models in the DH community.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 14, + "label": 23, + "text": "Title: Eadro: An End-to-End Troubleshooting Framework for Microservices on Multi-source Data\nAbstract: The complexity and dynamism of microservices pose significant challenges to system reliability, and thereby, automated troubleshooting is crucial. Effective root cause localization after anomaly detection is crucial for ensuring the reliability of microservice systems. However, two significant issues rest in existing approaches: (1) Microservices generate traces, system logs, and key performance indicators (KPIs), but existing approaches usually consider traces only, failing to understand the system fully as traces cannot depict all anomalies; (2) Troubleshooting microservices generally contains two main phases, i.e., anomaly detection and root cause localization. Existing studies regard these two phases as independent, ignoring their close correlation. Even worse, inaccurate detection results can deeply affect localization effectiveness. To overcome these limitations, we propose Eadro, the first end-to-end framework to integrate anomaly detection and root cause localization based on multi-source data for troubleshooting large-scale microservices. The key insights of Eadro are the anomaly manifestations on different data sources and the close connection between detection and localization. 
Thus, Eadro models intra-service behaviors and inter-service dependencies from traces, logs, and KPIs, all the while leveraging the shared knowledge of the two phases via multi-task learning. Experiments on two widely-used benchmark microservices demonstrate that Eadro outperforms state-of-the-art approaches by a large margin. The results also show the usefulness of integrating multi-source data. We also release our code and data to facilitate future research.", + "neighbors": [ + 919, + 1131, + 2173 + ], + "mask": "Train" + }, + { + "node_id": 15, + "label": 38, + "text": "Title: The disruption index is biased by citation inflation\nAbstract: A recent analysis of scientific publication and patent citation networks by Park et al. (Nature, 2023) suggests that publications and patents are becoming less disruptive over time. Here we show that the reported decrease in disruptiveness is an artifact of systematic shifts in the structure of citation networks unrelated to innovation system capacity. Instead, the decline is attributable to 'citation inflation', an unavoidable characteristic of real citation networks that manifests as a systematic time-dependent bias and renders cross-temporal analysis challenging. One driver of citation inflation is the ever-increasing lengths of reference lists over time, which in turn increases the density of links in citation networks, and causes the disruption index to converge to 0. A second driver is attributable to shifts in the construction of reference lists, which is increasingly impacted by self-citations that increase in the rate of triadic closure in citation networks, and thus confounds efforts to measure disruption, which is itself a measure of triadic closure. Combined, these two systematic shifts render the disruption index temporally biased, and unsuitable for cross-temporal analysis. The impact of this systematic bias further stymies efforts to correlate disruption to other measures that are also time-dependent, such as team size and citation counts. In order to demonstrate this fundamental measurement problem, we present three complementary lines of critique (deductive, empirical and computational modeling), and also make available an ensemble of synthetic citation networks that can be used to test alternative citation-based indices for systematic bias.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 16, + "label": 10, + "text": "Title: Efficient Computation of Shap Explanation Scores for Neural Network Classifiers via Knowledge Compilation\nAbstract: The use of Shap scores has become widespread in Explainable AI. However, their computation is in general intractable, in particular when done with a black-box classifier, such as neural network. Recent research has unveiled classes of open-box Boolean Circuit classifiers for which Shap can be computed efficiently. We show how to transform binary neural networks into those circuits for efficient Shap computation. We use logic-based knowledge compilation techniques. The performance gain is huge, as we show in the light of our experiments.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 17, + "label": 16, + "text": "Title: FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection\nAbstract: Lack of generalization to unseen domains/attacks is the Achilles heel of most face presentation attack detection (FacePAD) algorithms.
Existing attempts to enhance the generalizability of FacePAD solutions assume that data from multiple source domains are available with a single entity to enable centralized training. In practice, data from different source domains may be collected by diverse entities, who are often unable to share their data due to legal and privacy constraints. While collaborative learning paradigms such as federated learning (FL) can overcome this problem, standard FL methods are ill-suited for domain generalization because they struggle to surmount the twin challenges of handling non-iid client data distributions during training and generalizing to unseen domains during inference. In this work, a novel framework called Federated Split learning with Intermediate representation Sampling (FedSIS) is introduced for privacy-preserving domain generalization. In FedSIS, a hybrid Vision Transformer (ViT) architecture is learned using a combination of FL and split learning to achieve robustness against statistical heterogeneity in the client data distributions without any sharing of raw data (thereby preserving privacy). To further improve generalization to unseen domains, a novel feature augmentation strategy called intermediate representation sampling is employed, and discriminative information from intermediate blocks of a ViT is distilled using a shared adapter network. The FedSIS approach has been evaluated on two well-known benchmarks for cross-domain FacePAD to demonstrate that it is possible to achieve state-of-the-art generalization performance without data sharing. Code: https://github.com/Naiftt/FedSIS", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 18, + "label": 31, + "text": "Title: Identifying document similarity using a fast estimation of the Levenshtein Distance based on compression and signatures\nAbstract: Identifying document similarity has many applications, e.g., source code analysis or plagiarism detection. However, identifying similarities is not trivial and can be time complex. For instance, the Levenshtein Distance is a common metric to define the similarity between two documents but has quadratic runtime which makes it impractical for large documents where large starts with a few hundred kilobytes. In this paper, we present a novel concept that allows estimating the Levenshtein Distance: the algorithm first compresses documents to signatures (similar to hash values) using a user-defined compression ratio. Signatures can then be compared against each other (some constrains apply) where the outcome is the estimated Levenshtein Distance. Our evaluation shows promising results in terms of runtime efficiency and accuracy. In addition, we introduce a significance score allowing examiners to set a threshold and identify related documents.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 19, + "label": 24, + "text": "Title: Generating Multidimensional Clusters With Support Lines\nAbstract: nan", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 20, + "label": 27, + "text": "Title: TactoFind: A Tactile Only System for Object Retrieval\nAbstract: We study the problem of object retrieval in scenarios where visual sensing is absent, object shapes are unknown beforehand and objects can move freely, like grabbing objects out of a drawer. Successful solutions require localizing free objects, identifying specific object instances, and then grasping the identified objects, only using touch feedback. 
Unlike vision, where cameras can observe the entire scene, touch sensors are local and only observe parts of the scene that are in contact with the manipulator. Moreover, information gathering via touch sensors necessitates applying forces on the touched surface which may disturb the scene itself. Reasoning with touch, therefore, requires careful exploration and integration of information over time - a challenge we tackle. We present a system capable of using sparse tactile feedback from fingertip touch sensors on a dexterous hand to localize, identify and grasp novel objects without any visual feedback. Videos are available at https://sites.google.com/view/tactofind.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 21, + "label": 24, + "text": "Title: B2Opt: Learning to Optimize Black-box Optimization with Little Budget\nAbstract: The core challenge of high-dimensional and expensive black-box optimization (BBO) is how to obtain better performance faster with little function evaluation cost. The essence of the problem is how to design an efficient optimization strategy tailored to the target task. This paper designs a powerful optimization framework to automatically learn the optimization strategies from the target or cheap surrogate task without human intervention. However, current methods are weak for this due to poor representation of optimization strategy. To achieve this, 1) drawing on the mechanism of genetic algorithm, we propose a deep neural network framework called B2Opt, which has a stronger representation of optimization strategies based on survival of the fittest; 2) B2Opt can utilize the cheap surrogate functions of the target task to guide the design of the efficient optimization strategies. Compared to the state-of-the-art BBO baselines, B2Opt can achieve multiple orders of magnitude performance improvement with less function evaluation cost. We validate our proposal on high-dimensional synthetic functions and two real-world applications. We also find that deep B2Opt performs better than shallow ones.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 22, + "label": 28, + "text": "Title: Joint data rate and EMF exposure analysis in user-centric cell-free massive MIMO networks\nAbstract: The objective of this study is to analyze the statistics of the data rate and of the incident power density (IPD) in user-centric cell-free networks (UCCFNs). To this purpose, our analysis proposes a number of performance metrics derived using stochastic geometry (SG). On the one hand, the first moments and the marginal distribution of the IPD are calculated. On the other hand, bounds on the joint distributions of rate and IPD are provided for two scenarios: when it is relevant to obtain IPD values above a given threshold (for energy harvesting purposes), and when these values should instead remain below the threshold (for public health reasons). In addition to deriving these metrics, this work incorporates features related to UCCFNs which are new in SG models: a power allocation based on collective channel statistics, as well as the presence of potential overlaps between adjacent clusters. Our numerical results illustrate the achievable trade-offs between the rate and IPD performance.
For the considered system, these results also highlight the existence of an optimal node density maximizing the joint distributions.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 23, + "label": 24, + "text": "Title: Learning to Learn from APIs: Black-Box Data-Free Meta-Learning\nAbstract: Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data. Existing DFML work can only meta-learn from (i) white-box and (ii) small-scale pre-trained models (iii) with the same architecture, neglecting the more practical setting where the users only have inference access to the APIs with arbitrary model architectures and model scale inside. To solve this issue, we propose a Bi-level Data-free Meta Knowledge Distillation (BiDf-MKD) framework to transfer more general meta knowledge from a collection of black-box APIs to one single meta model. Specifically, by just querying APIs, we inverse each API to recover its training data via a zero-order gradient estimator and then perform meta-learning via a novel bi-level meta knowledge distillation structure, in which we design a boundary query set recovery technique to recover a more informative query set near the decision boundary. In addition, to encourage better generalization within the setting of limited API budgets, we propose task memory replay to diversify the underlying task distribution by covering more interpolated tasks. Extensive experiments in various real-world scenarios show the superior performance of our BiDf-MKD framework.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 24, + "label": 28, + "text": "Title: Sum-rank metric codes\nAbstract: Sum-rank metric codes are a natural extension of both linear block codes and rank-metric codes. They have several applications in information theory, including multishot network coding and distributed storage systems. The aim of this chapter is to present the mathematical theory of sum-rank metric codes, paying special attention to the $\\mathbb{F}_q$-linear case in which different sizes of matrices are allowed. We provide a comprehensive overview of the main results in the area. In particular, we discuss invariants, optimal anticodes, and MSRD codes. In the last section, we concentrate on $\\mathbb{F}_{q^m}$-linear codes.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 25, + "label": 10, + "text": "Title: ChatGPT for Robotics: Design Principles and Model Abilities\nAbstract: This paper presents an experimental study regarding the use of OpenAI's ChatGPT for robotics applications. We outline a strategy that combines design principles for prompt engineering and the creation of a high-level function library which allows ChatGPT to adapt to different robotics tasks, simulators, and form factors. We focus our evaluations on the effectiveness of different prompt engineering techniques and dialog strategies towards the execution of various types of robotics tasks. We explore ChatGPT's ability to use free-form dialog, parse XML tags, and to synthesize code, in addition to the use of task-specific prompting functions and closed-loop reasoning through dialogues. Our study encompasses a range of tasks within the robotics domain, from basic logical, geometrical, and mathematical reasoning all the way to complex domains such as aerial navigation, manipulation, and embodied agents. 
We show that ChatGPT can be effective at solving several of such tasks, while allowing users to interact with it primarily via natural language instructions. In addition to these studies, we introduce an open-sourced research tool called PromptCraft, which contains a platform where researchers can collaboratively upload and vote on examples of good prompting schemes for robotics applications, as well as a sample robotics simulator with ChatGPT integration, making it easier for users to get started with using ChatGPT for robotics.", + "neighbors": [ + 600, + 817, + 1036, + 1060, + 1128, + 1348, + 1353, + 1451, + 1659, + 1822, + 1877, + 2141, + 2166 + ], + "mask": "Train" + }, + { + "node_id": 26, + "label": 22, + "text": "Title: Alice or Bob?: Process Polymorphism in Choreographies\nAbstract: We present PolyChor$\\lambda$, a language for higher-order functional \\emph{choreographic programming} -- an emerging paradigm by which programmers write the desired cooperative behaviour of a system of communicating processes and then compile it into distributed implementations for each process, a translation called \\emph{endpoint projection}. Unlike its predecessor, Chor$\\lambda$, PolyChor$\\lambda$ has both type and \\emph{process} polymorphism inspired by System F$_\\omega$. That is, PolyChor$\\lambda$ is the first (higher-order) functional choreographic language which gives programmers the ability to write generic choreographies and determine the participants at runtime. This novel combination of features also allows PolyChor$\\lambda$ processes to communicate \\emph{distributed values}, leading to a new and intuitive way to write delegation. While some of the functional features of PolyChor$\\lambda$ give it a weaker correspondence between the semantics of choreographies and their endpoint-projected concurrent systems than some other choreographic languages, we still get the hallmark end result of choreographic programming: projected programs are deadlock-free by design.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 27, + "label": 30, + "text": "Title: GPT detectors are biased against non-native English writers\nAbstract: nan", + "neighbors": [ + 42, + 896, + 1487, + 1600, + 1805, + 2044 + ], + "mask": "Test" + }, + { + "node_id": 28, + "label": 8, + "text": "Title: BeamSense: Rethinking Wireless Sensing with MU-MIMO Wi-Fi Beamforming Feedback\nAbstract: In this paper, we propose BeamSense, a completely novel approach to implement standard-compliant Wi-Fi sensing applications. Wi-Fi sensing enables game-changing applications in remote healthcare, home entertainment, and home surveillance, among others. However, existing work leverages the manual extraction of channel state information (CSI) from Wi-Fi chips to classify activities, which is not supported by the Wi-Fi standard and hence requires the usage of specialized equipment. On the contrary, BeamSense leverages the standard-compliant beamforming feedback information (BFI) to characterize the propagation environment. Conversely from CSI, the BFI (i) can be easily recorded without any firmware modification, and (ii) captures the multiple channels between the access point and the stations, thus providing much better sensitivity. BeamSense includes a novel cross-domain few-shot learning (FSL) algorithm to handle unseen environments and subjects with few additional data points. 
We evaluate BeamSense through an extensive data collection campaign with three subjects performing twenty different activities in three different environments. We show that our BFI-based approach achieves about 10% more accuracy when compared to CSI-based prior work, while our FSL strategy improves accuracy by up to 30% and 80% when compared with state-of-the-art cross-domain algorithms.", + "neighbors": [ + 730, + 851 + ], + "mask": "Train" + }, + { + "node_id": 29, + "label": 13, + "text": "Title: Simplicial Hopfield networks\nAbstract: Hopfield networks are artificial neural networks which store memory patterns on the states of their neurons by choosing recurrent connection weights and update rules such that the energy landscape of the network forms attractors around the memories. How many stable, sufficiently-attracting memory patterns can we store in such a network using $N$ neurons? The answer depends on the choice of weights and update rule. Inspired by setwise connectivity in biology, we extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex. Simplicial complexes are higher dimensional analogues of graphs which naturally represent collections of pairwise and setwise relationships. We show that our simplicial Hopfield networks increase memory storage capacity. Surprisingly, even when connections are limited to a small random subset of equivalent size to an all-pairwise network, our networks still outperform their pairwise counterparts. Such scenarios include non-trivial simplicial topology. We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 30, + "label": 30, + "text": "Title: AspectCSE: Sentence Embeddings for Aspect-based Semantic Textual Similarity using Contrastive Learning and Structured Knowledge\nAbstract: Generic sentence embeddings provide a coarse-grained approximation of semantic textual similarity but ignore specific aspects that make texts similar. Conversely, aspect-based sentence embeddings provide similarities between texts based on certain predefined aspects. Thus, similarity predictions of texts are more targeted to specific requirements and more easily explainable. In this paper, we present AspectCSE, an approach for aspect-based contrastive learning of sentence embeddings. Results indicate that AspectCSE achieves an average improvement of 3.97% on information retrieval tasks across multiple aspects compared to the previous best results. We also propose using Wikidata knowledge graph properties to train models of multi-aspect sentence embeddings in which multiple specific aspects are simultaneously considered during similarity predictions. We demonstrate that multi-aspect embeddings outperform single-aspect embeddings on aspect-specific information retrieval tasks. Finally, we examine the aspect-based sentence embedding space and demonstrate that embeddings of semantically similar aspect labels are often close, even without explicit similarity training between different aspect labels.", + "neighbors": [ + 168 + ], + "mask": "Train" + }, + { + "node_id": 31, + "label": 30, + "text": "Title: Low-Resourced Machine Translation for Senegalese Wolof Language\nAbstract: Natural Language Processing (NLP) research has made great advancements in recent years with major breakthroughs that have established new benchmarks. 
However, these advances have mainly benefited a certain group of languages commonly referred to as resource-rich such as English and French. Majority of other languages with weaker resources are then left behind which is the case for most African languages including Wolof. In this work, we present a parallel Wolof/French corpus of 123,000 sentences on which we conducted experiments on machine translation models based on Recurrent Neural Networks (RNN) in different data configurations. We noted performance gains with the models trained on subworded data as well as those trained on the French-English language pair compared to those trained on the French-Wolof pair under the same experimental conditions.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 32, + "label": 24, + "text": "Title: Outlier Robust Adversarial Training\nAbstract: Supervised learning models are challenged by the intrinsic complexities of training data such as outliers and minority subpopulations and intentional attacks at inference time with adversarial samples. While traditional robust learning methods and the recent adversarial training approaches are designed to handle each of the two challenges, to date, no work has been done to develop models that are robust with regard to the low-quality training data and the potential adversarial attack at inference time simultaneously. It is for this reason that we introduce Outlier Robust Adversarial Training (ORAT) in this work. ORAT is based on a bi-level optimization formulation of adversarial training with a robust rank-based loss function. Theoretically, we show that the learning objective of ORAT satisfies the $\\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate to adversarial 0/1 loss. Furthermore, we analyze its generalization ability and provide uniform convergence rates in high probability. ORAT can be optimized with a simple algorithm. Experimental evaluations on three benchmark datasets demonstrate the effectiveness and robustness of ORAT in handling outliers and adversarial attacks. Our code is available at https://github.com/discovershu/ORAT.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 33, + "label": 16, + "text": "Title: BandRe: Rethinking Band-Pass Filters for Scale-Wise Object Detection Evaluation\nAbstract: Scale-wise evaluation of object detectors is important for real-world applications. However, existing metrics are either coarse or not sufficiently reliable. In this paper, we propose novel scale-wise metrics that strike a balance between fineness and reliability, using a filter bank consisting of triangular and trapezoidal band-pass filters. We conduct experiments with two methods on two datasets and show that the proposed metrics can highlight the differences between the methods and between the datasets. Code is available at https://github.com/shinya7y/UniverseNet.", + "neighbors": [ + 700 + ], + "mask": "Test" + }, + { + "node_id": 34, + "label": 16, + "text": "Title: VisionLLM: Large Language Model is also an Open-Ended Decoder for Vision-Centric Tasks\nAbstract: Large language models (LLMs) have notably accelerated progress towards artificial general intelligence (AGI), with their impressive zero-shot capacity for user-tailored tasks, endowing them with immense potential across a range of applications. 
However, in the field of computer vision, despite the availability of numerous powerful vision foundation models (VFMs), they are still restricted to tasks in a pre-defined form, struggling to match the open-ended task capabilities of LLMs. In this work, we present an LLM-based framework for vision-centric tasks, termed VisionLLM. This framework provides a unified perspective for vision and language tasks by treating images as a foreign language and aligning vision-centric tasks with language tasks that can be flexibly defined and managed using language instructions. An LLM-based decoder can then make appropriate predictions based on these instructions for open-ended tasks. Extensive experiments show that the proposed VisionLLM can achieve different levels of task customization through language instructions, from fine-grained object-level to coarse-grained task-level customization, all with good results. It's noteworthy that, with a generalist LLM-based framework, our model can achieve over 60\\% mAP on COCO, on par with detection-specific models. We hope this model can set a new baseline for generalist vision and language models. The demo shall be released based on https://github.com/OpenGVLab/InternGPT. The code shall be released at https://github.com/OpenGVLab/VisionLLM.", + "neighbors": [ + 173, + 319, + 392, + 602, + 719, + 836, + 855, + 887, + 1052, + 1071, + 1315, + 1459, + 1540, + 1913, + 2030, + 2036, + 2064, + 2155, + 2203, + 2216 + ], + "mask": "Train" + }, + { + "node_id": 35, + "label": 6, + "text": "Title: Animation Fidelity in Self-Avatars: Impact on User Performance and Sense of Agency\nAbstract: The use of self-avatars is gaining popularity thanks to affordable VR headsets. Unfortunately, mainstream VR devices often use a small number of trackers and provide low-accuracy animations. Previous studies have shown that the Sense of Embodiment, and in particular the Sense of Agency, depends on the extent to which the avatar's movements mimic the user's movements. However, few works study such effect for tasks requiring a precise interaction with the environment, i.e., tasks that require accurate manipulation, precise foot stepping, or correct body poses. In these cases, users are likely to notice inconsistencies between their self-avatars and their actual pose. In this paper, we study the impact of the animation fidelity of the user avatar on a variety of tasks that focus on arm movement, leg movement and body posture. We compare three different animation techniques: two of them using Inverse Kinematics to reconstruct the pose from sparse input (6 trackers), and a third one using a professional motion capture system with 17 inertial sensors. We evaluate these animation techniques both quantitatively (completion time, unintentional collisions, pose accuracy) and qualitatively (Sense of Embodiment). Our results show that the animation quality affects the Sense of Embodiment. Inertial-based MoCap performs significantly better in mimicking body poses. 
Surprisingly, IK-based solutions using fewer sensors outperformed MoCap in tasks requiring accurate positioning, which we attribute to the higher latency and the positional drift that causes errors at the end-effectors, which are more noticeable in contact areas such as the feet.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 36, + "label": 30, + "text": "Title: Large Language Models for Information Retrieval: A Survey\nAbstract: As a primary means of information acquisition, information retrieval (IR) systems, such as search engines, have integrated themselves into our daily lives. These systems also serve as components of dialogue, question-answering, and recommender systems. The trajectory of IR has evolved dynamically from its origins in term-based methods to its integration with advanced neural models. While the neural models excel at capturing complex contextual signals and semantic nuances, thereby reshaping the IR landscape, they still face challenges such as data scarcity, interpretability, and the generation of contextually plausible yet potentially inaccurate responses. This evolution requires a combination of both traditional methods (such as term-based sparse retrieval methods with rapid response) and modern neural architectures (such as language models with powerful language understanding capacity). Meanwhile, the emergence of large language models (LLMs), typified by ChatGPT and GPT-4, has revolutionized natural language processing due to their remarkable language understanding, generation, generalization, and reasoning abilities. Consequently, recent research has sought to leverage LLMs to improve IR systems. Given the rapid evolution of this research trajectory, it is necessary to consolidate existing methodologies and provide nuanced insights through a comprehensive overview. In this survey, we delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers. Additionally, we explore promising directions within this expanding field.", + "neighbors": [ + 3, + 424, + 529, + 1052, + 1194, + 1203, + 1481, + 1560, + 1678, + 1834, + 1863, + 1915, + 2013, + 2113 + ], + "mask": "Train" + }, + { + "node_id": 37, + "label": 22, + "text": "Title: Exact Bayesian Inference on Discrete Models via Probability Generating Functions: A Probabilistic Programming Approach\nAbstract: We present an exact Bayesian inference method for discrete statistical models, which can find exact solutions to many discrete inference problems, even with infinite support and continuous priors. To express such models, we introduce a probabilistic programming language that supports discrete and continuous sampling, discrete observations, affine functions, (stochastic) branching, and conditioning on events. Our key tool is probability generating functions: they provide a compact closed-form representation of distributions that are definable by programs, thus enabling the exact computation of posterior probabilities, expectation, variance, and higher moments. Our inference method is provably correct, fully automated and uses automatic differentiation (specifically, Taylor polynomials), but does not require computer algebra. 
Our experiments show that its performance on a range of real-world examples is competitive with approximate Monte Carlo methods, while avoiding approximation errors.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 38, + "label": 24, + "text": "Title: FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods\nAbstract: This paper introduces the Fair Fairness Benchmark (\\textsf{FFB}), a benchmarking framework for in-processing group fairness methods. Ensuring fairness in machine learning is critical for ethical and legal compliance. However, there exist challenges in comparing and developing of fairness methods due to inconsistencies in experimental settings, lack of accessible algorithmic implementations, and limited extensibility of current fairness packages and tools. To address these issues, we introduce an open-source, standardized benchmark for evaluating in-processing group fairness methods and provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness. This work offers the following key contributions: the provision of flexible, extensible, minimalistic, and research-oriented open-source code; the establishment of unified fairness method benchmarking pipelines; and extensive benchmarking, which yields key insights from $\\mathbf{45,079}$ experiments. We believe our work will significantly facilitate the growth and development of the fairness research community. The benchmark, including code and running logs, is available at https://github.com/ahxt/fair_fairness_benchmark", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 39, + "label": 24, + "text": "Title: Distributionally Robust Recourse Action\nAbstract: A recourse action aims to explain a particular algorithmic decision by showing one specific way in which the instance could be modified to receive an alternate outcome. Existing recourse generation methods often assume that the machine learning model does not change over time. However, this assumption does not always hold in practice because of data distribution shifts, and in this case, the recourse action may become invalid. To redress this shortcoming, we propose the Distributionally Robust Recourse Action (DiRRAc) framework, which generates a recourse action that has a high probability of being valid under a mixture of model shifts. We formulate the robustified recourse setup as a min-max optimization problem, where the max problem is specified by Gelbrich distance over an ambiguity set around the distribution of model parameters. Then we suggest a projected gradient descent algorithm to find a robust recourse according to the min-max objective. We show that our DiRRAc framework can be extended to hedge against the misspecification of the mixture weights. Numerical experiments with both synthetic and three real-world datasets demonstrate the benefits of our proposed framework over state-of-the-art recourse methods.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 40, + "label": 16, + "text": "Title: Hypotheses Tree Building for One-Shot Temporal Sentence Localization\nAbstract: Given an untrimmed video, temporal sentence localization (TSL) aims to localize a specific segment according to a given sentence query. Though respectable works have made decent achievements in this task, they severely rely on dense video frame annotations, which require a tremendous amount of human effort to collect. 
In this paper, we target another more practical and challenging setting: one-shot temporal sentence localization (one-shot TSL), which learns to retrieve the query information among the entire video with only one annotated frame. Particularly, we propose an effective and novel tree-structure baseline for one-shot TSL, called Multiple Hypotheses Segment Tree (MHST), to capture the query-aware discriminative frame-wise information under the insufficient annotations. Each video frame is taken as the leaf-node, and the adjacent frames sharing the same visual-linguistic semantics will be merged into the upper non-leaf node for tree building. At last, each root node is an individual segment hypothesis containing the consecutive frames of its leaf-nodes. During the tree construction, we also introduce a pruning strategy to eliminate the interference of query-irrelevant nodes. With our designed self-supervised loss functions, our MHST is able to generate high-quality segment hypotheses for ranking and selection with the query. Experiments on two challenging datasets demonstrate that MHST achieves competitive performance compared to existing methods.", + "neighbors": [ + 1732, + 2287 + ], + "mask": "Train" + }, + { + "node_id": 41, + "label": 16, + "text": "Title: Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution\nAbstract: Visibility in hazy nighttime scenes is frequently reduced by multiple factors, including low light, intense glow, light scattering, and the presence of multicolored light sources. Existing nighttime dehazing methods often struggle with handling glow or low-light conditions, resulting in either excessively dark visuals or unsuppressed glow outputs. In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions. To handle glow effects, our framework learns from the rendered glow pairs. Specifically, a light source aware network is proposed to detect light sources of night images, followed by the APSF (Angular Point Spread Function)-guided glow rendering. Our framework is then trained on the rendered images, resulting in glow suppression. Moreover, we utilize gradient-adaptive convolution, to capture edges and textures in hazy scenes. By leveraging extracted edges and textures, we enhance the contrast of the scene without losing important structural details. To boost low-light intensity, our network learns an attention map, then adjusted by gamma correction. This attention has high values on low-light regions and low values on haze and glow regions. Extensive evaluation on real nighttime haze images, demonstrates the effectiveness of our method. Our experiments demonstrate that our method achieves a PSNR of 30.38dB, outperforming state-of-the-art methods by 13$\\%$ on GTA5 nighttime haze dataset. Our data and code is available at: \\url{https://github.com/jinyeying/nighttime_dehaze}.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 42, + "label": 30, + "text": "Title: Can AI-Generated Text be Reliably Detected?\nAbstract: In this paper, both empirically and theoretically, we show that several AI-text detectors are not reliable in practical scenarios. Empirically, we show that paraphrasing attacks, where a light paraphraser is applied on top of a large language model (LLM), can break a whole range of detectors, including ones using watermarking schemes as well as neural network-based detectors and zero-shot classifiers. 
Our experiments demonstrate that retrieval-based detectors, designed to resist paraphrasing attacks, are still vulnerable to recursive paraphrasing. We then provide a theoretical impossibility result indicating that as language models become more sophisticated and better at emulating human text, the performance of even the best-possible detector decreases. For a sufficiently advanced language model seeking to imitate human text, even the best-possible detector may only perform marginally better than a random classifier. Our result is general enough to capture specific scenarios such as particular writing styles, clever prompt design, or text paraphrasing. We also extend the impossibility result to include the case where pseudorandom number generators are used for AI-text generation instead of true randomness. We show that the same result holds with a negligible correction term for all polynomial-time computable detectors. Finally, we show that even LLMs protected by watermarking schemes can be vulnerable to spoofing attacks where adversarial humans can infer hidden LLM text signatures and add them to human-generated text to be detected as text generated by the LLMs, potentially causing reputational damage to their developers. We believe these results can open an honest conversation in the community regarding the ethical and reliable use of AI-generated text.", + "neighbors": [ + 27, + 352, + 580, + 691, + 896, + 1436, + 1487, + 1574, + 1600, + 1805, + 1863, + 2044, + 2249 + ], + "mask": "Validation" + }, + { + "node_id": 43, + "label": 16, + "text": "Title: Diffusion Models for Image Restoration and Enhancement - A Comprehensive Survey\nAbstract: Image restoration (IR) has been an indispensable and challenging task in the low-level vision field, which strives to improve the subjective quality of images distorted by various forms of degradation. Recently, the diffusion model has achieved significant advancements in the visual generation of AIGC, thereby raising an intuitive question: \"can diffusion models boost image restoration?\" To answer this, some pioneering studies attempt to integrate diffusion models into the image restoration task, resulting in superior performance to previous GAN-based methods. Despite that, a comprehensive and enlightening survey on diffusion model-based image restoration remains scarce. In this paper, we are the first to present a comprehensive review of recent diffusion model-based methods on image restoration, encompassing the learning paradigm, conditional strategy, framework design, modeling strategy, and evaluation. Concretely, we first introduce the background of the diffusion model briefly and then present two prevalent workflows that exploit diffusion models in image restoration. Subsequently, we classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR, intending to inspire future development. To evaluate existing methods thoroughly, we summarize the commonly-used datasets, implementation details, and evaluation metrics. Additionally, we present an objective comparison of open-sourced methods across three tasks, including image super-resolution, deblurring, and inpainting.
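Aside: objective comparisons of the kind mentioned above are typically reported with PSNR. A standard helper, assuming images with values in [0, max_val]:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")        # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 128.0)
out = ref + np.random.normal(0, 5, ref.shape)   # simulated restoration error
print(f"PSNR: {psnr(ref, out):.2f} dB")
```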
Ultimately, informed by the limitations in existing works, we propose five potential and challenging directions for the future research of diffusion model-based IR, including sampling efficiency, model compression, distortion simulation and estimation, distortion invariant learning, and framework design.", + "neighbors": [ + 732, + 800, + 1173, + 1539, + 1902 + ], + "mask": "Validation" + }, + { + "node_id": 44, + "label": 24, + "text": "Title: GAD-NR: Graph Anomaly Detection via Neighborhood Reconstruction\nAbstract: Graph Anomaly Detection (GAD) is a technique used to identify abnormal nodes within graphs, finding applications in network security, fraud detection, social media spam detection, and various other domains. A common method for GAD is Graph Auto-Encoders (GAEs), which encode graph data into node representations and identify anomalies by assessing the reconstruction quality of the graphs based on these representations. However, existing GAE models are primarily optimized for direct link reconstruction, resulting in nodes connected in the graph being clustered in the latent space. As a result, they excel at detecting cluster-type structural anomalies but struggle with more complex structural anomalies that do not conform to clusters. To address this limitation, we propose a novel solution called GAD-NR, a new variant of GAE that incorporates neighborhood reconstruction for graph anomaly detection. GAD-NR aims to reconstruct the entire neighborhood of a node, encompassing the local structure, self-attributes, and neighbor attributes, based on the corresponding node representation. By comparing the neighborhood reconstruction loss between anomalous nodes and normal nodes, GAD-NR can effectively detect any anomalies. Extensive experimentation conducted on six real-world datasets validates the effectiveness of GAD-NR, showcasing significant improvements (by up to 30% in AUC) over state-of-the-art competitors. The source code for GAD-NR is openly available. Importantly, the comparative analysis reveals that the existing methods perform well only in detecting one or two types of anomalies out of the three types studied. In contrast, GAD-NR excels at detecting all three types of anomalies across the datasets, demonstrating its comprehensive anomaly detection capabilities.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 45, + "label": 16, + "text": "Title: K-Diag: Knowledge-enhanced Disease Diagnosis in Radiographic Imaging\nAbstract: In this paper, we consider the problem of disease diagnosis. Unlike the conventional learning paradigm that treats labels independently, we propose a knowledge-enhanced framework, that enables training visual representation with the guidance of medical domain knowledge. In particular, we make the following contributions: First, to explicitly incorporate experts' knowledge, we propose to learn a neural representation for the medical knowledge graph via contrastive learning, implicitly establishing relations between different medical concepts. Second, while training the visual encoder, we keep the parameters of the knowledge encoder frozen and propose to learn a set of prompt vectors for efficient adaptation. Third, we adopt a Transformer-based disease-query module for cross-model fusion, which naturally enables explainable diagnosis results via cross attention. 
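Aside: the cross-attention underlying the disease-query module described above can be sketched in a few lines of numpy. The single-head setup and the shapes here are assumptions chosen for illustration, not the paper's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """queries: (Q, d); keys/values: (K, d) -> attended output (Q, d)."""
    d = queries.shape[-1]
    attn = softmax(queries @ keys.T / np.sqrt(d))   # (Q, K) attention weights
    return attn @ values, attn

disease_queries = np.random.randn(14, 64)   # e.g., one query per finding
image_tokens = np.random.randn(49, 64)      # e.g., a 7x7 visual feature map
out, attn = cross_attention(disease_queries, image_tokens, image_tokens)
print(out.shape, attn.shape)                # (14, 64) (14, 49)
# The rows of `attn` are what make the diagnosis explainable: each disease
# query exposes which image regions it attended to.
```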
To validate the effectiveness of our proposed framework, we conduct thorough experiments on three x-ray imaging datasets across different anatomy structures, showing our model is able to exploit the implicit relations between diseases/findings, and is thus beneficial for a commonly encountered problem in the medical domain, namely long-tailed and zero-shot recognition, which conventional methods either struggle with or fail to handle entirely.", + "neighbors": [ + 607 + ], + "mask": "Train" + }, + { + "node_id": 46, + "label": 4, + "text": "Title: Privacy Dashboards for Citizens and GDPR Services for Small Data Holders: A Literature Review\nAbstract: Citizens have gained many rights with the GDPR, e.g. the right to get a copy of their personal data. In practice, however, this is fraught with problems for citizens and small data holders. We present a literature review on solutions promising relief in the form of privacy dashboards for citizens and GDPR services for small data holders. Covered topics are analyzed, categorized and compared. This ought to be a step towards both enabling citizens to exercise their GDPR rights and supporting small data holders to comply with their GDPR duties.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 47, + "label": 16, + "text": "Title: OpenOccupancy: A Large Scale Benchmark for Surrounding Semantic Occupancy Perception\nAbstract: Semantic occupancy perception is essential for autonomous driving, as automated vehicles require a fine-grained perception of the 3D urban structures. However, existing relevant benchmarks lack diversity in urban scenes, and they only evaluate front-view predictions. Towards a comprehensive benchmarking of surrounding perception algorithms, we propose OpenOccupancy, which is the first surrounding semantic occupancy perception benchmark. In the OpenOccupancy benchmark, we extend the large-scale nuScenes dataset with dense semantic occupancy annotations. Previous annotations rely on LiDAR points superimposition, where some occupancy labels are missed due to sparse LiDAR channels. To mitigate the problem, we introduce the Augmenting And Purifying (AAP) pipeline to ~2x densify the annotations, where ~4000 human hours are involved in the labeling process. Besides, camera-based, LiDAR-based and multi-modal baselines are established for the OpenOccupancy benchmark. Furthermore, considering the complexity of surrounding occupancy perception lies in the computational burden of high-resolution 3D predictions, we propose the Cascade Occupancy Network (CONet) to refine the coarse prediction, which relatively enhances the performance by ~30% over the baseline. We hope the OpenOccupancy benchmark will boost the development of surrounding occupancy perception algorithms.", + "neighbors": [ + 1571, + 2198, + 2308 + ], + "mask": "Train" + }, + { + "node_id": 48, + "label": 16, + "text": "Title: Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions\nAbstract: We propose a method for editing NeRF scenes with text instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction.
We demonstrate that our proposed method can edit large-scale, real-world scenes and accomplishes more realistic, targeted edits than prior work.", + "neighbors": [ + 286, + 330, + 624, + 1205, + 1355, + 1773 + ], + "mask": "Validation" + }, + { + "node_id": 49, + "label": 24, + "text": "Title: A novel framework for handling sparse data in traffic forecast\nAbstract: The ever-increasing number of GPS-equipped vehicles provides valuable real-time traffic information for the roads traversed by the moving vehicles. In this way, a set of sparse and time-evolving traffic reports is generated for each road. These time series are a valuable asset for forecasting future traffic conditions. In this paper we present a deep learning framework that encodes the sparse recent traffic information and forecasts the future traffic condition. Our framework consists of a recurrent part and a decoder. The recurrent part employs an attention mechanism that encodes the traffic reports that are available at a particular time window. The decoder is responsible for forecasting the future traffic condition.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 50, + "label": 16, + "text": "Title: Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval\nAbstract: Text-to-image person retrieval aims to identify the target person based on a given textual description query. The primary challenge is to learn the mapping of visual and textual modalities into a common latent space. Prior works have attempted to address this challenge by leveraging separately pre-trained unimodal models to extract visual and textual features. However, these approaches lack the necessary underlying alignment capabilities required to match multimodal data effectively. Besides, these works use prior information to explore explicit part alignments, which may lead to the distortion of intra-modality information. To alleviate these issues, we present IRRA: a cross-modal Implicit Relation Reasoning and Aligning framework that learns relations between local visual-textual tokens and enhances global image-text matching without requiring additional prior supervision. Specifically, we first design an Implicit Relation Reasoning module in a masked language modeling paradigm. This achieves cross-modal interaction by integrating the visual cues into the textual tokens with a cross-modal multimodal interaction encoder. Secondly, to globally align the visual and textual embeddings, Similarity Distribution Matching is proposed to minimize the KL divergence between image-text similarity distributions and the normalized label matching distributions. The proposed method achieves new state-of-the-art results on all three public datasets, with a notable margin of about 3%-9% for Rank-1 accuracy compared to prior methods.", + "neighbors": [ + 2179 + ], + "mask": "Train" + }, + { + "node_id": 51, + "label": 10, + "text": "Title: Notation3 as an Existential Rule Language\nAbstract: Notation3 Logic (\\nthree) is an extension of RDF that allows the user to write rules introducing new blank nodes to RDF graphs. Many applications (e.g., ontology mapping) rely on this feature as blank nodes -- used directly or in auxiliary constructs -- are omnipresent on the Web. However, the number of fast \\nthree reasoners covering this very important feature of the logic is rather limited.
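Aside: the Similarity Distribution Matching objective from the IRRA abstract above amounts to a KL divergence between softmax-normalized image-text similarities and normalized label-match distributions. A sketch, where the temperature and normalization details are assumptions of this example rather than the paper's exact recipe:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sdm_loss(img_emb, txt_emb, labels, tau=0.02, eps=1e-8):
    sim = img_emb @ txt_emb.T                      # (B, B) similarity scores
    p = softmax(sim / tau)                         # predicted match distribution
    match = (labels[:, None] == labels[None, :]).astype(float)
    q = match / match.sum(axis=1, keepdims=True)   # normalized label matching
    # Mean row-wise KL(p || q), smoothed with eps for numerical safety.
    return float(np.mean(np.sum(p * np.log((p + eps) / (q + eps)), axis=1)))

img = np.random.randn(4, 32); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = np.random.randn(4, 32); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
print(sdm_loss(img, txt, labels=np.array([0, 0, 1, 2])))
```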
On the other hand, there are engines like VLog or Nemo which do not directly support Semantic Web rule formats but which are developed and optimized for very similar constructs: existential rules. In this paper, we investigate the relation between \\nthree rules with blank nodes in their heads and existential rules. We identify a subset of \\nthree which can be mapped directly to existential rules and define such a mapping preserving the equivalence of \\nthree formulae. In order to also illustrate that in some cases \\nthree reasoning could benefit from our translation, we then employ this mapping in an implementation to compare the performance of the \\nthree reasoners EYE and cwm to VLog and Nemo on \\nthree rules and their mapped counterparts. Our tests show that the existential rule reasoners perform particularly well for use cases containing many facts while especially the EYE reasoner is very fast when dealing with a high number of dependent rules. We thus provide a tool enabling the Semantic Web community to directly use existing and future existential rule reasoners and benefit from the findings of this active community.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 52, + "label": 16, + "text": "Title: Fast Full-frame Video Stabilization with Iterative Optimization\nAbstract: Video stabilization refers to the problem of transforming a shaky video into a visually pleasing one. The question of how to strike a good trade-off between visual quality and computational speed has remained one of the open challenges in video stabilization. Inspired by the analogy between wobbly frames and jigsaw puzzles, we propose an iterative optimization-based learning approach using synthetic datasets for video stabilization, which consists of two interacting submodules: motion trajectory smoothing and full-frame outpainting. First, we develop a two-level (coarse-to-fine) stabilizing algorithm based on the probabilistic flow field. The confidence map associated with the estimated optical flow is exploited to guide the search for shared regions through backpropagation. Second, we take a divide-and-conquer approach and propose a novel multiframe fusion strategy to render full-frame stabilized views. An important new insight brought about by our iterative optimization approach is that the target video can be interpreted as the fixed point of nonlinear mapping for video stabilization. We formulate video stabilization as a problem of minimizing the amount of jerkiness in motion trajectories, which guarantees convergence with the help of fixed-point theory. Extensive experimental results are reported to demonstrate the superiority of the proposed approach in terms of computational speed and visual quality. The code will be available on GitHub.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 53, + "label": 10, + "text": "Title: Decentralized Adaptive Formation via Consensus-Oriented Multi-Agent Communication\nAbstract: Adaptive multi-agent formation control, which requires the formation to flexibly adjust along with the quantity variations of agents in a decentralized manner, belongs to one of the most challenging issues in multi-agent systems, especially under communication-limited constraints. In this paper, we propose a novel Consensus-based Decentralized Adaptive Formation (Cons-DecAF) framework. 
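Aside: the fixed-point interpretation from the video stabilization abstract above reduces, in miniature, to iterating x <- f(x) for a contraction mapping until the update is negligible. The toy mapping below is an assumption for illustration; in the paper the mapping is the far more complex stabilize-and-outpaint update on video frames.

```python
import numpy as np

def fixed_point_iterate(f, x0, tol=1e-8, max_iter=1000):
    x = x0
    for k in range(max_iter):
        x_next = f(x)
        if np.linalg.norm(x_next - x) < tol:   # converged: x is a fixed point
            return x_next, k + 1
        x = x_next
    return x, max_iter

f = lambda x: 0.5 * x + 1.0                    # contraction with fixed point 2.0
x_star, iters = fixed_point_iterate(f, np.array([10.0]))
print(x_star, iters)                           # ~[2.0] after a few dozen steps
```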
Specifically, we develop a novel multi-agent reinforcement learning method, Consensus-oriented Multi-Agent Communication (ConsMAC), to enable agents to perceive global information and establish the consensus from local states by effectively aggregating neighbor messages. Afterwards, we leverage policy distillation to accomplish the adaptive formation adjustment. Meanwhile, instead of pre-assigning specific positions of agents, we employ a displacement-based formation by Hausdorff distance to significantly improve the formation efficiency. The experimental results through extensive simulations validate that the proposed method has achieved outstanding performance in terms of both speed and stability.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 54, + "label": 16, + "text": "Title: RiDDLE: Reversible and Diversified De-identification with Latent Encryptor\nAbstract: This work presents RiDDLE, short for Reversible and Diversified De-identification with Latent Encryptor, to protect the identity information of people from being misused. Built upon a pre-learned StyleGAN2 generator, RiDDLE manages to encrypt and decrypt the facial identity within the latent space. The design of RiDDLE has three appealing properties. First, the encryption process is cipher-guided and hence allows diverse anonymization using different passwords. Second, the true identity can only be decrypted with the correct password, otherwise the system will produce another de-identified face to maintain the privacy. Third, both encryption and decryption share an efficient implementation, benefiting from a carefully tailored lightweight encryptor. Comparisons with existing alternatives confirm that our approach accomplishes the de-identification task with better quality, higher diversity, and stronger reversibility. We further demonstrate the effectiveness of RiDDLE in anonymizing videos. Code and models will be made publicly available.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 55, + "label": 24, + "text": "Title: Few-shot Node Classification with Extremely Weak Supervision\nAbstract: Few-shot node classification aims at classifying nodes with limited labeled nodes as references. Recent few-shot node classification methods typically learn from classes with abundant labeled nodes (i.e., meta-training classes) and then generalize to classes with limited labeled nodes (i.e., meta-test classes). Nevertheless, on real-world graphs, it is usually difficult to obtain abundant labeled nodes for many classes. In practice, each meta-training class can only consist of several labeled nodes, known as the extremely weak supervision problem. In few-shot node classification, with extremely limited labeled nodes for meta-training, the generalization gap between meta-training and meta-test will become larger and thus lead to suboptimal performance. To tackle this issue, we study a novel problem of few-shot node classification with extremely weak supervision and propose a principled framework X-FNC under the prevalent meta-learning framework. Specifically, our goal is to accumulate meta-knowledge across different meta-training tasks with extremely weak supervision and generalize such knowledge to meta-test tasks. To address the challenges resulting from extremely scarce labeled nodes, we propose two essential modules to obtain pseudo-labeled nodes as extra references and effectively learn from extremely limited supervision information. 
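Aside: the displacement-based formation objective via Hausdorff distance, mentioned in the Cons-DecAF abstract above, can be evaluated with scipy. Centering both point sets as the way to factor out the global displacement is an assumption of this sketch:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def formation_error(achieved: np.ndarray, target: np.ndarray) -> float:
    # Remove the global displacement by centering both point sets.
    a = achieved - achieved.mean(axis=0)
    t = target - target.mean(axis=0)
    # Symmetric Hausdorff distance: max over the two directed distances.
    return max(directed_hausdorff(a, t)[0], directed_hausdorff(t, a)[0])

target = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])   # triangle formation
achieved = target + np.array([5.0, 3.0]) + 0.01 * np.random.randn(3, 2)
print(formation_error(achieved, target))   # small: the shape matches despite the shift
```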
We further conduct extensive experiments on four node classification datasets with extremely weak supervision to validate the superiority of our framework compared to the state-of-the-art baselines.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 56, + "label": 24, + "text": "Title: A Coreset Learning Reality Check\nAbstract: Subsampling algorithms are a natural approach to reduce data size before fitting models on massive datasets. In recent years, several works have proposed methods for subsampling rows from a data matrix while maintaining relevant information for classification. While these works are supported by theory and limited experiments, to date there has not been a comprehensive evaluation of these methods. In our work, we directly compare multiple methods for logistic regression drawn from the coreset and optimal subsampling literature and discover inconsistencies in their effectiveness. In many cases, methods do not outperform simple uniform subsampling.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 57, + "label": 30, + "text": "Title: Toolformer: Language Models Can Teach Themselves to Use Tools\nAbstract: Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q\\&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.", + "neighbors": [ + 0, + 81, + 118, + 127, + 173, + 183, + 240, + 363, + 401, + 505, + 618, + 704, + 817, + 818, + 902, + 945, + 989, + 1001, + 1026, + 1128, + 1182, + 1197, + 1238, + 1262, + 1306, + 1327, + 1348, + 1353, + 1467, + 1659, + 1667, + 1720, + 1863, + 1877, + 1878, + 1893, + 1915, + 2104, + 2113, + 2166, + 2184, + 2216, + 2235 + ], + "mask": "Train" + }, + { + "node_id": 58, + "label": 24, + "text": "Title: Learning from Hypervectors: A Survey on Hypervector Encoding\nAbstract: Hyperdimensional computing (HDC) is an emerging computing paradigm that imitates the brain's structure to offer a powerful and efficient processing and learning model. In HDC, the data are encoded with long vectors, called hypervectors, typically with a length of 1K to 10K. The literature provides several encoding techniques to generate orthogonal or correlated hypervectors, depending on the intended application. The existing surveys in the literature often focus on the overall aspects of HDC systems, including system inputs, primary computations, and final outputs. However, this study takes a more specific approach. It zeroes in on the HDC system input and the generation of hypervectors, directly influencing the hypervector encoding process. 
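Aside: Toolformer's inline API calls can be illustrated with a tiny executor. The bracketed [Calculator(...)] syntax follows the paper's examples, but the regex, the whitelist-checked arithmetic evaluation, and the splicing of results back into the text are assumptions of this sketch, not the paper's implementation:

```python
import re

CALL_RE = re.compile(r"\[Calculator\(([^)]+)\)\]")

def run_calculator(expr: str) -> str:
    # Demo only: allow nothing but digits, arithmetic operators, and parens.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expr):
        raise ValueError(f"unsupported expression: {expr!r}")
    return f"{eval(expr):.2f}"          # safe here because of the whitelist

def execute_api_calls(text: str) -> str:
    """Replace each [Calculator(expr)] with [Calculator(expr) -> result]."""
    def splice(m: re.Match) -> str:
        expr = m.group(1)
        return f"[Calculator({expr}) -> {run_calculator(expr)}]"
    return CALL_RE.sub(splice, text)

print(execute_api_calls("Out of 1400 participants, 400 [Calculator(400/1400)] passed."))
```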
This survey brings together various methods for hypervector generation from different studies and explores the limitations, challenges, and potential benefits they entail. Through a comprehensive exploration of this survey, readers will acquire a profound understanding of various encoding types in HDC and gain insights into the intricate process of hypervector generation for diverse applications.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 59, + "label": 35, + "text": "Title: Unleashing Unprivileged eBPF Potential with Dynamic Sandboxing\nAbstract: For safety reasons, unprivileged users today have only limited ways to customize the kernel through the extended Berkeley Packet Filter (eBPF). This is unfortunate, especially since the eBPF framework itself has seen an increase in scope over the years. We propose SandBPF, a software-based kernel isolation technique that dynamically sandboxes eBPF programs to allow unprivileged users to safely extend the kernel, unleashing eBPF's full potential. Our early proof-of-concept shows that SandBPF can effectively prevent exploits missed by eBPF's native safety mechanism (i.e., static verification) while incurring 0%-10% overhead on web server benchmarks.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 60, + "label": 28, + "text": "Title: Reconfigurable Intelligent Surfaces to Enable Energy-Efficient IoT Networks\nAbstract: In this article, we study the uplink (UL) channel of a cellular network of Internet of Things (IoT) devices assisted by a reconfigurable intelligent surface (RIS) with a limited number of reflecting angle configurations. Firstly, we derive an expression of the required transmit power for the machine-type devices (MTDs) to attain a target signal-to-noise ratio (SNR), considering a channel model that accounts for the RIS discretization into sub-wavelength reflecting elements. Such an expression demonstrates that the transmit power depends on the target SNR, the position of the MTD in the service area, and the RIS setup, which includes the number of reflecting elements and the available reflecting angle configurations. Secondly, we develop an expression for the expected battery lifetime (EBL) of the MTDs, which explicitly depends on the MTD transmit power. Numerical simulations on the energy efficiency (EE) evaluated via the EBL demonstrate the benefits of adopting RISs to enable energy-efficient IoT networks.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 61, + "label": 36, + "text": "Title: Rational verification and checking for Nash and subgame-perfect equilibria in graph games\nAbstract: We study two natural problems about rational behaviors in multiplayer non-zero-sum sequential infinite duration games played on graphs: checking problems, that consist in deciding whether a strategy profile, defined by a Mealy machine, is rational; and rational verification, that consists in deciding whether all the rational answers to a given strategy satisfy some specification. 
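Aside: the hypervector operations surveyed above can be made concrete with the textbook bipolar encoding: binding by elementwise multiplication and bundling by majority vote. This generic sketch is not tied to any single surveyed method:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                   # typical hypervector dimension

def random_hv():
    return rng.choice([-1, 1], size=D)       # near-orthogonal by construction

def bind(a, b):
    return a * b                             # result is dissimilar to both inputs

def bundle(*hvs):
    return np.sign(np.sum(hvs, axis=0))      # result is similar to every input

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

role, filler = random_hv(), random_hv()
record = bundle(bind(role, filler), random_hv(), random_hv())
# Unbinding with `role` recovers something close to `filler`.
print(cosine(bind(record, role), filler))    # clearly above chance (~0)
```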
We give the complexities of those problems for two major concepts of rationality: Nash equilibria and subgame-perfect equilibria, and for five major classes of payoff functions: parity, mean-payoff, quantitative reachability, energy, and discounted-sum.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 62, + "label": 4, + "text": "Title: When GPT Meets Program Analysis: Towards Intelligent Detection of Smart Contract Logic Vulnerabilities in GPTScan\nAbstract: Smart contracts are prone to various vulnerabilities, leading to substantial financial losses over time. Current analysis tools mainly target vulnerabilities with fixed control or dataflow patterns, such as re-entrancy and integer overflow. However, a recent study on Web3 security bugs revealed that about 80% of these bugs cannot be audited by existing tools due to the lack of domain-specific property description and checking. Given recent advances in Generative Pretraining Transformer (GPT), it is worth exploring how GPT could aid in detecting logic vulnerabilities in smart contracts. In this paper, we propose GPTScan, the first tool combining GPT with static analysis for smart contract logic vulnerability detection. Instead of relying solely on GPT to identify vulnerabilities, which can lead to high false positives and is limited by GPT's pre-trained knowledge, we utilize GPT as a versatile code understanding tool. By breaking down each logic vulnerability type into scenarios and properties, GPTScan matches candidate vulnerabilities with GPT. To enhance accuracy, GPTScan further instructs GPT to intelligently recognize key variables and statements, which are then validated by static confirmation. Evaluation on diverse datasets with around 400 contract projects and 3K Solidity files shows that GPTScan achieves high precision (over 90%) for token contracts and acceptable precision (57.14%) for large projects like Web3Bugs. It effectively detects groundtruth logic vulnerabilities with a recall of over 80%, including 9 new vulnerabilities missed by human auditors. GPTScan is fast and cost-effective, taking an average of 14.39 seconds and 0.01 USD to scan per thousand lines of Solidity code. Moreover, static confirmation helps GPTScan reduce two-thirds of false positives.", + "neighbors": [ + 1527, + 1622 + ], + "mask": "Train" + }, + { + "node_id": 63, + "label": 16, + "text": "Title: Text2Tex: Text-driven Texture Synthesis via Diffusion Models\nAbstract: We present Text2Tex, a novel method for generating high-quality textures for 3D meshes from the given text prompts. Our method incorporates inpainting into a pre-trained depth-aware image diffusion model to progressively synthesize high resolution partial textures from multiple viewpoints. To avoid accumulating inconsistent and stretched artifacts across views, we dynamically segment the rendered view into a generation mask, which represents the generation status of each visible texel. This partitioned view representation guides the depth-aware inpainting model to generate and update partial textures for the corresponding regions. Furthermore, we propose an automatic view sequence generation scheme to determine the next best view for updating the partial texture. 
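Aside: a toy version of the next-best-view idea from the Text2Tex abstract above is a greedy selection of the view that newly covers the most texels. Representing each view by a boolean coverage mask is a simplification made for this sketch; the paper's actual criterion may differ:

```python
import numpy as np

def select_views(view_masks: np.ndarray, max_views: int):
    """view_masks: (num_views, num_texels) boolean coverage per viewpoint."""
    covered = np.zeros(view_masks.shape[1], dtype=bool)
    order = []
    for _ in range(max_views):
        gains = (view_masks & ~covered).sum(axis=1)   # newly covered texels
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                                     # nothing new to cover
        order.append(best)
        covered |= view_masks[best]
    return order, covered.mean()

rng = np.random.default_rng(1)
masks = rng.random((8, 1000)) < 0.3                   # 8 candidate viewpoints
order, coverage = select_views(masks, max_views=4)
print(order, f"coverage: {coverage:.0%}")
```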
Extensive experiments demonstrate that our method significantly outperforms the existing text-driven approaches and GAN-based methods.", + "neighbors": [ + 1205, + 1902, + 1905, + 2085, + 2117 + ], + "mask": "Validation" + }, + { + "node_id": 64, + "label": 24, + "text": "Title: MUDiff: Unified Diffusion for Complete Molecule Generation\nAbstract: Molecule generation is a very important practical problem, with uses in drug discovery and material design, and AI methods promise to provide useful solutions. However, existing methods for molecule generation focus either on 2D graph structure or on 3D geometric structure, which is not sufficient to represent a complete molecule as 2D graph captures mainly topology while 3D geometry captures mainly spatial atom arrangements. Combining these representations is essential to better represent a molecule. In this paper, we present a new model for generating a comprehensive representation of molecules, including atom features, 2D discrete molecule structures, and 3D continuous molecule coordinates, by combining discrete and continuous diffusion processes. The use of diffusion processes allows for capturing the probabilistic nature of molecular processes and exploring the effect of different factors on molecular structures. Additionally, we propose a novel graph transformer architecture to denoise the diffusion process. The transformer adheres to 3D roto-translation equivariance constraints, allowing it to learn invariant atom and edge representations while preserving the equivariance of atom coordinates. This transformer can be used to learn molecular representations robust to geometric transformations. We evaluate the performance of our model through experiments and comparisons with existing methods, showing its ability to generate more stable and valid molecules. Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.", + "neighbors": [ + 1922 + ], + "mask": "Train" + }, + { + "node_id": 65, + "label": 24, + "text": "Title: The Lie-Group Bayesian Learning Rule\nAbstract: The Bayesian Learning Rule provides a framework for generic algorithm design but can be difficult to use for three reasons. First, it requires a specific parameterization of exponential family. Second, it uses gradients which can be difficult to compute. Third, its update may not always stay on the manifold. We address these difficulties by proposing an extension based on Lie-groups where posteriors are parametrized through transformations of an arbitrary base distribution and updated via the group's exponential map. This simplifies all three difficulties for many cases, providing flexible parametrizations through group's action, simple gradient computation through reparameterization, and updates that always stay on the manifold. We use the new learning rule to derive a new algorithm for deep learning with desirable biologically-plausible attributes to learn sparse features. Our work opens a new frontier for the design of new algorithms by exploiting Lie-group structures.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 66, + "label": 36, + "text": "Title: Taming the Exponential Action Set: Sublinear Regret and Fast Convergence to Nash Equilibrium in Online Congestion Games\nAbstract: The congestion game is a powerful model that encompasses a range of engineering systems such as traffic networks and resource allocation. 
It describes the behavior of a group of agents who share a common set of $F$ facilities and take actions as subsets with $k$ facilities. In this work, we study the online formulation of congestion games, where agents participate in the game repeatedly and observe feedback with randomness. We propose CongestEXP, a decentralized algorithm that applies the classic exponential weights method. By maintaining weights on the facility level, the regret bound of CongestEXP avoids the exponential dependence on the size of possible facility sets, i.e., $\\binom{F}{k} \\approx F^k$, and scales only linearly with $F$. Specifically, we show that CongestEXP attains a regret upper bound of $O(kF\\sqrt{T})$ for every individual player, where $T$ is the time horizon. On the other hand, exploiting the exponential growth of weights enables CongestEXP to achieve a fast convergence rate. If a strict Nash equilibrium exists, we show that CongestEXP can converge to the strict Nash policy almost exponentially fast in $O(F\\exp(-t^{1-\\alpha}))$, where $t$ is the number of iterations and $\\alpha \\in (1/2, 1)$.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 67, + "label": 34, + "text": "Title: Long Directed Detours: Reduction to 2-Disjoint Paths\nAbstract: In the Longest (s, t)-Detour problem, we look for an (s, t)-path that is at least k vertices longer than a shortest one. We study the parameterized complexity of Longest (s, t)-Detour when parameterized by k: this falls into the research paradigm of ‘parameterization above guarantee’. Whereas the problem is known to be fixed-parameter tractable (FPT) on undirected graphs, the status of Longest (s, t)-Detour on directed graphs remains highly unclear: it is not even known to be solvable in polynomial time for k = 1. Recently, Fomin et al. made progress in this direction by showing that the problem is FPT on every class of directed graphs where the 3-Disjoint Paths problem is solvable in polynomial time. We improve upon their result by weakening this assumption: we show that only a polynomial-time algorithm for 2-Disjoint Paths is required. What is more, our approach yields an arguably simpler proof.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 68, + "label": 26, + "text": "Title: Impact of the COVID-19 outbreaks on the Italian Twitter vaccination debate: a network-based analysis\nAbstract: Vaccine hesitancy, or the reluctance to be vaccinated, is a phenomenon that has recently become particularly significant, in conjunction with the vaccination campaign against COVID-19. During the lockdown period, necessary to control the spread of the virus, social networks have played an important role in the Italian debate on vaccination, generally representing the easiest and safest way to exchange opinions and maintain some form of sociability. Among social network platforms, Twitter has assumed a strategic role in driving public opinion, creating compact groups of users sharing similar views towards the utility, uselessness or even dangerousness of vaccines. In this paper, we present a new, publicly available dataset of Italian tweets, TwitterVax, collected in the period January 2019--May 2022. Considering monthly data, gathered into forty-one retweet networks -- where nodes identify users and edges are present between users who have retweeted each other -- we performed community detection within the networks, analyzing their evolution and polarization with respect to NoVax and ProVax users through time.
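Aside: the retweet-network construction and community detection described above can be sketched with networkx. The toy edge list and the modularity-based algorithm are assumptions of this example; the paper's exact pipeline may differ:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Undirected retweet network: an edge links users who retweeted each other.
edges = [("u1", "u2"), ("u2", "u3"), ("u1", "u3"),     # one tight cluster
         ("u4", "u5"), ("u5", "u6"), ("u4", "u6"),     # another tight cluster
         ("u3", "u4")]                                 # weak bridge between them
G = nx.Graph(edges)

communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")               # the two clusters separate
```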
This allowed us to clearly discover debate trends as well as identify potential key moments and actors in opinion flows, characterizing the main features and tweeting behavior of the two communities.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 69, + "label": 30, + "text": "Title: Toxicity in ChatGPT: Analyzing Persona-assigned Language Models\nAbstract: Large language models (LLMs) have shown incredible capabilities and transcended the natural language processing (NLP) community, with adoption throughout many services like healthcare, therapy, education, and customer service. Since users include people with critical information needs like students or patients engaging with chatbots, the safety of these systems is of prime importance. Therefore, a clear understanding of the capabilities and limitations of LLMs is necessary. To this end, we systematically evaluate toxicity in over half a million generations of ChatGPT, a popular dialogue-based LLM. We find that setting the system parameter of ChatGPT by assigning it a persona, say that of the boxer Muhammad Ali, significantly increases the toxicity of generations. Depending on the persona assigned to ChatGPT, its toxicity can increase up to 6x, with outputs engaging in incorrect stereotypes, harmful dialogue, and hurtful opinions. This may be potentially defamatory to the persona and harmful to an unsuspecting user. Furthermore, we find concerning patterns where specific entities (e.g., certain races) are targeted more than others (3x more) irrespective of the assigned persona, that reflect inherent discriminatory biases in the model. We hope that our findings inspire the broader AI community to rethink the efficacy of current safety guardrails and develop better techniques that lead to robust, safe, and trustworthy AI systems.", + "neighbors": [ + 1001, + 1520, + 1713, + 1801, + 1878, + 1952 + ], + "mask": "Train" + }, + { + "node_id": 70, + "label": 27, + "text": "Title: Dynamic Object Removal for Effective Slam\nAbstract: This research paper focuses on the problem of dynamic objects and their impact on effective motion planning and localization. The paper proposes a two-step process to address this challenge, which involves finding the dynamic objects in the scene using a Flow-based method and then using a deep Video inpainting algorithm to remove them. The study aims to test the validity of this approach by comparing it with baseline results using two state-of-the-art SLAM algorithms, ORB-SLAM2 and LSD, and understanding the impact of dynamic objects and the corresponding trade-offs. The proposed approach does not require any significant modifications to the baseline SLAM algorithms, and therefore, the computational effort required remains unchanged. The paper presents a detailed analysis of the results obtained and concludes that the proposed method is effective in removing dynamic objects from the scene, leading to improved SLAM performance.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 71, + "label": 16, + "text": "Title: SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving\nAbstract: Semantic scene completion (SSC) is crucial for holistic 3D scene understanding by jointly estimating semantics and geometry from sparse observations. However, progress in SSC, particularly in autonomous driving scenarios, is hindered by the scarcity of high-quality datasets. 
To overcome this challenge, we introduce SSCBench, a comprehensive benchmark that integrates scenes from widely-used automotive datasets (e.g., KITTI-360, nuScenes, and Waymo). SSCBench follows an established setup and format in the community, facilitating the easy exploration of the camera- and LiDAR-based SSC across various real-world scenarios. We present quantitative and qualitative evaluations of state-of-the-art algorithms on SSCBench and commit to continuously incorporating novel automotive datasets and SSC algorithms to drive further advancements in this field. Our resources are released on https://github.com/ai4ce/SSCBench.", + "neighbors": [ + 1260, + 1571 + ], + "mask": "Train" + }, + { + "node_id": 72, + "label": 16, + "text": "Title: Conflict-Based Cross-View Consistency for Semi-Supervised Semantic Segmentation\nAbstract: Semi-supervised semantic segmentation (SSS) has recently gained increasing research interest as it can reduce the requirement for large-scale fully-annotated training data. The current methods often suffer from the confirmation bias from the pseudo-labelling process, which can be alleviated by the co-training framework. The current co-training-based SSS methods rely on hand-crafted perturbations to prevent the different sub-nets from collapsing into each other, but these artificial perturbations cannot lead to the optimal solution. In this work, we propose a new conflict-based cross-view consistency (CCVC) method based on a two-branch co-training framework which aims at enforcing the two sub-nets to learn informative features from irrelevant views. In particular, we first propose a new cross-view consistency (CVC) strategy that encourages the two sub-nets to learn distinct features from the same input by introducing a feature discrepancy loss, while these distinct features are expected to generate consistent prediction scores of the input. The CVC strategy helps to prevent the two sub-nets from stepping into the collapse. In addition, we further propose a conflict-based pseudo-labelling (CPL) method to guarantee the model will learn more useful information from conflicting predictions, which will lead to a stable training process. We validate our new CCVC approach on the SSS benchmark datasets where our method achieves new state-of-the-art performance. Our code is available at https://github.com/xiaoyao3302/CCVC.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 73, + "label": 24, + "text": "Title: The CausalBench challenge: A machine learning contest for gene network inference from single-cell perturbation data\nAbstract: In drug discovery, mapping interactions between genes within cellular systems is a crucial early step. This helps formulate hypotheses regarding molecular mechanisms that could potentially be targeted by future medicines. The CausalBench Challenge was an initiative to invite the machine learning community to advance the state of the art in constructing gene-gene interaction networks. These networks, derived from large-scale, real-world datasets of single cells under various perturbations, are crucial for understanding the causal mechanisms underlying disease biology. Using the framework provided by the CausalBench benchmark, participants were tasked with enhancing the capacity of the state of the art methods to leverage large-scale genetic perturbation data. This report provides an analysis and summary of the methods submitted during the challenge to give a partial image of the state of the art at the time of the challenge. 
The winning solutions significantly improved performance compared to previous baselines, establishing a new state of the art for this critical task in biology and medicine.", + "neighbors": [ + 160 + ], + "mask": "Train" + }, + { + "node_id": 74, + "label": 24, + "text": "Title: Byte Pair Encoding for Symbolic Music\nAbstract: The symbolic music modality is nowadays mostly represented as discrete and used with sequential models such as Transformers, for deep learning tasks. Recent research put efforts on the tokenization, i.e. the conversion of data into sequences of integers intelligible to such models. This can be achieved by many ways as music can be composed of simultaneous tracks, of simultaneous notes with several attributes. Until now, the proposed tokenizations are based on small vocabularies describing the note attributes and time events, resulting in fairly long token sequences. In this paper, we show how Byte Pair Encoding (BPE) can improve the results of deep learning models while improving its performances. We experiment on music generation and composer classification, and study the impact of BPE on how models learn the embeddings, and show that it can help to increase their isotropy, i.e., the uniformity of the variance of their positions in the space.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 75, + "label": 30, + "text": "Title: Whose Opinions Do Language Models Reflect?\nAbstract: Language models (LMs) are increasingly being used in open-ended contexts, where the opinions reflected by LMs in response to subjective queries can have a profound impact, both on user satisfaction, as well as shaping the views of society at large. In this work, we put forth a quantitative framework to investigate the opinions reflected by LMs -- by leveraging high-quality public opinion polls and their associated human responses. Using this framework, we create OpinionsQA, a new dataset for evaluating the alignment of LM opinions with those of 60 US demographic groups over topics ranging from abortion to automation. Across topics, we find substantial misalignment between the views reflected by current LMs and those of US demographic groups: on par with the Democrat-Republican divide on climate change. Notably, this misalignment persists even after explicitly steering the LMs towards particular demographic groups. Our analysis not only confirms prior observations about the left-leaning tendencies of some human feedback-tuned LMs, but also surfaces groups whose opinions are poorly reflected by current LMs (e.g., 65+ and widowed individuals). Our code and data are available at https://github.com/tatsu-lab/opinions_qa.", + "neighbors": [ + 352, + 1259, + 1520, + 2140, + 2258, + 2305 + ], + "mask": "Test" + }, + { + "node_id": 76, + "label": 4, + "text": "Title: Exploring Smart Commercial Building Occupants\u2019 Perceptions and Notification Preferences of Internet of Things Data Collection in the United States\nAbstract: Data collection through the Internet of Things (IoT) devices, or smart devices, in commercial buildings enables possibilities for increased convenience and energy efficiency. However, such benefits face a large perceptual challenge when being implemented in practice, due to the different ways occupants working in the buildings understand and trust in the data collection. The semi-public, pervasive, and multi-modal nature of data collection in smart buildings points to the need to study occupants\u2019 understanding of data collection and notification preferences. 
We conduct an online study with 492 participants in the US who report working in smart commercial buildings regarding: 1) awareness and perception of data collection in smart commercial buildings, 2) privacy notification preferences, and 3) potential factors for privacy notification preferences. We find that around half of the participants are not fully aware of the data collection and use practices of IoT even though they notice the presence of IoT devices and sensors. We also discover many misunderstandings around different data practices. The majority of participants want to be notified of data practices in smart buildings, and they prefer push notifications to passive ones such as websites or physical signs. Surprisingly, mobile app notification, despite being a popular channel for smart homes, is the least preferred method for smart commercial buildings.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 77, + "label": 16, + "text": "Title: Learn More for Food Recognition via Progressive Self-Distillation\nAbstract: Food recognition has a wide range of applications, such as health-aware recommendation and self-service restaurants. Most previous methods of food recognition firstly locate informative regions in some weakly-supervised manners and then aggregate their features. However, location errors of informative regions limit the effectiveness of these methods to some extent. Instead of locating multiple regions, we propose a Progressive Self-Distillation (PSD) method, which progressively enhances the ability of network to mine more details for food recognition. The training of PSD simultaneously contains multiple self-distillations, in which a teacher network and a student network share the same embedding network. Since the student network receives a modified image from its teacher network by masking some informative regions, the teacher network outputs stronger semantic representations than the student network. Guided by such teacher network with stronger semantics, the student network is encouraged to mine more useful regions from the modified image by enhancing its own ability. The ability of the teacher network is also enhanced with the shared embedding network. By using progressive training, the teacher network incrementally improves its ability to mine more discriminative regions. In inference phase, only the teacher network is used without the help of the student network. Extensive experiments on three datasets demonstrate the effectiveness of our proposed method and state-of-the-art performance.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 78, + "label": 24, + "text": "Title: A Hybrid Chimp Optimization Algorithm and Generalized Normal Distribution Algorithm with Opposition-Based Learning Strategy for Solving Data Clustering Problems\nAbstract: This paper is concerned with data clustering to separate clusters based on the connectivity principle for categorizing similar and dissimilar data into different groups. Although classical clustering algorithms such as K-means are efficient techniques, they often trap in local optima and have a slow convergence rate in solving high-dimensional problems. To address these issues, many successful meta-heuristic optimization algorithms and intelligence-based methods have been introduced to attain the optimal solution in a reasonable time. They are designed to escape from a local optimum problem by allowing flexible movements or random behaviors. 
In this study, we attempt to conceptualize a powerful approach using the three main components: Chimp Optimization Algorithm (ChOA), Generalized Normal Distribution Algorithm (GNDA), and Opposition-Based Learning (OBL) method. Firstly, two versions of ChOA with two different independent groups' strategies and seven chaotic maps, entitled ChOA(I) and ChOA(II), are presented to achieve the best possible result for data clustering purposes. Secondly, a novel combination of ChOA and GNDA algorithms with the OBL strategy is devised to solve the major shortcomings of the original algorithms. Lastly, the proposed ChOAGNDA method is a Selective Opposition (SO) algorithm based on ChOA and GNDA, which can be used to tackle large and complex real-world optimization problems, particularly data clustering applications. The results are evaluated against seven popular meta-heuristic optimization algorithms and eight recent state-of-the-art clustering techniques. Experimental results illustrate that the proposed work significantly outperforms other existing methods in terms of the achievement in minimizing the Sum of Intra-Cluster Distances (SICD), obtaining the lowest Error Rate (ER), accelerating the convergence speed, and finding the optimal cluster centers.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 79, + "label": 10, + "text": "Title: Assessing Trustworthiness of Autonomous Systems\nAbstract: As Autonomous Systems (AS) become more ubiquitous in society, more responsible for our safety and our interaction with them more frequent, it is essential that they are trustworthy. Assessing the trustworthiness of AS is a mandatory challenge for the verification and development community. This will require appropriate standards and suitable metrics that may serve to objectively and comparatively judge trustworthiness of AS across the broad range of current and future applications. The meta-expression `trustworthiness' is examined in the context of AS capturing the relevant qualities that comprise this term in the literature. Recent developments in standards and frameworks that support assurance of autonomous systems are reviewed. A list of key challenges are identified for the community and we present an outline of a process that can be used as a trustworthiness assessment framework for AS.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 80, + "label": 27, + "text": "Title: Agent 3, change your route: possible conversation between a human manager and UAM Air Traffic Management (UATM)\nAbstract: This work in progress paper provides an example to show a detouring procedure through knowledge representation and reasoning. When a human manager requests a detouring, this should affect the related agents. Through non-monotonic reasoning process, we verify each step to be proceeded and provide all the successful connections of the reasoning. Following this progress and continuing this idea development, we expect that this simulated scenario can be a guideline to build the traffic management system in real. After a brief introduction including related works, we provide our problem formulation, primary work, discussion, and conclusions.", + "neighbors": [ + 2128 + ], + "mask": "Train" + }, + { + "node_id": 81, + "label": 30, + "text": "Title: Enabling Large Language Models to Generate Text with Citations\nAbstract: Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. 
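Aside: the opposition-based learning ingredient from the clustering abstract above is simple to state in isolation: for every candidate x in [lb, ub], also evaluate its opposite lb + ub - x and keep whichever half of the merged population is fitter. A sketch with a stand-in sphere objective:

```python
import numpy as np

def obl_step(pop: np.ndarray, lb: float, ub: float, fitness) -> np.ndarray:
    opposite = lb + ub - pop                       # opposite population
    merged = np.vstack([pop, opposite])
    scores = np.apply_along_axis(fitness, 1, merged)
    return merged[np.argsort(scores)[: len(pop)]]  # keep the fitter half

sphere = lambda x: float(np.sum(x ** 2))           # toy objective to minimize
rng = np.random.default_rng(2)
pop = rng.uniform(-10, 10, size=(6, 3))
better = obl_step(pop, lb=-10, ub=10, fitness=sphere)
print(sorted(sphere(x) for x in better)[:3])       # best survivors after one step
```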
In this work, we aim to enable LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search engines and human evaluation, making it challenging to reproduce and compare with different modeling approaches. We propose ALCE, the first benchmark for Automatic LLMs' Citation Evaluation. ALCE collects a diverse set of questions and retrieval corpora and requires building end-to-end systems to retrieve supporting evidence and generate answers with citations. We build automatic metrics along three dimensions -- fluency, correctness, and citation quality -- and demonstrate their strong correlation with human judgements. Our experiments with state-of-the-art LLMs and novel prompting strategies show that current systems have considerable room for improvements -- for example, on the ELI5 dataset, even the best model has 49% of its generations lacking complete citation support. Our extensive analyses further highlight promising future directions, including developing better retrievers, advancing long-context LLMs, and improving the ability to synthesize information from multiple sources.", + "neighbors": [ + 57, + 1052, + 1950, + 1972, + 2038 + ], + "mask": "Test" + }, + { + "node_id": 82, + "label": 24, + "text": "Title: Patchwork Learning: A Paradigm Towards Integrative Analysis across Diverse Biomedical Data Sources\nAbstract: Machine learning (ML) in healthcare presents numerous opportunities for enhancing patient care, population health, and healthcare providers' workflows. However, the real-world clinical and cost benefits remain limited due to challenges in data privacy, heterogeneous data sources, and the inability to fully leverage multiple data modalities. In this perspective paper, we introduce\"patchwork learning\"(PL), a novel paradigm that addresses these limitations by integrating information from disparate datasets composed of different data modalities (e.g., clinical free-text, medical images, omics) and distributed across separate and secure sites. PL allows the simultaneous utilization of complementary data sources while preserving data privacy, enabling the development of more holistic and generalizable ML models. We present the concept of patchwork learning and its current implementations in healthcare, exploring the potential opportunities and applicable data sources for addressing various healthcare challenges. PL leverages bridging modalities or overlapping feature spaces across sites to facilitate information sharing and impute missing data, thereby addressing related prediction tasks. We discuss the challenges associated with PL, many of which are shared by federated and multimodal learning, and provide recommendations for future research in this field. By offering a more comprehensive approach to healthcare data integration, patchwork learning has the potential to revolutionize the clinical applicability of ML models. This paradigm promises to strike a balance between personalization and generalizability, ultimately enhancing patient experiences, improving population health, and optimizing healthcare providers' workflows.", + "neighbors": [ + 1047, + 1584, + 1863 + ], + "mask": "Train" + }, + { + "node_id": 83, + "label": 16, + "text": "Title: An Efficient Semi-Automated Scheme for Infrastructure LiDAR Annotation\nAbstract: Most existing perception systems rely on sensory data acquired from cameras, which perform poorly in low light and adverse weather conditions. 
To overcome this limitation, advanced LiDAR sensors have become popular for perception tasks in autonomous driving applications. Nevertheless, their use in traffic monitoring systems is far less common. We identify two significant obstacles to cost-effectively and efficiently developing such a LiDAR-based traffic monitoring system: (i) public LiDAR datasets are insufficient for supporting perception tasks in infrastructure systems, and (ii) 3D annotations on LiDAR point clouds are time-consuming and expensive. To fill this gap, we present an efficient semi-automated annotation tool that automatically annotates LiDAR sequences with tracking algorithms, along with a fully annotated infrastructure LiDAR dataset -- FLORIDA (Florida LiDAR-based Object Recognition and Intelligent Data Annotation) -- which will be made publicly available. Our advanced annotation tool seamlessly integrates multi-object tracking (MOT), single-object tracking (SOT), and suitable trajectory post-processing techniques. Specifically, we introduce a human-in-the-loop schema in which annotators recursively fix and refine annotations imperfectly predicted by our tool and incrementally add them to the training dataset to obtain better SOT and MOT models. By repeating this process, we increase the overall annotation speed by three to four times and obtain better-quality annotations than a state-of-the-art annotation tool. Human annotation experiments verify the effectiveness of our annotation tool. In addition, we provide detailed statistics and object detection evaluation results for our dataset, which serves as a benchmark for perception tasks at traffic intersections.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 84,
    "label": 34,
    "text": "Title: Algorithmically Effective Differentially Private Synthetic Data\nAbstract: We present a highly effective algorithmic approach for generating $\\varepsilon$-differentially private synthetic data in a bounded metric space with near-optimal utility guarantees under the 1-Wasserstein distance. In particular, for a dataset $X$ in the hypercube $[0,1]^d$, our algorithm generates a synthetic dataset $Y$ such that the expected 1-Wasserstein distance between the empirical measure of $X$ and $Y$ is $O((\\varepsilon n)^{-1/d})$ for $d\\geq 2$, and is $O(\\log^2(\\varepsilon n)(\\varepsilon n)^{-1})$ for $d=1$. The accuracy guarantee is optimal up to a constant factor for $d\\geq 2$, and up to a logarithmic factor for $d=1$. Our algorithm has a fast running time of $O(\\varepsilon dn)$ for all $d\\geq 1$ and demonstrates improved accuracy compared to the method in (Boedihardjo et al., 2022) for $d\\geq 2$.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 85,
    "label": 10,
    "text": "Title: HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level\nAbstract: Link Prediction on Hyper-relational Knowledge Graphs (HKG) is a worthwhile endeavor. HKG consists of hyper-relational facts (H-Facts), composed of a main triple and several auxiliary attribute-value qualifiers, which can effectively represent factually comprehensive information. The internal structure of HKG can be represented as a hypergraph-based representation globally and a semantic sequence-based representation locally. However, existing research seldom simultaneously models the graphical and sequential structure of HKGs, limiting HKGs\u2019 representation. 
To overcome this limitation, we propose a novel Hierarchical Attention model for HKG Embedding (HAHE), including global-level and local-level attention. The global-level attention can model the graphical structure of HKG using hypergraph dual-attention layers, while the local-level attention can learn the sequential structure inside H-Facts via heterogeneous self-attention layers. Experimental results indicate that HAHE achieves state-of-the-art performance in link prediction tasks on standard HKG datasets. In addition, HAHE addresses the issue of HKG multi-position prediction for the first time, increasing the applicability of the HKG link prediction task. Our code is publicly available.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 86,
    "label": 13,
    "text": "Title: Toward spike-based stochastic neural computing\nAbstract: Inspired by the highly irregular spiking activity of cortical neurons, stochastic neural computing is an attractive theory for explaining the operating principles of the brain and the ability to represent uncertainty by intelligent agents. However, computing and learning with high-dimensional joint probability distributions of spiking neural activity across large populations of neurons present a major challenge. To overcome this, we develop a novel moment embedding approach to enable gradient-based learning in spiking neural networks accounting for the propagation of correlated neural variability. We show, under the supervised learning setting, that a spiking neural network trained this way is able to learn the task while simultaneously minimizing uncertainty, and we further demonstrate its application to neuromorphic hardware. Built on the principle of spike-based stochastic neural computing, the proposed method opens up new opportunities for developing machine intelligence capable of computing uncertainty and for designing unconventional computing architectures.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 87,
    "label": 28,
    "text": "Title: The Freshness Game: Timely Communications in the Presence of an Adversary\nAbstract: We consider a communication system where a base station (BS) transmits update packets to $N$ users, one user at a time, over a wireless channel. We investigate the age of this status updating system with an adversary that jams the update packets in the downlink. We consider two system models: with diversity and without diversity. In the model without diversity, we show that if the BS schedules the users with a stationary randomized policy, then the optimal choice for the adversary is to block the user which has the lowest probability of getting scheduled by the BS, at the middle of the time horizon, consecutively for $\\alpha T$ time slots. In the model with diversity, we show that for large $T$, the uniform user scheduling algorithm together with the uniform sub-carrier choosing algorithm is $\\frac{2 N_{sub}}{N_{sub}-1}$ optimal. Next, we investigate the game theoretic equilibrium points of this status updating system. For the model without diversity, we show that a Nash equilibrium does not exist; however, a Stackelberg equilibrium exists when the scheduling algorithm of the BS acts as the leader and the adversary acts as the follower. For the model with diversity, we show that a Nash equilibrium exists and identify the Nash equilibrium. 
Finally, we extend the model without diversity to the case where the BS can serve multiple users and the adversary can jam multiple users at a time.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 88,
    "label": 16,
    "text": "Title: On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization\nAbstract: Achieving high accuracy on data from domains unseen during training is a fundamental challenge in domain generalization (DG). While state-of-the-art DG classifiers have demonstrated impressive performance across various tasks, they have shown a bias towards domain-dependent information, such as image styles, rather than domain-invariant information, such as image content. This bias renders them unreliable for deployment in risk-sensitive scenarios such as autonomous driving, where a misclassification could lead to catastrophic consequences. To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a \"style-smoothed\" version of the DG classifier for prediction at test time. Specifically, the style-smoothed classifier classifies a test image as the most probable class predicted by the DG classifier on random re-stylizations of the test image. TT-NSS uses a neural style transfer module to stylize a test image on the fly, requires only black-box access to the DG classifier, and, crucially, abstains when predictions of the DG classifier on the stylized test images lack consensus. Additionally, we propose a neural style smoothing (NSS) based training procedure that can be seamlessly integrated with existing DG methods. This procedure enhances prediction consistency, improving the performance of TT-NSS on non-abstained samples. Our empirical results demonstrate the effectiveness of TT-NSS and NSS at producing and improving risk-averse predictions on unseen domains from DG classifiers trained with SOTA training methods on various benchmark datasets and their variations.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 89,
    "label": 34,
    "text": "Title: Deterministic Massively Parallel Symmetry Breaking for Sparse Graphs\nAbstract: We consider the problem of designing deterministic graph algorithms for the model of Massively Parallel Computation (MPC) that improve with the sparsity of the input graph, as measured by the standard notion of arboricity. For the problems of maximal independent set (MIS), maximal matching (MM), and vertex coloring, we improve the state of the art as follows. Let \u03bb denote the arboricity of the n-node input graph with maximum degree \u0394. MIS and MM: We develop a low-space MPC algorithm that deterministically reduces the maximum degree to poly(\u03bb) in O(log log n) rounds, improving and simplifying the randomized O(log log n)-round poly(max(\u03bb, log n))-degree reduction of Ghaffari, Grunau, Jin [DISC'20]. Our approach, when combined with the state-of-the-art O(log \u0394 + log log n)-round algorithm by Czumaj, Davies, Parter [SPAA'20, TALG'21], leads to an improved deterministic round complexity of O(log \u03bb + log log n). The above MIS and MM algorithms, however, work in the setting where the global memory allowed, i.e., the number of machines times the local memory per machine, is superlinear in the input size. We extend them to obtain the first low-space MIS and MM algorithms that work with linear global memory. 
Specifically, we show that both problems can be solved in deterministic time O(log \u03bb \u00b7 log log_\u03bb n), and even in O(log log n) time for graphs with arboricity at most log^{O(1)} log n. In this setting, only an O(log^2 log n) running-time bound for trees was known, due to Latypov and Uitto [ArXiv'21]. Vertex Coloring: We present an O(1)-round deterministic algorithm for the problem of O(\u03bb)-coloring in the linear-memory regime of MPC, with relaxed global memory of n \u00b7 poly(\u03bb). This matches the round complexity of the state-of-the-art randomized algorithm by Ghaffari and Sayyadi [ICALP'19] and significantly improves upon the deterministic O(\u03bb^\u03b5)-round algorithm by Barenboim and Khazanov [CSR'18]. Our algorithm solves the problem after just a single graph partitioning step, in contrast to the involved local coloring simulations of the above state-of-the-art algorithms. Using O(n + m) global memory, we derive an O(log \u03bb)-round algorithm by combining the constant-round (\u0394 + 1)-list-coloring algorithm by Czumaj, Davies, Parter [PODC'20, SIAM J. Comput.'21] with that of Barenboim and Khazanov.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 90,
    "label": 10,
    "text": "Title: Concise QBF Encodings for Games on a Grid (extended version)\nAbstract: Encoding 2-player games in QBF correctly and efficiently is challenging and error-prone. To enable concise specifications and uniform encodings of games played on grid boards, like Tic-Tac-Toe, Connect-4, Domineering, Pursuer-Evader and Breakthrough, we introduce the Board-game Domain Definition Language (BDDL), inspired by the success of PDDL in the planning domain. We provide an efficient translation from BDDL into QBF, encoding the existence of a winning strategy of bounded depth. Our lifted encoding treats board positions symbolically and allows concise definitions of conditions, effects and winning configurations, relative to symbolic board positions. The size of the encoding grows linearly in the input model and the considered depth. To show the feasibility of such a generic approach, we use QBF solvers to compute the critical depths of winning strategies for instances of several known games. For several games, our work provides the first QBF encoding. Unlike plan validation in SAT-based planning, validating QBF-based winning strategies is difficult. We show how to validate winning strategies using QBF certificates and interactive game play.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 91,
    "label": 24,
    "text": "Title: Cognitively Inspired Cross-Modal Data Generation Using Diffusion Models\nAbstract: Most existing cross-modal generative methods based on diffusion models use guidance to provide control over the latent space to enable conditional generation across different modalities. Such methods focus on providing guidance through separately-trained models, each for one modality. As a result, these methods suffer from cross-modal information loss and are limited to unidirectional conditional generation. Inspired by how humans synchronously acquire multi-modal information and learn the correlation between modalities, we explore a multi-modal diffusion model training and sampling scheme that uses channel-wise image conditioning to learn cross-modality correlation during the training phase to better mimic the learning process in the brain. 
Our empirical results demonstrate that our approach can achieve data generation conditioned on all correlated modalities.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 92, + "label": 6, + "text": "Title: Virtual Reality Training of Social Skills in Adults with Autism Spectrum Disorder: An Examination of Acceptability, Usability, User Experience, Social Skills, and Executive Functions\nAbstract: Poor social skills in autism spectrum disorder (ASD) are associated with reduced independence in daily life. Current interventions for improving the social skills of individuals with ASD fail to represent the complexity of real-life social settings and situations. Virtual reality (VR) may facilitate social skills training in social environments and situations similar to those in real life; however, more research is needed to elucidate aspects such as the acceptability, usability, and user experience of VR systems in ASD. Twenty-five participants with ASD attended a neuropsychological evaluation and three sessions of VR social skills training, which incorporated five social scenarios with three difficulty levels. Participants reported high acceptability, system usability, and user experience. Significant correlations were observed between performance in social scenarios, self-reports, and executive functions. Working memory and planning ability were significant predictors of the functionality level in ASD and the VR system\u2019s perceived usability, respectively. Yet, performance in social scenarios was the best predictor of usability, acceptability, and functionality level. Planning ability substantially predicted performance in social scenarios, suggesting an implication in social skills. Immersive VR social skills training in individuals with ASD appears to be an appropriate service, but an errorless approach that is adaptive to the individual\u2019s needs should be preferred.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 93, + "label": 24, + "text": "Title: Controlling Learned Effects to Reduce Spurious Correlations in Text Classifiers\nAbstract: To address the problem of NLP classifiers learning spurious correlations between training features and target labels, a common approach is to make the model\u2019s predictions invariant to these features. However, this can be counter-productive when the features have a non-zero causal effect on the target label and thus are important for prediction. Therefore, using methods from the causal inference literature, we propose an algorithm to regularize the learnt effect of the features on the model\u2019s prediction to the estimated effect of feature on label. This results in an automated augmentation method that leverages the estimated effect of a feature to appropriately change the labels for new augmented inputs. On toxicity and IMDB review datasets, the proposed algorithm minimises spurious correlations and improves the minority group (i.e., samples breaking spurious correlations) accuracy, while also improving the total accuracy compared to standard training.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 94, + "label": 24, + "text": "Title: Numerical Association Rule Mining: A Systematic Literature Review\nAbstract: Numerical association rule mining is a widely used variant of the association rule mining technique, and it has been extensively used in discovering patterns and relationships in numerical data. 
Initially, researchers and scientists integrated numerical attributes in association rule mining using various discretization approaches; however, over time, a plethora of alternative methods have emerged in this field. Unfortunately, the proliferation of alternative methods has resulted in a significant knowledge gap in understanding the diverse techniques employed in numerical association rule mining -- this paper attempts to bridge this knowledge gap by conducting a comprehensive systematic literature review. We provide an in-depth study of diverse methods, algorithms, metrics, and datasets derived from 1,140 scholarly articles published from the inception of numerical association rule mining in the year 1996 to 2022. In compliance with the inclusion, exclusion, and quality evaluation criteria, 68 papers were chosen to be extensively evaluated. To the best of our knowledge, this systematic literature review is the first of its kind to provide an exhaustive analysis of the current literature and previous surveys on numerical association rule mining. The paper discusses important research issues, the current status, and future possibilities of numerical association rule mining. On the basis of this systematic review, the article also presents a novel discretization measure that contributes by providing a partitioning of numerical data that aligns well with human perception of partitions.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 95,
    "label": 36,
    "text": "Title: Synthesizing Permissive Winning Strategy Templates for Parity Games\nAbstract: We present a novel method to compute \\emph{permissive winning strategies} in two-player games over finite graphs with $ \\omega $-regular winning conditions. Given a game graph $G$ and a parity winning condition $\\Phi$, we compute a \\emph{winning strategy template} $\\Psi$ that collects an infinite number of winning strategies for objective $\\Phi$ in a concise data structure. We use this new representation of sets of winning strategies to tackle two problems arising from applications of two-player games in the context of cyber-physical system design -- (i) \\emph{incremental synthesis}, i.e., adapting strategies to newly arriving, \\emph{additional} $\\omega$-regular objectives $\\Phi'$, and (ii) \\emph{fault-tolerant control}, i.e., adapting strategies to the occasional or persistent unavailability of actuators. The main features of our strategy templates -- which we utilize for solving these challenges -- are their easy computability, adaptability, and compositionality. For \\emph{incremental synthesis}, we empirically show on a large set of benchmarks that our technique vastly outperforms existing approaches if the number of added specifications increases. While our method is not complete, our prototype implementation returns the full winning region in all 1400 benchmark instances, i.e., handling a large problem class efficiently in practice.",
    "neighbors": [
      2285
    ],
    "mask": "Train"
  },
  {
    "node_id": 96,
    "label": 4,
    "text": "Title: Protecting the Decentralized Future: An Exploration of Common Blockchain Attacks and their Countermeasures\nAbstract: Blockchain technology transformed the digital sphere by providing a transparent, secure, and decentralized platform for data security across a range of industries, including cryptocurrencies and supply chain management. However, the rising number of security threats has jeopardized blockchain's integrity and dependability and made it a target for cybercriminals. 
By summarizing suggested fixes, this research aims to offer a thorough analysis of mitigating blockchain attacks. The objectives of the paper include identifying common blockchain attacks, evaluating various solutions, and determining how effective they are at preventing these attacks. The study also highlights how crucial it is to take into account the particular needs of every blockchain application. It provides beneficial perspectives and insights for blockchain researchers and practitioners, making it essential reading for those interested in current and future trends in blockchain security research.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 97,
    "label": 24,
    "text": "Title: Federated PAC-Bayesian Learning on Non-IID data\nAbstract: Existing research has either adapted the Probably Approximately Correct (PAC) Bayesian framework for federated learning (FL) or used information-theoretic PAC-Bayesian bounds while introducing their theorems, but few consider the non-IID challenges in FL. Our work presents the first non-vacuous federated PAC-Bayesian bound tailored for non-IID local data. This bound assumes unique prior knowledge for each client and variable aggregation weights. We also introduce an objective function and an innovative Gibbs-based algorithm for the optimization of the derived bound. The results are validated on real-world datasets.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 98,
    "label": 25,
    "text": "Title: An Attention-based Approach to Hierarchical Multi-label Music Instrument Classification\nAbstract: Although music is typically multi-label, many works have studied hierarchical music tagging with simplified settings such as single-label data. Moreover, a framework to describe various joint training methods under the multi-label setting is lacking. In order to discuss the above topics, we introduce the hierarchical multi-label music instrument classification task. The task provides a realistic setting where multi-instrument real music data is assumed. Various hierarchical methods that jointly train a DNN are summarized and explored in the context of the fusion of deep learning and conventional techniques. For effective joint training in the multi-label setting, we propose two methods to model the connection between fine- and coarse-level tags: one uses rule-based grouped max-pooling, and the other uses an attention mechanism obtained in a data-driven manner. Our evaluation reveals that the proposed methods have advantages over the method without joint training. In addition, the decision procedure within the proposed methods can be interpreted by visualizing attention maps or referring to fixed rules.",
    "neighbors": [
      1813
    ],
    "mask": "Train"
  },
  {
    "node_id": 99,
    "label": 16,
    "text": "Title: Image Synthesis under Limited Data: A Survey and Taxonomy\nAbstract: Deep generative models, which target reproducing the given data distribution to produce novel samples, have made unprecedented advancements in recent years. Their technical breakthroughs have enabled unparalleled quality in the synthesis of visual content. However, one critical prerequisite for their tremendous success is the availability of a sufficient number of training samples, which requires massive computation resources. When trained on limited data, generative models tend to suffer from severe performance deterioration due to overfitting and memorization. 
Accordingly, researchers have recently devoted considerable attention to developing novel models capable of generating plausible and diverse images from limited training data. Despite numerous efforts to enhance training stability and synthesis quality in limited-data scenarios, there is a lack of a systematic survey that provides 1) a clear problem definition, critical challenges, and a taxonomy of various tasks; 2) an in-depth analysis of the pros, cons, and remaining limitations of the existing literature; as well as 3) a thorough discussion of the potential applications and future directions in the field of image synthesis under limited data. In order to fill this gap and provide an informative introduction to researchers who are new to this topic, this survey offers a comprehensive review and a novel taxonomy of the development of image synthesis under limited data. In particular, it covers the problem definition, requirements, main solutions, popular benchmarks, and remaining challenges in a comprehensive and all-around manner.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 100,
    "label": 31,
    "text": "Title: Metric@CustomerN: Evaluating Metrics at a Customer Level in E-Commerce\nAbstract: Accuracy measures such as Recall, Precision, and Hit Rate have been a standard way of evaluating Recommendation Systems. The assumption is to use a fixed Top-N to represent them. We propose that median impressions viewed from historical sessions per diner be used as a personalized value for N. We present preliminary exploratory results and list future steps to improve upon and evaluate the efficacy of these personalized metrics.",
    "neighbors": [
      1413
    ],
    "mask": "Train"
  },
  {
    "node_id": 101,
    "label": 16,
    "text": "Title: CAFS: Class Adaptive Framework for Semi-Supervised Semantic Segmentation\nAbstract: Semi-supervised semantic segmentation learns a model for classifying pixels into specific classes using a few labeled samples and numerous unlabeled images. The recent leading approach is consistency regularization by self-training, pseudo-labeling pixels with high confidence for unlabeled images. However, using only high-confidence pixels for self-training may result in losing much of the information in the unlabeled datasets due to poor confidence calibration of modern deep learning networks. In this paper, we propose a class-adaptive semi-supervision framework for semi-supervised semantic segmentation (CAFS) to cope with the loss of most information that occurs in existing high-confidence-based pseudo-labeling methods. Unlike existing semi-supervised semantic segmentation frameworks, CAFS constructs a validation set on a labeled dataset to leverage the calibration performance for each class. On this basis, we propose a calibration-aware class-wise adaptive thresholding and class-wise adaptive oversampling using the analysis results from the validation set. Our proposed CAFS achieves state-of-the-art performance on the full data partition of the base PASCAL VOC 2012 dataset and on the 1/4 data partition of the Cityscapes dataset with significant margins of 83.0% and 80.4%, respectively. The code is available at https://github.com/cjf8899/CAFS.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 102,
    "label": 8,
    "text": "Title: Computation Offloading for Uncertain Marine Tasks by Cooperation of UAVs and Vessels\nAbstract: With the continuous increase of maritime applications, the development of marine networks for data offloading becomes necessary. 
However, the limited maritime network resources make it very difficult to satisfy real-time demands. Besides, how to effectively handle multiple compute-intensive tasks becomes another intractable issue. Hence, in this paper, we focus on the decision of maritime task offloading through the cooperation of unmanned aerial vehicles (UAVs) and vessels. Specifically, we first propose a cooperative offloading framework, including the demands from marine Internet of Things (MIoT) devices and the resources provided by UAVs and vessels. Due to the limited energy and computation ability of UAVs, it is necessary to leverage vessels to better support computation offloading. Then, we formulate the studied problem as a Markov decision process, aiming to minimize the total execution time and energy cost. Next, we leverage Lyapunov optimization to convert the long-term constraints on the total execution time and energy cost into short-term constraints, further yielding a set of per-time-slot optimization problems. Furthermore, we propose a Q-learning based approach to solve the short-term problem efficiently. Finally, simulations are conducted to verify the correctness and effectiveness of the proposed algorithm.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 103,
    "label": 16,
    "text": "Title: A transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics\nAbstract: nan",
    "neighbors": [
      2102
    ],
    "mask": "Train"
  },
  {
    "node_id": 104,
    "label": 16,
    "text": "Title: Asymmetric double-winged multi-view clustering network for exploring Diverse and Consistent Information\nAbstract: In unsupervised scenarios, deep contrastive multi-view clustering (DCMVC) is becoming a hot research topic, aiming to mine the potential relationships between different views. Most existing DCMVC algorithms focus on exploring the consistency information of the deep semantic features, while ignoring the diverse information in shallow features. To fill this gap, we propose in this paper a novel multi-view clustering network, termed CodingNet, to simultaneously explore diverse and consistent information. Specifically, instead of utilizing the conventional auto-encoder, we design an asymmetric structure network to extract shallow and deep features separately. Then, by aligning the similarity matrix on the shallow feature to the zero matrix, we ensure the diversity for the shallow features, thus offering a better description of multi-view data. Moreover, we propose a dual contrastive mechanism that maintains consistency for deep features at both the view-feature and pseudo-label levels. Our framework's efficacy is validated through extensive experiments on six widely used benchmark datasets, outperforming most state-of-the-art multi-view clustering algorithms.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 105,
    "label": 24,
    "text": "Title: FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?\nAbstract: Few-shot learning aims to train models that can be generalized to novel classes with only a few samples. Recently, a line of works has been proposed to enhance few-shot learning with accessible semantic information from class names. However, these works focus on improving existing modules such as visual prototypes and feature extractors of the standard few-shot learning framework. This limits the full potential use of semantic information. 
In this paper, we propose a novel few-shot learning framework that uses pre-trained language models based on contrastive learning. To address the challenge of aligning visual features with textual embeddings obtained from a text-based pre-trained language model, we carefully design the textual branch of our framework and introduce a metric module to generalize the cosine similarity. For better transferability, we let the metric module adapt to different few-shot tasks and adopt MAML to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate the effectiveness of our method.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 106,
    "label": 16,
    "text": "Title: Review of Large Vision Models and Visual Prompt Engineering\nAbstract: Visual prompt engineering is a fundamental technology in the field of visual and image Artificial General Intelligence, serving as a key component for achieving zero-shot capabilities. As the development of large vision models progresses, the importance of prompt engineering becomes increasingly evident. Designing suitable prompts for specific visual tasks has emerged as a meaningful research direction. This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering, exploring the latest advancements in visual prompt engineering. We present influential large models in the visual domain and a range of prompt engineering methods employed on these models. It is our hope that this review provides a comprehensive and systematic description of prompt engineering methods based on large visual models, offering valuable insights for future researchers in their exploration of this field.",
    "neighbors": [
      535,
      584,
      929,
      1050,
      1052,
      1084,
      1199,
      1262,
      1720,
      1863,
      2057,
      2113,
      2203,
      2296
    ],
    "mask": "Train"
  },
  {
    "node_id": 107,
    "label": 8,
    "text": "Title: Improving the Efficiency of MIMO Simulations in ns-3\nAbstract: Channel modeling is a fundamental task for the design and evaluation of wireless technologies and networks, before actual prototyping, commercial product development and real deployments. The recent trends of current and future mobile networks, which include large antenna systems, massive deployments, and high-frequency bands, require complex channel models for the accurate simulation of massive MIMO (m-MIMO) in millimeter wave (mmWave) and Terahertz (THz) bands. To address the complexity/accuracy trade-off, a spatial channel model has been defined by 3GPP (TR 38.901), which has been shown to be the main bottleneck of current system-level simulations in ns-3. In this paper, we focus on improving the channel modeling efficiency for large-scale MIMO system-level simulations. Extensions are developed in two directions. First, we improve the efficiency of the current 3GPP TR 38.901 implementation code in ns-3, by allowing the use of the Eigen library for more efficient matrix algebra operations, among other optimizations and a more modular code structure. Second, we propose a new performance-oriented MIMO channel model for reduced complexity, as an alternative model suitable for mmWave/THz bands, and calibrate it against the 3GPP TR 38.901 model. 
Simulation results demonstrate the proper calibration of the newly introduced model for various scenarios and channel conditions, and exhibit an effective reduction of the simulation time (up to 16 times compared to the previous baseline) thanks to the various proposed improvements.",
    "neighbors": [
      839
    ],
    "mask": "Train"
  },
  {
    "node_id": 108,
    "label": 24,
    "text": "Title: Symplectic Structure-Aware Hamiltonian (Graph) Embeddings\nAbstract: In traditional Graph Neural Networks (GNNs), the assumption of a fixed embedding manifold often limits their adaptability to diverse graph geometries. Recently, Hamiltonian system-inspired GNNs have been proposed to address the dynamic nature of such embeddings by incorporating physical laws into node feature updates. In this work, we present SAH-GNN, a novel approach that generalizes Hamiltonian dynamics for more flexible node feature updates. Unlike existing Hamiltonian-inspired GNNs, SAH-GNN employs Riemannian optimization on the symplectic Stiefel manifold to adaptively learn the underlying symplectic structure during training, circumventing the limitations of existing Hamiltonian GNNs that rely on a pre-defined form of standard symplectic structure. This innovation allows SAH-GNN to automatically adapt to various graph datasets without extensive hyperparameter tuning. Moreover, it conserves energy during training such that the implicit Hamiltonian system is physically meaningful. To this end, we empirically validate SAH-GNN's superior performance and adaptability in node classification tasks across multiple types of graph datasets.",
    "neighbors": [
      236
    ],
    "mask": "Train"
  },
  {
    "node_id": 109,
    "label": 8,
    "text": "Title: A Review of Gaps between Web 4.0 and Web 3.0 Intelligent Network Infrastructure\nAbstract: The World Wide Web is rapidly evolving into an intelligent and decentralized ecosystem, as seen in the campaign for Web 3.0 and the forthcoming Web 4.0. Marked by the European Commission's latest mention of Web 4.0, a race towards strategic Web 4.0 success has started. Web 4.0 is committed to bringing the next technological transition with an open, secure, trustworthy, and fair digital ecosystem for individuals and businesses in the private and public sectors. Despite the overlapping scopes and objectives of Web 3.0 and Web 4.0 from academic and industrial perspectives, there are distinct and definitive features and gaps for the next generation of the WWW. In this review, a brief introduction to WWW development unravels the entangled but consistent requirement of a more vivid web experience, enhancing the human-centric experience in both societal and technical aspects. Moreover, the review brings a decentralized-intelligence perspective on native AI entities for Web 4.0, envisioning sustainable, autonomous and decentralized AI services for the entire Web 4.0 environment, powering a self-sustainable Decentralized Physical and Software Infrastructure for Computing Force Network, Semantic Network, Virtual/Mixed Reality, and privacy-preserving content prosumption. 
The review aims to reveal that Web 4.0 offers native intelligence with a focus on utilizing decentralized physical infrastructure, beyond the sole requirement of decentralization, bridging the gap between Web 4.0 and Web 3.0 advances with the latest future-shaping blockchain-enabled computing and network routing protocols.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 110,
    "label": 6,
    "text": "Title: StoryChat: Designing a Narrative-Based Viewer Participation Tool for Live Streaming Chatrooms\nAbstract: Live streaming platforms and existing viewer participation tools enable users to interact and engage with an online community, but the anonymity and scale of chat usually result in the spread of negative comments. However, only a few existing moderation tools investigate the influence of proactive moderation on viewers\u2019 engagement and prosocial behavior. To address this, we developed StoryChat, a narrative-based viewer participation tool that utilizes a dynamic graphical plot to reflect chatroom negativity. We crafted the narrative through a viewer-centered (N=65) iterative design process and evaluated the tool with 48 experienced viewers in a deployment study. We discovered that StoryChat encouraged viewers to contribute prosocial comments, increased viewer engagement, and fostered viewers\u2019 sense of community. Viewers reported a closer connection between streamers and other viewers because of the narrative design, suggesting that narrative-based viewer engagement tools have the potential to encourage community engagement and prosocial behaviors.",
    "neighbors": [
      452
    ],
    "mask": "Train"
  },
  {
    "node_id": 111,
    "label": 24,
    "text": "Title: ShuttleSet: A Human-Annotated Stroke-Level Singles Dataset for Badminton Tactical Analysis\nAbstract: With the recent progress in sports analytics, deep learning approaches have demonstrated the effectiveness of mining insights into players' tactics for improving performance quality and fan engagement. This is attributed to the availability of public ground-truth datasets. While there are a few available datasets for turn-based sports for action detection, these datasets severely lack structured source data and stroke-level records since these require high-cost labeling efforts from domain experts and are hard to detect using automatic techniques. Consequently, the development of artificial intelligence approaches is significantly hindered when existing models are applied to more challenging structured turn-based sequences. In this paper, we present ShuttleSet, the largest publicly-available badminton singles dataset with annotated stroke-level records. It contains 104 sets, 3,685 rallies, and 36,492 strokes in 44 matches between 2018 and 2021 with 27 top-ranking men's singles and women's singles players. ShuttleSet is manually annotated with a computer-aided labeling tool to increase the labeling efficiency and effectiveness of selecting the shot type with a choice of 18 distinct classes, the corresponding hitting locations, and the locations of both players at each stroke. In the experiments, we provide multiple benchmarks (i.e., stroke influence, stroke forecasting, and movement forecasting) with baselines to illustrate the practicability of using ShuttleSet for turn-based analytics, which is expected to stimulate both academic and sports communities. 
Over the past two years, a visualization platform has been deployed to illustrate the variability of analysis cases from ShuttleSet for coaches to delve into players' tactical preferences with human-interactive interfaces, which was also used by national badminton teams during multiple international high-ranking matches.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 112, + "label": 15, + "text": "Title: VarSim: A Fast Process Variation-aware Thermal Modeling Methodology Using Green's Functions\nAbstract: Despite temperature rise being a first-order design constraint, traditional thermal estimation techniques have severe limitations in modeling critical aspects affecting the temperature in modern-day chips. Existing thermal modeling techniques often ignore the effects of parameter variation, which can lead to significant errors. Such methods also ignore the dependence of conductivity on temperature and its variation. Leakage power is also incorporated inadequately by state-of-the-art techniques. Thermal modeling is a process that has to be repeated at least thousands of times in the design cycle, and hence speed is of utmost importance. To overcome these limitations, we propose VarSim, an ultrafast thermal simulator based on Green's functions. Green's functions have been shown to be faster than the traditional finite difference and finite element-based approaches but have rarely been employed in thermal modeling. Hence we propose a new Green's function-based method to capture the effects of leakage power as well as process variation analytically. We provide a closed-form solution for the Green's function considering the effects of variation on the process, temperature, and thermal conductivity. In addition, we propose a novel way of dealing with the anisotropicity introduced by process variation by splitting the Green's functions into shift-variant and shift-invariant components. Since our solutions are analytical expressions, we were able to obtain speedups that were several orders of magnitude over and above state-of-the-art proposals with a mean absolute error limited to 4% for a wide range of test cases. Furthermore, our method accurately captures the steady-state as well as the transient variation in temperature.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 113, + "label": 24, + "text": "Title: CenTime: Event-Conditional Modelling of Censoring in Survival Analysis\nAbstract: Survival analysis is a valuable tool for estimating the time until specific events, such as death or cancer recurrence, based on baseline observations. This is particularly useful in healthcare to prognostically predict clinically important events based on patient data. However, existing approaches often have limitations; some focus only on ranking patients by survivability, neglecting to estimate the actual event time, while others treat the problem as a classification task, ignoring the inherent time-ordered structure of the events. Furthermore, the effective utilization of censored samples - training data points where the exact event time is unknown - is essential for improving the predictive accuracy of the model. In this paper, we introduce CenTime, a novel approach to survival analysis that directly estimates the time to event. Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce. 
We demonstrate that our approach forms a consistent estimator for the event model parameters, even in the absence of uncensored data. Furthermore, CenTime is easily integrated with deep learning models with no restrictions on batch size or the number of uncensored samples. We compare our approach with standard survival analysis methods, including the Cox proportional-hazards model and DeepHit. Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance. Our implementation is publicly available at https://github.com/ahmedhshahin/CenTime.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 114,
    "label": 4,
    "text": "Title: ForensiBlock: A Provenance-Driven Blockchain Framework for Data Forensics and Auditability\nAbstract: Maintaining accurate provenance records is paramount in digital forensics, as they underpin evidence credibility and integrity, addressing essential aspects like accountability and reproducibility. Blockchains have several properties that can address these requirements. Previous systems utilized public blockchains, i.e., treated the blockchain as a black box and benefited from its immutability property. However, the blockchain was accessible to everyone, giving rise to security concerns; moreover, efficient extraction of provenance faces challenges due to the enormous scale and complexity of digital data. This necessitates a tailored blockchain design for digital forensics. Our solution, ForensiBlock, has a novel design that automates investigation steps, ensures secure data access, traces data origins, preserves records, and expedites provenance extraction. ForensiBlock incorporates Role-Based Access Control with Staged Authorization (RBAC-SA) and a distributed Merkle root for case tracking. These features support authorized resource access with efficient retrieval of provenance records. In particular, comparing two methods for extracting provenance records (off-chain storage retrieval with Merkle root verification versus a brute-force search), the off-chain method is significantly better, especially as the blockchain size and number of cases increase. We also found that our distributed Merkle root creation slightly increases smart contract processing time but significantly improves history access. Overall, we show that ForensiBlock offers secure, efficient, and reliable handling of digital forensic data.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 115,
    "label": 16,
    "text": "Title: Multi-scale Geometry-aware Transformer for 3D Point Cloud Classification\nAbstract: Self-attention modules have demonstrated remarkable capabilities in capturing long-range relationships and improving the performance of point cloud tasks. However, point cloud objects are typically characterized by complex, disordered, and non-Euclidean spatial structures with multiple scales, and their behavior is often dynamic and unpredictable. The current self-attention modules mostly rely on dot product multiplication and dimension alignment among query-key-value features, which cannot adequately capture the multi-scale non-Euclidean structures of point cloud objects. To address these problems, this paper proposes a self-attention plug-in module with its variants, Multi-scale Geometry-aware Transformer (MGT). MGT processes point cloud data with multi-scale local and global geometric information in the following three aspects. 
First, MGT divides the point cloud data into patches at multiple scales. Second, a local feature extractor based on sphere mapping is proposed to explore the geometry within each patch and generate a fixed-length representation for each patch. Third, the fixed-length representations are fed into a novel geodesic-based self-attention to capture the global non-Euclidean geometry between patches. Finally, all the modules are integrated into the framework of MGT with an end-to-end training scheme. Experimental results demonstrate that MGT vastly increases the capability of capturing multi-scale geometry using the self-attention mechanism and achieves strong competitive performance on mainstream point cloud benchmarks.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 116,
    "label": 24,
    "text": "Title: Fast Exact NPN Classification with Influence-aided Canonical Form\nAbstract: NPN classification has many applications in the synthesis and verification of digital circuits. The canonical-form-based method is the most common approach, designing a canonical form as representative for the NPN equivalence class first and then computing the transformation function according to the canonical form. Most works use variable symmetries and several signatures, mainly based on the cofactor, to simplify the canonical form construction and computation. This paper describes a novel canonical form and its computation algorithm by introducing Boolean influence to NPN classification, which is a basic concept in the analysis of Boolean functions. We show that influence is input-negation-independent, input-permutation-dependent, and carries structural information beyond previous signatures for NPN classification. Therefore, it is a significant ingredient in speeding up NPN classification. Experimental results prove that influence plays an important role in reducing the transformation enumeration in computing the canonical form. Compared with the state-of-the-art algorithm implemented in ABC, our influence-aided canonical form for exact NPN classification gains up to 5.5x speedup.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 117,
    "label": 16,
    "text": "Title: Detecting and Grounding Multi-Modal Media Manipulation\nAbstract: Misinformation has become a pressing issue. Fake media, in both visual and textual forms, is widespread on the web. While various deepfake detection and text fake news detection methods have been proposed, they are only designed for single-modality forgery based on binary classification, let alone analyzing and reasoning about subtle forgery traces across different modalities. In this paper, we highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM4). DGM4 aims to not only detect the authenticity of multi-modal media, but also ground the manipulated content (i.e., image bounding boxes and text tokens), which requires deeper reasoning of multi-modal media manipulation. To support a large-scale investigation, we construct the first DGM4 dataset, where image-text pairs are manipulated by various approaches, with rich annotation of diverse manipulations. Moreover, we propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities. 
HAMMER performs 1) manipulation-aware contrastive learning between two uni-modal encoders as shallow manipulation reasoning, and 2) modality-aware cross-attention by a multi-modal aggregator as deep manipulation reasoning. Dedicated manipulation detection and grounding heads are integrated from shallow to deep levels based on the interacted multi-modal information. Finally, we build an extensive benchmark and set up rigorous evaluation metrics for this new research problem. Comprehensive experiments demonstrate the superiority of our model; several valuable observations are also revealed to facilitate future research in multi-modal media manipulation.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 118,
    "label": 10,
    "text": "Title: GPT4Graph: Can Large Language Models Understand Graph Structured Data ? An Empirical Evaluation and Benchmarking\nAbstract: Large language models (LLMs) like ChatGPT have become indispensable to artificial general intelligence (AGI), demonstrating excellent performance in various natural language processing tasks. In the real world, graph data is ubiquitous and an essential part of AGI and prevails in domains like social network analysis, bioinformatics and recommender systems. The training corpus of large language models often includes some algorithmic components, which allows them to achieve certain effects on some graph data-related problems. However, there is still little research on their performance on a broader range of graph-structured data. In this study, we conduct an extensive investigation to assess the proficiency of LLMs in comprehending graph data, employing a diverse range of structural and semantic-related tasks. Our analysis encompasses 10 distinct tasks that evaluate the LLMs' capabilities in graph understanding. Through our study, we not only uncover the current limitations of language models in comprehending graph structures and performing associated reasoning tasks but also emphasize the necessity for further advancements and novel approaches to enhance their graph processing capabilities. Our findings contribute valuable insights towards bridging the gap between language models and graph understanding, paving the way for more effective graph mining and knowledge extraction.",
    "neighbors": [
      57,
      1238,
      1544,
      2109,
      2113,
      2136,
      2281
    ],
    "mask": "Train"
  },
  {
    "node_id": 119,
    "label": 16,
    "text": "Title: Fracture Detection in Pediatric Wrist Trauma X-ray Images Using YOLOv8 Algorithm\nAbstract: Hospital emergency departments frequently receive many bone fracture cases, with pediatric wrist trauma fractures accounting for the majority of them. Before pediatric surgeons perform surgery, they need to ask patients how the fracture occurred and analyze the fracture situation by interpreting X-ray images. The interpretation of X-ray images often requires a combination of techniques from radiologists and surgeons, which requires time-consuming specialized training. With the rise of deep learning in the field of computer vision, applying network models to fracture detection has become an important research topic. In this paper, we train the YOLOv8 model (the latest version of You Only Look Once) on the GRAZPEDWRI-DX dataset and use data augmentation to improve the model performance. The experimental results show that our model has reached state-of-the-art (SOTA) real-time performance. 
Specifically, compared to the YOLOv8s models, the mean average precision (mAP 50) of our models improves from 0.604 and 0.625 to 0.612 and 0.631 at input image sizes of 640 and 1024, respectively. To enable surgeons to use our model for fracture detection on pediatric wrist trauma X-ray images, we have designed the application \"Fracture Detection Using YOLOv8 App\" to assist surgeons in diagnosing fractures, reducing the probability of analysis errors, and providing more useful information for surgery. Our implementation code is released at https://github.com/RuiyangJu/Bone_Fracture_Detection_YOLOv8.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 120,
    "label": 24,
    "text": "Title: On student-teacher deviations in distillation: does it pay to disobey?\nAbstract: Knowledge distillation (KD) has been widely used to improve the test accuracy of a \"student\" network by training the student to mimic soft probabilities of a trained \"teacher\" network. Yet, it has been shown in recent work that, despite being trained to fit the teacher's probabilities, the student not only significantly deviates from these probabilities, but also performs even better than the teacher. Our work aims to reconcile this seemingly paradoxical observation by characterizing the precise nature of the student-teacher deviations, and by arguing how they can co-occur with better generalization. First, through experiments on image and language data, we identify that these deviations correspond to the student systematically exaggerating the confidence levels of the teacher. Next, we theoretically and empirically establish in some simple settings that KD also exaggerates the implicit bias of gradient descent in converging faster along the top eigendirections of the data. Finally, we demonstrate that this exaggerated bias effect can simultaneously result in both (a) the exaggeration of confidence and (b) the improved generalization of the student, thus offering a resolution to the apparent paradox. Our analysis brings existing theory and practice closer by considering the role of gradient descent in KD and by demonstrating the exaggerated bias effect in both theoretical and empirical settings.",
    "neighbors": [
      203
    ],
    "mask": "Train"
  },
  {
    "node_id": 121,
    "label": 30,
    "text": "Title: A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability\nAbstract: This paper presents the first comprehensive analysis of ChatGPT's Text-to-SQL ability. Given the recent emergence of the large-scale conversational language model ChatGPT and its impressive capabilities in both conversation and code generation, we sought to evaluate its Text-to-SQL performance. We conducted experiments on 12 benchmark datasets with different languages, settings, or scenarios, and the results demonstrate that ChatGPT has strong text-to-SQL abilities. Although there is still a gap from the current state-of-the-art (SOTA) model performance, considering that the experiment was conducted in a zero-shot scenario, ChatGPT's performance is still impressive. Notably, in the ADVETA (RPL) scenario, the zero-shot ChatGPT even outperforms the SOTA model that requires fine-tuning on the Spider dataset by 4.1\\%, demonstrating its potential for use in practical applications. 
To support further research in related fields, we have made the data generated by ChatGPT publicly available at https://github.com/THU-BPM/chatgpt-sql.", + "neighbors": [ + 924, + 1636, + 1797, + 2254 + ], + "mask": "Train" + }, + { + "node_id": 122, + "label": 24, + "text": "Title: Combining Slow and Fast: Complementary Filtering for Dynamics Learning\nAbstract: Modeling an unknown dynamical system is crucial in order to predict its future behavior. A standard approach is training recurrent models on measurement data. While these models typically provide exact short-term predictions, accumulating errors yield deteriorated long-term behavior. In contrast, models with reliable long-term predictions can often be obtained, either by training a robust but less detailed model, or by leveraging physics-based simulations. In both cases, inaccuracies in the models yield a lack of short-term details. Thus, different models with contrasting properties on different time horizons are available. This observation immediately raises the question: Can we obtain predictions that combine the best of both worlds? Inspired by sensor fusion tasks, we interpret the problem in the frequency domain and leverage classical methods from signal processing, in particular complementary filters. This filtering technique combines two signals by applying a high-pass filter to one signal and low-pass filtering the other. Essentially, the high-pass filter extracts high frequencies, whereas the low-pass filter extracts low frequencies. Applying this concept to dynamics model learning enables the construction of models that yield accurate long- and short-term predictions. Here, we propose two methods, one being purely learning-based and the other being a hybrid model that requires an additional physics-based simulator.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 123, + "label": 15, + "text": "Title: Sparq: A Custom RISC-V Vector Processor for Efficient Sub-Byte Quantized Inference\nAbstract: Convolutional Neural Networks (CNNs) are used in a wide range of applications, with full-precision CNNs achieving high accuracy at the expense of portability. Recent progress in quantization techniques has demonstrated that sub-byte Quantized Neural Networks (QNNs) achieve comparable or superior accuracy while significantly reducing the computational cost and memory footprint. However, sub-byte computation on commodity hardware is sub-optimal due to the lack of support for such precision. In this paper, we introduce Sparq, a Sub-byte vector Processor designed for the AcceleRation of QNN inference. This processor is based on a modified version of Ara, an open-source 64-bit RISC-V “V” compliant processor. Sparq is implemented in GLOBAL FOUNDRIES 22FDX FD-SOI technology and extends the Instruction Set Architecture (ISA) by adding a new multiply-shift-accumulate instruction to improve sub-byte computation efficiency. The floating-point unit is also removed to minimize area and power usage. To demonstrate Sparq performance, we implement an ultra-low-precision (1-bit to 4-bit) vectorized conv2d operation taking advantage of the dedicated hardware.
We show that Sparq can significantly accelerate sub-byte computations, achieving 3.2 times and 1.7 times acceleration over an optimized 16-bit 2D convolution for 2-bit and 4-bit quantization, respectively.", + "neighbors": [ + 1301 + ], + "mask": "Validation" + }, + { + "node_id": 124, + "label": 24, + "text": "Title: Improving Interpretability of Deep Sequential Knowledge Tracing Models with Question-centric Cognitive Representations\nAbstract: Knowledge tracing (KT) is a crucial technique to predict students’ future performance by observing their historical learning processes. Due to the powerful representation ability of deep neural networks, remarkable progress has been made by using deep learning techniques to solve the KT problem. The majority of existing approaches rely on the homogeneous question assumption that questions have equivalent contributions if they share the same set of knowledge components. Unfortunately, this assumption is inaccurate in real-world educational scenarios. Furthermore, it is very challenging to interpret the prediction results from the existing deep learning based KT models. Therefore, in this paper, we present QIKT, a question-centric interpretable KT model to address the above challenges. The proposed QIKT approach explicitly models students’ knowledge state variations at a fine-grained level with question-sensitive cognitive representations that are jointly learned from a question-centric knowledge acquisition module and a question-centric problem solving module. Meanwhile, QIKT utilizes an item response theory based prediction layer to generate interpretable prediction results. The proposed QIKT model is evaluated on three public real-world educational datasets. The results demonstrate that our approach is superior on the KT prediction task, and it outperforms a wide range of deep learning based KT models in terms of prediction accuracy with better model interpretability. To encourage reproducible results, we have provided all the datasets and code at https://pykt.org/.", + "neighbors": [ + 263 + ], + "mask": "Train" + }, + { + "node_id": 125, + "label": 10, + "text": "Title: Engineering LaCAM$^\\ast$: Towards Real-Time, Large-Scale, and Near-Optimal Multi-Agent Pathfinding\nAbstract: This paper addresses the challenges of real-time, large-scale, and near-optimal multi-agent pathfinding (MAPF) through enhancements to the recently proposed LaCAM* algorithm. LaCAM* is a scalable search-based algorithm that guarantees the eventual finding of optimal solutions for cumulative transition costs. While it has demonstrated remarkable planning success rates, surpassing various state-of-the-art MAPF methods, its initial solution quality is far from optimal, and its convergence speed to the optimum is slow. To overcome these limitations, this paper introduces several improvement techniques, partly drawing inspiration from other MAPF methods. We provide empirical evidence that the fusion of these techniques significantly improves the solution quality of LaCAM*, thus further pushing the boundaries of MAPF algorithms.", + "neighbors": [ + 1743 + ], + "mask": "Train" + }, + { + "node_id": 126, + "label": 24, + "text": "Title: AlpacaFarm: A Simulation Framework for Methods that Learn from Human Feedback\nAbstract: Large language models (LLMs) such as ChatGPT have seen widespread adoption due to their ability to follow user instructions well. Developing these LLMs involves a complex yet poorly understood workflow requiring training with human feedback.
Replicating and understanding this instruction-following process faces three major challenges: the high cost of data collection, the lack of trustworthy evaluation, and the absence of reference method implementations. We address these challenges with AlpacaFarm, a simulator that enables research and development for learning from feedback at a low cost. First, we design LLM prompts to simulate human feedback that are 45x cheaper than crowdworkers and display high agreement with humans. Second, we propose an automatic evaluation and validate it against human instructions obtained on real-world interactions. Third, we contribute reference implementations for several methods (PPO, best-of-n, expert iteration, and more) that learn from pairwise feedback. Finally, as an end-to-end validation of AlpacaFarm, we train and evaluate eleven models on 10k pairs of real human feedback and show that rankings of models trained in AlpacaFarm match rankings of models trained on human data. As a demonstration of the research possible in AlpacaFarm, we find that methods that use a reward model can substantially improve over supervised fine-tuning and that our reference PPO implementation leads to a +10% improvement in win-rate against Davinci003. We release all components of AlpacaFarm at https://github.com/tatsu-lab/alpaca_farm.", + "neighbors": [ + 430, + 761, + 811, + 855, + 1002, + 1007, + 1039, + 1052, + 1114, + 1203, + 1227, + 1249, + 1267, + 1346, + 1617, + 1647, + 1969, + 2016, + 2036, + 2087, + 2122, + 2257 + ], + "mask": "Train" + }, + { + "node_id": 127, + "label": 30, + "text": "Title: Language Models can Solve Computer Tasks\nAbstract: Agents capable of carrying out general tasks on a computer can improve efficiency and productivity by automating repetitive tasks and assisting in complex problem-solving. Ideally, such agents should be able to solve new computer tasks presented to them through natural language commands. However, previous approaches to this problem require large amounts of expert demonstrations and task-specific reward functions, both of which are impractical for new tasks. In this work, we show that a pre-trained large language model (LLM) agent can execute computer tasks guided by natural language using a simple prompting scheme where the agent Recursively Criticizes and Improves its output (RCI). The RCI approach significantly outperforms existing LLM methods for automating computer tasks and surpasses supervised learning (SL) and reinforcement learning (RL) approaches on the MiniWoB++ benchmark. We compare multiple LLMs and find that RCI with the InstructGPT-3+RLHF LLM is state-of-the-art on MiniWoB++, using only a handful of demonstrations per task rather than tens of thousands, and without a task-specific reward function. Furthermore, we demonstrate RCI prompting's effectiveness in enhancing LLMs' reasoning abilities on a suite of natural language reasoning tasks, outperforming chain of thought (CoT) prompting. We find that RCI combined with CoT performs better than either separately. Our code can be found here: https://github.com/posgnu/rci-agent.", + "neighbors": [ + 57, + 240, + 667, + 704, + 1039, + 1044, + 1047, + 1128, + 1267, + 1306, + 1490, + 1659, + 1810, + 1878, + 1906, + 2016, + 2092, + 2136, + 2166 + ], + "mask": "Validation" + }, + { + "node_id": 128, + "label": 16, + "text": "Title: EMP-SSL: Towards Self-Supervised Learning in One Training Epoch\nAbstract: Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representation. 
Despite the empirical success, most self-supervised learning methods are rather \"inefficient\" learners, typically taking hundreds of training epochs to fully converge. In this work, we show that the key to efficient self-supervised learning is to increase the number of crops from each image instance. Leveraging one of the state-of-the-art SSL methods, we introduce a simplistic self-supervised learning method called Extreme-Multi-Patch Self-Supervised-Learning (EMP-SSL) that does not rely on many heuristic techniques for SSL such as weight sharing between the branches, feature-wise normalization, output quantization, and stop gradient, and reduces the training epochs by two orders of magnitude. We show that the proposed method is able to converge to 85.1% on CIFAR-10, 58.5% on CIFAR-100, 38.1% on Tiny ImageNet and 58.5% on ImageNet-100 in just one epoch. Furthermore, the proposed method achieves 91.5% on CIFAR-10, 70.1% on CIFAR-100, 51.5% on Tiny ImageNet and 78.9% on ImageNet-100 with linear probing in less than ten training epochs. In addition, we show that EMP-SSL shows significantly better transferability to out-of-domain datasets compared to baseline SSL methods. We will release the code at https://github.com/tsb0601/EMP-SSL.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 129, + "label": 9, + "text": "Title: Tight (Double) Exponential Bounds for NP-Complete Problems: Treewidth and Vertex Cover Parameterizations\nAbstract: Treewidth is an important parameter that yields tractability for many problems. For example, graph problems expressible in Monadic Second Order (MSO) logic and QUANTIFIED SAT or, more generally, QUANTIFIED CSP, are fixed-parameter tractable parameterized by the treewidth of the input's (primal) graph plus the length of the MSO-formula [Courcelle, Information&Computation 1990] and the quantifier rank [Chen, ECAI 2004], respectively. The algorithms generated by these (meta-)results have running times whose dependence on treewidth is a tower of exponents. A conditional lower bound by Fichte et al. [LICS 2020] shows that, for QUANTIFIED SAT, the height of this tower is equal to the number of quantifier alternations. Lower bounds showing that at least double-exponential factors in the running time are necessary exhibit the extraordinary computational hardness of such problems, and are rare: there are very few (for treewidth tw and vertex cover vc parameterizations) and they are for $\Sigma_2^p$-, $\Sigma_3^p$- or #NP-complete problems. We show, for the first time, that it is not necessary to go higher up in the polynomial hierarchy to obtain such lower bounds. Specifically, for the well-studied NP-complete metric graph problems METRIC DIMENSION, STRONG METRIC DIMENSION, and GEODETIC SET, we prove that they do not admit $2^{2^{o(tw)}} \cdot n^{O(1)}$-time algorithms, even on bounded diameter graphs, unless the ETH fails. For STRONG METRIC DIMENSION, this lower bound holds even for vc. This is impossible for the other two as they admit $2^{O({vc}^2)} \cdot n^{O(1)}$-time algorithms. We show that, unless the ETH fails, they do not admit $2^{o({vc}^2)}\cdot n^{O(1)}$-time algorithms, thereby adding to the short list of problems admitting such lower bounds. The latter results also yield lower bounds on the vertex-kernel sizes.
We complement all our lower bounds with matching upper bounds.", + "neighbors": [ + 2019 + ], + "mask": "Test" + }, + { + "node_id": 130, + "label": 27, + "text": "Title: Prediction of SLAM ATE Using an Ensemble Learning Regression Model and 1-D Global Pooling of Data Characterization\nAbstract: Robustness and resilience of simultaneous localization and mapping (SLAM) are critical requirements for modern autonomous robotic systems. One of the essential steps to achieve robustness and resilience is the ability of SLAM to have an integrity measure for its localization estimates, and thus, have internal fault tolerance mechanisms to deal with performance degradation. In this work, we introduce a novel method for predicting SLAM localization error based on the characterization of raw sensor inputs. The proposed method relies on using a random forest regression model trained on 1-D global pooled features that are generated from characterized raw sensor data. The model is validated by using it to predict the performance of ORB-SLAM3 on three different datasets running on four different operating modes, resulting in an average prediction accuracy of up to 94.7\\%. The paper also studies the impact of 12 different 1-D global pooling functions on regression quality, and the superiority of 1-D global averaging is quantitatively proven. Finally, the paper studies the quality of prediction with limited training data, and proves that we are able to maintain proper prediction quality when only 20 \\% of the training examples are used for training, which highlights how the proposed model can optimize the evaluation footprint of SLAM systems.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 131, + "label": 16, + "text": "Title: Grouped Knowledge Distillation for Deep Face Recognition\nAbstract: Compared with the feature-based distillation methods, logits distillation can liberalize the requirements of consistent feature dimension between teacher and student networks, while the performance is deemed inferior in face recognition. One major challenge is that the light-weight student network has difficulty fitting the target logits due to its low model capacity, which is attributed to the significant number of identities in face recognition. Therefore, we seek to probe the target logits to extract the primary knowledge related to face identity, and discard the others, to make the distillation more achievable for the student network. Specifically, there is a tail group with near-zero values in the prediction, containing minor knowledge for distillation. To provide a clear perspective of its impact, we first partition the logits into two groups, i.e., Primary Group and Secondary Group, according to the cumulative probability of the softened prediction. Then, we reorganize the Knowledge Distillation (KD) loss of grouped logits into three parts, i.e., Primary-KD, Secondary-KD, and Binary-KD. Primary-KD refers to distilling the primary knowledge from the teacher, Secondary-KD aims to refine minor knowledge but increases the difficulty of distillation, and Binary-KD ensures the consistency of knowledge distribution between teacher and student. We experimentally found that (1) Primary-KD and Binary-KD are indispensable for KD, and (2) Secondary-KD is the culprit restricting KD at the bottleneck. Therefore, we propose a Grouped Knowledge Distillation (GKD) that retains the Primary-KD and Binary-KD but omits Secondary-KD in the ultimate KD loss calculation. 
Extensive experimental results on popular face recognition benchmarks demonstrate the superiority of proposed GKD over state-of-the-art methods.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 132, + "label": 24, + "text": "Title: Thompson Sampling for Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit\nAbstract: We study the real-valued combinatorial pure exploration of the multi-armed bandit (R-CPE-MAB) problem. In R-CPE-MAB, a player is given $d$ stochastic arms, and the reward of each arm $s\\in\\{1, \\ldots, d\\}$ follows an unknown distribution with mean $\\mu_s$. In each time step, a player pulls a single arm and observes its reward. The player's goal is to identify the optimal \\emph{action} $\\boldsymbol{\\pi}^{*} = \\argmax_{\\boldsymbol{\\pi} \\in \\mathcal{A}} \\boldsymbol{\\mu}^{\\top}\\boldsymbol{\\pi}$ from a finite-sized real-valued \\emph{action set} $\\mathcal{A}\\subset \\mathbb{R}^{d}$ with as few arm pulls as possible. Previous methods in the R-CPE-MAB assume that the size of the action set $\\mathcal{A}$ is polynomial in $d$. We introduce an algorithm named the Generalized Thompson Sampling Explore (GenTS-Explore) algorithm, which is the first algorithm that can work even when the size of the action set is exponentially large in $d$. We also introduce a novel problem-dependent sample complexity lower bound of the R-CPE-MAB problem, and show that the GenTS-Explore algorithm achieves the optimal sample complexity up to a problem-dependent constant factor.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 133, + "label": 24, + "text": "Title: A Provable Splitting Approach for Symmetric Nonnegative Matrix Factorization\nAbstract: The symmetric Nonnegative Matrix Factorization (NMF), a special but important class of the general NMF, has found numerous applications in data analysis such as various clustering tasks. Unfortunately, designing fast algorithms for the symmetric NMF is not as easy as for its nonsymmetric counterpart, since the latter admits the splitting property that allows state-of-the-art alternating-type algorithms. To overcome this issue, we first split the decision variable and transform the symmetric NMF to a penalized nonsymmetric one, paving the way for designing efficient alternating-type algorithms. We then show that solving the penalized nonsymmetric reformulation returns a solution to the original symmetric NMF. Moreover, we design a family of alternating-type algorithms and show that they all admit strong convergence guarantee: the generated sequence of iterates is convergent and converges at least sublinearly to a critical point of the original symmetric NMF. Finally, we conduct experiments on both synthetic data and real image clustering to support our theoretical results and demonstrate the performance of the alternating-type algorithms.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 134, + "label": 36, + "text": "Title: Sequential Principal-Agent Problems with Communication: Efficient Computation and Learning\nAbstract: We study a sequential decision making problem between a principal and an agent with incomplete information on both sides. In this model, the principal and the agent interact in a stochastic environment, and each is privy to observations about the state not available to the other. The principal has the power of commitment, both to elicit information from the agent and to provide signals about her own information. 
The principal and the agent communicate their signals to each other, and select their actions independently based on this communication. Each player receives a payoff based on the state and their joint actions, and the environment moves to a new state. The interaction continues over a finite time horizon, and both players act to optimize their own total payoffs over the horizon. Our model encompasses as special cases stochastic games of incomplete information and POMDPs, as well as sequential Bayesian persuasion and mechanism design problems. We study both computation of optimal policies and learning in our setting. While the general problems are computationally intractable, we study algorithmic solutions under a conditional independence assumption on the underlying state-observation distributions. We present a polynomial-time algorithm to compute the principal's optimal policy up to an additive approximation. Additionally, we show an efficient learning algorithm for the case where the transition probabilities are not known beforehand. The algorithm guarantees sublinear regret for both players.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 135, + "label": 23, + "text": "Title: Techniques for Improving the Energy Efficiency of Mobile Apps: A Taxonomy and Systematic Literature Review\nAbstract: Building energy-efficient software is an increasingly important task for mobile developers. However, a cumulative body of knowledge of techniques that support this goal does not exist. We conduct a systematic literature review to gather information on existing techniques that allow developers to increase energy efficiency in mobile apps. Based on a synthesis of the 91 included primary studies, we propose a taxonomy of techniques for improving the energy efficiency of mobile apps. The taxonomy includes seven main categories of techniques and serves as a collection of available methods for developers and as a reference guide for software testers when performing energy efficiency testing by means of benchmark tests.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 136, + "label": 16, + "text": "Title: CIEM: Contrastive Instruction Evaluation Method for Better Instruction Tuning\nAbstract: Nowadays, the research on Large Vision-Language Models (LVLMs) has been significantly promoted thanks to the success of Large Language Models (LLMs). Nevertheless, these Vision-Language Models (VLMs) suffer from the drawback of hallucination -- due to insufficient understanding of vision and language modalities, VLMs may generate incorrect perception information in downstream applications, for example, captioning a non-existent entity. To address the hallucination phenomenon, on the one hand, we introduce a Contrastive Instruction Evaluation Method (CIEM), which is an automatic pipeline that leverages an annotated image-text dataset coupled with an LLM to generate factual/contrastive question-answer pairs for the evaluation of the hallucination of VLMs. On the other hand, based on CIEM, we further propose a new instruction tuning method called CIT (the abbreviation of Contrastive Instruction Tuning) to alleviate the hallucination of VLMs by automatically producing high-quality factual/contrastive question-answer pairs and corresponding justifications for model tuning.
Through extensive experiments on CIEM and CIT, we pinpoint the hallucination issues commonly present in existing VLMs, the inability of current instruction-tuning datasets to handle the hallucination phenomenon, and the superiority of CIT-tuned VLMs over both CIEM and public datasets.", + "neighbors": [ + 392, + 811, + 887, + 1052, + 1148, + 1485, + 1863, + 2036, + 2155 + ], + "mask": "Validation" + }, + { + "node_id": 137, + "label": 16, + "text": "Title: Depth-Aware Generative Adversarial Network for Talking Head Video Generation\nAbstract: Talking head video generation aims to produce a synthetic human face video that contains the identity and pose information respectively from a given source image and a driving video. Existing works for this task heavily rely on 2D representations (e.g. appearance and motion) learned from the input images. However, dense 3D facial geometry (e.g. pixel-wise depth) is extremely important for this task as it is particularly beneficial for us to essentially generate accurate 3D face structures and distinguish noisy information from the possibly cluttered background. Nevertheless, dense 3D geometry annotations are prohibitively costly for videos and are typically not available for this video generation task. In this paper, we introduce a self-supervised face-depth learning method to automatically recover dense 3D facial geometry (i.e. depth) from the face videos without the requirement of any expensive 3D annotation data. Based on the learned dense depth maps, we further propose to leverage them to estimate sparse facial keypoints that capture the critical movement of the human head. In a denser way, the depth is also utilized to learn 3D-aware cross-modal (i.e. appearance and depth) attention to guide the generation of motion fields for warping source image representations. All these contributions compose a novel depth-aware generative adversarial network (DaGAN) for talking head generation. Extensive experiments demonstrate that our proposed method can generate highly realistic faces, and achieve significant results on unseen human faces. The code is available at https://github.com/harlanhong/CVPR2022-DaGAN.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 138, + "label": 16, + "text": "Title: Deep Learning based Fingerprint Presentation Attack Detection: A Comprehensive Survey\nAbstract: The vulnerabilities of fingerprint authentication systems have raised security concerns when adapting them to highly secure access-control applications. Therefore, Fingerprint Presentation Attack Detection (FPAD) methods are essential for ensuring reliable fingerprint authentication. Owing to the lack of generation capacity of traditional handcrafted approaches, deep learning-based FPAD has become mainstream and has achieved remarkable performance in the past decade. Existing reviews have focused more on hand-crafted rather than deep learning-based methods, which are outdated. To stimulate future research, we concentrate only on recent deep-learning-based FPAD methods. In this paper, we first briefly introduce the most common Presentation Attack Instruments (PAIs) and publicly available fingerprint Presentation Attack (PA) datasets. We then describe existing deep-learning FPAD methods by categorizing them into contact, contactless, and smartphone-based approaches.
Finally, we conclude the paper by discussing the open challenges at the current stage and emphasizing potential future perspectives.", + "neighbors": [ + 301 + ], + "mask": "Test" + }, + { + "node_id": 139, + "label": 35, + "text": "Title: SWAM: Revisiting Swap and OOMK for Improving Application Responsiveness on Mobile Devices\nAbstract: Existing memory reclamation policies on mobile devices may no longer be valid because they have negative effects on the response time of running applications. In this paper, we propose SWAM, a new integrated memory management technique that complements the shortcomings of both the swapping and killing mechanisms in mobile devices and improves application responsiveness. SWAM consists of (1) Adaptive Swap, which performs swapping adaptively into memory or storage device while managing the swap space dynamically, (2) OOM Cleaner, which reclaims shared object pages in the swap space to secure available memory and storage space, and (3) EOOM Killer, which terminates processes in the worst case while prioritizing the lowest initialization cost applications as victim processes first. Experimental results demonstrate that SWAM significantly reduces the number of applications killed by OOMK (6.5x lower), and improves application launch time (36% faster) and response time (41% faster), compared to the conventional schemes.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 140, + "label": 16, + "text": "Title: Change detection needs change information: improving deep 3D point cloud change detection\nAbstract: Change detection is an important task to rapidly identify modified areas, in particular when multi-temporal data are concerned. In landscapes with complex geometry such as urban environments, vertical information turns out to be very useful knowledge, not only to highlight changes but also to classify them into different categories. In this paper, we focus on change segmentation directly using raw 3D point clouds (PCs), to avoid any loss of information due to rasterization processes. While deep learning has recently proved its effectiveness for this particular task by encoding the information through Siamese networks, we investigate here the idea of also using change information in early steps of deep networks. To do this, we first propose to provide the Siamese KPConv State-of-The-Art (SoTA) network with hand-crafted features and especially a change-related one. This improves the mean of Intersection over Union (IoU) over classes of change by 4.70\%. Considering that the major improvement was obtained thanks to the change-related feature, we propose three new architectures to address 3D PC change segmentation: OneConvFusion, Triplet KPConv, and Encoder Fusion SiamKPConv. All three networks take change information into account in early steps and outperform SoTA methods. In particular, the last network, entitled Encoder Fusion SiamKPConv, overtakes SoTA by more than 5% of mean IoU over classes of change, emphasizing the value of having the network focus on change information for the change detection task.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 141, + "label": 24, + "text": "Title: AdaPlus: Integrating Nesterov Momentum and Precise Stepsize Adjustment on AdamW Basis\nAbstract: This paper proposes an efficient optimizer called AdaPlus which integrates Nesterov momentum and precise stepsize adjustment on AdamW basis.
AdaPlus combines the advantages of AdamW, Nadam, and AdaBelief and, in particular, does not introduce any extra hyper-parameters. We perform extensive experimental evaluations on three machine learning tasks to validate the effectiveness of AdaPlus. The experiment results validate that AdaPlus (i) is the best adaptive method, performing comparably with (or even slightly better than) SGD with momentum on image classification tasks, and (ii) outperforms other state-of-the-art optimizers on language modeling tasks and exhibits the highest stability when training GANs. The experiment code of AdaPlus is available at: https://github.com/guanleics/AdaPlus.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 142, + "label": 16, + "text": "Title: Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers\nAbstract: In this paper, we present our advanced solutions to the two sub-challenges of Affective Behavior Analysis in the wild (ABAW) 2023: the Emotional Reaction Intensity (ERI) Estimation Challenge and Expression (Expr) Classification Challenge. ABAW 2023 aims to tackle the challenge of affective behavior analysis in natural contexts, with the ultimate goal of creating intelligent machines and robots that possess the ability to comprehend human emotions, feelings, and behaviors. For the Expression Classification Challenge, we propose a streamlined approach that handles the challenges of classification effectively. However, our main contribution lies in our use of diverse models and tools to extract multimodal features such as audio and video cues from the Hume-Reaction dataset. By studying, analyzing, and combining these features, we significantly enhance the model’s accuracy for sentiment prediction in a multimodal context. Furthermore, our method achieves outstanding results on the Emotional Reaction Intensity (ERI) Estimation Challenge, surpassing the baseline method by an impressive 84% increase, as measured by the Pearson Coefficient, on the validation dataset.", + "neighbors": [ + 1265, + 1541, + 2055 + ], + "mask": "Train" + }, + { + "node_id": 143, + "label": 30, + "text": "Title: C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models\nAbstract: New NLP benchmarks are urgently needed to align with the rapid development of large language models (LLMs). We present C-Eval, the first comprehensive Chinese evaluation suite designed to assess advanced knowledge and reasoning abilities of foundation models in a Chinese context. C-Eval comprises multiple-choice questions across four difficulty levels: middle school, high school, college, and professional. The questions span 52 diverse disciplines, ranging from humanities to science and engineering. C-Eval is accompanied by C-Eval Hard, a subset of very challenging subjects in C-Eval that requires advanced reasoning abilities to solve. We conduct a comprehensive evaluation of the most advanced LLMs on C-Eval, including both English- and Chinese-oriented models. Results indicate that only GPT-4 could achieve an average accuracy of over 60%, suggesting that there is still significant room for improvement for current LLMs.
We anticipate C-Eval will help analyze important strengths and shortcomings of foundation models, and foster their development and growth for Chinese users.", + "neighbors": [ + 685, + 704, + 891, + 949, + 1001, + 1034, + 1052, + 1655, + 1950, + 2122, + 2215 + ], + "mask": "Train" + }, + { + "node_id": 144, + "label": 16, + "text": "Title: DataComp: In search of the next generation of multimodal datasets\nAbstract: Multimodal datasets are a critical component in recent breakthroughs such as Stable Diffusion and GPT-4, yet their design does not receive the same research attention as model architectures or training algorithms. To address this shortcoming in the ML ecosystem, we introduce DataComp, a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing the resulting model on 38 downstream test sets. Our benchmark consists of multiple compute scales spanning four orders of magnitude, which enables the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow leads to better training sets. In particular, our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training procedure and compute. We release DataComp and all accompanying code at www.datacomp.ai.", + "neighbors": [ + 719, + 2064 + ], + "mask": "Train" + }, + { + "node_id": 145, + "label": 24, + "text": "Title: One-Versus-Others Attention: Scalable Multimodal Integration\nAbstract: Multimodal learning models have become increasingly important as they surpass single-modality approaches on diverse tasks ranging from question-answering to autonomous driving. Despite the importance of multimodal learning, existing efforts focus on NLP applications, where the number of modalities is typically less than four (audio, video, text, images). However, data inputs in other domains, such as the medical field, may include X-rays, PET scans, MRIs, genetic screening, clinical notes, and more, creating a need for both efficient and accurate information fusion. Many state-of-the-art models rely on pairwise cross-modal attention, which does not scale well for applications with more than three modalities. For $n$ modalities, computing attention will result in $n \\choose 2$ operations, potentially requiring considerable amounts of computational resources. To address this, we propose a new domain-neutral attention mechanism, One-Versus-Others (OvO) attention, that scales linearly with the number of modalities and requires only $n$ attention operations, thus offering a significant reduction in computational complexity compared to existing cross-modal attention algorithms. 
Using three diverse real-world datasets as well as an additional simulation experiment, we show that our method improves performance compared to popular fusion techniques while decreasing computation costs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 146, + "label": 8, + "text": "Title: Beyond Deep Reinforcement Learning: A Tutorial on Generative Diffusion Models in Network Optimization\nAbstract: Generative Diffusion Models (GDMs) have emerged as a transformative force in the realm of Generative Artificial Intelligence (GAI), demonstrating their versatility and efficacy across a variety of applications. The ability to model complex data distributions and generate high-quality samples has made GDMs particularly effective in tasks such as image generation and reinforcement learning. Furthermore, their iterative nature, which involves a series of noise addition and denoising steps, is a powerful and unique approach to learning and generating data. This paper serves as a comprehensive tutorial on applying GDMs in network optimization tasks. We delve into the strengths of GDMs, emphasizing their wide applicability across various domains, such as vision, text, and audio generation. We detail how GDMs can be effectively harnessed to solve complex optimization problems inherent in networks. The paper first provides a basic background of GDMs and their applications in network optimization. This is followed by a series of case studies, showcasing the integration of GDMs with Deep Reinforcement Learning (DRL), incentive mechanism design, Semantic Communications (SemCom), Internet of Vehicles (IoV) networks, etc. These case studies underscore the practicality and efficacy of GDMs in real-world scenarios, offering insights into network design. We conclude with a discussion on potential future directions for GDM research and applications, providing major insights into how they can continue to shape the future of network optimization.", + "neighbors": [ + 490, + 1601, + 1684, + 1863, + 1908, + 2059, + 2245 + ], + "mask": "Train" + }, + { + "node_id": 147, + "label": 31, + "text": "Title: Duplicate Question Retrieval and Confirmation Time Prediction in Software Communities\nAbstract: Community Question Answering (CQA) in different domains is growing at a large scale because of the availability of several platforms and the huge amount of shareable information among users. With the rapid growth of such online platforms, a massive amount of archived data makes it difficult for moderators to retrieve possible duplicates for a new question and to identify and confirm existing question pairs as duplicates at the right time. This problem is even more critical in CQAs corresponding to large software systems like askubuntu, where moderators need to be experts to comprehend something as a duplicate. Note that the prime challenge in such CQA platforms is that the moderators are themselves experts and are therefore usually extremely busy, with their time being extraordinarily expensive. To facilitate the task of the moderators, in this work, we have tackled two significant issues for the askubuntu CQA platform: (1) retrieval of duplicate questions given a new question and (2) duplicate question confirmation time prediction. In the first task, we focus on retrieving duplicate questions from a question pool for a particular newly posted question. In the second task, we solve a regression problem to rank a pair of questions that could potentially take a long time to get confirmed as duplicates.
For duplicate question retrieval, we propose a Siamese neural network-based approach exploiting both text- and network-based features, which outperforms several state-of-the-art baseline techniques. Our method outperforms DupPredictor and DUPE by 5% and 7%, respectively. For duplicate confirmation time prediction, we use both standard machine learning models and neural networks along with text- and graph-based features. We obtain Spearman's rank correlations of 0.20 and 0.213 (statistically significant) for text- and graph-based features, respectively.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 148, + "label": 30, + "text": "Title: Semantic Parsing for Conversational Question Answering over Knowledge Graphs\nAbstract: In this paper, we are interested in developing semantic parsers which understand natural language questions embedded in a conversation with a user and ground them to formal queries over definitions in a general purpose knowledge graph (KG) with very large vocabularies (covering thousands of concept names and relations, and millions of entities). To this end, we develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to execution results thereof. We present two different semantic parsing approaches and highlight the challenges of the task: dealing with large vocabularies, modelling conversation context, predicting queries with multiple entities, and generalising to new questions at test time. We hope our dataset will serve as a useful testbed for the development of conversational semantic parsers. Our dataset and models are released at https://github.com/EdinburghNLP/SPICE.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 149, + "label": 30, + "text": "Title: Jointly Optimizing Translations and Speech Timing to Improve Isochrony in Automatic Dubbing\nAbstract: Automatic dubbing (AD) is the task of translating the original speech in a video into target language speech. The new target language speech should satisfy isochrony; that is, the new speech should be time-aligned with the original video, including mouth movements, pauses, hand gestures, etc. In this paper, we propose training a model that directly optimizes both the translation as well as the speech duration of the generated translations. We show that this system generates speech that better matches the timing of the original speech, compared to prior work, while simplifying the system architecture.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 150, + "label": 39, + "text": "Title: The Packing Chromatic Number of the Infinite Square Grid is 15\nAbstract: A packing $k$-coloring is a natural variation on the standard notion of graph $k$-coloring, where vertices are assigned numbers from $\{1, \ldots, k\}$, and any two vertices assigned a common color $c \in \{1, \ldots, k\}$ need to be at a distance greater than $c$ (as opposed to $1$, in standard graph colorings). Despite a sequence of incremental work, determining the packing chromatic number of the infinite square grid has remained an open problem since its introduction in 2002. We culminate the search by proving this number to be 15. We achieve this result by improving the best-known method for this problem by roughly two orders of magnitude. The most important technique to boost performance is a novel, surprisingly effective propositional encoding for packing colorings. Additionally, we developed an alternative symmetry-breaking method.
Since both new techniques are more complex than existing techniques for this problem, a verified approach is required to trust them. We include both techniques in a proof of unsatisfiability, reducing the trusted core to the correctness of the direct encoding.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 151, + "label": 16, + "text": "Title: Pluralistic Aging Diffusion Autoencoder\nAbstract: Face aging is an ill-posed problem because multiple plausible aging patterns may correspond to a given input. Most existing methods often produce one deterministic estimation. This paper proposes a novel CLIP-driven Pluralistic Aging Diffusion Autoencoder (PADA) to enhance the diversity of aging patterns. First, we employ diffusion models to generate diverse low-level aging details via a sequential denoising reverse process. Second, we present Probabilistic Aging Embedding (PAE) to capture diverse high-level aging patterns, which represents age information as probabilistic distributions in the common CLIP latent space. A text-guided KL-divergence loss is designed to guide this learning. Our method can achieve pluralistic face aging conditioned on open-world aging texts and arbitrary unseen face images. Qualitative and quantitative experiments demonstrate that our method can generate more diverse and high-quality plausible aging results.", + "neighbors": [ + 2276 + ], + "mask": "Train" + }, + { + "node_id": 152, + "label": 5, + "text": "Title: BFRT: Blockchained Federated Learning for Real-time Traffic Flow Prediction\nAbstract: Accurate real-time traffic flow prediction can be leveraged to relieve traffic congestion and associated negative impacts. The existing centralized deep learning methodologies have demonstrated high prediction accuracy, but suffer from privacy concerns due to the sensitive nature of transportation data. Moreover, the emerging literature on traffic prediction by distributed learning approaches, including federated learning, primarily focuses on offline learning. This paper proposes BFRT, a blockchained federated learning architecture for online traffic flow prediction using real-time data and edge computing. The proposed approach provides privacy for the underlying data, while enabling decentralized model training in real-time at the Internet of Vehicles edge. We federate GRU and LSTM models and conduct extensive experiments with dynamically collected arterial traffic data shards. We prototype the proposed permissioned blockchain network on Hyperledger Fabric and perform extensive tests using virtual machines to simulate the edge nodes. Experimental results show that our models outperform the centralized models, highlighting the feasibility of our approach for facilitating privacy-preserving and decentralized real-time traffic flow prediction.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 153, + "label": 16, + "text": "Title: Spatially Adaptive Self-Supervised Learning for Real-World Image Denoising\nAbstract: Significant progress has been made in self-supervised image denoising (SSID) in recent years. However, most methods focus on dealing with spatially independent noise, and they have little practicality on real-world sRGB images with spatially correlated noise. Although pixel-shuffle downsampling has been suggested for breaking the noise correlation, it breaks the original information of images, which limits the denoising performance.
In this paper, we propose a novel perspective to solve this problem, i.e., seeking spatially adaptive supervision for real-world sRGB image denoising. Specifically, we take into account the respective characteristics of flat and textured regions in noisy images, and construct supervisions for them separately. For flat areas, the supervision can be safely derived from non-adjacent pixels, which are far enough from the current pixel to exclude the influence of noise-correlated ones. We extend the blind-spot network to a blind-neighborhood network (BNN) to provide supervision on flat areas. For textured regions, the supervision has to be closely related to the content of adjacent pixels, and we present a locally aware network (LAN) to meet this requirement, while LAN itself is selectively supervised with the output of BNN. Combining these two supervisions, a denoising network (e.g., U-Net) can be well trained. Extensive experiments show that our method performs favorably against state-of-the-art SSID methods on real-world sRGB photographs. The code is available at https://github.com/nagejacob/SpatiallyAdaptiveSSID.", + "neighbors": [ + 394 + ], + "mask": "Test" + }, + { + "node_id": 154, + "label": 16, + "text": "Title: Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality\nAbstract: Diffusion models have recently been successfully applied for the visual synthesis of strikingly realistic-appearing images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), which was originally developed in the context of the detection of adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the \"LSUN-Bedroom\" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes.", + "neighbors": [ + 1900, + 2021, + 2279 + ], + "mask": "Validation" + }, + { + "node_id": 155, + "label": 2, + "text": "Title: Compositional Solution of Mean Payoff Games by String Diagrams\nAbstract: Following our recent development of a compositional model checking algorithm for Markov decision processes, we present a compositional framework for solving mean payoff games (MPGs). The framework is derived from category theory, specifically that of monoidal categories: MPGs (extended with open ends) get composed in so-called string diagrams and thus organized in a monoidal category; their solution is then expressed as a functor, whose preservation properties embody compositionality. As usual, the key question for compositionality is how to enrich the semantic domain; the categorical framework gives informed guidance in solving the question by singling out the algebraic structure required in the extended semantic domain.
We implemented our compositional solution in Haskell; depending on benchmarks, it can outperform an existing algorithm by an order of magnitude.", + "neighbors": [ + 707 + ], + "mask": "Train" + }, + { + "node_id": 156, + "label": 26, + "text": "Title: The blame game: Understanding blame assignment in social media\nAbstract: Cognitive and psychological studies on morality have proposed underlying linguistic and semantic factors. However, laboratory experiments in the philosophical literature often lack the nuances and complexity of real life. This paper examines how well the findings of these cognitive studies generalize to a corpus of over 30,000 narratives of tense social situations submitted to a popular social media forum. These narratives describe interpersonal moral situations or misgivings; other users judge from the post whether the author (protagonist) or the opposing side (antagonist) is morally culpable. Whereas previous work focuses on predicting the polarity of normative behaviors, we extend and apply natural language processing (NLP) techniques to understand the effects of descriptions of the people involved in these posts. We conduct extensive experiments to investigate the effect sizes of features to understand how they affect the assignment of blame on social media. Our findings show that aggregating psychology theories enables understanding real-life moral situations. Moreover, our results suggest that there exist biases in blame assignment on social media, such as males are more likely to receive blame no matter whether they are protagonists or antagonists.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 157, + "label": 13, + "text": "Title: A Static Analysis of Informed Down-Samples\nAbstract: We present an analysis of the loss of population-level test coverage induced by different down-sampling strategies when combined with lexicase selection. We study recorded populations from the first generation of genetic programming runs, as well as entirely synthetic populations. Our findings verify the hypothesis that informed down-sampling better maintains population-level test coverage when compared to random down-sampling. Additionally, we show that both forms of down-sampling cause greater test coverage loss than standard lexicase selection with no down-sampling. However, given more information about the population, we found that informed down-sampling can further reduce its test coverage loss. We also recommend wider adoption of the static population analyses we present in this work.", + "neighbors": [ + 1017, + 1850 + ], + "mask": "Train" + }, + { + "node_id": 158, + "label": 16, + "text": "Title: PoSynDA: Multi-Hypothesis Pose Synthesis Domain Adaptation for Robust 3D Human Pose Estimation\nAbstract: The current 3D human pose estimators face challenges in adapting to new datasets due to the scarcity of 2D-3D pose pairs in target domain training sets. We present the \\textit{Multi-Hypothesis \\textbf{P}ose \\textbf{Syn}thesis \\textbf{D}omain \\textbf{A}daptation} (\\textbf{PoSynDA}) framework to overcome this issue without extensive target domain annotation. Utilizing a diffusion-centric structure, PoSynDA simulates the 3D pose distribution in the target domain, filling the data diversity gap. By incorporating a multi-hypothesis network, it creates diverse pose hypotheses and aligns them with the target domain. 
Target-specific source augmentation obtains the target domain distribution data from the source domain by decoupling the scale and position parameters. The teacher-student paradigm and low-rank adaptation further refine the process. PoSynDA demonstrates competitive performance on benchmarks, such as Human3.6M, MPI-INF-3DHP, and 3DPW, even comparable with the target-trained MixSTE model~\\cite{zhang2022mixste}. This work paves the way for the practical application of 3D human pose estimation. The code is available at https://github.com/hbing-l/PoSynDA.", + "neighbors": [ + 422, + 1605, + 2009 + ], + "mask": "Train" + }, + { + "node_id": 159, + "label": 24, + "text": "Title: FedLE: Federated Learning Client Selection with Lifespan Extension for Edge IoT Networks\nAbstract: Federated learning (FL) is a distributed and privacy-preserving learning framework for predictive modeling with massive data generated at the edge by Internet of Things (IoT) devices. One major challenge preventing the wide adoption of FL in IoT is the pervasive power supply constraints of IoT devices due to the intensive energy consumption of battery-powered clients for local training and model updates. Low battery levels of clients eventually lead to their early dropouts from edge networks, loss of training data jeopardizing the performance of FL, and their availability to perform other designated tasks. In this paper, we propose FedLE, an energy-efficient client selection framework that enables lifespan extension of edge IoT networks. In FedLE, the clients first run for a minimum epoch to generate their local model update. The models are partially uploaded to the server for calculating similarities between each pair of clients. Clustering is performed against these client pairs to identify those with similar model distributions. In each round, low-powered clients have a lower probability of being selected, delaying the draining of their batteries. Empirical studies show that FedLE outperforms baselines on benchmark datasets and lasts more training rounds than FedAvg with battery power constraints.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 160, + "label": 24, + "text": "Title: Learning DAGs from Data with Few Root Causes\nAbstract: We present a novel perspective and algorithm for learning directed acyclic graphs (DAGs) from data generated by a linear structural equation model (SEM). First, we show that a linear SEM can be viewed as a linear transform that, in prior work, computes the data from a dense input vector of random valued root causes (as we will call them) associated with the nodes. Instead, we consider the case of (approximately) few root causes and also introduce noise in the measurement of the data. Intuitively, this means that the DAG data is produced by few data-generating events whose effect percolates through the DAG. We prove identifiability in this new setting and show that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes. For data with few root causes, with and without noise, we show superior performance compared to prior DAG learning methods.", + "neighbors": [ + 73, + 1468 + ], + "mask": "Train" + }, + { + "node_id": 161, + "label": 24, + "text": "Title: xDeepInt: a hybrid architecture for modeling the vector-wise and bit-wise feature interactions\nAbstract: Learning feature interactions is the key to success for the large-scale CTR prediction and recommendation. In practice, handcrafted feature engineering usually requires exhaustive searching. 
In order to reduce the high cost of human efforts in feature engineering, researchers propose several deep neural networks (DNN)-based approaches to learn the feature interactions in an end-to-end fashion. However, existing methods either do not learn both vector-wise interactions and bit-wise interactions simultaneously, or fail to combine them in a controllable manner. In this paper, we propose a new model, xDeepInt, based on a novel network architecture called polynomial interaction network (PIN) which learns higher-order vector-wise interactions recursively. By integrating subspace-crossing mechanism, we enable xDeepInt to balance the mixture of vector-wise and bit-wise feature interactions at a bounded order. Based on the network architecture, we customize a combined optimization strategy to conduct feature selection and interaction selection. We implement the proposed model and evaluate the model performance on three real-world datasets. Our experiment results demonstrate the efficacy and effectiveness of xDeepInt over state-of-the-art models. We open-source the TensorFlow implementation of xDeepInt: https://github.com/yanyachen/xDeepInt.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 162, + "label": 24, + "text": "Title: Provably Efficient Model-Free Algorithms for Non-stationary CMDPs\nAbstract: We study model-free reinforcement learning (RL) algorithms in episodic non-stationary constrained Markov Decision Processes (CMDPs), in which an agent aims to maximize the expected cumulative reward subject to a cumulative constraint on the expected utility (cost). In the non-stationary environment, reward, utility functions, and transition kernels can vary arbitrarily over time as long as the cumulative variations do not exceed certain variation budgets. We propose the first model-free, simulator-free RL algorithms with sublinear regret and zero constraint violation for non-stationary CMDPs in both tabular and linear function approximation settings with provable performance guarantees. Our results on regret bound and constraint violation for the tabular case match the corresponding best results for stationary CMDPs when the total budget is known. Additionally, we present a general framework for addressing the well-known challenges associated with analyzing non-stationary CMDPs, without requiring prior knowledge of the variation budget. We apply the approach for both tabular and linear approximation settings.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 163, + "label": 24, + "text": "Title: SOBER: Highly Parallel Bayesian Optimization and Bayesian Quadrature over Discrete and Mixed Spaces\nAbstract: Batch Bayesian optimisation and Bayesian quadrature have been shown to be sample-efficient methods of performing optimisation and quadrature where expensive-to-evaluate objective functions can be queried in parallel. However, current methods do not scale to large batch sizes -- a frequent desideratum in practice (e.g. drug discovery or simulation-based inference). We present a novel algorithm, SOBER, which permits scalable and diversified batch global optimisation and quadrature with arbitrary acquisition functions and kernels over discrete and mixed spaces. The key to our approach is to reformulate batch selection for global optimisation as a quadrature problem, which relaxes acquisition function maximisation (non-convex) to kernel recombination (convex). 
Bridging global optimisation and quadrature can efficiently solve both tasks by balancing the merits of exploitative Bayesian optimisation and explorative Bayesian quadrature. We show that SOBER outperforms 11 competitive baselines on 12 synthetic and diverse real-world tasks.", + "neighbors": [ + 850 + ], + "mask": "Train" + }, + { + "node_id": 164, + "label": 24, + "text": "Title: HyperFed: Hyperbolic Prototypes Exploration with Consistent Aggregation for Non-IID Data in Federated Learning\nAbstract: Federated learning (FL) collaboratively models user data in a decentralized way. However, in the real world, non-identical and independent data distributions (non-IID) among clients hinder the performance of FL due to three issues, i.e., (1) the class statistics shifting, (2) the insufficient hierarchical information utilization, and (3) the inconsistency in aggregating clients. To address the above issues, we propose HyperFed which contains three main modules, i.e., hyperbolic prototype Tammes initialization (HPTI), hyperbolic prototype learning (HPL), and consistent aggregation (CA). Firstly, HPTI in the server constructs uniformly distributed and fixed class prototypes, and shares them with clients to match class statistics, further guiding consistent feature representation for local clients. Secondly, HPL in each client captures the hierarchical information in local data with the supervision of shared class prototypes in the hyperbolic model space. Additionally, CA in the server mitigates the impact of the inconsistent deviations from clients to server. Extensive studies of four datasets prove that HyperFed is effective in enhancing the performance of FL under the non-IID setting.", + "neighbors": [ + 487 + ], + "mask": "Train" + }, + { + "node_id": 165, + "label": 23, + "text": "Title: SPSysML: A meta-model for quantitative evaluation of Simulation-Physical Systems\nAbstract: Robotic systems are complex cyber-physical systems (CPS) commonly equipped with multiple sensors and effectors. Recent simulation methods enable the Digital Twin (DT) concept realisation. However, DT employment in robotic system development, e.g. in-development testing, is unclear. During the system development, its parts evolve from simulated mockups to physical parts which run software deployed on the actual hardware. Therefore, a design tool and a flexible development procedure ensuring the integrity of the simulated and physical parts are required. We aim to maximise the integration between a CPS's simulated and physical parts in various setups. The better integration, the better simulation-based testing coverage of the physical part (hardware and software). We propose a Domain Specification Language (DSL) based on Systems Modeling Language (SysML) that we refer to as SPSysML (Simulation-Physical System Modeling Language). SPSysML defines the taxonomy of a Simulation-Physical System (SPSys), being a CPS consisting of at least a physical or simulated part. In particular, the simulated ones can be DTs. We propose a SPSys Development Procedure (SPSysDP) that enables the maximisation of the simulation-physical integrity of SPSys by evaluating the proposed factors. SPSysDP is used to develop a complex robotic system for the INCARE project. In subsequent iterations of SPSysDP, the simulation-physical integrity of the system is maximised. As a result, the system model consists of fewer components, and a greater fraction of the system components are shared between various system setups. 
We implement and test the system with popular frameworks, Robot Operating System (ROS) and Gazebo simulator. SPSysML with SPSysDP enables the design of SPSys (including DT and CPS), multi-setup system development featuring maximised integrity between simulation and physical parts in its setups.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 166, + "label": 24, + "text": "Title: WLD-Reg: A Data-dependent Within-layer Diversity Regularizer\nAbstract: Neural networks are composed of multiple layers arranged in a hierarchical structure jointly trained with a gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer's overall diversity. We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks. The code is publically available at https://github.com/firasl/AAAI-23-WLD-Reg.", + "neighbors": [ + 2097 + ], + "mask": "Train" + }, + { + "node_id": 167, + "label": 24, + "text": "Title: SAD: Semi-Supervised Anomaly Detection on Dynamic Graphs\nAbstract: Anomaly detection aims to distinguish abnormal instances that deviate significantly from the majority of benign ones. As instances that appear in the real world are naturally connected and can be represented with graphs, graph neural networks become increasingly popular in tackling the anomaly detection problem. Despite the promising results, research on anomaly detection has almost exclusively focused on static graphs while the mining of anomalous patterns from dynamic graphs is rarely studied but has significant application value. In addition, anomaly detection is typically tackled from semi-supervised perspectives due to the lack of sufficient labeled data. However, most proposed methods are limited to merely exploiting labeled data, leaving a large number of unlabeled samples unexplored. In this work, we present semi-supervised anomaly detection (SAD), an end-to-end framework for anomaly detection on dynamic graphs. By a combination of a time-equipped memory bank and a pseudo-label contrastive learning module, SAD is able to fully exploit the potential of large unlabeled samples and uncover underlying anomalies on evolving graph streams. Extensive experiments on four real-world datasets demonstrate that SAD efficiently discovers anomalies from dynamic graphs and outperforms existing advanced methods even when provided with only little labeled data.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 168, + "label": 30, + "text": "Title: Efficient Domain Adaptation of Sentence Embeddings using Adapters\nAbstract: Sentence embeddings enable us to capture the semantic similarity of short texts. Most sentence embedding models are trained for general semantic textual similarity tasks. Therefore, to use sentence embeddings in a particular domain, the model must be adapted to it in order to achieve good results. Usually, this is done by fine-tuning the entire sentence embedding model for the domain of interest. 
While this approach yields state-of-the-art results, all of the model's weights are updated during fine-tuning, making this method resource-intensive. Therefore, instead of fine-tuning entire sentence embedding models for each target domain individually, we propose to train lightweight adapters. These domain-specific adapters do not require fine-tuning all underlying sentence embedding model parameters. Instead, we only train a small number of additional parameters while keeping the weights of the underlying sentence embedding model fixed. Training domain-specific adapters allows always using the same base model and only exchanging the domain-specific adapters to adapt sentence embeddings to a specific domain. We show that using adapters for parameter-efficient domain adaptation of sentence embeddings yields competitive performance within 1% of a domain-adapted, entirely fine-tuned sentence embedding model while only training approximately 3.6% of the parameters.", + "neighbors": [ + 30 + ], + "mask": "Train" + }, + { + "node_id": 169, + "label": 37, + "text": "Title: Using Learned Indexes to Improve Time Series Indexing Performance on Embedded Sensor Devices\nAbstract: Efficiently querying data on embedded sensor and IoT devices is challenging given the very limited memory and CPU resources. With the increasing volumes of collected data, it is critical to process, filter, and manipulate data on the edge devices where it is collected to improve efficiency and reduce network transmissions. Existing embedded index structures do not adapt to the data distribution and characteristics. This paper demonstrates how applying learned indexes that develop space efficient summaries of the data can dramatically improve the query performance and predictability. Learned indexes based on linear approximations can reduce the query I/O by 50 to 90% and improve query throughput by a factor of 2 to 5, while only requiring a few kilobytes of RAM. Experimental results on a variety of time series data sets demonstrate the advantages of learned indexes that considerably improve over the state-of-the-art index algorithms.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 170, + "label": 16, + "text": "Title: Vision + Language Applications: A Survey\nAbstract: Text-to-image generation has attracted significant interest from researchers and practitioners in recent years due to its widespread and diverse applications across various industries. Despite the progress made in the domain of vision and language research, the existing literature remains relatively limited, particularly with regard to advancements and applications in this field. This paper explores a relevant research track within multimodal applications, including text, vision, audio, and others. In addition to the studies discussed in this paper, we are also committed to continually updating the latest relevant papers, datasets, application projects and corresponding information at https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image.", + "neighbors": [ + 319, + 1450, + 1481, + 1601, + 1710, + 1758, + 1768, + 1863, + 1902, + 2161, + 2190, + 2242 + ], + "mask": "Validation" + }, + { + "node_id": 171, + "label": 4, + "text": "Title: MFDPG: Multi-Factor Authenticated Password Management With Zero Stored Secrets\nAbstract: While password managers are a vital tool for internet security, they can also create a massive central point of failure, as evidenced by several major recent data breaches. 
For over 20 years, deterministic password generators (DPGs) have been proposed, and largely rejected, as a viable alternative to password management tools. In this paper, we survey 45 existing DPGs to assess the main security, privacy, and usability issues hindering their adoption. We then present a new multi-factor deterministic password generator (MFDPG) design that aims to address these shortcomings. The result not only achieves strong, practical password management with zero credential storage, but also effectively serves as a progressive client-side upgrade of weak password-only websites to strong multi-factor authentication.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 172, + "label": 36, + "text": "Title: The Computational Complexity of Single-Player Imperfect-Recall Games\nAbstract: We study single-player extensive-form games with imperfect recall, such as the Sleeping Beauty problem or the Absentminded Driver game. For such games, two natural equilibrium concepts have been proposed as alternative solution concepts to ex-ante optimality. One equilibrium concept uses generalized double halving (GDH) as a belief system and evidential decision theory (EDT), and another one uses generalized thirding (GT) as a belief system and causal decision theory (CDT). Our findings relate those three solution concepts of a game to solution concepts of a polynomial maximization problem: global optima, optimal points with respect to subsets of variables and Karush\u2013Kuhn\u2013Tucker (KKT) points. Based on these correspondences, we are able to settle various complexity-theoretic questions on the computation of such strategies. For ex-ante optimality and (EDT,GDH)-equilibria, we obtain NP-hardness and inapproximability, and for (CDT,GT)-equilibria we obtain CLS-completeness results.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 173, + "label": 16, + "text": "Title: MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action\nAbstract: We propose MM-REACT, a system paradigm that integrates ChatGPT with a pool of vision experts to achieve multimodal reasoning and action. In this paper, we define and explore a comprehensive list of advanced vision tasks that are intriguing to solve, but may exceed the capabilities of existing vision and vision-language models. To achieve such advanced visual intelligence, MM-REACT introduces a textual prompt design that can represent text descriptions, textualized spatial coordinates, and aligned file names for dense visual signals such as images and videos. MM-REACT's prompt design allows language models to accept, associate, and process multimodal information, thereby facilitating the synergetic combination of ChatGPT and various vision experts. Zero-shot experiments demonstrate MM-REACT's effectiveness in addressing the specified capabilities of interest and its wide application in different scenarios that require advanced visual understanding. Furthermore, we discuss and compare MM-REACT's system paradigm with an alternative approach that extends language models for multimodal scenarios through joint finetuning.
Code, demo, video, and visualization are available at https://multimodal-react.github.io/", + "neighbors": [ + 34, + 57, + 176, + 319, + 485, + 505, + 522, + 618, + 719, + 855, + 887, + 902, + 1026, + 1047, + 1050, + 1129, + 1148, + 1339, + 1353, + 1467, + 1574, + 1659, + 1755, + 1765, + 1863, + 1878, + 1893, + 1913, + 1990, + 2002, + 2030, + 2036, + 2064, + 2095, + 2155, + 2166, + 2216, + 2274, + 2286 + ], + "mask": "Train" + }, + { + "node_id": 174, + "label": 24, + "text": "Title: VDHLA: Variable Depth Hybrid Learning Automaton and Its Application to Defense Against the Selfish Mining Attack in Bitcoin\nAbstract: Learning Automaton (LA) is an adaptive self-organized model that improves its action-selection through interaction with an unknown environment. LA with finite action set can be classified into two main categories: fixed and variable structure. Furthermore, variable action-set learning automaton (VASLA) is one of the main subsets of variable structure learning automaton. In this paper, we propose VDHLA, a novel hybrid learning automaton model, which is a combination of fixed structure and variable action set learning automaton. In the proposed model, variable action set learning automaton can increase, decrease, or leave unchanged the depth of fixed structure learning automaton during the action switching phase. In addition, the depth of the proposed model can change in a symmetric (SVDHLA) or asymmetric (AVDHLA) manner. To the best of our knowledge, it is the first hybrid model that intelligently changes the depth of fixed structure learning automaton. Several computer simulations are conducted to study the performance of the proposed model with respect to the total number of rewards and action switching in stationary and non-stationary environments. The proposed model is compared with FSLA and VSLA. In order to determine the performance of the proposed model in a practical application, the selfish mining attack which threatens the incentive-compatibility of a proof-of-work based blockchain environment is considered. The proposed model is applied to defend against the selfish mining attack in Bitcoin and compared with the tie-breaking mechanism, which is a well-known defense. Simulation results in all environments have shown the superiority of the proposed model.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 175, + "label": 24, + "text": "Title: Fast Temporal Wavelet Graph Neural Networks\nAbstract: Spatio-temporal signals forecasting plays an important role in numerous domains, especially in neuroscience and transportation. The task is challenging due to the highly intricate spatial structure, as well as the non-linear temporal dynamics of the network. To facilitate reliable and timely forecast for the human brain and traffic networks, we propose the Fast Temporal Wavelet Graph Neural Networks (FTWGNN) that is both time- and memory-efficient for learning tasks on timeseries data with the underlying graph structure, thanks to the theories of multiresolution analysis and wavelet theory on discrete spaces. We employ Multiresolution Matrix Factorization (MMF) (Kondor et al., 2014) to factorize the highly dense graph structure and compute the corresponding sparse wavelet basis that allows us to construct fast wavelet convolution as the backbone of our novel architecture. 
Experimental results on real-world PEMS-BAY, METR-LA traffic datasets and AJILE12 ECoG dataset show that FTWGNN is competitive with the state of the art while maintaining a low computational footprint. Our PyTorch implementation is publicly available at https://github.com/HySonLab/TWGNN.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 176, + "label": 16, + "text": "Title: GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest\nAbstract: Instruction tuning large language model (LLM) on image-text pairs has achieved unprecedented vision-language multimodal abilities. However, their vision-language alignments are only built at the image level; the lack of region-level alignment limits their advancements to fine-grained multimodal understanding. In this paper, we propose instruction tuning on region-of-interest. The key design is to reformulate the bounding box as the format of spatial instruction. The interleaved sequences of visual features extracted by the spatial instruction and the language embedding are input to the LLM, and trained on the transformed region-text data in instruction tuning format. Our region-level vision-language model, termed as GPT4RoI, brings a brand new conversational and interactive experience beyond image-level understanding. (1) Controllability: Users can interact with our model by both language and spatial instructions to flexibly adjust the detail level of the question. (2) Capacities: Our model supports not only single-region spatial instruction but also multi-region. This unlocks more region-level multimodal capacities such as detailed region caption and complex region reasoning. (3) Composition: Any off-the-shelf object detector can be a spatial instruction provider so as to mine informative object attributes from our model, like color, shape, material, action, relation to other objects, etc. The code, data, and demo can be found at https://github.com/jshilong/GPT4RoI.", + "neighbors": [ + 173, + 319, + 719, + 811, + 880, + 1044, + 1047, + 1052, + 1129, + 1315, + 1344, + 1537, + 1574, + 1668, + 1765, + 1863, + 1893, + 1913, + 2030, + 2036, + 2095, + 2155 + ], + "mask": "Train" + }, + { + "node_id": 177, + "label": 20, + "text": "Title: Scenic Routes in $\\mathbb{R}^d$\nAbstract: In this work, we introduce the problem of scenic routes among points in $\\mathbb{R}^d$. The key development is the nature of the problem in terms of both defining the concept of scenic points and scenic routes and then coming up with algorithms that meet different criteria for the generated scenic routes. The scenic routes problem provides a visual trajectory for a user to comprehend the layout of high-dimensional points. The nature of this trajectory and the visual layout of the points have applications in comprehending the results of supervised and unsupervised machine learning techniques. We study the problem in 2D and 3D (with two-color points) before exploring the issues in $\\mathbb{R}^d$. The red/blue points in our examples could be in a class or not in a class. The applications could include landscape design to adhere to the scenic beauty of the artifacts on the ground.
They could also include the generation of equally separated layouts for designing composite hardware where interference could be an issue.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 178, + "label": 34, + "text": "Title: Change a Bit to save Bytes: Compression for Floating Point Time-Series Data\nAbstract: The number of IoT devices is expected to continue its dramatic growth in the coming years and, with it, a growth in the amount of data to be transmitted, processed and stored. Compression techniques that support analytics directly on the compressed data could pave the way for systems to scale efficiently to these growing demands. This paper proposes two novel methods for preprocessing a stream of floating point data to improve the compression capabilities of various IoT data compressors. In particular, these techniques are shown to be helpful with recent compressors that allow for random access and analytics while maintaining good compression. Our techniques improve compression with reductions of up to 80% when allowing for at most 1% of recovery error.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 179, + "label": 29, + "text": "Title: Performance of the Gittins Policy in the G/G/1 and G/G/k, With and Without Setup Times\nAbstract: How should we schedule jobs to minimize mean queue length? In the preemptive M/G/1 queue, we know the optimal policy is the Gittins policy, which uses any available information about jobs' remaining service times to dynamically prioritize jobs. For models more complex than the M/G/1, optimal scheduling is generally intractable. This leads us to ask: beyond the M/G/1, does Gittins still perform well? Recent results indicate that Gittins performs well in the M/G/k, meaning that its additive suboptimality gap is bounded by an expression which is negligible in heavy traffic. But allowing multiple servers is just one way to extend the M/G/1, and most other extensions remain open. Does Gittins still perform well with non-Poisson arrival processes? Or if servers require setup times when transitioning from idle to busy? In this paper, we give the first analysis of the Gittins policy that can handle any combination of (a) multiple servers, (b) non-Poisson arrivals, and (c) setup times. Our results thus cover the G/G/1 and G/G/k, with and without setup times, bounding Gittins's suboptimality gap in each case. Each of (a), (b), and (c) adds a term to our bound, but all the terms are negligible in heavy traffic, thus implying Gittins's heavy-traffic optimality in all the systems we consider. Another consequence of our results is that Gittins is optimal in the M/G/1 with setup times at all loads.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 180, + "label": 28, + "text": "Title: Capacity Bounds for Vertically-Drifted First Arrival Position Channels under a Covariance Constraint\nAbstract: In this paper, we delve into the capacity problem of the additive vertically-drifted first arrival position noise channel, which models a communication system where the position of molecules is harnessed to convey information. Drawing inspiration from the principles governing vector Gaussian interference channels, we examine this capacity problem within the context of a covariance constraint on input distributions. We offer analytical upper and lower bounds on this capacity for a three-dimensional spatial setting. This is achieved through a meticulous analysis of the characteristic function coupled with an investigation into the stability properties.
The results of this study contribute to the ongoing effort to understand the fundamental limits of molecular communication systems.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 181, + "label": 16, + "text": "Title: $\\texttt{NePhi}$: Neural Deformation Fields for Approximately Diffeomorphic Medical Image Registration\nAbstract: This work proposes $\\texttt{NePhi}$, a neural deformation model which results in approximately diffeomorphic transformations. In contrast to the predominant voxel-based approaches, $\\texttt{NePhi}$ represents deformations functionally which allows for memory-efficient training and inference. This is of particular importance for large volumetric registrations. Further, while medical image registration approaches representing transformation maps via multi-layer perceptrons have been proposed, $\\texttt{NePhi}$ facilitates both pairwise optimization-based registration $\\textit{as well as}$ learning-based registration via predicted or optimized global and local latent codes. Lastly, as deformation regularity is a highly desirable property for most medical image registration tasks, $\\texttt{NePhi}$ makes use of gradient inverse consistency regularization which empirically results in approximately diffeomorphic transformations. We show the performance of $\\texttt{NePhi}$ on two 2D synthetic datasets as well as on real 3D lung registration. Our results show that $\\texttt{NePhi}$ can achieve similar accuracies as voxel-based representations in a single-resolution registration setting while using less memory and allowing for faster instance-optimization.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 182, + "label": 16, + "text": "Title: Towards Realistic Unsupervised Fine-tuning with CLIP\nAbstract: The emergence of vision-language models (VLMs), such as CLIP, has spurred a significant research effort towards their application for downstream supervised learning tasks. Although some previous studies have explored the unsupervised fine-tuning of CLIP, they often rely on prior knowledge in the form of class names associated with ground truth labels. In this paper, we delve into a realistic unsupervised fine-tuning scenario by assuming that the unlabeled data might contain out-of-distribution samples from unknown classes. Furthermore, we emphasize the importance of simultaneously enhancing out-of-distribution detection capabilities alongside the recognition of instances associated with predefined class labels. To tackle this problem, we present a simple, efficient, and effective fine-tuning approach called Universal Entropy Optimization (UEO). UEO leverages sample-level confidence to approximately minimize the conditional entropy of confident instances and maximize the marginal entropy of less confident instances. Apart from optimizing the textual prompts, UEO also incorporates optimization of channel-wise affine transformations within the visual branch of CLIP. Through extensive experiments conducted across 15 domains and 4 different types of prior knowledge, we demonstrate that UEO surpasses baseline methods in terms of both generalization and out-of-distribution detection.", + "neighbors": [ + 2232 + ], + "mask": "Train" + }, + { + "node_id": 183, + "label": 16, + "text": "Title: SwitchGPT: Adapting Large Language Models for Non-Text Outputs\nAbstract: Large Language Models (LLMs), primarily trained on text-based datasets, exhibit exceptional proficiencies in understanding and executing complex linguistic instructions via text outputs. 
However, they falter when asked to generate non-text outputs. Concurrently, modality conversion models, such as text-to-image, despite generating high-quality images, suffer from a lack of extensive textual pretraining. As a result, these models are only capable of accommodating specific image descriptions rather than comprehending more complex instructions. To bridge this gap, we propose a novel approach, SwitchGPT, from a modality conversion perspective that evolves a text-based LLM into a multi-modal one. We specifically employ a minimal dataset to instruct LLMs to recognize the intended output modality as directed by the instructions. Consequently, the adapted LLM can effectively summon various off-the-shelf modality conversion models from the model zoos to generate non-text responses. This circumvents the necessity for complicated pretraining that typically requires immense quantities of paired multi-modal data, while simultaneously inheriting the extensive knowledge of LLMs and the ability of high-quality generative models. To evaluate and compare the adapted multi-modal LLM with its traditional counterparts, we have constructed a multi-modal instruction benchmark that solicits diverse modality outputs. The experiment results reveal that, with minimal training, LLMs can be conveniently adapted to comprehend requests for non-text responses, thus achieving higher flexibility in multi-modal scenarios. Code and data will be made available at https://github.com/xinke-wang/SwitchGPT.", + "neighbors": [ + 0, + 57, + 363, + 529, + 817, + 887, + 1047, + 1052, + 2036, + 2155, + 2235 + ], + "mask": "Validation" + }, + { + "node_id": 184, + "label": 24, + "text": "Title: Online Laplace Model Selection Revisited\nAbstract: The Laplace approximation provides a closed-form model selection objective for neural networks (NN). Online variants, which optimise NN parameters jointly with hyperparameters, like weight decay strength, have seen renewed interest in the Bayesian deep learning community. However, these methods violate Laplace's method's critical assumption that the approximation is performed around a mode of the loss, calling into question their soundness. This work re-derives online Laplace methods, showing them to target a variational bound on a mode-corrected variant of the Laplace evidence which does not make stationarity assumptions. Online Laplace and its mode-corrected counterpart share stationary points where 1. the NN parameters are a maximum a posteriori, satisfying the Laplace method's assumption, and 2. the hyperparameters maximise the Laplace evidence, motivating online methods. We demonstrate that these optima are roughly attained in practice by online algorithms using full-batch gradient descent on UCI regression datasets. The optimised hyperparameters prevent overfitting and outperform validation-based early stopping.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 185, + "label": 16, + "text": "Title: A Neuromorphic Dataset for Object Segmentation in Indoor Cluttered Environment\nAbstract: Taking advantage of an event-based camera, the issues of motion blur, low dynamic range and low time sampling of standard cameras can all be addressed. However, there is a lack of event-based datasets dedicated to the benchmarking of segmentation algorithms, especially those that provide depth information which is critical for segmentation in occluded scenes.
This paper proposes a new Event-based Segmentation Dataset (ESD), a high-quality 3D spatial and temporal dataset for object segmentation in an indoor cluttered environment. Our proposed dataset ESD comprises 145 sequences with 14,166 RGB frames that are manually annotated with instance masks. Overall, 21.88 million and 20.80 million events from two event-based cameras in a stereographic configuration are collected, respectively. To the best of our knowledge, this densely annotated and 3D spatial-temporal event-based segmentation benchmark of tabletop objects is the first of its kind. By releasing ESD, we expect to provide the community with a challenging, high-quality segmentation benchmark.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 186, + "label": 27, + "text": "Title: Resilient and Distributed Multi-Robot Visual SLAM: Datasets, Experiments, and Lessons Learned\nAbstract: This paper revisits Kimera-Multi, a distributed multi-robot Simultaneous Localization and Mapping (SLAM) system, towards the goal of deployment in the real world. In particular, this paper has three main contributions. First, we describe improvements to Kimera-Multi to make it resilient to large-scale real-world deployments, with particular emphasis on handling intermittent and unreliable communication. Second, we collect and release challenging multi-robot benchmarking datasets obtained during live experiments conducted on the MIT campus, with accurate reference trajectories and maps for evaluation. The datasets include up to 8 robots traversing long distances (up to 8 km) and feature many challenging elements such as severe visual ambiguities (e.g., in underground tunnels and hallways), mixed indoor and outdoor trajectories with different lighting conditions, and dynamic entities (e.g., pedestrians and cars). Lastly, we evaluate the resilience of Kimera-Multi under different communication scenarios, and provide a quantitative comparison with a centralized baseline system. Based on the results from both live experiments and subsequent analysis, we discuss the strengths and weaknesses of Kimera-Multi, and suggest future directions for both algorithm and system design. We release the source code of Kimera-Multi and all datasets to facilitate further research towards the reliable real-world deployment of multi-robot SLAM systems.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 187, + "label": 17, + "text": "Title: Experiencing avatar direction in low cost theatrical mixed reality setup\nAbstract: We introduce the setup and programming framework of the AvatarStaging theatrical mixed reality experiment. We focus on a configuration addressing movement issues between physical and 3D digital spaces from performers and directors' points of view. We propose 3 practical exercises.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 188, + "label": 16, + "text": "Title: A Control-Centric Benchmark for Video Prediction\nAbstract: Video is a promising source of knowledge for embodied agents to learn models of the world's dynamics. Large deep networks have become increasingly effective at modeling complex video data in a self-supervised manner, as evaluated by metrics based on human perceptual similarity or pixel-wise comparison. However, it remains unclear whether current metrics are accurate indicators of performance on downstream tasks. We find empirically that for planning robotic manipulation, existing metrics can be unreliable at predicting execution success.
To address this, we propose a benchmark for action-conditioned video prediction in the form of a control benchmark that evaluates a given model for simulated robotic manipulation through sampling-based planning. Our benchmark, Video Prediction for Visual Planning ($VP^2$), includes simulated environments with 11 task categories and 310 task instance definitions, a full planning implementation, and training datasets containing scripted interaction trajectories for each task category. A central design goal of our benchmark is to expose a simple interface -- a single forward prediction call -- so it is straightforward to evaluate almost any action-conditioned video prediction model. We then leverage our benchmark to study the effects of scaling model size, quantity of training data, and model ensembling by analyzing five highly-performant video prediction models, finding that while scale can improve perceptual quality when modeling visually diverse settings, other attributes such as uncertainty awareness can also aid planning performance.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 189, + "label": 24, + "text": "Title: Learning Unbiased News Article Representations: A Knowledge-Infused Approach\nAbstract: Quantification of the political leaning of online news articles can aid in understanding the dynamics of political ideology in social groups and measures to mitigating them. However, predicting the accurate political leaning of a news article with machine learning models is a challenging task. This is due to (i) the political ideology of a news article is defined by several factors, and (ii) the innate nature of existing learning models to be biased with the political bias of the news publisher during the model training. There is only a limited number of methods to study the political leaning of news articles which also do not consider the algorithmic political bias which lowers the generalization of machine learning models to predict the political leaning of news articles published by any new news publishers. In this work, we propose a knowledge-infused deep learning model that utilizes relatively reliable external data resources to learn unbiased representations of news articles using their global and local contexts. We evaluate the proposed model by setting the data in such a way that news domains or news publishers in the test set are completely unseen during the training phase. With this setup we show that the proposed model mitigates algorithmic political bias and outperforms baseline methods to predict the political leaning of news articles with up to 73% accuracy.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 190, + "label": 28, + "text": "Title: Fast and Low-Memory Compressive Sensing Algorithms for Low Tucker-Rank Tensor Approximation from Streamed Measurements\nAbstract: In this paper we consider the problem of recovering a low-rank Tucker approximation to a massive tensor based solely on structured random compressive measurements. Crucially, the proposed random measurement ensembles are both designed to be compactly represented (i.e., low-memory), and can also be efficiently computed in one-pass over the tensor. Thus, the proposed compressive sensing approach may be used to produce a low-rank factorization of a huge tensor that is too large to store in memory with a total memory footprint on the order of the much smaller desired low-rank factorization. 
In addition, the compressive sensing recovery algorithm itself (which takes the compressive measurements as input, and then outputs a low-rank factorization) also runs in a time which principally depends only on the size of the sought factorization, making its runtime sub-linear in the size of the large tensor one is approximating. Finally, unlike prior works related to (streaming) algorithms for low-rank tensor approximation from such compressive measurements, we present a unified analysis of both Kronecker and Khatri-Rao structured measurement ensembles, culminating in error guarantees comparing the error of our recovery algorithm's approximation of the input tensor to the best possible low-rank Tucker approximation error achievable for the tensor by any possible algorithm. We further include an empirical study of the proposed approach that verifies our theoretical findings and explores various trade-offs of parameters of interest.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 191, + "label": 30, + "text": "Title: Topic-Selective Graph Network for Topic-Focused Summarization\nAbstract: Due to the success of the pre-trained language model (PLM), existing PLM-based summarization models show their powerful generative capability. However, these models are trained on general-purpose summarization datasets, leading to generated summaries failing to satisfy the needs of different readers. To generate summaries with topics, many efforts have been made on topic-focused summarization. However, these works generate a summary only guided by a prompt comprising topic words. Despite their success, these methods still ignore the disturbance of sentences with non-relevant topics and only conduct cross-interaction between tokens via the attention module. To address this issue, we propose a topic-arc recognition objective and a topic-selective graph network. First, the topic-arc recognition objective is used in model training, which endows the model with the capability to discriminate topics. Moreover, the topic-selective graph network can conduct topic-guided cross-interaction on sentences based on the results of topic-arc recognition. In the experiments, we conduct extensive evaluations on the NEWTS and COVIDET datasets. Results show that our methods achieve state-of-the-art performance.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 192, + "label": 16, + "text": "Title: Accurate Eye Tracking from Dense 3D Surface Reconstructions using Single-Shot Deflectometry\nAbstract: Eye-tracking plays a crucial role in the development of virtual reality devices, neuroscience research, and psychology. Despite its significance in numerous applications, achieving an accurate, robust, and fast eye-tracking solution remains a considerable challenge for current state-of-the-art methods. While existing reflection-based techniques (e.g., \"glint tracking\") are considered the most accurate, their performance is limited by their reliance on sparse 3D surface data acquired solely from the cornea surface. In this paper, we rethink how specular reflections can be used for eye tracking: We propose a novel method for accurate and fast evaluation of the gaze direction that exploits teachings from single-shot phase-measuring-deflectometry (PMD). In contrast to state-of-the-art reflection-based methods, our method acquires dense 3D surface information of both cornea and sclera within only one single camera frame (single-shot).
Improvements of factors $>3300 \\times$ in the number of acquired reflection surface points (\"glints\") are easily achievable. We show the feasibility of our approach with experimentally evaluated gaze errors of only $\\leq 0.25^\\circ$, demonstrating a significant improvement over the current state-of-the-art.", + "neighbors": [ + 501 + ], + "mask": "Test" + }, + { + "node_id": 193, + "label": 10, + "text": "Title: Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning\nAbstract: Integer Linear Programs (ILPs) are powerful tools for modeling and solving a large number of combinatorial optimization problems. Recently, it has been shown that Large Neighborhood Search (LNS), as a heuristic algorithm, can find high-quality solutions to ILPs faster than Branch and Bound. However, how to find the right heuristics to maximize the performance of LNS remains an open problem. In this paper, we propose a novel approach, CL-LNS, that delivers state-of-the-art anytime performance on several ILP benchmarks measured by metrics including the primal gap, the primal integral, survival rates and the best performing rate. Specifically, CL-LNS collects positive and negative solution samples from an expert heuristic that is slow to compute and learns a new one with a contrastive loss. We use graph attention networks and a richer set of features to further improve its performance.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 194, + "label": 16, + "text": "Title: A Robust Likelihood Model for Novelty Detection\nAbstract: Current approaches to novelty or anomaly detection are based on deep neural networks. Despite their effectiveness, neural networks are also vulnerable to imperceptible deformations of the input data. This is a serious issue in critical applications, or when data alterations are generated by an adversarial attack. While this is a known problem that has been studied in recent years for the case of supervised learning, the case of novelty detection has received very limited attention. Indeed, in this latter setting the learning is typically unsupervised because outlier data is not available during training, and new approaches for this case need to be investigated. We propose a new prior that aims at learning a robust likelihood for the novelty test, as a defense against attacks. We also integrate the same prior with a state-of-the-art novelty detection approach. Because of the geometric properties of that approach, the resulting robust training is computationally very efficient. An initial evaluation of the method indicates that it is effective at improving performance with respect to the standard models in the absence and presence of attacks.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 195, + "label": 30, + "text": "Title: Leveraging Large Text Corpora for End-to-End Speech Summarization\nAbstract: End-to-end speech summarization (E2E SSum) is a technique to directly generate summary sentences from speech. Compared with the cascade approach, which combines automatic speech recognition (ASR) and text summarization models, the E2E approach is more promising because it mitigates ASR errors, incorporates nonverbal information, and simplifies the overall system. However, since collecting a large amount of paired data (i.e., speech and summary) is difficult, the training data is usually insufficient to train a robust E2E SSum system.
In this paper, we present two novel methods that leverage a large amount of external text summarization data for E2E SSum training. The first technique is to utilize a text-to-speech (TTS) system to generate synthesized speech, which is used for E2E SSum training with the text summary. The second is a TTS-free method that directly inputs phoneme sequence instead of synthesized speech to the E2E SSum model. Experiments show that our proposed TTS- and phoneme-based methods improve several metrics on the How2 dataset. In particular, our best system outperforms a previous state-of-the-art one by a large margin (i.e., METEOR score improvements of more than 6 points). To the best of our knowledge, this is the first work to use external language resources for E2E SSum. Moreover, we report a detailed analysis of the How2 dataset to confirm the validity of our proposed E2E SSum system.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 196, + "label": 16, + "text": "Title: 3D-Aware Object Localization using Gaussian Implicit Occupancy Function\nAbstract: To automatically localize a target object in an image is crucial for many computer vision applications. To represent the 2D object, ellipse labels have recently been identified as a promising alternative to axis-aligned bounding boxes. This paper further considers 3D-aware ellipse labels, \\textit{i.e.}, ellipses which are projections of a 3D ellipsoidal approximation of the object, for 2D target localization. Indeed, projected ellipses carry more geometric information about the object geometry and pose (3D awareness) than traditional 3D-agnostic bounding box labels. Moreover, such a generic 3D ellipsoidal model allows for approximating known to coarsely known targets. We then propose to have a new look at ellipse regression and replace the discontinuous geometric ellipse parameters with the parameters of an implicit Gaussian distribution encoding object occupancy in the image. The models are trained to regress the values of this bivariate Gaussian distribution over the image pixels using a statistical loss function. We introduce a novel non-trainable differentiable layer, E-DSNT, to extract the distribution parameters. Also, we describe how to readily generate consistent 3D-aware Gaussian occupancy parameters using only coarse dimensions of the target and relative pose labels. We extend three existing spacecraft pose estimation datasets with 3D-aware Gaussian occupancy labels to validate our hypothesis. Labels and source code are publicly accessible here: https://cvi2.uni.lu/3d-aware-obj-loc/.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 197, + "label": 6, + "text": "Title: OldVisOnline: Curating a Dataset of Historical Visualizations\nAbstract: With the increasing adoption of digitization, more and more historical visualizations created hundreds of years ago are accessible in digital libraries online. It provides a unique opportunity for visualization and history research. Meanwhile, there is no large-scale digital collection dedicated to historical visualizations. The visualizations are scattered in various collections, which hinders retrieval. In this study, we curate the first large-scale dataset dedicated to historical visualizations. Our dataset comprises 13K historical visualization images with corresponding processed metadata from seven digital libraries. In curating the dataset, we propose a workflow to scrape and process heterogeneous metadata. 
We develop a semi-automatic labeling approach to distinguish visualizations from other artifacts. Our dataset can be accessed with OldVisOnline, a system we have built to browse and label historical visualizations. We discuss our vision of usage scenarios and research opportunities with our dataset, such as textual criticism for historical visualizations. Drawing upon our experience, we summarize recommendations for future efforts to improve our dataset.", + "neighbors": [ + 2196 + ], + "mask": "Validation" + }, + { + "node_id": 198, + "label": 6, + "text": "Title: Understanding Shared Control for Assistive Robotic Arms\nAbstract: Living a self-determined life independent of human caregivers or fully autonomous robots is a crucial factor for human dignity and the preservation of self-worth for people with motor impairments. Assistive robotic solutions - particularly robotic arms - are frequently deployed in domestic care, empowering people with motor impairments in performing ADLs independently. However, while assistive robotic arms can help them perform ADLs, currently available controls are highly complex and time-consuming due to the need to control multiple DoFs at once and necessary mode-switches. This work provides an overview of shared control approaches for assistive robotic arms, which aim to improve their ease of use for people with motor impairments. We identify three main takeaways for future research: Less is More, Pick-and-Place Matters, and Communicating Intent.", + "neighbors": [ + 1785 + ], + "mask": "Train" + }, + { + "node_id": 199, + "label": 24, + "text": "Title: Tight Memory-Regret Lower Bounds for Streaming Bandits\nAbstract: In this paper, we investigate the streaming bandits problem, wherein the learner aims to minimize regret by dealing with online arriving arms and sublinear arm memory. We establish the tight worst-case regret lower bound of $\\Omega \\left( (TB)^{\\alpha} K^{1-\\alpha}\\right), \\alpha = 2^{B} / (2^{B+1}-1)$ for any algorithm with a time horizon $T$, number of arms $K$, and number of passes $B$. The result reveals a separation between the stochastic bandits problem in the classical centralized setting and the streaming setting with bounded arm memory. Notably, in comparison to the well-known $\\Omega(\\sqrt{KT})$ lower bound, an additional double logarithmic factor is unavoidable for any streaming bandits algorithm with sublinear memory permitted. Furthermore, we establish the first instance-dependent lower bound of $\\Omega \\left(T^{1/(B+1)} \\sum_{\\Delta_x>0} \\frac{\\mu^*}{\\Delta_x}\\right)$ for streaming bandits. These lower bounds are derived through a unique reduction from the regret-minimization setting to the sample complexity analysis for a sequence of $\\epsilon$-optimal arms identification tasks, which maybe of independent interest. To complement the lower bound, we also provide a multi-pass algorithm that achieves a regret upper bound of $\\tilde{O} \\left( (TB)^{\\alpha} K^{1 - \\alpha}\\right)$ using constant arm memory.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 200, + "label": 2, + "text": "Title: Dynamic Logic of Communicating Hybrid Programs\nAbstract: This paper presents a dynamic logic $d\\mathcal{L}_\\text{CHP}$ for compositional deductive verification of communicating hybrid programs (CHPs). CHPs go beyond the traditional mixed discrete and continuous dynamics of hybrid systems by adding CSP-style operators for communication and parallelism. 
A compositional proof calculus is presented that modularly verifies CHPs including their parallel compositions from proofs of their subprograms by assumption-commitment reasoning in dynamic logic. Unlike Hoare-style assumption-commitments, $d\\mathcal{L}_\\text{CHP}$ supports intuitive symbolic execution via explicit recorder variables for communication primitives. Since $d\\mathcal{L}_\\text{CHP}$ is a conservative extension of differential dynamic logic $d\\mathcal{L}$, it can be used soundly along with the $d\\mathcal{L}$ proof calculus and $d\\mathcal{L}$'s complete axiomatization for differential equation invariants.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 201, + "label": 17, + "text": "Title: Deep and Fast Approximate Order Independent Transparency\nAbstract: We present a machine learning approach for efficiently computing order independent transparency (OIT). Our method is fast, requires a small constant amount of memory (depending only on the screen resolution and not on the number of triangles or transparent layers), is more accurate than previous approximate methods, works for every scene without setup, and is portable to all platforms, even those with only commodity GPUs. Our method requires a rendering pass to extract all features that are subsequently used to predict the overall OIT pixel color with a pre-trained neural network. We provide a comparative experimental evaluation and shader source code of all methods for reproduction of the experiments.", + "neighbors": [ + 317 + ], + "mask": "Train" + }, + { + "node_id": 202, + "label": 16, + "text": "Title: Thread Counting in Plain Weave for Old Paintings Using Semi-Supervised Regression Deep Learning Models\nAbstract: In this work, the authors develop regression approaches based on deep learning to perform thread density estimation for plain weave canvas analysis. Previous approaches were based on Fourier analysis, which is quite robust for some scenarios but fails in some others; on machine learning tools, which involve pre-labeling of the painting at hand; or on the segmentation of thread crossing points, which provides good estimations in all scenarios with no need of pre-labeling. The segmentation approach is time-consuming as the estimation of the densities is performed after locating the crossing points. In this novel proposal, we avoid this step by computing the density of threads directly from the image with a regression deep learning model. We also incorporate some improvements in the initial preprocessing of the input image with an impact on the final error. Several models are proposed and analyzed to retain the best one. Furthermore, we further reduce the density estimation error by introducing a semi-supervised approach. The performance of our novel algorithm is analyzed with works by Ribera, Vel\\'azquez, and Poussin, where we compare our results to the ones of previous approaches. Finally, the method is put into practice to support the change of authorship of a masterpiece at the Museo del Prado.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 203, + "label": 24, + "text": "Title: Supervision Complexity and its Role in Knowledge Distillation\nAbstract: Despite the popularity and efficacy of knowledge distillation, there is limited understanding of why it helps.
In order to study the generalization behavior of a distilled student, we propose a new theoretical framework that leverages supervision complexity: a measure of alignment between teacher-provided supervision and the student's neural tangent kernel. The framework highlights a delicate interplay among the teacher's accuracy, the student's margin with respect to the teacher predictions, and the complexity of the teacher predictions. Specifically, it provides a rigorous justification for the utility of various techniques that are prevalent in the context of distillation, such as early stopping and temperature scaling. Our analysis further suggests the use of online distillation, where a student receives increasingly more complex supervision from teachers in different stages of their training. We demonstrate the efficacy of online distillation and validate the theoretical findings on a range of image classification benchmarks and model architectures.",
    "neighbors": [
      120
    ],
    "mask": "Train"
  },
  {
    "node_id": 204,
    "label": 3,
    "text": "Title: The LAVA Model: Learning Analytics Meets Visual Analytics\nAbstract: nan",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 205,
    "label": 20,
    "text": "Title: Online and Dynamic Algorithms for Geometric Set Cover and Hitting Set\nAbstract: Set cover and hitting set are fundamental problems in combinatorial optimization which are well-studied in the offline, online, and dynamic settings. We study the geometric versions of these problems and present new online and dynamic algorithms for them. In the online version of set cover (resp. hitting set), $m$ sets (resp.~$n$ points) are given, and $n$ points (resp.~$m$ sets) arrive online, one-by-one. In the dynamic versions, points (resp. sets) can arrive as well as depart. Our goal is to maintain a set cover (resp. hitting set), minimizing the size of the computed solution. For online set cover for (axis-parallel) squares of arbitrary sizes, we present a tight $O(\\log n)$-competitive algorithm. In the same setting for hitting set, we provide a tight $O(\\log N)$-competitive algorithm, assuming that all points have integral coordinates in $[0,N)^{2}$. No online algorithm had been known for either of these settings, not even for unit squares (apart from the known online algorithms for arbitrary set systems). For both dynamic set cover and hitting set with $d$-dimensional hyperrectangles, we obtain $(\\log m)^{O(d)}$-approximation algorithms with $(\\log m)^{O(d)}$ worst-case update time. This partially answers an open question posed by Chan et al. [SODA'22]. Previously, no dynamic algorithms with polylogarithmic update time were known even in the setting of squares (for either of these problems). Our main technical contributions are an \\emph{extended quad-tree} approach and a \\emph{frequency reduction} technique that reduces geometric set cover instances to instances of general set cover with bounded frequency.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 206,
    "label": 24,
    "text": "Title: A CNN-LSTM Architecture for Marine Vessel Track Association Using Automatic Identification System (AIS) Data\nAbstract: In marine surveillance, distinguishing between normal and anomalous vessel movement patterns is critical for identifying potential threats in a timely manner. Once detected, it is important to monitor and track these vessels until a necessary intervention occurs.
To achieve this, track association algorithms are used, which take sequential observations comprising the geographical and motion parameters of the vessels and associate them with respective vessels. The spatial and temporal variations inherent in these sequential observations make the association task challenging for traditional multi-object tracking algorithms. Additionally, the presence of overlapping tracks and missing data can further complicate the trajectory tracking process. To address these challenges, in this study, we approach this tracking task as a multivariate time series problem and introduce a 1D CNN-LSTM architecture-based framework for track association. This special neural network architecture can capture the spatial patterns as well as the long-term temporal relations that exist among the sequential observations. During the training process, it learns and builds the trajectory for each of these underlying vessels. Once trained, the proposed framework takes the marine vessel\u2019s location and motion data collected through the automatic identification system (AIS) as input and returns the most likely vessel track as output in real-time. To evaluate the performance of our approach, we utilize an AIS dataset containing observations from 327 vessels traveling in a specific geographic region. We measure the performance of our proposed framework using standard performance metrics such as accuracy, precision, recall, and F1 score. When compared with other competitive neural network architectures, our approach demonstrates superior tracking performance.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 207,
    "label": 16,
    "text": "Title: Vision-Based Human Pose Estimation via Deep Learning: A Survey\nAbstract: Human pose estimation (HPE) has attracted a significant amount of attention from the computer vision community in the past decades. Moreover, HPE has been applied to various domains, such as human\u2013computer interaction, sports analysis, and human tracking via images and videos. Recently, deep learning-based approaches have shown state-of-the-art performance in HPE-based applications. Although deep learning-based approaches have achieved remarkable performance in HPE, a comprehensive review of deep learning-based HPE methods remains lacking in the literature. In this article, we provide an up-to-date and in-depth overview of the deep learning approaches in vision-based HPE. We summarize these methods of 2-D and 3-D HPE, and their applications, discuss the challenges and the research trends through bibliometrics, and provide insightful recommendations for future research. This article provides a meaningful overview as introductory material for beginners to deep learning-based HPE, as well as supplementary material for advanced researchers.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 208,
    "label": 6,
    "text": "Title: Enough With \u201cHuman-AI Collaboration\u201d\nAbstract: Describing our interaction with Artificial Intelligence (AI) systems as \u2018collaboration\u2019 is well-intentioned, but flawed. Not only is it misleading, but it also takes away the credit of AI \u2018labour\u2019 from the humans behind it, and erases and obscures an often exploitative arrangement between AI producers and consumers. The AI \u2018collaboration\u2019 metaphor is merely the latest episode in a long history of labour appropriation and credit reassignment that disenfranchises labourers in the Global South.
I propose that viewing AI as a tool or an instrument, rather than a collaborator, is more accurate, and ultimately fairer.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 209,
    "label": 31,
    "text": "Title: AutoMLP: Automated MLP for Sequential Recommendations\nAbstract: Sequential recommender systems aim to predict users\u2019 next item of interest given their historical interactions. However, a long-standing issue is how to distinguish between users\u2019 long/short-term interests, which may be heterogeneous and contribute differently to the next recommendation. Existing approaches usually set a pre-defined short-term interest length by exhaustive search or empirical experience, which is either highly inefficient or yields subpar results. The recent advanced transformer-based models can achieve state-of-the-art performances despite the aforementioned issue, but they have a quadratic computational complexity in the length of the input sequence. To this end, this paper proposes a novel sequential recommender system, AutoMLP, aiming to better model users\u2019 long/short-term interests from their historical interactions. In addition, we design an automated and adaptive search algorithm for a preferable short-term interest length via end-to-end optimization. Through extensive experiments, we show that AutoMLP has competitive performance against state-of-the-art methods, while maintaining linear computational complexity.",
    "neighbors": [
      1830
    ],
    "mask": "Train"
  },
  {
    "node_id": 210,
    "label": 13,
    "text": "Title: Quality Indicators for Preference-based Evolutionary Multi-objective Optimization Using a Reference Point: A Review and Analysis\nAbstract: Some quality indicators have been proposed for benchmarking preference-based evolutionary multi-objective optimization algorithms using a reference point. Although a systematic review and analysis of the quality indicators are helpful for both benchmarking and practical decision-making, neither has been conducted. In this context, first, this paper reviews existing regions of interest and quality indicators for preference-based evolutionary multi-objective optimization using the reference point. We point out that each quality indicator was designed for a different region of interest. Then, this paper investigates the properties of the quality indicators. We demonstrate that an achievement scalarizing function value is not always consistent with the distance from a solution to the reference point in the objective space. We observe that the regions of interest can be significantly different depending on the position of the reference point and the shape of the Pareto front. We identify undesirable properties of some quality indicators. We also show that the ranking of preference-based evolutionary multi-objective optimization algorithms depends on the choice of quality indicators.",
    "neighbors": [
      1164
    ],
    "mask": "Train"
  },
  {
    "node_id": 211,
    "label": 16,
    "text": "Title: Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM\nAbstract: We present Co-SLAM, a neural RGB-D SLAM system based on a hybrid representation, that performs robust camera tracking and high-fidelity surface reconstruction in real time. Co-SLAM represents the scene as a multi-resolution hash-grid to exploit its high convergence speed and ability to represent high-frequency local features. In addition, Co-SLAM incorporates one-blob encoding to encourage surface coherence and completion in unobserved areas.
This joint parametric-coordinate encoding enables real-time and robust performance by bringing the best of both worlds: fast convergence and surface hole filling. Moreover, our ray sampling strategy allows Co-SLAM to perform global bundle adjustment over all keyframes instead of requiring keyframe selection to maintain a small number of active keyframes as competing neural SLAM approaches do. Experimental results show that Co-SLAM runs at 10-17Hz and achieves state-of-the-art scene reconstruction results, and competitive tracking performance in various datasets and benchmarks (ScanNet, TUM, Replica, Synthetic RGBD). Project page: https://hengyiwang.github.io/projects/CoSLAM",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 212,
    "label": 27,
    "text": "Title: Communications-Aware Robotics: Challenges and Opportunities\nAbstract: The use of Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs) has seen significant growth in the research community, industry, and society. Many of these agents are equipped with communication systems that are essential for completing certain tasks successfully. This has led to the emergence of a new interdisciplinary field at the intersection of robotics and communications, which has been further driven by the integration of UAVs into 5G and 6G communication networks. However, one of the main challenges in this research area is that many researchers tend to oversimplify either the robotics or the communications aspects, hindering the full potential of this new interdisciplinary field. In this paper, we present some of the necessary modeling tools for addressing these problems from both a robotics and communications perspective, using the UAV communications relay as an example.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 213,
    "label": 3,
    "text": "Title: QI2 - an Interactive Tool for Data Quality Assurance\nAbstract: The importance of high data quality is increasing with the growing impact and distribution of ML systems and big data. In addition, the planned AI Act from the European Commission defines challenging legal requirements for data quality, especially for the market introduction of safety-relevant ML systems. In this paper, we introduce a novel approach that supports the data quality assurance process of multiple data quality aspects. This approach enables the verification of quantitative data quality requirements. The concept and benefits are introduced and explained on small example data sets. How the method is applied is demonstrated on the well-known MNIST data set based on handwritten digits.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 214,
    "label": 16,
    "text": "Title: Vision Transformers for Mobile Applications: A Short Survey\nAbstract: Vision Transformers (ViTs) have demonstrated state-of-the-art performance on many Computer Vision Tasks. Unfortunately, deploying these large-scale ViTs is resource-consuming and impossible for many mobile devices. While most in the community are building for larger and larger ViTs, we ask a completely opposite question: How small can a ViT be within the tradeoffs of accuracy and inference latency that make it suitable for mobile deployment? We look into a few ViTs specifically designed for mobile applications and observe that they modify the transformer's architecture or are built around the combination of CNN and transformer. Recent work has also attempted to create sparse ViT networks and proposed alternatives to the attention module.
In this paper, we study these architectures, identify the challenges and analyze what really makes a vision transformer suitable for mobile applications. We aim to serve as a baseline for future research directions and hopefully lay the foundation to choose the exemplary vision transformer architecture for your application running on mobile devices.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 215,
    "label": 10,
    "text": "Title: Human in the Loop Novelty Generation\nAbstract: Developing artificial intelligence approaches to overcome novel, unexpected circumstances is a difficult, unsolved problem. One challenge to advancing the state of the art in novelty accommodation is the availability of testing frameworks for evaluating performance against novel situations. Recent novelty generation approaches in domains such as Science Birds and Monopoly leverage human domain expertise during the search to discover new novelties. Such approaches introduce human guidance before novelty generation occurs and yield novelties that can be directly loaded into a simulated environment. We introduce a new approach to novelty generation that uses abstract models of environments (including simulation domains) that do not require domain-dependent human guidance to generate novelties. A key result is a larger, often infinite space of novelties capable of being generated, with the trade-off being a requirement to involve human guidance to select and filter novelties post generation. We describe our Human-in-the-Loop novelty generation process using our open-source novelty generation library to test baseline agents in two domains: Monopoly and VizDoom. Our results show the Human-in-the-Loop method enables users to develop, implement, test, and revise novelties within 4 hours for both Monopoly and VizDoom domains.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 216,
    "label": 30,
    "text": "Title: AI-UPV at EXIST 2023 - Sexism Characterization Using Large Language Models Under The Learning with Disagreements Regime\nAbstract: With the increasing influence of social media platforms, it has become crucial to develop automated systems capable of detecting instances of sexism and other disrespectful and hateful behaviors to promote a more inclusive and respectful online environment. Nevertheless, these tasks are considerably challenging given different hate categories and the author's intentions, especially under the learning with disagreements regime. This paper describes AI-UPV team's participation in the EXIST (sEXism Identification in Social neTworks) Lab at CLEF 2023. The proposed approach aims at addressing the task of sexism identification and characterization under the learning with disagreements paradigm by training directly from the data with disagreements, without using any aggregated label. Yet, performances considering both soft and hard evaluations are reported. The proposed system uses large language models (i.e., mBERT and XLM-RoBERTa) and ensemble strategies for sexism identification and classification in English and Spanish. In particular, our system is articulated in three different pipelines. The ensemble approach outperformed the individual large language models, obtaining the best performances under both soft and hard label evaluation.
This work describes the participation in all three EXIST tasks. Considering a soft evaluation, it obtained fourth place in Task 2 at EXIST and first place in Task 3, with the highest ICM-Soft of -2.32 and a normalized ICM-Soft of 0.79. The source code of our approaches is publicly available at https://github.com/AngelFelipeMP/Sexism-LLM-Learning-With-Disagreement.",
    "neighbors": [
      1052,
      1219,
      1683,
      1965
    ],
    "mask": "Train"
  },
  {
    "node_id": 217,
    "label": 24,
    "text": "Title: Explainable Data Poison Attacks on Human Emotion Evaluation Systems Based on EEG Signals\nAbstract: The major aim of this paper is to explain the data poisoning attacks using label-flipping during the training stage of the electroencephalogram (EEG) signal-based human emotion evaluation systems deploying Machine Learning models from the attackers\u2019 perspective. Human emotion evaluation using EEG signals has consistently attracted a lot of research attention. The identification of human emotional states based on EEG signals is effective for detecting potential internal threats caused by insider individuals. Nevertheless, EEG signal-based human emotion evaluation systems have shown several vulnerabilities to data poison attacks. Besides, due to the instability and complexity of the EEG signals, it is challenging to explain and analyze how data poison attacks influence the decision process of EEG signal-based human emotion evaluation systems. In this paper, from the attackers\u2019 side, data poison attacks occurring in the training phases of six different Machine Learning models including Random Forest, Adaptive Boosting (AdaBoost), Extra Trees, XGBoost, Multilayer Perceptron (MLP), and K-Nearest Neighbors (KNN) intrude on the EEG signal-based human emotion evaluation systems using these Machine Learning models. This seeks to reduce the performance of the aforementioned Machine Learning models with regard to the classification task of 4 different human emotions using EEG signals. The findings of the experiments demonstrate that the suggested data poison assaults are model-independently successful, although various models exhibit varying levels of resilience to the attacks. In addition, the data poison attacks on the EEG signal-based human emotion evaluation systems are explained with several Explainable Artificial Intelligence (XAI) methods including Shapley Additive Explanation (SHAP) values, Local Interpretable Model-agnostic Explanations (LIME), and Generated Decision Trees. The code of this paper is publicly available on GitHub.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 218,
    "label": 24,
    "text": "Title: Diverse Probabilistic Trajectory Forecasting with Admissibility Constraints\nAbstract: Predicting multiple trajectories for road users is important for automated driving systems: ego-vehicle motion planning indeed requires a clear view of the possible motions of the surrounding agents. However, the generative models used for multiple-trajectory forecasting suffer from a lack of diversity in their proposals. To avoid this form of collapse, we propose a novel method for structured prediction of diverse trajectories. To this end, we complement an underlying pretrained generative model with a diversity component, based on a determinantal point process (DPP). We balance and structure this diversity with the inclusion of knowledge-based quality constraints, independent from the underlying generative model.
We combine these two novel components with a gating operation, ensuring that the predictions are both diverse and within the drivable area. We demonstrate on the nuScenes driving dataset the relevance of our compound approach, which yields significant improvements in the diversity and the quality of the generated trajectories.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 219,
    "label": 24,
    "text": "Title: Representation Learning via Variational Bayesian Networks\nAbstract: We present Variational Bayesian Network (VBN) - a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the \"long-tail\", where the data is scarce. VBN provides better modeling for long-tail entities via two complementary mechanisms: First, VBN employs informative hierarchical priors that enable information propagation between entities sharing common ancestors. Additionally, VBN models explicit relations between entities that enforce complementary structure and consistency, guiding the learned representations towards a more meaningful arrangement in space. Second, VBN represents entities by densities (rather than vectors), hence modeling uncertainty that plays a complementary role in coping with data scarcity. Finally, we propose a scalable Variational Bayes optimization algorithm that enables fast approximate Bayesian inference. We evaluate the effectiveness of VBN on linguistic, recommendations, and medical inference tasks. Our findings show that VBN outperforms other existing methods across multiple datasets, and especially in the long-tail.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 220,
    "label": 16,
    "text": "Title: Vision Transformer for Action Units Detection\nAbstract: Facial Action Units detection (FAUs) represents a fine-grained classification problem that involves identifying different units on the human face, as defined by the Facial Action Coding System. In this paper, we present a simple yet efficient Vision Transformer-based approach for addressing the task of Action Units (AU) detection in the context of Affective Behavior Analysis in-the-wild (ABAW) competition. We employ the Video Vision Transformer (ViViT) Network to capture the temporal facial change in the video. Besides, to reduce the massive size of the Vision Transformers model, we replace the ViViT feature extraction layers with the CNN backbone (Regnet). Our model outperforms the baseline model of the ABAW 2023 challenge, with a notable 14% difference in results. Furthermore, the achieved results are comparable to those of the top three teams in the previous ABAW 2022 challenge.",
    "neighbors": [
      1241,
      1533,
      1541
    ],
    "mask": "Train"
  },
  {
    "node_id": 221,
    "label": 16,
    "text": "Title: Entropy Transformer Networks: A Learning Approach via Tangent Bundle Data Manifold\nAbstract: This paper focuses on an accurate and fast interpolation approach for image transformation employed in the design of CNN architectures. Standard Spatial Transformer Networks (STNs) use bilinear or linear interpolation as their interpolation, with unrealistic assumptions about the underlying data distributions, which leads to poor performance under scale variations. Moreover, STNs do not preserve the norm of gradients in propagation due to their dependency on sparse neighboring pixels. To address this problem, a novel Entropy STN (ESTN) is proposed that interpolates on the data manifold distributions.
In particular, random samples are generated for each pixel in association with the tangent space of the data manifold, and a linear approximation of their intensity values is constructed with an entropy regularizer to compute the transformer parameters. A simple yet effective technique is also proposed to normalize the non-zero values of the convolution operation, to fine-tune the layers for gradients' norm-regularization during training. Experiments on challenging benchmarks show that the proposed ESTN can improve predictive accuracy over a range of computer vision tasks, including image reconstruction and classification, while reducing the computational cost.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 222,
    "label": 15,
    "text": "Title: LEAPS: Topological-Layout-Adaptable Multi-die FPGA Placement for Super Long Line Minimization\nAbstract: Multi-die FPGAs are crucial components in modern computing systems, particularly for high-performance applications such as artificial intelligence and data centers. Super long lines (SLLs) provide interconnections between super logic regions (SLRs) for a multi-die FPGA on a silicon interposer. They have significantly higher delay compared to regular interconnects, which needs to be minimized. With the increase in design complexity, the growth of SLLs gives rise to challenges in timing and power closure. Existing placement algorithms focus on optimizing the number of SLLs but often face limitations due to specific topologies of SLRs. Furthermore, they fall short of achieving continuous optimization of SLLs throughout the entire placement process. This highlights the necessity for more advanced and adaptable solutions. In this paper, we propose LEAPS, a comprehensive, systematic, and adaptable multi-die FPGA placement algorithm for SLL minimization. Our contributions are threefold: 1) proposing a high-performance global placement algorithm for multi-die FPGAs that optimizes the number of SLLs while addressing other essential design constraints such as wirelength, routability, and clock routing; 2) introducing a versatile method for more complex SLR topologies of multi-die FPGAs, surpassing the limitations of existing approaches; and 3) executing continuous optimization of SLLs across all placement stages, including global placement (GP), legalization (LG), and detailed placement (DP). Experimental results demonstrate the effectiveness of LEAPS in reducing SLLs and enhancing circuit performance. Compared with the most recent state-of-the-art (SOTA) method, LEAPS achieves an average reduction of 40.19% in SLLs and 9.99% in HPWL, while exhibiting a notable 34.34$\\times$ improvement in runtime.",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 223,
    "label": 4,
    "text": "Title: Sparsity and Privacy in Secret Sharing: A Fundamental Trade-Off\nAbstract: This work investigates the design of sparse secret sharing schemes that encode a sparse private matrix into sparse shares. This investigation is motivated by distributed computing, where the multiplication of sparse and private matrices is moved from a computationally weak main node to untrusted worker machines. Classical secret-sharing schemes produce dense shares. However, sparsity can help speed up the computation. We show that, for matrices with i.i.d. entries, sparsity in the shares comes at a fundamental cost of weaker privacy.
We derive a fundamental tradeoff between sparsity and privacy and construct optimal sparse secret sharing schemes that produce shares that leak the minimum amount of information for a desired sparsity of the shares. We apply our schemes to distributed sparse and private matrix multiplication schemes with no colluding workers while tolerating stragglers. For the setting of two non-communicating clusters of workers, we design a sparse one-time pad so that no private information is leaked to a cluster of untrusted and colluding workers, and the shares with bounded but non-zero leakage are assigned to a cluster of partially trusted workers. We conclude by discussing the necessity of using permutations for matrices with correlated entries.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 224, + "label": 30, + "text": "Title: Query-Utterance Attention with Joint modeling for Query-Focused Meeting Summarization\nAbstract: Query-focused meeting summarization (QFMS) aims to generate summaries from meeting transcripts in response to a given query. Previous works typically concatenate the query with meeting transcripts and implicitly model the query relevance only at the token level with attention mechanism. However, due to the dilution of key query-relevant information caused by long meeting transcripts, the original transformer-based model is insufficient to highlight the key parts related to the query. In this paper, we propose a query-aware framework with joint modeling token and utterance based on Query-Utterance Attention. It calculates the utterance-level relevance to the query with a dense retrieval module. Then both token-level query relevance and utterance-level query relevance are combined and incorporated into the generation process with attention mechanism explicitly. We show that the query relevance of different granularities contributes to generating a summary more related to the query. Experimental results on the QMSum dataset show that the proposed model achieves new state-of-the-art performance.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 225, + "label": 30, + "text": "Title: Systematic Offensive Stereotyping (SOS) Bias in Language Models\nAbstract: Research has shown that language models (LMs) are socially biased. However, toxicity and offensive stereotyping bias in LMs are understudied. In this paper, we investigate the systematic offensive stereotype (SOS) bias in LMs. We propose a method to measure it. Then, we validate the SOS bias and investigate the effectiveness of debias methods from the literature on removing it. Finally, we investigate the impact of the SOS bias in LMs on their performance and their fairness on the task of hate speech detection. Our results suggest that all the inspected LMs are SOS biased. The results suggest that the SOS bias in LMs is reflective of the hate experienced online by the inspected marginalized groups. The results indicate that removing the SOS bias in LMs, using a popular debias method from the literature, leads to worse SOS bias scores. Finally, Our results show no strong evidence that the SOS bias in LMs is impactful on their performance on hate speech detection. 
On the other hand, there is evidence that the SOS bias in LMs is impactful on their fairness.", + "neighbors": [ + 767, + 1705 + ], + "mask": "Train" + }, + { + "node_id": 226, + "label": 28, + "text": "Title: Stacked Intelligent Metasurfaces for Efficient Holographic MIMO Communications in 6G\nAbstract: A revolutionary technology relying on Stacked Intelligent Metasurfaces (SIM) is capable of carrying out advanced signal processing directly in the native electromagnetic (EM) wave regime. An SIM is fabricated by a sophisticated amalgam of multiple stacked metasurface layers, which may outperform its single-layer metasurface counterparts, such as reconfigurable intelligent surfaces (RIS) and metasurface lenses. We harness this new SIM for implementing holographic multiple-input multiple-output (HMIMO) communications without requiring excessive radio-frequency (RF) chains, which is a substantial benefit compared to existing implementations. First of all, we propose an HMIMO communication system based on a pair of SIM at the transmitter (TX) and receiver (RX), respectively. In sharp contrast to the conventional MIMO designs, SIM is capable of automatically accomplishing transmit precoding and receiver combining, as the EM waves propagate through them. As such, each spatial stream can be directly radiated and recovered from the corresponding transmit and receive port. Secondly, we formulate the problem of minimizing the error between the actual end-to-end channel matrix and the target diagonal one, representing a flawless interference-free system of parallel subchannels. This is achieved by jointly optimizing the phase shifts associated with all the metasurface layers of both the TX-SIM and RX-SIM. We then design a gradient descent algorithm to solve the resultant non-convex problem. Furthermore, we theoretically analyze the HMIMO channel capacity bound and provide some fundamental insights. Finally, extensive simulation results are provided for characterizing our SIM-aided HMIMO system, which quantifies its substantial performance benefits, e.g., 150% capacity improvement over both conventional MIMO and its RIS-aided counterparts.", + "neighbors": [ + 745, + 865, + 2269 + ], + "mask": "Train" + }, + { + "node_id": 227, + "label": 7, + "text": "Title: A New Algorithm to determine Adomian Polynomials for nonlinear polynomial functions\nAbstract: We present a new algorithm by which the Adomian polynomials can be determined for scalar-valued nonlinear polynomial functional in a Hilbert space. This algorithm calculates the Adomian polynomials without the complicated operations such as parametrization, expansion, regrouping, differentiation, etc. The algorithm involves only some matrix operations. Because of the simplicity in the mathematical operations, the new algorithm is faster and more efficient than the other algorithms previously reported in the literature. We also implement the algorithm in the MATHEMATICA code. The computing speed and efficiency of the new algorithm are compared with some other algorithms in the one-dimensional case.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 228, + "label": 30, + "text": "Title: Improving Speech Translation by Cross-Modal Multi-Grained Contrastive Learning\nAbstract: The end-to-end speech translation (E2E-ST) model has gradually become a mainstream paradigm due to its low latency and less error propagation. However, it is non-trivial to train such a model well due to the task complexity and data scarcity. 
The speech-and-text modality differences result in E2E-ST model performance that is usually inferior to that of the corresponding machine translation (MT) model. Based on the above observation, existing methods often use sharing mechanisms to carry out implicit knowledge transfer by imposing various constraints. However, the final model often performs worse on the MT task than the MT model trained alone, which means that the knowledge transfer ability of this method is also limited. To deal with these problems, we propose the FCCL (Fine- and Coarse-Granularity Contrastive Learning) approach for E2E-ST, which performs explicit knowledge transfer through cross-modal multi-grained contrastive learning. A key ingredient of our approach is applying contrastive learning at both the sentence and frame levels to provide comprehensive guidance for extracting speech representations containing rich semantic information. In addition, we adopt a simple whitening method to alleviate the representation degeneration in the MT model, which adversely affects contrastive learning. Experiments on the MuST-C benchmark show that our proposed approach significantly outperforms the state-of-the-art E2E-ST baselines on all eight language pairs. Further analysis indicates that FCCL can free up its capacity from learning grammatical structure information and force more layers to learn semantic information.",
    "neighbors": [
      582,
      637
    ],
    "mask": "Train"
  },
  {
    "node_id": 229,
    "label": 24,
    "text": "Title: An Enhanced V-cycle MgNet Model for Operator Learning in Numerical Partial Differential Equations\nAbstract: This study used a multigrid-based convolutional neural network architecture known as MgNet in operator learning to solve numerical partial differential equations (PDEs). Given the property of smoothing iterations in multigrid methods where low-frequency errors decay slowly, we introduced a low-frequency correction structure for residuals to enhance the standard V-cycle MgNet. The enhanced MgNet model can capture the low-frequency features of solutions considerably better than the standard V-cycle MgNet. The numerical results obtained using some standard operator learning tasks are better than those obtained using many state-of-the-art methods, demonstrating the efficiency of our model. Moreover, numerically, our new model is more robust in the case of low- and high-resolution data during training and testing, respectively.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 230,
    "label": 4,
    "text": "Title: FACE-AUDITOR: Data Auditing in Facial Recognition Systems\nAbstract: Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images during the model deployment phase. However, the power of facial recognition systems enables entities with moderate resources to canvas the Internet and build well-performed facial recognition models without people's awareness and consent. To prevent the face images from being misused, one straightforward approach is to modify the raw face images before sharing them, which inevitably destroys the semantic information, increases the difficulty of retroactivity, and is still prone to adaptive attacks. Therefore, an auditing method that does not interfere with the facial recognition model's utility and cannot be quickly bypassed is urgently needed.
In this paper, we formulate the auditing process as a user-level membership inference problem and propose a complete toolkit FACE-AUDITOR that can carefully choose the probing set to query the few-shot-based facial recognition model and determine whether any of a user's face images is used in training the model. We further propose to use the similarity scores between the original face images as reference information to improve the auditing performance. Extensive experiments on multiple real-world face image datasets show that FACE-AUDITOR can achieve auditing accuracy of up to $99\\%$. Finally, we show that FACE-AUDITOR is robust in the presence of several perturbation mechanisms to the training images or the target models. The source code of our experiments can be found at \\url{https://github.com/MinChen00/Face-Auditor}.", + "neighbors": [ + 1409 + ], + "mask": "Train" + }, + { + "node_id": 231, + "label": 28, + "text": "Title: Randomly punctured Reed-Solomon codes achieve list-decoding capacity over linear-sized fields\nAbstract: Reed--Solomon codes are a classic family of error-correcting codes consisting of evaluations of low-degree polynomials over a finite field on some sequence of distinct field elements. They are widely known for their optimal unique-decoding capabilities, but their list-decoding capabilities are not fully understood. Given the prevalence of Reed-Solomon codes, a fundamental question in coding theory is determining if Reed--Solomon codes can optimally achieve list-decoding capacity. A recent breakthrough by Brakensiek, Gopi, and Makam, established that Reed--Solomon codes are combinatorially list-decodable all the way to capacity. However, their results hold for randomly-punctured Reed--Solomon codes over an exponentially large field size $2^{O(n)}$, where $n$ is the block length of the code. A natural question is whether Reed--Solomon codes can still achieve capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon codes are list-decodable to capacity with field size $O(n^2)$. We show that Reed--Solomon codes are list-decodable to capacity with linear field size $O(n)$, which is optimal up to the constant factor. We also give evidence that the ratio between the alphabet size $q$ and code length $n$ cannot be bounded by an absolute constant. Our techniques also show that random linear codes are list-decodable up to (the alphabet-independent) capacity with optimal list-size $O(1/\\varepsilon)$ and near-optimal alphabet size $2^{O(1/\\varepsilon^2)}$, where $\\varepsilon$ is the gap to capacity. As far as we are aware, list-decoding up to capacity with optimal list-size $O(1/\\varepsilon)$ was previously not known to be achievable with any linear code over a constant alphabet size (even non-constructively). Our proofs are based on the ideas of Guo and Zhang, and we additionally exploit symmetries of reduced intersection matrices.", + "neighbors": [ + 2134 + ], + "mask": "Train" + }, + { + "node_id": 232, + "label": 31, + "text": "Title: Reconciling the accuracy-diversity trade-off in recommendations\nAbstract: In recommendation settings, there is an apparent trade-off between the goals of accuracy (to recommend items a user is most likely to want) and diversity (to recommend items representing a range of categories). As such, real-world recommender systems often explicitly incorporate diversity separately from accuracy. This approach, however, leaves a basic question unanswered: Why is there a trade-off in the first place? 
We show how the trade-off can be explained via a user's consumption constraints -- users typically only consume a few of the items they are recommended. In a stylized model we introduce, objectives that account for this constraint induce diverse recommendations, while objectives that do not account for this constraint induce homogeneous recommendations. This suggests that accuracy and diversity appear misaligned because standard accuracy metrics do not consider consumption constraints. Our model yields precise and interpretable characterizations of diversity in different settings, giving practical insights into the design of diverse recommendations.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 233, + "label": 4, + "text": "Title: Application-aware Energy Attack Mitigation in the Battery-less Internet of Things\nAbstract: We study how to mitigate the effects of energy attacks in the batteryless Internet of Things (IoT). Battery-less IoT devices live and die with ambient energy, as they use energy harvesting to power their operation. They are employed in a multitude of applications, including safety-critical ones such as biomedical implants. Due to scarce energy intakes and limited energy buffers, their executions become intermittent, alternating periods of active operation with periods of recharging their energy buffers. Experimental evidence exists that shows how controlling ambient energy allows an attacker to steer a device execution in unintended ways: energy provisioning effectively becomes an attack vector. We design, implement, and evaluate a mitigation system for energy attacks. By taking into account the specific application requirements and the output of an attack detection module, we tune task execution rates and optimize energy management. This ensures continued application execution in the event of an energy attack. When a device is under attack, our solution ensures the execution of 23.3% additional application cycles compared to the baselines we consider and increases task schedulability by at least 21%, while enabling a 34% higher peripheral availability.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 234, + "label": 4, + "text": "Title: Hybrid DLT as a data layer for real-time, data-intensive applications\nAbstract: We propose a new approach, termed Hybrid DLT, to address a broad range of industrial use cases where certain properties of both private and public DLTs are valuable, while other properties may be unnecessary or detrimental. The Hybrid DLT approach involves a system where private ledgers, with limited data block dissemination, are collaboratively created by nodes within a private network. The Notary, a publicly auditable authoritative component, maintains a single, official, coherent history for each private ledger without requiring access to data blocks. This is achieved by leveraging a public DLT solution to render the ledger histories tamper-proof, consequently providing tamper-evidence for ledger data disclosed to external actors. 
We present Traent Hybrid Blockchain, a commercial implementation of the Hybrid DLT approach: a real-time, data-intensive collaboration system for organizations seeking immutable data while also needing to comply with the European General Data Protection Regulation (GDPR).", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 235, + "label": 24, + "text": "Title: Treat Different Negatives Differently: Enriching Loss Functions with Domain and Range Constraints for Link Prediction\nAbstract: Knowledge graph embedding models (KGEMs) are used for various tasks related to knowledge graphs (KGs), including link prediction. They are trained with loss functions that are computed considering a batch of scored triples and their corresponding labels. Traditional approaches consider the label of a triple to be either true or false. However, recent works suggest that all negative triples should not be valued equally. In line with this recent assumption, we posit that negative triples that are semantically valid w.r.t. domain and range constraints might be high-quality negative triples. As such, loss functions should treat them differently from semantically invalid negative ones. To this aim, we propose semantic-driven versions for the three main loss functions for link prediction. In an extensive and controlled experimental setting, we show that the proposed loss functions systematically provide satisfying results on three public benchmark KGs underpinned with different schemas, which demonstrates both the generality and superiority of our proposed approach. In fact, the proposed loss functions do (1) lead to better MRR and Hits@10 values, (2) drive KGEMs towards better semantic awareness as measured by the Sem@K metric. This highlights that semantic information globally improves KGEMs, and thus should be incorporated into loss functions. Domains and ranges of relations being largely available in schema-defined KGs, this makes our approach both beneficial and widely usable in practice.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 236, + "label": 24, + "text": "Title: Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks\nAbstract: In the graph node embedding problem, embedding spaces can vary significantly for different data types, leading to the need for different GNN model types. In this paper, we model the embedding update of a node feature as a Hamiltonian orbit over time. Since the Hamiltonian orbits generalize the exponential maps, this approach allows us to learn the underlying manifold of the graph in training, in contrast to most of the existing literature that assumes a fixed graph embedding manifold with a closed exponential map solution. Our proposed node embedding strategy can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset even if it has diverse geometries. We test Hamiltonian functions of different forms and verify the performance of our approach on two graph node embedding downstream tasks: node classification and link prediction. Numerical experiments demonstrate that our approach adapts better to different types of graph datasets than popular state-of-the-art graph node embedding GNNs. 
The code is available at \\url{https://github.com/zknus/Hamiltonian-GNN}.", + "neighbors": [ + 108 + ], + "mask": "Train" + }, + { + "node_id": 237, + "label": 24, + "text": "Title: Explainability Techniques for Chemical Language Models\nAbstract: Explainability techniques are crucial in gaining insights into the reasons behind the predictions of deep learning models, which have not yet been applied to chemical language models. We propose an explainable AI technique that attributes the importance of individual atoms towards the predictions made by these models. Our method backpropagates the relevance information towards the chemical input string and visualizes the importance of individual atoms. We focus on self-attention Transformers operating on molecular string representations and leverage a pretrained encoder for finetuning. We showcase the method by predicting and visualizing solubility in water and organic solvents. We achieve competitive model performance while obtaining interpretable predictions, which we use to inspect the pretrained model.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 238, + "label": 28, + "text": "Title: Interference Leakage Minimization in RIS-assisted MIMO Interference Channels\nAbstract: We address the problem of interference leakage (IL) minimization in the $K$-user multiple-input multiple-output (MIMO) interference channel (IC) assisted by a reconfigurable intelligent surface (RIS). We describe an iterative algorithm based on block coordinate descent to minimize the IL cost function. A reformulation of the problem provides a geometric interpretation and shows interesting connections with envelope precoding and phase-only zero-forcing beamforming problems. As a result of this analysis, we derive a set of necessary (but not sufficient) conditions for a phase-optimized RIS to be able to perfectly cancel the interference on the $K$-user MIMO IC.", + "neighbors": [ + 1143, + 1410, + 2253 + ], + "mask": "Train" + }, + { + "node_id": 239, + "label": 27, + "text": "Title: Multi-Shooting Differential Dynamic Programming for Hybrid Systems using Analytical Derivatives\nAbstract: Differential Dynamic Programming (DDP) is a popular technique used to generate motion for dynamic-legged robots in the recent past. However, in most cases, only the first-order partial derivatives of the underlying dynamics are used, resulting in the iLQR approach. Neglecting the second-order terms often slows down the convergence rate compared to full DDP. Multi-Shooting is another popular technique to improve robustness, especially if the dynamics are highly non-linear. In this work, we consider Multi-Shooting DDP for trajectory optimization of a bounding gait for a simplified quadruped model. As the main contribution, we develop Second-Order analytical partial derivatives of the rigid-body contact dynamics, extending our previous results for fixed/floating base models with multi-DoF joints. Finally, we show the benefits of a novel Quasi-Newton method for approximating second-order derivatives of the dynamics, leading to order-of-magnitude speedups in the convergence compared to the full DDP method.", + "neighbors": [ + 264 + ], + "mask": "Test" + }, + { + "node_id": 240, + "label": 10, + "text": "Title: Cognitive Architectures for Language Agents\nAbstract: Recent efforts have incorporated large language models (LLMs) with external resources (e.g., the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or reasoning. 
However, these efforts have largely been piecemeal, lacking a systematic framework for constructing a fully-fledged language agent. To address this challenge, we draw on the rich history of agent design in symbolic artificial intelligence to develop a blueprint for a new wave of cognitive language agents. We first show that LLMs have many of the same properties as production systems, and recent efforts to improve their grounding or reasoning mirror the development of cognitive architectures built around production systems. We then propose Cognitive Architectures for Language Agents (CoALA), a conceptual framework to systematize diverse methods for LLM-based reasoning, grounding, learning, and decision making as instantiations of language agents in the framework. Finally, we use the CoALA framework to highlight gaps and propose actionable directions toward more capable language agents in the future.",
    "neighbors": [
      57,
      127,
      247,
      817,
      1047,
      1203,
      1267,
      1490,
      1840,
      1863,
      1877,
      1878,
      2029,
      2100,
      2136
    ],
    "mask": "Train"
  },
  {
    "node_id": 241,
    "label": 37,
    "text": "Title: Fast Searching The Densest Subgraph And Decomposition With Local Optimality\nAbstract: Densest Subgraph Problem (DSP) is an important primitive problem with a wide range of applications, including fraud detection, community detection and DNA motif discovery. Edge-based density is one of the most common metrics in DSP. Although a maximum flow algorithm can exactly solve it in polynomial time, the increasing amount of data and the high complexity of algorithms motivate scientists to find approximation algorithms. Among these, its linear programming dual gives rise to several iterative algorithms, including Greedy++, Frank-Wolfe, and FISTA, which redistribute edge weights to find the densest subgraph. However, these iterative algorithms oscillate around the optimal solution, which is not satisfactory for fast convergence. We propose our main algorithm Locally Optimal Weight Distribution (LOWD) to distribute the remaining edge weights in a locally optimal operation to converge to the optimal solution monotonically. Theoretically, we show that it will reach the optimal state of a specific linear program, which is called locally-dense decomposition. Besides, we show that it is not necessary to consider most of the edges in the original graph. Therefore, we develop a pruning algorithm using a modified Counting Sort to prune graphs by removing unnecessary edges and nodes, and then we can search the densest subgraph in a much smaller graph.",
    "neighbors": [],
    "mask": "Train"
  },
  {
    "node_id": 242,
    "label": 23,
    "text": "Title: Understanding and Remediating Open-Source License Incompatibilities in the PyPI Ecosystem\nAbstract: The reuse and distribution of open-source software must be in compliance with its accompanying open-source license. In modern packaging ecosystems, maintaining such compliance is challenging because a package may have a complex multi-layered dependency graph with many packages, any of which may have an incompatible license. Although prior research finds that license incompatibilities are prevalent, empirical evidence is still scarce in some modern packaging ecosystems (e.g., PyPI). It also remains unclear how developers remediate the license incompatibilities in the dependency graphs of their packages (including direct and transitive dependencies), let alone any automated approaches.
To bridge this gap, we conduct a large-scale empirical study of license incompatibilities and their remediation practices in the PyPI ecosystem. We find that 7.27% of the PyPI package releases have license incompatibilities and 61.3% of them are caused by transitive dependencies, causing challenges in their remediation; for remediation, developers can apply one of five strategies: migration, removal, pinning versions, changing their own licenses, and negotiation. Inspired by our findings, we propose SILENCE, an SMT-solver-based approach to recommend license incompatibility remediations with minimal costs in the package dependency graph. Our evaluation shows that the remediations proposed by SILENCE can match 19 historical real-world cases (except for migrations not covered by an existing knowledge base) and have been accepted by five popular PyPI packages whose developers were previously unaware of their license incompatibilities.",
    "neighbors": [
      1139
    ],
    "mask": "Train"
  },
  {
    "node_id": 243,
    "label": 24,
    "text": "Title: Gradient Derivation for Learnable Parameters in Graph Attention Networks\nAbstract: This work provides a comprehensive derivation of the parameter gradients for GATv2 [4], a widely used implementation of Graph Attention Networks (GATs). GATs have proven to be powerful frameworks for processing graph-structured data and, hence, have been used in a range of applications. However, the performance achieved by these attempts has been found to be inconsistent across different datasets, and the reasons for this remain an open research question. As the gradient flow provides valuable insights into the training dynamics of statistical learning models, this work obtains the gradients for the trainable model parameters of GATv2. The gradient derivations supplement the efforts of [2], where potential pitfalls of GATv2 are investigated.",
    "neighbors": [],
    "mask": "Validation"
  },
  {
    "node_id": 244,
    "label": 7,
    "text": "Title: PGD reduced-order modeling for structural dynamics applications\nAbstract: nan",
    "neighbors": [],
    "mask": "Test"
  },
  {
    "node_id": 245,
    "label": 24,
    "text": "Title: Fundamental limits of overparametrized shallow neural networks for supervised learning\nAbstract: We carry out an information-theoretical analysis of a two-layer neural network trained from input-output pairs generated by a teacher network with matching architecture, in overparametrized regimes. Our results come in the form of bounds relating i) the mutual information between training data and network weights, or ii) the Bayes-optimal generalization error, to the same quantities but for a simpler (generalized) linear model for which explicit expressions are rigorously known. Our bounds, which are expressed in terms of the number of training samples, input dimension and number of hidden units, thus yield fundamental performance limits for any neural network (and actually any learning procedure) trained from limited data generated according to our two-layer teacher neural network model. The proof relies on rigorous tools from spin glasses and is guided by ``Gaussian equivalence principles'' lying at the core of numerous recent analyses of neural networks. With respect to the existing literature, which is either non-rigorous or restricted to the case of the learning of the readout weights only, our results are information-theoretic (i.e.
are not specific to any learning algorithm) and, importantly, cover a setting where all the network parameters are trained.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 246, + "label": 27, + "text": "Title: High-speed electrical connector assembly by structured compliance in a finray-effect gripper\nAbstract: Fine assembly tasks such as electrical connector insertion have tight tolerances and sensitive components, requiring compensation of alignment errors while applying sufficient force in the insertion direction, ideally at high speeds and while grasping a range of components. Vision, tactile, or force sensors can compensate for alignment errors, but have limited bandwidth, limiting the safe assembly speed. Passive compliance such as silicone-based fingers can reduce collision forces and grasp a range of components, but often cannot provide the accuracy or assembly forces required. To support high-speed mechanical search and self-aligning insertion, this paper proposes monolithic additively manufactured fingers which realize a moderate, structured compliance directly proximal to the gripped object. The geometry of the finray-effect fingers is adapted to add form-closure features and realize a directionally-dependent stiffness at the fingertip, with a high stiffness to apply insertion forces and a lower transverse stiffness to support alignment. Design parameters and mechanical properties of the fingers are investigated with FEM and empirical studies, analyzing the stiffness, maximum load, and viscoelastic effects. The fingers realize a remote center of compliance, which is shown to depend on the rib angle, and a directional stiffness ratio of $14-36$. The fingers are applied to a plug insertion task, realizing a tolerance window of $7.5$ mm and approach speeds of $1.3$ m/s.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 247, + "label": 30, + "text": "Title: ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate\nAbstract: Text evaluation has historically posed significant challenges, often demanding substantial labor and time costs. With the emergence of large language models (LLMs), researchers have explored LLMs' potential as alternatives for human evaluation. While these single-agent-based approaches show promise, experimental results suggest that further advancements are needed to bridge the gap between their current effectiveness and human-level evaluation quality. Recognizing that best practices of human evaluation processes often involve multiple human annotators collaborating in the evaluation, we resort to a multi-agent debate framework, moving beyond single-agent prompting strategies. The multi-agent-based approach enables a group of LLMs to synergize with an array of intelligent counterparts, harnessing their distinct capabilities and expertise to enhance efficiency and effectiveness in handling intricate tasks. In this paper, we construct a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of generated responses from different models on open-ended questions and traditional natural language generation (NLG) tasks. Our analysis shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments.
Our code is available at https://github.com/chanchimin/ChatEval.", + "neighbors": [ + 240, + 652, + 811, + 989, + 1203, + 1227, + 1346, + 1354, + 1566, + 1878, + 1949, + 2042, + 2087 + ], + "mask": "Train" + }, + { + "node_id": 248, + "label": 10, + "text": "Title: Complexity and scalability of defeasible reasoning in many-valued weighted knowledge bases with typicality\nAbstract: Weighted knowledge bases for description logics with typicality under a \"concept-wise\" multi-preferential semantics provide a logical interpretation of MultiLayer Perceptrons. In this context, Answer Set Programming (ASP) has been shown to be suitable for addressing defeasible reasoning in the finitely many-valued case, providing a $\\Pi^p_2$ upper bound on the complexity of the problem, nonetheless leaving the exact complexity unknown and only providing a proof-of-concept implementation. This paper fills this gap by providing a $P^{NP[log]}$-completeness result and new ASP encodings that deal with weighted knowledge bases with large search spaces.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 249, + "label": 22, + "text": "Title: Further Decimating the Inductive Programming Search Space with Instruction Digrams\nAbstract: Overlapping instruction subsets derived from human-originated code have previously been shown to dramatically shrink the inductive programming search space, often by many orders of magnitude. Here we extend the instruction subset approach to consider direct instruction-instruction applications (or instruction digrams) as an additional search heuristic for inductive programming. In this study we analyse the frequency distribution of instruction digrams in a large sample of open source code. This indicates that the instruction digram distribution is highly skewed, with over 93% of possible instruction digrams not represented in the code sample. We demonstrate that instruction digrams can be used to constrain instruction selection during search, further reducing the size of the search space, in some cases by several orders of magnitude. This significantly increases the size of programs that can be generated using search-based inductive programming techniques. We discuss the results and provide some suggestions for further work.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 250, + "label": 24, + "text": "Title: When Can Linear Learners be Robust to Indiscriminate Poisoning Attacks?\nAbstract: We study indiscriminate poisoning for linear learners where an adversary injects a few crafted examples into the training data with the goal of forcing the induced model to incur higher test error. Inspired by the observation that linear learners on some datasets are able to resist the best known attacks even without any defenses, we further investigate whether datasets can be inherently robust to indiscriminate poisoning attacks for linear learners. For theoretical Gaussian distributions, we rigorously characterize the behavior of an optimal poisoning attack, defined as the poisoning strategy that attains the maximum risk of the induced model at a given poisoning budget. Our results prove that linear learners can indeed be robust to indiscriminate poisoning if the class-wise data distributions are well-separated with low variance and the size of the constraint set containing all permissible poisoning points is also small.
These findings largely explain the drastic variation in empirical attack performance of the state-of-the-art poisoning attacks on linear learners across benchmark datasets, making an important initial step towards understanding the underlying reasons why some learning tasks are vulnerable to data poisoning attacks.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 251, + "label": 37, + "text": "Title: DIALITE: Discover, Align and Integrate Open Data Tables\nAbstract: We demonstrate a novel table discovery pipeline called DIALITE that allows users to discover, integrate and analyze open data tables. DIALITE has three main stages. First, it allows users to discover tables from open data platforms using state-of-the-art table discovery techniques. Second, DIALITE integrates the discovered tables to produce an integrated table. Finally, it allows users to analyze the integration result by applying different downstream tasks over it. Our pipeline is flexible such that the user can easily add and compare additional discovery and integration algorithms.", + "neighbors": [ + 1157 + ], + "mask": "Train" + }, + { + "node_id": 252, + "label": 27, + "text": "Title: What You See Is (not) What You Get: A VR Framework for Correcting Robot Errors\nAbstract: Many solutions tailored for intuitive visualization or teleoperation of virtual, augmented and mixed (VAM) reality systems are not robust to robot failures, such as the inability to detect and recognize objects in the environment or planning unsafe trajectories. In this paper, we present a novel virtual reality (VR) framework where users can (i) recognize when the robot has failed to detect a real-world object, (ii) correct the error in VR, (iii) modify proposed object trajectories, and (iv) implement behaviors on a real-world robot. Finally, we propose a user study aimed at testing the efficacy of our framework. Project materials can be found in the OSF repository.", + "neighbors": [ + 1726 + ], + "mask": "Train" + }, + { + "node_id": 253, + "label": 24, + "text": "Title: Homological Convolutional Neural Networks\nAbstract: Deep learning methods have demonstrated outstanding performances on classification and regression tasks on homogeneous data types (e.g., image, audio, and text data). However, tabular data still poses a challenge, with classic machine learning approaches often being computationally cheaper than, and equally effective as, increasingly complex deep learning architectures. The challenge arises from the fact that, in tabular data, the correlation among features is weaker than the one from spatial or semantic relationships in images or natural languages, and the dependency structures need to be modeled without any prior information. In this work, we propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations to gain spatial information from sparse tabular data. The resulting model leverages the power of convolutions and is centered on a limited number of concepts from network topology to guarantee (i) a data-centric, deterministic building pipeline; (ii) a high level of interpretability over the inference process; and (iii) adequate room for scalability. We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models, demonstrating that our approach reaches state-of-the-art performances on these challenging datasets.
The code to reproduce all our experiments is provided at https://github.com/FinancialComputingUCL/HomologicalCNN.", + "neighbors": [ + 671, + 1368 + ], + "mask": "Train" + }, + { + "node_id": 254, + "label": 23, + "text": "Title: AutoLog: A Log Sequence Synthesis Framework for Anomaly Detection\nAbstract: The rapid progress of modern computing systems has led to a growing interest in informative run-time logs. Various log-based anomaly detection techniques have been proposed to ensure software reliability. However, their implementation in the industry has been limited due to the lack of high-quality public log resources as training datasets. While some log datasets are available for anomaly detection, they suffer from limitations in (1) comprehensiveness of log events; (2) scalability over diverse systems; and (3) flexibility of log utility. To address these limitations, we propose AutoLog, the first automated log generation methodology for anomaly detection. AutoLog uses program analysis to generate run-time log sequences without actually running the system. AutoLog starts with probing comprehensive logging statements associated with the call graphs of an application. Then, it constructs execution graphs for each method after pruning the call graphs to find log-related execution paths in a scalable manner. Finally, AutoLog propagates the anomaly label to each acquired execution path based on human knowledge. It generates flexible log sequences by walking along the log execution paths with controllable parameters. Experiments on 50 popular Java projects show that AutoLog acquires significantly more (9x-58x) log events than existing log datasets from the same system, and generates log messages much faster (15x) with a single machine than existing passive data collection approaches. We hope AutoLog can facilitate the benchmarking and adoption of automated log analysis techniques.", + "neighbors": [ + 939 + ], + "mask": "Test" + }, + { + "node_id": 255, + "label": 5, + "text": "Title: Workflows Community Summit 2022: A Roadmap Revolution\nAbstract: Scientific workflows have become integral tools in broad scientific computing use cases. Scientific discovery is increasingly dependent on workflows to orchestrate large and complex scientific experiments that range from execution of a cloud-based data preprocessing pipeline to multi-facility instrument-to-edge-to-HPC computational workflows. Given the changing landscape of scientific computing and the evolving needs of emerging scientific applications, it is paramount that the development of novel scientific workflows and system functionalities seek to increase the efficiency, resilience, and pervasiveness of existing systems and applications. Specifically, the proliferation of machine learning/artificial intelligence (ML/AI) workflows, the need for processing large-scale datasets produced by instruments at the edge, the intensification of near real-time data processing, support for long-term experiment campaigns, and the emergence of quantum computing as an adjunct to HPC have significantly changed the functional and operational requirements of workflow systems. Workflow systems now need to, for example, support data streams from the edge to the cloud to HPC, enable the management of many small-sized files, allow data reduction while ensuring high accuracy, orchestrate distributed services (workflows, instruments, data movement, provenance, publication, etc.) across computing and user facilities, among others.
Further, to accelerate science, it is also necessary that these systems implement specifications/standards and APIs for seamless (horizontal and vertical) integration between systems and applications, as well as enabling the publication of workflows and their associated products according to the FAIR principles. This document reports on discussions and findings from the 2022 international edition of the Workflows Community Summit that took place on November 29 and 30, 2022.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 256, + "label": 24, + "text": "Title: A study on a Q-Learning algorithm application to a manufacturing assembly problem\nAbstract: nan", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 257, + "label": 24, + "text": "Title: IoT Federated Blockchain Learning at the Edge\nAbstract: IoT devices are sorely underutilized in the medical field, especially within machine learning for medicine, yet they offer unrivaled benefits. IoT devices are low-cost, energy-efficient, small and intelligent devices. In this paper, we propose a distributed federated learning framework for IoT devices, more specifically for IoMT (Internet of Medical Things), using blockchain to allow for a decentralized scheme improving privacy and efficiency over a centralized system; this allows us to move from the prevalent cloud-based architectures to the edge. The system is designed for three paradigms: 1) Training neural networks on IoT devices to allow for collaborative training of a shared model whilst decoupling the learning from the dataset to ensure privacy. Training is performed in an online manner simultaneously amongst all participants, allowing for training on actual data that may not have been present in a traditionally collected dataset and for dynamically adapting the system whilst it is being trained. 2) Training of an IoMT system in a fully private manner so as to mitigate the confidentiality issues of medical data and to build robust, and potentially bespoke, models where not much, if any, data exists. 3) Distribution of the actual network training, something federated learning itself does not do, to allow hospitals, for example, to utilize their spare computing resources to train network models.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 258, + "label": 27, + "text": "Title: Digital twin in virtual reality for human-vehicle interactions in the context of autonomous driving\nAbstract: This paper presents the results of tests of interactions between real humans and simulated vehicles in a virtual scenario. Human activity is inserted into the virtual world via a virtual reality interface for pedestrians. The autonomous vehicle is equipped with a virtual Human-Machine Interface (HMI) and drives through the digital twin of a real crosswalk. The HMI was combined with gentle and aggressive braking maneuvers when the pedestrian intended to cross. The results of the interactions were obtained through questionnaires and measurable variables such as the distance to the vehicle when the pedestrian initiated the crossing action.
The questionnaires show that pedestrians feel safer whenever the HMI is activated and that varying the braking maneuver does not influence their perception of danger as much, while the measurable variables show that both HMI activation and the gentle braking maneuver cause the pedestrian to cross earlier.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 259, + "label": 6, + "text": "Title: ChameleonControl: Teleoperating Real Human Surrogates through Mixed Reality Gestural Guidance for Remote Hands-on Classrooms\nAbstract: We present ChameleonControl, a real-human teleoperation system for scalable remote instruction in hands-on classrooms. In contrast to existing video or AR/VR-based remote hands-on education, ChameleonControl uses a real human as a surrogate of a remote instructor. Building on existing human-based telepresence approaches, we contribute a novel method to teleoperate a human surrogate through synchronized mixed reality hand gestural navigation and verbal communication. By overlaying the remote instructor\u2019s virtual hands in the local user\u2019s MR view, the remote instructor can guide and control the local user as if they were physically present. This allows the local user/surrogate to synchronize their hand movements and gestures with the remote instructor, effectively teleoperating a real human. We deploy and evaluate our system in classrooms of physiotherapy training, as well as other application domains such as mechanical assembly, sign language and cooking lessons. The study results confirm that our approach can increase engagement and the sense of co-presence, showing potential for the future of remote hands-on classrooms.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 260, + "label": 16, + "text": "Title: Chasing Consistency in Text-to-3D Generation from a Single Image\nAbstract: Text-to-3D generation from a single-view image is a popular but challenging task in 3D vision. Although numerous methods have been proposed, existing works still suffer from inconsistency issues, including 1) semantic inconsistency, 2) geometric inconsistency, and 3) saturation inconsistency, resulting in distorted, overfitted, and over-saturated generations. In light of the above issues, we present Consist3D, a three-stage framework Chasing for semantic-, geometric-, and saturation-Consistent Text-to-3D generation from a single image, in which the first two stages aim to learn parameterized consistency tokens, and the last stage is for optimization. Specifically, the semantic encoding stage learns a token independent of views and estimations, promoting semantic consistency and robustness. Meanwhile, the geometric encoding stage learns another token with comprehensive geometry and reconstruction constraints under novel-view estimations, reducing overfitting and encouraging geometric consistency. Finally, the optimization stage benefits from the semantic and geometric tokens, allowing a low classifier-free guidance scale and therefore preventing oversaturation. Experimental results demonstrate that Consist3D produces more consistent, faithful, and photo-realistic 3D assets compared to previous state-of-the-art methods.
Furthermore, Consist3D also allows background and object editing through text prompts.", + "neighbors": [ + 436, + 955, + 1125, + 1173, + 1902, + 2205 + ], + "mask": "Validation" + }, + { + "node_id": 261, + "label": 27, + "text": "Title: Collaborative Trolley Transportation System with Autonomous Nonholonomic Robots\nAbstract: Cooperative object transportation using multiple robots has been intensively studied in the control and robotics literature, but most approaches are either only applicable to omnidirectional robots or lack a complete navigation and decision-making framework that operates in real time. This paper presents an autonomous nonholonomic multi-robot system and an end-to-end hierarchical autonomy framework for collaborative luggage trolley transportation. This framework finds kinematically feasible paths, computes online motion plans, and provides feedback that enables the multi-robot system to handle long lines of luggage trolleys and navigate obstacles and pedestrians while dealing with multiple inherently complex and coupled constraints. We demonstrate the designed collaborative trolley transportation system through practical transportation tasks, and the experiment results reveal its effectiveness and reliability in complex and dynamic environments.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 262, + "label": 24, + "text": "Title: AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices\nAbstract: In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design model shrinking to support local model training with elastic computation cost, and gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is conducted in an element-wise manner to improve the model performance. Focusing on AnycostFL, we further propose an optimization design to minimize the global training loss with personalized latency and energy constraints. By revealing the theoretical insights of the convergence analysis, personalized training strategies are deduced for different devices to match their locally available resources. Experiment results indicate that, when compared to the state-of-the-art efficient FL algorithms, our learning framework can reduce the training latency and energy consumption by up to 1.9 times while realizing a reasonable global testing accuracy. Moreover, the results also demonstrate that our approach significantly improves the converged global accuracy.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 263, + "label": 24, + "text": "Title: simpleKT: A Simple But Tough-to-Beat Baseline for Knowledge Tracing\nAbstract: Knowledge tracing (KT) is the problem of predicting students' future performance based on their historical interactions with intelligent tutoring systems. Recently, many works have presented specialized methods for applying deep neural networks to KT from different perspectives, such as model architecture and adversarial augmentation, which make the overall algorithm and system increasingly complex.
Furthermore, due to the lack of a standardized evaluation protocol \\citep{liu2022pykt}, there are no widely agreed-upon KT baselines, and published experimental comparisons become inconsistent and self-contradictory, e.g., the reported AUC scores of DKT on ASSISTments2009 range from 0.721 to 0.821 \\citep{minn2018deep,yeung2018addressing}. Therefore, in this paper, we provide a strong but simple baseline method, named \\textsc{simpleKT}, to deal with the KT task. Inspired by the Rasch model in psychometrics, we explicitly model question-specific variations to capture the individual differences among questions covering the same set of knowledge components, which generalize the concepts or skills needed for learners to accomplish steps in a task or a problem. Furthermore, instead of using sophisticated representations to capture student forgetting behaviors, we use the ordinary dot-product attention function to extract the time-aware information embedded in the student learning interactions. Extensive experiments show that such a simple baseline is able to always rank in the top 3 in terms of AUC scores and achieve 57 wins, 3 ties and 16 losses against 12 DLKT baseline methods on 7 public datasets of different domains. We believe this work serves as a strong baseline for future KT research. Code is available at \\url{https://github.com/pykt-team/pykt-toolkit}\\footnote{We merged our model into the \\textsc{pyKT} benchmark at \\url{https://pykt.org/}.}.", + "neighbors": [ + 124 + ], + "mask": "Train" + }, + { + "node_id": 264, + "label": 27, + "text": "Title: On Second-Order Derivatives of Rigid-Body Dynamics: Theory & Implementation\nAbstract: Model-based control for robots has increasingly been dependent on optimization-based methods like Differential Dynamic Programming and iterative LQR (iLQR). These methods can form the basis of Model-Predictive Control (MPC), which is commonly used for controlling legged robots. Computing the partial derivatives of the dynamics is often the most expensive part of these algorithms, regardless of whether analytical methods, Finite Difference, Automatic Differentiation (AD), or Chain-Rule accumulation is used. Since the second-order derivatives of dynamics result in tensor computations, they are often ignored, leading to the use of iLQR instead of the full second-order DDP method. In this paper, we present analytical methods to compute the second-order derivatives of inverse and forward dynamics for open-chain rigid-body systems with multi-DoF joints and fixed/floating bases. An extensive comparison of accuracy and run-time performance with AD and other methods is provided, including the consideration of code-generation techniques in C/C++ to speed up the computations. For the 36 DoF ATLAS humanoid, the second-order inverse and forward dynamics derivatives take approximately 200 \u03bcs and 2.1 ms, respectively, resulting in a 3x speedup over the AD approach.", + "neighbors": [ + 239 + ], + "mask": "Test" + }, + { + "node_id": 265, + "label": 2, + "text": "Title: Decentralized Stream Runtime Verification for Timed Asynchronous Networks\nAbstract: Problem: We study the problem of monitoring distributed systems such as smart buildings, ambient living, wide area networks and other distributed systems that get monitored periodically in human-scale times. In these systems computers communicate using message passing and share an almost synchronized clock.
This is a realistic scenario for networks where the speed of the monitoring is sufficiently slow (like seconds or tens of seconds) to permit efficient clock synchronization, where clock deviations are small compared to the time precision and frequency required by the monitoring. Solution: More concretely, we propose a solution to monitor decentralized systems where monitors are expressed as stream runtime verification specifications. We solve the problem for \u201ctimed asynchronous networks\u201d, where the computational nodes on which the monitors run have a synchronized clock with a small bounded maximum drift. These nodes communicate using a network, where messages can take arbitrarily long but cannot be duplicated or lost. This setting is common in many cyber-physical systems like smart buildings and ambient living. This assumption generalizes the synchronous monitoring case. Previous approaches to decentralized monitoring were limited to synchronous networks, which are not easily implemented in practice because of network failures. Even when network failures are unusual, they can require several monitoring cycles to be repaired. Methodology: We formally describe the monitoring problem for timed asynchronous networks, present a decentralized algorithm, and provide proofs of its correctness. Afterwards, we formally analyze the complexity of our solutions and provide two analysis techniques to approximate the memory requirements. Finally, we implement the algorithm and perform an empirical evaluation with real data extracted from four different datasets. Contributions: We propose a solution to the timed asynchronous decentralized monitoring problem. We study the specifications and conditions on the network behavior that allow the monitoring to take place with bounded resources, independently of the trace length. Finally, we report the results of an empirical evaluation of an implementation and verify the theoretical results in terms of effectiveness and efficiency.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 266, + "label": 28, + "text": "Title: Secure Communication for Spatially Correlated RIS-Aided Multiuser Massive MIMO Systems: Analysis and Optimization\nAbstract: This letter investigates secure communication in a reconfigurable intelligent surface (RIS)-aided multiuser massive multiple-input multiple-output (MIMO) system exploiting artificial noise (AN). We first derive a closed-form expression of the ergodic secrecy rate under spatially correlated MIMO channels. By using this derived result, we further optimize the power fraction of AN in closed form and the RIS phase shifts by developing a gradient-based algorithm, which requires only statistical channel state information (CSI). Our analysis shows that spatial correlation at the RIS provides an additional dimension for optimizing the RIS phase shifts. Numerical simulations validate the analytical results which show the insightful interplay among the system parameters and the degradation of secrecy performance due to high spatial correlation at the RIS.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 267, + "label": 24, + "text": "Title: Mutual Information Regularization for Vertical Federated Learning\nAbstract: Vertical Federated Learning (VFL) is widely utilized in real-world applications to enable collaborative learning while protecting data privacy and safety.
However, previous works show that parties without labels (passive parties) in VFL can infer the sensitive label information owned by the party with labels (active party) or execute backdoor attacks on VFL. Meanwhile, the active party can also infer sensitive feature information from the passive party. All these pose new privacy and security challenges to VFL systems. We propose a new general defense method which limits the mutual information between private raw data, including both features and labels, and intermediate outputs to achieve a better trade-off between model utility and privacy. We term this defense Mutual Information Regularization Defense (MID). We theoretically and experimentally verify the effectiveness of our MID method in defending against existing attacks in VFL, including label inference attacks, backdoor attacks and feature reconstruction attacks.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 268, + "label": 30, + "text": "Title: WizardCoder: Empowering Code Large Language Models with Evol-Instruct\nAbstract: Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning, by adapting the Evol-Instruct method to the domain of code. Through comprehensive experiments on four prominent code generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we unveil the exceptional capabilities of our model. It surpasses all other open-source Code LLMs by a substantial margin. Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+. Our code, model weights, and data are public at https://github.com/nlpxucan/WizardLM", + "neighbors": [ + 855, + 1052, + 1112, + 1114, + 1171, + 1249, + 1515, + 1606, + 1735, + 1840, + 1863, + 1879, + 1907, + 1950 + ], + "mask": "Train" + }, + { + "node_id": 269, + "label": 16, + "text": "Title: From Text to Mask: Localizing Entities Using the Attention of Text-to-Image Diffusion Models\nAbstract: Diffusion models have recently revolutionized the field of text-to-image generation. The unique way of fusing text and image information contributes to their remarkable capability of generating highly text-related images. From another perspective, these generative models imply clues about the precise correlation between words and pixels. In this work, a simple but effective method is proposed to utilize the attention mechanism in the denoising network of text-to-image diffusion models. Without re-training or inference-time optimization, the semantic grounding of phrases can be attained directly. We evaluate our method on Pascal VOC 2012 and Microsoft COCO 2014 under the weakly-supervised semantic segmentation setting, and our method achieves superior performance to prior methods. In addition, the acquired word-pixel correlation is found to be generalizable for the learned text embedding of customized generation methods, requiring only a few modifications. To validate our discovery, we introduce a new practical task called \"personalized referring image segmentation\" with a new dataset. Experiments in various situations demonstrate the advantages of our method compared to strong baselines on this task.
In summary, our work reveals a novel way to extract the rich multi-modal knowledge hidden in diffusion models for segmentation.", + "neighbors": [ + 1262, + 2009, + 2186 + ], + "mask": "Validation" + }, + { + "node_id": 270, + "label": 6, + "text": "Title: Deimos: A Grammar of Dynamic Embodied Immersive Visualisation Morphs and Transitions\nAbstract: We present Deimos, a grammar for specifying dynamic embodied immersive visualisation morphs and transitions. A morph is a collection of animated transitions that are dynamically applied to immersive visualisations at runtime and is conceptually modelled as a state machine. It comprises state, transition, and signal specifications. States in a morph are used to generate animation keyframes, with transitions connecting two states together. A transition is controlled by signals, which are composable data streams that can be used to enable embodied interaction techniques. Morphs allow immersive representations of data to transform and change shape through user interaction, facilitating the embodied cognition process. We demonstrate the expressivity of Deimos in an example gallery and evaluate its usability in an expert user study of six immersive analytics researchers. Participants found the grammar to be powerful and expressive, and showed interest in drawing upon Deimos\u2019 concepts and ideas in their own research.", + "neighbors": [ + 760, + 1313 + ], + "mask": "Train" + }, + { + "node_id": 271, + "label": 24, + "text": "Title: NoiseCAM: Explainable AI for the Boundary Between Noise and Adversarial Attacks\nAbstract: Deep Learning (DL) and Deep Neural Networks (DNNs) are widely used in various domains. However, adversarial attacks can easily mislead a neural network and lead to wrong decisions. Defense mechanisms are highly preferred in safety-critical applications. In this paper, firstly, we use the gradient class activation map (GradCAM) to analyze the behavior deviation of the VGG-16 network when its inputs are mixed with adversarial perturbation or Gaussian noise. In particular, our method can locate vulnerable layers that are sensitive to adversarial perturbation and Gaussian noise. We also show that the behavior deviation of vulnerable layers can be used to detect adversarial examples. Secondly, we propose a novel NoiseCAM algorithm that integrates information from globally and pixel-level weighted class activation maps. Our algorithm is sensitive to adversarial perturbations and will not respond to Gaussian random noise mixed into the inputs. Third, we compare detecting adversarial examples using both behavior deviation and NoiseCAM, and we show that NoiseCAM outperforms behavior deviation modeling in its overall performance. Our work could provide a useful tool to defend against certain adversarial attacks on deep neural networks.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 272, + "label": 24, + "text": "Title: An Adaptive Optimization Approach to Personalized Financial Incentives in Mobile Behavioral Weight Loss Interventions\nAbstract: Obesity is a critical healthcare issue affecting the United States. The least risky treatments available for obesity are behavioral interventions meant to promote diet and exercise. Often these interventions contain a mobile component that allows interventionists to collect participant-level data and provide participants with incentives and goals to promote long-term behavioral change.
Recently, there has been interest in using direct financial incentives to promote behavior change. However, adherence is challenging in these interventions, as each participant will react differently to different incentive structures and amounts, leading researchers to consider personalized interventions. The key challenge for personalization is that clinicians do not know a priori how best to administer incentives to participants, nor, given finite intervention budgets, how to disburse costly resources efficiently. In this paper, we consider this challenge of designing personalized weight loss interventions that use direct financial incentives to motivate weight loss while remaining within a budget. We create a machine learning approach that is able to predict how individuals may react to different incentive schedules within the context of a behavioral intervention. We use this predictive model in an adaptive framework that, over the course of the intervention, computes what incentives to disburse to participants while remaining within the study budget. We provide theoretical guarantees for our modeling and optimization approaches and demonstrate their performance in a simulated weight loss study. Our results highlight the cost efficiency and effectiveness of our personalized intervention design for weight loss.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 273, + "label": 16, + "text": "Title: Training-Free Layout Control with Cross-Attention Guidance\nAbstract: Recent diffusion-based generators can produce high-quality images based only on textual prompts. However, they do not correctly interpret instructions that specify the spatial layout of the composition. We propose a simple approach that can achieve robust layout control without requiring training or fine-tuning the image generator. Our technique, which we call layout guidance, manipulates the cross-attention layers that the model uses to interface textual and visual information and steers the reconstruction in the desired direction given, e.g., a user-specified layout. In order to determine how to best guide attention, we study the role of different attention maps when generating images and experiment with two alternative strategies, forward and backward guidance. We evaluate our method quantitatively and qualitatively with several experiments, validating its effectiveness. We further demonstrate its versatility by extending layout guidance to the task of editing the layout and context of a given real image.", + "neighbors": [ + 706, + 908, + 955, + 1079, + 1902, + 2161, + 2277, + 2306 + ], + "mask": "Validation" + }, + { + "node_id": 274, + "label": 34, + "text": "Title: Learning-Augmented Online TSP on Rings, Trees, Flowers and (almost) Everywhere Else\nAbstract: We study the Online Traveling Salesperson Problem (OLTSP) with predictions. In OLTSP, a sequence of initially unknown requests arrives over time at points (locations) of a metric space. The goal is, starting from a particular point of the metric space (the origin), to serve all these requests while minimizing the total time spent. The server moves with unit speed or is \"waiting\" (zero speed) at some location. We consider two variants: in the open variant, the goal is achieved when the last request is served. In the closed one, the server additionally has to return to the origin.
We adopt a prediction model, introduced for OLTSP on the line, in which the predictions correspond to the locations of the requests and extend it to more general metric spaces. We first propose an oracle-based algorithmic framework, inspired by previous work. This framework allows us to design online algorithms for general metric spaces that provide competitive ratio guarantees which, given perfect predictions, beat the best possible classical guarantee (consistency). Moreover, they degrade gracefully along with the increase in error (smoothness), but always within a constant factor of the best known competitive ratio in the classical case (robustness). Having reduced the problem to designing suitable efficient oracles, we describe how to achieve this for general metric spaces as well as specific metric spaces (rings, trees and flowers), the resulting algorithms being tractable in the latter case. The consistency guarantees of our algorithms are tight in almost all cases, and their smoothness guarantees only suffer a linear dependency on the error, which we show is necessary. Finally, we provide robustness guarantees improving previous results.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 275, + "label": 24, + "text": "Title: An active inference model of car following: Advantages and applications\nAbstract: Driver process models play a central role in the testing, verification, and development of automated and autonomous vehicle technologies. Prior models developed from control theory and physics-based rules are limited in automated vehicle applications due to their restricted behavioral repertoire. Data-driven machine learning models are more capable than rule-based models but are limited by the need for large training datasets and their lack of interpretability, i.e., an understandable link between input data and output behaviors. We propose a novel car following modeling approach using active inference, which has comparable behavioral flexibility to data-driven models while maintaining interpretability. We assessed the proposed model, the Active Inference Driving Agent (AIDA), through a benchmark analysis against the rule-based Intelligent Driver Model, and two neural network Behavior Cloning models. The models were trained and tested on a real-world driving dataset using a consistent process. The testing results showed that the AIDA predicted driving controls significantly better than the rule-based Intelligent Driver Model and had similar accuracy to the data-driven neural network models in three out of four evaluations. Subsequent interpretability analyses illustrated that the AIDA's learned distributions were consistent with driver behavior theory and that visualizations of the distributions could be used to directly comprehend the model's decision making process and correct model errors attributable to limited training data. The results indicate that the AIDA is a promising alternative to black-box data-driven models and suggest a need for further research focused on modeling driving style and model training with more diverse datasets.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 276, + "label": 24, + "text": "Title: Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits\nAbstract: The study of collaborative multi-agent bandits has attracted significant attention recently. 
In light of this, we initiate the study of a new collaborative setting, consisting of $N$ agents such that each agent is learning one of $M$ stochastic multi-armed bandits to minimize their group cumulative regret. We develop decentralized algorithms which facilitate collaboration between the agents under two scenarios. We characterize the performance of these algorithms by deriving the per agent cumulative regret and group regret upper bounds. We also prove lower bounds for the group regret in this setting, which demonstrates the near-optimal behavior of the proposed algorithms.", + "neighbors": [ + 1982 + ], + "mask": "Train" + }, + { + "node_id": 277, + "label": 16, + "text": "Title: ASPIRE: Language-Guided Augmentation for Robust Image Classification\nAbstract: Neural image classifiers can often learn to make predictions by overly relying on non-predictive features that are spuriously correlated with the class labels in the training data. This leads to poor performance in real-world atypical scenarios where such features are absent. Supplementing the training dataset with images without such spurious features can aid robust learning against spurious correlations via better generalization. This paper presents ASPIRE (Language-guided data Augmentation for SPurIous correlation REmoval), a simple yet effective solution for expanding the training dataset with synthetic images without spurious features. ASPIRE, guided by language, generates these images without requiring any form of additional supervision or existing examples. Precisely, we employ LLMs to first extract foreground and background features from textual descriptions of an image, followed by advanced language-guided image editing to discover the features that are spuriously correlated with the class label. Finally, we personalize a text-to-image generation model to generate diverse in-domain images without spurious features. We demonstrate the effectiveness of ASPIRE on 4 datasets, including the very challenging Hard ImageNet dataset, and 9 baselines and show that ASPIRE improves the classification accuracy of prior methods by 1% - 38%. Code soon at: https://github.com/Sreyan88/ASPIRE.", + "neighbors": [ + 2235 + ], + "mask": "Train" + }, + { + "node_id": 278, + "label": 24, + "text": "Title: Doubly Robust Counterfactual Classification\nAbstract: We study counterfactual classification as a new tool for decision-making under hypothetical (contrary to fact) scenarios. We propose a doubly-robust nonparametric estimator for a general counterfactual classifier, where we can incorporate flexible constraints by casting the classification problem as a nonlinear mathematical program involving counterfactuals. We go on to analyze the rates of convergence of the estimator and provide a closed-form expression for its asymptotic distribution. Our analysis shows that the proposed estimator is robust against nuisance model misspecification, and can attain fast $\\sqrt{n}$ rates with tractable inference even when using nonparametric machine learning approaches. We study the empirical performance of our methods by simulation and apply them for recidivism risk prediction.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 279, + "label": 15, + "text": "Title: CUDA-PIM: End-to-End Integration of Digital Processing-in-Memory from High-Level C++ to Microarchitectural Design\nAbstract: Digital processing-in-memory (PIM) architectures mitigate the memory wall problem by facilitating parallel bitwise operations directly within memory. 
Recent works have demonstrated their algorithmic potential for accelerating data-intensive applications; however, there remains a significant gap in the programming model and microarchitectural design. This is further exacerbated by the emerging model of partitions, which significantly complicates control and periphery. Therefore, inspired by NVIDIA CUDA, this paper provides an end-to-end architectural integration of digital memristive PIM from an abstract high-level C++ programming interface for vector operations to the low-level microarchitecture. We begin by proposing an efficient microarchitecture and instruction set architecture (ISA) that bridge the gap between the low-level control periphery and an abstraction of PIM parallelism into warps and threads. We subsequently propose a PIM compilation library that converts high-level C++ to ISA instructions, and a PIM driver that translates ISA instructions into PIM micro-operations. This drastically simplifies the development of PIM applications and enables PIM integration within larger existing C++ CPU/GPU programs for heterogeneous computing with significant ease. Lastly, we present an efficient GPU-accelerated simulator for the proposed PIM microarchitecture. Although slower than a theoretical PIM chip, this simulator provides an accessible platform for developers to start executing and debugging PIM algorithms. To validate our approach, we implement state-of-the-art matrix operations and FFT PIM-based algorithms as case studies. These examples demonstrate drastically simplified development without compromising performance, showing the potential and significance of CUDA-PIM.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 280, + "label": 31, + "text": "Title: Information Retrieval: Recent Advances and Beyond\nAbstract: This paper provides an extensive and thorough overview of the models and techniques utilized in the first and second stages of the typical information retrieval processing chain. Our discussion encompasses the current state-of-the-art models, covering a wide range of methods and approaches in the field of information retrieval. We delve into the historical development of these models, analyze the key advancements and breakthroughs, and address the challenges and limitations faced by researchers and practitioners in the domain. By offering a comprehensive understanding of the field, this survey is a valuable resource for researchers, practitioners, and newcomers to the information retrieval domain, fostering knowledge growth, innovation, and the development of novel ideas and techniques.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 281, + "label": 16, + "text": "Title: Complementary Pseudo Multimodal Feature for Point Cloud Anomaly Detection\nAbstract: Point cloud (PCD) anomaly detection is steadily emerging as a promising research area. This study aims to improve PCD anomaly detection performance by combining handcrafted PCD descriptors with powerful pre-trained 2D neural networks. To this end, this study proposes the Complementary Pseudo Multimodal Feature (CPMF) that incorporates local geometrical information in the 3D modality using handcrafted PCD descriptors and global semantic information in the generated pseudo 2D modality using pre-trained 2D neural networks. For global semantics extraction, CPMF projects the original PCD into a pseudo 2D modality containing multi-view images. These images are delivered to pre-trained 2D neural networks for informative 2D modality feature extraction.
The 3D and 2D modality features are aggregated to obtain the CPMF for PCD anomaly detection. Extensive experiments demonstrate the complementary capacity between 2D and 3D modality features and the effectiveness of CPMF, with 95.15% image-level AU-ROC and 92.93% pixel-level PRO on the MVTec3D benchmark. Code is available at https://github.com/caoyunkang/CPMF.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 282, + "label": 16, + "text": "Title: Causalainer: Causal Explainer for Automatic Video Summarization\nAbstract: The goal of video summarization is to automatically shorten videos such that they convey the overall story without losing relevant information. In many application scenarios, improper video summarization can have a large impact. For example, in forensics, the quality of the generated video summary will affect an investigator\u2019s judgment, while in journalism it might yield undesired bias. Because of this, modeling explainability is a key concern. One of the best ways to address the explainability challenge is to uncover the causal relations that steer the process and lead to the result. Current machine learning-based video summarization algorithms learn optimal parameters but do not uncover causal relationships. Hence, they suffer from a relative lack of explainability. In this work, a Causal Explainer, dubbed Causalainer, is proposed to address this issue. Multiple meaningful random variables and their joint distributions are introduced to characterize the behaviors of key components in the problem of video summarization. In addition, helper distributions are introduced to enhance the effectiveness of model training. In visual-textual input scenarios, the extra input can decrease the model performance. A causal semantics extractor is designed to tackle this issue by effectively distilling the mutual information from the visual and textual inputs. Experimental results on commonly used benchmarks demonstrate that the proposed method achieves state-of-the-art performance while being more explainable.", + "neighbors": [ + 1951, + 2266 + ], + "mask": "Train" + }, + { + "node_id": 283, + "label": 13, + "text": "Title: Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design\nAbstract: Sparse and event-driven spiking neural network (SNN) algorithms are the ideal candidate solution for energy-efficient edge computing. Yet, with the growing complexity of SNN algorithms, it isn't easy to properly benchmark and optimize their computational cost without hardware in the loop. Although digital neuromorphic processors have been widely adopted to benchmark SNN algorithms, their black-box nature is problematic for algorithm-hardware co-optimization. In this work, we open the black box of the digital neuromorphic processor for algorithm designers by presenting the neuron processing instruction set and detailed energy consumption of the SENeCA neuromorphic architecture. For convenient benchmarking and optimization, we provide the energy cost of the essential neuromorphic components in SENeCA, including neuron models and learning rules. Moreover, we exploit SENeCA's hierarchical memory and exhibit an advantage over existing neuromorphic processors. We show the energy efficiency of SNN algorithms for video processing and online learning, and demonstrate the potential of our work for optimizing algorithm designs.
Overall, we present a practical approach to enable algorithm designers to accurately benchmark SNN algorithms and pave the way towards effective algorithm-hardware co-design.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 284, + "label": 24, + "text": "Title: A Tale of Two Approximations: Tightening Over-Approximation for DNN Robustness Verification via Under-Approximation\nAbstract: The robustness of deep neural networks (DNNs) is crucial to the hosting system\u2019s reliability and security. Formal verification has been demonstrated to be effective in providing provable robustness guarantees. To improve its scalability, over-approximating the non-linear activation functions in DNNs by linear constraints has been widely adopted, which transforms the verification problem into an efficiently solvable linear programming problem. Many efforts have been dedicated to defining the so-called tightest approximations to reduce overestimation imposed by over-approximation. In this paper, we study existing approaches and identify a dominant factor in defining tight approximation, namely the approximation domain of the activation function. We find out that tight approximations defined on approximation domains may not be as tight as the ones on their actual domains, yet existing approaches all rely only on approximation domains. Based on this observation, we propose a novel dual-approximation approach to tighten overapproximations, leveraging an activation function\u2019s underestimated domain to define tight approximation bounds. We implement our approach with two complementary algorithms based respectively on Monte Carlo simulation and gradient descent into a tool called DualApp. We assess it on a comprehensive benchmark of DNNs with different architectures. Our experimental results show that DualApp significantly outperforms the state-of-the-art approaches with 100% \u2212 1000% improvement on the verified robustness ratio and 10.64% on average (up to 66.53%) on the certified lower bound.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 285, + "label": 27, + "text": "Title: Distributed Optimization Methods for Multi-Robot Systems: Part I - A Tutorial\nAbstract: Distributed optimization provides a framework for deriving distributed algorithms for a variety of multi-robot problems. This tutorial constitutes the first part of a two-part series on distributed optimization applied to multi-robot problems, which seeks to advance the application of distributed optimization in robotics. In this tutorial, we demonstrate that many canonical multi-robot problems can be cast within the distributed optimization framework, such as multi-robot simultaneous localization and planning (SLAM), multi-robot target tracking, and multi-robot task assignment problems. We identify three broad categories of distributed optimization algorithms: distributed first-order methods, distributed sequential convex programming, and the alternating direction method of multipliers (ADMM). We describe the basic structure of each category and provide representative algorithms within each category. We then work through a simulation case study of multiple drones collaboratively tracking a ground vehicle. We compare solutions to this problem using a number of different distributed optimization algorithms. 
In addition, we implement a distributed optimization algorithm in hardware on a network of Raspberry Pis communicating with XBee modules to illustrate robustness to the challenges of real-world communication networks.", + "neighbors": [ + 539, + 2164 + ], + "mask": "Train" + }, + { + "node_id": 286, + "label": 16, + "text": "Title: Watch Your Steps: Local Image and Scene Editing by Text Instructions\nAbstract: Denoising diffusion models have enabled high-quality image generation and editing. We present a method to localize the desired edit region implicit in a text instruction. We leverage InstructPix2Pix (IP2P) and identify the discrepancy between IP2P predictions with and without the instruction. This discrepancy is referred to as the relevance map. The relevance map conveys the importance of changing each pixel to achieve the edits, and is used to guide the modifications. This guidance ensures that the irrelevant pixels remain unchanged. Relevance maps are further used to enhance the quality of text-guided editing of 3D scenes in the form of neural radiance fields. A field is trained on relevance maps of training views, denoted as the relevance field, defining the 3D region within which modifications should be made. We perform iterative updates on the training views guided by rendered relevance maps from the relevance field. Our method achieves state-of-the-art performance on both image and NeRF editing tasks. Project page: https://ashmrz.github.io/WatchYourSteps/", + "neighbors": [ + 48, + 1125, + 1355 + ], + "mask": "Validation" + }, + { + "node_id": 287, + "label": 24, + "text": "Title: Adversarial Attacks on Adversarial Bandits\nAbstract: We study a security threat to adversarial multi-armed bandits, in which an attacker perturbs the loss or reward signal to control the behavior of the victim bandit player. We show that the attacker is able to mislead any no-regret adversarial bandit algorithm into selecting a suboptimal target arm in all but a sublinear number of rounds (i.e., in T-o(T) of the T rounds), while incurring only sublinear (o(T)) cumulative attack cost. This result implies a critical security concern in real-world bandit-based systems, e.g., in online recommendation, an attacker might be able to hijack the recommender system and promote a desired product. Our proposed attack algorithms require knowledge of only the regret rate, thus are agnostic to the concrete bandit algorithm employed by the victim player. We also derive a theoretical lower bound on the cumulative attack cost that any victim-agnostic attack algorithm must incur. The lower bound matches the upper bound achieved by our attack, which shows that our attack is asymptotically optimal.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 288, + "label": 16, + "text": "Title: Deep OC-SORT: Multi-Pedestrian Tracking by Adaptive Re-Identification\nAbstract: Motion-based association for Multi-Object Tracking (MOT) has recently re-achieved prominence with the rise of powerful object detectors. Despite this, little work has been done to incorporate appearance cues beyond simple heuristic models that lack robustness to feature degradation. In this paper, we propose a novel way to leverage objects' appearances to adaptively integrate appearance matching into existing high-performance motion-based methods. Building upon the pure motion-based method OC-SORT, we achieve 1st place on MOT20 and 2nd place on MOT17 with 63.9 and 64.9 HOTA, respectively.
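A hedged sketch of the general idea of blending an appearance cue into a motion-cost assignment (not Deep OC-SORT's exact adaptive weighting, which the paper defines more carefully); the blending weight lam is an assumed hyperparameter.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def fused_assignment(iou, track_emb, det_emb, lam=0.25):
        """Match tracks to detections using motion (IoU) plus appearance cost."""
        t = track_emb / np.linalg.norm(track_emb, axis=1, keepdims=True)
        d = det_emb / np.linalg.norm(det_emb, axis=1, keepdims=True)
        app_cost = 1.0 - t @ d.T                # cosine distance of re-ID features
        cost = (1.0 - iou) + lam * app_cost     # fused assignment cost
        return list(zip(*linear_sum_assignment(cost)))

    iou = np.array([[0.8, 0.1], [0.2, 0.7]])
    rng = np.random.default_rng(0)
    print(fused_assignment(iou, rng.normal(size=(2, 128)), rng.normal(size=(2, 128))))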
We also achieve 61.3 HOTA on the challenging DanceTrack benchmark, setting a new state of the art even compared to more heavily designed methods. The code and models are available at \\url{https://github.com/GerardMaggiolino/Deep-OC-SORT}.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 289, + "label": 37, + "text": "Title: Lero: A Learning-to-Rank Query Optimizer\nAbstract: \n A recent line of work applies machine learning techniques to assist or rebuild cost-based query optimizers in DBMS. While exhibiting superiority in some benchmarks, their deficiencies, e.g., unstable performance, high training cost, and slow model updating, stem from the inherent hardness of predicting the cost or latency of execution plans using machine learning models. In this paper, we introduce a\n learning-to-rank\n query optimizer, called Lero, which builds on top of a native query optimizer and continuously learns to improve the optimization performance. The key observation is that the relative order or\n rank\n of plans, rather than the exact cost or latency, is sufficient for query optimization. Lero employs a\n pairwise\n approach to train a classifier to compare any two plans and tell which one is better. Such a binary classification task is much easier than the regression task of predicting the cost or latency, in terms of model efficiency and accuracy. Rather than building a learned optimizer from scratch, Lero is designed to leverage decades of wisdom of databases and improve the native query optimizer. With its non-intrusive design, Lero can be implemented on top of any existing DBMS with minimal integration efforts. We implement Lero and demonstrate its outstanding performance using PostgreSQL. In our experiments, Lero achieves near-optimal performance on several benchmarks. It reduces the plan execution time of the native optimizer in PostgreSQL by up to 70% and other learned query optimizers by up to 37%. Meanwhile, Lero continuously learns and automatically adapts to query workloads and changes in data.\n", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 290, + "label": 4, + "text": "Title: A First Study of MEV on an Up-and-Coming Blockchain: Algorand\nAbstract: Maximal Extractable Value (MEV) significantly influences network incentives, consensus safety, and economic dynamics, and has been extensively studied within the Ethereum blockchain domain. However, MEV is not specific to Ethereum, and extends to other blockchain platforms with differing properties, such as Algorand. Algorand, a smart-contract-based blockchain employing a Byzantine-Fault Tolerant consensus mechanism and Pure-Proof-of-Stake, is characterized by a First-Come-First-Serve transaction ordering mechanism and minimal fixed transaction fees. This paper provides the first exploration of the MEV landscape on Algorand, focusing on arbitrage MEV patterns, key actors, their strategic preferences, transaction positioning strategies, and the influence of Algorand's network infrastructure on MEV searching. We observed 1,142,970 arbitrage cases, with a single searcher executing 653,001. Different searchers demonstrated diverse strategies, reflected in the varied distribution of profitable block positions. Nonetheless, the even spread of arbitrage positions across a block indicates an emphasis on immediate backrunning executions.
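The per-searcher block-position analysis just described reduces to a simple aggregation; a toy sketch in which the records and field names are invented:

    from collections import Counter

    # Each record: which searcher executed the arbitrage and where in the block
    # it landed (position as a fraction of block length). Values are invented.
    trades = [{"searcher": "A", "pos": 0.02}, {"searcher": "A", "pos": 0.95},
              {"searcher": "B", "pos": 0.50}, {"searcher": "A", "pos": 0.51}]

    def position_histogram(trades, bins=4):
        """Bucket each searcher's transactions into quarters of the block."""
        hist = {}
        for t in trades:
            bucket = min(int(t["pos"] * bins), bins - 1)
            hist.setdefault(t["searcher"], Counter())[bucket] += 1
        return hist

    print(position_histogram(trades))   # an even spread suggests backrunning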
Furthermore, we identified 265,637 instances of Batch Transaction Issuances, where an address occupied over 80% of a block with a single transaction type.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 291, + "label": 24, + "text": "Title: PerAda: Parameter-Efficient and Generalizable Federated Learning Personalization with Guarantees\nAbstract: Personalized Federated Learning (pFL) has emerged as a promising solution to tackle data heterogeneity across clients in FL. However, existing pFL methods either (1) introduce high communication and computation costs or (2) overfit to local data, which can be limited in scope, and are vulnerable to evolved test samples with natural shifts. In this paper, we propose PerAda, a parameter-efficient pFL framework that reduces communication and computational costs and exhibits superior generalization performance, especially under test-time distribution shifts. PerAda reduces the costs by leveraging the power of pretrained models and only updates and communicates a small number of additional parameters from adapters. PerAda has good generalization since it regularizes each client's personalized adapter with a global adapter, while the global adapter uses knowledge distillation to aggregate generalized information from all clients. Theoretically, we provide generalization bounds to explain why PerAda improves generalization, and we prove its convergence to stationary points under non-convex settings. Empirically, PerAda demonstrates competitive personalized performance (+4.85% on CheXpert) and enables better out-of-distribution generalization (+5.23% on CIFAR-10-C) on different datasets across natural and medical domains compared with baselines, while only updating 12.6% of parameters per model based on the adapter.", + "neighbors": [ + 487 + ], + "mask": "Train" + }, + { + "node_id": 292, + "label": 4, + "text": "Title: ODDFuzz: Discovering Java Deserialization Vulnerabilities via Structure-Aware Directed Greybox Fuzzing\nAbstract: Java deserialization vulnerability is a severe threat in practice. Researchers have proposed static analysis solutions to locate candidate vulnerabilities and fuzzing solutions to generate proof-of-concept (PoC) serialized objects to trigger them. However, existing solutions have limited effectiveness and efficiency. In this paper, we propose a novel hybrid solution ODDFuzz to efficiently discover Java deserialization vulnerabilities. First, ODDFuzz performs lightweight static taint analysis to identify candidate gadget chains that may cause deserialization vulnerabilities. In this step, ODDFuzz tries to locate all candidates and avoid false negatives. Then, ODDFuzz performs directed greybox fuzzing (DGF) to explore those candidates and generate PoC testcases to mitigate false positives. Specifically, ODDFuzz applies a structure-aware seed generation method to guarantee the validity of the testcases, and adopts a novel hybrid feedback and a step-forward strategy to guide the directed fuzzing. We implemented a prototype of ODDFuzz and evaluated it on the popular Java deserialization repository ysoserial. Results show that ODDFuzz could discover 16 out of 34 known gadget chains, while two state-of-the-art baselines only identify three of them.
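PerAda's regularization of each client's personalized adapter toward a global adapter (described in the PerAda entry above) can be sketched as a proximal term. A minimal PyTorch sketch where mu and the adapter structure are assumptions, and the server-side knowledge-distillation aggregation is omitted:

    import torch

    def personalized_loss(task_loss, local_adapter, global_adapter, mu=0.1):
        """Local objective: task loss + pull toward the (frozen) global adapter."""
        prox = sum((p - g.detach()).pow(2).sum()
                   for p, g in zip(local_adapter, global_adapter))
        return task_loss + 0.5 * mu * prox

    local = [torch.randn(8, 8, requires_grad=True)]   # client's adapter params
    glob = [torch.randn(8, 8)]                        # shared global adapter
    loss = personalized_loss(torch.tensor(1.0), local, glob)
    loss.backward()   # gradients flow only into the client's local adapter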
In addition, we evaluated ODDFuzz on real-world applications including Oracle WebLogic Server, Apache Dubbo, Sonatype Nexus, and protostuff, and found six previously unreported exploitable gadget chains with five CVEs assigned.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 293, + "label": 26, + "text": "Title: Non-Markovian paths and cycles in NFT trades\nAbstract: Recent years have witnessed the availability of richer and richer datasets in a variety of domains, where signals often have a multi-modal nature, blending temporal, relational and semantic information. Within this context, several works have shown that standard network models are sometimes not sufficient to properly capture the complexity of real-world interacting systems. For this reason, different attempts have been made to enrich the network language, leading to the emerging field of higher-order networks. In this work, we investigate the possibility of applying methods from higher-order networks to extract information from the online trade of Non-fungible tokens (NFTs), leveraging their intrinsic temporal and non-Markovian nature. While NFTs as a technology open up the realm of many exciting applications, their future is marred by challenges of proof of ownership, scams, wash trading, and possible money laundering. We demonstrate that by investigating time-respecting non-Markovian paths exhibited by NFT trades, we provide a practical path-based approach to fraud detection.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 294, + "label": 10, + "text": "Title: Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals\nAbstract: Counterfactual explanations are an increasingly popular form of post hoc explanation due to their (i) applicability across problem domains, (ii) proposed legal compliance (e.g., with GDPR), and (iii) reliance on the contrastive nature of human explanation. Although counterfactual explanations are normally used to explain individual predictive instances, we explore a novel use case in which groups of similar instances are explained in a collective fashion using ``group counterfactuals'' (e.g., to highlight a repeating pattern of illness in a group of patients). These group counterfactuals meet a human preference for coherent, broad explanations covering multiple events/instances. A novel, group-counterfactual algorithm is proposed to generate high-coverage explanations that are faithful to the to-be-explained model. This explanation strategy is also evaluated in a large, controlled user study (N=207), using objective (i.e., accuracy) and subjective (i.e., confidence, explanation satisfaction, and trust) psychological measures. The results show that group counterfactuals elicit modest but definite improvements in people's understanding of an AI system. The implications of these findings for counterfactual methods and for XAI are discussed.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 295, + "label": 4, + "text": "Title: Fast and Private Inference of Deep Neural Networks by Co-designing Activation Functions\nAbstract: Machine Learning as a Service (MLaaS) is an increasingly popular design where a company with abundant computing resources trains a deep neural network and offers query access for tasks like image classification. The challenge with this design is that MLaaS requires the client to reveal their potentially sensitive queries to the company hosting the model.
Multi-party computation (MPC) protects the client's data by allowing encrypted inferences. However, current approaches suffer from prohibitively large inference times. The inference time bottleneck in MPC is the evaluation of non-linear layers such as ReLU activation functions. Motivated by the success of previous work co-designing machine learning and MPC aspects, we develop an activation function co-design. We replace all ReLUs with a polynomial approximation and evaluate them with single-round MPC protocols, which give state-of-the-art inference times in wide-area networks. Furthermore, to address the accuracy issues previously encountered with polynomial activations, we propose a novel training algorithm that gives accuracy competitive with plaintext models. Our evaluation shows between $4$ and $90\times$ speedups in inference time on large models with up to $23$ million parameters while maintaining competitive inference accuracy.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 296, + "label": 24, + "text": "Title: Mitigating Semantic Confusion from Hostile Neighborhood for Graph Active Learning\nAbstract: Graph Active Learning (GAL), which aims to find the most informative nodes in graphs for annotation to maximize the Graph Neural Networks (GNNs) performance, has attracted many research efforts but still faces non-trivial challenges. One major challenge is that existing GAL strategies may introduce semantic confusion to the selected training set, particularly when graphs are noisy. Specifically, most existing methods assume all aggregating features to be helpful, ignoring the semantically negative effect between inter-class edges under the message-passing mechanism. In this work, we present Semantic-aware Active learning framework for Graphs (SAG) to mitigate the semantic confusion problem. Pairwise similarities and dissimilarities of nodes with semantic features are introduced to jointly evaluate the node influence. A new prototype-based criterion and query policy are also designed to maintain diversity and class balance of the selected nodes, respectively. Extensive experiments on the public benchmark graphs and a real-world financial dataset demonstrate that SAG significantly improves node classification performances and consistently outperforms previous methods. Moreover, comprehensive analysis and ablation study also verify the effectiveness of the proposed framework.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 297, + "label": 16, + "text": "Title: DeepSegmenter: Temporal Action Localization for Detecting Anomalies in Untrimmed Naturalistic Driving Videos\nAbstract: Identifying unusual driving behaviors exhibited by drivers during driving is essential for understanding driver behavior and the underlying causes of crashes. Previous studies have primarily approached this problem as a classification task, assuming that naturalistic driving videos come discretized. However, both activity segmentation and classification are required for this task due to the continuous nature of naturalistic driving videos. The current study therefore departs from conventional approaches and introduces a novel methodological framework, DeepSegmenter, that simultaneously performs activity segmentation and classification in a single framework. The proposed framework consists of four major modules, namely the Data Module, Activity Segmentation Module, Classification Module, and Postprocessing Module.
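A skeletal sketch of that four-module flow; all behaviors below are trivial placeholders, not the actual DeepSegmenter implementation.

    def data_module(video, win=16):
        """Data Module: cut the untrimmed stream into fixed-length windows."""
        return [video[i:i + win] for i in range(0, len(video), win)]

    def segmentation_module(clips):
        """Activity Segmentation Module: propose candidate temporal spans."""
        return [(i, i + 1) for i in range(len(clips))]

    def classification_module(clips, span):
        """Classification Module: label each proposed span (dummy rule here)."""
        return "anomaly" if sum(clips[span[0]]) % 2 else "normal"

    def postprocessing_module(labeled):
        """Postprocessing Module: keep only detected anomalous segments."""
        return [x for x in labeled if x[1] == "anomaly"]

    video = list(range(64))
    clips = data_module(video)
    spans = segmentation_module(clips)
    print(postprocessing_module([(s, classification_module(clips, s)) for s in spans]))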
Our proposed method won 8th place in the 2023 AI City Challenge, Track 3, with an activity overlap score of 0.5426 on experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system. The code is available at https://github.com/aboah1994/DeepSegment.git.", + "neighbors": [ + 2202 + ], + "mask": "Train" + }, + { + "node_id": 298, + "label": 16, + "text": "Title: Population-Based Evolutionary Gaming for Unsupervised Person Re-identification\nAbstract: nan", + "neighbors": [ + 718 + ], + "mask": "Train" + }, + { + "node_id": 299, + "label": 26, + "text": "Title: GeoCovaxTweets: COVID-19 Vaccines and Vaccination-specific Global Geotagged Twitter Conversations\nAbstract: Social media platforms provide actionable information during crises and pandemic outbreaks. The COVID-19 pandemic has imposed a chronic public health crisis worldwide, with experts considering vaccines as the ultimate prevention to achieve herd immunity against the virus. A proportion of people may turn to social media platforms to oppose vaccines and vaccination, hindering government efforts to eradicate the virus. This paper presents the COVID-19 vaccines and vaccination-specific global geotagged tweets dataset, GeoCovaxTweets, that contains more than 1.8 million tweets, with location information and longer temporal coverage, originating from 233 countries and territories between January 2020 and November 2022. The paper discusses the dataset's curation method and how it can be re-created locally, and later explores the dataset through multiple tweet distributions and briefly discusses its potential use cases. We anticipate that the dataset will assist researchers in the crisis computing domain in exploring the conversational dynamics of COVID-19 vaccines and vaccination Twitter discourse through numerous spatial and temporal dimensions concerning trends, shifts in opinions, misinformation, and anti-vaccination campaigns.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 300, + "label": 24, + "text": "Title: Active Learning in Symbolic Regression with Physical Constraints\nAbstract: Evolutionary symbolic regression (SR) fits a symbolic equation to data, which gives a concise interpretable model. We explore using SR as a method to propose which data to gather in an active learning setting with physical constraints. SR with active learning proposes which experiments to do next. Active learning is done with query by committee, where the Pareto frontier of equations is the committee. The physical constraints improve proposed equations in very low-data settings. These approaches reduce the data required for SR and achieve state-of-the-art results in the amount of data required to rediscover known equations.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 301, + "label": 16, + "text": "Title: Deep Features for Contactless Fingerprint Presentation Attack Detection: Can They Be Generalized?\nAbstract: The rapid evolution of high-end smartphones with advanced high-resolution cameras has resulted in contactless capture of fingerprint biometrics that are more reliable and suitable for verification. Similar to other biometric systems, contactless fingerprint-verification systems are vulnerable to presentation attacks. In this paper, we present a comparative study on the generalizability of seven different pre-trained Convolutional Neural Networks (CNNs) and a Vision Transformer (ViT) to reliably detect presentation attacks.
Extensive experiments were carried out on publicly available smartphone-based presentation attack datasets using four different Presentation Attack Instruments (PAI). The detection performance of the eight deep feature techniques was evaluated using the leave-one-out protocol to benchmark the generalization performance for unseen PAI. The obtained results indicated that the ResNet50 CNN achieved the best generalization performance.", + "neighbors": [ + 138 + ], + "mask": "Test" + }, + { + "node_id": 302, + "label": 24, + "text": "Title: Importance of methodological choices in data manipulation for validating epileptic seizure detection models\nAbstract: Epilepsy is a chronic neurological disorder that affects a significant portion of the human population and imposes serious risks in the daily lives of patients. Despite advances in machine learning and IoT, small, nonstigmatizing wearable devices for continuous monitoring and detection in outpatient environments are not yet available. Part of the reason is the complexity of epilepsy itself, including highly imbalanced data, multimodal nature, and very subject-specific signatures. However, another problem is the heterogeneity of methodological approaches in research, leading to slower progress, difficulty comparing results, and low reproducibility. Therefore, this article identifies a wide range of methodological decisions that must be made and reported when training and evaluating the performance of epilepsy detection systems. We characterize the influence of individual choices using a typical ensemble random-forest model and the publicly available CHB-MIT database, providing a broader picture of each decision and giving good-practice recommendations, based on our experience, where possible.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 303, + "label": 28, + "text": "Title: Moments of Autocorrelation Demerit Factors of Binary Sequences\nAbstract: Sequences with low aperiodic autocorrelation are used in communications and remote sensing for synchronization and ranging. The autocorrelation demerit factor of a sequence is the sum of the squared magnitudes of its autocorrelation values at every nonzero shift when we normalize the sequence to have unit Euclidean length. The merit factor, introduced by Golay, is the reciprocal of the demerit factor. We consider the uniform probability measure on the $2^\ell$ binary sequences of length $\ell$ and investigate the distribution of the demerit factors of these sequences. Previous researchers have calculated the mean and variance of this distribution. We develop new combinatorial techniques to calculate the $p$th central moment of the demerit factor for binary sequences of length $\ell$. These techniques prove that for $p\geq 2$ and $\ell \geq 4$, all the central moments are strictly positive. For any given $p$, one may use the technique to obtain an exact formula for the $p$th central moment of the demerit factor as a function of the length $\ell$. The previously obtained formula for variance is confirmed by our technique with a short calculation, and we demonstrate that our techniques go beyond this by also deriving an exact formula for the skewness.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 304, + "label": 24, + "text": "Title: On the Convergence of Stochastic Gradient Descent in Low-precision Number Formats\nAbstract: Deep learning models are dominating almost all artificial intelligence tasks such as vision, text, and speech processing.
Stochastic Gradient Descent (SGD) is the main tool for training such models, where the computations are usually performed in single-precision floating-point number format. The convergence of single-precision SGD is normally aligned with the theoretical results of real numbers since they exhibit negligible error. However, the numerical error increases when the computations are performed in low-precision number formats. This provides compelling reasons to study the SGD convergence adapted for low-precision computations. We present both deterministic and stochastic analysis of the SGD algorithm, obtaining bounds that show the effect of number format. Such bounds can provide guidelines as to how SGD convergence is affected when constraints render the possibility of performing high-precision computations remote.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 305, + "label": 23, + "text": "Title: Best performance and reliability for your time: budget-aware search-based optimization of software model refactoring\nAbstract: Context: Software model optimization is a process that automatically generates design alternatives, typically to enhance quantifiable non-functional properties of software systems, such as performance and reliability. Multi-objective evolutionary algorithms have shown to be effective in this context for assisting the designer in identifying trade-offs between the desired non-functional properties. Objective: In this work, we investigate the effects of imposing a time budget to limit the search for design alternatives, which inevitably affects the quality of the resulting alternatives. Method: The effects of time budgets are analyzed by investigating both the quality of the generated design alternatives and their structural features when varying the budget and the genetic algorithm (NSGA-II, PESA2, SPEA2). This is achieved by employing multi-objective quality indicators and a tree-based representation of the search space. Results: The study reveals that the time budget significantly affects the quality of Pareto fronts, especially for performance and reliability. NSGA-II is the fastest algorithm, while PESA2 generates the highest-quality solutions. The imposition of a time budget results in structurally distinct models compared to those obtained without a budget, indicating that the search process is influenced by both the budget and algorithm selection. Conclusions: In software model optimization, imposing a time budget can be effective in saving optimization time, but designers should carefully consider the trade-off between time and solution quality in the Pareto front, along with the structural characteristics of the generated models. By making informed choices about the specific genetic algorithm, designers can achieve different trade-offs.", + "neighbors": [ + 403 + ], + "mask": "Validation" + }, + { + "node_id": 306, + "label": 16, + "text": "Title: Lightweight, Pre-trained Transformers for Remote Sensing Timeseries\nAbstract: Machine learning algorithms for parsing remote sensing data have a wide range of societally relevant applications, but labels used to train these algorithms can be difficult or impossible to acquire. This challenge has spurred research into self-supervised learning for remote sensing data aiming to unlock the use of machine learning in geographies or application domains where labelled datasets are small. 
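A toy experiment illustrates the qualitative effect captured by the low-precision SGD bounds above: with round-to-nearest gradients, SGD stalls within the quantization step of the optimum. The quadratic objective and fixed-point grid are illustrative assumptions.

    import numpy as np

    def quantize(g, delta=0.05):
        """Round a gradient to a fixed-point grid with step `delta`."""
        return delta * np.round(g / delta)

    def sgd(low_precision, steps=500, lr=0.05, target=3.0):
        x = 0.0
        for _ in range(steps):
            grad = x - target                   # gradient of 0.5 * (x - target)^2
            x -= lr * (quantize(grad) if low_precision else grad)
        return x

    # Quantized gradients vanish once |grad| < delta / 2, so the iterate stalls
    # within O(delta) of the optimum instead of converging exactly.
    print(sgd(False), sgd(True))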
Current self-supervised learning approaches for remote sensing data draw significant inspiration from techniques applied to natural images. However, remote sensing data has important differences from natural images -- for example, the temporal dimension is critical for many tasks and data is collected from many complementary sensors. We show that designing models and self-supervised training techniques specifically for remote sensing data results in both smaller and more performant models. We introduce the Pretrained Remote Sensing Transformer (Presto), a transformer-based model pre-trained on remote sensing pixel-timeseries data. Presto excels at a wide variety of globally distributed remote sensing tasks and outperforms much larger models. Presto can be used for transfer learning or as a feature extractor for simple models, enabling efficient deployment at scale.", + "neighbors": [ + 1561 + ], + "mask": "Train" + }, + { + "node_id": 307, + "label": 22, + "text": "Title: Suspension Analysis and Selective Continuation-Passing Style for Higher-Order Probabilistic Programming Languages\nAbstract: Probabilistic programming languages (PPLs) make encoding and automatically solving statistical inference problems relatively easy by separating models from the inference algorithm. A popular choice for solving inference problems is to use Monte Carlo inference algorithms. For higher-order functional PPLs, these inference algorithms rely on execution suspension to perform inference, most often enabled through a full continuation-passing style (CPS) transformation. However, standard CPS transformations for PPL compilers introduce significant overhead, a problem the community has generally overlooked. State-of-the-art solutions either perform complete CPS transformations with performance penalties due to unnecessary closure allocations or use efficient, but complex, low-level solutions that are often not available in high-level languages. In contrast to prior work, we develop a new approach that is both efficient and easy to implement using higher-order languages. Specifically, we design a novel static suspension analysis technique that determines the parts of a program that require suspension, given a particular inference algorithm. The analysis result allows selectively CPS transforming the program only where necessary. We formally prove the correctness of the suspension analysis and implement both the suspension analysis and selective CPS transformation in the Miking CorePPL compiler. We evaluate the implementation for a large number of Monte Carlo inference algorithms on real-world models from phylogenetics, epidemiology, and topic modeling. The evaluation results demonstrate significant improvements across all models and inference algorithms.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 308, + "label": 30, + "text": "Title: The Touch\u00e923-ValueEval Dataset for Identifying Human Values behind Arguments\nAbstract: We present the Touch\\'e23-ValueEval Dataset for Identifying Human Values behind Arguments. To investigate approaches for the automated detection of human values behind arguments, we collected 9324 arguments from 6 diverse sources, covering religious texts, political discussions, free-text arguments, newspaper editorials, and online democracy platforms. Each argument was annotated by 3 crowdworkers for 54 values. The Touch\\'e23-ValueEval dataset extends the Webis-ArgValues-22. 
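The full CPS transformation discussed in the suspension-analysis entry above can be illustrated in a few lines of plain Python (not the Miking CorePPL transform itself): every call passes an explicit continuation k, which is the handle a PPL runtime can capture to suspend and resume execution.

    def fib_cps(n, k):
        """Fibonacci in continuation-passing style: no call ever 'returns';
        it hands its result to the continuation k instead."""
        if n < 2:
            return k(n)
        return fib_cps(n - 1, lambda a: fib_cps(n - 2, lambda b: k(a + b)))

    print(fib_cps(10, lambda r: r))   # 55

    # Because control flow is explicit in k, a runtime can intercept it: here
    # the final continuation just records the result instead of returning it.
    results = []
    fib_cps(10, results.append)
    print(results)                    # [55]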
In comparison to the previous dataset, the effectiveness of a 1-Baseline decreases, but that of an out-of-the-box BERT model increases. Therefore, though the classification difficulty increased as per the label distribution, the larger dataset allows for training better models.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 309, + "label": 24, + "text": "Title: Conformal PID Control for Time Series Prediction\nAbstract: We study the problem of uncertainty quantification for time series prediction, with the goal of providing easy-to-use algorithms with formal guarantees. The algorithms we present build upon ideas from conformal prediction and control theory, are able to prospectively model conformal scores in an online setting, and adapt to the presence of systematic errors due to seasonality, trends, and general distribution shifts. Our theory both simplifies and strengthens existing analyses in online conformal prediction. Experiments on 4-week-ahead forecasting of statewide COVID-19 death counts in the U.S. show an improvement in coverage over the ensemble forecaster used in official CDC communications. We also run experiments on predicting electricity demand, market returns, and temperature using autoregressive, Theta, Prophet, and Transformer models. We provide an extendable codebase for testing our methods and for the integration of new algorithms, data sets, and forecasting rules.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 310, + "label": 16, + "text": "Title: Joint-Relation Transformer for Multi-Person Motion Prediction\nAbstract: Multi-person motion prediction is a challenging problem due to the dependency of motion on both individual past movements and interactions with other people. Transformer-based methods have shown promising results on this task, but they miss the explicit relation representation between joints, such as skeleton structure and pairwise distance, which is crucial for accurate interaction modeling. In this paper, we propose the Joint-Relation Transformer, which utilizes relation information to enhance interaction modeling and improve future motion prediction. Our relation information contains the relative distance and the intra-/inter-person physical constraints. To fuse relation and joint information, we design a novel joint-relation fusion layer with relation-aware attention to update both features. Additionally, we supervise the relation information by forecasting future distance. Experiments show that our method achieves a 13.4% improvement in 900ms VIM on 3DPW-SoMoF/RC and a 17.8%/12.0% improvement in 3s MPJPE on the CMU-Mocap/MuPoTS-3D datasets.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 311, + "label": 27, + "text": "Title: Lavender Autonomous Navigation with Semantic Segmentation at the Edge\nAbstract: Achieving success in agricultural activities heavily relies on precise navigation in row crop fields. Recently, segmentation-based navigation has emerged as a reliable technique when GPS-based localization is unavailable or higher accuracy is needed due to vegetation or unfavorable weather conditions. It also comes in handy when plants are growing rapidly and require an online adaptation of the navigation algorithm. This work applies a segmentation-based visual agnostic navigation algorithm to lavender fields, considering both simulation and real-world scenarios.
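The control-theoretic viewpoint of the conformal time-series entry above can be sketched via its simplest ingredient, a quantile-tracking update that nudges the interval radius whenever coverage errs. The gain and score choice below are assumptions, and the full method adds integral- and model-based terms.

    def make_interval_tracker(alpha=0.1, lr=0.05, q0=1.0):
        """Track the (1 - alpha)-quantile of conformal scores online."""
        state = {"q": q0}

        def update(score):
            # score = |y_t - yhat_t|; widen on miscoverage, shrink otherwise,
            # so empirical coverage is driven toward 1 - alpha.
            err = (1.0 if score > state["q"] else 0.0) - alpha
            state["q"] += lr * err
            return state["q"]          # current interval radius

        return update

    update = make_interval_tracker()
    for s in [0.4, 1.2, 0.8, 2.0, 0.3]:   # toy conformal scores
        radius = update(s)
    print(radius)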
The effectiveness of this approach is validated through a wide set of experimental tests, which show the capability of the proposed solution to generalize over different scenarios and provide highly reliable results.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 312, + "label": 30, + "text": "Title: Making the Implicit Explicit: Implicit Content as a First Class Citizen in NLP\nAbstract: Language is multifaceted. A given utterance can be re-expressed in equivalent forms, and its implicit and explicit content support various logical and pragmatic inferences. When processing an utterance, we consider these different aspects, as mediated by our interpretive goals -- understanding that \"it's dark in here\" may be a veiled direction to turn on a light. Nonetheless, NLP methods typically operate over the surface form alone, eliding this nuance. In this work, we represent language with language, and direct an LLM to decompose utterances into logical and plausible inferences. The reduced complexity of the decompositions makes them easier to embed, opening up novel applications. Variations on our technique lead to state-of-the-art improvements on sentence embedding benchmarks, a substantive application in computational political science, and to a novel construct-discovery process, which we validate with human annotations.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 313, + "label": 16, + "text": "Title: AIGCIQA2023: A Large-scale Image Quality Assessment Database for AI Generated Images: from the Perspectives of Quality, Authenticity and Correspondence\nAbstract: In this paper, in order to get a better understanding of the human visual preferences for AIGIs, a large-scale IQA database for AIGC is established, which is named AIGCIQA2023. We first generate over 2000 images based on 6 state-of-the-art text-to-image generation models using 100 prompts. Based on these images, a well-organized subjective experiment is conducted to assess the human visual preferences for each image from three perspectives including quality, authenticity and correspondence. Finally, based on this large-scale database, we conduct a benchmark experiment to evaluate the performance of several state-of-the-art IQA metrics on our constructed database.", + "neighbors": [ + 1902, + 1969, + 2007 + ], + "mask": "Train" + }, + { + "node_id": 314, + "label": 16, + "text": "Title: SpaceYOLO: A Human-Inspired Model for Real-time, On-board Spacecraft Feature Detection\nAbstract: The rapid proliferation of non-cooperative spacecraft and space debris in orbit has precipitated a surging demand for on-orbit servicing and space debris removal at a scale that only autonomous missions can address, but the prerequisite autonomous navigation and flightpath planning to safely capture an unknown, non-cooperative, tumbling space object is an open problem. Planning safe, effective trajectories requires real-time, automated spacecraft feature recognition algorithms to pinpoint the locations of collision hazards (e.g., solar panels or antennas) and safe docking features (e.g., satellite bodies or thrusters). Prior work in this area reveals that computer vision models' performance is highly dependent on the training dataset and its coverage of scenarios visually similar to the real scenarios that occur in deployment. Hence, the algorithm may have degraded performance under certain lighting conditions even when the rendezvous maneuver conditions of the chaser to the target spacecraft are the same.
This work delves into how humans perform these tasks through a survey of how people experienced with spacecraft shapes and components recognize features of the following spacecraft: Landsat, Envisat, Anik, and the orbiter Mir. The survey reveals that the most common patterns in the human detection process were to consider the shape and texture of the features (antennas, solar panels, thrusters, and satellite bodies). This work introduces a novel algorithm called SpaceYOLO, which uses context-based decision processes, specifically shape and texture information, to perform object detection. Unlike traditional object detectors, the method demands far fewer labor hours for synthetic data preparation. SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments under different lighting and chaser maneuver conditions at the ORION facility at Florida Tech.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 315, + "label": 16, + "text": "Title: Are Diffusion Models Vulnerable to Membership Inference Attacks?\nAbstract: Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between synthetic samples and member samples). To address this gap, we propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers memberships by assessing the matching of forward process posterior estimation at each timestep. SecMI follows the common overfitting assumption in MIA where member samples normally have smaller estimation errors, compared with hold-out samples. We consider both the standard diffusion models, e.g., DDPM, and the text-to-image diffusion models, e.g., Latent Diffusion Models and Stable Diffusion. Experimental results demonstrate that our methods precisely infer the membership with high confidence in both scenarios across multiple different datasets. Code is available at https://github.com/jinhaoduan/SecMI.", + "neighbors": [ + 579, + 1481, + 1713, + 2279 + ], + "mask": "Validation" + }, + { + "node_id": 316, + "label": 24, + "text": "Title: QBSD: Quartile-Based Seasonality Decomposition for Cost-Effective Time Series Forecasting\nAbstract: In the telecom domain, precise forecasting of time series patterns, such as cell key performance indicators (KPIs), plays a pivotal role in enhancing service quality and operational efficiency. State-of-the-art forecasting approaches prioritize forecasting accuracy at the expense of computational performance, rendering them less suitable for data-intensive applications encompassing systems with a multitude of time series variables. To address this issue, we introduce QBSD, a live forecasting approach tailored to optimize the trade-off between accuracy and computational complexity. We have evaluated the performance of QBSD against state-of-the-art forecasting approaches on publicly available datasets. We have also extended this investigation to our curated network KPI dataset, now publicly accessible, to showcase the effect of dynamic operating ranges that vary with time.
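In its simplest form, the overfitting assumption behind SecMI above reduces to thresholding a per-sample error signal. A toy sketch with synthetic error distributions; the real attack derives step-wise posterior estimation errors from the diffusion model itself.

    import numpy as np

    rng = np.random.default_rng(0)
    member_err = rng.normal(0.8, 0.2, 1000)    # members: smaller estimation error
    holdout_err = rng.normal(1.1, 0.2, 1000)   # hold-outs: larger error (toy)

    thr = np.median(np.concatenate([member_err, holdout_err]))

    def predict_member(err):
        return err < thr                       # below-threshold -> "member"

    acc = 0.5 * (predict_member(member_err).mean()
                 + (~predict_member(holdout_err)).mean())
    print(f"balanced attack accuracy ~ {acc:.2f}")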
The results demonstrate that the proposed method excels in runtime efficiency compared to the leading algorithms available while maintaining competitive forecast accuracy.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 317, + "label": 17, + "text": "Title: Neural Shadow Mapping\nAbstract: We present a neural extension of basic shadow mapping for fast, high quality hard and soft shadows. We compare favorably to fast pre-filtering shadow mapping, all while producing visual results on par with ray traced hard and soft shadows. We show that combining memory bandwidth-aware architecture specialization and careful temporal-window training leads to a fast, compact and easy-to-train neural shadowing method. Our technique is memory bandwidth conscious, eliminates the need for post-process temporal anti-aliasing or denoising, and supports scenes with dynamic view, emitters and geometry while remaining robust to unseen objects.", + "neighbors": [ + 201 + ], + "mask": "Train" + }, + { + "node_id": 318, + "label": 24, + "text": "Title: Adaptive Hierarchical SpatioTemporal Network for Traffic Forecasting\nAbstract: Accurate traffic forecasting is vital to intelligent transportation systems, which are widely adopted to solve urban traffic issues. Existing traffic forecasting studies focus on modeling spatial-temporal dynamics in traffic data, among which the graph convolution network (GCN) is at the center for exploiting the spatial dependency embedded in the road network graphs. However, these GCN-based methods operate intrinsically on the node level (e.g., road and intersection) only, while overlooking the spatial hierarchy of the whole city. Nodes such as intersections and road segments can form clusters (e.g., regions), which could also have interactions with each other and share similarities at a higher level. In this work, we propose an Adaptive Hierarchical SpatioTemporal Network (AHSTN) to promote traffic forecasting by exploiting the spatial hierarchy and modeling multi-scale spatial correlations. Apart from the node-level spatiotemporal blocks, AHSTN introduces the adaptive spatiotemporal downsampling module to infer the spatial hierarchy for spatiotemporal modeling at the cluster level. Then, an adaptive spatiotemporal upsampling module is proposed to upsample the cluster-level representations to the node-level and obtain the multi-scale representations for generating predictions. Experiments on two real-world datasets show that AHSTN achieves better performance over several strong baselines.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 319, + "label": 16, + "text": "Title: Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models\nAbstract: ChatGPT is attracting cross-field interest as it provides a language interface with remarkable conversational competency and reasoning capabilities across many domains. However, since ChatGPT is trained on language, it is currently not capable of processing or generating images from the visual world. At the same time, Visual Foundation Models, such as Visual Transformers or Stable Diffusion, although showing great visual understanding and generation capabilities, are only experts on specific tasks with one-round fixed inputs and outputs.
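The quartile-based seasonality idea of QBSD above suggests a very cheap baseline: forecast each time-of-day slot from that slot's historical quartiles. The sketch below is an assumed variant for illustration, not the paper's exact formulation.

    import numpy as np

    def quartile_forecast(history, season=24):
        """Per-slot median forecast with an interquartile uncertainty band."""
        h = np.asarray(history, dtype=float)
        slots = [h[i::season] for i in range(season)]   # one slice per slot
        med = np.array([np.percentile(s, 50) for s in slots])
        q1 = np.array([np.percentile(s, 25) for s in slots])
        q3 = np.array([np.percentile(s, 75) for s in slots])
        return med, (q1, q3)   # point forecast + band, one value per slot

    rng = np.random.default_rng(0)
    med, (q1, q3) = quartile_forecast(rng.random(24 * 14), season=24)
    print(med.shape, bool(q1[0] <= med[0] <= q3[0]))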
To this end, we build a system called \textbf{Visual ChatGPT}, incorporating different Visual Foundation Models, to enable the user to interact with ChatGPT by 1) sending and receiving not only language but also images, 2) providing complex visual questions or visual editing instructions that require the collaboration of multiple AI models over multiple steps, and 3) providing feedback and asking for corrected results. We design a series of prompts to inject the visual model information into ChatGPT, considering models of multiple inputs/outputs and models that require visual feedback. Experiments show that Visual ChatGPT opens the door to investigating the visual roles of ChatGPT with the help of Visual Foundation Models. Our system is publicly available at \url{https://github.com/microsoft/visual-chatgpt}.", + "neighbors": [ + 0, + 34, + 170, + 173, + 176, + 392, + 522, + 618, + 719, + 817, + 855, + 887, + 902, + 929, + 1026, + 1050, + 1060, + 1129, + 1148, + 1262, + 1315, + 1327, + 1348, + 1353, + 1467, + 1574, + 1626, + 1659, + 1788, + 1810, + 1899, + 1902, + 1906, + 1913, + 1990, + 2018, + 2030, + 2036, + 2064, + 2095, + 2113, + 2155, + 2166, + 2216, + 2274, + 2286 + ], + "mask": "Test" + }, + { + "node_id": 320, + "label": 24, + "text": "Title: Implicit Bias of Gradient Descent for Logistic Regression at the Edge of Stability\nAbstract: Recent research has observed that in machine learning optimization, gradient descent (GD) often operates at the edge of stability (EoS) [Cohen et al., 2021], where the stepsizes are set to be large, resulting in non-monotonic losses induced by the GD iterates. This paper studies the convergence and implicit bias of constant-stepsize GD for logistic regression on linearly separable data in the EoS regime. Despite the presence of local oscillations, we prove that the logistic loss can be minimized by GD with any constant stepsize over a long time scale. Furthermore, we prove that with any constant stepsize, the GD iterates tend to infinity when projected to a max-margin direction (the hard-margin SVM direction) and converge to a fixed vector that minimizes a strongly convex potential when projected to the orthogonal complement of the max-margin direction. In contrast, we also show that in the EoS regime, GD iterates may diverge catastrophically under the exponential loss, highlighting the superiority of the logistic loss. These theoretical findings are in line with numerical simulations and complement existing theories on the convergence and implicit bias of GD, which are only applicable when the stepsizes are sufficiently small.", + "neighbors": [ + 1746 + ], + "mask": "Train" + }, + { + "node_id": 321, + "label": 28, + "text": "Title: Wireless Channel Charting: Theory, Practice, and Applications\nAbstract: Channel charting is a recently proposed framework that applies dimensionality reduction to channel state information (CSI) in wireless systems with the goal of associating a pseudo-position to each mobile user in a low-dimensional space: the channel chart. Channel charting summarizes the entire CSI dataset in a self-supervised manner, which opens up a range of applications that are tied to user location. In this article, we introduce the theoretical underpinnings of channel charting and present an overview of recent algorithmic developments and experimental results obtained in the field.
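The prompt-manager pattern of Visual ChatGPT above can be caricatured as an LLM routing sub-requests to visual "tools". The toy dispatcher below invents its tool names and routing rule purely for illustration; it is not the released system's code.

    TOOLS = {
        "caption": lambda img, _instr: f"a caption for {img}",
        "edit": lambda img, instr: f"{img} edited per '{instr}'",
    }

    def dispatch(instruction, img):
        """Crude stand-in for the LLM's tool choice: edit verbs -> edit tool."""
        wants_edit = any(w in instruction for w in ("replace", "remove", "make"))
        tool = "edit" if wants_edit else "caption"
        return TOOLS[tool](img, instruction)

    print(dispatch("make the sky pink", "photo.png"))
    print(dispatch("what is in this picture?", "photo.png"))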
We furthermore discuss concrete application examples of channel charting for network- and user-related tasks, and we provide a perspective on future developments and challenges as well as the role of channel charting in next-generation wireless networks.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 322, + "label": 16, + "text": "Title: CLR: Channel-wise Lightweight Reprogramming for Continual Learning\nAbstract: Continual learning aims to emulate the human ability to continually accumulate knowledge over sequential tasks. The main challenge is to maintain performance on previously learned tasks after learning new tasks, i.e., to avoid catastrophic forgetting. We propose a Channel-wise Lightweight Reprogramming (CLR) approach that helps convolutional neural networks (CNNs) overcome catastrophic forgetting during continual learning. We show that a CNN model trained on an old task (or self-supervised proxy task) could be ``reprogrammed\" to solve a new task by using our proposed lightweight (very cheap) reprogramming parameter. With the help of CLR, we have a better stability-plasticity trade-off to solve continual learning problems: To maintain stability and retain previous task ability, we use a common task-agnostic immutable part as the shared ``anchor\" parameter set. We then add task-specific lightweight reprogramming parameters to reinterpret the outputs of the immutable parts, to enable plasticity and integrate new knowledge. To learn sequential tasks, we only train the lightweight reprogramming parameters to learn each new task. Reprogramming parameters are task-specific and exclusive to each task, which makes our method immune to catastrophic forgetting. To minimize the parameter requirement of reprogramming to learn new tasks, we make reprogramming lightweight by only adjusting essential kernels and learning channel-wise linear mappings from anchor parameters to task-specific domain knowledge. We show that, for general CNNs, the CLR parameter increase is less than 0.6\% for any new task. Our method outperforms 13 state-of-the-art continual learning baselines on a new challenging sequence of 53 image classification datasets. Code and data are available at https://github.com/gyhandy/Channel-wise-Lightweight-Reprogramming", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 323, + "label": 24, + "text": "Title: Improving physics-informed DeepONets with hard constraints\nAbstract: Current physics-informed (standard or operator) neural networks still rely on accurately learning the initial conditions of the system they are solving. In contrast, standard numerical methods evolve such initial conditions without needing to learn these. In this study, we propose to improve current physics-informed deep learning strategies such that initial conditions do not need to be learned and are represented exactly in the predicted solution. Moreover, this method guarantees that when a DeepONet is applied multiple times to time step a solution, the resulting function is continuous.", + "neighbors": [ + 1435 + ], + "mask": "Train" + }, + { + "node_id": 324, + "label": 27, + "text": "Title: ImMesh: An Immediate LiDAR Localization and Meshing Framework\nAbstract: In this paper, we propose a novel LiDAR(-inertial) odometry and mapping framework to achieve the goal of simultaneous localization and meshing in real time. This proposed framework, termed ImMesh, comprises four tightly-coupled modules: receiver, localization, meshing, and broadcaster.
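The core of channel charting, as summarized in the entry above, is dimensionality reduction over CSI features. A minimal stand-in using PCA; real systems use tailored self-supervised objectives, and the toy CSI below is random.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Toy CSI magnitudes: 500 users x 64 subcarriers (purely synthetic).
    csi = np.abs(rng.normal(size=(500, 64)) + 1j * rng.normal(size=(500, 64)))

    chart = PCA(n_components=2).fit_transform(csi)  # one pseudo-position per user
    print(chart.shape)                              # (500, 2): the "channel chart"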
The localization module utilizes the preprocessed sensor data from the receiver, estimates the sensor pose online by registering LiDAR scans to maps, and dynamically grows the map. Then, our meshing module takes the registered LiDAR scan for incrementally reconstructing the triangle mesh on the fly. Finally, the real-time odometry, map, and mesh are published via our broadcaster. The key contribution of this work is the meshing module, which represents a scene by an efficient hierarchical voxel structure, performs fast finding of voxels observed by new scans, and reconstructs triangle facets in each voxel in an incremental manner. This voxel-wise meshing operation is carefully designed for efficiency; it first performs a dimension reduction by projecting 3D points to a 2D local plane contained in the voxel, and then executes the meshing operation with pull, commit and push steps for incremental reconstruction of triangle facets. To the best of our knowledge, this is the first work in the literature that can reconstruct online the triangle mesh of large-scale scenes, just relying on a standard CPU without GPU acceleration. To share our findings and make contributions to the community, we make our code publicly available on our GitHub: https://github.com/hku-mars/ImMesh.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 325, + "label": 27, + "text": "Title: Constrained Bayesian Optimization for Automatic Underwater Vehicle Hull Design\nAbstract: Automatic underwater vehicle hull design optimization is a complex engineering process for generating a UUV hull with properties optimized for a given requirement. First, it involves the integration of computationally complex engineering simulation tools. Second, it needs integration of a sample-efficient optimization framework with the integrated toolchain. To this end, we integrated the CAD tool FreeCAD with the CFD tool openFoam for automatic design evaluation. For optimization, we chose Bayesian optimization (BO), which is a well-known technique developed for optimizing time-consuming expensive engineering simulations and has proven to be very sample efficient in a variety of problems, including hyper-parameter tuning and experimental design. During the optimization process, we handle infeasible designs as constraints integrated into the optimization process. By integrating the domain-specific toolchain with AI-based optimization, we executed the automatic design optimization of underwater vehicle hulls. For empirical evaluation, we took two different use cases of real-world underwater vehicle design to validate the execution of our tool. The code for running the experimentation and installing the toolchain can be found at https://github.com/vardhah/ConstraintBOUUVHullDesign.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 326, + "label": 16, + "text": "Title: MedViT: A Robust Vision Transformer for Generalized Medical Image Classification\nAbstract: nan", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 327, + "label": 10, + "text": "Title: Tractable Diversity: Scalable Multiperspective Ontology Management via Standpoint EL\nAbstract: The tractability of the lightweight description logic EL has allowed for the construction of large and widely used ontologies that support semantic interoperability.
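The dimension-reduction step inside ImMesh's voxel-wise meshing above is easy to prototype: project a voxel's points onto their best-fit plane via SVD, then triangulate in 2D. The incremental pull/commit/push bookkeeping is omitted; this is only the core reduction.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(0)
    pts = rng.random((30, 3))
    pts[:, 2] = 0.2 * pts[:, 0] + 0.1 * pts[:, 1]   # near-planar voxel points

    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    plane2d = centered @ vt[:2].T                   # coordinates in the local plane

    tri = Delaunay(plane2d)                         # triangle facets for this voxel
    print(tri.simplices.shape)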
However, comprehensive domains with a broad user base are often at odds with strong axiomatisations otherwise useful for inferencing, since these are usually context dependent and subject to diverging perspectives.\n\nIn this paper we introduce Standpoint EL, a multi-modal extension of EL that allows for the integrated representation of domain knowledge relative to diverse, possibly conflicting standpoints (or contexts), which can be hierarchically organised and put in relation to each other. We establish that Standpoint EL still exhibits EL's favourable PTime standard reasoning, whereas introducing additional features like empty standpoints, rigid roles, and nominals makes standard reasoning tasks intractable.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 328, + "label": 3, + "text": "Title: A Comparative Study of Reference Reliability in Multiple Language Editions of Wikipedia\nAbstract: Information presented in Wikipedia articles must be attributable to reliable published sources in the form of references. This study examines over 5 million Wikipedia articles to assess the reliability of references in multiple language editions. We quantify the cross-lingual patterns of the perennial sources list, a collection of reliability labels for web domains identified and collaboratively agreed upon by Wikipedia editors. We discover that some sources (or web domains) deemed untrustworthy in one language (i.e., English) continue to appear in articles in other languages. This trend is especially evident with sources tailored for smaller communities. Furthermore, non-authoritative sources found in the English version of a page tend to persist in other language versions of that page. We finally present a case study on the Chinese, Russian, and Swedish Wikipedias to demonstrate a discrepancy in reference reliability across cultures. Our finding highlights future challenges in coordinating global knowledge on source reliability.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 329, + "label": 29, + "text": "Title: An Analysis of the Completion Time of the BB84 Protocol\nAbstract: The BB84 QKD protocol is based on the idea that the sender and the receiver can reconcile a certain fraction of the teleported qubits to detect eavesdropping or noise and decode the rest to use as a private key. Under the present hardware infrastructure, decoherence of quantum states poses a significant challenge to performing perfect or efficient teleportation, meaning that a teleportation-based protocol must be run multiple times to observe success. Thus, performance analyses of such protocols usually consider the completion time, i.e., the time until success, rather than the duration of a single attempt. Moreover, due to decoherence, the success of an attempt is in general dependent on the duration of individual phases of that attempt, as quantum states must wait in memory while the success or failure of a generation phase is communicated to the relevant parties. In this work, we do a performance analysis of the completion time of the BB84 protocol in a setting where the sender and the receiver are connected via a single quantum repeater and the only quantum channel between them does not see any adversarial attack. Assuming certain distributional forms for the generation and communication phases of teleportation, we provide a method to compute the MGF of the completion time and subsequently derive an estimate of the CDF and a bound on the tail probability.
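That completion-time model admits a direct Monte Carlo check. The sketch below assumes exponential phase durations and a fixed per-attempt success probability purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def completion_time(p=0.3, gen_rate=1.0, comm_rate=2.0):
        """Sum random (generation + communication) phase durations until one
        attempt succeeds with probability p."""
        t = 0.0
        while True:
            t += rng.exponential(1 / gen_rate) + rng.exponential(1 / comm_rate)
            if rng.random() < p:
                return t

    samples = np.array([completion_time() for _ in range(10_000)])
    print(samples.mean(), np.quantile(samples, 0.99))   # bulk and tail behaviour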
This result helps us gauge the (tail) behaviour of the completion time in terms of the parameters characterising the elementary phases of teleportation, without having to run the protocol multiple times. We also provide an efficient simulation scheme to generate the completion time, which relies on expressing the completion time in terms of aggregated teleportation times. We numerically compare our approach with a full-scale simulation and observe good agreement between them.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 330, + "label": 16, + "text": "Title: Control4D: Dynamic Portrait Editing by Learning 4D GAN from 2D Diffusion-based Editor\nAbstract: Recent years have witnessed considerable achievements in editing images with text instructions. When applying these editors to dynamic scene editing, the new-style scene tends to be temporally inconsistent due to the frame-by-frame nature of these 2D editors. To tackle this issue, we propose Control4D, a novel approach for high-fidelity and temporally consistent 4D portrait editing. Control4D is built upon an efficient 4D representation with a 2D diffusion-based editor. Instead of using direct supervisions from the editor, our method learns a 4D GAN from it and avoids the inconsistent supervision signals. Specifically, we employ a discriminator to learn the generation distribution based on the edited images and then update the generator with the discrimination signals. For more stable training, multi-level information is extracted from the edited images and used to facilitate the learning of the generator. Experimental results show that Control4D surpasses previous approaches and achieves more photo-realistic and consistent 4D editing performances. The link to our project website is https://control4darxiv.github.io.", + "neighbors": [ + 48, + 1020, + 1125, + 1251, + 1279, + 1449, + 1902, + 2190 + ], + "mask": "Validation" + }, + { + "node_id": 331, + "label": 16, + "text": "Title: AdvART: Adversarial Art for Camouflaged Object Detection Attacks\nAbstract: A majority of existing physical attacks in the real world result in conspicuous and eye-catching patterns for generated patches, which makes them identifiable/detectable by humans. To overcome this limitation, recent work has proposed several approaches that aim at generating naturalistic patches using generative adversarial networks (GANs), which may not catch human attention. However, these approaches are computationally intensive and do not always converge to natural-looking patterns. In this paper, we propose a novel lightweight framework that systematically generates naturalistic adversarial patches without using GANs. To illustrate the proposed approach, we generate adversarial art (AdvART): patches that look like artistic paintings while maintaining high attack efficiency. In fact, we redefine the optimization problem by introducing a new similarity objective. Specifically, we leverage similarity metrics to construct a similarity loss that is added to the optimization objective. This component guides the patch to follow a predefined artistic pattern while maximizing the victim model's loss function. 
Our patch achieves high success rates, with a $12.53\\%$ mean average precision (mAP) on YOLOv4-tiny on the INRIA dataset.", + "neighbors": [ + 1287, + 1737 + ], + "mask": "Train" + }, + { + "node_id": 332, + "label": 24, + "text": "Title: Personalized Tucker Decomposition: Modeling Commonality and Peculiarity on Tensor Data\nAbstract: We propose personalized Tucker decomposition (perTucker) to address the limitations of traditional tensor decomposition methods in capturing heterogeneity across different datasets. perTucker decomposes tensor data into shared global components and personalized local components. We introduce a mode orthogonality assumption and develop a proximal gradient regularized block coordinate descent algorithm that is guaranteed to converge to a stationary point. By learning unique and common representations across datasets, we demonstrate perTucker's effectiveness in anomaly detection, client classification, and clustering through a simulation study and two case studies on solar flare detection and tonnage signal classification.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 333, + "label": 30, + "text": "Title: Using Large Language Models to Automate Category and Trend Analysis of Scientific Articles: An Application in Ophthalmology\nAbstract: Purpose: In this paper, we present an automated method for article classification, leveraging the power of Large Language Models (LLMs). The primary focus is on the field of ophthalmology, but the model is extendable to other fields. Methods: We have developed a model based on Natural Language Processing (NLP) techniques, including advanced LLMs, to process and analyze the textual content of scientific papers. Specifically, we have employed zero-shot learning (ZSL) LLM models and compared them against Bidirectional and Auto-Regressive Transformers (BART) and its variants, and Bidirectional Encoder Representations from Transformers (BERT) and its variants, such as distilBERT, SciBERT, PubmedBERT, and BioBERT. Results: To evaluate the LLMs, we compiled a dataset (RenD) of 1000 ocular disease-related articles, which were expertly annotated by a panel of six specialists into 15 distinct categories. The model achieved a mean accuracy of 0.86 and a mean F1 score of 0.85 on the RenD dataset, demonstrating the effectiveness of LLMs in categorizing a large number of ophthalmology papers without human intervention. Conclusion: The proposed framework achieves notable improvements in both accuracy and efficiency. Its application in the domain of ophthalmology showcases its potential for knowledge organization and retrieval in this and other domains. We performed a trend analysis that enables researchers and clinicians to easily categorize and retrieve relevant papers, saving time and effort in literature review and information gathering, as well as in the identification of emerging scientific trends within different disciplines. 
Moreover, the extensibility of the model to other scientific fields broadens its impact in facilitating research and trend analysis across diverse disciplines.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 334, + "label": 24, + "text": "Title: Approximation Algorithms for Fair Range Clustering\nAbstract: This paper studies the fair range clustering problem in which the data points are from different demographic groups and the goal is to pick $k$ centers with the minimum clustering cost such that each group is at least minimally represented in the centers set and no group dominates the centers set. More precisely, given a set of $n$ points in a metric space $(P,d)$ where each point belongs to one of the $\\ell$ different demographics (i.e., $P = P_1 \\uplus P_2 \\uplus \\cdots \\uplus P_\\ell$) and a set of $\\ell$ intervals $[\\alpha_1, \\beta_1], \\cdots, [\\alpha_\\ell, \\beta_\\ell]$ on the desired number of centers from each group, the goal is to pick a set of $k$ centers $C$ with minimum $\\ell_p$-clustering cost (i.e., $(\\sum_{v\\in P} d(v,C)^p)^{1/p}$) such that for each group $i\\in [\\ell]$, $|C\\cap P_i| \\in [\\alpha_i, \\beta_i]$. In particular, the fair range $\\ell_p$-clustering captures fair range $k$-center, $k$-median and $k$-means as its special cases. In this work, we provide efficient constant-factor approximation algorithms for fair range $\\ell_p$-clustering for all values of $p\\in [1,\\infty)$.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 335, + "label": 10, + "text": "Title: Biomedical Knowledge Graph Embeddings with Negative Statements\nAbstract: A knowledge graph is a powerful representation of real-world entities and their relations. The vast majority of these relations are defined as positive statements, but the importance of negative statements is increasingly recognized, especially under an Open World Assumption. Explicitly considering negative statements has been shown to improve performance on tasks such as entity summarization and question answering or domain-specific tasks such as protein function prediction. However, no attention has been given to the exploration of negative statements by knowledge graph embedding approaches despite the potential of negative statements to produce more accurate representations of entities in a knowledge graph. We propose a novel approach, TrueWalks, to incorporate negative statements into the knowledge graph representation learning process. In particular, we present a novel walk-generation method that is able to not only differentiate between positive and negative statements but also take into account the semantic implications of negation in ontology-rich knowledge graphs. This is of particular importance for applications in the biomedical domain, where the inadequacy of embedding approaches regarding negative statements at the ontology level has been identified as a crucial limitation. We evaluate TrueWalks in ontology-rich biomedical knowledge graphs in two different predictive tasks based on KG embeddings: protein-protein interaction prediction and gene-disease association prediction. 
We conduct an extensive analysis over established benchmarks and demonstrate that our method is able to improve the performance of knowledge graph embeddings on all tasks.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 336, + "label": 30, + "text": "Title: Automated Labeling of German Chest X-Ray Radiology Reports using Deep Learning\nAbstract: Radiologists are in short supply globally, and deep learning models offer a promising solution to address this shortage as part of clinical decision-support systems. However, training such models often requires expensive and time-consuming manual labeling of large datasets. Automatic label extraction from radiology reports can reduce the time required to obtain labeled datasets, but this task is challenging due to semantically similar words and missing annotated data. In this work, we explore the potential of weak supervision of a deep learning-based label prediction model, using a rule-based labeler. We propose a deep learning-based CheXpert label prediction model, pre-trained on reports labeled by a rule-based German CheXpert model and fine-tuned on a small dataset of manually labeled reports. Our results demonstrate the effectiveness of our approach, which significantly outperformed the rule-based model on all three tasks. Our findings highlight the benefits of employing deep learning-based models even in scenarios with sparse data and the use of the rule-based labeler as a tool for weak supervision.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 337, + "label": 5, + "text": "Title: Towards General and Efficient Online Tuning for Spark\nAbstract: The distributed data analytics system Spark is a common choice for processing massive volumes of heterogeneous data, but it is challenging to tune its parameters to achieve high performance. Recent studies try to employ auto-tuning techniques to solve this problem but suffer from three issues: limited functionality, high overhead, and inefficient search. In this paper, we present a general and efficient Spark tuning framework that can deal with the three issues simultaneously. First, we introduce a generalized tuning formulation, which can support multiple tuning goals and constraints conveniently, and a Bayesian optimization (BO) based solution to solve this generalized optimization problem. Second, to avoid high overhead from additional offline evaluations in existing methods, we propose to tune parameters along with the actual periodic executions of each job (i.e., online evaluations). To ensure safety during online job executions, we design a safe configuration acquisition method that models the safe region. Finally, three innovative techniques are leveraged to further accelerate the search process: adaptive sub-space generation, approximate gradient descent, and a meta-learning method. We have implemented this framework as an independent cloud service, and applied it to the data platform in Tencent. The empirical results on both public benchmarks and large-scale production tasks demonstrate its superiority in terms of practicality, generality, and efficiency. Notably, this service saves an average of 57.00% in memory cost and 34.93% in CPU cost on 25K in-production tasks within 20 iterations.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 338, + "label": 36, + "text": "Title: Complexity of Conformant Election Manipulation\nAbstract: It is important to study how strategic agents can affect the outcome of an election. 
There has been a long line of research in the computational study of elections on the complexity of manipulative actions such as manipulation and bribery. These problems model scenarios such as voters casting strategic votes and agents campaigning for voters to change their votes to make a desired candidate win. A common assumption is that the preferences of the voters follow the structure of a domain restriction such as single-peakedness, and so manipulators only consider votes that also satisfy this restriction. We introduce the model where the preferences of the voters define their own restriction and strategic actions must ``conform'' by using only these votes. In this model, the election after manipulation will retain common domain restrictions. We explore the computational complexity of conformant manipulative actions and we discuss how conformant manipulative actions relate to other manipulative actions.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 339, + "label": 16, + "text": "Title: NVTC: Nonlinear Vector Transform Coding\nAbstract: In theory, vector quantization (VQ) is always better than scalar quantization (SQ) in terms of rate-distortion (RD) performance [33]. Recent state-of-the-art methods for neural image compression are mainly based on nonlinear transform coding (NTC) with uniform scalar quantization, overlooking the benefits of VQ due to its exponentially increased complexity. In this paper, we first investigate some toy sources, demonstrating that even though modern neural networks considerably enhance the compression performance of SQ with nonlinear transforms, there is still an insurmountable chasm between SQ and VQ. Therefore, revolving around VQ, we propose a novel framework for neural image compression named Nonlinear Vector Transform Coding (NVTC). NVTC solves the critical complexity issue of VQ through (1) a multi-stage quantization strategy and (2) nonlinear vector transforms. In addition, we apply entropy-constrained VQ in latent space to adaptively determine the quantization boundaries for joint rate-distortion optimization, which improves the performance both theoretically and experimentally. Compared to previous NTC approaches, NVTC demonstrates superior rate-distortion performance, faster decoding speed, and smaller model size. Our code is available at https://github.com/USTC-IMCL/NVTC.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 340, + "label": 16, + "text": "Title: Orientation-Independent Chinese Text Recognition in Scene Images\nAbstract: Scene text recognition (STR) has attracted much attention due to its broad applications. Previous works mostly focus on recognizing Latin text images with complex backgrounds by introducing language models or other auxiliary networks. Different from Latin texts, many vertical Chinese texts exist in natural scenes, which brings difficulties to current state-of-the-art STR methods. In this paper, we make the first attempt to extract orientation-independent visual features by disentangling content and orientation information of text images, thus recognizing both horizontal and vertical texts robustly in natural scenes. Specifically, we introduce a Character Image Reconstruction Network (CIRN) to recover corresponding printed character images with disentangled content and orientation information. 
We conduct experiments on a scene dataset for benchmarking Chinese text recognition, and the results demonstrate that the proposed method can indeed improve performance through disentangling content and orientation information. To further validate the effectiveness of our method, we additionally collect a Vertical Chinese Text Recognition (VCTR) dataset. The experimental results show that the proposed method achieves a 45.63\\% improvement on VCTR when introducing CIRN to the baseline model.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 341, + "label": 16, + "text": "Title: Faster Segment Anything: Towards Lightweight SAM for Mobile Applications\nAbstract: Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (like image editing with fine-grained control). Many such applications need to be run on resource-constrained edge devices, such as mobile phones. In this work, we aim to make SAM mobile-friendly by replacing the heavyweight image encoder with a lightweight one. A naive way to train such a new SAM as in the original SAM paper leads to unsatisfactory performance, especially when limited training resources are available. We find that this is mainly caused by the coupled optimization of the image encoder and mask decoder, which motivates us to propose decoupled distillation. Concretely, we distill the knowledge from the heavy image encoder (ViT-H in the original SAM) to a lightweight image encoder, which is automatically compatible with the mask decoder in the original SAM. The training can be completed on a single GPU within less than one day, and the resulting lightweight SAM is termed MobileSAM, which is more than 60 times smaller yet performs on par with the original SAM. Regarding inference speed, with a single GPU, MobileSAM runs in around 10ms per image: 8ms on the image encoder and 4ms on the mask decoder. With superior performance, our MobileSAM is around 5 times faster than the concurrent FastSAM and 7 times smaller, making it more suitable for mobile applications. Moreover, we show that MobileSAM can run relatively smoothly on CPU. The code for our project, together with a demo, is provided at \\href{https://github.com/ChaoningZhang/MobileSAM}{\\textcolor{red}{MobileSAM}}.", + "neighbors": [ + 719, + 1006, + 1084, + 1207, + 1663, + 1690, + 1932 + ], + "mask": "Train" + }, + { + "node_id": 342, + "label": 30, + "text": "Title: Larger language models do in-context learning differently\nAbstract: We study how in-context learning (ICL) in language models is affected by semantic priors versus input-label mappings. We investigate two setups, ICL with flipped labels and ICL with semantically-unrelated labels, across various model families (GPT-3, InstructGPT, Codex, PaLM, and Flan-PaLM). First, experiments on ICL with flipped labels show that overriding semantic priors is an emergent ability of model scale. While small language models ignore flipped labels presented in-context and thus rely primarily on semantic priors from pretraining, large models can override semantic priors when presented with in-context exemplars that contradict priors, despite the stronger semantic priors that larger models may hold. 
We next study semantically-unrelated label ICL (SUL-ICL), in which labels are semantically unrelated to their inputs (e.g., foo/bar instead of negative/positive), thereby forcing language models to learn the input-label mappings shown in in-context exemplars in order to perform the task. The ability to do SUL-ICL also emerges primarily with scale, and large-enough language models can even perform linear classification in a SUL-ICL setting. Finally, we evaluate instruction-tuned models and find that instruction tuning strengthens both the use of semantic priors and the capacity to learn input-label mappings, but more of the former.", + "neighbors": [ + 1026, + 1146, + 1306, + 1617, + 1867, + 2226 + ], + "mask": "Validation" + }, + { + "node_id": 343, + "label": 25, + "text": "Title: Audio-visual video-to-speech synthesis with synthesized input audio\nAbstract: Video-to-speech synthesis involves reconstructing the speech signal of a speaker from a silent video. The implicit assumption of this task is that the sound signal is either missing or contains a high amount of noise/corruption such that it is not useful for processing. Previous works in the literature either use video inputs only or employ both video and audio inputs during training, and discard the input audio pathway during inference. In this work we investigate the effect of using video and audio inputs for video-to-speech synthesis during both training and inference. In particular, we use pre-trained video-to-speech models to synthesize the missing speech signals and then train an audio-visual-to-speech synthesis model, using both the silent video and the synthesized speech as inputs, to predict the final reconstructed speech. Our experiments demonstrate that this approach is successful with both raw waveforms and mel spectrograms as target outputs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 344, + "label": 24, + "text": "Title: AutoML in the Age of Large Language Models: Current Challenges, Future Opportunities and Risks\nAbstract: The fields of both Natural Language Processing (NLP) and Automated Machine Learning (AutoML) have achieved remarkable results over the past years. In NLP, especially Large Language Models (LLMs) have experienced a rapid series of breakthroughs very recently. We envision that the two fields can radically push the boundaries of each other through tight integration. To showcase this vision, we explore the potential of a symbiotic relationship between AutoML and LLMs, shedding light on how they can benefit each other. In particular, we investigate both the opportunities to enhance AutoML approaches with LLMs from different perspectives and the challenges of leveraging AutoML to further improve LLMs. To this end, we survey existing work, and we critically assess risks. We strongly believe that the integration of the two fields has the potential to disrupt both fields, NLP and AutoML. By highlighting conceivable synergies, but also risks, we aim to foster further exploration at the intersection of AutoML and LLMs.", + "neighbors": [ + 1052, + 1307, + 1972, + 2109, + 2113 + ], + "mask": "Test" + }, + { + "node_id": 345, + "label": 4, + "text": "Title: HiNoVa: A Novel Open-Set Detection Method for Automating RF Device Authentication\nAbstract: New capabilities in wireless network security have been enabled by deep learning, which leverages patterns in radio frequency (RF) data to identify and authenticate devices. 
Open-set detection is an area of deep learning that identifies samples captured from new devices during deployment that were not part of the training set. Past work in open-set detection has mostly been applied to independent and identically distributed data such as images. In contrast, RF signal data present a unique set of challenges as the data forms a time series with non-linear time dependencies among the samples. We introduce a novel open-set detection approach based on the patterns of the hidden state values within a Convolutional Neural Network Long Short-Term Memory model. Our approach greatly improves the Area Under the Precision-Recall Curve on LoRa, Wireless-WiFi, and Wired-WiFi datasets, and hence can be used successfully to monitor and control unauthorized network access of wireless devices.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 346, + "label": 30, + "text": "Title: Sample-Efficient Unsupervised Domain Adaptation of Speech Recognition Systems A case study for Modern Greek\nAbstract: Modern speech recognition systems exhibit rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where diversity of training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a 120-hour speech corpus for Greek, consisting of plenary sessions in the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a testbed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when only a few hours of in-domain audio are available. When we relax the problem in a weakly supervised setting, we find that independent adaptation, for audio using M2DS2 and for language using simple LM augmentation techniques, is particularly effective, yielding word error rates comparable to the fully supervised baselines.", + "neighbors": [ + 2078 + ], + "mask": "Train" + }, + { + "node_id": 347, + "label": 24, + "text": "Title: Domain Generalization without Excess Empirical Risk\nAbstract: Given data from diverse sets of distinct distributions, domain generalization aims to learn models that generalize to unseen distributions. A common approach is designing a data-driven surrogate penalty to capture generalization and minimize the empirical risk jointly with the penalty. We argue that a significant failure mode of this recipe is an excess risk due to an erroneous penalty or hardness in joint optimization. We present an approach that eliminates this problem. Instead of jointly minimizing empirical risk with the penalty, we minimize the penalty under the constraint of optimality of the empirical risk. This change guarantees that the domain generalization penalty cannot impair optimization of the empirical risk, i.e., in-distribution performance. To solve the proposed optimization problem, we demonstrate an exciting connection to rate-distortion theory and utilize its tools to design an efficient method. 
Our approach can be applied to any penalty-based domain generalization method, and we demonstrate its effectiveness by applying it to three exemplar methods from the literature, showing significant improvements.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 348, + "label": 24, + "text": "Title: Data-Driven Identification of Quadratic Symplectic Representations of Nonlinear Hamiltonian Systems\nAbstract: We present a framework for learning Hamiltonian systems using data. This work is based on the lifting hypothesis, which posits that nonlinear Hamiltonian systems can be written as nonlinear systems with cubic Hamiltonians. By leveraging this, we obtain quadratic dynamics that are Hamiltonian in a transformed coordinate system. To that end, for given generalized position and momentum data, we propose a methodology to learn quadratic dynamical systems, enforcing the Hamiltonian structure in combination with a symplectic auto-encoder. The enforced Hamiltonian structure ensures the long-term stability of the system, while the cubic Hamiltonian function provides relatively low model complexity. For low-dimensional data, we determine a higher-order transformed coordinate system, whereas, for high-dimensional data, we find a lower-order coordinate system with the desired properties. We demonstrate the proposed methodology by means of both low-dimensional and high-dimensional nonlinear Hamiltonian systems.", + "neighbors": [ + 1385 + ], + "mask": "Train" + }, + { + "node_id": 349, + "label": 24, + "text": "Title: A Framework for Incentivized Collaborative Learning\nAbstract: Collaborations among various entities, such as companies, research labs, AI agents, and edge devices, have become increasingly crucial for achieving machine learning tasks that cannot be accomplished by a single entity alone. This is likely due to factors such as security constraints, privacy concerns, and limitations in computation resources. As a result, collaborative learning (CL) research has been gaining momentum. However, a significant challenge in practical applications of CL is how to effectively incentivize multiple entities to collaborate before any collaboration occurs. In this study, we propose ICL, a general framework for incentivized collaborative learning, and provide insights into the critical issue of when and why incentives can improve collaboration performance. Furthermore, we show the broad applicability of ICL to specific cases in federated learning, assisted learning, and multi-armed bandits with both theory and experimental results.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 350, + "label": 27, + "text": "Title: On the Simulation of Perception Errors in Autonomous Vehicles\nAbstract: Even though virtual testing of Autonomous Vehicles (AVs) has been well recognized as essential for safety assessment, AV simulators are still undergoing active development. One particularly challenging question is how to effectively include the Sensing and Perception (S&P) subsystem in the simulation loop. In this article, we define Perception Error Models (PEM), a virtual simulation component that can enable the analysis of the impact of perception errors on AV safety, without the need to model the sensors themselves. We propose a generalized data-driven procedure towards parametric modeling and evaluate it using Apollo, open-source driving software, and nuScenes, a public AV dataset. Additionally, we implement PEMs in SVL, an open-source vehicle simulator. 
Furthermore, we demonstrate the usefulness of PEM-based virtual tests by evaluating camera, LiDAR, and camera-LiDAR setups. Our virtual tests highlight limitations in the current evaluation metrics, and the proposed approach can help study the impact of perception errors on AV safety.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 351, + "label": 10, + "text": "Title: The Update Equivalence Framework for Decision-Time Planning\nAbstract: The process of revising (or constructing) a policy immediately prior to execution -- known as decision-time planning -- is key to achieving superhuman performance in perfect-information settings like chess and Go. A recent line of work has extended decision-time planning to more general imperfect-information settings, leading to superhuman performance in poker. However, these methods require considering subgames whose sizes grow quickly in the amount of non-public information, making them unhelpful when the amount of non-public information is large. Motivated by this issue, we introduce an alternative framework for decision-time planning that is not based on subgames but rather on the notion of update equivalence. In this framework, decision-time planning algorithms simulate updates of synchronous learning algorithms. This framework enables us to introduce a new family of principled decision-time planning algorithms that do not rely on public information, opening the door to sound and effective decision-time planning in settings with large amounts of non-public information. In experiments, members of this family produce results comparable to or better than state-of-the-art approaches in Hanabi and improve performance in 3x3 Abrupt Dark Hex and Phantom Tic-Tac-Toe.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 352, + "label": 30, + "text": "Title: Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks\nAbstract: Large language models (LLMs) are remarkable data annotators. They can be used to generate high-fidelity supervised training data, as well as survey and experimental data. With the widespread adoption of LLMs, human gold-standard annotations are key to understanding the capabilities of LLMs and the validity of their results. However, crowdsourcing, an important, inexpensive way to obtain human annotations, may itself be impacted by LLMs, as crowd workers have financial incentives to use LLMs to increase their productivity and income. To investigate this concern, we conducted a case study on the prevalence of LLM usage by crowd workers. We reran an abstract summarization task from the literature on Amazon Mechanical Turk and, through a combination of keystroke detection and synthetic text classification, estimate that 33-46% of crowd workers used LLMs when completing the task. Although generalization to other, less LLM-friendly tasks is unclear, our results call for platforms, researchers, and crowd workers to find new ways to ensure that human data remain human, perhaps using the methodology proposed here as a stepping stone. Code/data: https://github.com/epfl-dlab/GPTurk", + "neighbors": [ + 42, + 75, + 401, + 1384, + 1487, + 1949, + 1992, + 2094, + 2305 + ], + "mask": "Train" + }, + { + "node_id": 353, + "label": 16, + "text": "Title: FAIR: Frequency-aware Image Restoration for Industrial Visual Anomaly Detection\nAbstract: Image reconstruction-based anomaly detection models are widely explored in industrial visual inspection. 
However, existing models usually suffer from a trade-off between normal reconstruction fidelity and abnormal reconstruction distinguishability, which degrades performance. In this paper, we find that the above trade-off can be better mitigated by leveraging the distinct frequency biases between normal and abnormal reconstruction errors. To this end, we propose Frequency-aware Image Restoration (FAIR), a novel self-supervised image restoration task that restores images from their high-frequency components. It enables precise reconstruction of normal patterns while mitigating unfavorable generalization to anomalies. Using only a simple vanilla UNet, FAIR achieves state-of-the-art performance with higher efficiency on various defect detection datasets. Code: https://github.com/liutongkun/FAIR.", + "neighbors": [ + 2098 + ], + "mask": "Train" + }, + { + "node_id": 354, + "label": 30, + "text": "Title: InfoCTM: A Mutual Information Maximization Perspective of Cross-Lingual Topic Modeling\nAbstract: Cross-lingual topic models have been prevalent for cross-lingual text analysis by revealing aligned latent topics. However, most existing methods suffer from producing repetitive topics that hinder further analysis, as well as from performance decline caused by low-coverage dictionaries. In this paper, we propose Cross-lingual Topic Modeling with Mutual Information (InfoCTM). Instead of the direct alignment in previous work, we propose a mutual information-based topic alignment method. This works as a regularization to properly align topics and prevent degenerate topic representations of words, which mitigates the repetitive topic issue. To address the low-coverage dictionary issue, we further propose a cross-lingual vocabulary linking method that finds more linked cross-lingual words for topic alignment beyond the translations of a given dictionary. Extensive experiments on English, Chinese, and Japanese datasets demonstrate that our method outperforms state-of-the-art baselines, producing more coherent, diverse, and well-aligned topics and showing better transferability for cross-lingual classification tasks.", + "neighbors": [ + 953, + 1184 + ], + "mask": "Validation" + }, + { + "node_id": 355, + "label": 27, + "text": "Title: Exploring Levels of Control for a Navigation Assistant for Blind Travelers\nAbstract: Only a small percentage of blind and low-vision people use traditional mobility aids such as a cane or a guide dog. Various assistive technologies have been proposed to address the limitations of traditional mobility aids. These devices often give either the user or the device the majority of the control. In this work, we explore how varying levels of control affect the users' sense of agency, trust in the device, confidence, and successful navigation. We present Glide, a novel mobility aid with two modes of control: Glide-directed and User-directed. We employ Glide in a study (N=9) in which blind or low-vision participants used both modes to navigate through an indoor environment. Overall, participants found that Glide was easy to use and learn. Most participants trusted Glide despite its current limitations, and their confidence and performance increased as they continued to use Glide. 
Users' control mode preferences varied in different situations; no single mode \"won\" in all situations.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 356, + "label": 16, + "text": "Title: Advancing Volumetric Medical Image Segmentation via Global-Local Masked Autoencoder\nAbstract: Masked autoencoder (MAE) is a promising self-supervised pre-training technique that can improve the representation learning of a neural network without human intervention. However, applying MAE directly to volumetric medical images poses two challenges: (i) a lack of global information that is crucial for understanding the clinical context of the holistic data, and (ii) no guarantee of stabilizing the representations learned from randomly masked inputs. To address these limitations, we propose the \\textbf{G}lobal-\\textbf{L}ocal \\textbf{M}asked \\textbf{A}uto\\textbf{E}ncoder (GL-MAE), a simple yet effective self-supervised pre-training strategy. In addition to reconstructing masked local views, as in previous methods, GL-MAE incorporates global context learning by reconstructing masked global views. Furthermore, a complete global view is integrated as an anchor to guide the reconstruction and stabilize the learning process through global-to-global consistency learning and global-to-local consistency learning. Finetuning results on multiple datasets demonstrate the superiority of our method over other state-of-the-art self-supervised algorithms, highlighting its effectiveness on versatile volumetric medical image segmentation tasks, even when annotations are scarce. Our codes and models will be released upon acceptance.", + "neighbors": [ + 1488, + 1525 + ], + "mask": "Train" + }, + { + "node_id": 357, + "label": 16, + "text": "Title: SyncDreamer: Generating Multiview-consistent Images from a Single-view Image\nAbstract: In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent images from a single-view image. Using pretrained large-scale 2D diffusion models, the recent work Zero123 demonstrates the ability to generate plausible novel views from a single-view image of an object. However, maintaining consistency in geometry and colors for the generated images remains a challenge. To address this issue, we propose a synchronized multiview diffusion model that models the joint probability distribution of multiview images, enabling the generation of multiview-consistent images in a single reverse process. SyncDreamer synchronizes the intermediate states of all the generated images at every step of the reverse process through a 3D-aware feature attention mechanism that correlates the corresponding features across different views. Experiments show that SyncDreamer generates images with high consistency across different views, thus making it well-suited for various 3D generation tasks such as novel-view synthesis, text-to-3D, and image-to-3D.", + "neighbors": [ + 1125, + 1418, + 2049, + 2117, + 2205 + ], + "mask": "Test" + }, + { + "node_id": 358, + "label": 24, + "text": "Title: Active Reinforcement Learning for Personalized Stress Monitoring in Everyday Settings\nAbstract: Most existing sensor-based monitoring frameworks presume that a large available labeled dataset is processed to train accurate detection models. However, in settings where personalization is necessary at deployment time to fine-tune the model, a person-specific dataset needs to be collected online by interacting with the users. 
Optimizing the collection of labels in such a phase is instrumental in imposing a tolerable burden on the users while maximizing personal improvement. In this paper, we consider a fine-grained stress detection problem based on wearable sensors targeting everyday settings, and propose a novel context-aware active learning strategy capable of jointly maximizing the meaningfulness of the signal samples we request the user to label and the response rate. We develop a multilayered sensor-edge-cloud platform to periodically capture physiological signals and process them in real time, as well as to collect labels and retrain the detection model. We collect a large dataset and show that the context-aware active learning technique we propose achieves a desirable detection performance using 88% and 32% fewer queries from users compared to a randomized strategy and a traditional active learning strategy, respectively.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 359, + "label": 16, + "text": "Title: Microbial Genetic Algorithm-based Black-box Attack against Interpretable Deep Learning Systems\nAbstract: Deep learning models are susceptible to adversarial samples in white- and black-box environments. Although previous studies have shown high attack success rates, coupling DNN models with interpretation models could offer a sense of security when a human expert is involved, who can identify whether a given sample is benign or malicious. However, in white-box environments, interpretable deep learning systems (IDLSes) have been shown to be vulnerable to malicious manipulations. In black-box settings, as access to the components of IDLSes is limited, it becomes more challenging for the adversary to fool the system. In this work, we propose a Query-efficient Score-based black-box attack against IDLSes, QuScore, which requires no knowledge of the target model or its coupled interpretation model. QuScore combines transfer-based and score-based methods, employing an effective microbial genetic algorithm. Our method is designed to reduce the number of queries necessary to carry out successful attacks, resulting in a more efficient process. By continuously refining the adversarial samples created based on feedback scores from the IDLS, our approach effectively navigates the search space to identify perturbations that can fool the system. We evaluate the attack's effectiveness on four CNN models (Inception, ResNet, VGG, DenseNet) and two interpretation models (CAM, Grad), using both the ImageNet and CIFAR datasets. Our results show that the proposed approach is query-efficient, with a high attack success rate that can reach between 95% and 100%, and transferable, with an average success rate of 69% on the ImageNet and CIFAR datasets. Our attack method generates adversarial examples with attribution maps that resemble those of benign samples. We have also demonstrated that our attack is resilient against various preprocessing defense techniques and can easily be transferred to different DNN models.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 360, + "label": 24, + "text": "Title: Practical Differentially Private Hyperparameter Tuning with Subsampling\nAbstract: Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms often requires the use of sensitive data, and this may leak private information via hyperparameter values. 
Recently, Papernot and Steinke (2022) proposed a certain class of DP hyperparameter tuning algorithms, where the number of random search samples is itself randomized. Commonly, these algorithms still considerably increase the DP privacy parameter $\\varepsilon$ over non-tuned DP ML model training and can be computationally heavy, as evaluating each hyperparameter candidate requires a new training run. We focus on lowering both the DP bounds and the computational cost of these methods by using only a random subset of the sensitive data for the hyperparameter tuning and by extrapolating the optimal values to a larger dataset. We provide a R\\'enyi differential privacy analysis for the proposed method and experimentally show that it consistently leads to a better privacy-utility trade-off than the baseline method by Papernot and Steinke.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 361, + "label": 10, + "text": "Title: Discussion Paper: The Threat of Real Time Deepfakes\nAbstract: Generative deep learning models are able to create realistic audio and video. This technology has been used to impersonate the faces and voices of individuals. These ``deepfakes'' are being used to spread misinformation, enable scams, perform fraud, and blackmail the innocent. The technology continues to advance, and today attackers have the ability to generate deepfakes in real time. This new capability poses a significant threat to society as attackers begin to exploit the technology in advanced social engineering attacks. In this paper, we discuss the implications of this emerging threat, identify the challenges of preventing these attacks, and suggest a better direction for researching stronger defences.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 362, + "label": 27, + "text": "Title: Exploiting Unlabeled Data for Feedback Efficient Human Preference based Reinforcement Learning\nAbstract: Preference-based Reinforcement Learning has shown much promise for utilizing human binary feedback on queried trajectory pairs to recover the underlying reward model of the Human in the Loop (HiL). While works have attempted to better utilize the queries made to the human, in this work we make two observations about the unlabeled trajectories collected by the agent and propose two corresponding loss functions. These ensure the participation of unlabeled trajectories in the reward learning process and structure the embedding space of the reward model such that it reflects the structure of the state space with respect to action distances. We validate the proposed method on one locomotion domain and one robotic manipulation task and compare with the state-of-the-art baseline PEBBLE. We further present an ablation of the proposed loss components across both domains and find that not only does each loss component perform better than the baseline, but their synergistic combination achieves much better reward recovery and human feedback sample efficiency.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 363, + "label": 30, + "text": "Title: API-Bank: A Benchmark for Tool-Augmented LLMs\nAbstract: Recent research has shown that Large Language Models (LLMs) can utilize external tools to improve their contextual processing abilities, moving away from the pure language modeling paradigm and paving the way for Artificial General Intelligence. Despite this, there has been a lack of systematic evaluation to demonstrate the efficacy of LLMs using tools to respond to human instructions. 
This paper presents API-Bank, the first benchmark tailored for Tool-Augmented LLMs. API-Bank includes 53 commonly used API tools, a complete Tool-Augmented LLM workflow, and 264 annotated dialogues that encompass a total of 568 API calls. These resources have been designed to thoroughly evaluate LLMs' ability to plan step-by-step API calls, retrieve relevant APIs, and correctly execute API calls to meet human needs. The experimental results show that GPT-3.5 exhibits an emergent ability to use tools relative to GPT-3, while GPT-4 has stronger planning performance. Nevertheless, there remains considerable scope for further improvement when compared to human performance. Additionally, detailed error analysis and case studies demonstrate the feasibility of Tool-Augmented LLMs for daily use, as well as the primary challenges that future research needs to address.", + "neighbors": [ + 57, + 183, + 704, + 817, + 1001, + 1044, + 1353, + 1430, + 1878, + 2166, + 2265 + ], + "mask": "Train" + }, + { + "node_id": 364, + "label": 16, + "text": "Title: NUWA-XL: Diffusion over Diffusion for eXtremely Long Video Generation\nAbstract: In this paper, we propose NUWA-XL, a novel Diffusion over Diffusion architecture for eXtremely Long video generation. Most current work generates long videos segment by segment sequentially, which normally leads to a gap between training on short videos and inferring long videos; moreover, sequential generation is inefficient. Instead, our approach adopts a \u201ccoarse-to-fine\u201d process, in which the video can be generated in parallel at the same granularity. A global diffusion model is applied to generate the keyframes across the entire time range, and then local diffusion models recursively fill in the content between nearby frames. This simple yet effective strategy allows us to directly train on long videos (3376 frames) to reduce the training-inference gap and makes it possible to generate all segments in parallel. To evaluate our model, we build the FlintstonesHD dataset, a new benchmark for long video generation. Experiments show that our model not only generates high-quality long videos with both global and local coherence, but also decreases the average inference time from 7.55min to 26s (by 94.26%) under the same hardware setting when generating 1024 frames. The homepage link is [NUWA-XL](https://msra-nuwa.azurewebsites.net)", + "neighbors": [ + 1707, + 2085 + ], + "mask": "Validation" + }, + { + "node_id": 365, + "label": 5, + "text": "Title: Supercharging Distributed Computing Environments For High Performance Data Engineering\nAbstract: The data engineering and data science community has embraced the idea of using Python and R dataframes for regular applications. Driven by the big data revolution and artificial intelligence, these applications are now essential in order to process terabytes of data. They can easily exceed the capabilities of a single machine, but also demand significant developer time and effort. Therefore, it is essential to design scalable dataframe solutions. There have been multiple attempts to tackle this problem, the most notable being the dataframe systems developed using distributed computing environments such as Dask and Ray. Even though Dask/Ray distributed computing features look very promising, we observe that Dask Dataframes/Ray Datasets still have room for optimization. 
In this paper, we present CylonFlow, an alternative distributed dataframe execution methodology that enables state-of-the-art performance and scalability on the same Dask/Ray infrastructure (thereby supercharging them!). To achieve this, we integrate Cylon, a high-performance dataframe system originally based on an entirely different execution paradigm, into Dask and Ray. Our experiments show that on a pipeline of dataframe operators, CylonFlow achieves 30x higher distributed performance than Dask Dataframes. Interestingly, it also enables superior sequential performance due to the native C++ execution of Cylon. We believe the success of Cylon and CylonFlow extends beyond the data engineering domain, and that they can be used to consolidate the high-performance computing and distributed computing ecosystems.", + "neighbors": [ + 922 + ], + "mask": "Validation" + }, + { + "node_id": 366, + "label": 4, + "text": "Title: ProvG-Searcher: A Graph Representation Learning Approach for Efficient Provenance Graph Search\nAbstract: We present ProvG-Searcher, a novel approach for detecting known APT behaviors within system security logs. Our approach leverages provenance graphs, a comprehensive graph representation of event logs, to capture and depict data provenance relations by mapping system entities as nodes and their interactions as edges. We formulate the task of searching provenance graphs as a subgraph matching problem and employ a graph representation learning method. The central component of our search methodology involves embedding subgraphs in a vector space where subgraph relationships can be directly evaluated. We achieve this through the use of order embeddings that simplify subgraph matching to straightforward comparisons between a query and precomputed subgraph representations. To address challenges posed by the size and complexity of provenance graphs, we propose a graph partitioning scheme and a behavior-preserving graph reduction method. Overall, our technique offers significant computational efficiency, allowing most of the search computation to be performed offline while incorporating a lightweight comparison step during query execution. Experimental results on standard datasets demonstrate that ProvG-Searcher achieves superior performance, with an accuracy exceeding 99% in detecting query behaviors and a false positive rate of approximately 0.02%, outperforming other approaches.", + "neighbors": [ + 1442 + ], + "mask": "Train" + }, + { + "node_id": 367, + "label": 16, + "text": "Title: Hierarchical Skeleton Meta-Prototype Contrastive Learning with Hard Skeleton Mining for Unsupervised Person Re-Identification\nAbstract: nan", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 368, + "label": 24, + "text": "Title: Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs\nAbstract: In reward-free reinforcement learning (RL), an agent explores the environment first without any reward information, in order to achieve certain learning goals afterwards for any given reward. In this paper, we focus on reward-free RL under low-rank MDP models, in which both the representation and linear weight vectors are unknown. Although various algorithms have been proposed for reward-free low-rank MDPs, the corresponding sample complexity is still far from being satisfactory. In this work, we first provide the first known sample complexity lower bound that holds for any algorithm under low-rank MDPs. 
This lower bound implies it is strictly harder to find a near-optimal policy under low-rank MDPs than under linear MDPs. We then propose a novel model-based algorithm, coined RAFFLE, and show it can both find an $\\epsilon$-optimal policy and achieve $\\epsilon$-accurate system identification via reward-free exploration, with a sample complexity significantly improving the previous results. Such a sample complexity matches our lower bound in the dependence on $\\epsilon$, as well as on $K$ in the large $d$ regime, where $d$ and $K$ respectively denote the representation dimension and action space cardinality. Finally, we provide a planning algorithm (without further interaction with the true environment) for RAFFLE to learn a near-accurate representation, which is the first known representation learning guarantee under the same setting.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 369, + "label": 3, + "text": "Title: Designing the Metaverse: A Scoping Review to Map Current Research Effort on Ethical Implications\nAbstract: The metaverse and digital, virtual environments have been part of recent history as places in which people can socialize, work, and spend time playing games. However, the infancy of the development of these digital, virtual environments brings some challenges that are still not fully mapped. With this article, we seek to identify and map the currently available knowledge and scientific effort to discover what principles, guidelines, laws, policies, and practices are currently in place to allow for the design of digital, virtual environments, and the metaverse. Through a scoping review, we aimed to systematically survey the existing literature and discern gaps in knowledge within the domain of metaverse research from sociological, anthropological, cultural, and experiential perspectives. The objective of this review was twofold: (1) to examine the focus of the literature studying the metaverse from various angles and (2) to formulate a research agenda for the design and development of ethical digital, virtual environments. With this paper, we identified several works and articles detailing experiments and research on the design of digital, virtual environments and metaverses. We found an increased number of publications in the year 2022. This finding, together with the fact that only a few articles focused on the domain of ethics, culture, and society, shows that there is still a vast amount of work to be done to create awareness, principles, and policies that could help to design safe, secure, and inclusive digital, virtual environments and metaverses.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 370, + "label": 16, + "text": "Title: StrucTexTv2: Masked Visual-Textual Prediction for Document Image Pre-training\nAbstract: In this paper, we present StrucTexTv2, an effective document image pre-training framework based on masked visual-textual prediction. It consists of two self-supervised pre-training tasks: masked image modeling and masked language modeling, based on text region-level image masking. The proposed method randomly masks some image regions according to the bounding box coordinates of text words. The objectives of our pre-training tasks are reconstructing the pixels of masked image regions and the corresponding masked tokens simultaneously. Hence, the pre-trained encoder can capture more textual semantics in comparison to masked image modeling, which usually predicts only the masked image patches. 
Compared to the masked multi-modal modeling methods for document image understanding that rely on both the image and text modalities, StrucTexTv2 models image-only input and potentially deals with more application scenarios free from OCR pre-processing. Extensive experiments on mainstream benchmarks of document image understanding demonstrate the effectiveness of StrucTexTv2. It achieves competitive or even new state-of-the-art performance in various downstream tasks such as image classification, layout analysis, table structure recognition, document OCR, and information extraction under the end-to-end scenario.", + "neighbors": [ + 2025 + ], + "mask": "Train" + }, + { + "node_id": 371, + "label": 16, + "text": "Title: Auxiliary Tasks Benefit 3D Skeleton-based Human Motion Prediction\nAbstract: Exploring spatial-temporal dependencies from observed motions is one of the core challenges of human motion prediction. Previous methods mainly focus on dedicated network structures to model the spatial and temporal dependencies. This paper considers a new direction by introducing a model learning framework with auxiliary tasks. In our auxiliary tasks, partial body joints' coordinates are corrupted by either masking or adding noise and the goal is to recover the corrupted coordinates based on the remaining coordinates. To work with auxiliary tasks, we propose a novel auxiliary-adapted transformer, which can handle incomplete, corrupted motion data and achieve coordinate recovery via capturing spatial-temporal dependencies. Through auxiliary tasks, the auxiliary-adapted transformer is promoted to capture more comprehensive spatial-temporal dependencies among body joints' coordinates, leading to better feature learning. Extensive experimental results have shown that our method outperforms state-of-the-art methods by remarkable margins of 7.2%, 3.7%, and 9.4% in terms of 3D mean per joint position error (MPJPE) on the Human3.6M, CMU Mocap, and 3DPW datasets, respectively. We also demonstrate that our method is more robust under missing-data and noisy-data cases. Code is available at https://github.com/MediaBrain-SJTU/AuxFormer.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 372, + "label": 8, + "text": "Title: QUIC Library Hunter: Identifying Server Libraries Across the Internet\nAbstract: The new QUIC protocol can be implemented in user space, and various implementations already exist. While they follow the same specification and are generally interoperable, differences in performance and functionality, but also in security (e.g., due to bugs), can be expected. Therefore, knowledge about the implementation of an endpoint on the Internet can help researchers, operators and users to better analyze connections, evaluations and findings. We provide an approach to identify the libraries used by QUIC servers based on CONNECTION_CLOSE frames and transport parameter orders. We apply our methodology to Internet-wide scans and identify at least one deployment for 18 QUIC libraries. In total, we can identify the library of 8.8 M IPv4 and 2.5 M IPv6 addresses.", + "neighbors": [ + 578 + ], + "mask": "Train" + }, + { + "node_id": 373, + "label": 16, + "text": "Title: Compensation Learning in Semantic Segmentation\nAbstract: Label noise and ambiguities between similar classes are challenging problems in developing new models and annotating new data for semantic segmentation.
In this paper, we propose Compensation Learning in Semantic Segmentation, a framework to identify and compensate ambiguities as well as label noise. More specifically, we add a ground-truth-dependent, globally learned bias to the classification logits and introduce a novel uncertainty branch for neural networks to apply the compensation bias only to relevant regions. Our method is integrated into state-of-the-art segmentation frameworks, and several experiments demonstrate that our proposed compensation learns inter-class relations that allow global identification of challenging ambiguities as well as the exact localization of subsequent label noise. Additionally, it increases robustness against label noise during training and allows target-oriented manipulation during inference. We evaluate the proposed method on Cityscapes, KITTI-STEP, ADE20k, and COCO-stuff10k.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 374, + "label": 24, + "text": "Title: Addressing Discontinuous Root-Finding for Subsequent Differentiability in Machine Learning, Inverse Problems, and Control\nAbstract: There are many physical processes that have inherent discontinuities in their mathematical formulations. This paper is motivated by the specific case of collisions between two rigid or deformable bodies and the intrinsic nature of that discontinuity. The impulse response to a collision is discontinuous with the lack of any response when no collision occurs, which causes difficulties for numerical approaches that require differentiability, which are typical in machine learning, inverse problems, and control. We theoretically and numerically demonstrate that the derivative of the collision time with respect to the parameters becomes infinite as one approaches the barrier separating colliding from not colliding, and use lifting to complexify the solution space so that solutions on the other side of the barrier are directly attainable as precise values. Subsequently, we mollify the barrier posed by the unbounded derivatives, so that one can tunnel back and forth in a smooth and reliable fashion facilitating the use of standard numerical approaches. Moreover, we illustrate that standard approaches fail in numerous ways mostly due to a lack of understanding of the mathematical nature of the problem (e.g. typical backpropagation utilizes many rules of differentiation, but ignores L'Hopital's rule).", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 375, + "label": 16, + "text": "Title: Urban Regional Function Guided Traffic Flow Prediction\nAbstract: The prediction of traffic flow is a challenging yet crucial problem in spatial-temporal analysis, which has recently gained increasing interest. In addition to spatial-temporal correlations, the functionality of urban areas also plays a crucial role in traffic flow prediction. However, the exploration of regional functional attributes mainly focuses on adding additional topological structures, ignoring the influence of functional attributes on regional traffic patterns. Different from the existing works, we propose a novel module named POI-MetaBlock, which utilizes the functionality of each region (represented by Point of Interest distribution) as metadata to further mine different traffic characteristics in areas with different functions.
Specifically, the proposed POI-MetaBlock employs a self-attention architecture and incorporates POI and time information to generate dynamic attention parameters for each region, which enables the model to fit different traffic patterns of various areas at different times. Furthermore, our lightweight POI-MetaBlock can be easily integrated into conventional traffic flow prediction models. Extensive experiments demonstrate that our module significantly improves the performance of traffic flow prediction and outperforms state-of-the-art methods that use metadata.", + "neighbors": [ + 591, + 1620 + ], + "mask": "Train" + }, + { + "node_id": 376, + "label": 24, + "text": "Title: Variance-reduced Clipping for Non-convex Optimization\nAbstract: Gradient clipping is a standard training technique used in deep learning applications such as large-scale language modeling to mitigate exploding gradients. Recent experimental studies have demonstrated a fairly special behavior in the smoothness of the training objective along its trajectory when trained with gradient clipping. That is, the smoothness grows with the gradient norm. This is in clear contrast to the well-established assumption in folklore non-convex optimization, a.k.a. $L$--smoothness, where the smoothness is assumed to be bounded by a constant $L$ globally. The recently introduced $(L_0,L_1)$--smoothness is a more relaxed notion that captures such behavior in non-convex optimization. In particular, it has been shown that under this relaxed smoothness assumption, SGD with clipping requires $O(\\epsilon^{-4})$ stochastic gradient computations to find an $\\epsilon$--stationary solution. In this paper, we employ a variance reduction technique, namely SPIDER, and demonstrate that for a carefully designed learning rate, this complexity is improved to $O(\\epsilon^{-3})$ which is order-optimal. Our designed learning rate comprises the clipping technique to mitigate the growing smoothness. Moreover, when the objective function is the average of $n$ components, we improve the existing $O(n\\epsilon^{-2})$ bound on the stochastic gradient complexity to $O(\\sqrt{n} \\epsilon^{-2} + n)$, which is order-optimal as well. In addition to being theoretically optimal, SPIDER with our designed parameters demonstrates comparable empirical performance against variance-reduced methods such as SVRG and SARAH in several vision tasks.", + "neighbors": [ + 1347 + ], + "mask": "Train" + }, + { + "node_id": 377, + "label": 30, + "text": "Title: Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4\nAbstract: Harnessing logical reasoning ability is a comprehensive natural language understanding endeavor. With the release of Generative Pretrained Transformer 4 (GPT-4), highlighted as\"advanced\"at reasoning tasks, we are eager to learn the GPT-4 performance on various logical reasoning tasks. This report analyses multiple logical reasoning datasets, with popular benchmarks like LogiQA and ReClor, and newly-released datasets like AR-LSAT. We test the multi-choice reading comprehension and natural language inference tasks with benchmarks requiring logical reasoning. We further construct a logical reasoning out-of-distribution dataset to investigate the robustness of ChatGPT and GPT-4. We also make a performance comparison between ChatGPT and GPT-4. Experiment results show that ChatGPT performs significantly better than the RoBERTa fine-tuning method on most logical reasoning benchmarks. 
With early access to the GPT-4 API we are able to conduct intense experiments on the GPT-4 model. The results show GPT-4 yields even higher performance on most logical reasoning datasets. Among benchmarks, ChatGPT and GPT-4 do relatively well on well-known datasets like LogiQA and ReClor. However, the performance drops significantly when handling newly released and out-of-distribution datasets. Logical reasoning remains challenging for ChatGPT and GPT-4, especially on out-of-distribution and natural language inference datasets. We release the prompt-style logical reasoning datasets as a benchmark suite and name it LogiEval.", + "neighbors": [ + 682, + 857, + 975, + 1001, + 1259, + 1713, + 1915, + 1952, + 2215 + ], + "mask": "Test" + }, + { + "node_id": 378, + "label": 16, + "text": "Title: TrickVOS: A Bag of Tricks for Video Object Segmentation\nAbstract: Space-time memory (STM) network methods have been dominant in semi-supervised video object segmentation (SVOS) due to their remarkable performance. In this work, we identify three key aspects where we can improve such methods; i) supervisory signal, ii) pretraining and iii) spatial awareness. We then propose TrickVOS; a generic, method-agnostic bag of tricks addressing each aspect with i) a structure-aware hybrid loss, ii) a simple decoder pretraining regime and iii) a cheap tracker that imposes spatial constraints in model predictions. Finally, we propose a lightweight network and show that when trained with TrickVOS, it achieves competitive results to state-of-the-art methods on DAVIS and YouTube benchmarks, while being one of the first STM-based SVOS methods that can run in real-time on a mobile device.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 379, + "label": 10, + "text": "Title: DANES: Deep Neural Network Ensemble Architecture for Social and Textual Context-aware Fake News Detection\nAbstract: The growing popularity of social media platforms has simplified the creation and distribution of news articles but also creates a conduit for spreading fake news. In consequence, the need arises for effective context-aware fake news detection mechanisms, where the contextual information can be built either from the textual content of posts or from available social data (e.g., information about the users, reactions to posts, or the social network). In this paper, we propose DANES, a Deep Neural Network Ensemble Architecture for Social and Textual Context-aware Fake News Detection. DANES comprises a Text Branch for a textual content-based context and a Social Branch for the social context. These two branches are used to create a novel Network Embedding. Preliminary ablation results on 3 real-world datasets, i.e., BuzzFace, Twitter15, and Twitter16, are promising, with an accuracy that outperforms state-of-the-art solutions when employing both social and textual content features.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 380, + "label": 16, + "text": "Title: BiBench: Benchmarking and Analyzing Network Binarization\nAbstract: Network binarization emerges as one of the most promising compression approaches offering extraordinary computation and memory savings by minimizing the bit-width. However, recent research has shown that applying existing binarization algorithms to diverse tasks, architectures, and hardware in realistic scenarios is still not straightforward. Common challenges of binarization, such as accuracy degradation and efficiency limitation, suggest that its attributes are not fully understood. 
To close this gap, we present BiBench, a rigorously designed benchmark with in-depth analysis for network binarization. We first carefully scrutinize the requirements of binarization in the actual production and define evaluation tracks and metrics for a comprehensive and fair investigation. Then, we evaluate and analyze a series of milestone binarization algorithms that function at the operator level and with extensive influence. Our benchmark reveals that 1) the binarized operator has a crucial impact on the performance and deployability of binarized networks; 2) the accuracy of binarization varies significantly across different learning tasks and neural architectures; 3) binarization has demonstrated promising efficiency potential on edge devices despite the limited hardware support. The results and analysis also lead to a promising paradigm for accurate and efficient binarization. We believe that BiBench will contribute to the broader adoption of binarization and serve as a foundation for future research. The code for our BiBench is released https://github.com/htqin/BiBench .", + "neighbors": [ + 1777 + ], + "mask": "Train" + }, + { + "node_id": 381, + "label": 28, + "text": "Title: Optimized Design of Joint Mirror Array and Liquid Crystal-Based RIS-Aided VLC Systems\nAbstract: Most studies of reflecting intelligent surfaces (RISs)-assisted visible light communication (VLC) systems have focused on the integration of RISs in the channel to combat the line-of-sight (LoS) blockage and to enhance the corresponding achievable data rate. Some recent efforts have investigated the integration of liquid crystal (LC)-RIS in the VLC receiver to also improve the corresponding achievable data rate. To jointly benefit from the previously mentioned appealing capabilities of the RIS technology in both the channel and the receiver, in this work, we propose a novel indoor VLC system that is jointly assisted by a mirror array-based RIS in the channel and an LC-based RIS aided-VLC receiver. To illustrate the performance of the proposed system, a rate maximization problem is formulated, solved, and evaluated. This maximization problem jointly optimizes the roll and yaw angles of the mirror array-based RIS as well as the refractive index of the LC-based RIS VLC receiver. Moreover, this maximization problem considers practical assumptions, such as the presence of non-users blockers in the LoS path between the transmitter-receiver pair and the user's random device orientation (i.e., the user's self-blockage). Due to the non-convexity of the formulated optimization problem, a low-complexity algorithm is utilized to get the global optimal solution. A multi-user scenario of the proposed scheme is also presented. Furthermore, the energy efficiency of the proposed system is also investigated. Simulation results are provided, confirming that the proposed system yields a noteworthy improvement in data rate and energy efficiency performances compared to several baseline schemes.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 382, + "label": 22, + "text": "Title: SEER: Super-Optimization Explorer for HLS using E-graph Rewriting with MLIR\nAbstract: High-level synthesis (HLS) is a process that automatically translates a software program in a high-level language into a low-level hardware description. However, the hardware designs produced by HLS tools still suffer from a significant performance gap compared to manual implementations. 
This is because the input HLS programs must still be written using hardware design principles. Existing techniques either leave the program source unchanged or perform a fixed sequence of source transformation passes, potentially missing opportunities to find the optimal design. We propose a super-optimization approach for HLS that automatically rewrites an arbitrary software program into efficient HLS code that can be used to generate an optimized hardware design. We developed a toolflow named SEER, based on the e-graph data structure, to efficiently explore equivalent implementations of a program at scale. SEER provides an extensible framework, orchestrating existing software compiler passes and hardware synthesis optimizers. Our work is the first attempt to exploit e-graph rewriting for large software compiler frameworks, such as MLIR. Across a set of open-source benchmarks, we show that SEER achieves up to 38x the performance within 1.4x the area of the original program. Via an Intel-provided case study, SEER demonstrates the potential to outperform manually optimized designs produced by hardware experts.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 383, + "label": 10, + "text": "Title: Markov Conditions and Factorization in Logical Credal Networks\nAbstract: We examine the recently proposed language of Logical Credal Networks, in particular investigating the consequences of various Markov conditions. We introduce the notion of structure for a Logical Credal Network and show that a structure without directed cycles leads to a well-known factorization result. For networks with directed cycles, we analyze the differences between Markov conditions, factorization results, and specification requirements.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 384, + "label": 6, + "text": "Title: Trust and Reliance in Consensus-Based Explanations from an Anti-Misinformation Agent\nAbstract: The illusion of consensus occurs when people believe there is consensus across multiple sources, but the sources are the same and thus there is no \"true\" consensus. We explore this phenomenon in the context of an AI-based intelligent agent designed to augment metacognition on social media. Misinformation, especially on platforms like Twitter, is a global problem for which there is currently no good solution. As an explainable AI (XAI) system, the agent provides explanations for its decisions on the misinformed nature of social media content. In this late-breaking study, we explored the roles of trust (attitude) and reliance (behaviour) as key elements of XAI user experience (UX) and whether these influenced the illusion of consensus. Findings show no effect of trust, but an effect of reliance on consensus-based explanations. This work may guide the design of anti-misinformation systems that use XAI, especially the user-centred design of explanations.", + "neighbors": [ + 695 + ], + "mask": "Train" + }, + { + "node_id": 385, + "label": 28, + "text": "Title: Unsourced Random Access with the MIMO Receiver: Projection Decoding Analysis\nAbstract: We consider unsourced random access with MIMO receiver - a crucial communication scenario for future 5G/6G wireless networks. We perform a projection-based decoder analysis and derive energy efficiency achievability bounds when channel state information is unknown at transmitters and the receiver (no-CSI scenario). The comparison to the maximum-likelihood (ML) achievability bounds by Gao et al. (2023) is performed. 
We show that there is a region where the new bound outperforms the ML bound. The latter fact should not surprise the reader, as both decoding criteria are suboptimal when considering per-user probability of error (PUPE). Moreover, the transition to projection decoding allows for significant dimensionality reduction, which greatly reduces the computation time.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 386, + "label": 24, + "text": "Title: Probing the limit of hydrologic predictability with the Transformer network\nAbstract: For a number of years since its introduction to hydrology, recurrent neural networks like long short-term memory (LSTM) have proven remarkably difficult to surpass in terms of daily hydrograph metrics on known, comparable benchmarks. Outside of hydrology, Transformers have now become the model of choice for sequential prediction tasks, making it a curious architecture to investigate. Here, we first show that a vanilla Transformer architecture is not competitive against LSTM on the widely benchmarked CAMELS dataset, and lags especially on the high-flow metrics due to short-term processes. However, a recurrence-free variant of Transformer can obtain mixed comparisons with LSTM, producing the same Kling-Gupta efficiency coefficient (KGE), along with other metrics. The lack of advantages for the Transformer is linked to the Markovian nature of the hydrologic prediction problem. Similar to LSTM, the Transformer can also merge multiple forcing datasets to improve model performance. While the Transformer results are not higher than current state-of-the-art, we still learned some valuable lessons: (1) the vanilla Transformer architecture is not suitable for hydrologic modeling; (2) the proposed recurrence-free modification can improve Transformer performance so future work can continue to test more such modifications; and (3) the prediction limits on the dataset should be close to the current state-of-the-art model. As a non-recurrent model, the Transformer may bear scale advantages for learning from bigger datasets and storing knowledge. This work serves as a reference point for future modifications of the model.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 387, + "label": 16, + "text": "Title: On the challenges to learn from Natural Data Streams\nAbstract: In real-world contexts, data are sometimes available in the form of Natural Data Streams, i.e., data characterized by a streaming nature, unbalanced distribution, data drift over a long time frame and strong correlation of samples in short time ranges. Moreover, a clear separation between the traditional training and deployment phases is usually lacking. This data organization and fruition represents an interesting and challenging scenario for both traditional Machine and Deep Learning algorithms and incremental learning agents, i.e., agents that have the ability to incrementally improve their knowledge through past experience. In this paper, we investigate the classification performance of a variety of algorithms that belong to various research fields, i.e., Continual, Streaming and Online Learning, that receive Natural Data Streams as training input.
The experimental validation is carried out on three different datasets, expressly organized to replicate this challenging setting.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 388, + "label": 24, + "text": "Title: Revisiting Weighted Strategy for Non-stationary Parametric Bandits\nAbstract: Non-stationary parametric bandits have attracted much attention recently. There are three principled ways to deal with non-stationarity, including sliding-window, weighted, and restart strategies. As many non-stationary environments exhibit gradual drifting patterns, the weighted strategy is commonly adopted in real-world applications. However, previous theoretical studies show that its analysis is more involved and the algorithms are either computationally less efficient or statistically suboptimal. This paper revisits the weighted strategy for non-stationary parametric bandits. In linear bandits (LB), we discover that this undesirable feature is due to an inadequate regret analysis, which results in an overly complex algorithm design. We propose a refined analysis framework, which simplifies the derivation and importantly produces a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies. Furthermore, our new framework can be used to improve regret bounds of other parametric bandits, including Generalized Linear Bandits (GLB) and Self-Concordant Bandits (SCB). For example, we develop a simple weighted GLB algorithm with an $\\widetilde{O}(k_\\mu^{\\frac{5}{4}} c_\\mu^{-\\frac{3}{4}} d^{\\frac{3}{4}} P_T^{\\frac{1}{4}}T^{\\frac{3}{4}})$ regret, improving the $\\widetilde{O}(k_\\mu^{2} c_\\mu^{-1}d^{\\frac{9}{10}} P_T^{\\frac{1}{5}}T^{\\frac{4}{5}})$ bound in prior work, where $k_\\mu$ and $c_\\mu$ characterize the reward model's nonlinearity, $P_T$ measures the non-stationarity, $d$ and $T$ denote the dimension and time horizon.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 389, + "label": 24, + "text": "Title: Understanding Concept Identification as Consistent Data Clustering Across Multiple Feature Spaces\nAbstract: Identifying meaningful concepts in large data sets can provide valuable insights into engineering design problems. Concept identification aims at identifying non-overlapping groups of design instances that are similar in a joint space of all features, but which are also similar when considering only subsets of features. These subsets usually comprise features that characterize a design with respect to one specific context, for example, constructive design parameters, performance values, or operation modes. It is desirable to evaluate the quality of design concepts by considering several of these feature subsets in isolation. In particular, meaningful concepts should not only identify dense, well separated groups of data instances, but also provide non-overlapping groups of data that persist when considering pre-defined feature subsets separately. In this work, we propose to view concept identification as a special form of clustering algorithm with a broad range of potential applications beyond engineering design. To illustrate the differences between concept identification and classical clustering algorithms, we apply a recently proposed concept identification algorithm to two synthetic data sets and show the differences in identified solutions. 
In addition, we introduce the mutual information measure as a metric to evaluate whether solutions return consistent clusters across relevant subsets. To support the novel understanding of concept identification, we consider a simulated data set from a decision-making problem in the energy management domain and show that the identified clusters are more interpretable with respect to relevant feature subsets than clusters found by common clustering algorithms and are thus more suitable to support a decision maker.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 390, + "label": 4, + "text": "Title: Up-to-date Threat Modelling for Soft Privacy on Smart Cars\nAbstract: Physical persons playing the role of car drivers consume data that is sourced from the Internet and, at the same time, themselves act as sources of relevant data. It follows that citizens' privacy is potentially at risk while they drive, hence the need to model privacy threats in this application domain. This paper addresses the privacy threats by updating a recent threat-modelling methodology and by tailoring it specifically to the soft privacy target property, which ensures citizens' full control on their personal data. The methodology now features the sources of documentation as an explicit variable that is to be considered. It is demonstrated by including a new version of the de-facto standard LINDDUN methodology as well as an additional source by ENISA which is found to be relevant to soft privacy. The main findings are a set of 23 domain-independent threats, 43 domain-specific assets and 525 domain-dependent threats for the target property in the automotive domain. While these exceed their previous versions, their main value is to offer self-evident support to at least two arguments. One is that LINDDUN has evolved much the way our original methodology already advocated because a few of our previously suggested extensions are no longer outstanding. The other one is that ENISA's treatment of privacy aboard smart cars should be extended considerably because our 525 threats fall in the same scope.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 391, + "label": 24, + "text": "Title: DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning\nAbstract: Differential private optimization for nonconvex smooth objective is considered. In the previous work, the best known utility bound is $\\widetilde O(\\sqrt{d}/(n\\varepsilon_\\mathrm{DP}))$ in terms of the squared full gradient norm, which is achieved by Differential Private Gradient Descent (DP-GD) as an instance, where $n$ is the sample size, $d$ is the problem dimensionality and $\\varepsilon_\\mathrm{DP}$ is the differential privacy parameter. To improve the best known utility bound, we propose a new differential private optimization framework called \\emph{DIFF2 (DIFFerential private optimization via gradient DIFFerences)} that constructs a differential private global gradient estimator with possibly quite small variance based on communicated \\emph{gradient differences} rather than gradients themselves. It is shown that DIFF2 with a gradient descent subroutine achieves the utility of $\\widetilde O(d^{2/3}/(n\\varepsilon_\\mathrm{DP})^{4/3})$, which can be significantly better than the previous one in terms of the dependence on the sample size $n$. 
To the best of our knowledge, this is the first fundamental result to improve the standard utility $\\widetilde O(\\sqrt{d}/(n\\varepsilon_\\mathrm{DP}))$ for nonconvex objectives. Additionally, a more computation- and communication-efficient subroutine is combined with DIFF2, and its theoretical analysis is also given. Numerical experiments are conducted to validate the superiority of the DIFF2 framework.", + "neighbors": [ + 1347 + ], + "mask": "Train" + }, + { + "node_id": 392, + "label": 16, + "text": "Title: MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models\nAbstract: Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 12 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization.", + "neighbors": [ + 34, + 136, + 319, + 522, + 836, + 887, + 1001, + 1047, + 1052, + 1148, + 1344, + 1485, + 1537, + 1765, + 1863, + 2036, + 2064, + 2113, + 2155, + 2216 + ], + "mask": "Train" + }, + { + "node_id": 393, + "label": 3, + "text": "Title: Monitoring Gender Gaps via LinkedIn Advertising Estimates: the case study of Italy\nAbstract: Women remain underrepresented in the labour market. Although significant advancements are being made to increase female participation in the workforce, the gender gap is still far from being bridged. We contribute to the growing literature on gender inequalities in the labour market, evaluating the potential of the LinkedIn estimates to monitor the evolution of the gender gaps sustainably, complementing the official data sources. In particular, we assess labour market patterns at a subnational level in Italy. Our findings show that the LinkedIn estimates accurately capture the gender disparities in Italy regarding sociodemographic attributes such as gender, age, geographic location, seniority, and industry category. At the same time, we assess data biases such as the digitalisation gap, which impacts the representativity of the workforce in an imbalanced manner, confirming that women are under-represented in Southern Italy. In addition to confirming the gender disparities observed in the official census, LinkedIn estimates are a valuable tool for providing dynamic insights; we showed an immigration flow of highly skilled women, predominantly from the South.
Digital surveillance of gender inequalities with detailed and timely data is particularly significant to enable policymakers to tailor impactful campaigns.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 394, + "label": 16, + "text": "Title: RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution\nAbstract: Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images, which is conducive to enhancing the imaging effects of smartphones with limited sensors. The main challenge of BurstSR is to effectively combine the complementary information from input frames, while existing methods still struggle with it. In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network. In particular, we emphasize the role of the base-frame and utilize it as a key prompt to guide the knowledge acquisition from other frames in every recurrence. Moreover, we introduce an implicit weighting loss to improve the model's flexibility in facing input frames with variable numbers. Extensive experiments on both synthetic and real-world datasets demonstrate that our method achieves better results than state-of-the-art ones. Codes and pre-trained models are available at https://github.com/ZcsrenlongZ/RBSR.", + "neighbors": [ + 153, + 1019 + ], + "mask": "Test" + }, + { + "node_id": 395, + "label": 5, + "text": "Title: Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning\nAbstract: Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel processors (e.g., RISC-V), non-fully connected network topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing us to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on x86-64 and ARM platforms and an emerging RISC-V one. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.", + "neighbors": [ + 991 + ], + "mask": "Test" + }, + { + "node_id": 396, + "label": 16, + "text": "Title: ComPtr: Towards Diverse Bi-source Dense Prediction Tasks via A Simple yet General Complementary Transformer\nAbstract: Deep learning (DL) has advanced the field of dense prediction, while gradually dissolving the inherent barriers between different tasks. However, most existing works focus on designing architectures and constructing visual cues only for the specific task, which ignores the potential uniformity introduced by the DL paradigm. In this paper, we attempt to construct a novel \\underline{ComP}lementary \\underline{tr}ansformer, \\textbf{ComPtr}, for diverse bi-source dense prediction tasks. Specifically, unlike existing methods that over-specialize in a single task or a subset of tasks, ComPtr starts from the more general concept of bi-source dense prediction. 
Based on the basic dependence on information complementarity, we propose consistency enhancement and difference awareness components with which ComPtr can extract and collect important visual semantic cues from different image sources for diverse tasks, respectively. ComPtr treats different inputs equally and builds an efficient dense interaction model in the form of sequence-to-sequence on top of the transformer. This task-generic design provides a smooth foundation for constructing the unified model that can simultaneously deal with various bi-source information. In extensive experiments across several representative vision tasks, i.e., remote sensing change detection, RGB-T crowd counting, RGB-D/T salient object detection, and RGB-D semantic segmentation, the proposed method consistently obtains favorable performance. The code will be available at \\url{https://github.com/lartpang/ComPtr}.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 397, + "label": 10, + "text": "Title: Ontologies in Digital Twins: A Systematic Literature Review\nAbstract: Digital Twins (DT) facilitate monitoring and reasoning processes in cyber-physical systems. They have progressively gained popularity over the past years because of intense research activity and industrial advancements. Cognitive Twins is a novel concept, recently coined to refer to the involvement of Semantic Web technology in DTs. Recent studies address the relevance of ontologies and knowledge graphs in the context of DTs, in terms of knowledge representation, interoperability and automatic reasoning. However, there is no comprehensive analysis of how semantic technologies, and specifically ontologies, are utilized within DTs. This Systematic Literature Review (SLR) is based on the analysis of 82 research articles that either propose or benefit from ontologies with respect to DT. The paper uses different analysis perspectives, including a structural analysis based on a reference DT architecture, and an application-specific analysis to specifically address the different domains, such as Manufacturing and Infrastructure. The review also identifies open issues and possible research directions on the usage of ontologies and knowledge graphs in DTs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 398, + "label": 24, + "text": "Title: Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey\nAbstract: In graph machine learning, data collection, sharing, and analysis often involve multiple parties, each of which may require varying levels of data security and privacy. To this end, preserving privacy is of great importance in protecting sensitive information. In the era of big data, the relationships among data entities have become unprecedentedly complex, and more applications utilize advanced data structures (i.e., graphs) that can support network structures and relevant attribute information. To date, many graph-based AI models have been proposed (e.g., graph neural networks) for various domain tasks, like computer vision and natural language processing. In this paper, we focus on reviewing privacy-preserving techniques of graph machine learning. We systematically review related works from the data to the computational aspects. We first review methods for generating privacy-preserving graph data. Then we describe methods for transmitting privacy-preserved information (e.g., graph model parameters) to realize the optimization-based computation when data sharing among multiple parties is risky or impossible.
In addition to discussing relevant theoretical methodology and software tools, we also discuss current challenges and highlight several possible future research opportunities for privacy-preserving graph machine learning. Finally, we envision a unified and comprehensive secure graph machine learning system.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 399, + "label": 4, + "text": "Title: Resource-aware Cyber Deception in Cloud-Native Environments\nAbstract: Cyber deception can be a valuable addition to traditional cyber defense mechanisms, especially for modern cloud-native environments with a fading security perimeter. However, pre-built decoys used in classical computer networks are not effective in detecting and mitigating malicious actors due to their inability to blend with the variety of applications in such environments. On the other hand, decoys cloning the deployed microservices of an application can offer a high-fidelity deception mechanism to intercept ongoing attacks within production environments. However, to fully benefit from this approach, it is essential to use a limited amount of decoy resources and devise a suitable cloning strategy to minimize the impact on legitimate services performance. Following this observation, we formulate a non-linear integer optimization problem that maximizes the number of attack paths intercepted by the allocated decoys within a fixed resource budget. Attack paths represent the attacker's movements within the infrastructure as a sequence of violated microservices. We also design a heuristic decoy placement algorithm to approximate the optimal solution and overcome the computational complexity of the proposed formulation. We evaluate the performance of the optimal and heuristic solutions against other schemes that use local vulnerability metrics to select which microservices to clone as decoys. Our results show that the proposed allocation strategy achieves a higher number of intercepted attack paths compared to these schemes while requiring approximately the same number of decoys.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 400, + "label": 24, + "text": "Title: Temporal Difference Learning with Compressed Updates: Error-Feedback meets Reinforcement Learning\nAbstract: In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck. These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays. Perhaps surprisingly, despite the surge of interest in large-scale, multi-agent reinforcement learning, almost nothing is known about the analogous question: Are common reinforcement learning (RL) algorithms also robust to similar perturbations? In this paper, we investigate this question by studying a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction, where a general compression operator is used to model the perturbation. Our main technical contribution is to show that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic theoretical guarantees as their SGD counterparts. We then extend our results significantly to nonlinear stochastic approximation algorithms and multi-agent settings.
In particular, we prove that for multi-agent TD learning, one can achieve linear convergence speedups in the number of agents while communicating just $\\tilde{O}(1)$ bits per agent at each time step. Our work is the first to provide finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling. Our analysis hinges on studying the drift of a novel Lyapunov function that captures the dynamics of a memory variable introduced by error feedback.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 401, + "label": 30, + "text": "Title: Uncertainty in Natural Language Generation: From Theory to Applications\nAbstract: Recent advances of powerful Language Models have allowed Natural Language Generation (NLG) to emerge as an important technology that can not only perform traditional tasks like summarisation or translation, but also serve as a natural language interface to a variety of applications. As such, it is crucial that NLG systems are trustworthy and reliable, for example by indicating when they are likely to be wrong; and supporting multiple views, backgrounds and writing styles -- reflecting diverse human sub-populations. In this paper, we argue that a principled treatment of uncertainty can assist in creating systems and evaluation protocols better aligned with these goals. We first present the fundamental theory, frameworks and vocabulary required to represent uncertainty. We then characterise the main sources of uncertainty in NLG from a linguistic perspective, and propose a two-dimensional taxonomy that is more informative and faithful than the popular aleatoric/epistemic dichotomy. Finally, we move from theory to applications and highlight exciting research directions that exploit uncertainty to power decoding, controllable generation, self-assessment, selective answering, active learning and more.", + "neighbors": [ + 57, + 352, + 1044, + 1052, + 2235 + ], + "mask": "Train" + }, + { + "node_id": 402, + "label": 10, + "text": "Title: Integrated Conflict Management for UAM with Strategic Demand Capacity Balancing and Learning-based Tactical Deconfliction\nAbstract: Urban air mobility (UAM) has the potential to revolutionize our daily transportation, offering rapid and efficient deliveries of passengers and cargo between dedicated locations within and around the urban environment. Before the commercialization and adoption of this emerging transportation mode, however, aviation safety must be guaranteed, i.e., all the aircraft have to be safely separated by strategic and tactical deconfliction. Reinforcement learning has demonstrated effectiveness in the tactical deconfliction of en route commercial air traffic in simulation. However, its performance is found to be dependent on the traffic density. In this project, we propose a novel framework that combines demand capacity balancing (DCB) for strategic conflict management and reinforcement learning for tactical separation. By using DCB to precondition traffic to proper density levels, we show that reinforcement learning can achieve much better performance for tactical safety separation. Our results also indicate that this DCB preconditioning can allow target levels of safety to be met that are otherwise impossible. 
In addition, combining strategic DCB with reinforcement learning for tactical separation can meet these safety levels while achieving greater operational efficiency than alternative solutions.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 403, + "label": 23, + "text": "Title: Many-Objective Optimization of Non-Functional Attributes based on Refactoring of Software Models\nAbstract: Software quality estimation is a challenging and time-consuming activity, and models are crucial to face the complexity of such activity on modern software applications. In this context, software refactoring is a crucial activity within development life-cycles where requirements and functionalities rapidly evolve. One main challenge is that the improvement of distinctive quality attributes may require contrasting refactoring actions on software, as for trade-off between performance and reliability (or other non-functional attributes). In such cases, multi-objective optimization can provide the designer with a wider view on these trade-offs and, consequently, can lead to identify suitable refactoring actions that take into account independent or even competing objectives. In this paper, we present an approach that exploits NSGA-II as the genetic algorithm to search optimal Pareto frontiers for software refactoring while considering many objectives. We consider performance and reliability variations of a model alternative with respect to an initial model, the amount of performance antipatterns detected on the model alternative, and the architectural distance, which quantifies the effort to obtain a model alternative from the initial one. We applied our approach on two case studies: a Train Ticket Booking Service, and CoCoME. We observed that our approach is able to improve performance (by up to 42\\%) while preserving or even improving the reliability (by up to 32\\%) of generated model alternatives. We also observed that there exists an order of preference of refactoring actions among model alternatives. We can state that performance antipatterns confirmed their ability to improve performance of a subject model in the context of many-objective optimization. In addition, the metric that we adopted for the architectural distance seems to be suitable for estimating the refactoring effort.", + "neighbors": [ + 305, + 1328, + 1875 + ], + "mask": "Validation" + }, + { + "node_id": 404, + "label": 36, + "text": "Title: Stability of Multi-Agent Learning: Convergence in Network Games with Many Players\nAbstract: The behaviour of multi-agent learning in many player games has been shown to display complex dynamics outside of restrictive examples such as network zero-sum games. In addition, it has been shown that convergent behaviour is less likely to occur as the number of players increase. To make progress in resolving this problem, we study Q-Learning dynamics and determine a sufficient condition for the dynamics to converge to a unique equilibrium in any network game. We find that this condition depends on the nature of pairwise interactions and on the network structure, but is explicitly independent of the total number of agents in the game. 
We evaluate this result on a number of representative network games and show that, under suitable network conditions, stable learning dynamics can be achieved with an arbitrary number of agents.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 405, + "label": 2, + "text": "Title: A Complete Inference System for Skip-free Guarded Kleene Algebra with Tests\nAbstract: Guarded Kleene Algebra with Tests (GKAT) is a fragment of Kleene Algebra with Tests (KAT) that was recently introduced to reason efficiently about imperative programs. In contrast to KAT, GKAT does not have an algebraic axiomatization, but relies on an analogue of Salomaa's axiomatization of Kleene Algebra. In this paper, we present an algebraic axiomatization and prove two completeness results for a large fragment of GKAT consisting of skip-free programs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 406, + "label": 26, + "text": "Title: Random Walk on Multiple Networks\nAbstract: Random Walk is a basic algorithm to explore the structure of networks, which can be used in many tasks, such as local community detection and network embedding. Existing random walk methods are based on single networks that contain limited information. In contrast, real data often contain entities of different types and/or from different sources, which are comprehensive and can be better modeled by multiple networks. To take advantage of the rich information in multiple networks and make better inferences on entities, in this study, we propose random walk on multiple networks, $\\mathsf{RWM}$. $\\mathsf{RWM}$ is flexible and supports both multiplex networks and general multiple networks, which may form many-to-many node mappings between networks. $\\mathsf{RWM}$ sends a random walker on each network to obtain the local proximity (i.e., node visiting probabilities) w.r.t. the starting nodes. Walkers with similar visiting probabilities reinforce each other. We theoretically analyze the convergence properties of $\\mathsf{RWM}$. Two approximation methods with theoretical performance guarantees are proposed for efficient computation. We apply $\\mathsf{RWM}$ in link prediction, network embedding, and local community detection. Comprehensive experiments conducted on both synthetic and real-world datasets demonstrate the effectiveness and efficiency of $\\mathsf{RWM}$.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 407, + "label": 2, + "text": "Title: Bottom-Up Stratified Probabilistic Logic Programming with Fusemate\nAbstract: This paper introduces the Fusemate probabilistic logic programming system. Fusemate's inference engine comprises a grounding component and a variable elimination method for probabilistic inference. Fusemate differs from most other systems by grounding the program in a bottom-up way instead of the common top-down way. While bottom-up grounding is attractive for a number of reasons, e.g., for dynamically creating distributions of varying support sizes, it makes it harder to control the number of ground clauses generated. We address this problem by interleaving grounding with a query-guided relevance test which prunes rules whose bodies are inconsistent with the query. We present our method in detail and demonstrate it with examples that involve \"time\", such as (hidden) Markov models.
Our experiments demonstrate competitive or better performance compared to a state-of-the-art probabilistic logic programming system, in particular for high-branching problems.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 408, + "label": 6, + "text": "Title: DeepLens: Interactive Out-of-distribution Data Detection in NLP Models\nAbstract: Machine Learning (ML) has been widely used in Natural Language Processing (NLP) applications. A fundamental assumption in ML is that training data and real-world data should follow a similar distribution. However, a deployed ML model may suffer from out-of-distribution (OOD) issues due to distribution shifts in the real-world data. Though many algorithms have been proposed to detect OOD data from text corpora, there is still a lack of interactive tool support for ML developers. In this work, we propose DeepLens, an interactive system that helps users detect and explore OOD issues in massive text corpora. Users can efficiently explore different OOD types in DeepLens with the help of a text clustering method. Users can also dig into a specific text by inspecting salient words highlighted through neuron activation analysis. In a within-subjects user study with 24 participants, participants using DeepLens were able to accurately find nearly twice as many types of OOD issues, with 22% more confidence, compared with a variant of DeepLens that has no interaction or visualization support.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 409, + "label": 6, + "text": "Title: Design and Assessment of a Bimanual Haptic Epidural Needle Insertion Simulator.\nAbstract: The case experience of anesthesiologists is one of the leading causes of accidental dural punctures and failed epidurals - the most common complications of epidural analgesia used for pain relief during delivery. We designed a bimanual haptic simulator to train anesthesiologists and optimize epidural analgesia skill acquisition. We present an assessment study conducted with 22 anesthesiologists of different competency levels from several Israeli hospitals. Our simulator emulates the forces applied to the epidural (Touhy) needle, held by one hand, and those applied to the Loss of Resistance (LOR) syringe, held by the other one. The resistance is calculated based on a model of the epidural region layers parameterized by the weight of the patient. We measured the movements of both haptic devices and quantified the outcome rates (success, failed epidurals, and dural punctures), insertion strategies, and the participants' answers to questionnaires about their perception of the simulation realism. We demonstrated good construct validity by showing that the simulator can distinguish between real-life novices and experts. Face and content validity were examined by studying users' impressions regarding the simulator's realism and fulfillment of purpose. We found differences in strategies between anesthesiologists of different levels, and suggest trainee-based instruction in advanced training stages.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 410, + "label": 10, + "text": "Title: Representing Timed Automata and Timing Anomalies of Cyber-Physical Production Systems in Knowledge Graphs\nAbstract: Model-Based Anomaly Detection has been a successful approach to identify deviations from the expected behavior of Cyber-Physical Production Systems.
Since manual creation of these models is a time-consuming process, it is advantageous to learn them from data and represent them in a generic formalism like timed automata. However, these models - and by extension, the detected anomalies - can be challenging to interpret due to a lack of additional information about the system. This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system. Both the model and the detected anomalies are described in the knowledge graph in order to allow operators an easier interpretation of the model and the detected anomalies. The authors additionally propose an ontology of the necessary concepts. The approach was validated on a five-tank mixing CPPS and was able to formally define both the automata model and timing anomalies in automata execution.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 411, + "label": 26, + "text": "Title: Social Media Analytics in Disaster Response: A Comprehensive Review\nAbstract: Social media has emerged as a valuable resource for disaster management, revolutionizing the way emergency response and recovery efforts are conducted during natural disasters. This review paper aims to provide a comprehensive analysis of social media analytics for disaster management. The abstract begins by highlighting the increasing prevalence of natural disasters and the need for effective strategies to mitigate their impact. It then emphasizes the growing influence of social media in disaster situations, discussing its role in disaster detection, situational awareness, and emergency communication. The abstract explores the challenges and opportunities associated with leveraging social media data for disaster management purposes. It examines methodologies and techniques used in social media analytics, including data collection, preprocessing, and analysis, with a focus on data mining and machine learning approaches. The abstract also presents a thorough examination of case studies and best practices that demonstrate the successful application of social media analytics in disaster response and recovery. Ethical considerations and privacy concerns related to the use of social media data in disaster scenarios are addressed. The abstract concludes by identifying future research directions and potential advancements in social media analytics for disaster management. The review paper aims to provide practitioners and researchers with a comprehensive understanding of the current state of social media analytics in disaster management, while highlighting the need for continued research and innovation in this field.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 412, + "label": 30, + "text": "Title: Syntactically Robust Training on Partially-Observed Data for Open Information Extraction\nAbstract: Open Information Extraction models have shown promising results with sufficient supervision. However, these models face a fundamental challenge that the syntactic distribution of training data is partially observable in comparison to the real world. In this paper, we propose a syntactically robust training framework that enables models to be trained on a syntactically abundant distribution based on diverse paraphrase generation. To tackle the intrinsic problem of knowledge deformation of paraphrasing, two algorithms based on semantic similarity matching and syntactic tree walking are used to restore the expressionally transformed knowledge.
The training framework can be generally applied to other syntactically partially observable domains. Based on the proposed framework, we build a new evaluation set called CaRB-AutoPara, a syntactically diverse dataset consistent with the real-world setting for validating the robustness of the models. Experiments including a thorough analysis show that the performance of the model degrades as the difference in syntactic distribution increases, while our framework gives a robust boundary. The source code is publicly available at https://github.com/qijimrc/RobustOIE.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 413, + "label": 24, + "text": "Title: Deep Learning From Crowdsourced Labels: Coupled Cross-entropy Minimization, Identifiability, and Regularization\nAbstract: Using noisy crowdsourced labels from multiple annotators, a deep learning-based end-to-end (E2E) system aims to learn the label correction mechanism and the neural classifier simultaneously. To this end, many E2E systems concatenate the neural classifier with multiple annotator-specific ``label confusion'' layers and co-train the two parts in a parameter-coupled manner. The formulated coupled cross-entropy minimization (CCEM)-type criteria are intuitive and work well in practice. Nonetheless, theoretical understanding of the CCEM criterion has been limited. The contribution of this work is twofold: First, performance guarantees of the CCEM criterion are presented. Our analysis reveals for the first time that the CCEM can indeed correctly identify the annotators' confusion characteristics and the desired ``ground-truth'' neural classifier under realistic conditions, e.g., when only incomplete annotator labeling and finite samples are available. Second, based on the insights learned from our analysis, two regularized variants of the CCEM are proposed. The regularization terms provably enhance the identifiability of the target model parameters in various more challenging cases. A series of synthetic and real data experiments are presented to showcase the effectiveness of our approach.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 414, + "label": 25, + "text": "Title: AudioLDM: Text-to-Audio Generation with Latent Diffusion Models\nAbstract: Text-to-audio (TTA) systems have recently gained attention for their ability to synthesize general audio based on text descriptions. However, previous studies in TTA suffered from limited generation quality and high computational costs. In this study, we propose AudioLDM, a TTA system that is built on a latent space to learn the continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embedding while providing text embedding as a condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., Fréchet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion.
Our implementation and demos are available at https://audioldm.github.io.", + "neighbors": [ + 736, + 1156, + 1307, + 1958, + 2103 + ], + "mask": "Train" + }, + { + "node_id": 415, + "label": 10, + "text": "Title: Towards a Better Understanding of Learning with Multiagent Teams\nAbstract: While it has long been recognized that a team of individual learning agents can be greater than the sum of its parts, recent work has shown that larger teams are not necessarily more effective than smaller ones. In this paper, we study why and under which conditions certain team structures promote effective learning for a population of individual learning agents. We show that, depending on the environment, some team structures help agents learn to specialize into specific roles, resulting in more favorable global results. However, large teams create credit assignment challenges that reduce coordination, leading to large teams performing poorly compared to smaller ones. We support our conclusions with both theoretical analysis and empirical results.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 416, + "label": 10, + "text": "Title: LogicRec: Recommendation with Users' Logical Requirements\nAbstract: Users may demand recommendations with highly personalized requirements involving logical operations, e.g., the intersection of two requirements, where such requirements naturally form structured logical queries on knowledge graphs (KGs). To date, existing recommender systems lack the capability to tackle users' complex logical requirements. In this work, we formulate the problem of recommendation with users' logical requirements (LogicRec) and construct benchmark datasets for LogicRec. Furthermore, we propose an initial solution for LogicRec based on logical requirement retrieval and user preference retrieval, where we face two challenges. First, KGs are incomplete in nature. Therefore, there are always missing true facts, which entails that the answers to logical requirements can not be completely found in KGs. In this case, item selection based on the answers to logical queries is not applicable. We thus resort to logical query embedding (LQE) to jointly infer missing facts and retrieve items based on logical requirements. Second, answer sets are under-exploited. Existing LQE methods can only deal with query-answer pairs, where queries in our case are the intersected user preferences and logical requirements. However, the logical requirements and user preferences have different answer sets, offering us richer knowledge about the requirements and preferences by providing requirement-item and preference-item pairs. Thus, we design a multi-task knowledge-sharing mechanism to exploit these answer sets collectively. Extensive experimental results demonstrate the significance of the LogicRec task and the effectiveness of our proposed method.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 417, + "label": 30, + "text": "Title: Probing Taxonomic and Thematic Embeddings for Taxonomic Information\nAbstract: Modelling taxonomic and thematic relatedness is important for building AI with comprehensive natural language understanding. The goal of this paper is to learn more about how taxonomic information is structurally encoded in embeddings. To do this, we design a new hypernym-hyponym probing task and perform a comparative probing study of taxonomic and thematic SGNS and GloVe embeddings.
Our experiments indicate that both types of embeddings encode some taxonomic information, but the amount, as well as the geometric properties of the encodings, are independently related to both the encoder architecture and the embedding training data. Specifically, we find that only taxonomic embeddings carry taxonomic information in their norm, which is determined by the underlying distribution in the data.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 418, + "label": 24, + "text": "Title: STORM-GAN: Spatio-Temporal Meta-GAN for Cross-City Estimation of Human Mobility Responses to COVID-19\nAbstract: Human mobility estimation is crucial during the COVID-19 pandemic due to its significant guidance for policymakers to make non-pharmaceutical interventions. While deep learning approaches outperform conventional estimation techniques on tasks with abundant training data, the continuously evolving pandemic poses a significant challenge to solving this problem due to data non-stationarity, limited observations, and complex social contexts. Prior works on mobility estimation either focus on a single city or lack the ability to model the spatio-temporal dependencies across cities and time periods. To address these issues, we make the first attempt to tackle the cross-city human mobility estimation problem through a deep meta-generative framework. We propose a Spatio-Temporal Meta-Generative Adversarial Network (STORM-GAN) model that estimates dynamic human mobility responses under a set of social and policy conditions related to COVID-19. Facilitated by a novel spatio-temporal task-based graph (STTG) embedding, STORM-GAN is capable of learning shared knowledge from a spatio-temporal distribution of estimation tasks and quickly adapting to new cities and time periods with limited training samples. The STTG embedding component is designed to capture the similarities among cities to mitigate cross-task heterogeneity. Experimental results on real-world data show that the proposed approach can greatly improve estimation performance and outperform baselines.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 419, + "label": 16, + "text": "Title: Enhancing General Face Forgery Detection via Vision Transformer with Low-Rank Adaptation\nAbstract: Nowadays, forged faces pose pressing security concerns over fake news, fraud, impersonation, etc. Despite the demonstrated success in intra-domain face forgery detection, existing detection methods lack generalization capability and tend to suffer from dramatic performance drops when deployed to unforeseen domains. To mitigate this issue, this paper designs a more general fake face detection model based on the vision transformer (ViT) architecture. In the training phase, the pretrained ViT weights are frozen, and only the Low-Rank Adaptation (LoRA) modules are updated. Additionally, the Single Center Loss (SCL) is applied to supervise the training process, further improving the generalization capability of the model. The proposed method achieves state-of-the-art detection performance in both cross-manipulation and cross-dataset evaluations.", + "neighbors": [ + 1314 + ], + "mask": "Train" + }, + { + "node_id": 420, + "label": 10, + "text": "Title: Alien Coding\nAbstract: We introduce a self-learning algorithm for synthesizing programs for OEIS sequences. The algorithm starts from scratch, initially generating programs at random.
Then it runs many iterations of a self-learning loop that interleaves (i) training neural machine translation to learn the correspondence between sequences and the programs discovered so far, and (ii) using the trained neural machine translator to propose many new programs for each OEIS sequence. The algorithm discovers, on its own, programs for more than 78,000 OEIS sequences, sometimes developing unusual programming methods. We analyze its behavior and the invented programs in several experiments.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 421, + "label": 16, + "text": "Title: Combating Online Misinformation Videos: Characterization, Detection, and Future Directions\nAbstract: With information consumption via online video streaming becoming increasingly popular, misinformation video poses a new threat to the health of the online information ecosystem. Though previous studies have made much progress in detecting misinformation in text and image formats, video-based misinformation brings new and unique challenges to automatic detection systems: 1) high information heterogeneity brought by various modalities, 2) blurred distinction between misleading video manipulation and nonmalicious artistic video editing, and 3) new patterns of misinformation propagation due to the dominant role of recommendation systems on online video platforms. To facilitate research on this challenging task, we conduct this survey to present advances in misinformation video detection. We first analyze and characterize the misinformation video from three levels including signals, semantics, and intents. Based on the characterization, we systematically review existing works for detection from features of various modalities to techniques for clue integration. We also introduce existing resources including representative datasets and useful tools. Besides summarizing existing studies, we discuss related areas and outline open issues and future directions to encourage and guide more research on misinformation video detection. The corresponding repository is at https://github.com/ICTMCG/Awesome-Misinfo-Video-Detection.", + "neighbors": [ + 2136 + ], + "mask": "Train" + }, + { + "node_id": 422, + "label": 16, + "text": "Title: Diffusion-Based 3D Human Pose Estimation with Multi-Hypothesis Aggregation\nAbstract: In this paper, a novel Diffusion-based 3D Pose estimation (D3DP) method with Joint-wise reProjection-based Multi-hypothesis Aggregation (JPMA) is proposed for probabilistic 3D human pose estimation. On the one hand, D3DP generates multiple possible 3D pose hypotheses for a single 2D observation. It gradually diffuses the ground truth 3D poses to a random distribution, and learns a denoiser conditioned on 2D keypoints to recover the uncontaminated 3D poses. The proposed D3DP is compatible with existing 3D pose estimators and allows users to balance efficiency and accuracy during inference through two customizable parameters. On the other hand, JPMA is proposed to assemble multiple hypotheses generated by D3DP into a single 3D pose for practical use. It reprojects 3D pose hypotheses to the 2D camera plane, selects the best hypothesis joint-by-joint based on the reprojection errors, and combines the selected joints into the final pose. The proposed JPMA conducts aggregation at the joint level and makes use of the 2D prior information, both of which have been overlooked by previous approaches.
Extensive experiments on Human3.6M and MPI-INF-3DHP datasets show that our method outperforms the state-of-the-art deterministic and probabilistic approaches by 1.5% and 8.9%, respectively. Code is available at https://github.com/paTRICK-swk/D3DP.", + "neighbors": [ + 158 + ], + "mask": "Train" + }, + { + "node_id": 423, + "label": 24, + "text": "Title: Causal Reasoning in the Presence of Latent Confounders via Neural ADMG Learning\nAbstract: Latent confounding has been a long-standing obstacle for causal reasoning from observational data. One popular approach is to model the data using acyclic directed mixed graphs (ADMGs), which describe ancestral relations between variables using directed and bidirected edges. However, existing methods using ADMGs are based on either linear functional assumptions or a discrete search that is complicated to use and lacks computational tractability for large datasets. In this work, we further extend the existing body of work and develop a novel gradient-based approach to learning an ADMG with non-linear functional relations from observational data. We first show that the presence of latent confounding is identifiable under the assumptions of bow-free ADMGs with non-linear additive noise models. With this insight, we propose a novel neural causal model based on autoregressive flows for ADMG learning. This not only enables us to determine complex causal structural relationships behind the data in the presence of latent confounding, but also estimate their functional relationships (hence treatment effects) simultaneously. We further validate our approach via experiments on both synthetic and real-world datasets, and demonstrate the competitive performance against relevant baselines.", + "neighbors": [ + 1189 + ], + "mask": "Train" + }, + { + "node_id": 424, + "label": 31, + "text": "Title: Recommender Systems in the Era of Large Language Models (LLMs)\nAbstract: With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of our daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulties in understanding users' interests and capturing textual side information, an inability to generalize to various recommendation scenarios and to reason about their predictions, etc. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT-4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), due to their remarkable abilities in the fundamental tasks of language understanding and generation, as well as impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction in recommender systems, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, to provide researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
More specifically, we first introduce representative methods to harness the power of LLMs (as a feature encoder) for learning representations of users and items. Then, we review recent techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.", + "neighbors": [ + 36, + 840, + 1001, + 1182, + 1238, + 1327, + 1560, + 1611, + 1667, + 1762, + 1863, + 2013, + 2113 + ], + "mask": "Train" + }, + { + "node_id": 425, + "label": 24, + "text": "Title: Federated Learning for Water Consumption Forecasting in Smart Cities\nAbstract: Water consumption remains a major concern among the world\u2019s future challenges. For applications like load monitoring and demand response, deep learning models are trained using enormous volumes of consumption data in smart cities. On the one hand, the information used is private. For instance, the precise information gathered by a smart meter that is a part of the system\u2019s IoT architecture at a consumer\u2019s residence may give details about the appliances and, consequently, the consumer\u2019s behavior at home. On the other hand, enormous data volumes with sufficient variation are needed for the deep learning models to be trained properly. This paper introduces a novel model for water consumption prediction in smart cities while preserving privacy regarding monthly consumption. The proposed approach leverages federated learning (FL) as a machine learning paradigm designed to train a machine learning model in a distributed manner while avoiding sharing the users\u2019 data with a central training facility. In addition, this approach promises to reduce overhead by decreasing the frequency of data transmission between the users and the central entity. Extensive simulations illustrate that the proposed approach shows an enhancement in predicting water consumption for different households.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 426, + "label": 5, + "text": "Title: Managing data replication and distribution in the fog with FReD\nAbstract: The heterogeneous, geographically distributed infrastructure of fog computing poses challenges in data replication, data distribution, and data mobility for fog applications. Fog computing is still missing the necessary abstractions to manage application data, and fog application developers need to re\u2010implement data management for every new piece of software. Proposed solutions are limited to certain application domains, such as the IoT, are not flexible in regard to network topology, or do not provide the means for applications to control the movement of their data. In this paper, we present FReD, a data replication middleware for the fog. FReD serves as a building block for configurable fog data distribution and enables low\u2010latency, high\u2010bandwidth, and privacy\u2010sensitive applications. FReD is a common data access interface across heterogeneous infrastructure and network topologies, provides transparent and controllable data distribution, and can be integrated with applications from different domains.
To evaluate our approach, we present a prototype implementation of FReD and show the benefits of developing with FReD using three case studies of fog computing applications.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 427, + "label": 16, + "text": "Title: DINOv2: Learning Robust Visual Features without Supervision\nAbstract: The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels.", + "neighbors": [ + 1052, + 1211, + 1926, + 2031, + 2093, + 2216 + ], + "mask": "Train" + }, + { + "node_id": 428, + "label": 16, + "text": "Title: DUAW: Data-free Universal Adversarial Watermark against Stable Diffusion Customization\nAbstract: Stable Diffusion (SD) customization approaches enable users to personalize SD model outputs, greatly enhancing the flexibility and diversity of AI art. However, they also allow individuals to plagiarize specific styles or subjects from copyrighted images, which raises significant concerns about potential copyright infringement. To address this issue, we propose an invisible data-free universal adversarial watermark (DUAW), aiming to protect a myriad of copyrighted images from different customization approaches across various versions of SD models. First, DUAW is designed to disrupt the variational autoencoder during SD customization. Second, DUAW operates in a data-free context, where it is trained on synthetic images produced by a Large Language Model (LLM) and a pretrained SD model. This approach circumvents the necessity of directly handling copyrighted images, thereby preserving their confidentiality. Once crafted, DUAW can be imperceptibly integrated into massive copyrighted images, serving as a protective measure by inducing significant distortions in the images generated by customized SD models. Experimental results demonstrate that DUAW can effectively distort the outputs of fine-tuned SD models, rendering them discernible to both human observers and a simple classifier.", + "neighbors": [ + 887, + 1237, + 1730, + 1731, + 2154 + ], + "mask": "Train" + }, + { + "node_id": 429, + "label": 4, + "text": "Title: Game Theory in Distributed Systems Security: Foundations, Challenges, and Future Directions\nAbstract: Many of our critical infrastructure systems and personal computing systems have a distributed computing systems structure. 
The incentives to attack them have been growing rapidly, as has their attack surface, due to increasing levels of connectedness. Therefore, we feel it is time to bring in rigorous reasoning to secure such systems. The distributed system security and the game theory technical communities can come together to effectively address this challenge. In this article, we lay out the foundations from each that we can build upon to achieve our goals. Next, we describe a set of research challenges for the community, organized into three categories -- analytical, systems, and integration challenges, each with \"short term\" time horizon (2-3 years) and \"long term\" (5-10 years) items. This article was conceived of through a community discussion at the 2022 NSF SaTC PI meeting.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 430, + "label": 30, + "text": "Title: Pretraining Language Models with Human Preferences\nAbstract: Language models (LMs) are pretrained to imitate internet text, including content that would violate human preferences if generated by an LM: falsehoods, offensive comments, personally identifiable information, low-quality or buggy code, and more. Here, we explore alternative objectives for pretraining LMs in a way that also guides them to generate text aligned with human preferences. We benchmark five objectives for pretraining with human feedback across three tasks and study how they affect the trade-off between alignment and capabilities of pretrained LMs. We find a Pareto-optimal and simple approach among those we explored: conditional training, or learning a distribution over tokens conditional on their human preference scores given by a reward model. Conditional training reduces the rate of undesirable content by up to an order of magnitude, both when generating without a prompt and with an adversarially-chosen prompt. Moreover, conditional training maintains the downstream task performance of standard LM pretraining, both before and after task-specific finetuning. Pretraining with human feedback results in much better preference satisfaction than standard LM pretraining followed by finetuning with feedback, i.e., learning and then unlearning undesirable behavior. Our results suggest that we should move beyond imitation learning when pretraining LMs and incorporate human preferences from the start of training.", + "neighbors": [ + 126, + 899, + 1114, + 1237, + 1490, + 1647, + 1735, + 2016, + 2235, + 2257, + 2258, + 2305 + ], + "mask": "Train" + }, + { + "node_id": 431, + "label": 16, + "text": "Title: Incomplete Multimodal Learning for Remote Sensing Data Fusion\nAbstract: The mechanism of connecting multimodal signals through self-attention operation is a key factor in the success of multimodal Transformer networks in remote sensing data fusion tasks. However, traditional approaches assume access to all modalities during both training and inference, which can lead to severe degradation when dealing with modal-incomplete inputs in downstream applications. To address this limitation, our proposed approach introduces a novel model for incomplete multimodal learning in the context of remote sensing data fusion. This approach can be used in both supervised and self-supervised pretraining paradigms and leverages the additional learned fusion tokens in combination with Bi-LSTM attention and masked self-attention mechanisms to collect multimodal signals.
The proposed approach employs reconstruction and contrastive loss to facilitate fusion in pre-training while allowing for random modality combinations as inputs in network training. Our approach delivers state-of-the-art performance on two multimodal datasets for tasks such as building instance/semantic segmentation and land-cover mapping when dealing with incomplete inputs during inference.", + "neighbors": [ + 2228 + ], + "mask": "Train" + }, + { + "node_id": 432, + "label": 23, + "text": "Title: Predicting Defective Visual Code Changes in a Multi-Language AAA Video Game Project\nAbstract: Video game development increasingly relies on using visual programming languages as the primary way to build video game features. The aim of using visual programming is to move game logic into the hands of game designers, who may not be as well versed in textual coding. In this paper, we empirically observe that there are more defect-inducing commits containing visual code than textual code in a AAA video game project codebase. This indicates that the existing textual code Just-in-Time (JIT) defect prediction models under evaluation by Electronic Arts (EA) may be ineffective as they do not account for changes in visual code. Thus, we focus our research on constructing visual code defect prediction models that encompass visual code metrics and evaluate the models against defect prediction models that use language-agnostic features, and textual code metrics. We test our models using features extracted from the historical codebase of a AAA video game project, as well as the historical codebases of 70 open source projects that use textual and visual code. We find that defect prediction models have better performance overall in terms of the area under the ROC curve (AUC) and the Matthews Correlation Coefficient (MCC) when incorporating visual code features for projects that contain more commits with visual code than textual code.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 433, + "label": 34, + "text": "Title: The Complexity of Distributed Approximation of Packing and Covering Integer Linear Programs\nAbstract: In this paper, we present a low-diameter decomposition algorithm in the LOCAL model of distributed computing that succeeds with probability $1 - 1/\\mathrm{poly}(n)$. Specifically, we show how to compute an $(\\epsilon, O(\\log n / \\epsilon))$ low-diameter decomposition in $O(\\log^3(1/\\epsilon) \\log n / \\epsilon)$ rounds. Further developing our techniques, we show new distributed algorithms for approximating general packing and covering integer linear programs in the LOCAL model. For packing problems, our algorithm finds a $(1 - \\epsilon)$-approximate solution in $O(\\log^3(1/\\epsilon) \\log n / \\epsilon)$ rounds with probability $1 - 1/\\mathrm{poly}(n)$. For covering problems, our algorithm finds a $(1 + \\epsilon)$-approximate solution in $O((\\log \\log n + \\log(1/\\epsilon))^3 \\log n / \\epsilon)$ rounds with probability $1 - 1/\\mathrm{poly}(n)$. These results improve upon the previous $O(\\log^3 n / \\epsilon)$-round algorithm by Ghaffari, Kuhn, and Maus [STOC 2017], which is based on network decompositions.
Our algorithms are near-optimal for many fundamental combinatorial graph optimization problems in the LOCAL model, such as minimum vertex cover and minimum dominating set, as their $(1 \\pm \\epsilon)$-approximate solutions require $\\Omega(\\log n / \\epsilon)$ rounds to compute.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 434, + "label": 4, + "text": "Title: A Survey on Cross-Architectural IoT Malware Threat Hunting\nAbstract: In recent years, the increase in non-Windows malware threats has turned the focus of the cybersecurity community. Research on hunting Windows PE-based malware is maturing, whereas the developments on Linux malware threat hunting are relatively scarce. With the advent of the Internet of Things (IoT) era, smart devices that are getting integrated into human life have become a hackers\u2019 highway for their malicious activities. The IoT devices employ various Unix-based architectures that follow ELF (Executable and Linkable Format) as their standard binary file specification. This study aims at providing a comprehensive survey on the latest developments in cross-architectural IoT malware detection and classification approaches. Aided by a modern taxonomy, we discuss the feature representations, feature extraction techniques, and machine learning models employed in the surveyed works. We further provide more insights on the practical challenges involved in cross-architectural IoT malware threat hunting and discuss various avenues to instill potential future research.", + "neighbors": [ + 1127 + ], + "mask": "Train" + }, + { + "node_id": 435, + "label": 24, + "text": "Title: A new Gradient TD Algorithm with only One Step-size: Convergence Rate Analysis using L-\u03bb Smoothness\nAbstract: Gradient Temporal Difference (GTD) algorithms (Sutton et al., 2008, 2009) are the first $O(d)$ ($d$ is the number of features) algorithms that have convergence guarantees for off-policy learning with linear function approximation. Liu et al. (2015) and Dalal et al. (2018) proved the convergence rates of GTD, GTD2 and TDC are $O(t^{-\\alpha/2})$ for some $\\alpha \\in (0,1)$. This bound is tight (Dalal et al., 2020), and slower than $O(1/\\sqrt{t})$. GTD algorithms also have two step-size parameters, which are difficult to tune. In the literature, there is a \"single-time-scale\" formulation of GTD. However, this formulation still has two step-size parameters. This paper presents a truly single-time-scale GTD algorithm for minimizing the Norm of Expected TD Update (NEU) objective, and it has only one step-size parameter. We prove that the new algorithm, called Impression GTD, converges at least as fast as $O(1/t)$. Furthermore, based on a generalization of the expected smoothness (Gower et al. 2019), called $L$-$\\lambda$ smoothness, we are able to prove that the new GTD converges even faster, in fact, with a linear rate. Our rate actually also improves Gower et al.'s result with a tighter bound under a weaker assumption. Besides Impression GTD, we also prove the rates of three other GTD algorithms, one by Yao and Liu (2008), another called A-transpose-TD (Sutton et al., 2008), and a counterpart of A-transpose-TD. The convergence rates of all four GTD algorithms are proved in a single generic GTD framework to which $L$-$\\lambda$ smoothness applies.
Empirical results on random walks, the Boyan chain, and the Baird counterexample show that Impression GTD converges much faster than existing GTD algorithms for both on-policy and off-policy learning problems, with step-sizes that perform well over a wide range.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 436, + "label": 16, + "text": "Title: ReVersion: Diffusion-Based Relation Inversion from Images\nAbstract: Diffusion models have gained increasing popularity for their generative capabilities. Recently, there have been surging needs to generate customized images by inverting diffusion models from exemplar images. However, existing inversion methods mainly focus on capturing object appearances. How to invert object relations, another important pillar in the visual world, remains unexplored. In this work, we propose ReVersion for the Relation Inversion task, which aims to learn a specific relation (represented as a \"relation prompt\") from exemplar images. Specifically, we learn a relation prompt from a frozen pre-trained text-to-image diffusion model. The learned relation prompt can then be applied to generate relation-specific images with new objects, backgrounds, and styles. Our key insight is the \"preposition prior\" - real-world relation prompts can be sparsely activated upon a set of basis prepositional words. Specifically, we propose a novel relation-steering contrastive learning scheme to impose two critical properties of the relation prompt: 1) The relation prompt should capture the interaction between objects, enforced by the preposition prior. 2) The relation prompt should be disentangled away from object appearances. We further devise relation-focal importance sampling to emphasize high-level interactions over low-level appearances (e.g., texture, color). To comprehensively evaluate this new task, we contribute ReVersion Benchmark, which provides various exemplar images with diverse relations. Extensive experiments validate the superiority of our approach over existing methods across a wide range of visual relations.", + "neighbors": [ + 260, + 1582 + ], + "mask": "Validation" + }, + { + "node_id": 437, + "label": 27, + "text": "Title: Towards a Safe Real-Time Motion Planning Framework for Autonomous Driving Systems: An MPPI Approach\nAbstract: Planning safe trajectories in Autonomous Driving Systems (ADS) is a complex problem to solve in real-time. The main challenge to solve this problem arises from the various conditions and constraints imposed by road geometry, semantics and traffic rules, as well as the presence of dynamic agents. Recently, Model Predictive Path Integral (MPPI) has been shown to be an effective framework for optimal motion planning and control in robot navigation in unstructured and highly uncertain environments. In this paper, we formulate the motion planning problem in ADS as a nonlinear stochastic dynamic optimization problem that can be solved using an MPPI strategy. The main technical contribution of this work is a method to handle obstacles within the MPPI formulation safely. In this method, obstacles are approximated by circles that can be easily integrated into the MPPI cost formulation while considering safety margins. The proposed MPPI framework has been efficiently implemented in our autonomous vehicle and experimentally validated using three different primitive scenarios. Experimental results show that generated trajectories are safe, feasible and perfectly achieve the planning objective.
The video results as well as the open-source implementation are available at: https://gitlab.uni.lu/360lab-public/mppi", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 438, + "label": 16, + "text": "Title: VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection\nAbstract: The recent contrastive language-image pre-training (CLIP) model has shown great success in a wide range of image-level tasks, revealing a remarkable ability to learn powerful visual representations with rich semantics. An open and worthwhile problem is efficiently adapting such a strong model to the video domain and designing a robust video anomaly detector. In this work, we propose VadCLIP, a new paradigm for weakly supervised video anomaly detection (WSVAD) by leveraging the frozen CLIP model directly without any pre-training and fine-tuning process. Unlike current works that directly feed extracted features into the weakly supervised classifier for frame-level binary classification, VadCLIP makes full use of fine-grained associations between vision and language on the strength of CLIP and involves a dual branch. One branch simply utilizes visual features for coarse-grained binary classification, while the other fully leverages the fine-grained language-image alignment. With the benefit of the dual branch, VadCLIP achieves both coarse-grained and fine-grained video anomaly detection by transferring pre-trained knowledge from CLIP to the WSVAD task. We conduct extensive experiments on two commonly-used benchmarks, demonstrating that VadCLIP achieves the best performance on both coarse-grained and fine-grained WSVAD, surpassing the state-of-the-art methods by a large margin. Specifically, VadCLIP achieves 84.51% AP and 88.02% AUC on XD-Violence and UCF-Crime, respectively. Code and features will be released to facilitate future VAD research.", + "neighbors": [ + 2115 + ], + "mask": "Train" + }, + { + "node_id": 439, + "label": 24, + "text": "Title: Primal-Dual Contextual Bayesian Optimization for Control System Online Optimization with Time-Average Constraints\nAbstract: This paper studies the problem of online performance optimization of constrained closed-loop control systems, where both the objective and the constraints are unknown black-box functions affected by exogenous time-varying contextual disturbances. A primal-dual contextual Bayesian optimization algorithm is proposed that achieves sublinear cumulative regret with respect to the dynamic optimal solution under certain regularity conditions. Furthermore, the algorithm achieves zero time-average constraint violation, ensuring that the average value of the constraint function satisfies the desired constraint. The method is applied to both sampled instances from Gaussian processes and a continuous stirred tank reactor parameter tuning problem; simulation results show that the method simultaneously provides close-to-optimal performance and maintains constraint feasibility on average. This contrasts with current state-of-the-art methods, which either suffer from large cumulative regret or severe constraint violations for the case studies presented.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 440, + "label": 24, + "text": "Title: Latent Dynamics Networks (LDNets): learning the intrinsic dynamics of spatio-temporal processes\nAbstract: Predicting the evolution of systems that exhibit spatio-temporal dynamics in response to external stimuli is a key enabling technology fostering scientific innovation.
Traditional equation-based approaches leverage first principles to yield predictions through the numerical approximation of high-dimensional systems of differential equations, thus calling for large-scale parallel computing platforms and requiring large computational costs. Data-driven approaches, instead, enable the description of system evolution in low-dimensional latent spaces, by leveraging dimensionality reduction and deep learning algorithms. We propose a novel architecture, named Latent Dynamics Network (LDNet), which is able to discover low-dimensional intrinsic dynamics of possibly non-Markovian dynamical systems, thus predicting the time evolution of space-dependent fields in response to external inputs. Unlike popular approaches, in which the latent representation of the solution manifold is learned by means of auto-encoders that map a high-dimensional discretization of the system state into itself, LDNets automatically discover a low-dimensional manifold while learning the latent dynamics, without ever operating in the high-dimensional space. Furthermore, LDNets are meshless algorithms that do not reconstruct the output on a predetermined grid of points, but rather at any point of the domain, thus enabling weight-sharing across query-points. These features make LDNets lightweight and easy-to-train, with excellent accuracy and generalization properties, even in time-extrapolation regimes. We validate our method on several test cases and we show that, for a challenging highly-nonlinear problem, LDNets outperform state-of-the-art methods in terms of accuracy (normalized error 5 times smaller), by employing a dramatically smaller number of trainable parameters (more than 10 times fewer).", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 441, + "label": 25, + "text": "Title: The HCCL system for VoxCeleb Speaker Recognition Challenge 2022\nAbstract: This report describes our submission to track1 and track3 for VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC2022). Our best system achieves minDCF 0.1397 and EER 2.414 in track1, minDCF 0.388 and EER 7.030 in track3.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 442, + "label": 4, + "text": "Title: Smart Learning to Find Dumb Contracts\nAbstract: We introduce the Deep Learning Vulnerability Analyzer (DLVA) for Ethereum smart contracts based on neural networks. We train DLVA to judge bytecode even though the supervising oracle can only judge source. DLVA's training algorithm is general: we extend a source code analysis to bytecode without any manual feature engineering, predefined patterns, or expert rules. DLVA's training algorithm is also robust: it overcame a 1.25% error rate of mislabeled contracts, and--the student surpassing the teacher--found vulnerable contracts that Slither mislabeled. DLVA is much faster than other smart contract vulnerability detectors: DLVA checks contracts for 29 vulnerabilities in 0.2 seconds, a 10-1,000x speedup. DLVA has three key components. First, Smart Contract to Vector (SC2V) uses neural networks to map smart contract bytecode to a high-dimensional floating-point vector. We benchmark SC2V against 4 state-of-the-art graph neural networks and show that it improves model differentiation by 2.2%.
Second, Sibling Detector (SD) classifies contracts when a target contract's vector is Euclidean-close to a labeled contract's vector in a training set; although only able to judge 55.7% of the contracts in our test set, it has a Slither-predictive accuracy of 97.4% with a false positive rate of only 0.1%. Third, Core Classifier (CC) uses neural networks to infer vulnerable contracts regardless of vector distance. We benchmark DLVA's CC with 10 ML techniques and show that the CC improves accuracy by 11.3%. Overall, DLVA predicts Slither's labels with an overall accuracy of 92.7% and an associated false positive rate of 7.2%. Lastly, we benchmark DLVA against nine well-known smart contract analysis tools. Despite using much less analysis time, DLVA completed every query, leading the pack with an average accuracy of 99.7%, pleasingly balancing high true positive rates with low false positive rates.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 443, + "label": 24, + "text": "Title: Human-Timescale Adaptation in an Open-Ended Task Space\nAbstract: Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 444, + "label": 26, + "text": "Title: No Love Among Haters: Negative Interactions Reduce Hate Community Engagement\nAbstract: While online hate groups pose significant risks to the health of online platforms and safety of marginalized groups, little is known about what causes users to become active in hate groups and the effect of social interactions on furthering their engagement. We address this gap by first developing tools to find hate communities within Reddit, and then augmenting the 11 subreddits extracted with 14 known hateful subreddits (25 in total). Using causal inference methods, we evaluate the effect of replies on engagement in hateful subreddits by comparing users who receive replies to their first comment (the treatment) to equivalent control users who do not. We find that users who receive replies are less likely to become engaged in hateful subreddits than users who do not, while the opposite effect is observed for a matched sample of similar-sized non-hateful subreddits. Using the Google Perspective API and VADER, we discover that hateful community first-repliers are more toxic, negative, and attack the posters more often than non-hateful first-repliers.
In addition, we uncover a negative correlation between engagement and attacks or toxicity of first-repliers. We simulate the cumulative engagement of hateful and non-hateful subreddits under the contra-positive scenario of friendly first-replies, finding that attacks dramatically reduce engagement in hateful subreddits. These results counter-intuitively imply that, although under-moderated communities allow hate to fester, the resulting environment is such that direct social interaction does not encourage further participation, thus endogenously constraining the harmful role that these communities could play as recruitment venues for antisocial beliefs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 445, + "label": 10, + "text": "Title: Relevant Entity Selection: Knowledge Graph Bootstrapping via Zero-Shot Analogical Pruning\nAbstract: Knowledge Graph Construction (KGC) can be seen as an iterative process starting from a high-quality nucleus that is refined by knowledge extraction approaches in a virtuous loop. Such a nucleus can be obtained from knowledge existing in an open KG like Wikidata. However, due to the size of such generic KGs, integrating them as a whole may entail irrelevant content and scalability issues. We propose an analogy-based approach that starts from seed entities of interest in a generic KG, and keeps or prunes their neighboring entities. We evaluate our approach on Wikidata through two manually labeled datasets that contain either domain-homogeneous or -heterogeneous seed entities. We empirically show that our analogy-based approach outperforms LSTM, Random Forest, SVM, and MLP, with a drastically lower number of parameters. We also evaluate its generalization potential in a transfer learning setting. These results advocate for the further integration of analogy-based inference in tasks related to the KG lifecycle.", + "neighbors": [ + 2250 + ], + "mask": "Test" + }, + { + "node_id": 446, + "label": 24, + "text": "Title: Targeted Image Reconstruction by Sampling Pre-trained Diffusion Model\nAbstract: A trained neural network model contains information on the training data. Given such a model, malicious parties can leverage the \"knowledge\" in this model and design ways to print out any usable information (known as a model inversion attack). Therefore, it is valuable to explore ways to conduct such an attack and demonstrate its severity. In this work, we propose ways to generate a data point of the target class without prior knowledge of the exact target distribution by using a pre-trained diffusion model.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 447, + "label": 30, + "text": "Title: Improving Contextualized Topic Models with Negative Sampling\nAbstract: Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector.
Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 448, + "label": 16, + "text": "Title: BEVPlace: Learning LiDAR-based Place Recognition using Bird's Eye View Images\nAbstract: Place recognition is a key module for long-term SLAM systems. Current LiDAR-based place recognition methods usually use representations of point clouds such as unordered points or range images. These methods achieve high recall rates of retrieval, but their performance may degrade in the case of view variation or scene changes. In this work, we explore the potential of a different representation in place recognition, i.e., bird's eye view (BEV) images. We observe that the structural contents of BEV images are less influenced by rotations and translations of point clouds. We validate that, without any delicate design, a simple VGGNet trained on BEV images achieves comparable performance with the state-of-the-art place recognition methods in scenes of slight viewpoint changes. For more robust place recognition, we design a rotation-invariant network called BEVPlace. We use group convolution to extract rotation-equivariant local features from the images and NetVLAD for global feature aggregation. In addition, we observe that the distance between BEV features is correlated with the geometric distance of point clouds. Based on the observation, we develop a method to estimate the position of the query cloud, extending the usage of place recognition. The experiments conducted on large-scale public datasets show that our method 1) achieves state-of-the-art performance in terms of recall rates, 2) is robust to view changes, 3) shows strong generalization ability, and 4) can estimate the positions of query point clouds. Source codes are publicly available at https://github.com/zjuluolun/BEVPlace.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 449, + "label": 16, + "text": "Title: Robust Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers\nAbstract: Deep neural networks (DNNs) can be manipulated to exhibit specific behaviors when exposed to specific trigger patterns, without affecting their performance on normal samples. This type of attack is known as a backdoor attack. Recent research has focused on designing invisible triggers for backdoor attacks to ensure visual stealthiness. These triggers have demonstrated strong attack performance even under backdoor defense, which aims to eliminate or suppress the backdoor effect in the model. However, through experimental observations, we have noticed that these carefully designed invisible triggers are often susceptible to visual distortion during inference, such as Gaussian blurring or environmental variations in real-world scenarios. This phenomenon significantly undermines the effectiveness of attacks in practical applications. Unfortunately, this issue has not received sufficient attention and has not been thoroughly investigated. To address this limitation, we propose a novel approach called the Visible, Semantic, Sample-Specific, and Compatible trigger (VSSC-trigger), which leverages a recent powerful image generation method known as the stable diffusion model. In this approach, a text trigger is utilized as a prompt and combined with a benign image.
The resulting combination is then processed by a pre-trained stable diffusion model, generating a corresponding semantic object. This object is seamlessly integrated with the original image, resulting in a new realistic image, referred to as the poisoned image. Extensive experimental results and analysis validate the effectiveness and robustness of our proposed attack method, even in the presence of visual distortion. We believe that the new trigger proposed in this work, along with the proposed idea to address the aforementioned issues, will have significant implications for further advances in this direction.", + "neighbors": [ + 1902 + ], + "mask": "Train" + }, + { + "node_id": 450, + "label": 24, + "text": "Title: Fast Submodular Function Maximization\nAbstract: Submodular functions have many real-world applications, such as document summarization, sensor placement, and image segmentation. For all these applications, the key building block is how to compute the maximum value of a submodular function efficiently. We consider both the online and offline versions of the problem: in each iteration, the data set changes incrementally or is not changed, and a user can issue a query to maximize the function on a given subset of the data. The user can be malicious, issuing queries based on previous query results to break the competitive ratio for the online algorithm. Today, the best-known algorithm for online submodular function maximization has a running time of $O(n k d^2)$ where $n$ is the total number of elements, $d$ is the feature dimension and $k$ is the number of elements to be selected. We propose a new method based on a novel search tree data structure. Our algorithm only takes $\\widetilde{O}(nk + kd^2 + nd)$ time.", + "neighbors": [ + 748, + 1559 + ], + "mask": "Test" + }, + { + "node_id": 451, + "label": 31, + "text": "Title: Towards Hierarchical Policy Learning for Conversational Recommendation with Hypergraph-based Reinforcement Learning\nAbstract: Conversational recommendation systems (CRS) aim to timely and proactively acquire user dynamic preferred attributes through conversations for item recommendation. In each turn of CRS, there are naturally two decision-making processes with different roles that influence each other: 1) director, which is to select the follow-up option (i.e., ask or recommend) that is more effective for reducing the action space and acquiring user preferences; and 2) actor, which is to accordingly choose primitive actions (i.e., asked attribute or recommended item) to estimate the effectiveness of the director\u2019s option. However, existing methods heavily rely on a unified decision-making module or heuristic rules, while neglecting to distinguish the roles of different decision procedures, as well as the mutual influences between them. To address this, we propose a novel Director-Actor Hierarchical Conversational Recommender (DAHCR), where the director selects the most effective option, followed by the actor accordingly choosing primitive actions that satisfy user preferences. Specifically, we develop a dynamic hypergraph to model user preferences and introduce an intrinsic motivation to train from weak supervision over the director. Finally, to alleviate the adverse effect of model bias on the mutual influence between the director and actor, we model the director\u2019s option by sampling from a categorical distribution.
Extensive experiments demonstrate that DAHCR outperforms state-of-the-art methods.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 452, + "label": 6, + "text": "Title: Let's Play Together through Channels: Understanding the Practices and Experience of Danmaku Participation Game Players in China\nAbstract: Live streaming has become increasingly popular in recent years, as most channels prioritize the delivery of engaging content to their viewers. Among various live streaming channels, the Danmaku participation game (DPG) has emerged in China as a mixture of live streaming and online gaming, offering an immersive gaming experience to players. Although prior research has explored audience participation games (APGs) in North America and Europe, it primarily focuses on discussing prototypes and lacks observation of players in natural settings. Little is known about how players perceive DPGs and their player experience. To fill the research gap, we observed a series of DPG channels and conducted an interview-based study to gain insights into the practices and experiences of DPG players. Our work reveals that DPGs can effectively synergize live streaming and online games, amplifying both player engagement and players' sense of accomplishment.", + "neighbors": [ + 110 + ], + "mask": "Train" + }, + { + "node_id": 453, + "label": 16, + "text": "Title: DopUS-Net: Quality-Aware Robotic Ultrasound Imaging based on Doppler Signal\nAbstract: Medical ultrasound (US) is widely used to evaluate and stage vascular diseases, in particular for the preliminary screening program, due to the advantage of being radiation-free. However, automatic segmentation of small tubular structures (e.g., the ulnar artery) from cross-sectional US images is still challenging. To address this challenge, this paper proposes the DopUS-Net and a vessel re-identification module that leverage the Doppler effect to enhance the final segmentation result. Firstly, the DopUS-Net combines the Doppler images with B-mode images to increase the segmentation accuracy and robustness of small blood vessels. It incorporates two encoders to exploit the maximum potential of the Doppler signal and recurrent neural network modules to preserve sequential information. Input to the first encoder is a two-channel duplex image representing the combination of the grey-scale Doppler and B-mode images to ensure anatomical spatial correctness. The second encoder operates on the pure Doppler images to provide a region proposal. Secondly, benefiting from the Doppler signal, this work first introduces an online artery re-identification module to qualitatively evaluate the real-time segmentation results and automatically optimize the probe pose for enhanced Doppler images. This quality-aware module enables the closed-loop control of robotic screening to further improve the confidence and robustness of image segmentation. The experimental results demonstrate that the proposed approach with the re-identification process can significantly improve the accuracy and robustness of the segmentation results (Dice score: from 0.54 to 0.86; intersection over union: from 0.47 to 0.78).", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 454, + "label": 27, + "text": "Title: Time Optimal Ergodic Search\nAbstract: Robots with the ability to balance time against the thoroughness of search have the potential to provide time-critical assistance in applications such as search and rescue.
Current advances in ergodic coverage-based search methods have enabled robots to completely explore and search an area in a fixed amount of time. However, optimizing time against the quality of autonomous ergodic search has yet to be demonstrated. In this paper, we investigate solutions to the time-optimal ergodic search problem for fast and adaptive robotic search and exploration. We pose the problem as a minimum-time problem with an ergodic inequality constraint whose upper bound regulates and balances the granularity of search against time. Solutions to the problem are presented analytically using Pontryagin's conditions of optimality and demonstrated numerically through a direct transcription optimization approach. We show the efficacy of the approach in generating time-optimal ergodic search trajectories in simulation and with drone experiments in a cluttered environment. Obstacle avoidance is shown to be readily integrated into our formulation, and we perform ablation studies that investigate parameter dependence on optimized time and trajectory sensitivity for search.", + "neighbors": [ + 1494 + ], + "mask": "Train" + }, + { + "node_id": 455, + "label": 27, + "text": "Title: Bipedal Walking on Constrained Footholds with MPC Footstep Control\nAbstract: Bipedal robots promise the ability to traverse rough terrain quickly and efficiently, and indeed, humanoid robots can now use strong ankles and careful foot placement to traverse discontinuous terrain. However, more agile underactuated bipeds have small feet and weak ankles, and must constantly adjust their planned footstep position to maintain balance. We introduce a new model-predictive footstep controller which jointly optimizes over the robot's discrete choice of stepping surface, impending footstep position sequence, ankle torque in the sagittal plane, and center of mass trajectory, to track a velocity command. The controller is formulated as a single Mixed Integer Quadratic Program (MIQP) which is solved at 50-200 Hz, depending on terrain complexity. We implement a state-of-the-art real-time elevation mapping and convex terrain decomposition framework to inform the controller of its surroundings in the form of convex polygons representing steppable terrain. We investigate the capabilities and challenges of our approach through hardware experiments on the underactuated biped Cassie.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 456, + "label": 16, + "text": "Title: Do humans and machines have the same eyes? Human-machine perceptual differences on image classification\nAbstract: Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels. Most efforts in recent vision research focus on measuring the model task performance using standardized benchmarks. Limited work has been done to understand the perceptual difference between humans and machines. To fill this gap, our study first quantifies and analyzes the statistical distributions of mistakes from the two sources. We then explore human vs. machine expertise after ranking tasks by difficulty levels. Even when humans and machines have similar overall accuracies, the distribution of answers may vary.
Leveraging the perceptual difference between humans and machines, we empirically demonstrate a post-hoc human-machine collaboration that outperforms humans or machines alone.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 457, + "label": 28, + "text": "Title: A Simplistic Model of Neural Scaling Laws: Multiperiodic Santa Fe Processes\nAbstract: It was observed that large language models exhibit a power-law decay of cross entropy with respect to the number of parameters and training tokens. When extrapolated literally, this decay implies that the entropy rate of natural language is zero. To understand this phenomenon -- or an artifact -- better, we construct a simple stationary stochastic process and its memory-based predictor that exhibit a power-law decay of cross entropy with a vanishing entropy rate. Our example is based on previously discussed Santa Fe processes, which decompose a random text into a process of narration and time-independent knowledge. Previous discussions assumed that narration is a memoryless source with Zipf's distribution. In this paper, we propose a model of narration that has a vanishing entropy rate and applies a randomly chosen deterministic sequence called a multiperiodic sequence. Under a suitable parameterization, multiperiodic sequences exhibit asymptotic relative frequencies given by Zipf's law. Remaining agnostic about the value of the entropy rate of natural language, we discuss the relevance of similar constructions for language modeling.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 458, + "label": 16, + "text": "Title: No Free Lunch in Self Supervised Representation Learning\nAbstract: Self-supervised representation learning in computer vision relies heavily on hand-crafted image transformations to learn meaningful and invariant features. However, few extensive explorations of the impact of transformation design have been conducted in the literature. In particular, the dependence of downstream performance on transformation design has been established, but not studied in depth. In this work, we explore this relationship, its impact on a domain other than natural images, and show that designing the transformations can be viewed as a form of supervision. First, we demonstrate that not only do transformations have an effect on downstream performance and relevance of clustering, but also that each category in a supervised dataset can be impacted in a different way. Following this, we explore the impact of transformation design on microscopy images, a domain where the difference between classes is more subtle and fuzzy than in natural images. In this case, we observe a greater impact on downstream task performance. Finally, we demonstrate that transformation design can be leveraged as a form of supervision, as careful selection of these by a domain expert can lead to a drastic increase in performance on a given downstream task.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 459, + "label": 16, + "text": "Title: Dual-path TokenLearner for Remote Photoplethysmography-based Physiological Measurement with Facial Videos\nAbstract: Remote photoplethysmography (rPPG) based physiological measurement is an emerging yet crucial vision task, whose challenge lies in exploring accurate rPPG prediction from facial videos accompanied by noises of illumination variations, facial occlusions, head movements, etc., in a non-contact manner.
Existing mainstream CNN-based models make efforts to detect physiological signals by capturing subtle color changes in facial regions of interest (ROI) caused by heartbeats. However, such models are constrained by the limited local spatial or temporal receptive fields in the neural units. Unlike them, a native Transformer-based framework called Dual-path TokenLearner (Dual-TL) is proposed in this paper, which utilizes the concept of learnable tokens to integrate both spatial and temporal informative contexts from the global perspective of the video. Specifically, the proposed Dual-TL uses a Spatial TokenLearner (S-TL) to explore associations in different facial ROIs, which keeps the rPPG prediction far from noisy ROI disturbances. Complementarily, a Temporal TokenLearner (T-TL) is designed to infer the quasi-periodic pattern of heartbeats, which eliminates temporal disturbances such as head movements. The two TokenLearners, S-TL and T-TL, are executed in a dual-path mode. This enables the model to reduce noise disturbances for final rPPG signal prediction. Extensive experiments on four physiological measurement benchmark datasets are conducted. The Dual-TL achieves state-of-the-art performance in both intra- and cross-dataset testing, demonstrating its immense potential as a basic backbone for rPPG measurement. The source code is available at \\href{https://github.com/VUT-HFUT/Dual-TL}{https://github.com/VUT-HFUT/Dual-TL}", + "neighbors": [ + 2055 + ], + "mask": "Train" + }, + { + "node_id": 460, + "label": 28, + "text": "Title: On the difficulty to beat the first linear programming bound for binary codes\nAbstract: The first linear programming bound of McEliece, Rodemich, Rumsey, and Welch is the best known asymptotic upper bound for binary codes, for a certain subrange of distances. Starting from the work of Friedman and Tillich, there are, by now, some arguably easier and more direct arguments for this bound. We show that this more recent line of argument runs into certain difficulties if one tries to go beyond this bound (say, towards the second linear programming bound of McEliece, Rodemich, Rumsey, and Welch).", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 461, + "label": 24, + "text": "Title: Incremental Profit per Conversion: a Response Transformation for Uplift Modeling in E-Commerce Promotions\nAbstract: Promotions play a crucial role in e-commerce platforms, and various cost structures are employed to drive user engagement. This paper focuses on promotions with response-dependent costs, where expenses are incurred only when a purchase is made. Such promotions include discounts and coupons. While existing uplift model approaches aim to address this challenge, these approaches often necessitate training multiple models, like meta-learners, or encounter complications when estimating profit due to zero-inflated values stemming from non-converted individuals with zero cost and profit. To address these challenges, we introduce Incremental Profit per Conversion (IPC), a novel uplift measure of promotional campaigns' efficiency in unit economics. Through a proposed response transformation, we demonstrate that IPC requires only converted data, its propensity, and a single model to be estimated. As a result, IPC resolves the issues mentioned above while mitigating the noise typically associated with the class imbalance in conversion datasets and biases arising from the many-to-one mapping between search and purchase data.
Lastly, we validate the efficacy of our approach by presenting results obtained from a synthetic simulation of a discount coupon campaign.", + "neighbors": [], + "mask": "Test" + }, + { + "node_id": 462, + "label": 4, + "text": "Title: Trojan Playground: A Reinforcement Learning Framework for Hardware Trojan Insertion and Detection\nAbstract: Current Hardware Trojan (HT) detection techniques are mostly developed based on a limited set of HT benchmarks. Existing HT benchmark circuits are generated with multiple shortcomings, i.e., i) they are heavily biased by the designers' mindset when they are created, and ii) they are created through a one-dimensional lens, mainly the signal activity of nets. To address these shortcomings, we introduce the first automated reinforcement learning (RL) HT insertion and detection framework. In the insertion phase, an RL agent explores the circuits and finds different locations that are best for keeping inserted HTs hidden. On the defense side, we introduce a multi-criteria RL-based detector that generates test vectors to discover the existence of HTs. Using the proposed framework, one can explore the HT insertion and detection design spaces to break the human mindset limitations as well as the benchmark issues, ultimately leading toward the next generation of innovative detectors. Our HT toolset is open-source to accelerate research in this field and reduce the initial setup time for newcomers. We demonstrate the efficacy of our framework on ISCAS-85 benchmarks, provide the attack and detection success rates, and define a methodology for comparing our techniques.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 463, + "label": 16, + "text": "Title: Deep Metric Multi-View Hashing for Multimedia Retrieval\nAbstract: Learning the hash representation of multi-view heterogeneous data is an important task in multimedia retrieval. However, existing methods fail to effectively fuse the multi-view features and utilize the metric information provided by the dissimilar samples, leading to limited retrieval precision. Current methods utilize weighted sum or concatenation to fuse the multi-view features. We argue that these fusion methods cannot capture the interaction among different views. Furthermore, these methods ignore the information provided by the dissimilar samples. We propose a novel deep metric multi-view hashing (DMMVH) method to address the mentioned problems. Extensive empirical evidence is presented to show that gate-based fusion is better than typical methods. We introduce deep metric learning to the multi-view hashing problems, which can utilize metric information of dissimilar samples. On the MIR-Flickr25K, MS COCO, and NUS-WIDE, our method outperforms the current state-of-the-art methods by a large margin (up to 15.28 mean Average Precision (mAP) improvement).", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 464, + "label": 6, + "text": "Title: Challenges and Opportunities in Data Visualization Education: A Call to Action\nAbstract: This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge.
This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper (educators and researchers in data visualization) identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes: People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 465, + "label": 31, + "text": "Title: Evaluating Online Bandit Exploration In Large-Scale Recommender System\nAbstract: Bandit learning has been an increasingly popular design choice for recommender systems. Despite the strong interest in bandit learning from the community, there remain multiple bottlenecks that prevent many bandit learning approaches from productionalization. One major bottleneck is how to test the effectiveness of a bandit algorithm with fairness and without data leakage. Different from supervised learning algorithms, bandit learning algorithms place great emphasis on the data collection process through their explorative nature. Such explorative behavior may induce unfair evaluation in a classic A/B test setting. In this work, we apply the upper confidence bound (UCB) to our large-scale short video recommender system and present a test framework for the production bandit learning life-cycle with a new set of metrics. Extensive experiment results show that our experiment design is able to fairly evaluate the performance of bandit learning in the recommender system.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 466, + "label": 16, + "text": "Title: Constructing Holistic Spatio-Temporal Scene Graph for Video Semantic Role Labeling\nAbstract: Video Semantic Role Labeling (VidSRL) aims to detect the salient events from given videos, by recognizing the predicate-argument event structures and the interrelationships between events. While recent endeavors have put forth methods for VidSRL, they can be mostly subject to two key drawbacks, including the lack of fine-grained spatial scene perception and the insufficient modeling of video temporality. Towards this end, this work explores a novel holistic spatio-temporal scene graph (namely HostSG) representation based on the existing dynamic scene graph structures, which well models both the fine-grained spatial semantics and temporal dynamics of videos for VidSRL. Built upon the HostSG, we present a niche-targeting VidSRL framework. A scene-event mapping mechanism is first designed to bridge the gap between the underlying scene structure and the high-level event semantic structure, resulting in an overall hierarchical scene-event (termed ICE) graph structure.
We further perform iterative structure refinement to optimize the ICE graph, such that the overall structure representation can best coincide with the demands of the end task. Finally, three subtask predictions of VidSRL are jointly decoded, where the end-to-end paradigm effectively avoids error propagation. On the benchmark dataset, our framework improves significantly over the current best-performing model. Further analyses are presented for a better understanding of the advances of our methods.", + "neighbors": [ + 754, + 1792 + ], + "mask": "Train" + }, + { + "node_id": 467, + "label": 28, + "text": "Title: Cascaded Code Distributed Computing With Low Complexity and Improved Flexibility\nAbstract: Coded distributed computing, proposed by Li et al., offers significant potential for reducing the communication load in MapReduce computing systems. In the setting of the \\emph{cascaded} coded distributed computing consisting of $K$ nodes, $N$ input files, and $Q$ output functions, the objective is to compute each output function through $s\\geq 1$ nodes with a computation load $r\\geq 1$, enabling the application of coding techniques during the Shuffle phase to achieve minimum communication load. However, for most existing coded distributed computing schemes, a major limitation lies in their demand for splitting the original data into an exponentially growing number of input files in terms of $N/\\binom{K}{r} \\in\\mathbb{N}$ and requiring an exponentially large number of output functions $Q/\\binom{K}{s} \\in\\mathbb{N}$, which imposes stringent requirements for implementation and results in significant coding complexity when $K$ is large. In this paper, we focus on the cascaded case of $K/s\\in\\mathbb{N}$, deliberately designing the strategy of input file storage and output function assignment based on a grouping method, such that a low-complexity two-round Shuffle phase is available. The main advantages of our proposed scheme are: 1) the communication load is quite close to or surprisingly better than the optimal state-of-the-art scheme proposed by Li et al.; 2) our scheme requires a significantly smaller number of input files and output functions; 3) all the operations are implemented over the minimum binary field $\\mathbb{F}_2$.", + "neighbors": [ + 1674, + 1998 + ], + "mask": "Train" + }, + { + "node_id": 468, + "label": 34, + "text": "Title: Locally Consistent Decomposition of Strings with Applications to Edit Distance Sketching\nAbstract: In this paper we provide a new locally consistent decomposition of strings. Each string x is decomposed into blocks that can be described by grammars of size O(k) (using some amount of randomness). If we take two strings x and y of edit distance at most k, then their block decomposition uses the same number of grammars and the i-th grammar of x is the same as the i-th grammar of y except for at most k indexes i. The edit distance of x and y equals the sum of edit distances of pairs of blocks where x and y differ. Our decomposition can be used to design a sketch of size O(k^2) for edit distance, and also a rolling sketch for edit distance of size O(k^2).
The rolling sketch allows updating the sketched string by appending a symbol or removing a symbol from the beginning of the string.", + "neighbors": [ + 1318 + ], + "mask": "Train" + }, + { + "node_id": 469, + "label": 22, + "text": "Title: Exact Probabilistic Inference Using Generating Functions\nAbstract: Probabilistic programs are typically normal-looking programs describing posterior probability distributions. They intrinsically code up randomized algorithms and have long been at the heart of modern machine learning and approximate computing. We explore the theory of generating functions [19] and investigate its usage in the exact quantitative reasoning of probabilistic programs. Important topics include the exact representation of program semantics [13], proving exact program equivalence [5], and -- as our main focus in this extended abstract -- exact probabilistic inference. In probabilistic programming, inference aims to derive a program's posterior distribution. In contrast to approximate inference, inferring exact distributions comes with several benefits [8], e.g., no loss of precision, natural support for symbolic parameters, and efficiency on models with certain structures. Exact probabilistic inference, however, is a notoriously hard task [6,12,17,18]. The challenges mainly arise from three program constructs: (1) unbounded while-loops and/or recursion, (2) infinite-support distributions, and (3) conditioning (via posterior observations). We present our ongoing research in addressing these challenges (with a focus on conditioning) leveraging generating functions and show their potential in facilitating exact probabilistic inference for discrete probabilistic programs.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 470, + "label": 16, + "text": "Title: Standing Between Past and Future: Spatio-Temporal Modeling for Multi-Camera 3D Multi-Object Tracking\nAbstract: This work proposes an end-to-end multi-camera 3D multi-object tracking (MOT) framework. It emphasizes spatio-temporal continuity and integrates both past and future reasoning for tracked objects. Thus, we name it \u201cPast-and-Future reasoning for Tracking\u201d (PF-Track). Specifically, our method adopts the \u201ctracking by attention\u201d framework and represents tracked instances coherently over time with object queries. To explicitly use historical cues, our \u201cPast Reasoning\u201d module learns to refine the tracks and enhance the object features by cross-attending to queries from previous frames and other objects. The \u201cFuture Reasoning\u201d module digests historical information and predicts robust future trajectories. In the case of long-term occlusions, our method maintains the object positions and enables re-association by integrating motion predictions. On the nuScenes dataset, our method improves AMOTA by a large margin and remarkably reduces ID-Switches by 90% compared to prior approaches, which is an order of magnitude less. The code and models are made available at https://github.com/TRI-ML/PF-Track.", + "neighbors": [ + 1283 + ], + "mask": "Train" + }, + { + "node_id": 471, + "label": 30, + "text": "Title: Automated speech- and text-based classification of neuropsychiatric conditions in a multidiagnostic setting\nAbstract: Speech patterns have been identified as potential diagnostic markers for neuropsychiatric conditions.
However, most studies only compare a single clinical group to healthy controls, whereas clinical practice often requires differentiating between multiple potential diagnoses (multiclass settings). To address this, we assembled a dataset of repeated recordings from 420 participants (67 with major depressive disorder, 106 with schizophrenia and 46 with autism, as well as matched controls), and tested the performance of a range of conventional machine learning models and advanced Transformer models on both binary and multiclass classification, based on voice and text features. While binary models performed comparably to previous research (F1 scores of 0.54-0.75 for autism spectrum disorder, ASD; 0.67-0.92 for major depressive disorder, MDD; and 0.71-0.83 for schizophrenia), performance decreased markedly when differentiating between multiple diagnostic groups (F1 scores of 0.35-0.44 for ASD, 0.57-0.75 for MDD, 0.15-0.66 for schizophrenia, and 0.38-0.52 macro F1). Combining voice and text-based models yielded increased performance, suggesting that they capture complementary diagnostic information. Our results indicate that models trained on binary classification may learn to rely on markers of generic differences between clinical and non-clinical populations, or markers of clinical features that overlap across conditions, rather than identifying markers specific to individual conditions. We provide recommendations for future research in the field, suggesting increased focus on developing larger transdiagnostic datasets that include more fine-grained clinical features, and that can support the development of models that better capture the complexity of neuropsychiatric conditions and naturalistic diagnostic assessment.", + "neighbors": [], + "mask": "Validation" + }, + { + "node_id": 472, + "label": 24, + "text": "Title: Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation\nAbstract: Within the tensor singular value decomposition (T-SVD) framework, existing robust low-rank tensor completion approaches have made great achievements in various areas of science and engineering. Nevertheless, these methods involve the T-SVD based low-rank approximation, which suffers from high computational costs when dealing with large-scale tensor data. Moreover, most of them are only applicable to third-order tensors. To address these issues, in this article, two efficient low-rank tensor approximation approaches fusing randomized techniques are first devised under the order-d (d >= 3) T-SVD framework. On this basis, we then further investigate the robust high-order tensor completion (RHTC) problem, in which a double nonconvex model, along with its corresponding fast optimization algorithms with convergence guarantees, is developed. To the best of our knowledge, this is the first study to incorporate the randomized low-rank approximation into the RHTC problem. Empirical studies on large-scale synthetic and real tensor data illustrate that the proposed method outperforms other state-of-the-art approaches in terms of both computational efficiency and estimated precision.", + "neighbors": [ + 625 + ], + "mask": "Train" + }, + { + "node_id": 473, + "label": 24, + "text": "Title: Canonical and Noncanonical Hamiltonian Operator Inference\nAbstract: A method for the nonintrusive and structure-preserving model reduction of canonical and noncanonical Hamiltonian systems is presented.
Based on the idea of operator inference, this technique is provably convergent and reduces to a straightforward linear solve given snapshot data and gray-box knowledge of the system Hamiltonian. Examples involving several hyperbolic partial differential equations show that the proposed method yields reduced models which, in addition to being accurate and stable with respect to the addition of basis modes, preserve conserved quantities well outside the range of their training data.", + "neighbors": [ + 1385 + ], + "mask": "Train" + }, + { + "node_id": 474, + "label": 16, + "text": "Title: Robust Generalization Against Photon-Limited Corruptions via Worst-Case Sharpness Minimization\nAbstract: Robust generalization aims to tackle the most challenging data distributions which are rare in the training set and contain severe noise, i.e., photon-limited corruptions. Common solutions such as distributionally robust optimization (DRO) focus on the worst-case empirical risk to ensure low training error on the uncommon noisy distributions. However, due to the over-parameterized model being optimized on scarce worst-case data, DRO fails to produce a smooth loss landscape, thus struggling to generalize well to the test set. Therefore, instead of focusing on worst-case risk minimization, we propose SharpDRO by penalizing the sharpness of the worst-case distribution, which measures the loss changes around the neighborhood of the learned parameters. Through worst-case sharpness minimization, the proposed method successfully produces a flat loss curve on the corrupted distributions, thus achieving robust generalization. Moreover, by considering whether the distribution annotation is available, we apply SharpDRO to two problem settings and design a worst-case selection process for robust generalization. Theoretically, we show that SharpDRO has a strong convergence guarantee. Experimentally, we simulate photon-limited corruptions using the CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and outperforms well-known baseline methods by large margins.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 475, + "label": 36, + "text": "Title: The Good, the Bad and the Submodular: Fairly Allocating Mixed Manna Under Order-Neutral Submodular Preferences\nAbstract: We study the problem of fairly allocating indivisible goods (positively valued items) and chores (negatively valued items) among agents with decreasing marginal utilities over items. Our focus is on instances where all the agents have simple preferences; specifically, we assume the marginal value of an item can be either $-1$, $0$ or some positive integer $c$. Under this assumption, we present an efficient algorithm to compute leximin allocations for a broad class of valuation functions we call order-neutral submodular valuations. Order-neutral submodular valuations strictly contain the well-studied class of additive valuations but are a strict subset of the class of submodular valuations. We show that these leximin allocations are Lorenz dominating and approximately proportional. We also show that, under further restriction to additive valuations, these leximin allocations are approximately envy-free and guarantee each agent their maxmin share.
We complement this algorithmic result with a lower bound showing that the problem of computing leximin allocations is NP-hard when $c$ is a rational number.", + "neighbors": [], + "mask": "Train" + }, + { + "node_id": 476, + "label": 24, + "text": "Title: Projection-free Online Exp-concave Optimization\nAbstract: We consider the setting of online convex optimization (OCO) with \\textit{exp-concave} losses. The best regret bound known for this setting is $O(n\\log{}T)$, where $n$ is the dimension and $T$ is the number of prediction rounds (treating all other quantities as constants and assuming $T$ is sufficiently large), and is attainable via the well-known Online Newton Step algorithm (ONS). However, ONS requires computing, on each iteration, a projection (according to some matrix-induced norm) onto the feasible convex set, which is often computationally prohibitive in high-dimensional settings and when the feasible set admits a non-trivial structure. In this work we consider projection-free online algorithms for exp-concave and smooth losses, where by projection-free we refer to algorithms that rely only on the availability of a linear optimization oracle (LOO) for the feasible set, which in many applications of interest admits much more efficient implementations than a projection oracle. We present an LOO-based ONS-style algorithm, which, using overall $O(T)$ calls to an LOO, guarantees worst-case regret bounded by $\\widetilde{O}(n^{2/3}T^{2/3})$ (ignoring all quantities except for $n,T$). However, our algorithm is most interesting in an important and plausible low-dimensional data scenario: if the gradients (approximately) span a subspace of dimension at most $\\rho$, $\\rho<