{"Query": "Feature Extraction领域的Forrest Sheng Bao的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Forrest Sheng Bao in Feature Extraction field?", "Answer": "In this paper, we employ Probabilistic Neural Network (PNN) with image and data processing techniques to implement a general purpose automated leaf recognition for plant classification. 12 leaf features are extracted and orthogonalized into 5 principal variables which consist the input vector of the PNN. The PNN is trained by 1800 leaves to classify 32 kinds of plants with an accuracy greater than 90%. Compared with other approaches, our algorithm is an accurate artificial intelligence approach which is fast in execution and easy in implementation.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Forrest Sheng Bao", "organization": "Department of Computer Science, Iowa State University", "interest": "Feature Extraction"}}
{"Query": "Online Control领域的Guanya Shi的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Guanya Shi in Online Control field?", "Answer": "Precise near-ground trajectory control is difficult for multi-rotor drones, due to the complex aerodynamic effects caused by interactions between multi-rotor airflow and the environment. Conventional control methods often fail to properly account for these complex effects and fall short in accomplishing smooth landing. In this paper, we present a novel deep-learning-based robust nonlinear controller (Neural-Lander) that improves control performance of a quadrotor during landing. Our approach combines a nominal dynamics model with a Deep Neural Network (DNN) that learns high-order interactions. We apply spectral normalization (SN) to constrain the Lipschitz constant of the DNN. Leveraging this Lipschitz property, we design a nonlinear feedback linearization controller using the learned model and prove system stability with disturbance rejection. To the best of our knowledge, this is the first DNN-based nonlinear feedback controller with stability guarantees that can utilize arbitrarily large neural nets. Experimental results demonstrate that the proposed controller significantly outperforms a Baseline Nonlinear Tracking Controller in both landing and cross-table trajectory tracking cases. We also empirically show that the DNN generalizes well to unseen data outside the training domain.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Guanya Shi", "organization": "Department of Computing and Mathematical Sciences, Caltech", "interest": "Online Control"}}
{"Query": "Reinforcement Learning领域的Weinan Zhang的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Weinan Zhang in Reinforcement Learning field?", "Answer": "As a new way of training generative models, Generative AdversarialNet (GAN) that uses a discriminative model to guidethe training of the generative model has enjoyed considerablesuccess in generating real-valued data. However, it has limitationswhen the goal is for generating sequences of discretetokens. A major reason lies in that the discrete outputs fromthe generative model make it difficult to pass the gradient updatefrom the discriminative model to the generative model.Also, the discriminative model can only assess a completesequence, while for a partially generated sequence, it is nontrivialto balance its current score and the future one oncethe entire sequence has been generated. In this paper, we proposea sequence generation framework, called SeqGAN, tosolve the problems. Modeling the data generator as a stochasticpolicy in reinforcement learning (RL), SeqGAN bypassesthe generator differentiation problem by directly performinggradient policy update. The RL reward signal comes fromthe GAN discriminator judged on a complete sequence, andis passed back to the intermediate state-action steps usingMonte Carlo search. Extensive experiments on synthetic dataand real-world tasks demonstrate significant improvementsover strong baselines.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Weinan Zhang", "organization": "Department of Computer Science & Engineering, Shanghai Jiao Tong University", "interest": "Reinforcement Learning"}}
{"Query": "Western United States领域的Jessica Hwang的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Jessica Hwang in Western United States field?", "Answer": "Water managers in the western United States (U.S.) rely on longterm forecasts of temperature and precipitation to prepare for droughts and other wet weather extremes. To improve the accuracy of these longterm forecasts, the U.S. Bureau of Reclamation and the National Oceanic and Atmospheric Administration (NOAA) launched the Subseasonal Climate Forecast Rodeo, a year-long real-time forecasting challenge in which participants aimed to skillfully predict temperature and precipitation in the western U.S. two to four weeks and four to six weeks in advance. Here we present and evaluate our machine learning approach to the Rodeo and release our SubseasonalRodeo dataset, collected to train and evaluate our forecasting system. Our system is an ensemble of two regression models. The first integrates the diverse collection of meteorological measurements and dynamic model forecasts in the SubseasonalRodeo dataset and prunes irrelevant predictors using a customized multitask model selection procedure. The second uses only historical measurements of the target variable (temperature or precipitation) and introduces multitask nearest neighbor features into a weighted local linear regression. Each model alone is significantly more accurate than the debiased operational U.S. Climate Forecasting System (CFSv2), and our ensemble skill exceeds that of the top Rodeo competitor for each target variable and forecast horizon. Moreover, over 2011-2018, an ensemble of our regression models and debiased CFSv2 improves debiased CFSv2 skill by 40-50% for temperature and 129-169% for precipitation. We hope that both our dataset and our methods will help to advance the state of the art in subseasonal forecasting.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Jessica Hwang", "organization": "Stanford\nUniversity", "interest": "Western United States"}}
{"Query": "Cloud-native Applications领域的Neel Sundaresan的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Neel Sundaresan in Cloud-native Applications field?", "Answer": "Benchmark datasets have a significant impact on accelerating research in programming language tasks. In this paper, we introduce CodeXGLUE, a benchmark dataset to foster machine learning research for program understanding and generation. CodeXGLUE includes a collection of 10 tasks across 14 datasets and a platform for model evaluation and comparison. CodeXGLUE also features three baseline systems, including the BERT-style, GPT-style, and Encoder-Decoder models, to make it easy for researchers to use the platform. The availability of such data and baselines can help the development and validation of new methods that can be applied to various program understanding and generation problems.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Neel Sundaresan", "organization": "Microsoft", "interest": "Cloud-native Applications"}}
{"Query": "Stein's Method领域的Lester Mackey的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Lester Mackey in Stein's Method field?", "Answer": "In analogy to the PCA setting, the sparse PCA problem is often solved by iter- atively alternating between two subtasks: cardinality-co nstrained rank-one vari- ance maximization and matrix deflation. While the former has r eceived a great deal of attention in the literature, the latter is seldom ana lyzed and is typically borrowed without justification from the PCA context. In this work, we demon- strate that the standard PCA deflation procedure is seldom ap propriate for the sparse PCA setting. To rectify the situation, we first develo p several deflation al- ternatives better suited to the cardinality-constrained c ontext. We then reformulate the sparse PCA optimization problem to explicitly reflect th e maximum additional variance objective on each round. The result is a generalized deflation procedure that typically outperforms more standard techniques on real-world datasets.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Lester Mackey", "organization": "Stanford University", "interest": "Stein's Method"}}
{"Query": "Remote Sensing领域的Soroosh Sorooshian的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Soroosh Sorooshian in Remote Sensing field?", "Answer": "The successful application of a conceptual rainfall-runoff (CRR) model depends on how well it is calibrated. Despite the popularity of CRR models, reports in the literature indicate that it is typically difficult, if not impossible, to obtain unique optimal values for their parameters using automatic calibration methods. Unless the best set of parameters associated with a given calibration data set can be found, it is difficult to determine how sensitive the parameter estimates (and hence the model forecasts) are to factors such as input and output data error, model error, quantity and quality of data, objective function used, and so on. Results are presented that establish clearly the nature of the multiple optima problem for the research CRR model SIXPAR. These results suggest that the CRR model optimization problem is more difficult than had been previously thought and that currently used local search procedures have a very low probability of successfully finding the optimal parameter sets. Next, the performance of three existing global search procedures are evaluated on the model SIXPAR. Finally, a powerful new global optimization procedure is presented, entitled the shuffled complex evolution (SCE-UA) method, which was able to consistently locate the global optimum of the SIXPAR model, and appears to be capable of efficiently and effectively solving the CRR model optimization problem.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Soroosh Sorooshian", "organization": "Center for Hydrometeorology and Remote Sensing, University of California, Irvine", "interest": "Remote Sensing"}}
{"Query": "Steady State领域的Parnian Kassraie的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Parnian Kassraie in Steady State field?", "Answer": "Contextual bandits are a rich model for sequential decision making given side information, with important applications, e.g., in recommender systems. We propose novel algorithms for contextual bandits harnessing neural networks to approximate the unknown reward function. We resolve the open problem of proving sublinear regret bounds in this setting for general context sequences, considering both fully-connected and convolutional networks. To this end, we first analyze NTK-UCB, a kernelized bandit optimization algorithm employing the Neural Tangent Kernel (NTK), and bound its regret in terms of the NTK maximum information gain gamma(T), a complexity parameter capturing the difficulty of learning. Our bounds on gamma(T) for the NTK may be of independent interest. We then introduce our neural network based algorithm NN-UCB, and show that its regret closely tracks that of NTK-UCB. Under broad non-parametric assumptions about the reward function, our approach converges to the optimal policy at a (O) over tilde (T-1/2d) rate, where d is the dimension of the context.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Parnian Kassraie", "organization": "Department of Computer Science, ETH Zurich", "interest": "Steady State"}}
{"Query": "Chest Ct领域的Ruixiang Tang的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Ruixiang Tang in Chest Ct field?", "Answer": "With the widespread use of deep neural networks (DNNs) in high-stake applications, the security problem of the DNN models has received extensive attention. In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers. We propose a training-free attack approach which is different from previous work, in which trojaned behaviors are injected by retraining model on a poisoned dataset. Specifically, we do not change parameters in the original model but insert a tiny trojan module (TrojanNet) into the target model. The infected model with a malicious trojan can misclassify inputs into a target label when the inputs are stamped with the special trigger. The proposed TrojanNet has several nice properties including (1) it activates by tiny trigger patterns and keeps silent for other signals, (2) it is model-agnostic and could be injected into most DNNs, dramatically expanding its attack scenarios, and (3) the training-free mechanism saves massive training efforts comparing to conventional trojan attack methods. The experimental results show that TrojanNet can inject the trojan into all labels simultaneously (all-label trojan attack) and achieves 100% attack success rate without affecting model accuracy on original tasks. Experimental analysis further demonstrates that state-of-the-art trojan detection algorithms fail to detect TrojanNet attack. The code is available at https://github.com/trx14/TrojanNet.\n\n", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Ruixiang Tang", "organization": "Department of Computer Science & Engineering, Texas A&M University", "interest": "Chest Ct"}}
{"Query": "Online Convex Optimization领域的Weiwei Tu的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Weiwei Tu in Online Convex Optimization field?", "Answer": "Machine learning techniques have deeply rooted in our everyday life. However, since it is knowledge- and labor-intensive to pursue good learning performance, human experts are heavily involved in every aspect of machine learning. In order to make machine learning techniques easier to apply and reduce the demand for experienced human experts, automated machine learning (AutoML) has emerged as a hot topic with both industrial and academic interest. In this paper, we provide an up to date survey on AutoML. First, we introduce and define the AutoML problem, with inspiration from both realms of automation and machine learning. Then, we propose a general AutoML framework that not only covers most existing approaches to date but also can guide the design for new methods. Subsequently, we categorize and review the existing works from two aspects, i.e., the problem setup and the employed techniques. Finally, we provide a detailed analysis of AutoML approaches and explain the reasons underneath their successful applications. We hope this survey can serve as not only an insightful guideline for AutoML beginners but also an inspiration for future research.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Weiwei Tu", "organization": "Department of Computer Science and Technology, Nanjing University", "interest": "Online Convex Optimization"}}
{"Query": "Proxemic Interactions领域的Nathan Tsoi的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Nathan Tsoi in Proxemic Interactions field?", "Answer": "Intersection over Union (IoU) is the most popular evaluation metric used in the object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that IoU can be directly used as a regression loss. However, IoU has a plateau making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the this weakness by introducing a generalized version of IoU as both a new loss and a new metric. By incorporating this generalized IoU (GIoU) as a loss into the state-of-the art object detection frameworks, we show a consistent improvement on their performance using both the standard, IoU based, and new, GIoU based, performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Nathan Tsoi", "organization": "Computer Science Department, Stanford University", "interest": "Proxemic Interactions"}}
{"Query": "Data Streams领域的Albert Bifet的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Albert Bifet in Data Streams field?", "Answer": "Concept drift primarily refers to an online supervised learning scenario when the relation between the input data and the target variable changes over time. Assuming a general knowledge of supervised learning in this article, we characterize adaptive learning processes; categorize existing strategies for handling concept drift; overview the most representative, distinct, and popular techniques and algorithms; discuss evaluation methodology of adaptive algorithms; and present a set of illustrative applications. The survey covers the different facets of concept drift in an integrated way to reflect on the existing scattered state of the art. Thus, it aims at providing a comprehensive introduction to the concept drift adaptation for researchers, industry analysts, and practitioners.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Albert Bifet", "organization": "LTCI, Telecom Paris", "interest": "Data Streams"}}
{"Query": "Deep Learning领域的Martin Pavlovski的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Martin Pavlovski in Deep Learning field?", "Answer": "Objective: We sought to predict if patients with type 2 diabetes mellitus (DM2) would develop 10 selected complications. Accurate prediction of complications could help with more targeted measures that would prevent or slow down their development.Materials and Methods: Experiments were conducted on the Healthcare Cost and Utilization Project State Inpatient Databases of California for the period of 2003 to 2011. Recurrent neural network (RNN) long short-term memory (LSTM) and RNN gated recurrent unit (GRU) deep learning methods were designed and compared with random forest and multilayer perceptron traditional models. Prediction accuracy of selected complications were compared on 3 settings corresponding to minimum number of hospitalizations between diabetes diagnosis and the diagnosis of complications.Results: The diagnosis domain was used for experiments. The best results were achieved with RNN GRU model, followed by RNN LSTM model. The prediction accuracy achieved with RNN GRU model was between 73% (myocardial infarction) and 83% (chronic ischemic heart disease), while accuracy of traditional models was between 66% - 76%.Discussion: The number of hospitalizations was an important factor for the prediction accuracy. Experiments with 4 hospitalizations achieved significantly better accuracy than with 2 hospitalizations. To achieve improved accuracy deep learning models required training on at least 1000 patients and accuracy significantly dropped if training datasets contained 500 patients. The prediction accuracy of complications decreases over time period. Considering individual complications, the best accuracy was achieved on depressive disorder and chronic ischemic heart disease.Conclusions: The RNN GRU model was the best choice for electronic medical record type of data, based on the achieved results.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Martin Pavlovski", "organization": "Yahoo Inc.", "interest": "Deep Learning"}}
{"Query": "Crystal Structure领域的Jing Wang的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Jing Wang in Crystal Structure field?", "Answer": "Network embedding, aiming to learn the low-dimensional representations of nodes in networks, is of paramount importance in many real applications. One basic requirement of network embedding is to preserve the structure and inherent properties of the networks. While previous network embedding methods primarily preserve the microscopic structure, such as the first-and second-order proximities of nodes, the mesoscopic community structure, which is one of the most prominent feature of networks, is largely ignored. In this paper, we propose a novel Modularized Nonnegative Matrix Factorization (M-NMF) model to incorporate the community structure into network embedding. We exploit the consensus relationship between the representations of nodes and community structure, and then jointly optimize NMF based representation learning model and modularity based community detection model in a unified framework, which enables the learned representations of nodes to preserve both of the microscopic and community structures. We also provide efficient updating rules to infer the parameters of our model, together with the correctness and convergence guarantees. Extensive experimental results on a variety of real-world networks show the superior performance of the proposed method over the state-of-the-arts.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Jing Wang", "organization": "The University of Tokyo", "interest": "Crystal Structure"}}
{"Query": "ROP领域的Peng Tian的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Peng Tian in ROP field?", "Answer": "We consider learning from comparison labels generated as follows: given two samples in a dataset, a labeler produces a label indicating their relative order. Such comparison labels scale quadratically with the dataset size; most importantly, in practice, they often exhibit lower variance compared to class labels. We propose a new neural network architecture based on siamese networks to incorporate both class and comparison labels in the same training pipeline, using Bradley–Terry and Thurstone loss functions. Our architecture leads to a significant improvement in predicting both class and comparison labels, increasing classification AUC by as much as 35% and comparison AUC by as much as 6% on several real-life datasets. We further show that, by incorporating comparisons, training from few samples becomes possible: a deep neural network of 5.9 million parameters trained on 80 images attains a 0.92 AUC when incorporating comparisons.", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Peng Tian", "organization": "Northeastern University", "interest": "ROP"}}
{"Query": "Faster Training领域的Elias Tragos的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Elias Tragos in Faster Training field?", "Answer": "Federated learning (FL) is quickly becoming the de facto standard for the distributed training of deep recommendation models, using on-device user data and reducing server costs. In a typical FL process, a central server tasks end-users to train a shared recommendation model using their local data. The local models are trained over several rounds on the users' devices and the server combines them into a global model, which is sent to the devices for the purpose of providing recommendations. Standard FL approaches use randomly selected users for training at each round, and simply average their local models to compute the global model. The resulting federated recommendation models require significant client effort to train and many communication rounds before they converge to a satisfactory accuracy. Users are left with poor quality recommendations until the late stages of training. We present a novel technique, FedFast, to accelerate distributed learning which achieves good accuracy for all users very early in the training process. We achieve this by sampling from a diverse set of participating clients in each training round and applying an active aggregation method that propagates the updated model to the other clients. Consequently, with FedFast the users benefit from far lower communication costs and more accurate models that can be consumed anytime during the training process even at the very early stages. We demonstrate the efficacy of our approach across a variety of benchmark datasets and in comparison to state-of-the-art recommendation techniques.\n\n", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Elias Tragos", "organization": "Insight Centre for Data Analytics, University College Dublin", "interest": "Faster Training"}}
{"Query": "Reinforcement Learning Theory领域的Yu Bai的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Yu Bai in Reinforcement Learning Theory field?", "Answer": "  Self-play, where the algorithm learns by playing against itself without requiring any direct supervision, has become the new weapon in modern Reinforcement Learning (RL) for achieving superhuman performance in practice. However, the majority of exisiting theory in reinforcement learning only applies to the setting where the agent plays against a fixed environment. It remains largely open whether self-play algorithms can be provably effective, especially when it is necessary to manage the exploration/exploitation tradeoff.   We study self-play in competitive reinforcement learning under the setting of Markov games, a generalization of Markov decision processes to the two-player case. We introduce a self-play algorithm---Value Iteration with Upper/Lower Confidence Bound (VI-ULCB), and show that it achieves regret $\\mathcal{\\tilde{O}}(\\sqrt{T})$ after playing $T$ steps of the game. The regret is measured by the agent's performance against a \\emph{fully adversarial} opponent who can exploit the agent's strategy at \\emph{any} step. We also introduce an explore-then-exploit style algorithm, which achieves a slightly worse regret of $\\mathcal{\\tilde{O}}(T^{2/3})$, but is guaranteed to run in polynomial time even in the worst case. To the best of our knowledge, our work presents the first line of provably sample-efficient self-play algorithms for competitive reinforcement learning. ", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Yu Bai", "organization": "Salesforce AI Research", "interest": "Reinforcement Learning Theory"}}
{"Query": "Survival Analysis领域的Yan Li的代表作的摘要是?", "Query_en": "What is the abstract of the representative work of Yan Li in Survival Analysis field?", "Answer": "Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.\n\n", "Base_Question_zh": "XX领域的XXX的代表作的摘要是?", "Base_Question_en": "What is the abstract of the representative work of XXX in XX field?", "Inputs": "name, interest", "Outputs": "abstract", "Entity_Information": {"name": "Yan Li", "organization": "Alibaba Group", "interest": "Survival Analysis"}}