Columns: text (string, lengths 70 to 7.94k) | __index_level_0__ (int64, values 105 to 711k)
Title: CoTask: Correlation-aware task offloading in edge computing Abstract: In this paper, we study the problem of Correlation-aware Task computation offloading (CoTask) in mobile edge computing. Specifically, considering the correlation among multiple computation tasks, we study how to determine a joint task offloading decision and resource allocation strategy under the constraints of feasible offloading decisions and edge servers' computation capacity, such that the overall task latency is minimized. Before addressing the challenging CoTask problem, we first investigate the case without task correlation, namely, NonCoTask. We prove that NonCoTask, with two coupled integer and continuous optimization variables (i.e., an integral offloading decision variable and a continuous resource allocation variable), can be equivalently solved by solving a 0-1 integer programming problem with respect to the offloading decision variable only, while the closed-form optimal resource allocation can be efficiently obtained under any given offloading decision. The 0-1 integer programming problem further falls into the realm of minimizing a supermodular set function under a matroid base constraint. We then propose a performance-guaranteed algorithm for NonCoTask. Next, we rigorously analyze the performance gap between CoTask and NonCoTask, and develop a correlation-aware offloading decision and resource allocation algorithm with a theoretical performance guarantee for CoTask, via a correlation-aware exchange search based on the solution to NonCoTask. Extensive evaluation results show that our proposed algorithm outperforms several state-of-the-art algorithms, as well as their correlation-enhanced counterparts, in terms of overall task latency.
1,062
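To make the exchange-search idea in the CoTask abstract above concrete, here is a minimal Python sketch of a single-flip exchange search over binary offloading decisions. The latency model (an even capacity split, the local/work numbers) is entirely hypothetical and only stands in for the paper's closed-form allocation; the search pattern is the point.

    def total_latency(decision, local, work, capacity):
        # Toy model: task i either runs locally in local[i] seconds, or on the
        # edge server, whose capacity is split evenly among offloaded tasks
        # (an even split stands in for the paper's closed-form allocation).
        offloaded = [i for i, d in enumerate(decision) if d == 1]
        latency = sum(local[i] for i, d in enumerate(decision) if d == 0)
        if offloaded:
            share = capacity / len(offloaded)
            latency += sum(work[i] / share for i in offloaded)
        return latency

    def exchange_search(local, work, capacity):
        n = len(local)
        decision = [0] * n  # start with every task executed locally
        best = total_latency(decision, local, work, capacity)
        improved = True
        while improved:  # keep applying improving single-flip exchanges
            improved = False
            for i in range(n):
                cand = decision.copy()
                cand[i] ^= 1
                val = total_latency(cand, local, work, capacity)
                if val < best:
                    best, decision, improved = val, cand, True
        return decision, best

    print(exchange_search(local=[4.0, 1.0, 3.0], work=[2.0, 2.0, 6.0], capacity=4.0))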
Title: Edge computing empowered anomaly detection framework with dynamic insertion and deletion schemes on data streams Abstract: Anomaly detection plays a crucial role in many Internet of Things (IoT) applications such as traffic anomaly detection for smart transportation and medical diagnosis for smart healthcare. With the explosion of IoT data, anomaly detection on data streams raises higher requirements for real-time response and strong robustness to large-scale data arriving simultaneously from various application fields. However, existing methods are either slow or application-specific. Inspired by edge computing and generic anomaly detection techniques, we propose an isolation forest based framework with dynamic Insertion and Deletion schemes (IDForest), which can incrementally update the forest to detect anomalies on data streams. Besides, IDForest is deployed on edge servers in parallel by packing each tree into a subtask, which facilitates fast anomaly detection on data streams. Extensive experiments on both synthetic and real-life datasets demonstrate the efficiency and robustness of our framework for anomaly detection.
1,063
Title: A threat recognition solution of edge data security in industrial internet Abstract: Edge computing, as a computing model closer to industrial data sources, has gradually become a focus of the industrial Internet. Edge computing in the industrial Internet is characterized by distribution, openness, and interconnection. As a result, resources distributed at the edge are more vulnerable to malicious external attacks, resulting in economic losses. In recent years especially, multi-step attacks have become more complex and more covert, and traditional threat identification algorithms may fail to uncover the real destruction strategy behind an attacker. In this paper, an intrusion detection system based on an improved Chimp Optimization Algorithm (IChOA) and an attribute association-based multi-step attack threat recognition algorithm (AABMATR) are proposed, which address the low accuracy of existing multi-step attack association algorithms and the difficulty of accurately discovering the attack patterns behind malicious attackers. By calculating the multi-step attack frequency matrix and similarity, a multi-step attack directed graph is generated, and the forward correlation strength is calculated to predict the attacker's next attack plan. Experiments were conducted on the public industrial control traffic dataset 4ICS Geek Lounge and, compared with existing algorithms, higher correlation accuracy was obtained. Building on existing research results, this paper proposes an edge computing threat recognition algorithm for the industrial Internet and obtains good experimental results, which has positive significance for protecting the resource security, and especially the data security, of the industrial Internet.
1,064
Title: Design of vehicle certification schemes in IoV based on blockchain Abstract: Because of the large number of vehicles in the Internet of Vehicles (IoV), the distribution of nodes, and the complex driving environment, data security and certification speed are easily affected. Blockchain enables devices that do not trust each other to work together, maintain a shared state during information dissemination and sharing, and protect device privacy. At present, however, vehicle certification in IoV is slow, and the use of idle resources is not considered. To address this problem, this paper provides a blockchain-based vehicle identity verification scheme that uses a hybrid identity code verification method to ensure that nodes in the network share information securely. Meanwhile, a task processing algorithm based on time windows is proposed to optimize the utilization of idle resources. The method is evaluated by simulation experiments; the designed scheme can reduce malicious behavior by registered vehicles in the network and can shorten task processing delay.
1,065
Title: Determinantal point process-based new radio unlicensed link scheduling for multi-access edge computing Abstract: The combination of Multi-access Edge Computing (MEC) and New Radio Unlicensed (NR-U) can provide more powerful computing power and wireless access for the Internet of Things (IoT). An essential function in NR-U is to allocate time-frequency resources for IoT devices while ensuring coexistence with wireless local area networks (Wi-Fi) in the same spectrum. However, in licensed assisted access (LAA) based NR-U, the listen-before-talk (LBT) mechanism makes nodes located at the edges of the NR-U suffer from the hidden node problem. This can waste time-frequency resources in NR-U and degrade system performance. In this paper, we propose a determinantal point process (DPP) based link scheduling scheme to improve the downlink performance of terminals in the hidden-node zone. Instead of scheduling relay links based on assignment methods, we introduce a recommendation scheme that recommends D2D transmission nodes from a set of candidate nodes for hidden nodes and establishes D2D relay links. The recommendation scheme is designed based on a DPP model. The simulation results demonstrate that the proposed DPP method is capable of effectively selecting the optimal transmitting nodes and assisting more hidden nodes while maximizing the sum rate.
1,066
Title: HWOA: an intelligent hybrid whale optimization algorithm for multi-objective task selection strategy in edge cloud computing system Abstract: Edge computing is a popular computing modality that works by placing computing resources as close as possible to the sensor data, to relieve the burden of network bandwidth and data centers in cloud computing. However, as the volume of data and the scale of tasks processed by edge terminals continue to increase, how to optimize task selection based on execution time with limited computing resources becomes a pressing problem. To this end, a hybrid whale optimization algorithm (HWOA) is proposed for multi-objective edge computing task selection. In addition to the execution time of a task, economic profits are also considered in optimizing task selection. Specifically, a fuzzy function is designed to address the uncertainty of tasks' economic profits and execution times. Five interactive constraints among tasks are presented and formulated to improve the performance of task selection. Furthermore, improved strategies are designed to counter the tendency of the whale optimization algorithm (WOA) toward local optima entrapment. Finally, an extensive experimental assessment on synthetic datasets is conducted to evaluate the multi-objective optimization performance. Compared with the traditional WOA, the diversity metric (Δ-spread), the hypervolume (HV) and other evaluation metrics are significantly improved. The experimental results also indicate that the proposed approach achieves remarkable performance compared with other competitive methods.
1,067
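For readers unfamiliar with the base algorithm the HWOA abstract above builds on, the sketch below implements the core single-objective WOA update loop (encircling the best whale, random-whale exploration, spiral bubble-net move) on a toy sphere function. All parameter choices are illustrative, and the vector-valued test on A approximates the usual scalar |A| < 1 test; the paper's fuzzy objectives, interactive constraints and anti-entrapment strategies are not modeled here.

    import numpy as np

    rng = np.random.default_rng(1)

    def woa(obj, dim=2, n=20, iters=200, lo=-5.0, hi=5.0):
        # Core whale optimization algorithm loop (single-objective sketch).
        X = rng.uniform(lo, hi, (n, dim))
        best = min(X, key=obj).copy()
        for t in range(iters):
            a = 2.0 - 2.0 * t / iters  # "a" decreases linearly from 2 to 0
            for i in range(n):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                if rng.random() < 0.5:
                    if np.linalg.norm(A) < 1:  # exploit: encircle best whale
                        X[i] = best - A * np.abs(C * best - X[i])
                    else:                      # explore: follow a random whale
                        rand = X[rng.integers(n)]
                        X[i] = rand - A * np.abs(C * rand - X[i])
                else:                          # spiral bubble-net maneuver
                    l = rng.uniform(-1, 1)
                    X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
                X[i] = np.clip(X[i], lo, hi)
                if obj(X[i]) < obj(best):
                    best = X[i].copy()
        return best

    print(woa(lambda x: float(np.sum(x ** 2))))  # sphere test; should land near the origin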
Title: The notion of event in probability and causality: Situating myself relative to Bruno de Finetti Abstract: As Bruno de Finetti taught us, the notion of event in a theory of probability is fundamental, perhaps determinative. In this paper, I compare the notion of event in de Finetti's subjective theory of probability with the more situated notion of event that underlies the theory of probability and causality that I developed in the 1990s.
1,091
Title: Causal interpretation of graphical models Abstract: Shafer and Vovk have shown how to base probability theory on game theory. In this framework, we give probabilities an empirical and predictive meaning by means of a form of Cournot's principle, which says that reality will not permit a gambler to win disproportionately to the capital he risks. How does this principle apply to the causal interpretation of graphical models?
1,094
Title: Renyi Entropy and Free Energy Abstract: The Renyi entropy is a generalization of the usual concept of entropy which depends on a parameter q. In fact, Renyi entropy is closely related to free energy. Suppose we start with a system in thermal equilibrium and then suddenly divide the temperature by q. Then the maximum amount of work the system can perform as it moves to equilibrium at the new temperature, divided by the change in temperature, equals the system's Renyi entropy in its original state. This result applies to both classical and quantum systems. Mathematically, we can express this result as follows: the Renyi entropy of a system in thermal equilibrium is minus the 'q^{-1}-derivative' of its free energy with respect to the temperature. This shows that Renyi entropy is a q-deformation of the usual concept of entropy.
1,095
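The stated relation is easy to check numerically. The sketch below (my own toy spectrum, with k_B = 1) compares the Rényi entropy S_q = (1 - q)^{-1} log Σ_i p_i^q of a thermal state at temperature T against the finite-difference form of the claim, -(F(T/q) - F(T)) / (T/q - T):

    import numpy as np

    E = np.array([0.0, 1.0, 2.5])  # illustrative energy levels, k_B = 1

    def free_energy(T):
        return -T * np.log(np.sum(np.exp(-E / T)))

    def renyi_entropy(q, T):
        p = np.exp(-E / T)
        p /= p.sum()               # Gibbs state at temperature T
        return np.log(np.sum(p ** q)) / (1.0 - q)

    T, q = 1.0, 2.0
    lhs = renyi_entropy(q, T)
    rhs = -(free_energy(T / q) - free_energy(T)) / (T / q - T)
    print(lhs, rhs)                # both come out ≈ 0.610 for this spectrum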
Title: Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting Abstract: Performing exact Bayesian inference for complex models is computationally intractable. Markov chain Monte Carlo (MCMC) algorithms can provide reliable approximations of the posterior distribution but are expensive for large data sets and high-dimensional models. A standard approach to mitigate this complexity consists in using subsampling techniques or distributing the data across a cluster. However, these approaches are typically unreliable in high-dimensional scenarios. We focus here on a recent alternative class of MCMC schemes exploiting a splitting strategy akin to the one used by the celebrated alternating direction method of multipliers (ADMM) optimization algorithm. These methods appear to provide empirically state-of-the-art performance but their theoretical behavior in high dimension is currently unknown. In this paper, we propose a detailed theoretical study of one of these algorithms known as the split Gibbs sampler. Under regularity conditions, we establish explicit convergence rates for this scheme using Ricci curvature and coupling ideas. We support our theory with numerical illustrations.
2,133
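A minimal sketch of the split Gibbs sampler the abstract studies, for a one-dimensional target π(x) ∝ exp(-f(x) - g(x)) with Gaussian potentials, so that both conditionals of the ADMM-style splitting π_ρ(x, z) ∝ exp(-f(x) - g(z) - (x - z)²/(2ρ²)) are available in closed form. All numbers are toy choices of mine:

    import numpy as np

    rng = np.random.default_rng(0)
    mu1, s1, mu2, s2, rho = 0.0, 1.0, 3.0, 2.0, 0.3

    def gaussian_cond(mu, s, other):
        # Conditional of a Gaussian potential coupled to `other` through
        # the quadratic splitting term (x - z)^2 / (2 rho^2).
        prec = 1.0 / s**2 + 1.0 / rho**2
        mean = (mu / s**2 + other / rho**2) / prec
        return rng.normal(mean, prec ** -0.5)

    x, z, xs = 0.0, 0.0, []
    for _ in range(20000):
        x = gaussian_cond(mu1, s1, z)   # sample x | z
        z = gaussian_cond(mu2, s2, x)   # sample z | x
        xs.append(x)

    # As rho -> 0 the x-marginal approaches the exact product-of-Gaussians
    # target, whose mean is (mu1/s1^2 + mu2/s2^2) / (1/s1^2 + 1/s2^2) = 0.6.
    print(np.mean(xs[1000:]))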
Title: Laissez-faire versus Pareto Abstract: Consider two principles for social evaluation. The first, “laissez-faire”, says that mean-preserving redistribution away from laissez-faire incomes should be regarded as a social worsening. This principle captures a key aspect of libertarian political philosophy. The second, weak Pareto, states that an increase in the disposable income of each individual should be regarded as a social improvement. We show that the combination of the two principles implies that total disposable income ought to be maximized. Strikingly, the relationship between disposable incomes and laissez-faire incomes must therefore be ignored, leaving little room for libertarian values.
2,195
Title: Large scale tensor regression using kernels and variational inference Abstract: We outline an inherent flaw of tensor factorization models when latent factors are expressed as a function of side information and propose a novel method to mitigate this. We coin our methodology kernel fried tensor (KFT) and present it as a large-scale prediction and forecasting tool for high dimensional data. Our results show superior performance against LightGBM and field-aware factorization machines (FFM), two algorithms with proven track records widely used in large-scale prediction. We also develop a variational inference framework for KFT which enables associating the predictions and forecasts with calibrated uncertainty estimates on several datasets.
2,201
Title: Kernel dependence regularizers and Gaussian processes with applications to algorithmic fairness Abstract:
• A general framework of empirical risk minimization with fairness regularizers and an analysis of its risk and fairness statistical consistency results are presented.
• A Gaussian Process (GP) formulation of the fairness regularization framework is derived, which allows uncertainty quantification and principled hyperparameter selection.
• A normalized version of the fairness regularizer, which makes it less sensitive to the choice of kernel parameters, is derived.
2,202
Title: Model-driven engineering for mobile robotic systems: a systematic mapping study Abstract: Mobile robots operate in various environments (e.g. aquatic, aerial, or terrestrial), they come in many diverse shapes, and they are increasingly becoming part of our lives. The successful engineering of mobile robotics systems (MRSs) demands the interdisciplinary collaboration of experts from different domains, such as mechanical and electrical engineering, artificial intelligence, and systems engineering. Research and industry have tried to tackle this heterogeneity by proposing a multitude of model-driven solutions to engineer the software of MRSs. However, there is no systematic study of the state of the art in model-driven engineering (MDE) for MRSs that could guide researchers or practitioners in finding model-driven solutions and tools to efficiently engineer MRSs. This paper contributes in this direction by providing a map of software engineering research in MDE that investigates (1) which types of robots are supported by existing MDE approaches, (2) the types and characteristics of MRSs that are engineered using MDE approaches, (3) a description of how MDE approaches support the engineering of MRSs, (4) how existing MDE approaches are validated, and (5) how tools support existing MDE approaches. We also provide a replication package to assess, extend, and/or replicate the study. The results of this work and the highlighted challenges can guide researchers and practitioners from robotics and software engineering through the research landscape.
2,212
Title: Edge guards for polyhedra in three-space Abstract: It is shown that every polyhedron in R3 can be guarded by at most 56 of its edges. This result holds even if the boundary of the polyhedron is disconnected (i.e., if the polyhedron has “holes”), and regardless of the genus of each connected component of its boundary.
2,227
Title: On comprehensive cascade control strategy considering a class of overactuated autonomous non-rigid space systems with model uncertainties Abstract: With a focus on a number of state-of-the-art techniques in the area of autonomous non-rigid space systems control, realization of a comprehensive strategy is worth investigating to handle the parameters of the present overactuated processes with model uncertainties. In short, the aim of this research is to guarantee the desirable performance of a class of autonomous space systems, considered through the moments of inertia, the center of mass, the profile of the thrust vector and the misalignments of the propellant engine, to deal with mission operation plans. The attitude cascade strategy includes the low-thrust three-axis engine-off mode control, the low-thrust x-axis engine-on mode control and, finally, the high-thrust y,z-axis engine-on mode control. The control strategy is realized in a number of loops, where the on and off modes of the propellant engine are focused on Euler angle control, in finite burn time, and quaternion vector control, in non-burn time, respectively, in line with parameter variations. Note that the parameter variations differ in each of the engine modes. The dynamics of high-low thrusters are taken into consideration, where control allocations in association with pulse-width pulse-frequency modulators are employed to cope with a set of on-off reaction thrusters. The investigated results are finally analyzed against related well-known benchmarks to verify the performance of the approach. The main contribution of the strategy investigated here is a novel three-axis comprehensive cascade robust control solution that can deal with the parameters of autonomous non-rigid space systems with model uncertainties, in a synchronous manner, while tracking the three-axis reference commands efficiently and with high accuracy, in line with recent results in this area.
2,239
Title: The Art Gallery Problem is ∃ℝ-complete Abstract: The Art Gallery Problem (AGP) is a classic problem in computational geometry, introduced in 1973 by Victor Klee. Given a simple polygon P and an integer k, the goal is to decide if there exists a set G of k guards within P such that every point p ∈ P is seen by at least one guard g ∈ G. Each guard corresponds to a point in the polygon P, and we say that a guard g sees a point p if the line segment pg is contained in P. We prove that the AGP is ∃ℝ-complete, implying that (1) any system of polynomial equations over the real numbers can be encoded as an instance of the AGP, and (2) the AGP is not in the complexity class NP unless NP = ∃ℝ. As a corollary of our construction, we prove that for any real algebraic number α, there is an instance of the AGP where one of the coordinates of the guards equals α in any guard set of minimum cardinality. That rules out many natural geometric approaches to the problem, as it shows that any approach based on constructing a finite set of candidate points for placing guards has to include points with coordinates being roots of polynomials with arbitrary degree. As an illustration of our techniques, we show that for every compact semi-algebraic set S ⊆ [0,1]^2, there exists a polygon with corners at rational coordinates such that for every p ∈ [0,1]^2, there is a set of guards of minimum cardinality containing p if and only if p ∈ S. In the ∃ℝ-hardness proof for the AGP, we introduce a new ∃ℝ-complete problem ETR-INV. We believe that this problem is of independent interest, as it has already been used to obtain ∃ℝ-hardness proofs for other problems.
2,261
Title: Session-typed concurrent contracts Abstract: In sequential languages, dynamic contracts are usually expressed as boolean functions without externally observable effects, written within the language. We propose an analogous notion of concurrent contracts for languages with session-typed message-passing concurrency. Concurrent contracts are partial identity processes that monitor the bidirectional communication along channels and raise an alarm if a contract is violated. Concurrent contracts are session-typed in the usual way and must also satisfy a transparency requirement, which guarantees that terminating compliant programs with and without the contracts are observationally equivalent. We illustrate concurrent contracts with several examples. We also show how to generate contracts from a refinement session-type system and show that the resulting monitors are redundant for programs that can statically be seen to be well-typed.
2,290
Title: Redividing the cake Abstract: The paper considers fair allocation of resources that are already allocated in an unfair way. This setting requires a careful balance between the fairness considerations and the rights of the present owners. The paper presents re-division algorithms that attain various trade-off points between fairness and ownership rights, in various settings differing in the geometric constraints on the allotments: (a) no geometric constraints; (b) connectivity: the cake is a one-dimensional interval and each piece must be a contiguous interval; (c) rectangularity: the cake is a two-dimensional rectangle or rectilinear polygon and the pieces should be rectangles; (d) convexity: the cake is a two-dimensional convex polygon and the pieces should be convex. These re-division algorithms have implications for another problem, the price of fairness: the loss of social welfare caused by fairness requirements. Each algorithm implies an upper bound on the price of fairness under the respective geometric constraints.
2,303
Title: Small Hazard-free Transducers. Abstract: Recently, an unconditional exponential separation between the hazard-free complexity and (standard) circuit complexity of explicit functions has been shown~\cite{ikenmeyer18complexity}. This raises the question: which classes of functions permit efficient hazard-free circuits? Our main result is as follows. A \emph{transducer} is a finite state machine that transcribes, symbol by symbol, an input string of length $n$ into an output string of length $n$. We prove that any function arising from a transducer with $s$ states receiving input symbols encoded by $\ell$ bits has a hazard-free circuit of size $2^{O(s+\ell)}\cdot n$ and depth $O(\ell+ s\cdot \log n)$; in particular, if $s, \ell\in O(1)$, size and depth are asymptotically optimal. We utilize our main result to derive efficient circuits for \emph{$k$-recoverable addition}. Informally speaking, a code is \emph{$k$-recoverable} if it does not increase uncertainty regarding the encoded value, so long as it is guaranteed that it is from $\{x,x+1,\ldots,x+k\}$ for some $x\in \mathbb{N}_0$. We provide an asymptotically optimal $k$-recoverable code. We also realize a transducer with $O(k)$ states that adds two codewords from this $k$-recoverable code. Combined with our main result, we obtain a hazard-free adder circuit of size $2^{O(k)}n$ and depth $O(k\log n)$ with respect to this code, i.e., a $k$-recoverable adder circuit that adds two codewords of $n$ bits each. In other words, $k$-recoverable addition is fixed-parameter tractable with respect to $k$. We then reduce the maximum size of the state machines involved to $O(1)$, resulting in a circuit for $k$-recoverable addition of size $O(n+k\log k)$ and depth $O(\log n)$. Thus, if the uncertainties of each of the addends span intervals of length $O(n/\log n)$, there is an \emph{asymptotically optimal} adder that attains the best possible output uncertainty.
2,340
Title: Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification Abstract: Pain is an unpleasant feeling that has been shown to be an important factor for the recovery of patients. Since measuring pain is costly in human resources and difficult to do objectively, there is a need for automatic systems to measure it. In this paper, contrary to current state-of-the-art techniques in pain assessment, which are based on facial features only, we suggest that performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first uses convolutional neural networks (CNNs) to learn facial features from VGG_Faces, which are then linked to a long short-term memory network to exploit the temporal relation between video frames. We further compare the performance of the popular scheme based on the canonically normalized appearance against taking the whole image into account. As a result, we outperform the current state-of-the-art area-under-the-curve performance on the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results on the Cohn-Kanade+ facial expression database.
2,631
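As an illustration of the CNN-to-LSTM pipeline described above, here is a self-contained PyTorch sketch: a small convolutional encoder (a stand-in for the VGG_Faces backbone used in the paper) produces per-frame features that an LSTM consumes over the frame sequence. All layer sizes are arbitrary choices of mine.

    import torch
    import torch.nn as nn

    class PainNet(nn.Module):
        def __init__(self, feat_dim=64, hidden=128, n_classes=2):
            super().__init__()
            # Small conv encoder standing in for a pretrained face backbone.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, clips):                  # clips: (B, T, 3, H, W)
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1))  # per-frame CNN features
            feats = feats.view(b, t, -1)
            out, _ = self.lstm(feats)              # temporal modeling over frames
            return self.head(out[:, -1])           # classify from the last step

    logits = PainNet()(torch.randn(2, 8, 3, 64, 64))
    print(logits.shape)                            # torch.Size([2, 2])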
Title: Summative Usability Assessments of STAR-Vote: A Cryptographically Secure e2e Voting System That Has Been Empirically Proven to Be Easy to Use Abstract: Background: From the project's inception, STAR-Vote was intended to be one of the first usable, end-to-end (e2e) voting systems with sophisticated security. To realize STAR-Vote, computer security experts, statistical auditors, human factors (HF)/human-computer interaction (HCI) researchers, and election officials collaborated throughout the project and relied upon a user-centered, iterative design and development process, which included human factors research and usability testing, to make certain the system would be both usable and secure. Objective: While best practices in HF/HCI methods for design were used and all apparent usability problems were identified and fixed, summative system usability assessments were conducted toward the end of the user-centered design process to determine whether STAR-Vote is in fact easy to use. Method and Results: After collecting efficiency, effectiveness, and satisfaction measurements per ISO 9241-11's system usability criteria, an analysis of the data revealed that there is evidence for STAR-Vote being the most usable, cryptographically secure voting system to date when compared with the previously tested e2e systems: Helios, Pret a Voter, and Scantegrity. Conclusion and Application: STAR-Vote being one of the first e2e voting systems that is both highly usable and secure is a significant accomplishment, because tamper-resistant voting systems can be used in U.S. elections to ensure the integrity of the electoral process, while still ensuring that voter intent is accurately reflected in the cast ballots. Moreover, this research empirically shows that a complex, secure system can still be usable, meaning that implemented security is not an excuse for poor usability.
2,634
Title: A multilayer recognition model for twitter user geolocation Abstract: Geolocation is important for many emerging applications such as disaster management and recommendation systems. In this paper, we propose a multilayer recognition model (MRM) to predict the city-level location of social network users, based solely on the users' tweet content. Through a series of optimizations such as entity selection, spatial clustering and outlier filtering, suitable features are extracted to model the geographic coordinates of tweet users. Then, Multinomial Naive Bayes is applied to classify the datasets into different groups. The model is evaluated by comparison with an existing algorithm on Twitter datasets. The experimental results reveal that our method achieves a better prediction accuracy of 54.82% on the test set, and the average error is reduced to 400.97 miles at best.
2,888
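A toy sketch of the final classification stage described above, using scikit-learn's Multinomial Naive Bayes on raw token counts. The paper's feature-engineering steps (entity selection, spatial clustering, outlier filtering) are omitted, and the tweets and city labels below are invented for illustration:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: tweet text -> city label.
    tweets = ["bagels near central park", "cable car up to twin peaks",
              "deep dish pizza on michigan ave", "broadway show tonight"]
    cities = ["new_york", "san_francisco", "chicago", "new_york"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(tweets, cities)
    print(model.predict(["pizza by the bean", "ferry to alcatraz"]))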
Title: Two-stage linear decision rules for multi-stage stochastic programming Abstract: Multi-stage stochastic linear programs (MSLPs) are notoriously hard to solve in general. Linear decision rules (LDRs) yield an approximation of an MSLP by restricting the decisions at each stage to be an affine function of the observed uncertain parameters. Finding an optimal LDR is a static optimization problem that provides an upper bound on the optimal value of the MSLP, and, under certain assumptions, can be formulated as an explicit linear program. Similarly, as proposed by Kuhn et al. (Math Program 130(1):177–209, 2011) a lower bound for an MSLP can be obtained by restricting decisions in the dual of the MSLP to follow an LDR. We propose a new approximation approach for MSLPs, two-stage LDRs. The idea is to require only the state variables in an MSLP to follow an LDR, which is sufficient to obtain an approximation of an MSLP that is a two-stage stochastic linear program (2SLP). We similarly propose to apply LDR only to a subset of the variables in the dual of the MSLP, which yields a 2SLP approximation of the dual that provides a lower bound on the optimal value of the MSLP. Although solving the corresponding 2SLP approximations exactly is intractable in general, we investigate how approximate solution approaches that have been developed for solving 2SLP can be applied to solve these approximation problems, and derive statistical upper and lower bounds on the optimal value of the MSLP. In addition to potentially yielding better policies and bounds, this approach requires many fewer assumptions than are required to obtain an explicit reformulation when using the standard static LDR approach. A computational study on two example problems demonstrates that using a two-stage LDR can yield significantly better primal policies and modestly better dual policies than using policies based on a static LDR.
3,006
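In symbols (notation illustrative, not taken from the paper): a static LDR restricts every stage-t decision to be affine in the observed history ξ_[t] = (ξ_1, ..., ξ_t),

    \[ x_t(\xi) \;=\; x_t^0 + X_t\,\xi_{[t]}, \]

whereas the two-stage variant described above imposes this restriction only on the state variables s_t,

    \[ s_t(\xi) \;=\; s_t^0 + S_t\,\xi_{[t]}, \qquad \text{all other stage decisions unrestricted}, \]

which is what collapses the multi-stage program into a two-stage stochastic LP over the coefficients (s_t^0, S_t) and the remaining decisions.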
Title: A binary decision diagram based algorithm for solving a class of binary two-stage stochastic programs Abstract: We consider a special class of two-stage stochastic integer programming problems with binary variables appearing in both stages. The class of problems we consider constrains the second-stage variables to belong to the intersection of sets corresponding to first-stage binary variables that equal one. Our approach seeks to uncover strong dual formulations to the second-stage problems by transforming them into dynamic programming (DP) problems parameterized by first-stage variables. We demonstrate how these DPs can be formed by use of binary decision diagrams, which then yield traditional Benders inequalities that can be strengthened based on observations regarding the structure of the underlying DPs. We demonstrate the efficacy of our approach on a set of stochastic traveling salesman problems.
3,008
Title: Quantitative stability analysis for minimax distributionally robust risk optimization Abstract: This paper considers distributionally robust formulations of a two stage stochastic programming problem with the objective of minimizing a distortion risk of the minimal cost incurred at the second stage. We carry out a stability analysis by looking into variations of the ambiguity set under the Wasserstein metric, decision spaces at both stages and the support set of the random variables. In the case when the risk measure is risk neutral, the stability result is presented with the variation of the ambiguity set being measured by generic metrics of ζ-structure, which provides a unified framework for quantitative stability analysis under various metrics including the total variation metric and the Kantorovich metric. When the ambiguity set is structured by a ζ-ball, we find that the Hausdorff distance between two ζ-balls is bounded by the distance of their centers and the difference of their radii. The findings allow us to strengthen some recent convergence results on distributionally robust optimization where the center of the Wasserstein ball is constructed by the empirical probability distribution.
3,013
Title: Scenario reduction revisited: fundamental limits and guarantees Abstract: The goal of scenario reduction is to approximate a given discrete distribution with another discrete distribution that has fewer atoms. We distinguish continuous scenario reduction, where the new atoms may be chosen freely, and discrete scenario reduction, where the new atoms must be chosen from among the existing ones. Using the Wasserstein distance as a measure of proximity between distributions, we identify those n-point distributions on the unit ball that are least susceptible to scenario reduction, i.e., that have maximum Wasserstein distance to their closest m-point distributions for some prescribed m < n. We also provide sharp bounds on the added benefit of continuous over discrete scenario reduction. Finally, to our best knowledge, we propose the first polynomial-time constant-factor approximations for both discrete and continuous scenario reduction as well as the first exact exponential-time algorithms for continuous scenario reduction.
3,018
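For contrast with the optimal procedures the abstract announces, here is the classic greedy backward-reduction heuristic for discrete scenario reduction in one dimension. This is a sketch only; the atom set, the probabilities and the Wasserstein-type removal cost p_i * min_j d(x_i, x_j) are illustrative choices of mine:

    import numpy as np

    def reduce_scenarios(x, p, m):
        # Greedily drop the atom whose removal adds the least cost, folding
        # its probability mass into its nearest surviving atom.
        kept = list(range(len(x)))
        p = p.astype(float).copy()
        while len(kept) > m:
            costs = []
            for i in kept:
                others = [j for j in kept if j != i]
                d = min(abs(x[i] - x[j]) for j in others)
                costs.append((p[i] * d, i))
            _, drop = min(costs)
            kept.remove(drop)
            nearest = min(kept, key=lambda j: abs(x[drop] - x[j]))
            p[nearest] += p[drop]          # reassign probability mass
        return [(x[j], p[j]) for j in kept]

    x = np.array([0.0, 0.1, 1.0, 2.0, 2.05])
    p = np.full(5, 0.2)
    print(reduce_scenarios(x, p, m=3))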
Title: Problem-based optimal scenario generation and reduction in stochastic programming Abstract: Scenarios are indispensable ingredients for the numerical solution of stochastic programs. Earlier approaches to optimal scenario generation and reduction are based on stability arguments involving distances of probability measures. In this paper we review those ideas and suggest to make use of stability estimates based only on problem specific data. For linear two-stage stochastic programs we show that the problem-based approach to optimal scenario generation can be reformulated as best approximation problem for the expected recourse function which in turn can be rewritten as a generalized semi-infinite program. We show that the latter is convex if either right-hand sides or costs are random and can be transformed into a semi-infinite program in a number of cases. We also consider problem-based optimal scenario reduction for two-stage models and optimal scenario generation for chance constrained programs. Finally, we discuss problem-based scenario generation for the classical newsvendor problem.
3,019
Title: A Generic Transformation for Optimal Node Repair in MDS Array Codes Over F_2 Abstract: For high-rate linear systematic maximum distance separable (MDS) codes, most early constructions could initially optimally repair all the systematic nodes but not all the parity nodes. Fortunately, this issue was first solved by Li et al. in (IEEE Trans. Inform. Theory, 64(9), 6257-6267, 2018), where a transformation that can convert any nonbinary MDS array code into another one w...
3,210
Title: Interpretable Optimal Stopping Abstract: Optimal stopping is the problem of deciding when to stop a stochastic system to obtain the greatest reward, arising in numerous application areas such as finance, healthcare, and marketing. State-of-the-art methods for high-dimensional optimal stopping involve approximating the value function or the continuation value and then using that approximation within a greedy policy. Although such policies can perform very well, they are generally not guaranteed to be interpretable; that is, a decision maker may not be able to easily see the link between the current system state and the policy's action. In this paper, we propose a new approach to optimal stopping wherein the policy is represented as a binary tree, in the spirit of naturally interpretable tree models commonly used in machine learning. We show that the class of tree policies is rich enough to approximate the optimal policy. We formulate the problem of learning such policies from observed trajectories of the stochastic system as a sample average approximation (SAA) problem. We prove that the SAA problem converges under mild conditions as the sample size increases but that, computationally, even immediate simplifications of the SAA problem are theoretically intractable. We thus propose a tractable heuristic for approximately solving the SAA problem by greedily constructing the tree from the top down. We demonstrate the value of our approach by applying it to the canonical problem of option pricing, using both synthetic instances and instances based on real Standard & Poor's 500 Index data. Our method obtains policies that (1) outperform state-of-the-art noninterpretable methods, based on simulation regression and martingale duality, and (2) possess a remarkably simple and intuitive structure.
3,212
Title: Inference in functional linear quantile regression Abstract: In this paper, we study statistical inference in functional quantile regression for scalar response and a functional covariate. Specifically, we consider a functional linear quantile regression model where the effect of the covariate on the quantile of the response is modeled through the inner product between the functional covariate and an unknown smooth regression parameter function that varies with the level of quantile. The objective is to test that the regression parameter is constant across several quantile levels of interest. The parameter function is estimated by combining ideas from functional principal component analysis and quantile regression. An adjusted Wald testing procedure is proposed for this hypothesis of interest, and its chi-square asymptotic null distribution is derived. The testing procedure is investigated numerically in simulations involving sparse and noisy functional covariates and in a capital bike share data application. The proposed approach is easy to implement and the R code is published online at https://github.com/xylimeng/fQR-testing.
3,652
Title: Classical and Quantum Spherical Pendulum Abstract: The seminal paper by Niels Bohr followed by a paper by Arnold Sommerfeld led to a revolutionary Bohr-Sommerfeld theory of atomic spectra. We are interested in the information about the structure of quantum mechanics encoded in this theory. In particular, we want to extend Bohr-Sommerfeld theory to a full quantum theory of completely integrable Hamiltonian systems, which is compatible with geometric quantization. In the general case, we use geometric quantization to prove analogues of the Bohr-Sommerfeld quantization conditions for the prequantum operators P_f. If a prequantum operator P_f satisfies the Bohr-Sommerfeld conditions and if it restricts to a directly quantized operator Q(f) in the representation corresponding to the polarization F, then Q(f) also satisfies the Bohr-Sommerfeld conditions. The proof that the quantum spherical pendulum is a quantum system of the type we are looking for requires a new treatment of the classical action functions and their properties. For the sake of completeness we have provided an extensive presentation of the classical spherical pendulum. In our approach to Bohr-Sommerfeld theory, which we call Bohr-Sommerfeld-Heisenberg quantization, we define shifting operators that provide transitions between different quantum states. Moreover, we relate these shifting operators to quantization of functions on the phase space of the theory. We use Bohr-Sommerfeld-Heisenberg theory to study the properties of the quantum spherical pendulum, in particular, the boundary conditions for the shifting operators and quantum monodromy.
3,653
Title: Effective approximation of the solutions of algebraic equations Abstract: Let F be a holomorphic map whose components satisfy some polynomial relations. We present an algorithm for constructing Nash maps locally approximating F, whose components satisfy the same relations.
3,656
Title: Model theory of adeles I Abstract: We study the model theory of the ring of adeles A_K of a number field K. We work in the language of rings and various extensions, and obtain quantifier elimination. We give a description of the definable subsets of A_K^n, for any n ≥ 1, and prove that they are measurable. We introduce some techniques for computing their measures. We describe the definable sets of minimal idempotents in A_K. We show that the quotient of the space of adele classes by the action of the maximal compact subgroup of the idele class group is interpretable in the adeles. We give a short proof of decidability of A_Q, and consider uniformity of adelic quantifier elimination in the number field.
3,667
Title: Integrating life-cycle assessment and multi-criteria decision analysis to compare alternative biodiesel chains Abstract: The transport sector is highly dependent on fossil fuels with significant environmental impacts. This motivates the environmental assessment of alternative fuel options, including biodiesel based on agricultural crops. The assessment of biofuel alternatives for transportation can be facilitated by the integration of Life-Cycle Assessment (LCA) and Multi-Criteria Decision Analysis (MCDA). In this article, we compare four Rapeseed Methyl Ester biodiesel production chains, corresponding to four different feedstock origins. The environmental impact of each chain is assessed in the context of a LCA encompassing cultivation, transportation to Portugal, extraction and transesterification. We apply two different MCDA additive aggregation methodologies to aggregate various impact categories resulting from the Life Cycle Impact Assessment (LCIA) phase of the LCA. The chosen MCDA methodologies, Stochastic Multicriteria Analysis and Variable Interdependent Parameter Analysis, are two complementary approaches to address one of the main difficulties of MCDA: setting the relative weights of the evaluation criteria. Indeed, weighting the various impacts in the LCIA phase is a controversial issue in LCA research and studies. The LCIA–MCDA approach proposed in this work does not require choosing a specific weighting vector, seeking to assess which conclusions are robust given some freedom allowed in the choice of weights. To study further the robustness of the conclusions concerning the choice of the criteria, the effects of removing one criterion are analyzed, one at a time.
3,772
Title: Shotgun reconstruction in the hypercube Abstract: Mossel and Ross raised the question of when a random coloring of a graph can be reconstructed from local information, namely, the colorings (with multiplicity) of balls of given radius. In this article, we are concerned with random 2-colorings of the vertices of the n-dimensional hypercube, or equivalently random Boolean functions. In the worst case, balls of diameter ω(n) are required to reconstruct. However, the situation for random colorings is dramatically different: we show that almost every 2-coloring can be reconstructed from the multiset of colorings of balls of radius 2. Furthermore, we show that for q ≥ n^{2+ε}, almost every q-coloring can be reconstructed from the multiset of colorings of 1-balls.
3,799
Title: Asymptotic and bootstrap tests for subspace dimension Abstract: Many linear dimension reduction methods proposed in the literature can be formulated using an appropriate pair of scatter matrices. The eigen-decomposition of one scatter matrix with respect to another is then often used to determine the dimension of the signal subspace and to separate signal and noise parts of the data. Three popular dimension reduction methods, namely principal component analysis (PCA), fourth order blind identification (FOBI) and sliced inverse regression (SIR) are considered in detail and the first two moments of subsets of the eigenvalues are used to test for the dimension of the signal space. The limiting null distributions of the test statistics are discussed and novel bootstrap strategies are suggested for the small sample cases. In all three cases, consistent test-based estimates of the signal subspace dimension are introduced as well. The asymptotic and bootstrap tests are illustrated in real data examples.
3,811
Title: Two-stage stochastic programming under multivariate risk constraints with an application to humanitarian relief network design Abstract: In this study, we consider two classes of multicriteria two-stage stochastic programs in finite probability spaces with multivariate risk constraints. The first-stage problem features multivariate stochastic benchmarking constraints based on a vector-valued random variable representing multiple and possibly conflicting stochastic performance measures associated with the second-stage decisions. In particular, the aim is to ensure that the decision-based random outcome vector of interest is preferable to a specified benchmark with respect to the multivariate polyhedral conditional value-at-risk or a multivariate stochastic order relation. In this case, the classical decomposition methods cannot be used directly due to the complicating multivariate stochastic benchmarking constraints. We propose an exact unified decomposition framework for solving these two classes of optimization problems and show its finite convergence. We apply the proposed approach to a stochastic network design problem in the context of pre-disaster humanitarian logistics and conduct a computational study concerning the threat of hurricanes in the Southeastern part of the United States. The numerical results provide practical insights about our modeling approach and show that the proposed algorithm is computationally scalable.
3,830
Title: Embedding carbon impact assessment in multi-criteria supplier segmentation using ELECTRE TRI-rC Abstract: In the past decade, there has been an increasing interest in green supply chain management, which integrates environmental thinking into supply chain management. Assessing a supplier's potential for improvement is very important when an organization wants to achieve certain environmental targets concerning its supply base, taking into account the limited resources available. In this paper, incorporating environmental evaluation criteria into a comprehensive supplier segmentation approach called the 'supplier potential matrix' (SPM), a green supplier segmentation is proposed to segment the suppliers. Two overarching dimensions, the supplier's capabilities and the supplier's willingness, are used to evaluate the supplier's green potential. The two dimensions are measured by multiple criteria. A sorting method called ELECTRE TRI-rC is used to solve the resulting multi-criteria decision-making problem. In order to make a more meaningful distinction, a simple method is also proposed to assess the suppliers with respect to the carbon footprint of the raw materials they supply. The results of this assessment are combined with those of the SPM, resulting in a more useful segmentation. The proposed model is applied to a sample containing the suppliers of a large international company.
3,864
Title: Bridging the research-practice gap in disaster relief: using the IFRC Code of Conduct to develop an aid model Abstract: Bridging the gap between research and practice has been a recognized problem in many fields, and has been especially noticeable in the field of disaster relief. As the number and impact of disasters have increased, there has been a jump in interest from the research community in an attempt to provide tools and solutions for some of the challenges in the field. The International Federation of Red Cross and Red Crescent Societies (IFRC) Code of Conduct (CoC) for Disaster Operations provides a qualitative set of guidelines that is an excellent building block for operational theory, but is insufficiently rigorous in guiding quantitative decision making. In this paper, we review the CoC, exploring each of the ten core principles and identifying three significant operational trade-offs. We then propose a model framework that can be implemented as a stand-alone model, or can be used as a foundation for other quantitative aid allocation models. Finally, we provide an example of how the proposed model could be used to guide decision making in a Microsoft Excel® environment using CoinOR's OpenSolver®. New insights in the field of aid disbursement are provided by examining the challenges of financial management and investment as dictated by the CoC. This paper fills a unique gap in the literature by addressing the issue of financial allocation as guided by a qualitative standard used by the disaster relief community, and serves as a complement to the work in the field of humanitarian logistics.
3,894
Title: Revocable Identity-Based Access Control for Big Data with Verifiable Outsourced Computing Abstract: To be able to leverage big data to achieve enhanced strategic insight, process optimization and informed decision making, we need an efficient access control mechanism for ensuring end-to-end security of such information assets. Signcryption is one of several promising techniques to simultaneously achieve big data confidentiality and authenticity. However, signcryption suffers from the limitati...
3,916
Title: PPHOPCM: Privacy-Preserving High-Order Possibilistic c-Means Algorithm for Big Data Clustering with Cloud Computing Abstract: As one important technique of fuzzy clustering in data mining and pattern recognition, the possibilistic c-means algorithm (PCM) has been widely used in image analysis and knowledge discovery. However, it is difficult for PCM to produce a good result for clustering big data, especially heterogeneous data, since it was initially designed for only small structured datasets. To tackle this problem, ...
3,923
Title: NPP: A New Privacy-Aware Public Auditing Scheme for Cloud Data Sharing with Group Users Abstract: Today, cloud storage becomes one of the critical services, because users can easily modify and share data with others in cloud. However, the integrity of shared cloud data is vulnerable to inevitable hardware faults, software failures or human errors. To ensure the integrity of the shared data, some schemes have been designed to allow public verifiers (i.e., third party auditors) to efficiently au...
3,931
Title: Toward Architectural and Protocol-Level Foundation for End-to-End Trustworthiness in Cloud/Fog Computing Abstract: With Cloud/Fog Computing being a paradigm that combines the IoT context with Edge Computing extended by Cloud/Fog, business processes in it involve dataflows across multiple layers and multiple nodes, possibly provided by multiple organizations. Achieving end-to-end trustworthiness over the whole dataflow in such a Cloud/Fog Computing context is a challenging issue, nonetheless a necessary pre-condition for a ...
3,946
Title: Trustworthiness Evaluation-Based Routing Protocol for Incompletely Predictable Vehicular Ad Hoc Networks Abstract: Incompletely predictable vehicular ad hoc networks are a type of network where vehicles move within a certain range or with a particular tendency, which is very similar to some circumstances in reality. However, how to route in such networks more efficiently, according to node motion characteristics and related historical big data, is still an open issue. In this paper, we propose a novel...
3,978
Title: Time Series Anomaly Detection for Trustworthy Services in Cloud Computing Systems Abstract: As a powerful architecture for large-scale computation, cloud computing has revolutionized the way that computing infrastructure is abstracted and utilized. Coupled with the challenges caused by Big Data, the rocketing development of cloud computing boosts the complexity of system management and maintenance, resulting in weakened trustworthiness of cloud services. To cope with this problem, a comp...
3,979
Title: Design of Remote Fitting Platform Based on Virtual Clothing Sales Abstract: To improve the convenience and effect of the fitting process, an online fitting system for user-personalized body types based on virtual technology is put forward. Firstly, the fitting system is developed with Java as the development language, and the human model, spinning motion model, and virtual clothing model are constructed; then, the relevant model creation is divided based on the body, the personalized body type is generated, and the slab model, represented by triangular patches, is designed, in which all points in a same-gender model keep their spatial position relations with each other to simplify the interpolation process; finally, the efficiency of the developed system is verified through factor analysis and cluster fitting design.
3,994
Title: Achieving Efficient and Privacy-Preserving Cross-Domain Big Data Deduplication in Cloud Abstract: Secure data deduplication can significantly reduce the communication and storage overheads in cloud storage services, and has potential applications in our big data-driven society. Existing data deduplication schemes are generally designed to either resist brute-force attacks or ensure efficiency and data availability, but not both. We are also not aware of any existing scheme that ...
4,024
Title: Finding the most sustainable wind farm sites with a hierarchical outranking decision aiding method Abstract: This paper considers the problem of finding suitable sites for wind farms in a region of Catalonia (Spain). The evaluation criteria are structured into a hierarchy that identifies several intermediate sub-goals dealing with different points of view. Therefore, the recent ELECTRE-III-H hierarchical multi-criteria analysis method is proposed as a good solution to help decision-makers. This method establishes an order among the set of possible sites for the wind farms for each sub-goal. ELECTRE-III-H aggregates these orders into an overall order using different parameters. The procedure is based on the construction and exploitation of a pairwise outranking relation, following the principles of concordance (i.e. majority rule) and discordance (i.e. respect for the minority opinions). This paper makes two main contributions. First, it contributes to the ELECTRE-III-H method by studying its mathematical properties for the construction of outranking relations. Second, the case study is solved and its results show that we can effectively represent and manage the overall influence of the various criteria on the global result at different levels of the hierarchy. The paper compares different scenarios with strict, normal, and optimistic preference, indifference and veto thresholds. Results show that the best site differs for technical, economic, environmental, and social intermediate criteria. Therefore, the best overall solution changes depending on the preference and veto thresholds fixed at the intermediate level of the hierarchy.
4,093
Title: Identifiability of linear compartmental models: The singular locus Abstract: This work addresses the problem of identifiability, that is, the question of whether parameters can be recovered from data, for linear compartmental models. Using standard differential algebra techniques, the question of whether a given model is generically locally identifiable is equivalent to asking whether the Jacobian matrix of a certain coefficient map, arising from input-output equations, is generically full rank. A natural next step is to study the set of parameter values where the Jacobian matrix drops in rank, which we refer to as the locus of non-identifiable parameter values, or, for short, the singular locus. In this work, we give a formula for coefficient maps in terms of acyclic subgraphs of the model's underlying directed graph and, then, study the case when the singular locus is defined by a single equation, the singular-locus equation. We prove that the singular-locus equation can be used to determine when submodels are generically locally identifiable. We also determine the singular-locus equation for two families of linear compartmental models, cycle and mammillary (star) models with input and output in a single compartment. We also state a conjecture for the corresponding equation for a third family: catenary (path) models. Finally, we introduce the identifiability degree, which is the number of parameter values that map to a generic input-output data vector. This degree was previously computed for mammillary and catenary models, and here we determine this degree for cycle models.
4,227
Title: WukaStore: Scalable, Configurable and Reliable Data Storage on Hybrid Volunteered Cloud and Desktop Systems Abstract: In this paper, we propose a hybrid storage framework WukaStore, which offers a scalable, configurable and reliable big data storage service by integrating stable storage (such as Cloud storage or durable storage utilities) and volatile storage (such as idle storage harnessed from desktop PCs over the Internet). By configuring different storage strategies, WukaStore delivers cost-effective storage ...
4,238
Title: FLAG: Faster Learning on Anchor Graph with Label Predictor Optimization Abstract: Knowledge graphs have received intensive research interests. When the labels of most nodes or datapoints are missing, anchor graph and hierarchical anchor graph models can be employed. With an anchor graph or hierarchical anchor graph, we only need to optimize the labels of the coarsest anchors, and the labels of datapoints can be inferred from these anchors in a coarse-to-fine manner. The complex...
4,242
Title: Efficient Trustworthiness Management for Malicious User Detection in Big Data Collection Abstract: Data collection in big data is an effective way to aggregate information that the collector is interested in. However, there is no assurance for the data that the users provide. Since the collector does not have the ability to check the authenticity of every piece of information, the trustworthiness of the users participating in the collection becomes important. In this paper, we design an efficient approac...
4,266
Title: Knowledge Graphs for Social Good: An Entity-Centric Search Engine for the Human Trafficking Domain Abstract: Web advertising related to Human Trafficking (HT) activity has been on the rise in recent years. Answering entity-centric questions over crawled HT Web corpora to assist investigators in the real world is an important social problem, involving many technical challenges. This paper describes a recent entity-centric knowledge graph effort that resulted in a semantic search engine to assist analysts ...
4,275
Title: Quantum information transfer using weak measurements and any non-product resource state Abstract: Information about an unknown quantum state can be encoded in weak values of projectors belonging to a complete eigenbasis. A protocol is presented that enables one party (Bob) to remotely determine the weak values corresponding to weak measurements performed by another, spatially separated party (Alice). The particular set of weak values contains complete information about the quantum state encoded on Alice's register, which enacts the role of the preselected system state in the aforementioned weak measurement. Consequently, Bob can determine the quantum state from these weak values, a procedure which can also be termed remote state determination or remote state tomography. A combination of a non-product bipartite resource state shared between the two parties and classical communication between them is necessary to bring this statistical scheme to fruition. Significantly, the information transfer of a pure quantum state of any known dimension can be effected even with resource states of low dimensionality and purity, with a single measurement setting at Bob's end.
4,343
Title: Trust Based Incentive Scheme to Allocate Big Data Tasks with Mobile Social Cloud Abstract: Recently, mobile social cloud (MSC), formed by mobile users with social ties, has been advocated to allocate tasks of big data applications instead of relying on the conventional cloud systems. However, due to the dynamic topology of networks and social features of users, how to optimally allocate tasks to mobile users based on the trust becomes a new challenge. Therefore, this paper proposes a no...
4,349
Title: On Secure Communication in Sensor Networks Under q-Composite Key Predistribution With Unreliable Links Abstract: Many applications of wireless sensor networks (WSNs) require deploying sensors in hostile environments, where an adversary may eavesdrop communications. To secure communications in WSNs, the q-composite key predistribution scheme has been proposed in the literature. In this paper, we investigate secure ...
4,477
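In q-composite key predistribution, each sensor independently draws K distinct keys from a pool of P keys, and a secure link requires at least q shared keys. The resulting link probability follows a hypergeometric count; a minimal sketch (the values of q, K, and P below are hypothetical, and the link unreliability studied in the paper is not modeled here):

```python
from math import comb

def p_secure_link(q, K, P):
    # P(two nodes share at least q keys), each holding K distinct keys
    # drawn uniformly at random from a pool of P keys.
    p_fewer = sum(comb(K, i) * comb(P - K, K - i) for i in range(q)) / comb(P, K)
    return 1.0 - p_fewer

print(p_secure_link(q=2, K=100, P=10_000))
```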
Title: Experimental entanglement of temporal order Abstract: The study of causal relations has recently been applied to the quantum realm, leading to the discovery that not all physical processes have a definite causal structure. While indefinite causal processes have previously been experimentally shown, these proofs relied on the quantum description of the experiments. Yet, the same experimental data could also be compatible with definite causal structures within different descriptions. Here, we present the first demonstration of indefinite temporal order outside of quantum formalism. We show that our experimental outcomes are incompatible with a class of generalised probabilistic theories satisfying the assumptions of locality and definite temporal order. To this end, we derive physical constraints (in the form of a Bell-like inequality) on experimental outcomes within such a class of theories. We then experimentally invalidate these theories by violating the inequality using entangled temporal order. This provides experimental evidence that there exist correlations in nature which are incompatible with the assumptions of locality and definite temporal order.
4,523
Title: MRMondrian: Scalable Multidimensional Anonymisation for Big Data Privacy Preservation Abstract: Scalable data processing platforms built on cloud computing become increasingly attractive as infrastructure for supporting big data applications. But privacy concerns are one of the major obstacles to making use of public cloud platforms. Multidimensional anonymisation, a global-recoding generalisation scheme for privacy-preserving data publishing, has been a recent focus due to its capability o...
4,550
Title: Value Functions and Optimality Conditions for Nonconvex Variational Problems with an Infinite Horizon in Banach Spaces Abstract: We investigate the value function of an infinite horizon variational problem in the infinite-dimensional setting. First, we provide an upper estimate of its Dini-Hadamard subdifferential in terms of the Clarke subdifferential of the Lipschitz continuous integrand and the Clarke normal cone to the graph of the set-valued mapping describing dynamics. Second, we derive a necessary condition for optimality in the form of an adjoint inclusion that grasps a connection between the Euler-Lagrange condition and the maximum principle. The main results are applied to the derivation of the necessary optimality condition of the spatial Ramsey growth model.
4,578
Title: Assessing urban quality: a proposal for a MCDA evaluation framework Abstract: The paper focuses on the assessment of urban design quality and sustainable urban spaces. In particular, the study concentrates on the evaluation of urban quality provided by a good design of open spaces, including green areas, walkable areas and squares. In fact, despite the advancement of research during the past two decades and empirical evidence about the relationship among quality of life, quality of open spaces and urban sustainability, there is still a lack of studies on urban quality assessment. This paper brings forward a multidimensional methodology for assessing the quality of open spaces. More precisely, the contribution of this research is the proposal of a multidimensional and multi-methodological framework for assigning a numerical score to the quality of open spaces. The Multi-Attribute Value Theory has been used for addressing the problem under investigation with the aim of defining a synthetic index for the measurement of the urban quality of open spaces on the basis of different attributes, namely (a) accessibility; (b) liveability; (c) vitality and (d) identity. The methodology has been applied to a recently renovated district in the city of Milan, Italy. The proposed multi-methodological framework provides a robust basis for running different kinds of analyses and for supporting policy and investment decisions both in the private and in the public sector.
4,590
Title: Mean-field backward stochastic differential equations and applications Abstract: In this paper we study linear mean-field backward stochastic differential equations (mean-field BSDEs) of the form $$dY(t)=-\Big[\alpha_1(t)Y(t)+\beta_1(t)Z(t)+\int_{\mathbb{R}_0}\eta_1(t,\zeta)K(t,\zeta)\,\nu(d\zeta)+\alpha_2(t)E[Y(t)]+\beta_2(t)E[Z(t)]+\int_{\mathbb{R}_0}\eta_2(t,\zeta)E[K(t,\zeta)]\,\nu(d\zeta)+\gamma(t)\Big]dt+Z(t)\,dB(t)+\int_{\mathbb{R}_0}K(t,\zeta)\,\tilde{N}(dt,d\zeta),\quad t\in[0,T],\qquad Y(T)=\xi,$$ where (Y,Z,K) is the unknown solution triplet, B is a Brownian motion, and $\tilde{N}$ is a compensated Poisson random measure, independent of B. We prove the existence and uniqueness of the solution triplet (Y,Z,K) of such systems. Then we give an explicit formula for the first component Y(t) by using partial Malliavin derivatives. To illustrate our results, we apply them to study a mean-field recursive utility optimization problem in finance.
4,616
Title: On Motzkin numbers and central trinomial coefficients Abstract: The Motzkin numbers $M_n=\sum_{k=0}^{n}\binom{n}{2k}\binom{2k}{k}/(k+1)$ (n = 0, 1, 2, ...) and the central trinomial coefficients $T_n$ (n = 0, 1, 2, ...), given by the constant term of $(1+x+x^{-1})^n$, have many combinatorial interpretations. In this paper we establish the following surprising arithmetic properties of them, with $n$ any positive integer: $$\frac{2}{n}\sum_{k=1}^{n}(2k+1)M_k^2\in\mathbb{Z},\qquad \frac{n^2(n^2-1)}{6}\ \Big|\ \sum_{k=0}^{n-1}k(k+1)(8k+9)T_kT_{k+1},$$ and also $$\sum_{k=0}^{n-1}(k+1)(k+2)(2k+3)M_k^2\,3^{n-1-k}=n(n+1)(n+2)M_nM_{n-1}.$$
4,643
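The three identities are easy to spot-check numerically from the classical expansions of $M_n$ and $T_n$. A minimal verification sketch (the loop bound 50 is an arbitrary testing range, not part of the paper's proofs):

```python
from math import comb

def motzkin(n):
    # M_n = sum_k C(n, 2k) * C(2k, k) / (k + 1)
    return sum(comb(n, 2 * k) * comb(2 * k, k) // (k + 1) for k in range(n // 2 + 1))

def trinomial(n):
    # T_n = constant term of (1 + x + 1/x)^n = sum_k C(n, 2k) * C(2k, k)
    return sum(comb(n, 2 * k) * comb(2 * k, k) for k in range(n // 2 + 1))

M = [motzkin(n) for n in range(55)]
T = [trinomial(n) for n in range(55)]

for n in range(1, 51):
    s1 = sum((2 * k + 1) * M[k] ** 2 for k in range(1, n + 1))
    assert (2 * s1) % n == 0                        # (2/n) * s1 is an integer
    if n > 1:
        s2 = sum(k * (k + 1) * (8 * k + 9) * T[k] * T[k + 1] for k in range(n))
        assert (6 * s2) % (n * n * (n * n - 1)) == 0
    s3 = sum((k + 1) * (k + 2) * (2 * k + 3) * M[k] ** 2 * 3 ** (n - 1 - k)
             for k in range(n))
    assert s3 == n * (n + 1) * (n + 2) * M[n] * M[n - 1]
print("all identities hold up to n = 50")
```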
Title: Extreme Points of Gram Spectrahedra of Binary Forms Abstract: The Gram spectrahedron $\mathrm{Gram}(f)$ of a form f with real coefficients is a compact affine-linear section of the cone of psd symmetric matrices. It parametrizes the sum of squares decompositions of f, modulo orthogonal equivalence. For f a sufficiently general positive binary form of arbitrary degree, we show that $\mathrm{Gram}(f)$ has extreme points of all ranks in the Pataki range. We also calculate the dimension of the set of rank r extreme points, for any r. Moreover, we determine the pairs of rank two extreme points for which the connecting line segment is an edge of $\mathrm{Gram}(f)$. The proof of the main result relies on a purely algebraic fact of independent interest: whenever $d, r \ge 1$ are integers with $\binom{r+1}{2} \le 2d+1$, there exists a length r sequence $f_1, \dots, f_r$ of binary forms of degree d for which the $\binom{r+1}{2}$ pairwise products $f_i f_j$, $i \le j$, are linearly independent.
4,648
Title: A mean-field matrix-analytic method for bike sharing systems under Markovian environment Abstract: This paper proposes a novel mean-field matrix-analytic method in the study of bike sharing systems, in which a Markovian environment is constructed to express the time-inhomogeneity and asymmetry of the processes by which customers rent and return bikes. To achieve effective computability of this mean-field method, this study provides a unified framework through the following three basic steps. The first one is to deal with a major challenge encountered in setting up mean-field block-structured equations in general bike sharing systems. Accordingly, we provide an effective technique to establish a necessary reference system, which is a time-inhomogeneous queue with block structures. The second one is to prove asymptotic independence (or propagation of chaos) in terms of martingale limits. Note that asymptotic independence ensures that we can construct a nonlinear quasi-birth-and-death (QBD) process, such that the stationary probability of problematic stations can be computed under a unified nonlinear QBD framework. Lastly, in the third step, we use some numerical examples to show the effectiveness and computability of the mean-field matrix-analytic method, and also to provide valuable observations on the influence of some key parameters on system performance. We are optimistic that the methodology and results given in this paper are applicable in the study of general large-scale bike sharing systems.
4,664
Title: Bounding the Inefficiency of Compromise in Opinion Formation Abstract: Social networks on the Internet have seen an enormous growth recently and play a crucial role in different aspects of today's life. They have facilitated information dissemination in ways that have been beneficial for their users but they are often used strategically in order to spread information that only serves the objectives of particular users. These properties have inspired a revision of classical opinion formation models from sociology using game-theoretic notions and tools. We follow the same modeling approach, focusing on scenarios where the opinion expressed by each user is a compromise between her internal belief and the opinions of a small number of neighbors among her social acquaintances. We formulate simple games that capture this behavior and quantify the inefficiency of equilibria using the well-known notion of the price of anarchy. Our results indicate that compromise comes at a cost that strongly depends on the neighborhood size.
4,678
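As a sketch of the kind of dynamics analyzed here, consider a Friedkin-Johnsen-style best response in which each user's expressed opinion averages her internal belief with her neighbors' current opinions (this generic variant, with made-up beliefs and graph, merely stands in for the paper's small-neighborhood compromise model):

```python
import numpy as np

def equilibrium(beliefs, neighbors, iters=1000):
    # Best-response dynamics: each user's opinion is the average of her
    # internal belief and the current opinions of her neighbors.
    z = beliefs.astype(float).copy()
    for _ in range(iters):
        z = np.array([(beliefs[i] + z[nbrs].sum()) / (1 + len(nbrs))
                      for i, nbrs in enumerate(neighbors)])
    return z

beliefs = np.array([0.0, 0.2, 0.9, 1.0])
neighbors = [np.array([1]), np.array([0, 2]), np.array([1, 3]), np.array([2])]
print(equilibrium(beliefs, neighbors))
```

The price of anarchy then compares the social cost of such a fixed point against the minimum achievable social cost.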
Title: The Tutte polynomial via lattice point counting Abstract: We recover the Tutte polynomial of a matroid, up to change of coordinates, from an Ehrhart-style polynomial counting lattice points in the Minkowski sum of its base polytope and scalings of simplices. Our polynomial has coefficients of alternating sign with a combinatorial interpretation closely tied to the Dawson partition. Our definition extends in a straightforward way to polymatroids, and in this setting our polynomial has Kálmán's internal and external activity polynomials as its univariate specialisations.
4,680
Title: Strong subadditivity lower bound and quantum channels Abstract: We derive the strong subadditivity of the von Neumann entropy with a strict lower bound, dependent on the distribution of quantum correlation in the system. We investigate the structure of states saturating the bounded subadditivity and examine its consequences for the quantum data processing inequality. The quantum data processing achieves a lower bound associated with the locally inaccessible information.
4,687
Title: An efficient scheduling algorithm using queuing system to minimize starvation of non-real-time secondary users in Cognitive Radio Network Abstract: The Cognitive Radio Network is an emerging wireless network designed for efficient spectrum utilization. It enables unlicensed users to access the unused portion of the licensed spectrum opportunistically. The major challenge is to share the unused portion of the spectrum efficiently among the unlicensed users. Channel aggregation is an interesting approach by which channels can be grouped and allocated to unlicensed users. A combined Round Robin Priority (RRP) scheduling algorithm is proposed by which the starvation of low-priority secondary users (SUs) is minimized and the QoS parameters improve compared to the traditional static and dynamic channel aggregation methods. A Fuzzy Inference System is used to evaluate the priority of each SU. The QoS parameters used for comparison are spectrum utilization, capacity of the secondary network, delay, blocking probability, and forced termination probability.
4,725
Title: Joint Estimation and Inference for Data Integration Problems based on Multiple Multi-layered Gaussian Graphical Models Abstract: The rapid development of high-throughput technologies has enabled the generation of data from biological or disease processes that span multiple layers, like genomic, proteomic or metabolomic data, and further pertain to multiple sources, like disease subtypes or experimental conditions. In this work, we propose a general statistical framework based on Gaussian graphical models for horizontal (i.e. across conditions or subtypes) and vertical (i.e. across different layers containing data on molecular compartments) integration of information in such datasets. We start with decomposing the multi-layer problem into a series of two-layer problems. For each two-layer problem, we model the outcomes at a node in the lower layer as dependent on those of other nodes in that layer, as well as all nodes in the upper layer. We use a combination of neighborhood selection and group-penalized regression to obtain sparse estimates of all model parameters. Following this, we develop a debiasing technique and asymptotic distributions of inter-layer directed edge weights that utilize already computed neighborhood selection coefficients for nodes in the upper layer. Subsequently, we establish global and simultaneous testing procedures for these edge weights. Performance of the proposed methodology is evaluated on synthetic and real data.
4,726
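The neighborhood-selection ingredient is the easiest piece to sketch in isolation: each node in the lower layer is regressed on the remaining predictors with an l1 penalty. A single-node illustration with scikit-learn on synthetic data (the penalty level is an arbitrary choice, and the paper's group-penalized, multi-layer estimator is substantially richer):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))          # 200 samples, 10 nodes
X[:, 0] += 0.8 * X[:, 1] - 0.6 * X[:, 2]    # node 0 depends on nodes 1 and 2

# Neighborhood selection: regress node 0 on all the other nodes.
fit = Lasso(alpha=0.1).fit(X[:, 1:], X[:, 0])
support = np.flatnonzero(np.abs(fit.coef_) > 1e-8) + 1
print("estimated neighborhood of node 0:", support)
```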
Title: Adequate predimension inequalities in differential fields Abstract: In this paper we study predimension inequalities in differential fields and define what it means for such an inequality to be adequate. Adequacy was informally introduced by Zilber, and here we give a precise definition in a quite general context. We also discuss the connection of this problem to definability of derivations in the reducts of differentially closed fields. The Ax-Schanuel inequality for the exponential differential equation (proved by Ax) and its analogue for the differential equation of the j-function (established by Pila and Tsimerman) are our main examples of predimensions. We carry out a Hrushovski construction with the latter predimension and obtain a natural candidate for the first-order theory of the differential equation of the j-function. It is analogous to Kirby's axiomatisation of the theory of the exponential differential equation (which in turn is based on the axioms of Zilber's pseudo-exponentiation), although there are many significant differences. In joint work with Sebastian Eterović and Jonathan Kirby we have recently proven that the axiomatisation obtained in this paper is indeed an axiomatisation of the theory of the differential equation of the j-function, that is, the Ax-Schanuel inequality for the j-function is adequate.
4,839
Title: Jumps in speeds of hereditary properties in finite relational languages Abstract: Given a finite relational language L, a hereditary L-property is a class H of finite L-structures closed under isomorphism and substructure. The speed of H is the function which sends an integer n≥1 to the number of distinct elements in H with underlying set {1,...,n}. In this paper we give a description of many new jumps in the possible speeds of a hereditary L-property, where L is any finite relational language. In particular, we characterize the jumps in the polynomial and factorial ranges, and show they are essentially the same as in the case of graphs. The results in the factorial range are new for all examples requiring a language of arity greater than two, including the setting of hereditary properties of k-uniform hypergraphs for k>2. Further, adapting an example of Balogh, Bollobás, and Weinreich, we show that for all k≥2, there are hereditary properties of k-uniform hypergraphs whose speeds oscillate between functions near the upper and lower bounds of the penultimate range, ruling out many natural functions as jumps in that range. Our theorems about the factorial range use model theoretic tools related to the notion of mutual algebraicity.
4,886
Title: Data envelopment analysis based multi-objective optimization model for evaluation and selection of software components under optimal redundancy Abstract: Software developers face the challenge of developing in-time, low cost, high profit and high-quality software to meet competitive requirements and user demands. The software components for the same can be selected either from the available commercial-off-the-shelf repository or developed in-house. In this paper, we propose a data envelopment analysis (DEA) based nonlinear multi-objective optimization model for selecting software components in the presence of optimal redundancy to ensure software reliability. The proposed optimization model integrates both build and/or buy decisions for selection of components. We use the DEA technique for evaluating the fitness of software components based upon multiple inputs and outputs provided by various members of the decision group. The overall efficiency score of each software component is obtained from the aggregated information. The proposed optimization model minimizes the total cost of the software system and maximizes the total value of purchasing, using constraints corresponding to compatibility of selected components, reliability, execution time, and delivery time of the software system. It also provides information on the testing efforts needed for the in-house developed components. A real-world case study of modular software development is discussed to illustrate the efficiency of the proposed optimization model. To the best of our knowledge, there exists no previous study on an integrated optimization model for the software component selection problem involving build and/or buy decisions under optimal redundancy.
4,916
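The DEA building block can be sketched separately from the full multi-objective model. Below is a standard input-oriented CCR efficiency program solved with scipy (the component data are invented, and the paper's group-decision DEA setup may differ in its orientation and inputs):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    # Input-oriented CCR: min theta s.t. sum_j lam_j x_j <= theta * x_j0,
    # sum_j lam_j y_j >= y_j0, lam >= 0.  X: inputs (m x n), Y: outputs (s x n).
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

X = np.array([[2.0, 3.0, 4.0, 5.0],   # 2 inputs for 4 candidate components
              [1.0, 2.0, 1.5, 3.0]])
Y = np.array([[1.0, 2.0, 2.0, 2.5]])  # 1 output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(4)])
```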
Title: The evolution of the structure of ABC-minimal trees Abstract: The atom-bond connectivity (ABC) index is a degree-based molecular descriptor that has found diverse chemical applications. Characterizing trees with minimum ABC-index has remained an elusive open problem even after serious attempts, and is considered by some as one of the most intriguing open problems in mathematical chemistry. In this paper, we describe the exact structure of the extremal trees with sufficiently many vertices and we show how their structure evolves when the number of vertices grows. An interesting fact is that their radius is at most 5 and that all vertices except for one have degree at most 54. In fact, all but at most O(1) vertices have degree 1, 2, 4, or 53. Let $\gamma_n=\min\{ABC(T) : T \text{ is a tree of order } n\}$. It is shown that $\gamma_n = c\,n + O(1) \approx 0.67737178\,n + O(1)$ for an explicit algebraic constant $c$.
4,932
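For reference, the ABC index itself is straightforward to compute from vertex degrees, since each edge uv contributes sqrt((d_u + d_v - 2)/(d_u * d_v)). A minimal sketch (finding the minimizing trees behind gamma_n is, of course, the hard part of the paper):

```python
from math import sqrt

def abc_index(edges):
    # ABC(G) = sum over edges uv of sqrt((d_u + d_v - 2) / (d_u * d_v))
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(sqrt((deg[u] + deg[v] - 2) / (deg[u] * deg[v])) for u, v in edges)

path = [(i, i + 1) for i in range(9)]     # path on 10 vertices
star = [(0, i) for i in range(1, 10)]     # star on 10 vertices
print(abc_index(path), abc_index(star))   # the path scores lower here
```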
Title: N-detachable pairs in 3-connected matroids III: The theorem Abstract: Let M be a 3-connected matroid, and let N be a 3-connected minor of M. A pair $\{x_1, x_2\} \subseteq E(M)$ is N-detachable if one of the matroids $M/x_1/x_2$ or $M \backslash x_1 \backslash x_2$ is 3-connected and has an N-minor. This is the third and final paper in a series where we prove that if $|E(M)| - |E(N)| \ge 10$, then either M has an N-detachable pair after possibly performing a single Δ-Y or Y-Δ exchange, or M is essentially N with a spike attached. Moreover, we describe the additional structures that arise if we require only that $|E(M)| - |E(N)| \ge 5$.
4,949
Title: Sufficient Conditions for the Global Rigidity of Periodic Graphs Abstract: Tanigawa (2016) showed that vertex-redundant rigidity of a graph implies its global rigidity in arbitrary dimension. We extend this result to periodic frameworks under fixed lattice representations. That is, we show that if a generic periodic framework is vertex-redundantly rigid, in the sense that the deletion of a single vertex orbit under the periodicity results in a periodically rigid framework, then it is also periodically globally rigid. Our proof is similar to the one of Tanigawa, but there are some added difficulties. First, it is not known whether periodic global rigidity is a generic property in dimension $d>2$. We work around this issue by using slight modifications of recent results of Kaszanitzky et al. (2021). Secondly, while the rigidity of finite frameworks in $\mathbb{R}^d$ on at most d vertices obviously implies their global rigidity, it is non-trivial to prove a similar result for periodic frameworks. This is accomplished by extending a result of Bezdek and Connelly (2002) on the existence of a continuous motion between two equivalent d-dimensional realisations of a single graph in $\mathbb{R}^{2d}$ to periodic frameworks. As an application of our result, we give a necessary and sufficient condition for the global rigidity of generic periodic body-bar frameworks in arbitrary dimension. This provides a periodic counterpart to a result of Connelly et al. (2013) regarding the global rigidity of generic finite body-bar frameworks.
4,967
Title: Nonnegative Polynomials and Circuit Polynomials Abstract: The concept of sums of nonnegative circuit (SONC) polynomials was recently introduced as a new certificate of nonnegativity especially for sparse polynomials. In this paper, we explore the relationship between nonnegative polynomials and SONCs. As a first result, we provide sufficient conditions for nonnegative polynomials with general Newton polytopes to be a SONC, which generalizes the previous result on nonnegative polynomials with simplex Newton polytopes. Second, we prove that every SONC admits a SONC decomposition without cancellation. In other words, SONC decompositions preserve sparsity of nonnegative polynomials, which is dramatically different from the classical sum of squares decompositions and is a key property to design efficient algorithms for sparse polynomial optimization based on SONC decompositions.
4,968
Title: TENSOR-BASED NUMERICAL METHOD FOR STOCHASTIC HOMOGENIZATION Abstract: This paper addresses the complexity reduction of stochastic homogenization of a class of random materials for a stationary diffusion equation. A cost-efficient approximation of the correctors is obtained using a method designed to exploit quasi-periodicity. Accuracy and cost reduction are investigated for local perturbations or small transformations of periodic materials as well as for materials with no periodicity but a mesoscopic structure, for which the limitations of the method are shown. Finally, for materials outside the scope of this method, we propose to use the approximation of homogenized quantities as control variates for variance reduction of a more accurate and costly Monte Carlo estimator (using a multifidelity Monte Carlo method). The resulting cost reduction is illustrated in a numerical experiment and compared with a control variate method from weakly stochastic homogenization. The limits of this variance reduction technique are tested on materials without periodicity or mesoscopic structure.
5,039
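The control-variate step mentioned at the end is generic and compact: a cheap correlated surrogate with known mean is used to cancel variance in the expensive estimator. A toy sketch (the integrands here are stand-ins; in the paper the surrogate is the homogenized approximation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)

y_hi = np.sin(x) + 0.1 * x**2        # "expensive" quantity of interest
y_lo = x**2                          # cheap surrogate with known mean E[x^2] = 1

alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo)   # optimal CV coefficient
est_plain = y_hi.mean()
est_cv = y_hi.mean() - alpha * (y_lo.mean() - 1.0)
print(est_plain, est_cv)             # the CV estimator has lower variance
```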
Title: Advanced Cell-DEVS modeling applications: a legacy of Norbert Giambiasi Abstract: We describe a number of modeling applications of advanced Cell-DEVS models, a modeling formalism Norbert Giambiasi and I defined in the late 1990s. We discuss improved versions of these models built using the CD++ toolkit, which was developed in order to study, model, and simulate such cellular models. These models remove some limitations of standard cellular models, allowing each cell to use multiple state variables and multiple ports for inter-cell communications. We show the application of the formalism in three different areas of science and engineering: social models, pedestrian analysis and occupancy in buildings, and virus spreading in computer networks.
5,066
Title: A combined GDM–ELLAM–MMOC scheme for advection dominated PDEs Abstract: We propose a combination of the Eulerian Lagrangian Localised Adjoint Method (ELLAM) and the Modified Method of Characteristics (MMOC) for time-dependent advection-dominated PDEs. The combined scheme, the so-called GEM scheme, takes advantage of both the ELLAM scheme (mass conservation) and the MMOC scheme (easier computations), while at the same time avoiding their disadvantages (respectively, harder tracking around the injection regions and loss of mass). We present a precise analysis of mass conservation properties for these three schemes, and after achieving global mass balance, an adjustment yielding local volume conservation is then proposed. Numerical results for all three schemes are then compared, illustrating the advantages of the GEM scheme. A convergence result for the MMOC scheme, motivated by our previous work (Cheng et al., 2018), is provided, which can be extended to obtain the convergence of the GEM scheme.
5,104
Title: Adaptive Euler methods for stochastic systems with non-globally Lipschitz coefficients Abstract: We present strongly convergent explicit and semi-implicit adaptive numerical schemes for systems of semi-linear stochastic differential equations (SDEs) where both the drift and diffusion are not globally Lipschitz continuous. Numerical instability may arise either from the stiffness of the linear operator or from the perturbation of the nonlinear drift under discretization, or both. Typical applications arise from the space discretization of an SPDE, stochastic volatility models in finance, or certain ecological models. Under conditions that include monotonicity, we prove that a timestepping strategy which adapts the stepsize based on the drift alone is sufficient to control growth and to obtain strong convergence with polynomial order. The order of strong convergence of our scheme is (1 − ε)/2, for ε ∈ (0,1), where ε becomes arbitrarily small as the number of finite moments available for solutions of the SDE increases. Numerically, we compare the adaptive semi-implicit method to a fully drift-implicit method and to three other explicit methods. Our numerical results show that overall the adaptive semi-implicit method is robust, efficient, and well suited as a general purpose solver.
5,208
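A minimal sketch of a drift-based adaptive step for a scalar SDE with non-globally Lipschitz drift, dX = (X - X^3)dt + sigma dW (the stepsize rule h = hmax/(1 + |drift|) is one simple choice in this spirit, not necessarily the authors' exact strategy):

```python
import numpy as np

rng = np.random.default_rng(2)

def adaptive_em(x0, T, hmax, sigma=0.5):
    # Explicit Euler-Maruyama with a timestep that shrinks where the
    # cubic drift is large, to control the growth of the discretization.
    t, x = 0.0, x0
    path = [(t, x)]
    while t < T:
        drift = x - x**3
        h = min(hmax / (1.0 + abs(drift)), T - t)
        x = x + drift * h + sigma * np.sqrt(h) * rng.standard_normal()
        t += h
        path.append((t, x))
    return path

path = adaptive_em(x0=2.0, T=5.0, hmax=0.1)
print(len(path), path[-1])
```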
Title: Generalized discrete event system specification: a state-of-the-art study Abstract: The Generalized Discrete Event System specification (G-DEVS) language was introduced by Norbert Giambiasi in the 1990s. This paper first examines the specification of G-DEVS and gives a historical view of N. Giambiasi's work that contributed to this concept. The paper particularly focuses on the extension of G-DEVS to distributed simulation. An example of a G-DEVS model of a ski chairlift system with chairs and skiers is proposed to show the accuracy gained by using G-DEVS instead of other classical discrete event modeling formalisms. Then, the paper presents how G-DEVS has been extended for interoperability with other components in the context of supply chain modeling and simulation, coping with the possibility to compose different model formats at simulation time. Next, the focus is on a G-DEVS editor tool and its extensions: LSIS_DME. Distributed simulation and the high-level architecture standard were used to support the interoperability of various models and simulators.
5,213
Title: On Simple Connectivity of Random 2-Complexes Abstract: The fundamental group of the 2-dimensional Linial–Meshulam random simplicial complex $Y_2(n,p)$ was first studied by Babson, Hoffman, and Kahle. They proved that the threshold probability for simple connectivity of $Y_2(n,p)$ is about $p \approx n^{-1/2}$. In this paper, we show that this threshold probability is at most $p \le (\gamma n)^{-1/2}$, where $\gamma = 4^4/3^3$, and conjecture that this threshold is sharp. In fact, we show that $p = (\gamma n)^{-1/2}$ is a sharp threshold probability for the stronger property that every cycle of length 3 is the boundary of a subcomplex of $Y_2(n,p)$ that is homeomorphic to a disk. Our proof uses the Poisson paradigm, and relies on a classical result of Tutte on the enumeration of planar triangulations.
5,233
Title: The legacy of Norbert Giambiasi to the University of Corsica: from behavioral testing to DEVS fault simulation Abstract: The paper delivers a tribute to Professor Giambiasi by pointing out how the work and ideas stemming from the research performed under his umbrella in the 1980s on behavioral testing has been an inspiration for the research around DEVS formalism at the University of Corsica in the late 1990s and the beginning of the new century. The paper retraces how Giambiasi's heritage allowed researchers at the University of Corsica to go from behavioral and hierarchical testing to DEVS fault simulation.
5,268
Title: A comparison analysis for credit scoring using bagging ensembles Abstract: In this paper, we present a hybrid approach for credit scoring, and the classification performance of this approach is compared with that of 4 base learners in machine learning. A large credit default swap dataset covering the period from 2006 to 2016 is used to build the classifiers and test their performance. The results from this empirical study indicate that the bagging ensemble method can substantially improve individual base learners such as the decision tree, the multilayer perceptron, and k-nearest neighbours. The performance of the support vector machine does not change after applying the bagging ensemble. The overall results demonstrate that k-nearest neighbours is more suitable than any other method when dealing with large unbalanced datasets in credit scoring.
5,297
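This kind of comparison is easy to reproduce in outline with scikit-learn; the synthetic imbalanced dataset below merely stands in for the credit default swap data, and the hyperparameters are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9], random_state=0)   # ~10% defaults
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for base in (DecisionTreeClassifier(random_state=0), KNeighborsClassifier()):
    single = base.fit(X_tr, y_tr)
    bagged = BaggingClassifier(base, n_estimators=100,
                               random_state=0).fit(X_tr, y_tr)
    print(type(base).__name__,
          round(roc_auc_score(y_te, single.predict_proba(X_te)[:, 1]), 3),
          round(roc_auc_score(y_te, bagged.predict_proba(X_te)[:, 1]), 3))
```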
Title: On the maximum number of odd cycles in graphs without smaller odd cycles Abstract: We prove that for each odd integer $k \geq 7$, every graph on $n$ vertices without odd cycles of length less than $k$ contains at most $(n/k)^k$ cycles of length $k$. This generalizes the previous results on the maximum number of pentagons in triangle-free graphs, conjectured by Erdős in 1984, and asymptotically determines the generalized Turán number $\mathrm{ex}(n, C_k, C_{k-2})$ for odd $k$. In contrast to the previous results on the pentagon case, our proof is not computer-assisted.
5,379
Title: An isoperimetric inequality for Hamming balls and local expansion in hypercubes Abstract: We prove a vertex isoperimetric inequality for the n-dimensional Hamming ball $B_n(R)$ of radius R. The isoperimetric inequality is sharp up to a constant factor for sets that are comparable to $B_n(R)$ in size. A key step in the proof is a local expansion phenomenon in hypercubes.
5,441
Title: Why we should use Min Max DEVS for modeling and simulation of digital circuits Abstract: The delay is a very important element in modeling hardware behavior, and is realized in many hardware description languages such as ADLIB-SABLE, Verilog, and VHDL. The state of the art on hardware delay identifies four classes. In the first class, mean values are used as a precise delay element in the simulation; this is the case in VHDL (VHSIC (very high speed integrated circuit) Hardware Description Language), where a single value is utilized to characterize the transport delay. In the second class, the delay is represented by a [min, max] interval, meaning that the delay value is not precisely known and every value in the interval is a possible value for the actual delay. In the third class, a delay is expressed in the form of a stochastic distribution. Fuzzy models of delay constitute the last class. In reality, it is very difficult, if not impossible, to obtain a precise value of the delay; there are many reasons for that: temperature, voltage, variation in the manufacturing process, and other environment parameters. The Min Max DEVS formalism allows an efficient design of the min max delay by proposing a definition of the lifetime function based on time intervals. Moreover, its simulation semantics allows the simulation of Min Max DEVS models with only one replication, allowing us to conclude whether the min max delay is too large or whether exact simulations cannot be obtained. In this paper, we propose to highlight the Min Max DEVS formalism through examples from digital circuits, after having recalled its basic definitions and its simulation semantics. Then, we compare the simulation results obtained with those provided by Verilog, the well-known tool in the field of digital circuits, using the same examples.
5,537
Title: Productivity measurement of industrial sector in China regarding air pollution Abstract: As an important sector of the national economy, the industrial sector accounts for 33.4% of gross domestic product while consuming 70% of energy and causing serious air pollution in China. It is meaningful to measure the productivity of the industrial sector in China while taking air pollution into consideration. The range-adjusted measure of nonradial data envelopment analysis, with natural disposability and managerial disposability, is adopted in order to measure the productivity of the provincial industrial sector in China during 2011-2014. The results show that the unified efficiency under managerial disposability is lower than the unified efficiency under natural disposability and the unified efficiency under natural and managerial disposability, which means that management improvement and technology innovation should receive more attention from the government. With the modernization of economic restructuring, there is no clear trend in unified efficiency under natural disposability. Eastern China has the highest unified efficiency under managerial disposability, whereas unified efficiency under natural and managerial disposability is improving in eastern China and western China in the period 2013-2014. The results also show that more than 20 provinces and nearly half of the provinces are suitable for industrial pollution control investment and research and development investment, respectively. On the basis of the results of the truncated regression model, we can identify the influencing factors of unified efficiency. Based on the above results, suggestions are proposed in order to improve productivity.
5,538
Title: On Tutte polynomial expansion formulas in perspectives of matroids and oriented matroids Abstract: We introduce the active partition of the ground set of an oriented matroid perspective (or morphism, or quotient, or strong map) on a linearly ordered ground set. The reorientations obtained by arbitrarily reorienting parts of the active partition share the same active partition. This yields an equivalence relation for the set of reorientations of an oriented matroid perspective, whose classes are enumerated by coefficients of the Tutte polynomial, and a remarkable partition of the set of reorientations into Boolean lattices, from which we get a short direct proof of a 4-variable expansion formula for the Tutte polynomial in terms of orientation activities. This formula was given in the last unpublished preprint by Michel Las Vergnas; the above equivalence relation and notion of active partition generalize a former construction in oriented matroids by Michel Las Vergnas and the author; and the possibility of such a proof technique in perspectives was announced in the aforementioned preprint. We also briefly highlight how the 5-variable expansion of the Tutte polynomial in terms of subset activities in matroid perspectives comes in a similar way from the known partition of the power set of the ground set into Boolean lattices related to subset activities (and we complete the proof with a property which was missing in the literature). In particular, the paper applies to matroids and oriented matroids on a linearly ordered ground set, and applies to graph and directed graph homomorphisms on a linearly ordered edge-set.
5,548
Title: Bounds for Polynomials on Algebraic Numbers and Application to Curve Topology Abstract: Let $P \in \mathbb{Z}[X, Y]$ be a given square-free polynomial of total degree d with integer coefficients of bitsize less than $\tau$, and let $V_{\mathbb{R}}(P) := \{(x,y) \in \mathbb{R}^2 \mid P(x,y) = 0\}$ be the real planar algebraic curve implicitly defined as the vanishing set of P. We give a deterministic algorithm to compute the topology of $V_{\mathbb{R}}(P)$ in terms of a simple straight-line planar graph $\mathcal{G}$ that is isotopic to $V_{\mathbb{R}}(P)$. The upper bound on the bit complexity of our algorithm is in $\tilde{O}(d^5 \tau + d^6)$ (the expression "the complexity is in $\tilde{O}(f(d,\tau))$", with f a polynomial in $d, \tau$, abbreviates "there exists a positive integer c such that the complexity is in $O((\log d \log \tau)^c f(d,\tau))$"), which matches the current record bound for the problem of computing the topology of a planar algebraic curve. However, compared to existing algorithms with comparable complexity, our method does not consider any change of coordinates, and more importantly the returned simple planar graph $\mathcal{G}$ yields the cylindrical algebraic decomposition information of the curve in the original coordinates. Our result is based on two main ingredients: First, we derive amortized quantitative bounds on the roots of polynomials with algebraic coefficients, as well as adaptive methods for computing the roots of bivariate polynomial systems that actually exploit this amortization. The results we obtain are more general than the previous literature. Our second ingredient is a novel approach for the computation of the local topology of the curve in a neighborhood of all singular points.
5,551
Title: Distributed bare-bones communication in wireless networks Abstract: We consider wireless networks operating under the SINR model of interference. Nodes have limited individual knowledge and capabilities: they do not know their positions in a coordinate system in the plane, further they do not know their neighborhoods, nor do they know the size of the network n, and finally they cannot sense collisions resulting from simultaneous transmissions by at least two neighbors. Each node is equipped with a unique integer name, where N is an upper bound on the range of names. We refer to a subnetwork induced by a diameter-preserving dominating set of nodes as a backbone. Let Δ denote the maximum number of nodes that can successfully receive a message transmitted by a node when no other nodes transmit concurrently. We study distributed algorithms for communication problems in three settings. In the single-node-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm that wakes up all nodes in O(n log² N) rounds with high probability. For the synchronized-start case, when all nodes start an execution simultaneously, we give a randomized algorithm computing a backbone in O(Δ log⁷ N) rounds with high probability. In the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone in time O(n log² N + Δ log⁷ N) with high probability.
5,631
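For concreteness, successful reception under the SINR model is a simple threshold test; the path-loss exponent, decoding threshold, and noise power below are hypothetical:

```python
import numpy as np

def sinr_ok(pos, tx, rx, senders, beta=2.0, noise=1e-6, alpha=3.0, power=1.0):
    # rx decodes tx iff signal / (noise + interference from the other
    # concurrent transmitters) is at least the threshold beta.
    d = lambda a, b: np.linalg.norm(np.asarray(pos[a]) - np.asarray(pos[b]))
    signal = power / d(tx, rx) ** alpha
    interference = sum(power / d(j, rx) ** alpha for j in senders if j != tx)
    return signal / (noise + interference) >= beta

pos = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (4.0, 4.0)}
print(sinr_ok(pos, tx=0, rx=1, senders=[0, 2]))   # node 2 interferes at node 1
```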
Title: MARGINAL AND DEPENDENCE UNCERTAINTY: BOUNDS, OPTIMAL TRANSPORT, AND SHARPNESS Abstract: Motivated by applications in model-free finance and quantitative risk management, we consider Fréchet classes of multivariate distribution functions where additional information on the joint distribution is assumed, while uncertainty in the marginals is also possible. We derive optimal transport duality results for these Fréchet classes that extend previous results in the related literature. These proofs are based on representation results for convex increasing functionals and the explicit computation of the conjugates. We show that the dual transport problem admits an explicit solution for the function $f = \mathbf{1}_B$, where B is a rectangular subset of $\mathbb{R}^d$, and provide an intuitive geometric interpretation of this result. The improved Fréchet–Hoeffding bounds provide ad hoc bounds for these Fréchet classes. We show that the improved Fréchet–Hoeffding bounds are pointwise sharp for these classes in the presence of uncertainty in the marginals, while a counterexample yields that they are not pointwise sharp in the absence of uncertainty in the marginals, even in dimension 2. The latter result sheds new light on the improved Fréchet–Hoeffding bounds, since Tankov [J. Appl. Probab., 48 (2011), pp. 389-403] showed that, under certain conditions, these bounds are sharp in dimension 2.
5,753
Title: Variations of the Catalan numbers from some nonassociative binary operations Abstract: We investigate certain nonassociative binary operations that satisfy a four-parameter generalization of the associative law. From this we obtain variations of the ubiquitous Catalan numbers and connections to many interesting combinatorial objects such as binary trees, plane trees, lattice paths, and permutations.
5,763
Title: Asymptotically good edge correspondence colourings Abstract: We prove that every simple graph with maximum degree $\Delta$ has edge correspondence number $\Delta + o(\Delta)$.
5,903
Title: Selecting an agricultural technology package based on the flexible and interactive tradeoff method Abstract: The aim of this paper is to solve an agricultural technology package selection problem by considering multiple dimensions which influence a maize producer's preferences. The decision-making process is aided by a new multicriteria method for eliciting scale constants in additive models: flexible and interactive tradeoff (FITradeoff). This method works with partial information obtained from the decision maker (DM), and thus reduces the time the DM has to spend on eliciting his/her preferences, as he/she may avoid answering difficult questions. The decision-making process makes use of a decision support system (DSS), in which the DM interactively gives preference statements in a structured manner. The DSS gives flexibility to the DM, in such a way that he/she gives as much information as he/she is willing to. Graphical visualization is provided at each step in order to help the DM's analyses. Throughout the description of an application, some insights are provided, including a discussion of the advantages and features of the FITradeoff method.
5,913
Title: Software component evaluation and selection using TOPSIS and fuzzy interactive approach under multiple applications development Abstract: In this paper, a two-phase approach is proposed for a decision-making situation that involves optimal software component evaluation and selection for designing a component-based modular software system with multiple applications. In the first phase, components are evaluated using the technique for order preference by similarity to ideal solution. In the second phase, a non-linear multi-objective optimization model is developed that facilitates the decision whether "to buy commercial-off-the-shelf components" or "to build in-house components" so that the total score of alternative components is maximized while the overall cost and delivery time of the system are minimized. Many critical parameters, such as reliability of the various applications, reusability, and compatibility of the software components, are considered simultaneously in the proposed optimization model. To determine a preferred compromise solution for the multi-objective optimization problem, a fuzzy interactive approach is used. Numerical illustrations based on a small-scale case study are presented to demonstrate the usefulness of the proposed optimization model for optimal "build or buy" decisions in real-world applications.
5,996
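The first-phase ranking method (TOPSIS) is short enough to sketch end to end; the component scores, weights, and criterion directions below are invented for illustration:

```python
import numpy as np

def topsis(M, weights, benefit):
    # Rank alternatives (rows of M) by relative closeness to the ideal solution.
    R = M / np.sqrt((M ** 2).sum(axis=0))         # vector normalisation
    V = R * weights                               # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                # closeness coefficient in [0, 1]

M = np.array([[0.90, 0.7, 120.0],    # criteria: reliability, reusability, cost
              [0.80, 0.9, 100.0],
              [0.95, 0.6, 150.0]])
cc = topsis(M, weights=np.array([0.5, 0.3, 0.2]),
            benefit=np.array([True, True, False]))
print(cc, cc.argsort()[::-1])        # scores and best-first ranking
```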
Title: An incentive-based co-operation motivating pseudonym changing strategy for privacy preservation in mixed zones in vehicular networks Abstract: Location privacy of vehicles is considered potential information that needs to be protected with maximum care in the vehicular network. This location privacy depends on the pseudonym strategy scheme adopted in the mix zone of vehicular networks. The maximum degree of privacy is required in the mix zone because the decentralization and the lower density of vehicles there are most likely to weaken location privacy. However, location privacy in the mix zone depends on the number of cooperative vehicles interacting within the spatio-temporal environment. In this paper, an incentive-based co-operation motivating pseudonym changing strategy for privacy preservation in mix zones is proposed for resolving the issue of pseudonym change and the influence of cooperative vehicle density. The proposed mechanism uses a one-way hash function and an improved pseudonym scheme for estimating vehicular incentives in order to facilitate privacy protection. Extensive investigation confirms that the proposed mechanism is excellent in terms of location privacy under different distances, numbers of vehicles, and numbers of mix zones, compared to the pseudonym changing privacy preservation schemes previously proposed for addressing privacy in mix zones of vehicular networks.
6,025
Title: Some remarks on combinatorial wall-crossing Abstract: We establish a new simple explicit description of the Young diagram at each step of the combinatorial wall-crossing algorithm for the rational Cherednik algebra applied to the trivial representation. In this way we provide a short and explicit proof of a key theorem of P. Dimakis and G. Yue. We also present a conjecture on combinatorial wall-crossing which was found using computer experiments.
6,054
Title: A CLASS OF STOCHASTIC GAMES AND MOVING FREE BOUNDARY PROBLEMS Abstract: In this paper we propose and analyze a class of N-player stochastic games that include finite fuel stochastic games as a special case. We first derive sufficient conditions for the Nash equilibrium (NE) in the form of a verification theorem. The associated quasi-variational inequalities include an essential game component regarding the interactions among players, which may be interpreted as the analytical representation of the conditional optimality for NEs. The derivation of NEs involves solving first a multidimensional free boundary problem and then a Skorokhod problem. Finally, we present an intriguing connection between these NE strategies and controlled rank-dependent stochastic differential equations.
6,057