Title: A hybrid optimization model for resource allocation in OFDM-based cognitive radio system Abstract: The cognitive radio (CR) system is considered a key technology for future mobile computing and wireless communication. The main challenge in a CR system, however, is allocating resources so that transmission power is minimized while the transmission rate is enhanced. This paper proposes a hybrid method, combining Grey Wolf Optimization (GWO) and Group Search Optimization (GSO), to allocate resources in the CR system optimally. It simulates the GWOGS-based CR system, relying on orthogonal frequency division multiplexing (OFDM), to allocate the resources optimally. After the simulation, it compares the performance of GWOGS against conventional algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Firefly (FF), GSO, GWO, and SOAP. Moreover, it provides a comparative analysis in terms of convergence, ranking, cost, and the impact of orthogonality, and reports a statistical analysis of all benchmark algorithms to attain the optimum result. The experimental results affirm the competitive performance of the proposed method against the conventional algorithms.
6,070
Title: More Bisections by Hyperplane Arrangements Abstract: A union of an arrangement of affine hyperplanes $$\mathcal {H}$$ in $$\mathbb {R}^d$$ is the real algebraic variety associated to the principal ideal generated by the polynomial $$p_{\mathcal {H}}$$ given as the product of the degree one polynomials which define the hyperplanes of the arrangement. A finite Borel measure on $$\mathbb {R}^d$$ is bisected by the arrangement of affine hyperplanes $$\mathcal {H}$$ if the measure on the “non-negative side” of the arrangement $$\{x\in \mathbb {R}^d:p_{\mathcal {H}}(x)\ge 0\}$$ is the same as the measure on the “non-positive” side of the arrangement $$\{x\in \mathbb {R}^d : p_{\mathcal {H}}(x)\le 0\}$$ . In 2017 Barba, Pilz & Schnider considered special, as well as modified cases of the following measure partition hypothesis: For a given collection of j finite Borel measures on $$\mathbb {R}^d$$ there exists a k-element affine hyperplane arrangement that bisects each of the measures into equal halves simultaneously. They showed that there are simultaneous bisections in the case when $$d=k=2$$ and $$j=4$$ . Furthermore, they conjectured that every collection of j measures on $$\mathbb {R}^d$$ can be simultaneously bisected with a k-element affine hyperplane arrangement provided that $$d\ge \lceil j/k\rceil $$ . The conjecture was confirmed in the case when $$d\ge j/k=2^a$$ by Hubard and Karasev in 2018. In this paper we give a different proof of the Hubard and Karasev result using the framework of Blagojević, Frick, Haase & Ziegler (2016), based on the equivariant relative obstruction theory of tom Dieck, which was developed for handling the Grünbaum–Hadwiger–Ramos hyperplane measure partition problem. Furthermore, this approach allowed us to prove even more, that for every collection of $$2^a(2h+1)+\ell $$ measures on  $$\mathbb {R}^{2^a+\ell }$$ , where $$1\le \ell \le 2^a-1$$ , there exists a $$(2h+1)$$ -element affine hyperplane arrangement that bisects all of them simultaneously. Our result was extended to the case of spherical arrangements and reproved by alternative methods in a beautiful way by Crabb [8].
6,080
Title: Optimal Bayesian design for model discrimination via classification Abstract: Performing optimal Bayesian design for discriminating between competing models is computationally intensive as it involves estimating posterior model probabilities for thousands of simulated data sets. This issue is compounded further when the likelihood functions for the rival models are computationally expensive. A new approach using supervised classification methods is developed to perform Bayesian optimal model discrimination design. This approach requires considerably fewer simulations from the candidate models than previous approaches using approximate Bayesian computation. Further, it is easy to assess the performance of the optimal design through the misclassification error rate. The approach is particularly useful in the presence of models with intractable likelihoods but can also provide computational advantages when the likelihoods are manageable.
6,102
Title: MAFONN-EP: A Minimal Angular Feature Oriented Neural Network based Emotion Prediction system in image processing Abstract: Facial emotion recognition techniques have recently become important and critical because a wide array of application domains use facial emotion in their workflows and analytics. Reduced recognition rate, inefficient computation, and increased time consumption are the major drawbacks of these techniques. To overcome these issues, this paper develops a new facial emotion recognition technique named Minimal Angular Feature Oriented Neural Network based Emotion Prediction (MAFONN-EP). Initially, the input video sequence is split into image frames, which are preprocessed by removing noise with the Weighted Median Filtering (WMF) technique and by separating the background and foreground regions with the Edge Preserved Background Separation and Foreground Extraction (EPBSFE) technique. Then, a set of texture patterns is extracted from four key parts (the two eyes, nose, and mouth) using the Minimal Angular Deviation (MAD) technique. Particular features are selected by the Cuckoo Search based Particle Swarm Optimization (CS-PSO) technique, which also reduces the feature dimensionality. Finally, the Weight Based Pointing Kernel Classification (WBPKC) technique is employed to recognize the emotion. In the experiments, the performance of the proposed technique is analyzed and compared using performance measures such as accuracy, sensitivity, specificity, precision, and recall.
6,110
Title: When is a real generic over L? Abstract: In this paper we isolate a new abstract criterion for when a given real x is generic over L in terms of being able to lift elementary embeddings of initial segments of L to L[x].
6,145
Title: An enhanced dynamic KC-slice model for privacy preserving data publishing with multiple sensitive attributes by inducing sensitivity Abstract: Privacy Preserving Data Publishing (PPDP) is an important aspect of real-world scenarios. PPDP moves research in the right direction by maintaining the privacy-utility trade-off while publishing data. This paper presents a dynamic data publishing concept for multiple sensitive attributes by enhancing the KC-slice model. Our proposed KCi-slice method completes the data publishing process in two phases. The first phase assigns the records into buckets based on the sensitiveness of the attributes, considering different privacy thresholds on the various sensitive attributes. It uses a semantic l-diversity approach to assign records to buckets in order to prevent similarity attacks. The privacy thresholds of all the sensitive attribute values in a bucket are verified, and the sensitive attributes are split into multiple sensitive tables according to the correlation among them. The second phase finds the correlation among quasi attributes, groups the correlated quasi attributes, and concatenates the SIDs of sensitive attribute values with quasi attribute values. Finally, it performs random permutations on the published quasi table. The proposed KCi-slice model enhances utility and reduces the suppression of multiple sensitive attributes compared to the KC-slice approach.
6,156
Title: Triangle resilience of the square of a Hamilton cycle in random graphs Abstract: Since first introduced by Sudakov and Vu in 2008, the study of resilience problems in random graphs has received a lot of attention in probabilistic combinatorics. Of particular interest are resilience problems of spanning structures. It is known that for spanning structures which contain many triangles, local resilience cannot prevent an adversary from destroying all copies of the structure by removing a negligible amount of edges incident to every vertex. In this paper we generalise the notion of local resilience to H-resilience and demonstrate its usefulness on the containment problem of the square of a Hamilton cycle. In particular, we show that there exists a constant C > 0 such that if p ≥ C log^3(n)/n then w.h.p. in every subgraph G of a random graph G(n,p) there exists the square of a Hamilton cycle, provided that every vertex of G remains on at least a (4/9 + o(1))-fraction of its triangles from G(n,p). The constant 4/9 is optimal, and the value of p slightly improves on the best-known appearance threshold of such a structure, being optimal up to the logarithmic factor.
6,160
Title: Direct serendipity and mixed finite elements on convex quadrilaterals Abstract: The classical serendipity and mixed finite element spaces suffer from poor approximation on nondegenerate, convex quadrilaterals. In this paper, we develop families of direct serendipity and direct mixed finite element spaces, which achieve optimal approximation properties and have minimal local dimension. The set of local shape functions for either the serendipity or mixed elements contains the full set of scalar or vector polynomials of degree r, respectively, defined directly on each element (i.e., not mapped from a reference element). Because there are not enough degrees of freedom for global $$H^1$$ or $$H(\text {div})$$ conformity, exactly two supplemental shape functions must be added to each element when $$r\ge 2$$ , and only one when $$r=1$$ . The specific choice of supplemental functions gives rise to different families of direct elements. These new spaces are related through a de Rham complex. For index $$r\ge 1$$ , the new families of serendipity spaces $${\mathscr {DS}}_{r+1}$$ are the precursors under the curl operator of our direct mixed finite element spaces, which can be constructed to have reduced or full $$H(\text {div})$$ approximation properties. One choice of direct serendipity supplements gives the precursor of the recently introduced Arbogast–Correa spaces (SIAM J Numer Anal 54:3332–3356, 2016. https://doi.org/10.1137/15M1013705 ). Other fully direct serendipity supplements can be defined without the use of mappings from reference elements, and these give rise in turn to fully direct mixed spaces. Our development is constructive, so we are able to give global bases for our spaces. Numerical results are presented to illustrate their properties.
6,177
Title: A practical streaming approximate matrix multiplication algorithm Abstract: Approximate Matrix Multiplication (AMM) has emerged as a useful and computationally inexpensive substitute for the actual multiplication of large matrices. Both randomized and deterministic solutions to AMM have been proposed in the past. The most recent work provides a deterministic streaming algorithm that solves AMM more accurately than earlier approaches and is both fast and accurate. However, it is less robust to noise and is liable to perform suboptimally in the presence of concept drift in the input matrices. We propose an algorithm that is more accurate, robust to noise, and invariant to concept drift in the data, while having almost the same running time as the state-of-the-art algorithm. We also prove theoretical guarantees for the proposed algorithm. An empirical performance improvement of up to 90% is obtained over the previous algorithm. We also propose a general framework for parallelizing the proposed algorithm; the two parallelized versions achieve up to 1.9x and 3.6x speedups over the original version.
6,202
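The abstract does not spell out the algorithm, so the sketch below is our illustration of the deterministic streaming baseline such work typically builds on: frequent-directions sketching (Liberty), adapted to AMM by sketching the row-wise concatenation [A | B] and reading off the cross block. The function names and the sketch size ell are assumptions; ell must not exceed the total column count.

```python
import numpy as np

def frequent_directions(rows, ell):
    """One-pass sketch S with ||M^T M - S^T S||_2 <= ||M||_F^2 / ell."""
    S = None
    for row in rows:
        if S is None:
            S = np.zeros((2 * ell, row.shape[0]))   # doubled space for speed
        free = np.where(~S.any(axis=1))[0]
        if len(free) == 0:                          # sketch full: shrink
            _, sig, Vt = np.linalg.svd(S, full_matrices=False)
            sig = np.sqrt(np.maximum(sig**2 - sig[ell - 1]**2, 0.0))
            S = np.zeros_like(S)
            S[:Vt.shape[0]] = sig[:, None] * Vt     # at least ell rows now zero
            free = np.where(~S.any(axis=1))[0]
        S[free[0]] = row
    return S

def approx_matmul(A, B, ell=32):
    """Approximate A.T @ B from a single streaming pass over paired rows."""
    S = frequent_directions(iter(np.hstack([A, B])), ell)
    G = S.T @ S                                     # approximates [A|B]^T [A|B]
    return G[:A.shape[1], A.shape[1]:]              # off-diagonal block ~ A^T B
```

Because the error of S^T S is bounded in spectral norm, the cross block inherits the same guarantee, which is the usual route from a single-matrix sketch to AMM.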
Title: A chaos-based probabilistic block cipher for image encryption Abstract: Traditional encryption is based on the secrecy provided by a secret key, but this means that the same ciphertext is generated whenever the encryption scheme is applied to the same plaintext with the same key. Replays of messages can thus be effortlessly identified by an adversary, which can be a weak link in any communication. Probabilistic encryption overcomes this weakness: different ciphertexts are generated each time the same plaintext is encrypted with the same key. Extending the probabilistic approach, which is generally employed in asymmetric encryption, this paper proposes a new chaos-based probabilistic symmetric encryption scheme with a customizable block size, suitable for image encryption. It employs a Random Bits Insertion phase followed by four rounds of two-staged diffusion involving a simple XOR (exclusive-OR) operation, making it computationally efficient. Random Bits Insertion makes the scheme probabilistic; this phase also helps increase entropy and makes the intensity distribution of the cipher more uniform. The generated ciphertext is twice the size of the plaintext. An increase in ciphertext space is inevitable for probabilistic encryption and provides an advantage, since the apparent message space for the attacker is increased. The observations show that the scheme offers high resistance to statistical and cryptanalytic attacks.
6,224
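To make the structure concrete, here is a toy Python sketch that mirrors the pieces the abstract names: random-bits insertion (doubling the size and making encryption probabilistic) followed by keyed XOR diffusion rounds driven by a logistic-map keystream. This is emphatically not the paper's cipher and is not secure; every constant and the keystream construction are our assumptions for illustration only.

```python
import os
import numpy as np

def logistic_keystream(x0, n, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x); key x0 is in (0,1)."""
    out = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt(plain: bytes, x0: float, rounds: int = 4) -> bytes:
    # Probabilistic step: interleave one fresh random byte per plaintext byte,
    # so the ciphertext is twice the plaintext size (as in the abstract).
    state = np.empty(2 * len(plain), dtype=np.uint8)
    state[0::2] = np.frombuffer(plain, dtype=np.uint8)
    state[1::2] = np.frombuffer(os.urandom(len(plain)), dtype=np.uint8)
    ks = logistic_keystream(x0, rounds * len(state))
    for rd in range(rounds):
        state = state ^ ks[rd * len(state):(rd + 1) * len(state)]  # stage 1: keyed XOR
        state = np.bitwise_xor.accumulate(state)                   # stage 2: chained diffusion
    return state.tobytes()
```

Decryption simply runs the rounds in reverse (undoing the accumulate by XORing each byte with its predecessor, then XORing the keystream) and drops the interleaved random bytes.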
Title: Parametric shape optimization using the support function Abstract: The optimization of shape functionals under convexity, diameter or constant width constraints poses numerical challenges. The support function can be used to approximate solutions to such problems by finite dimensional optimization problems under various constraints. We propose a numerical framework in dimensions two and three and we present applications from the field of convex geometry. We consider the optimization of functionals depending on the volume, perimeter and Dirichlet Laplace eigenvalues under the aforementioned constraints. In particular we numerically confirm Meissner's conjecture regarding three-dimensional bodies of constant width with minimal volume.
6,240
Title: Neutrosophic graph cut-based segmentation scheme for efficient cervical cancer detection Abstract: Cervical cancer is among the most serious categories of cancer, with a very low survival rate in the women's community around the globe. The survival probability of women affected by cervical cancer can be substantially enhanced if it is detected at an early stage, since it produces no noticeable symptoms in the early phase; it therefore needs to be detected through periodic checkups. Hence, the proposed work focuses on the merits of Neutrosophic Graph Cut-based Segmentation (NGCS) applied to preprocessed cervical images. NGCS-based segmentation is mainly employed to investigate the overlapping contexts of preprocessed cervical smear images for better classification accuracy, and it partitions the input preprocessed image into a number of non-overlapping regions, which aids perception. In NGCS-based segmentation, the preprocessed input image is transformed into a neutrosophic set, and an indeterminacy filter is constructed from the estimated indeterminacy value, which integrates the intensity and spatial information of the preprocessed image. The indeterminacy filter plays the anchor role in minimizing the indeterminacy value associated with the intensity and spatial information. Then a graph is defined over the image, with unique weights assigned to each of the image pixels based on the estimated indeterminacy value. Finally, the maximum-flow graph approach is applied over the graph to determine the optimal segmentation. The results of this NGCS-based cervical cancer detection technique are better on average by 13% compared to traditional graph-cut-oriented cancer detection approaches.
6,250
Title: Differential bond energy algorithm for optimal vertical fragmentation of distributed databases Abstract: Distributed database systems are gaining importance due to the production of massive amounts of data, and the efficacy of such systems is highly dependent upon their design. To increase the effectiveness and efficiency of distributed databases, two processes are mainly employed: fragmentation and allocation. Fragmentation can be vertical or horizontal; this work focuses on vertical fragmentation design methods. In this paper, a novel differential bond energy (DBE) algorithm is proposed with the objective of determining the optimal partition point. The performance of the proposed algorithm is compared with the classical bond energy algorithm (BEA) on the basis of the global affinity measure (GAM) value. Results are depicted as line graphs, and the mean difference in GAM values for both algorithms is also illustrated. The experimental results show that DBE is suitable for vertical fragmentation of high-dimensional problems, as it attains a higher GAM value than BEA on various datasets.
6,254
Title: Generalized Functional Pruning Optimal Partitioning (GFPOP) for Constrained Changepoint Detection in Genomic Data Abstract: We describe a new algorithm and R package for peak detection in genomic data sets using constrained changepoint models. These detect changes from background to peak regions by imposing the constraint that the mean should alternately increase then decrease. An existing algorithm for this problem gives state-of-the-art accuracy results, but it is computationally expensive when the number of changes is large. We propose a dynamic programming algorithm that jointly estimates the number of peaks and their locations by minimizing a cost function which consists of a data fitting term and a penalty for each changepoint. Empirically this algorithm has a cost that is O(N log(N)) for analyzing data of length N. We also propose a sequential search algorithm that finds the best solution with K segments in O(log(K) N log(N)) time, which is much faster than the previous O(K N log(N)) algorithm. We show that our disk-based implementation in the PeakSegDisk R package can be used to quickly compute constrained optimal models with many changepoints, which are needed to analyze typical genomic data sets that have tens of millions of observations.
6,259
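GFPOP itself (functional pruning plus the up-down mean constraint) is involved; for intuition about the cost function the abstract describes, here is the classic unconstrained optimal-partitioning dynamic program: a squared-error data-fitting term plus a penalty per changepoint. This O(N^2) Python sketch omits the constraint and the pruning that make GFPOP scale, and is only a conceptual baseline.

```python
import numpy as np

def optimal_partitioning(y, penalty):
    """O(N^2) DP: minimize total segment squared error + penalty per changepoint."""
    n = len(y)
    S1 = np.concatenate([[0.0], np.cumsum(y)])
    S2 = np.concatenate([[0.0], np.cumsum(np.square(y))])

    def seg_cost(i, j):  # squared error of fitting one mean to y[i:j]
        s, ss, m = S1[j] - S1[i], S2[j] - S2[i], j - i
        return ss - s * s / m

    F = np.full(n + 1, np.inf)
    F[0] = -penalty
    last = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        cands = [F[i] + seg_cost(i, j) + penalty for i in range(j)]
        i_star = int(np.argmin(cands))
        F[j], last[j] = cands[i_star], i_star
    cps, j = [], n          # backtrack the segment starts = changepoints
    while j > 0:
        j = last[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)
```

A common penalty choice for this kind of model is of order variance times log(N), e.g. `optimal_partitioning(y, 2 * np.var(y) * np.log(len(y)))`.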
Title: Performance evaluation of DNN with other machine learning techniques in a cluster using Apache Spark and MLlib Abstract: Sentiment analysis on large data has become challenging due to the diversity and nature of the data. Advancements in the internet, along with large data availability, have removed the traditional limitations of distributed computing. The objective of this work is to carry out sentiment analysis on the Apache Spark distributed framework to speed up computations and enhance machine performance in diverse environments. Analyses such as polarity identification, subjectivity analysis, and email spam detection are carried out on various text datasets. After preprocessing, Term Frequency-Inverse Document Frequency (TF-IDF) and the unsupervised Spark Latent Dirichlet Allocation (LDA) clustering algorithm are used for feature extraction and selection to improve accuracy. Deep Neural Networks (DNN), Support Vector Machines (SVM), and tree-ensemble classifiers are used to evaluate the performance of the framework in single-node and cluster environments. Overall, the work aims at building an approach for enhancing machine performance, more in terms of runtime than accuracy.
6,303
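The TF-IDF plus Spark-LDA feature stage the abstract mentions maps directly onto the standard pyspark.ml API. A minimal sketch follows; the input file name, the `text` column, and the choices of `numFeatures` and `k` are assumptions, not the paper's settings.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("sentiment-features").getOrCreate()
df = spark.read.json("reviews.json")   # assumed corpus with a 'text' column

# TF-IDF feature extraction pipeline.
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="tf", numFeatures=1 << 18),
    IDF(inputCol="tf", outputCol="tfidf"),
])
features = pipeline.fit(df).transform(df)

# Unsupervised LDA over the term-frequency vectors, as in the paper's
# feature-selection step (k = 20 topics is an assumption).
lda = LDA(k=20, featuresCol="tf")
topics = lda.fit(features).transform(features)
```

The resulting `tfidf` and topic-distribution columns can then feed MLlib classifiers (e.g. the multilayer perceptron or linear SVC) on a single node or a cluster.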
Title: Support vector regression and extended nearest neighbor for video object retrieval Abstract: Video retrieval is one of the emerging areas in video capture that has gained from various technical advances increasing the availability of a huge mass of videos. Given a text or image query, retrieving the relevant videos and the objects within them is not always an easy task. In previous work, a hybrid model was developed using the Nearest Search Algorithm (NSA) and the exponentially weighted moving average (EWMA) for video object retrieval; in NSA + EWMA, the object trajectories are retrieved based on a query-specific distance. This work extends that model with a novel path equalization scheme for equalizing the path length of the query and the tracked object. Initially, a hybrid model based on Support Vector Regression and NSA tracks the position of the object in the video. The proposed density measure scheme equalizes the path length of the query and the object. Then, the identified path length related to the query is given to an extended nearest neighbor classifier for retrieving the video. The simulation results show that the proposed video retrieval scheme achieved high values of 0.901, 0.860, 0.849, and 0.922 for precision, recall, F-measure, and multiple object tracking precision, respectively.
6,315
Title: Multimodal plant recognition through hybrid feature fusion technique using imaging and non-imaging hyper-spectral data Abstract: Automatic classification of plants is a growing area at the intersection of computer science and botany, and it has attracted many researchers to support plant classification using image processing and machine learning techniques. Plants can be classified using a number of traits, such as leaf color, flowers, leaves, roots, leaf shape, and leaf size; accuracy depends heavily upon the feature selection method, and the extraction of features from the selected traits is the most significant stage in classification. State-of-the-art classification can be achieved by using leaf characteristics such as leaf venation patterns, leaf spectral signatures, leaf color, and leaf shape. This paper describes a multimodal plant classification system using leaf venation patterns and their spectral signatures as the significant features, and shows that feature fusion can be used to achieve efficient plant identification. The identification accuracy for leaf spectral data, leaf venation features, and HOG features is validated, showing that the feature fusion technique performs better than non-imaging spectral signature features alone, with recognition results of 98.03% GAR versus 93.51% GAR, respectively.
6,318
Title: Fuzzy logic based adaptive duty cycling for sustainability in energy harvesting sensor actor networks Abstract: In an energy harvesting sensor actor network, a node recharges its battery from harvestable sources, such as solar, wind and vibrations. Sustainability of the network until the next recharge time is one of the most important challenges in harvesting sensor networks. In this paper, a fuzzy based adaptive duty cycling algorithm is proposed to achieve network sustainability in harvesting sensor actor networks. Current residual energy, predicted harvesting energy (for a future time slot) and predicted residual energy are taken as fuzzy input variables to estimate the duty cycle for a sensor node. A harvesting model is adopted to predict the harvesting energy, and the residual energy for a future time slot is estimated using the predicted harvesting energy, an energy consumption model and the current residual energy. Simulation results show the efficacy of the proposed mechanism in terms of network sustainability metrics, such as the number of rounds in which the network is connected, the round at which the first node dies, the maximum number of dead nodes and the average number of received packets at the actor node.
6,366
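To illustrate how three fuzzy inputs can be turned into a duty cycle, here is a minimal weighted-average (Sugeno-style) sketch. The triangular membership breakpoints and the three-rule base are illustrative assumptions, not the paper's tuned fuzzy system.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def duty_cycle(residual, harvest_pred, residual_pred):
    """Three fuzzy inputs normalized to [0,1] -> duty cycle in [0,1]."""
    low  = lambda x: tri(x, -0.5, 0.0, 0.5)
    med  = lambda x: tri(x, 0.0, 0.5, 1.0)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    # Rules as (firing strength, output duty level); min acts as fuzzy AND.
    rules = [
        (min(low(residual), low(harvest_pred)),   0.1),  # scarce energy: sleep more
        (min(med(residual), med(residual_pred)),  0.5),
        (min(high(residual), high(harvest_pred)), 0.9),  # abundant energy: stay awake
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5

print(duty_cycle(residual=0.8, harvest_pred=0.9, residual_pred=0.7))  # ~high duty
```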
Title: CDBR: A semi-automated collaborative execute-before-after dependency-based requirement prioritization approach Abstract: The success of the requirement prioritization process largely depends upon how well different constraints and influential factors are handled by stakeholders and developers during prioritization. The main goal of this research is to present a semi-automated dependency-based collaborative requirement prioritization approach (CDBR), which uses linguistic values, the execute-before-after (EBA) relation among requirements, and a machine learning algorithm to minimize the difference of opinion between stakeholders and developers, enabling effective collaboration and a better approximation of final prioritization results acceptable to both. The presented approach targets three major constraints rarely addressed in existing work: dependencies among requirements, communication between stakeholders and developers, and scalability. Performance assessments were conducted on several different requirement sets and on a case study, comparing CDBR with state-of-the-art approaches, namely AHP and IGA. The results are accurate and comparable in terms of effectiveness, efficiency, scalability, and disagreement among stakeholders and developers, which in turn makes the decision process of awarding more importance to some requirements over others more robust. CDBR outperforms AHP in efficiency and IGA in processing time.
6,378
Title: BDNA-A DNA inspired symmetric key cryptographic technique to secure cloud computing Abstract: Cloud computing facilitates the storage and management of huge volumes of data and offers the flexibility of retrieving data anytime and anywhere. In recent years, storing data on the cloud has achieved fame among corporations as well as private users. Although the cloud is drawing a lot of attention, there remain data security, privacy, reliability, and interoperability concerns that need to be addressed. To deal with these issues, cloud data encryption comes to the rescue: encrypting the data before uploading it onto the cloud prevents unauthorized users from accessing it. Many encryption algorithms have been developed to secure data stored on the cloud. In this paper, a novel cryptographic technique is presented that uses client-side encryption to encrypt the data before uploading it onto the cloud. It is a multifold symmetric-key cryptography technique based upon DNA cryptography. Besides presenting the detailed design of our approach, we compare it with existing symmetric-key algorithms (DNA, AES, DES and Blowfish). The experimental results illustrate that our proposed algorithm outperforms these traditional algorithms in terms of ciphertext size, encryption time and throughput; hence, the newly proposed technique is more efficient and offers better performance.
6,384
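The paper's exact multifold construction is not given in the abstract; the sketch below only illustrates the DNA-encoding layer common to DNA-inspired ciphers (a 2-bit-to-nucleotide mapping) stacked on a plain repeating-key XOR. It is a toy, not BDNA, and not secure.

```python
# 2-bit -> nucleotide mapping used in many DNA-inspired schemes.
BITS2DNA = {"00": "A", "01": "C", "10": "G", "11": "T"}
DNA2BITS = {v: k for k, v in BITS2DNA.items()}

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR (stand-in for the cipher's keyed stage)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def to_dna(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BITS2DNA[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    bits = "".join(DNA2BITS[ch] for ch in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cipher = to_dna(xor_bytes(b"upload me to the cloud", b"secret-key"))
assert xor_bytes(from_dna(cipher), b"secret-key") == b"upload me to the cloud"
```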
Title: On basis constructions in finite element exterior calculus Abstract: We give a systematic self-contained exposition of how to construct geometrically decomposed bases and degrees of freedom in finite element exterior calculus. In particular, we elaborate upon a previously overlooked basis for one of the families of finite element spaces, which is of interest for implementations. Moreover, we give details for the construction of isomorphisms and duality pairings between finite element spaces. These structural results show, for example, how to transfer linear dependencies between canonical spanning sets, or how to derive the degrees of freedom.
6,408
Title: A pair-based task scheduling algorithm for cloud computing environment Abstract: In the cloud computing environment, scheduling algorithms play a vital role in finding a feasible schedule for the tasks. The extant literature has shown that the task scheduling problem is NP-complete when the objective is to minimize the overall execution time. In this paper, we address the problem of scheduling a set of l tasks, with a set of |G| groups, on a set of m clouds such that the overall layover time is minimized, where the overall layover time is the sum of the timing gaps between paired tasks. We present a pair-based task scheduling algorithm for the cloud computing environment, based on the well-known optimization method called the Hungarian algorithm. The proposed algorithm handles an unequal number of tasks and clouds, and pairs the tasks to make the scheduling decision. We simulate the proposed algorithm and compare it with three existing algorithms, first-come-first-served, the Hungarian algorithm with lease time, and the Hungarian algorithm with converse lease time, on twenty-two different datasets. The performance evaluation shows that the proposed algorithm produces better layover time than the existing algorithms. The proposed algorithm is analyzed theoretically and shown to require O(kpl^2) time for k iterations, p repetitions and l tasks.
6,430
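The Hungarian-algorithm core the paper builds on is directly available in SciPy. The sketch below solves one assignment step on a hypothetical layover-time matrix (the matrix values are made up for illustration; the paper's pairing and repetition logic is not reproduced).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost[i][j] = layover (timing gap) if task i is paired with slot j.
cost = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])
rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm, O(n^3)
print(list(zip(rows, cols)), "total layover:", cost[rows, cols].sum())
```

Note that `linear_sum_assignment` also accepts rectangular matrices, which is one way to handle an unequal number of tasks and clouds.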
Title: Improved Cosine Similarity-based Artificial Bee Colony Optimization scheme for reactive and dynamic service composition Abstract: Reliable and dynamic composition of web services is essential for ensuring continuous service to users, since web services are responsible for integrating a variety of applications despite their independence. The significant development of the web services domain over the last two decades enables novel service composition and selection schemes aimed at optimal performance and success rate in dynamic web service composition. The majority of research works proposed for dynamic composition of web services are formulated using either Quality of Service (QoS) characteristics or the transactional features of the workflow. The Improved Cosine Similarity-based Artificial Bee Colony Optimization Scheme for Web Service Composition (ICS-ABCO-WSC) is proposed to integrate QoS characteristics and transactional features in determining the optimal candidate service solution from the workflow-modeled graph generated during reactive service composition. ICS-ABCO-WSC enhances the rates of exploitation and exploration by incorporating opposition learning in the employed bee phase, and combinatorial search strategy equations and an enhance-rate factor in the employed and onlooker bee phases, respectively. The success rate and optimality index derived from the experimental investigation of the ICS-ABCO scheme are 26% and 32% better than the compared baseline graph-modeled web service composition techniques. This improvement is due to the scheme's ability to enhance the precision and convergence rate of the underlying Artificial Bee Colony Optimization technique.
6,433
Title: Content-based medical image retrieval by spatial matching of visual words Abstract: Content-Based Image Retrieval (CBIR) systems have recently emerged as one of the most promising image retrieval paradigms. To mitigate the semantic gap associated with CBIR systems, Bag of Visual Words (BoVW) techniques are now increasingly used. However, existing BoVW techniques fail to capture the location information of visual words effectively. This paper proposes an unsupervised Content-Based Medical Image Retrieval (CBMIR) framework based on the spatial matching of visual words. The proposed method efficiently computes the spatial similarity of visual words using a novel similarity measure called the Skip Similarity Index. Experiments on three large medical datasets reveal promising results. The location-based correlation of visual words assists in more accurate and efficient retrieval of anatomically diverse and multimodal medical images than state-of-the-art CBMIR systems.
6,450
Title: An enhanced utilization mechanism of population information for Differential evolution Abstract: In most Differential Evolution (DE) algorithms, the inferior vectors in the selection operator are ignored during the evolutionary process. However, existing studies show that these inferior vectors can provide valuable information to guide the search of DE. Thus, how to effectively utilize the information from the current population together with the inferior vectors is one of the most salient topics in DE. This study proposes an enhanced utilization mechanism of population information (EUM) for DE. EUM comprises two novel operators that utilize the information of the inferior and superior vectors generated during evolution: a proximity-based replacement operator (PRO) and a negative direction operator (NDO). In PRO, a trial vector that is worse than its parent vector still has a chance to replace other parent vectors, under conditions based on proximity. In NDO, the winning vectors from the selection process or PRO are stored in an archive to guide the mutation process by introducing negative direction information. By incorporating EUM into DE, a novel DE framework, EUM-DE, is proposed. To test its effectiveness, EUM-DE is applied to several original and advanced DE algorithms. An experimental study on the CEC2013 benchmark functions shows that the proposed EUM is an effective approach to enhancing the performance of most DE algorithms studied.
6,495
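For context, here is a standard DE/rand/1/bin loop augmented with a proximity-based-replacement-style step: a losing trial vector may still replace its nearest other parent if it beats it. The replacement rule is our reading of PRO for illustration, not the paper's exact operator, and NDO is omitted.

```python
import numpy as np

def de_pro(f, lo, hi, pop=30, F=0.5, CR=0.9, iters=200, seed=0):
    """DE/rand/1/bin with a sketch of proximity-based replacement (minimize f)."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))
    fX = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)      # mutation
            mask = rng.random(d) < CR
            mask[rng.integers(d)] = True                       # binomial crossover
            u = np.where(mask, v, X[i])
            fu = f(u)
            if fu <= fX[i]:
                X[i], fX[i] = u, fu                            # standard selection
            else:
                # Inferior trial: instead of being discarded, it may replace
                # its nearest *other* parent if it improves on that parent.
                dist = np.linalg.norm(X - u, axis=1)
                dist[i] = np.inf
                k = int(np.argmin(dist))
                if fu < fX[k]:
                    X[k], fX[k] = u, fu
    best = int(np.argmin(fX))
    return X[best], fX[best]

x, fx = de_pro(lambda v: float((v**2).sum()), np.full(5, -5.0), np.full(5, 5.0))
print(fx)  # should approach 0 on the sphere function
```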
Title: Dynkin Games with Incomplete and Asymmetric Information Abstract: We study the value and the optimal strategies for a two-player zero-sum optimal stopping game with incomplete and asymmetric information. In our Bayesian setup, the drift of the underlying diffusion process is unknown to one player (incomplete information feature), but known to the other one (asymmetric information feature). We formulate the problem and reduce it to a fully Markovian setup where the uninformed player optimises over stopping times and the informed one uses randomised stopping times in order to hide their informational advantage. Then we provide a general verification result that allows us to find the value of the game and players' optimal strategies by solving suitable quasi-variational inequalities with some nonstandard constraints. Finally, we study an example with linear payoffs, in which an explicit solution of the corresponding quasi-variational inequalities can be obtained.
6,506
Title: Zoom based image super-resolution using DCT with LBP as characteristic model Abstract: The prime intention of a super-resolution (SR) technique is to restore a high-resolution image from one or more low-resolution (LR) images, captured from the same scene with acquisition systems of different resolutions. Because of these acquisition systems, the images suffer from an ill-posed problem with low visual quality and picture information. In this paper, a zoom-based super-resolution approach is therefore proposed for the super-resolution of low-resolution images acquired with different camera zoom lenses. In this approach, three LR images of the same static scene, acquired at three distinct zoom factors, are used, and a learning-based SR technique enhances their spatial resolution. The training dataset comprises three sets of captured images: the LR images, an enhanced version of the LR images (HR1), and an enhanced version of the HR1 images (HR2). High-frequency details of the super-resolved image are learned in the form of the discrete cosine transform (DCT) coefficients of the HR training images. Finally, the super-resolved versions of the LR observations, captured at different zoom factors, are combined. The experimental results show that this approach can be applied to various types of natural images, in grayscale as well as color, and that it performs better than existing approaches.
6,578
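The core idea, borrowing high-frequency DCT content from HR training images, can be illustrated with a toy coefficient-transfer function. This is a stand-in for the paper's learned transfer, assuming a single matched HR example of the same size as the upsampled LR image; `keep` and the band split are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def inject_high_freq(lr_up, hr_example, keep=0.25):
    """Keep the low-frequency DCT band of the upsampled LR image and fill the
    high-frequency band from a (hypothetical) matched HR training image."""
    C_lr = dctn(lr_up, norm="ortho")
    C_hr = dctn(hr_example, norm="ortho")
    h, w = lr_up.shape
    out = C_hr.copy()
    hk, wk = int(h * keep), int(w * keep)
    out[:hk, :wk] = C_lr[:hk, :wk]        # low frequencies from the observation
    return idctn(out, norm="ortho")       # high frequencies from the HR example
```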
Title: A Genetic Algorithm based approach for designing multi-state computational grid with cost and bandwidth constraints Abstract: A computational grid needs to be cost-effective and highly reliable, and it must have the ability to transfer the maximum amount of data for end-to-end delivery. Many research works on computational grids assume a binary state of operation. Although some research reveals that grids operate with multiple states of connections and operations, it is silent about the mentioned constraints. Since there are multi-state operations, the reliability depends on the load, bandwidth and cost. This paper proposes a Genetic Algorithm (GA) based approach for optimizing the reliability of a computational grid, achieving the desired reliability under cost and bandwidth constraints. The method is explained through an illustration and its implementation on several grid networks. The efficient encoding, crossover, mutation and selection operations of the proposed scheme ensure fast convergence to the optimal solution. On comparison, the proposed method is observed to be more efficient than existing ones.
6,592
Title: An enhanced deadline constraint based task scheduling mechanism for cloud environment Abstract: In this work, we present a scheduling mechanism for deadline-sensitive tasks, or leases. It is very compelling and challenging to allocate on-demand resources within the deadline rather than rejecting the lease. The work studies the existing backfilling algorithm used to schedule deadline-sensitive leases in the OpenNebula cloud platform. In the backfilling mechanism, a lease is selected first-come-first-served (FCFS) to be backfilled, whereby some idle resources can be found and allocated to other leases. However, the scheduling performance of the backfilling algorithm suffers when there are conflicts among similar leases to backfill, and the algorithm does not allow a new deadline-sensitive task to be scheduled during execution. The proposed work addresses these issues and improves scheduling performance by maximizing the task acceptance ratio and reducing the task rejection ratio. Finally, the performance is compared with existing mechanisms and shows good results.
6,607
Title: An escalated convergent firefly algorithm Abstract: The firefly algorithm is a powerful algorithm; however, it may show an inferior convergence rate towards the global optimum because its optimization process depends upon a random quantity that facilitates the fireflies' exploration of the search space at the cost of exploitation. To improve the convergence rate of the firefly algorithm while preserving its ability to exploit solutions, the algorithm is altered using two different approaches in this research work. In the first approach, a local search strategy, namely classical unidimensional local search, is employed in the firefly algorithm to improve its exploitation capability. In the second approach, a new solution search strategy, stochastic diffusion scout search, is integrated into the searching phase of the firefly algorithm to increase the probability that inferior solutions improve themselves. Both approaches are evaluated on various benchmark problems of differing complexity, and their performance is compared with the artificial bee colony and classical firefly algorithms. The results demonstrate the capability of the proposed approaches to find the global optimum in the search space while avoiding local optima.
6,690
Title: Modernizing the multi-temporal multispectral remotely sensed image change detection for global maxima through binary particle swarm optimization Abstract: Change detection quantifies changes that can lead to more concrete understanding of essential processes concerning land cover, land use and ecological variations. This paper presents an intelligent methodology to optimize the solution over the solution space and improve the efficiency of the change detection process. To support this theme, an integrated semi-supervised method is designed with a focus on image fusion, semi-supervised clustering and binary swarm based optimization. A new approach to fusion using sparse coding is used to expand the amount of information. Using the extracted information, change detection is carried out by a constrained clustering technique to provide a solution reflecting the level of change that occurred in the region of investigation. To improve accuracy through a globally optimal solution, the result is further refined through a binary swarm based optimization process, accelerating the results towards an increased level of accuracy, after which the change map is reconstructed to showcase the changes prominently. To determine the accuracy of the proposed methodology, quantitative and qualitative analyses have been performed on different datasets. The proposed method has been evaluated against existing techniques such as k-means, AKM, FCM, ECKM and ASCC, and proves to be the preeminent change detection methodology compared to the state-of-the-art methods.
6,695
Title: Fine-Grained Human-Centric Tracklet Segmentation with Single Frame Supervision Abstract: In this paper, we target the Fine-grAined human-Centric Tracklet Segmentation (FACTS) problem, where 12 human parts, e.g., face, pants, left-leg, are segmented. To reduce the heavy and tedious labeling efforts, FACTS requires only one labeled frame per video during training. The small size of human parts and the labeling scarcity make FACTS very challenging. Considering adjacent frames of vide...
6,908
Title: Estimating searching cost of regular path queries on large graphs by exploiting unit-subqueries Abstract: Regular path queries (RPQs) are widely used on graphs; the answer to an RPQ is a set of tuples of nodes connected by paths corresponding to a given regular expression. The traditional automata-based approach to evaluating RPQs is limited by the explosion of graph size, which makes graph searching costly in memory space and response time. Recently, a cost-based optimization technique using rare labels has proved effective when applied to large graphs. However, there is still room for improvement, because rare labels in the graph and/or the query are coarse information that cannot guarantee the minimum searching cost all the time. This motivates us to find a new approach that uses fine-grained information to estimate the searching cost correctly, which helps improve the performance of RPQ evaluation. For example, using the estimated searching cost, we can decompose an RPQ into small subqueries, or separate multiple RPQs into small batches of queries, for efficient parallel evaluation. In this paper, we present a novel approach for estimating the searching cost of RPQs on large graphs, with cost functions based on combinations of the searching costs of unit-subqueries (i.e., every smallest possible query). We extensively evaluated our method on real-world datasets including Alibaba, Yago and Freebase, as well as on synthetic datasets. Experimental results show that our estimation method obtains high accuracy, approximately 87% on average. Moreover, comparisons with the automata-based and rare-label-based approaches demonstrate that our approach outperforms the traditional ones.
6,918
Title: Correction to: Entrepreneurial bricolage and online store performance in emerging economies Abstract: The original version of this paper omitted the author Bang Nguyen, who has now been added to this article.
7,049
Title: Software fault classification using extreme learning machine: a cognitive approach Abstract: Software fault classification is crucial for the development of reliable, high-quality software products. Fault classification makes it possible to determine and concentrate on faulty software modules for early, timely fault prediction; as a result, it saves the industry time and money. Generally, various metrics are generated to represent faults, but selecting the dominant metrics from the available set is a challenge. Therefore, in this paper, a sequential forward search (SFS) with an extreme learning machine (ELM) is used for fault classification. Features from the available metrics are selected to represent the fault using SFS and passed to the ELM to verify software fault classification performance. Various activation functions of the ELM are also tested to identify the best model. The experimental results demonstrate that the ELM with a radial basis function achieves good results compared to the other activation functions, and the proposed method also shows good results in comparison to a support vector machine.
7,075
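An extreme learning machine is simple enough to sketch in full: random hidden-layer weights that are never trained, with output weights solved in closed form by a pseudo-inverse. This minimal numpy version (class names and defaults are ours) assumes integer class labels 0..K-1.

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random hidden weights, least-squares output weights."""
    def __init__(self, n_hidden=100, activation=np.tanh, seed=0):
        self.n_hidden, self.act = n_hidden, activation
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return self.act(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # fixed, untrained
        self.b = self.rng.normal(size=self.n_hidden)
        Y = np.eye(int(y.max()) + 1)[y]                # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ Y  # closed-form solve
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```

An RBF-style variant, as favored by the paper's results, can be approximated by passing `activation=lambda Z: np.exp(-Z**2)`; SFS would then wrap `fit`/`predict` in a greedy feature-subset loop.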
Title: Fractional weightage based objective function to a hybrid optimization algorithm for model transformation Abstract: Model transformation (MT) plays a major role in model-driven engineering (MDE), where transformations are used to transfer models among various languages, to refactor and simulate models, or to obtain useful code from models. MT is thus very important in MDE, but practical methods are not well suited to detecting errors in transformations. This paper proposes an advanced algorithm, called the fractional whale optimization integrated adaptive dragonfly (F-WOADF) algorithm, to perform MT from the class diagram (CLD) to the relational schema (RS) model. The proposed algorithm modifies the adaptive dragonfly (ADF) algorithm with concepts from the whale optimization algorithm (WOA) using fractional theory. The UML CLD is transformed into the RS model using optimal blocks selected by the proposed algorithm. The performance of the F-WOADF method is evaluated using automatic correctness (AC) and a fitness measure. The proposed method produces a maximum AC of 0.8583 and a maximum fitness measure of 0.8984, indicating its effectiveness.
7,078
Title: The development of a pervasive Web application to alert patients based on business intelligence clinical indicators: a case study in a health institution Abstract: This paper proposes the development of a pervasive Web application based on business intelligence clinical indicators created from the data stored in the health information systems of a Portuguese health institution over the last 3 years, i.e. between the beginning of 2015 and the end of 2017. This computational tool is principally intended to reduce the number of appointments, surgeries, and medical examinations that were not carried out in the hospital, most likely due to forgetfulness, since most patients who attend this health institution are elderly people and memory loss is very common with increasing age. Therefore, patients and/or their caregivers and family members are alerted via SMS, in advance and appropriately, by health professionals through the Web application. This alternative is cheaper, faster, and more customizable than sending those SMS using a smartphone. Advantages linked with the use of this solution also include reduced losses of time, human resources, and money.
7,516
Title: User interest community detection on social media using collaborative filtering Abstract: Community detection in the microblogging environment has become an important tool for understanding emerging events. Most existing community detection methods only use the network topology of users to identify optimal communities, ignoring the structural information of the posts and the semantic information of users' interests. To overcome these challenges, this paper uses a User Interest Community Detection model to analyze text streams from microblogging sites and detect users' interest communities. We propose a HITS Latent Dirichlet Allocation model, based on modified Hypertext Induced Topic Search and Latent Dirichlet Allocation, to distil emerging interests and high-influence users by reducing the negative impact of non-related users and their interests. Moreover, we propose a HITS Label Propagation Algorithm method, based on the Label Propagation Algorithm and Collaborative Filtering, to segregate the interest communities of users more accurately and efficiently. Our experimental results demonstrate the effectiveness of our model for users' interest community detection and in addressing the data sparsity problem of the posts.
7,518
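Both ingredients the paper modifies, HITS and label propagation, are available as baselines in networkx. The sketch below simply runs them side by side on a stand-in graph (the karate club graph as a hypothetical user-interaction network); the paper's fusion with LDA and collaborative filtering is not reproduced.

```python
import networkx as nx
from networkx.algorithms.community import label_propagation_communities

# Hypothetical user-interaction graph; in practice edges would come from
# mentions/retweets and could be weighted by shared posts.
G = nx.karate_club_graph()

# HITS authority scores as a proxy for high-influence users.
hubs, authorities = nx.hits(G, max_iter=1000)
influencers = sorted(authorities, key=authorities.get, reverse=True)[:5]

# Plain label propagation as the community-detection baseline.
communities = list(label_propagation_communities(G))
print("top users:", influencers)
print("communities found:", len(communities))
```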
Title: Decision level ensemble method for classifying multi-media data Abstract: In the digital era, the data for a given analytical task can be collected in different formats, such as text, images and audio; data with multiple formats are called multimedia data. Integrating and fusing multimedia datasets has become a challenging task in machine learning and data mining. In this paper, we present a heterogeneous ensemble method that combines multimedia datasets at the decision level. Our method consists of several components: extracting features from multimedia datasets that are not already represented by features, modelling each multimedia dataset independently, selecting models based on their accuracy and diversity, and building the ensemble at the decision level; hence it is called the decision level ensemble method (DLEM). The method is tested on multimedia data and compared with other heterogeneous ensemble based methods. The results show that DLEM significantly outperformed these methods.
7,528
Title: Contention-aware prediction for performance impact of task co-running in multicore computers Abstract: In this paper, we investigate the influential factors that impact performance when tasks are co-running on a multicore computer, and we propose machine learning-based prediction frameworks to predict the performance of co-running tasks. In particular, two prediction frameworks are developed for the two types of task in our model: repetitive tasks (tasks that arrive at the system repeatedly) and new tasks (tasks submitted to the system for the first time). The difference between them is that historical running information is available for repetitive tasks, whereas there is no prior knowledge about new tasks. Given the limited information about new tasks, an online prediction framework is developed that predicts the performance of co-running new tasks by sampling performance events on the fly for a short period and then feeding the sampled results to the prediction framework. We conducted extensive experiments with the SPEC2006 benchmark suite to compare the effectiveness of the different machine learning methods considered in this paper. The results show that our prediction models can achieve accuracies of 99.38% and 87.18% for repetitive tasks and new tasks, respectively.
7,530
Title: NgramPOS: a bigram-based linguistic and statistical feature process model for unstructured text classification Abstract: Research in the financial domain has shown that sentiment aspects of stock news have a profound impact on trade volumes, volatility, stock prices and firm earnings. In-depth analysis of stock news, sourced from financial reviews on various social networking and marketing sites, now helps improve decision making. Nonetheless, such reviews take the form of unstructured text, which requires natural language processing (NLP) to extract the sentiments. Accordingly, in this study we investigate the use of NLP tasks to improve the performance of sentiment classification in evaluating the information content of financial news, as an instrument in an investment decision support system. At present, feature extraction approaches are mainly based on the occurrence frequency of words, so low-frequency linguistic features that could be critical in sentiment classification are typically ignored. In this research, we attempt to improve current sentiment analysis approaches for financial news classification by focusing on low-frequency but informative linguistic expressions. Our proposed combination of low- and high-frequency linguistic expressions contributes a novel set of features for sentiment classification. The experimental results show that an optimal Ngram feature selection (a combination of optimal unigram and bigram features) enhances sentiment classification accuracy compared to other types of feature sets.
7,533
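The combined unigram-plus-bigram feature set is a one-liner in scikit-learn. The toy corpus and labels below are illustrative only; `min_df=1` is chosen deliberately to keep the low-frequency bigrams the paper argues carry sentiment signal.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["profits fell sharply", "earnings beat estimates",
        "shares fell", "record earnings"]        # toy financial snippets
labels = [0, 1, 0, 1]                            # 0 = negative, 1 = positive

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),  # unigrams + bigrams
    LogisticRegression(),
)
clf.fit(docs, labels)
print(clf.predict(["earnings fell"]))
```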
Title: Cognitive ocean of things: a comprehensive review and future trends Abstract: The scientific and technological revolution of the Internet of Things has begun in oceanography. Humans have always studied the ocean by observing it from outside; in recent years, observation has moved into the interior of the ocean, and laboratories have been built on the sea floor. Approximately 71% of the Earth's surface is covered by water. The ocean of things is expected to be important for disaster prevention, ocean resource exploration, and underwater environmental monitoring. Different from traditional wireless sensor networks, the ocean of things has its own unique features, such as low reliability and narrow bandwidth, which pose great challenges. Furthermore, the integration of artificial intelligence with the ocean of things has become a topic of increasing interest in oceanology research, and the Cognitive Ocean of Things (COT) will become the mainstream of future ocean science and engineering development. In this paper, we provide a definition of COT, and the main contributions are: (1) we review ocean observing networks around the world; (2) we propose the COT architecture and describe it in detail; (3) important and useful applications are discussed; (4) we point out future trends in COT research.
7,535
Title: Optimal ThrowBoxes assignment for big data multicast in VDTNs Abstract: Nowadays, vehicles are equipped with various kinds of sensors which constantly produce so-called big data, of which multimedia content forms the major part. Such content is usually shared among vehicles in a region of interest with relaxed delay criteria; therefore, to reduce cost, the data are delivered using a carry-and-forward method in a Vehicular Delay Tolerant Network. Since the volume of these data is not negligible, forwarding strategies must consider memory space usage as well as delivery utility. ThrowBoxes have been utilized to increase encountering opportunities. In this paper, we develop an optimal data packet assignment algorithm that achieves globally maximal delivery utility among ThrowBoxes. We evaluate the proposed scheme on a real-world data trace, and the results show that our scheme achieves better performance in terms of delay, delivery ratio, cost and other metrics.
7,547
Title: vChecker: an application-level demand-based co-scheduler for improving the performance of parallel jobs in Xen Abstract: Big data analysis requires the speedup of parallel computing. However, in virtualized systems, the power of parallel computing is not fully exploited due to the limits of current VMM schedulers. Xen, one of the most popular virtualization platforms, has been widely used by industry to host parallel jobs. In practice, virtualized systems are expected to accommodate both parallel jobs and serial jobs, and resource contention between virtual machines results in severe performance degradation of the parallel jobs. Moreover, physical resources are vastly wasted during the communication process due to the ineffective scheduling of parallel jobs. Unfortunately, the existing schedulers of Xen originally targeted serial jobs and are not capable of correctly scheduling parallel jobs. This paper presents vChecker, an application-level co-scheduler that mitigates the performance degradation of parallel jobs and optimizes the utilization of hardware resources. Our co-scheduler takes the number of available CPU cores into account on one hand, and satisfies the needs of parallel jobs on the other, helping the credit scheduler of Xen to appropriately schedule parallel jobs. As our co-scheduler is implemented at the application level, no modifications to the hypervisor are required. The experimental results show that vChecker optimizes the performance of parallel jobs in Xen and enhances the utilization of the system.
7,548
Title: A data mining approach to classify serum creatinine values in patients undergoing continuous ambulatory peritoneal dialysis Abstract: Continuous ambulatory peritoneal dialysis (CAPD) is a treatment used by patients in the end stage of chronic kidney disease. Those patients need to be monitored using blood tests, and those tests can present patterns or correlations. Applying data mining (DM) to the data collected from those tests can therefore be meaningful, as DM techniques are crucial for discovering patterns in otherwise opaque data. DM is an emerging field, currently combined with machine learning to train models that can later aid health professionals in their decision-making process. The classification process can find patterns useful for understanding patients’ health development and for acting medically according to such results. Thus, this study focuses on testing a set of DM algorithms that may help in classifying the values of serum creatinine in patients undergoing CAPD procedures. The aim is to classify the values of serum creatinine according to assigned quartiles. The best results obtained were highly satisfactory, reaching accuracy values of approximately 95% and low relative absolute error values.
7,552
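The abstract above states the quartile-classification setup but not the pipeline; the following is a minimal, hypothetical sketch of that setup with scikit-learn, where the synthetic data and column names are stand-ins for the real blood-test variables.

```python
# Hedged sketch: bin a serum-creatinine-like value into quartiles and train a
# classifier to predict the bin. Data, columns, and model are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(800, 5)),
                  columns=["urea", "potassium", "albumin", "glucose", "age"])
creatinine = df["urea"] * 0.8 + rng.normal(scale=0.5, size=800)  # synthetic
y = pd.qcut(creatinine, q=4, labels=False)   # quartile class labels 0..3

clf = GradientBoostingClassifier(random_state=0)
print(cross_val_score(clf, df, y, cv=5).mean())
```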
Title: Privacy-preserving big data analytics for cyber-physical systems Abstract: Cyber-physical systems (CPS) generate big data collected from combining physical and digital entities, but the challenge of CPS privacy preservation demands further research to protect CPS sensitive information from unauthorized access. Data mining, perturbation, transformation and encryption are techniques extensively used to protect private information from disclosure whilst still providing insight, but their effectiveness in still allowing high-level analysis is limited. This paper studies the role of big data component analysis in protecting sensitive information from illegal access. The independent component analysis (ICA) technique is applied to transform raw CPS information into a new shape whilst preserving its data utility. The mechanism is evaluated using the power CPS dataset, and the results reveal that the technique is more effective than four other privacy-preservation techniques, obtaining a higher level of privacy protection. In addition, the data utility is tested using three machine learning algorithms to estimate their capability of identifying normal and attack patterns before and after transformation.
7,553
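As a rough illustration of the ICA-based transformation idea above, the sketch below rotates synthetic "CPS" features with scikit-learn's FastICA and checks that a classifier still learns from the transformed view. The data, labels, and classifier choice are all assumptions for demonstration only.

```python
# Hypothetical sketch: transform tabular CPS-like data with ICA so that
# downstream models still work on the transformed features.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))            # stand-in for raw CPS sensor readings
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # stand-in for normal/attack labels

ica = FastICA(n_components=8, random_state=0)
X_private = ica.fit_transform(X)          # mixed/rotated view of the raw data

X_tr, X_te, y_tr, y_te = train_test_split(X_private, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy on ICA-transformed data:", clf.score(X_te, y_te))
```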
Title: Collaborative filtering driven by fast semantic feature analysis on Spark Abstract: Collaborative filtering (CF) is a prevailing technique for recommendation systems and has been comprehensively explored to tackle the problem of information overload, particularly in the Big Data context. Traditional CF algorithms are capable of performing adequately under various circumstances; nevertheless, they suffer from shortcomings involving cold start and data sparsity. Moreover, a potential breakthrough lies in taking full advantage of the valuable semantic information contained in items. Therefore, to alleviate these defects, in this paper we propose a two-stage collaborative filtering approach driven by Simhash-based semantic feature analysis, of which the first stage is Simhash-based semantic feature extraction for items and categories, and the second stage is reinforced CF rating prediction driven by intensely compressed category features. The rich semantic features of vast items and their categories can be rapidly extracted and compressed in the first stage by employing Simhash, and are then utilized to enhance the traditional collaborative filtering process. In addition, to address the demands of the Big Data context, we design a parallel algorithm on Spark to accelerate the time-consuming semantic feature extraction for vast items. Finally, we conduct comprehensive experiments on real-world datasets to validate the reinforced CF approach, and the results reveal that it achieves promising performance compared with traditional CF algorithms.
7,555
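To make the Simhash stage above concrete, here is a minimal 64-bit Simhash over bags of category tokens. The MD5 hash and unweighted tokens are illustrative choices, not necessarily the paper's exact pipeline; the point is that near-duplicate semantic profiles map to fingerprints with small Hamming distance, which is what makes the compression useful for CF.

```python
# Minimal 64-bit Simhash sketch, assuming items are described by bags of
# semantic tokens (e.g., category labels).
import hashlib

def simhash(tokens, bits=64):
    v = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

# Items with similar category tokens get nearby fingerprints.
f1 = simhash(["comedy", "romance", "1990s"])
f2 = simhash(["comedy", "romance", "2000s"])
print(hamming(f1, f2))  # small distance -> semantically close items
```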
Title: A framework for social media data analytics using Elasticsearch and Kibana Abstract: Real-time online data processing is quickly becoming an essential tool in the analysis of social media for political trends, advertising, public health awareness programs and policy making. Traditionally, processes associated with offline analysis are productive and efficient only when the data collection is a one-time process. Currently, cutting-edge research requires real-time data analysis that comes with a set of challenges, particularly the efficiency of continuous data fetching within the context of present NoSQL and relational databases. In this paper, we demonstrate a solution to effectively address the challenges of real-time analysis using a configurable Elasticsearch search engine. We use a distributed database architecture, pre-built indexing and a standardized Elasticsearch framework for large-scale text mining. The results from the query engine are visualized in near real time.
7,560
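The kind of continuous query such a framework issues against Elasticsearch might look like the sketch below, sent to the standard `_search` REST endpoint. The index name `tweets`, the field names, and the localhost address are assumptions, not details from the paper.

```python
# Hedged sketch of a near-real-time full-text query with an hourly
# date-histogram aggregation, assuming Elasticsearch on localhost:9200.
import requests

query = {
    "query": {"match": {"text": "vaccine"}},          # full-text match
    "aggs": {"per_hour": {"date_histogram": {
        "field": "created_at", "calendar_interval": "hour"}}},
    "size": 10,
}
resp = requests.get("http://localhost:9200/tweets/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"]["text"])
```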
Title: Birds of prey: identifying lexical irregularities in spam on Twitter Abstract: The advent of spam on social media platforms has led to a number of problems, not only for social media users but also for researchers mining social media data. While there has been substantial research on automated methods of spam detection on Twitter, research on the lexical content of spam on the platform is limited. A dataset of 301 million generic tweets was filtered through a URL blacklisting service to obtain 7207 tweets containing links to malicious web pages. These tweets, considered spam, were combined with a random sample of non-spam tweets to obtain an overall dataset of 14,414 tweets. A total of 12 numerical tweet features were used to train and test a Random Forest algorithm with an overall classification accuracy of over 90%. In addition to the numerical features, the text of each tweet was processed to create four frequency-mapped corpora pertaining uniquely to spam and non-spam data. The corpora of words, emoji, numbers, and stop-words for spam and non-spam were plotted against each other to visualize differences in usage between the two groups. A clear distinction was observed between the words and emoji used in spam and non-spam tweets.
7,567
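A hedged sketch of the classification step described above: a Random Forest trained on numeric per-tweet features. The 12 real feature definitions are not listed in the abstract, so synthetic stand-ins are used here.

```python
# Illustrative Random Forest over numeric per-tweet features; the columns
# and labels below are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 14414
X = rng.normal(size=(n, 12))                   # e.g., follower count, URL count, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic spam/ham labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```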
Title: Continuous restricted Boltzmann machines Abstract: Restricted Boltzmann machines are generative neural networks. They summarize their input data to build a probabilistic model that can then be used to reconstruct missing data or to classify new data. Unlike discrete Boltzmann machines, where the data are mapped to the space of integers or bitstrings, continuous Boltzmann machines directly use floating-point numbers and therefore represent the data with higher fidelity. The primary limitation in using Boltzmann machines for big-data problems is the efficiency of the training algorithm. This paper describes an efficient deterministic algorithm for training continuous machines.
7,568
Title: Constituent factors of heart rate variability ALLSTAR big data analysis Abstract: Various indices have been reported for heart rate variability (HRV), but many of them correlate with each other, suggesting the existence of underlying common factors. We tried to extract the factors underlying HRV indices and investigated their features. Using big data of 24-h electrocardiograms (ECG), called Allostatic State Mapping by Ambulatory ECG Repository (ALLSTAR), we calculated 4 time-domain, 4 frequency-domain, and 2 nonlinear HRV indices and the amplitude of cyclic variation of heart rate (Acv) in 113,793 men and 140,601 women with sinus rhythm ECGs. Factor analysis revealed two factors with eigenvalue ≥ 1 that explained 91% of the variance among the HRV indices. Factor 1, which received strong contributions from the very-low-frequency, low-frequency (LF), and high-frequency (HF) components and Acv, increased with age from 0 to 20 years, decreased until 65 years, and increased slightly after 80 years; it also increased with daily physical activity at mild activity levels. Factor 2, which received strong contributions from the scaling exponent α1 and the LF-to-HF ratio, increased with age until 35 years, plateaued between 35 and 55 years, and decreased thereafter; it also increased with mild to moderate physical activity. HRV indices are thus constituted by two common factors, relating to cardiac vagal function and to the complexity of heart rate dynamics, respectively, which differ from each other in their relationships with age and physical activity. Although many indices have been proposed for HRV, their constituent factors may be few.
7,572
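The factor-extraction step described above can be illustrated with scikit-learn's FactorAnalysis on synthetic data that mimics a two-factor structure over correlated indices; the dimensions and noise level are arbitrary choices, not the ALLSTAR data.

```python
# Sketch: recover two latent factors from correlated HRV-like indices.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
latent = rng.normal(size=(5000, 2))             # two hidden factors
loadings = rng.normal(size=(2, 11))             # 11 correlated indices
X = latent @ loadings + 0.1 * rng.normal(size=(5000, 11))

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)
print(fa.components_.shape)   # (2, 11): how each index loads on each factor
```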
Title: Destination-aware metric based social routing for mobile opportunistic networks Abstract: Although centrality is widely used to differentiate the importance of nodes for social-aware routing in mobile opportunistic networks (MONs), it is destination-agnostic, since such metrics are usually measured without destination information. To this end, we propose a destination-aware social routing scheme for MONs, namely DAS, which utilizes the destination-aware betweenness centrality (DBC) to choose the right nodes as relays for a given destination node. During message dissemination, the number of replicas for a message is calculated by the source and each relay independently, in response to network conditions, in a dynamic manner. Therefore, DAS disseminates only a few message copies to ensure data delivery while reducing routing cost. We conduct extensive simulations using real trace data sets to show improved performance with low overhead in comparison with existing social-aware routing approaches in various scenarios.
7,578
Title: Cost effective, rule based, big data analytical aggregation engine for investment portfolios Abstract: Recent developments in Big Data in the financial industry have created a huge opportunity for the design and development of effective higher-level aggregation analytical measures (Fund, Portfolio, Sector, Industry, etc.). Lack of these aggregated measures will jeopardize an organization’s ability to provide the financial services promised to clients. Vendor solutions and existing academic research (Data Cube, OLAP) can provide these aggregated measures but are expensive, time consuming and not practical to implement for a small to mid-size investment organization. Our proposed solution, using a rule-based architecture, is a cost-effective, efficient building block for rapid application and decision support systems on Big Data. Our new approach, “Selective Dimensional Cuboids”, provides a simple but robust solution with flexibility for future expansion into data mining, portfolio trend analysis and cycle forecasting. The solution is easily portable to any dimensional data set.
7,580
Title: An experimental mining and analytics for discovering proportional process patterns from workflow enactment event logs Abstract: In this paper, we carry out an experimental analysis to show how well the conceptual mining framework performs in rediscovering workflow process patterns and their enacted proportions from workflow enactment event histories logged in the XES standardized schema format. In principle, the framework must be able to properly handle all workflow process patterns based upon the four types of control-flow primitives: linear (sequential), disjunctive (selective), conjunctive (parallel), and loop (iterative) process patterns. The paper focuses on implementing an algorithmic mining framework for discovering all these process patterns and their enacted proportions. To prove the functional correctness of the framework, we carry out experimental mining and analysis on real workflow instance enactment event histories of 10,000 workcases, and we finally visualize the mining and analytic artifacts and describe the implications of the experimental results.
7,581
Title: Assessing Distraction Potential of Augmented Reality Head-Up Displays for Vehicle Drivers Abstract: Objective: To develop a framework for quantifying the visual and cognitive distraction potential of augmented reality (AR) head-up displays (HUDs). Background: AR HUDs promise to be less distractive than traditional in-vehicle displays because they project information onto the driver's forward-looking view of the road. However, AR graphics may direct the driver's attention away from critical road elements. Moreover, current in-vehicle device assessment methods, which are based on eyes-off-road time measures, cannot capture this unique challenge. Method: This article proposes a new method for the assessment of AR HUDs by measuring driver gaze behavior, situation awareness, confidence, and workload. An experimental user study (n = 24) was conducted in a driving simulator to apply the proposed method for the assessment of two AR pedestrian collision warning (PCW) design alternatives. Results: Only one of the two tested AR interfaces improved driver awareness of pedestrians without visually and cognitively distracting drivers from other road elements that were not augmented by the display but still critical for safe driving. Conclusion: Our initial human-subject experiment demonstrated the potential of the proposed method in quantifying both positive and negative consequences of AR HUDs on driver cognitive processes. More importantly, the study suggests that AR interfaces can be informative or distractive depending on the perceptual forms of graphical elements presented on the displays. Application: The proposed methods can be applied by designers of in-vehicle AR HUD interfaces and be leveraged by designers of AR user interfaces in general.
7,778
Title: Reliability analysis on ammonium nitrate/fuel oil explosive vehicle pharmaceutical system based on dynamic fault tree and Bayesian network Abstract: The ammonium nitrate/fuel oil (ANFO) explosive vehicle is among the most common equipment in mining machinery. The pharmaceutical system is the most important system of the whole mechanical system; its performance and reliability determine the reliability of the whole system. However, there is little literature on the reliability analysis of ANFO explosive vehicles. In this paper, the ANFO explosive vehicle pharmaceutical system is taken as the research object. The pharmaceutical system presents complex dynamic characteristics: the electronic control subsystem, for example, contains a functional dependency gate, and the sensitizer system includes warm spare parts. The theories and methods of dynamic fault trees and Bayesian networks were applied to the reliability study of the pharmaceutical system. By analyzing the principle of the system, a dynamic fault tree model was established and, through a mapping relation, transformed into a Bayesian network. The marginal and conditional probability distributions of each node were determined, and the prior and posterior probabilities of the pharmaceutical system were obtained. Compared with the results of Markov analysis, there are some deviations. According to the ranking of the influence of parts on reliability, the fuel flow-meter, oil filter element, sensitizer flow-meter, line interface, and oil pipe are the weak links of the pharmaceutical system; they can be regarded as important objects of system improvement, fault maintenance and health management. The sensitizer system is the most reliable subsystem of the pharmaceutical system; because of the spare-part logic in this subsystem, it has the least influence on the pharmaceutical system. In addition, it is necessary to reduce the impact of the operating environment in the reliability analysis of similar systems. The results can improve maintenance efficiency and provide theoretical support for system improvement.
7,854
Title: An efficient searching method for minimal path vectors in multi-state networks Abstract: Searching for minimal path vectors (MPVs) is an important topic in solving network-related problems, especially in the evaluation of network reliability. The efficiency of searching for MPVs in multi-state networks (MSNs) depends deeply on one of the popular approaches, namely the three-stage method (TSM). TSM consists of three stages: searching for all minimal path sets, searching for all MPVs, and calculating the union probability over the MPVs. After reviewing previous works in the literature, this paper proposes a more efficient method based on a cyclic check on the MPV candidates, which searches efficiently for MPVs in MSNs and can even reduce the three-stage approach to a two-stage one. Benchmarks against well-known algorithms are presented, and more complicated networks are also examined to verify the proposed method.
7,864
Title: Application of dynamic evidential networks in reliability analysis of complex systems with epistemic uncertainty and multiple life distributions Abstract: With the modernization and increasing intelligence of industrial equipment and systems, the challenges of dynamic characteristics, failure dependency and uncertainty have grown with system complexity. Besides, various types of components may follow different life distributions, which introduces the multiple-life-distribution problem in systems. In order to model the impact of time dependency and epistemic uncertainty on the failure behavior of a system, this paper combines flexible dynamic modeling with uncertainty expression. Its advantages are the intuitive graphical representation and reasoning brought by the evidential network (EN). The discrete-time dynamic evidential network (DT-DEN) is introduced to analyze the reliability of complex systems, and the network inference mechanism is clearly defined. Evidence theory and the original definition and inference mechanism of the conventional EN are first reviewed, and the DT-DEN is then presented. Furthermore, multiple life distributions are synthesized into the DT-DEN to tackle the challenges of epistemic uncertainty and mixed life distributions. Specifically, the dynamic logic gates are converted into equivalent DENs with distinguished conditional mass tables, and the belief interval of system reliability can then be calculated by forward network reasoning. Finally, the availability and efficiency of the proposed method are verified by numerical examples.
7,892
Title: Determination of feeding strategies in aquaculture farms using a multiple-criteria approach and genetic algorithms Abstract: Since the 1990s, fishing production has stagnated while aquaculture has experienced exponential growth thanks to production on an industrial scale. One of the major challenges facing aquaculture companies is the management of breeding activity, which is affected by biological, technical, environmental and economic factors. In recent years, decision-making has also become increasingly complex due to the need for managers to consider aspects other than economic ones, such as product quality or environmental sustainability. In this context, there is an increasing need for expert systems applied to decision-making processes that maximize the economic efficiency of the operational process. One of the production planning decisions most affected by these changes is the feeding strategy. The selection of the feed determines the growth of the fish, but it also generates the greatest environmental impact of the activity and determines the quality of the product. In addition, feed is the main production cost in finfish aquaculture. In order to address all these problems, the present work integrates a multiple-criteria methodology with a genetic algorithm that determines the best sequence of feeds to be used throughout the fattening period, depending on multiple optimization objectives. Results show its utility in generating and evaluating different alternatives and confirm the initial hypothesis, demonstrating that the combination of several feeds at precise times may improve the results obtained by one-feed strategies.
7,895
Title: Reliability calculation method based on the Copula function for mechanical systems with dependent failure Abstract: In order to accurately calculate the reliability of mechanical components and systems with multiple correlated failure modes, and to reduce the computational complexity of these calculations, the Copula function is used to represent the dependence structures among failure modes. Based on a correlation analysis of the failure modes of the parts of a system, a life distribution model of the components is constructed using the Copula function. The type of Copula model is initially selected using a bivariate frequency histogram of the empirical life distribution of the two components. The unknown parameters in the Copula model are estimated using maximum likelihood estimation, and the most suitable Copula model is determined by calculating the squared Euclidean distance. The reliability of series, parallel, and series–parallel systems is analyzed based on the Copula function, with life used as the variable that measures the correlation between components. Thus, a reliability model of a system with life correlations is established. The reliability calculation of a particular diesel crank and connecting rod mechanism is taken as a practical example to illustrate the feasibility of the proposed method.
7,902
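To make the copula idea concrete, the Monte Carlo sketch below estimates series-system reliability for two dependent lifetimes coupled by a Clayton copula — one standard candidate family; the abstract does not say which family the paper selected. The exponential marginals, dependence parameter, and mission time are illustrative assumptions.

```python
# Monte Carlo series-system reliability with Clayton-coupled lifetimes,
# sampled via the conditional-distribution method.
import numpy as np

rng = np.random.default_rng(5)
theta = 2.0                                   # Clayton dependence parameter
n = 100_000
u1 = rng.random(n)
w = rng.random(n)
# Invert the conditional CDF of the Clayton copula given u1.
u2 = ((w ** (-theta / (1 + theta)) - 1) * u1 ** (-theta) + 1) ** (-1 / theta)

t1 = -np.log(1 - u1) * 500                    # exponential lifetime, mean 500 h
t2 = -np.log(1 - u2) * 800                    # exponential lifetime, mean 800 h
mission = 200.0
print("series reliability:", np.mean((t1 > mission) & (t2 > mission)))
```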
Title: An extended geometric process repairable model with its repairman having vacation Abstract: In this paper, a new repairable system model with a single component and one repairman is proposed. The successive working times of the component and the successive repair times after failure are assumed to be described by extended geometric processes. The repairman takes multiple vacations while the component is working, and repair of the component is delayed with a given probability when it fails. The component resumes work once it is repaired. Under these assumptions, the explicit expression of the long-run average cost rate function of the system, based on the failure number of the component, is derived. Numerical cases are designed to illustrate the long-run average cost rate function of the proposed model. Finally, a sensitivity analysis of the parameters is carried out.
7,921
Title: Maintenance modeling and operation parameters optimization for complex production line under reliability constraints Abstract: An optimal preventive maintenance policy and an optimization method for the operation parameters of a production line consisting of multiple execution units are described herein. According to the characteristics of the production unit, the relationship between the reliability and the operating parameters of the execution unit is established, as well as the relationship between the operating parameters and maintenance cost. Minimal maintenance cost and effective operating speed are selected as the objectives, and the optimal parameters are derived by a heuristic algorithm. Finally, a numerical example and simulation experiments are presented which validate the effectiveness of the proposed method.
7,926
Title: Recognition method of equipment state with the FLDA based Mahalanobis–Taguchi system Abstract: The Mahalanobis–Taguchi system (MTS) is a big data classification and reduction method that can be used in fault diagnosis and maintenance modeling, and it performs particularly well in big data contexts. MTS uses the Mahalanobis distance (MD) as the measurement scale to identify the state of a system with multidimensional characteristics. However, when the benchmark and abnormal spaces constructed by the traditional MTS overlap seriously, the model exhibits an imbalanced ability to classify samples. To address this problem, this paper proposes a modified MTS amended by Fisher linear discriminant analysis (FLDA) and applies it to recognizing the running state of equipment. First, the paper discusses the limitation of using MD as the measurement scale in the traditional model and adopts balanced accuracy as the evaluation index for the balance of the model’s classification. A threshold optimization model is then discussed with different weight coefficients, considering the actual costs and losses of missed alarms and false alarms. Furthermore, FLDA is used to calculate the projection matrix, and the best projection vector is selected to amend the traditional measurement scale. Finally, using bearing running data, the modified model amended by FLDA is compared with the traditional MTS and FLDA models from two aspects: the accuracy index and the size of the abnormal sample set. The results prove the effectiveness and superiority of the modified model.
7,932
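The Mahalanobis-distance measurement scale at the core of MTS can be sketched in a few lines: a benchmark (healthy) space fixes the mean and covariance, and new samples are scored by their distance from it. The threshold below is an arbitrary illustration, not the paper's optimized one.

```python
# Minimal MD measurement-scale sketch for an MTS-style state check.
import numpy as np

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, size=(500, 6))        # healthy-state sensor data
mu = normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

def mahalanobis_sq(x):
    d = x - mu
    return float(d @ cov_inv @ d)

sample = rng.normal(2, 1, size=6)               # shifted, possibly faulty state
print(mahalanobis_sq(sample) > 9.0)             # flag if beyond a set threshold
```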
Title: Reliability evaluation of a stochastic multimodal transport network under time and budget considerations Abstract: Recently, taking more than one means of transportation to work or to deliver goods has become common for citizens of big cities and for logistics companies. A transport system that contains various transport means is called a multimodal transport system. On each route there is a carrier, such as an airline, railway or bus company, that has contracted to carry passengers on a specific schedule. As each carrier may have other customers, the number of passengers that it can serve is stochastic. Hence, this study formulates a stochastic multimodal transport network (SMTN) to model a multimodal transport system. Transfer time and fees are incurred when changing the means of transportation. This study proposes a complete algorithm to evaluate the reliability of the SMTN, that is, the probability that the requested number of passengers can be sent successfully from the origin to the destination under time and budget constraints. Since time and budget are what passengers consider most before traveling, the reliability in this study evaluates the SMTN with respect to meeting travel demand and passengers’ requirements simultaneously. A case study of a travel agent is presented to demonstrate the solution procedure.
7,933
Title: A novel TOPSIS–CBR goal programming approach to sustainable healthcare treatment Abstract: Cancer is one of the most common diseases worldwide and its treatment is a complex and time-consuming process. Specifically, prostate cancer, as the most common cancer among the male population, has received the attention of many researchers. Oncologists and medical physicists usually rely on their past experience and expertise to prescribe the dose plan for cancer treatment. The main objective of the dose planning process is to deliver a high dose to the cancerous cells while minimizing the side effects of the treatment. In this article, a novel TOPSIS case-based reasoning goal-programming approach is proposed to optimize the dose plan for prostate cancer treatment. First, a hybrid retrieval process, TOPSIS–CBR [technique for order preference by similarity to ideal solution (TOPSIS) and case-based reasoning (CBR)], is used to capture the expertise and experience of oncologists. Thereafter, the dose plans of retrieved cases are adjusted using a goal-programming mathematical model. This approach not only helps oncologists to make a better trade-off between different conflicting decision-making criteria but also delivers a high dose to the cancerous cells with minimal and necessary effect on the surrounding organs at risk. The efficacy of the proposed method is tested on a real data set collected from Nottingham City Hospital using a leave-one-out strategy. In most cases, the treatment plans generated by the proposed method are consistent with, or even better than, the dose plans prescribed by an experienced oncologist. The developed decision support system can assist both new and experienced oncologists in the treatment planning process.
7,939
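For reference, a plain TOPSIS ranking routine of the kind used in the retrieval stage is sketched below; the decision matrix, weights, and benefit/cost directions are made-up placeholders rather than the paper's clinical criteria.

```python
# Generic TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(M, weights, benefit):
    """M: alternatives x criteria; benefit[j]=True if larger is better."""
    N = M / np.linalg.norm(M, axis=0)          # vector-normalize each criterion
    V = N * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness: higher = better

M = np.array([[250.0, 16.0, 12.0], [200.0, 16.0, 8.0], [300.0, 32.0, 16.0]])
scores = topsis(M, np.array([0.2, 0.5, 0.3]), np.array([False, True, True]))
print(scores.argsort()[::-1])                  # ranking of past cases
```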
Title: Multicriteria analysis of renewable-based electrification projects in developing countries Abstract: The design of wind-photovoltaic stand-alone electrification projects that combine individual systems and microgrids is complex and requires support tools. In this paper, a multicriteria procedure is presented in detail, which aims to assist project developers in such designs. More specifically, the procedure has been developed under a four-part structure, using support tools and expert consultations to enhance practicality in the rural context of developing countries. First, from a large set of criteria, a reduced and easy-to-handle subset is chosen, representing the main characteristics to be assessed in rural electrification projects. Second, two iterative processes, one based on the Analytical Hierarchy Process and one based on a typical 1–10 assessment, are tested to assign weights to the criteria, reflecting end-user preferences. Third, indicators are proposed to evaluate, in an objective manner, how well each solution satisfies each criterion. Fourth, considering the weights and evaluations, the solutions are ranked using the compromise programming technique, thus selecting the best one(s). The whole procedure is illustrated by designing the electrification project of a real community in the Andean highlands. In short, this paper provides insights into a suitable decision-making process for the design of wind-PV electrification systems and shows how different multicriteria techniques can be applied to a very local context in rural, remote and very poor areas of developing countries.
7,946
Title: Properties and estimation of a bivariate geometric model with locally constant failure rates Abstract: Stochastic models for correlated count data have been attracting a lot of interest in recent years, due to their many possible applications: for example, in quality control, marketing, insurance, and the health sciences. In this paper, we revisit a bivariate geometric model, introduced by Roy (J Multivar Anal 46:362–373, 1993), which is very appealing, since it generalizes the univariate concept of a constant failure rate—which characterizes the geometric distribution within the class of all discrete random variables—to two dimensions, by introducing the concept of “locally constant” bivariate failure rates. We mainly focus on four aspects of this model that have not been investigated so far: (1) pseudo-random simulation, (2) attainable Pearson’s correlations, (3) the stress–strength reliability parameter, and (4) parameter estimation. A Monte Carlo simulation study is carried out to assess the performance of the different estimators proposed, and an application to real data, along with a comparison with alternative bivariate discrete models, is provided as well.
7,949
Title: Pricing decisions in a dual supply chain of organic and conventional agricultural products Abstract: We analyze a dual-channel supply chain comprising two suppliers that offer vertically-differentiated agricultural products; specifically, one offers an organic version of an agricultural product and the other offers a conventionally-grown version of the same product. Each supplier distributes his product through two channels: directly to consumers and via a single retailer who sells both product versions. Consumers are assumed to be heterogeneous in their valuations of the benefits associated with organic products and of the benefits of purchasing from a retailer (reflected, e.g., in extra services). We also assume that the agricultural products can depreciate in value (e.g., due to deterioration and spoilage). We study market competition in the case where all supply chain members (the two suppliers and the retailer) determine their prices simultaneously in order to maximize their respective profits. First, we analytically solve the cases where organic and conventional value depreciations are equal in each channel or where the direct and the retailer value depreciations are equal for each product. Under these assumptions, we show that each supplier sets a wholesale price that is equal to the direct price. Moreover, the price margins set by the retailer for both the organic and conventional versions are identical. The more general case is analyzed numerically; this analysis reveals the relationships between the model parameters, the equilibrium pricing and the profits of the supply chain members. We identify the conditions under which the wholesale prices of each product version are higher than the direct prices.
7,951
Title: A framework for fatigue reliability analysis of high-pressure turbine blades Abstract: Fatigue evolution under sustained stresses is a process of degradation of material performance with many uncertainties. In order to quantify the uncertainties of materials and working conditions, a probabilistic method is utilized to estimate the reliability of structures by considering the scatter of the fatigue life prediction model, in which improvements are provided to model the accumulation of damage. First, the fatigue parameters are modeled by Bayesian theory and finite element analysis. Second, the distributions of the parameters are transformed by the probabilistic method into the distribution of fatigue life using the fatigue life prediction model, and a damage accumulation model is chosen to characterize the evolution of material properties. Finally, the probability distribution function transformation approach is employed to derive the distribution of fatigue damage from the known distribution of fatigue life, and a general probabilistic method is then used to estimate the reliability. By combining the above methods, a framework for reliability analysis is established and used to calculate the reliability of high-pressure turbine blades in the low-cycle fatigue region under variable amplitude loadings.
7,956
Title: Ballast water dynamic allocation optimization model and analysis for safe and reliable operation of floating cranes Abstract: Ballast water can adjust heel and trim, and its optimal allocation is very important for assuring the safe operation of floating cranes. The combined ballast system, based on pumps and gravity-driven water self-flow, can transfer a large amount of ballast water in a short time and is widely used in large floating cranes. Based on ship hydrostatics and optimization theory, an optimization model of ballast water allocation is built to minimize the ballasting time of floating cranes using the combined ballast system. In this model, the variations of the water levels of all ballast tanks are taken as the optimization variables, and hull balance during the ballasting process as the constraint conditions. The ballasting process can be considered a Markov process, so a dynamic programming model is established for the dynamic ballasting. The analysis of a numerical simulation case shows that the ballasting time of the combined ballast system is clearly reduced compared with a pump-only ballast system. The tilt angles of the floating crane remain very small, which assures the operational safety and reliability of the crane. The established optimization model and method obtain the optimal ballasting process, effectively reduce ballasting time, and provide decision-model and algorithm support for improving efficiency and achieving computer-based automatic or intelligent control of dynamic ballasting for floating cranes.
7,958
Title: Maintenance effort management based on double jump diffusion model for OSS project Abstract: Many open source software (OSS) products developed under various OSS projects are in operation around the world. Considering the characteristics of OSS development and management projects, operation performance measures for OSS project management fluctuate irregularly over long-term operation, because several developers and many users are closely involved in the maintenance of OSS. Moreover, OSS projects depend heavily on the Internet environment. This paper focuses on the irregular fluctuation of operation performance measures for OSS project management. We apply double jump diffusion process models to noisy cases in the operation of OSS. In particular, maintenance effort is estimated by a stochastic differential equation model in terms of OSS project management. Moreover, we propose a method of maintenance effort management based on the double jump diffusion process model, considering the irregular fluctuation of performance in OSS projects. This will help OSS developers and managers understand the maintenance effort status of OSS from the standpoint of OSS project management. We also analyze actual data to show numerical examples of the proposed models, considering the noise and jump characteristics of OSS projects.
7,962
Title: Learning context-dependent choice functions Abstract: Choice functions accept a set of alternatives as input and produce a preferred subset of these alternatives as output. We study the problem of learning such functions under conditions of context-dependence of preferences, which means that the preference in favor of a certain choice alternative may depend on what other options are also available. In spite of its practical relevance, this kind of context-dependence has received little attention in preference learning so far. We propose a suitable model based on context-dependent (latent) utility functions, thereby reducing the problem to the task of learning such utility functions. Practically, this comes with a number of challenges. For example, the set of alternatives provided as input to a choice function can be of any size, and the output of the function should not depend on the order in which the alternatives are presented. To meet these requirements, we propose two general approaches based on two representations of context-dependent utility functions, as well as instantiations in the form of appropriate end-to-end trainable neural network architectures. Moreover, to demonstrate the performance of both networks, we present extensive empirical evaluations on both synthetic and real-world datasets.
8,164
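A minimal sketch of the two requirements emphasized above — handling input sets of any size and being invariant to presentation order — using a Deep-Set-style scorer with random weights: each alternative is scored against a pooled context embedding, so the chosen subset is context-dependent yet order-independent. This illustrates the design constraints only, not the paper's trained architectures.

```python
# Permutation-invariant, set-size-agnostic choice function sketch.
import numpy as np

rng = np.random.default_rng(6)
W_ctx = rng.normal(size=(4, 8))               # untrained, illustrative weights
W_item = rng.normal(size=(4, 8))
w_out = rng.normal(size=8)

def choose(alternatives):
    A = np.asarray(alternatives)              # (n_items, 4) feature matrix
    context = np.tanh(A @ W_ctx).mean(axis=0) # order-invariant pooling
    scores = np.tanh(A @ W_item + context) @ w_out
    return scores > scores.mean()             # context-dependent chosen subset

items = rng.normal(size=(5, 4))
print(choose(items))
print(choose(items[::-1])[::-1])              # same choices, reversed order
```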
Title: Constrained mutual convex cone method for image set based recognition Abstract: •Propose novel image-set classification frameworks based on convex cone representation.•Define the similarity between two convex cones.•Introduce discriminative feature extraction method based on the gaps among convex cones.•Demonstrate the effectiveness of the proposed methods through visualizations and classification experiments.
8,165
Title: Envy-free matchings in bipartite graphs and their applications to fair division Abstract: A matching in a bipartite graph with parts X and Y is called envy-free if no unmatched vertex in X is adjacent to a matched vertex in Y. Every perfect matching is envy-free, but envy-free matchings exist even when perfect matchings do not. We prove that every bipartite graph has a unique partition such that all envy-free matchings are contained in one of the partition sets. Using this structural theorem, we provide a polynomial-time algorithm for finding an envy-free matching of maximum cardinality. For edge-weighted bipartite graphs, we provide a polynomial-time algorithm for finding a maximum-cardinality envy-free matching of minimum total weight. We show how envy-free matchings can be used in various fair division problems with either continuous resources (“cakes”) or discrete ones. In particular, we propose a symmetric algorithm for proportional cake-cutting, an algorithm for 1-out-of-(2n-2) maximin-share allocation of discrete goods, and an algorithm for 1-out-of-⌊2n/3⌋ maximin-share allocation of discrete bads among n agents.
8,185
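A sketch of one natural reading of the maximum-cardinality algorithm implied by the structural theorem: compute a maximum matching, then discard the part reachable by alternating paths from unmatched X-vertices, since those vertices would otherwise create envy. Written with networkx; treat it as an assumption-laden illustration rather than the paper's exact procedure.

```python
# Envy-free matching sketch: maximum matching minus the alternating-path
# region grown from unmatched X-vertices.
import networkx as nx

def max_envy_free_matching(G, X):
    M = nx.bipartite.maximum_matching(G, top_nodes=X)  # dict, both directions
    unmatched = [x for x in X if x not in M]
    bad, frontier = set(unmatched), list(unmatched)
    while frontier:                       # unmatched edge out, matched edge back
        x = frontier.pop()
        for y in G[x]:
            if y in M:                    # y is matched; its partner becomes bad
                x2 = M[y]
                if x2 not in bad:
                    bad.add(x2)
                    frontier.append(x2)
    return {x: M[x] for x in X if x in M and x not in bad}

G = nx.Graph([("x1", "y1"), ("x2", "y1"), ("x2", "y2")])
print(max_envy_free_matching(G, {"x1", "x2"}))
```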
Title: Robust and Scalable Methods for the Dynamic Mode Decomposition* Abstract: The dynamic mode decomposition (DMD) is a broadly applicable dimensionality reduction algorithm that decomposes a matrix of time-series data into a product of a matrix of exponentials, representing Fourier-like time dynamics, and a matrix of coefficients, representing spatial structures. This interpretable spatio-temporal decomposition is classically formulated as a nonlinear least squares problem and solved within the variable projection framework. When the data contain outliers, or other features that are not well represented by exponentials in time, the standard Frobenius-norm misfit penalty creates significant biases in the recovered time dynamics. As a result, practitioners are left to clean such defects from the data manually or to use a black-box cleaning approach like robust principal component analysis (PCA). As an alternative, we propose a robust statistical framework for the optimization used to compute the DMD itself. We also develop variable projection algorithms for these new formulations, which allow for regularizers and constraints on the decomposition parameters. Finally, we develop a scalable version of the algorithm by combining the structure of the variable projection framework with the stochastic variance reduced gradient (SVRG) paradigm. The approach is tested on a range of synthetic examples, and the methods are implemented in an open-source software package, RobustDMD.
8,223
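For contrast with the robust formulations above, here is textbook exact DMD in NumPy, built around the plain least-squares/SVD step that the paper replaces with robust penalties; the toy snapshot matrix is synthetic.

```python
# Exact DMD baseline: reduced operator via truncated SVD, then exact modes.
import numpy as np

def exact_dmd(X, r):
    """X: (space, time) snapshot matrix; r: truncation rank."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s      # r x r reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T / s @ W                # exact DMD modes
    return eigvals, modes

t = np.linspace(0, 4 * np.pi, 100)
X = np.outer(np.ones(10), np.sin(t)) + np.outer(np.arange(10), 0.01 * t)
eigvals, modes = exact_dmd(X, r=3)
print(np.abs(eigvals))   # eigenvalues near 1 indicate sustained dynamics
```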
Title: Systematic literature review of mobile application development and testing effort estimation Abstract: In recent years, advances in mobile technology have brought an enormous change to the daily lifestyles of individuals. Smartphones and mobile devices are ubiquitous in all aspects of human life. This has led to extreme demand for developing software that runs on mobile devices. Developers have to keep up with this high demand and deliver high-quality apps on time and within budget. For this, estimation of the development and testing effort for apps plays a pivotal role. In this paper, a Systematic Literature Review (SLR) is conducted to highlight the development and testing estimation process for software and applications. The goal of the present literature survey is to identify and compare existing test estimation techniques for traditional (desktop/laptop) software and for mobile software/applications. The characteristics that make mobile software/applications different from traditional software are identified in this survey. Further, since the trend in software development is towards agile methods, this study also presents and compares estimation techniques used in agile software development for mobile applications. The analysis of the literature reveals a research gap: formal models for estimating mobile applications that consider the specific characteristics of mobile software.
8,339
Title: On QoS evaluation for ZigBee incorporated Wireless Sensor Network (IEEE 802.15.4) using mobile sensor nodes Abstract: The design of an efficient and scalable Wireless Sensor Network (WSN) that accommodates fluctuations in topology, node mobility, node density, and network size is an exacting task. A meticulous investigation is needed to deploy the sensor nodes in an appropriate topology at optimum node mobility in ZigBee-incorporated WSNs so as to offer optimum Quality of Service (QoS). Accordingly, an attempt to compute the comprehensive performance of a non-beacon-mode-based 802.15.4/ZigBee-integrated WSN is demonstrated. An analytical model is designed using parameters such as back-off number, retransmission limit, and back-off exponent, considering the impact of node mobility, which has not been reported earlier. Further, the investigation is carried out experimentally by evaluating different QoS metrics, for instance, throughput, network load, bit error rate (BER), received power, signal-to-noise ratio (SNR) and end-to-end delay, for diverse node densities and network sizes of a mobile Wireless Sensor Network. Based on the measurements obtained, the authors recommend deploying the mobile sensor nodes in a cluster-tree fashion to afford the best possible QoS.
8,349
Title: Hamilton-Green solver for the forward and adjoint problems in photoacoustic tomography Abstract: •Novel acoustic solver for the forward and adjoint problems in photoacoustic tomography.•Ability to evaluate partial operators i.e. a solution for a subset of sensors.•Efficient region of interest tomography and coupling with ultrasound tomography.•Solver combines evaluation of the Green's formulae and ray-based approximation.•Evaluation of the solver on a 2D numerical phantom and comparison with k-Wave toolbox.
8,369
Title: Copy-move forgery detection using binary discriminant features Abstract: Digital images have numerous applications in various fields such as cyber forensics, courts, newspapers, magazines, and the medical field. Therefore, the trustworthiness of the contents of an image is very essential. Image manipulation has become very simple and common due to the development of high-resolution digital cameras, powerful computers and numerous image editing software packages. Hence, falsification of the visual content of images is no longer limited to specialists, and detection of forged images has become a highly challenging task. Of the different kinds of image forgery, the proposed method deals with one particular kind known as copy-move forgery. The method is an integration of the conventional block-based and keypoint-based methods. It first identifies the forged keypoint locations and then localizes the forged region using a region extraction technique. To improve detection accuracy, a Binary Discriminative Feature descriptor (BDF) is used for feature detection and matching. The suspected regions are identified by replacing the matching feature points with corresponding superpixel blocks. The neighbouring blocks are then merged with the suspected regions based on color similarity using color histogram matching, and a final morphological close operation extracts the detected forged regions. Experimental results show that the proposed method has better detection accuracy, in terms of precision, recall and F1 score, than state-of-the-art copy-move forgery detection methods, under plain copy-move forgery as well as post-processing conditions such as brightness changes, contrast adjustments, color reduction and blurring.
8,392
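The keypoint stage of such a pipeline can be sketched with OpenCV by matching binary descriptors of an image against themselves; ORB stands in here for the paper's BDF descriptor, and the distance thresholds and file path are placeholders.

```python
# Self-matching binary descriptors to surface duplicated (copy-moved) regions.
import cv2
import numpy as np

img = cv2.imread("suspect.jpg", cv2.IMREAD_GRAYSCALE)  # path is a placeholder
orb = cv2.ORB_create(nfeatures=5000)
kps, desc = orb.detectAndCompute(img, None)

bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(desc, desc, k=3)   # self-match; m[0] is the trivial hit

for m in matches:
    for cand in m[1:]:
        p1 = np.array(kps[cand.queryIdx].pt)
        p2 = np.array(kps[cand.trainIdx].pt)
        # Similar descriptor but spatially distant -> copy-move candidate.
        if cand.distance < 30 and np.linalg.norm(p1 - p2) > 40:
            print("possible copy-move pair:", p1, p2)
```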
Title: Incidence dimension and 2-packing number in graphs Abstract: Let G = (V, E) be a graph. A set of vertices A is an incidence generator for G if for any two distinct edges e, f in E(G) there exists a vertex from A which is an endpoint of either e or f. The smallest cardinality of an incidence generator for G is called the incidence dimension and is denoted by dim_I(G). A set of vertices P, a subset of V(G), is a 2-packing of G if the distance in G between any pair of distinct vertices from P is larger than two. The largest cardinality of a 2-packing of G is the packing number of G and is denoted by rho(G). In this article, the incidence dimension is introduced and studied. The given results show a close relationship between dim_I(G) and rho(G). We first note that the complement of any 2-packing in a graph G is an incidence generator for G, and further show that either dim_I(G) = |V(G)| - rho(G) or dim_I(G) = |V(G)| - rho(G) - 1 for any graph G. In addition, we present some bounds for dim_I(G) and prove that the problem of determining the incidence dimension of a graph is NP-hard.
8,404
Title: Another Note on Intervals in the Hales-Jewett Theorem Abstract: The Hales-Jewett Theorem states that any r-colouring of [m]^n contains a monochromatic combinatorial line if n is large enough. Shelah's proof of the theorem implies that for m = 3 there always exists a monochromatic combinatorial line whose set of active coordinates is the union of at most r intervals. For odd r, Conlon and Kamcev constructed r-colourings for which fewer than r intervals do not suffice. However, we show that for even r and large n, any r-colouring of [3]^n contains a monochromatic combinatorial line whose set of active coordinates is the union of at most r - 1 intervals. This is optimal and extends a result of Leader and Ray for r = 2.
8,430
Title: Additive Approximation of Generalized Turan Questions Abstract: For graphs G and T, and a family of graphs F, let ex(G, T, F) denote the maximum possible number of copies of T in an F-free subgraph of G. We investigate the algorithmic aspects of calculating and estimating this function. We show that for every graph T, finite family F and constant epsilon > 0 there is a polynomial time algorithm that approximates ex(G, T, F) for an input graph G on n vertices up to an additive error of epsilon n^{v(T)}. We also consider the possibility of a better approximation, proving several positive and negative results, and suggesting a conjecture on the exact relation between T and F for which no significantly better approximation can be found in polynomial time unless P = NP.
8,489
Title: 3D sign language recognition using spatio temporal graph kernels Abstract: 3D sign language recognition is challenging, from capture through to recognition. 3D signs are a set of spatio-temporal variations of the hands and fingers with respect to the face, head and torso. 3D motion capture technology has enabled us to capture these complex 3D human motions while preserving 95% of the visual information required for recognition. A twin motion algorithm is proposed to recognize 3D signs with variable motion joints. Variable motions in joints arise due to non-uniform distances between the joints; for example, finger motions differ from hand motions. A common measure for extracting motion features from 3D skeletal data is the relative range of joint relative distance (RRJRD). However, RRJRD cannot quantify all the relative joint motions that characterize a sign, because of the differences in motion ranges between the body parts used in defining a sign. Hence, we propose a characterization based on wide RRJRD and narrow RRJRD to project the motion features onto a graph. Each sign is characterized by a set of spatio-temporal projections onto a constructed sign graph. The experimental results show that the proposed method is signer invariant, motion invariant and faster compared to state-of-the-art graph kernel methods.
8,491
Title: Curvelet transform based feature extraction and selection for multimedia event classification Abstract: Multimedia event classification has been one of the major endeavors in video event analysis. For event identification, feature extraction plays a critical role, yet identifying the right features is a challenging job. So, in this paper, feature extraction and selection methods for video event detection are proposed. For video event detection or classification, identification of object structure and its motion is a basic need. For object detection, curvelet features are considered, due to their high directional selectivity and strong anisotropic properties. Next, an algorithm for Shot Boundary Detection (SBD), called Motion-based SBD (MSBD), is proposed to identify shot boundaries. Also, to let users access a queried event in less time, an algorithm is proposed that selects a few representative frames covering the whole video content. To make event search efficient, object-based features are extracted from the dominant features, and feature selection using a ranking method is performed to make the features more discriminative. Lastly, an SVM classifier with an RBF kernel is used for event classification. The proposed work is evaluated on the Columbia Consumer Video (CCV) dataset using mean Average Precision (mAP) and is found to outperform various other existing methods.
8,511
Title: MRI Brain Tumor Segmentation and Analysis using Rough-Fuzzy C-Means and Shape Based Properties Abstract: Automated brain tumor segmentation of MR images is a very challenging task from a medical point of view. By its nature, a tumor can appear anywhere in the brain region with any size, shape, and contrast, which makes the segmentation process difficult. In order to handle such issues, the present work proposes an automated brain tumor segmentation method using rough-fuzzy C-means (RFCM) and shape-based topological properties. In rough-fuzzy C-means, overlapping partitions are efficiently handled by fuzzy membership, and uncertainty in the data is resolved by the lower and upper bounds of the rough set. The fuzzy boundary and crisp lower approximation in RFCM make an effective contribution to brain tumor segmentation on MR images. Initial centroid selection is a major issue in C-means algorithms. The present work introduces a method for initial centroid selection that reduces the execution time of RFCM compared to random initial centroids. A patch-based K-means method is also implemented for skull stripping as a preprocessing step. The proposed method was tested on standard MRI benchmark datasets. Experimental results show that the proposed method achieves better performance on statistical volume metrics than previous state-of-the-art algorithms with respect to the ground truth (manual segmentation). It is also experimentally observed that the RFCM method achieves the most promising results, with higher accuracy than HCM (hard C-means) and FCM (fuzzy C-means).
8,556
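The fuzzy half of RFCM in miniature: standard fuzzy C-means updates, on top of which RFCM adds rough lower/upper approximations per cluster (not shown here). This is a generic FCM sketch, not the paper's full RFCM.

```python
# Standard fuzzy C-means: alternate membership and centroid updates.
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))           # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 4])
centers, U = fcm(X)
print(centers.round(2))
```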
Title: Emotion recognition in speech signals using optimization based multi-SVNN classifier Abstract: Emotion recognition is an interdisciplinary area that has attracted significant attention from researchers in the past few years. Automatic recognition of an emotional state aims to provide an interface between machines and human beings. Accordingly, a speaker emotion recognition system named Fractional Deep Belief Network (FDBN) has been designed in the literature, combining fractional theory with a deep belief network. This work introduces a novel emotion recognition scheme, the Whale-Imperialist Optimization algorithm (Whale-IpCA) based Multiple Support Vector Neural Network (Multi-SVNN) classifier, for identifying emotions in speech signals. The newly proposed Whale-IpCA algorithm hybridizes the Whale Optimization Algorithm (WOA) and the Imperialist Competitive Algorithm (IpCA), and it trains the Multi-SVNN classifier to identify emotions. A spectral feature set is extracted from the input signal and provided to the proposed Whale-IpCA based Multi-SVNN for recognition. Simulation of the proposed Whale-IpCA based Multi-SVNN is carried out with the help of standard emotion databases, such as Berlin and Telugu. The results show that the proposed Whale-IpCA based Multi-SVNN classifier surpasses existing works, with values of 0.0025, 0, and 0.9987 for FNR, FPR, and accuracy, respectively.
8,591
Title: Triangles in C_5-free graphs and hypergraphs of girth six Abstract: We introduce a new approach and prove that the maximum number of triangles in a C_5-free graph on n vertices is at most (1 + o(1)) (1/(3 root 2)) n^(3/2). We show a connection to r-uniform hypergraphs without (Berge) cycles of length less than six, and estimate their maximum possible size. Using our approach, we also (slightly) improve the previous estimate on the maximum size of an induced-C_4-free and C_5-free graph.
8,604
Title: IMSS-P: An intelligent approach to design & development of personalized meta search & page ranking system Abstract: The proposed research work discusses and explores various constraints of traditional web page search and ranking systems, primarily in the present generation of big data. The primary objective is to present a web user with the most personalized web page ranking in response to a search query, by considering the user’s tastes and previous browsing history on the web. This research designs and develops a machine learning based next-generation web page ranking algorithm, the Advanced Cluster Vector Page Ranking algorithm (ACVPR). The ACVPR algorithm is implemented in the form of an Intelligent Meta Search System-Personalized tool to evaluate the algorithm’s performance. ACVPR arms the user with a powerful meta-search tool, providing a web page ranking order that quickly satisfies personalized needs, especially when the search query is erroneous or incomplete. An extensive mathematical and experimental evaluation of the developed logistic regression model, calculating and comparing various evaluation metrics such as specificity, sensitivity, precision and recall using the R statistical tool, shows improved efficiency compared to other popular search engines.
8,613
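The evaluation metrics named above have standard confusion-matrix definitions; the sketch below shows those definitions only (in Python rather than the paper's R pipeline), not the ACVPR ranking code itself.

```python
# Standard binary-classification metrics of the kind the abstract reports.
def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity/recall": tp / (tp + fn),   # true positive rate
        "specificity":        tn / (tn + fp),   # true negative rate
        "precision":          tp / (tp + fp),   # positive predictive value
    }

print(binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```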
Title: Towards Ubiquitous Intelligent Computing: Heterogeneous Distributed Deep Neural Networks Abstract: In pursuit of ubiquitous computing, distributed computing systems containing the cloud, edge devices, and Internet-of-Things devices are in high demand. However, existing distributed frameworks are not tailored to the fast development of the Deep Neural Network (DNN), which is the key technique behind many intelligent applications nowadays. Based on prior exploration of distributed deep neural net...
8,633
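The abstract is truncated, so the system's design cannot be restated here; prior distributed-DNN work it references typically splits a network between edge and cloud with a confidence-gated early exit. The sketch below illustrates that generic pattern under that assumption, with random placeholder weights and a local function standing in for the cloud.

```python
# Schematic early-exit split between an edge device and the cloud; weights
# are random placeholders, and the "cloud" branch is just a local function.
import numpy as np

rng = np.random.default_rng(0)
W_edge = rng.standard_normal((64, 32))
W_exit = rng.standard_normal((32, 10))
W_cloud = rng.standard_normal((32, 10))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x, confidence_threshold=0.6):
    feat = np.tanh(x @ W_edge)                 # shallow layers run on the edge
    p = softmax(feat @ W_exit)                 # local early-exit classifier
    if p.max() >= confidence_threshold:
        return int(p.argmax()), "edge"         # confident: answer locally
    # Otherwise ship the compact feature vector (not the raw input) upstream.
    return int(softmax(feat @ W_cloud).argmax()), "cloud"

print(classify(rng.standard_normal(64)))
```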
Title: Computability evaluation of RESTful API using Primitive Recursive Function Abstract: Web services are moving toward a new emerging technology, leading to the migration from SOAP to the RESTful API, an architectural style whose constraints include being lightweight and stateless and having a uniform interface. Various sources of clusters of resources, entities, and database relations are accessed throughout the distributed environment across the internet. The generative power of the RESTful API is witnessed by the emergence of many companies whose whole business process is based upon building applications. Since the syntactic essentials of RESTful web services are mainly concerned with the RESTful API, those essentials need to be evaluated for computability. The proposed work evaluates resources simply and effectively using Primitive Recursive Functions (PRF). The PRF approach uses a Turing machine for REST API capability evaluation, together with service invocation, application logic, and AppState logic, in order to handle manageability of RESTful resources via computability evaluation with or without security. To demonstrate the effectiveness of our evaluation process, we conduct a case study on available REST web services using Primitive Recursive Resources (PRR). The results of our case study show that our evaluation process achieves greater portability, reliability, scalability, etc., which in turn results in high performance.
8,637
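The primitive recursion scheme the abstract leans on is textbook material and can be shown directly: f(0, ys) = g(ys) and f(n+1, ys) = h(n, f(n, ys), ys). The sketch below builds addition and multiplication from the successor function this way; applying the formalism to REST resources is the paper's contribution and is not reproduced.

```python
# The primitive recursion scheme, shown on the classic examples.
def primitive_recursion(g, h):
    def f(n, *ys):
        acc = g(*ys)
        for k in range(n):             # unfold the recursion iteratively
            acc = h(k, acc, *ys)
        return acc
    return f

succ = lambda n: n + 1
add = primitive_recursion(lambda y: y, lambda k, acc, y: succ(acc))
mul = primitive_recursion(lambda y: 0, lambda k, acc, y: add(acc, y))

assert add(3, 4) == 7 and mul(3, 4) == 12
print(add(3, 4), mul(3, 4))
```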
Title: Optimal feature selection using binary teaching learning based optimization algorithm Abstract: Feature selection is a significant task in the workflow of predictive modeling for data analysis. Recent advanced feature selection methods use the power of optimization algorithms to choose a subset of relevant features for better classification results. Most optimization algorithms, such as the genetic algorithm, use many controlling parameters which need to be tuned for good performance, and tuning these parameter values is a challenging task for the feature selection process. In this paper, we develop a new wrapper-based feature selection method, the binary teaching learning based optimization (FS-BTLBO) algorithm, which needs only common controlling parameters such as population size and number of generations to obtain a subset of optimal features from the dataset. We use different classifiers as an objective function to compute the fitness of individuals for evaluating the efficiency of the proposed system. The results show that FS-BTLBO produces higher accuracy with a minimal number of features on the Wisconsin diagnosis breast cancer (WDBC) data set for classifying malignant and benign tumors.
8,762
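A condensed sketch of wrapper-style binary TLBO in the spirit of FS-BTLBO: a teacher phase pulls learners toward the best mask, a learner phase lets pairs learn from each other, and a sigmoid transfer binarizes the continuous step. The sigmoid transfer and the toy fitness (a stand-in for the classifier accuracy the paper uses) are illustrative assumptions.

```python
# Illustrative binary TLBO feature selection; fitness is a toy stand-in.
import numpy as np

def fs_btlbo(fitness, n_feats, pop=20, gens=50, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, (pop, n_feats))
    binarize = lambda v: (1 / (1 + np.exp(-v)) > rng.random(n_feats)).astype(int)
    for _ in range(gens):
        scores = np.array([fitness(x) for x in X])
        teacher, mean = X[scores.argmax()], X.mean(axis=0)
        for i in range(pop):
            tf = rng.integers(1, 3)                        # teaching factor in {1, 2}
            cand = binarize(X[i] + rng.random(n_feats) * (teacher - tf * mean))
            if fitness(cand) > fitness(X[i]):              # teacher phase (greedy accept)
                X[i] = cand
            j = rng.integers(pop)                          # learner phase: learn from a peer
            step = (X[j] - X[i]) if fitness(X[j]) > fitness(X[i]) else (X[i] - X[j])
            cand = binarize(X[i] + rng.random(n_feats) * step)
            if fitness(cand) > fitness(X[i]):
                X[i] = cand
    return max(X, key=fitness)

# Toy fitness: reward masks close to a hidden "relevant" subset, penalize size.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(fs_btlbo(lambda m: -np.abs(m - target).sum() - 0.1 * m.sum(), 8))
```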
Title: Data encoding techniques to improve the performance of System on Chip Abstract: The concept of System on Chip (SoC) has introduced many opportunities, but also many challenges and hurdles, in Ultra Large Scale Integration (ULSI). In fact, global interconnects in SoCs are undergoing a reverse scaling process, which has resulted in wider, thicker top metal layers and an increase in the wire aspect ratio. These effects have increased the self and coupling capacitance of interconnects, causing on-chip data communication to become more power-consuming and less reliable. In this paper, two data encoding techniques are proposed, namely (1) Odd-Even-Full-Normal inversion considering the total self and coupling switching activity (OEFNSC) and (2) OEFNSC with segmentation (OEFNSC-SEG), to reduce the switching activity across self and coupling capacitances. These data encoding techniques reduce dynamic power and improve reliability. The results prove the effectiveness of the proposed data-encoding schemes in terms of energy, delay, and energy-delay-product efficiency for 8-bit, 16-bit, 32-bit, and 64-bit buses in 45 nm technology, and these efficiencies are further improved by adopting the segmented version of the proposed data encoding scheme.
8,790
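A simplified sketch of the inversion-selection idea behind such schemes: for each new bus word, four candidate encodings (normal, full, odd, and even inversion) are scored by a crude self-plus-coupling switching count and the cheapest one is transmitted. The cost weighting, the 8-bit masks, and the scoring are illustrative; the paper's exact OEFNSC cost model, control-line signalling, and segmentation are not modelled.

```python
# Pick among normal/full/odd/even inversion per word to cut bus switching.
def switching_cost(prev, cur, width):
    self_sw = bin(prev ^ cur).count("1")                # bits that toggle
    coupling = sum(                                     # adjacent bit pairs that both
        1 for i in range(width - 1)                     # toggle (crude coupling proxy)
        if ((prev >> i) ^ (cur >> i)) & 1 and ((prev >> (i + 1)) ^ (cur >> (i + 1))) & 1
    )
    return self_sw + 2 * coupling                       # coupling weighted higher

def encode_word(prev_encoded, word, width=8):
    mask = (1 << width) - 1
    odd, even = 0xAA & mask, 0x55 & mask               # alternate-bit masks (8-bit demo)
    candidates = {"normal": word, "full": word ^ mask,
                  "odd": word ^ odd, "even": word ^ even}
    kind, enc = min(candidates.items(),
                    key=lambda kv: switching_cost(prev_encoded, kv[1], width))
    return kind, enc                                    # 'kind' rides on control lines

prev = 0b10101010
for w in (0b01010101, 0b01010100, 0b11111111):
    kind, enc = encode_word(prev, w)
    print(f"{w:08b} -> {enc:08b} ({kind})")
    prev = enc
```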
Title: Dependency-based fault diagnosis approach for SOA-based systems using Colored Petri Nets Abstract: A faulty situation occurring in Service Oriented Architecture (SOA) based systems can affect their functionality. It is challenging to guarantee the reliability of service utilization in a distributed, concurrent, and dynamic composition of web services. This paper presents a fault detection approach for SOA using dependency analysis. Data and control dependencies are identified, and a fault diagnosis approach based on these dependencies is proposed. The presented approach extends discrete state space techniques to simulate the concurrent and distributed behavior of SOA-based systems in faulty situations through Colored Petri Nets (CPNs). A concrete translation from an SOA-based system to a CPN model is presented. A model-based technique is used, and some heuristics are presented for dependency analysis and fault detection. The experimental performance analysis shows the feasibility of the proposed approach.
8,801
Title: A New Homotopy Proximal Variable-Metric Framework for Composite Convex Minimization Abstract: This paper suggests two novel ideas to develop new proximal variable-metric methods for solving a class of composite convex optimization problems. The first idea is to utilize a new parameterization strategy of the optimality condition to design a class of homotopy proximal variable-metric algorithms that can achieve linear convergence and finite global iteration-complexity bounds. We identify at least three subclasses of convex problems in which our approach can apply to achieve linear convergence rates. The second idea is a new primal-dual-primal framework for implementing proximal Newton methods that has attractive computational features for a subclass of nonsmooth composite convex minimization problems. We specialize the proposed algorithm to solve a covariance estimation problem in order to demonstrate its computational advantages. Numerical experiments on four concrete applications are given to illustrate the theoretical and computational advances of the new methods compared with other state-of-the-art algorithms.
8,814
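The basic building block beneath such methods is the proximal step for a composite objective F(x) = f(x) + g(x). The sketch below shows plain proximal gradient with f a least-squares term and g = lam * ||x||_1, whose proximal operator is soft-thresholding; the paper's homotopy parameterization and variable-metric (proximal Newton) machinery sit on top of steps like this and are not reproduced.

```python
# Proximal gradient for F(x) = 0.5*||Ax - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam=0.1, iters=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1/L, L = Lipschitz const of grad f
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox of the nonsmooth part
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = (2.0, -3.0)                             # sparse ground truth
print(np.round(proximal_gradient(A, A @ x_true), 2))
```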
Title: Cellular automata-based approach for salt-and-pepper noise filtration Abstract: The application of cellular automata (CA) to digital image processing has attracted considerable attention in the last several years. CA are now employed in noise filtration of digital images; in particular, many CA-based impulse noise filters have been proposed. Salt-and-pepper noise is a special kind of impulse noise introduced into an image during transmission by external noise sources such as atmospheric disturbances, corrupted hardware memory locations, or faults in camera sensors. In this paper, we present five salt-and-pepper noise filters based on modifications of outer totalistic cellular automata (OTCA) with an adaptive neighborhood. The OTCA model makes the proposed filters computationally simple on one hand, and the adaptive neighborhood helps the filters provide efficient noise filtration at varying noise densities on the other. A comparative analysis of these filters, followed by a comparison with several standard and CA-based filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index, is presented.
8,852
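A minimal CA-flavoured filter in the spirit described above: each corrupted cell (intensity 0 or 255) is updated from the uncorrupted part of its neighbourhood, and the neighbourhood grows adaptively when every neighbour is corrupted. The median update and window sizes are illustrative; the paper's five OTCA rule variants are not reproduced.

```python
# Adaptive-neighbourhood salt-and-pepper filtering (illustrative sketch).
import numpy as np

def filter_salt_pepper(img, max_radius=3):
    out = img.copy()
    noisy = (img == 0) | (img == 255)
    for y, x in zip(*np.where(noisy)):
        for r in range(1, max_radius + 1):            # grow the window if needed
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            clean = win[(win != 0) & (win != 255)]
            if clean.size:                             # enough reliable neighbours
                out[y, x] = np.median(clean)
                break
    return out

rng = np.random.default_rng(0)
img = np.full((32, 32), 128, dtype=np.uint8)          # flat test image
mask = rng.random(img.shape) < 0.2                    # 20% impulse noise
img[mask] = rng.choice([0, 255], mask.sum())
print("pixels at true value after filtering:",
      int((filter_salt_pepper(img) == 128).mean() * 100), "%")
```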
Title: Quantum-Inspired Secure Wireless Communication Protocol Under Spatial and Local Gaussian Noise Assumptions Abstract: Inspired by quantum key distribution, we consider wireless communication between Alice and Bob when the noise generated in the intermediate space between them is controlled by Eve. Our model divides the channel noise into two parts: the noise generated during transmission and the noise generated in the detector. Eve is allowed to control the former but not the latter. While the latter is assumed to be a Gaussian random variable, the former is not. In this situation, using backward reconciliation and random sampling, we propose a protocol that generates secure keys between Alice and Bob under the assumptions that Eve's detector has Gaussian noise and that Eve is not too close to Alice's transmitting device. In our protocol, the security criteria are quantitatively guaranteed even with a finite block-length code, based on evaluating the error of the channel estimation.
8,899
Title: Interdistrict school choice: A theory of student assignment Abstract: Interdistrict school choice programs—where a student can be assigned to a school outside of her district—are widespread in the US. We introduce a model of interdistrict school choice and present mechanisms that produce stable assignments. We consider four categories of policy goals on assignments and identify when the mechanisms can achieve them. By introducing a novel framework of interdistrict school choice, we provide a new avenue of research in market design.
8,905
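The abstract does not spell out its mechanisms, so none can be restated here; the canonical procedure that stable-assignment models build on is student-proposing deferred acceptance, sketched below on toy data. The district structure and policy goals the paper studies are not modelled.

```python
# Student-proposing deferred acceptance (Gale-Shapley) on toy preferences.
def deferred_acceptance(student_prefs, school_prefs, capacity):
    rank = {s: {stu: i for i, stu in enumerate(order)} for s, order in school_prefs.items()}
    held = {s: [] for s in school_prefs}               # tentatively admitted students
    next_choice = {stu: 0 for stu in student_prefs}
    free = list(student_prefs)
    while free:
        stu = free.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                    # exhausted list: stays unassigned
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        held[school].sort(key=lambda t: rank[school][t])
        if len(held[school]) > capacity[school]:        # reject the worst-held student
            free.append(held[school].pop())
    return held

students = {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s2", "s1"]}
schools = {"s1": ["b", "a", "c"], "s2": ["a", "c", "b"]}
print(deferred_acceptance(students, schools, {"s1": 1, "s2": 2}))
```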
Title: On Vertex-Induced Weighted Turán Problems Abstract: Recently, Bennett, English and Talanda-Fisher introduced the vertex-induced weighted Turán problem. In this paper, we consider their open Turán problem under the sum-edge weight function and characterize the extremal structure of $$K_\ell$$-free graphs. Based on these results, we propose a generalized version of the Erdős–Stone theorem for weighted graphs under two types of vertex-induced weight functions.
8,935
Title: Cover Time for Branching Random Walks on Regular Trees Abstract: Let $$T$$ be the regular tree in which every vertex has exactly $$d\ge 3$$ neighbours. Run a branching random walk on $$T$$, in which at each time step every particle gives birth to a random number of children with mean $$d$$ and finite variance, and each of these children moves independently to a uniformly chosen neighbour of its parent. We show that, starting with one particle at some vertex 0 and conditionally on survival of the process, the time it takes for every vertex within distance $$r$$ of 0 to be hit by a particle of the branching random walk is $$r+\frac{2}{\log (3/2)}\log \log r+o(\log \log r)$$.
9,079
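The process above is defined concretely enough to simulate; the Monte Carlo sketch below runs a branching random walk on the d-regular tree with Poisson(d) offspring and records first hitting times of all vertices within distance r of the root. The Poisson offspring law, population cap, and step guard are assumptions made purely to keep a toy run finite.

```python
# Toy branching random walk on the d-regular tree; vertices are tuples
# encoding the path from the root ().
import math
import random

def poisson(lam, rng):
    # Knuth's simple Poisson sampler.
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def neighbours(v, d):
    if v == ():                                           # the root has d children
        return [(i,) for i in range(d)]
    return [v[:-1]] + [v + (i,) for i in range(d - 1)]    # parent + (d-1) children

def brw_cover_time(d=3, r=3, cap=20000, max_steps=10000, seed=0):
    rng = random.Random(seed)
    targets, frontier = {()}, [()]                        # ball of radius r around root
    for _ in range(r):
        frontier = [u for v in frontier for u in neighbours(v, d) if u not in targets]
        targets.update(frontier)
    hit, particles, t = {(): 0}, [()], 0
    while not targets <= hit.keys():
        t += 1
        if t > max_steps or not particles:
            return None                                   # died out / guard tripped
        offspring = []
        for v in particles:
            for _ in range(poisson(d, rng)):              # mean-d offspring
                u = rng.choice(neighbours(v, d))
                offspring.append(u)
                if u in targets and u not in hit:
                    hit[u] = t
        particles = offspring[:cap]                       # cap the exploding population
    return t                                              # time the last target was hit

print("cover time of the radius-3 ball:", brw_cover_time())
```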
Title: Intersection Disjunctions for Reverse Convex Sets Abstract: We present a framework to obtain valid inequalities for a reverse convex set: the set of points in a polyhedron that lie outside a given open convex set. Reverse convex sets arise in many models, including bilevel optimization and polynomial optimization. An intersection cut is a well-known valid inequality for a reverse convex set that is generated from a basic solution that lies within the convex set. We introduce a framework for deriving valid inequalities for the reverse convex set from basic solutions that lie outside the convex set. We first propose an extension to intersection cuts that defines a two-term disjunction for a reverse convex set, which we refer to as an intersection disjunction. Next, we generalize this analysis to a multiterm disjunction by considering the convex set's recession directions. These disjunctions can be used in a cut-generating linear program to obtain valid inequalities for the reverse convex set.
9,135
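The classic intersection cut that the abstract extends can be sketched directly: from a basic point inside the open convex set, step along each extreme ray until it leaves the set, and the cut sum_j x_j / lam_j >= 1 is valid in the nonbasic-ray coordinates. The sketch below uses a Euclidean ball as the convex set for concreteness; the paper's disjunctions generated from points *outside* the set are not reproduced.

```python
# Intersection-cut coefficients for a Euclidean ball (illustrative choice).
import numpy as np

def ray_exit_step(x0, ray, center, radius):
    # Solve ||x0 + lam*ray - center||^2 = radius^2 for the positive root.
    d = x0 - center
    a, b, c = ray @ ray, 2 * ray @ d, d @ d - radius ** 2
    disc = b * b - 4 * a * c                 # > 0 whenever x0 is inside the ball
    if disc < 0:
        return np.inf                        # ray never leaves the set
    return (-b + np.sqrt(disc)) / (2 * a)

def intersection_cut(x0, rays, center, radius):
    lam = np.array([ray_exit_step(x0, r, center, radius) for r in rays])
    return 1.0 / lam                         # cut: sum_j x_j / lam_j >= 1

# Basic point at the origin inside the unit ball centred there; rays = axes.
rays = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(intersection_cut(np.zeros(2), rays, np.zeros(2), 1.0))  # -> [1. 1.]
```

For this configuration the cut is x1 + x2 >= 1, i.e. the hyperplane through the two points where the rays exit the unit ball, which is exactly the textbook intersection cut.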