In the motion conditions tested here, both proposed approaches (ratio with Gaussian smoothing, or generative modelling) produced comparable improvements in \(R_1\) reproducibility in the presence of inter-scan motion. Equally importantly, when there was no overt motion, neither method decreased reproducibility, which remained at a level in keeping with previous reports for similar-resolution MPM data [1], [2].
The simpler ratio method defines one calibration image as the reference (the denominator in equation (REF)), and may therefore be vulnerable to low SNR. The alternative generative modelling approach inherently adapts to variable SNR by estimating the position-specific net sensitivity modulation relative to a common image, namely their barycentre mean. This common image dictates the final modulation of all the corrected volumes. The generative model can also easily incorporate additional data, e.g. body coil images as done at 3T, which flattens the final modulation. Furthermore, rigid registration could be interleaved with model fitting [1] to reach a better global optimum. Finally, the generative model could naturally be integrated with any fitting approach that defines a joint probability over all acquired data, such as [2] in the context of MPM.
The impact of movement on the effective transmit field has previously been investigated in the context of specific absorption rate management [1], [2], [3], [4]. An important additional finding of the present work is the impact this can have on \(R_1\) estimates at 7T, which was negligible at 3T as demonstrated previously [5]. Although \(R_1\) errors were more substantially reduced by correcting for receive field effects, they could only be reduced to the approximate level of no overt motion by additionally accounting for the positional dependence of the transmit field. However, acquiring a \(B_1^+\) map at multiple positions comes at the cost of increased scan time.
Inter-scan motion causes serially acquired weighted volumes to be differentially modulated by position-specific coil sensitivities, leading to substantial errors when they are combined to compute quantitative metrics. We have demonstrated the efficacy of two methods at reducing these artefacts in the context of \(R_1\) mapping. The proposed methods do not require a body coil, making them ideally suited for use at 7T, and can be extended to the computation of other quantitative metrics, such as magnetisation transfer saturation [1], that similarly assume constant modulation across multiple weighted acquisitions.
Approximation algorithms were first proposed in the 1960s to solve challenging optimization problems [1]. They are called approximation algorithms because they generate near-optimal solutions. They were mainly used to solve optimization problems that could not be solved efficiently using the computational techniques available at that time [2]. The theory of \(NP\)-completeness also contributed considerably to the maturing of approximation algorithms, since solving \(NP\)-hard optimization problems became the top priority in dealing with computational intractability [3]. Some optimization problems are easy to approximate (i.e., generating near-optimal solutions is quick), while for others that task is as hard as finding optimal solutions [4].
Approximation algorithms using probabilistic and randomized techniques saw tremendous advances between the 1980s and 1990s. They were named metaheuristic algorithms [1]. Famous metaheuristic algorithms include, for example, Simulated Annealing [2], Ant Colony Optimization [3], Evolutionary Computation [4], Tabu Search [5], Memetic Algorithms [6], and Particle Swarm Optimization [7], to name but a few. During the past three decades, many metaheuristic algorithms have been proposed in the literature. Most of them have been assessed experimentally and have shown good performance for solving real-world optimization problems [8]. Metaheuristic algorithms aim to find the best possible solutions and guarantee that such solutions satisfy some criteria [9]. The No-Free-Lunch theorem [10] proves that a universal metaheuristic algorithm for solving all optimization problems is non-existent, which justifies the growing number of proposed algorithms. In other words, if a certain metaheuristic algorithm efficiently solves some optimization problems, it will systematically show mediocre performance on others. The theorem also states that the performance of all metaheuristic algorithms, averaged over all optimization problems, is the same. Tables REF and REF list some well-known metaheuristic algorithms, split into four families: evolutionary-based, swarm-based, physics-based, and human-based algorithms [11]. <TABLE><TABLE>
There are two main groups of metaheuristic algorithms: population-based and individual-based algorithms [1]. Population-based algorithms use several agents, whereas individual-based algorithms use a single agent. In the first group, several individuals swarm in the search space and cooperatively evolve towards the global optimum [2]. In the second group, one individual moves in the search space and evolves towards the global optimum [3]. In both groups, individuals' positions are randomly initialized in the search space and are modified over generations until specific criteria are satisfied [1].
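To make the population-based template concrete, the sketch below shows a generic search loop of the kind just described. It is a minimal illustration, not any specific algorithm: the drift-toward-the-best update, the step sizes, and the bound handling are all placeholder choices.

```python
import numpy as np

def population_search(objective, low, high, pop_size=30, generations=200, seed=0):
    """Skeleton of a population-based metaheuristic: random initialization in
    the search space, then iterative modification of agent positions until a
    stopping criterion (here, a fixed generation budget) is met."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    pop = rng.uniform(low, high, size=(pop_size, low.size))
    for _ in range(generations):
        fitness = np.apply_along_axis(objective, 1, pop)
        best = pop[np.argmin(fitness)]
        # Placeholder update rule: drift toward the current best
        # (exploitation) plus Gaussian noise (exploration).
        pop += 0.5 * (best - pop) + rng.normal(scale=0.1, size=pop.shape)
        pop = np.clip(pop, low, high)
    return best

# Example: minimize the sphere function in 5 dimensions.
# best = population_search(lambda x: np.sum(x ** 2), low=[-5]*5, high=[5]*5)
```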
Metaheuristic algorithms have two fundamental components: exploration and exploitation [1]. Exploration is also called global optimization or diversification; exploitation is also called local optimization or intensification. Exploration allows metaheuristic algorithms to discover new regions of the search space and avoid being trapped in local optima [2]. Exploitation permits metaheuristic algorithms to zoom in on a particular area to find the best solution [2]. Any metaheuristic algorithm should find the best balance between diversification and intensification; otherwise, the quality of the solutions found is compromised [4]. Too many exploration operations may result in a considerable waste of effort: the algorithm jumps from one location to another without concentrating on enhancing the quality of the current solution [5]. Excessive exploitation operations may lead the algorithm to be trapped in local optima and to converge prematurely [6]. The main weakness of metaheuristic algorithms is their sensitivity to the tuning of control parameters. Also, convergence to the global optimum is not always guaranteed [7].
We propose a novel swarm-based metaheuristic algorithm for global optimization, named the Archerfish Hunting Optimizer (AHO). The algorithm is inspired by the shooting and jumping behaviors of archerfish when catching prey. The prominent features of AHO are:
AHO has three control parameters to set: the population size, the swapping angle between the exploration and exploitation phases, and the attractiveness rate between the archerfish and the prey. AHO uses elementary laws of physics (i.e., the equations of the general ballistic trajectory) to determine the positions of new solutions; a sketch of this update is given below. The swapping angle controls the balance between exploration and exploitation of the search space.
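The following is a minimal, hypothetical sketch of a ballistic position update of the kind AHO describes; the velocity term, the noise scale, and the exact way the swapping angle \(\theta\) enters are placeholder assumptions, not the paper's actual equations.

```python
import numpy as np

def ballistic_update(x_fish, x_prey, theta, g=9.81, rng=None):
    """Move a candidate solution along a ballistic arc toward a prey position.

    Uses the projectile range formula R = v^2 * sin(2*theta) / g to scale the
    step; theta is the swapping angle that balances exploration (large,
    erratic shots) against exploitation (short, targeted shots).
    """
    rng = rng or np.random.default_rng()
    v = rng.random()                           # random initial "velocity"
    step = (v ** 2) * np.sin(2 * theta) / g    # range of the ballistic shot
    direction = x_prey - x_fish
    return x_fish + step * direction + rng.normal(scale=0.01, size=x_fish.shape)
```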
The performance of AHO is assessed using the CEC 2020 benchmark for unconstrained optimization. This benchmark contains ten challenging single-objective test functions; for further information, the reader is referred to [1]. The obtained results are compared to the 12 most recent state-of-the-art metaheuristic algorithms (the algorithms accepted for the 2020 competition on single-objective bound-constrained numerical optimization). In addition, the performance of AHO is evaluated on five engineering design problems selected from the CEC 2020 benchmark for non-convex constrained optimization. More details on this benchmark are available in [2]. The collected results are compared against the 3 most recent state-of-the-art metaheuristic algorithms. The experimental outcomes are judged using the Wilcoxon signed-rank and Friedman tests. The statistical results show that AHO produces very encouraging and often competitive results compared to well-established metaheuristic methods.
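For readers unfamiliar with these tests, the snippet below shows how such a comparison is typically run with SciPy. The error values are fabricated placeholders purely for illustration; they are not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical mean errors per CEC 2020 test function (10 functions) for
# AHO and two competitors; replace with real per-function results.
aho   = np.array([1.2e-3, 4.5e-1, 2.1, 3.3e-2, 9.8e-1, 1.1, 5.6e-2, 7.7e-1, 2.2, 4.1e-1])
algo1 = np.array([2.3e-3, 5.1e-1, 2.5, 4.0e-2, 1.2,    1.3, 6.1e-2, 8.0e-1, 2.6, 5.0e-1])
algo2 = np.array([1.9e-3, 4.9e-1, 2.0, 3.9e-2, 1.1,    1.2, 6.0e-2, 7.9e-1, 2.4, 4.8e-1])

# Pairwise Wilcoxon signed-rank test: does AHO differ from one competitor?
stat, p = wilcoxon(aho, algo1)
print(f"Wilcoxon: W={stat:.1f}, p={p:.4f}")

# Friedman test: do the algorithms rank differently across all functions?
stat, p = friedmanchisquare(aho, algo1, algo2)
print(f"Friedman: chi2={stat:.2f}, p={p:.4f}")
```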
The rest of the paper is organized as follows. Section illustrates the hunting behavior of archerfish and provides the source of inspiration for AHO. Section describes the proposed metaheuristic algorithm and its mathematical model. Sections and present and discuss the statistical results and a comparative study on some unconstrained and constrained optimization problems. Section sums up the paper and concludes with some future directions.
In this paper, a new population-based optimization algorithm termed the Archerfish Hunting Optimizer is introduced to handle constrained and unconstrained optimization problems. AHO is founded on the shooting and jumping behaviors of archerfish when hunting aerial insects in nature. Equations are derived to model this hunting behavior for solving optimization problems. Ten unconstrained optimization problems are used to assess the performance of AHO. The capacities for exploration, exploitation, and local-optima avoidance are examined using unimodal, basic, hybrid, and composition functions. The statistical results of the Wilcoxon signed-rank and Friedman tests confirm that AHO can find superior solutions in comparison with 12 well-acknowledged optimizers. AHO's numerical outcomes on three constrained engineering design problems also show outstanding results compared to 3 well-regarded optimizers.
AHO is straightforward, with uncomplicated exploration and exploitation techniques. It would be desirable to introduce other evolutionary schemes such as mutation, crossover, or multi-swarm strategies, which we plan to do in the future. In addition, we plan to develop binary and multi-objective versions of AHO.
Recent advances in deep learning [1], [2] have led to state-of-the-art performance on varied classification tasks in natural language processing, computer vision and speech recognition. Traditional Artificial Neural Networks (ANNs) use idealized computing units which have a differentiable, non-linear activation function, allowing such neurons to be stacked in multiple trainable layers. The existence of derivatives makes it possible to carry out large-scale training of these architectures with gradient-based optimization methods [3] using high-compute resources like Graphics Processing Units (GPUs). However, this reliance on compute prevents the use of such deep learning models in essential real-life applications like mobile devices and autonomous systems that have limited compute power.
Spiking Neural Networks (SNNs) have been proposed as an energy-efficient alternative to ANNs as they simulate the event-based information processing of the brain [1]. These bio-inspired SNNs follow an asynchronous method of event processing using spiking neurons. The internal state of a spiking neuron is updated when it receives an action potential, and an output spike is fired when the membrane voltage crosses a pre-defined threshold; a minimal sketch of this dynamic follows below. Further, improvements in neuromorphic engineering allow the implementation of SNNs on neuromorphic hardware platforms [2] that lead to much higher efficiency in terms of power and speed compared to conventional GPU-based computing systems.
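As a concrete illustration of the membrane dynamics just described, here is a minimal leaky integrate-and-fire step. The time constant, threshold, and reset value are illustrative defaults, not parameters from any specific model in this paper.

```python
import numpy as np

def lif_step(v, input_current, v_thresh=1.0, v_reset=0.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron: integrate the
    incoming current, leak toward rest, fire a spike when the membrane
    voltage crosses the threshold, then reset."""
    v = v + (dt / tau) * (-(v - v_reset) + input_current)
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)   # reset membrane after a spike
    return v, spiked

# Example: drive 4 neurons with constant currents for 100 timesteps.
# v = np.zeros(4)
# for _ in range(100):
#     v, spikes = lif_step(v, input_current=np.array([0.8, 1.0, 1.2, 1.5]))
```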
Although SNNs are considered the third generation of neural networks, holding the potential for sparse and low-power computation, their classification performance is considerably lower than that of ANNs. This can be attributed to the fact that gradient-based optimization techniques like the backpropagation algorithm cannot be implemented directly in SNNs due to the discrete nature of spiking neurons. A common technique for training SNN models is the Hebbian-learning-inspired Spike Timing Dependent Plasticity (STDP) used in several state-of-the-art approaches [1], [2]. Other works like [3], [4] have adapted the gradient descent algorithm for SNNs using a differentiable approximation of spiking neurons. Our approach also employs a similar modified backpropagation algorithm proposed by Hunsberger et al. [5] and implemented in the Nengo-DL library [6].
As shown by Camunas-Mesa et al. [1], the efficiency gain of SNNs from event-based processing can be further improved through the use of inputs from event-based sensors like a neuromorphic Dynamic Vision Sensor (DVS) [2]. Event-driven sensors represent information dynamically by asynchronously transmitting the address event of each pixel, and hence avoid processing redundant data. However, classification accuracy drops drastically when using real sensory data from a physical spiking silicon retina, since the spike events are no longer Poissonian [3]. <FIGURE>
In our previous work [1], we demonstrated the effect of foveal-pit inspired filtering on synthetically generated datasets like MNIST [2] and Caltech [3]. In this work, we present the results of applying similar neural filtering to data generated by the DVS. In our proposed model, we process DVS outputs using bio-inspired filters that simulate the receptive fields of the midget and parasol ganglion cells of the primate retina. The DVS is stimulated with vertical black and white bars having a constant displacement of 2 pixels from frame to frame. The foveal-pit informed Difference of Gaussians (DoG) filters are applied to the DVS recordings in order to capture the most perceptually important information in the input data. The use of DoG functions to model retinal filters was originally proposed by Rullen et al. [4], and the receptive fields of the foveal-pit are implemented as in Bhattacharya et al. [5].
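A minimal sketch of such a centre-surround DoG filter is shown below. The sigma values and the bar stimulus are placeholder assumptions; the paper's actual filters follow the foveal-pit parameterisation of Bhattacharya et al.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(frame, sigma_center, sigma_surround, off_center=False):
    """Apply a Difference-of-Gaussians filter to one frame: a narrow centre
    Gaussian minus a wider surround Gaussian models a ganglion-cell
    receptive field; negating the result gives the off-centre variant."""
    center = gaussian_filter(frame.astype(float), sigma_center)
    surround = gaussian_filter(frame.astype(float), sigma_surround)
    response = center - surround
    return -response if off_center else response

# Example: off-centre, parasol-like filter on a synthetic vertical-bar frame.
# frame = np.zeros((128, 128)); frame[:, 60:68] = 1.0
# filtered = dog_response(frame, sigma_center=3, sigma_surround=6, off_center=True)
```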
The processed features are then used to perform classification with a Spiking Convolutional Neural Network (SCNN). The SCNN architecture is inspired by two previous works, viz. Diehl et al. [1] and Kheradpisheh et al. [2], while the model is implemented as in Gupta et al. [3]. Each input is presented to the network for a total duration of 60 timesteps, and predictions are assigned based on the voltages measured from the output neurons. The empirical results demonstrate that applying neural filtering to DVS recordings leads to an improvement of 35% in classification accuracy compared to the unfiltered DVS spike responses. Among the filtered scenarios, the highest performance of 100% is achieved using the off-center parasol ganglion cells.
The rest of the paper is organized as follows: Section II describes the architecture of the proposed model including the response generation and filtering, Section III provides the results of the experiments and Section IV contains the conclusion and future directions.
The overall architecture of our model consists of two main stages: the first stage is made up of the DVS response generation and neural filtering of output spikes; the second stage consists of performing classification using the SCNN. The proposed model is shown in Fig. REF and each of the individual stages is covered in detail in the following sub-sections.
In this paper, we have presented a novel method for processing the DVS spike responses of a visual pattern with foveal-pit inspired DoG filters that simulate the primate retinal system. The pattern was composed of a varying number of vertical white and black bars of different spatial frequencies moving at a fixed velocity. The outputs from the sensor are applied as input to the bio-inspired neural filters that model the receptive field structure of the midget and parasol ganglion cells of the foveal-pit. These processed features are passed as input to our spiking convolutional neural network architecture, which classifies the frame-based version of the filtered responses into seven corresponding categories. The SCNN is composed of convolutional and pooling layers and is trained with a modified backpropagation algorithm using a differentiable approximation of spiking neurons [1].
The proposed model demonstrates the effect of applying neural filtering to real DVS data generated from a neuromorphic vision sensor. This builds upon our previous work [1], which presented the results of foveal-pit inspired filtering for synthetically generated datasets like MNIST [2] and Caltech [3]. Our model achieves a promising performance of 92.5% using the unshifted off-center parasol ganglion cell and an accuracy of 100% in the circular-shifted scenario, which is an improvement of 35% over classification using unfiltered DVS responses. The empirical results indicate the importance of foveal-pit inspired neural filtering in reducing the redundancy of the DVS inputs and in discarding irrelevant background information.
For our proposed network, the asynchronous DVS recordings generated from the first stage of the model were converted to an analog vector representation for training the frame-based classifier composed of convolution layers. As future work, we plan to adapt our spiking convolutional network architecture to directly process event-based data and evaluate the effects of the bio-inspired neural filtering on continuous outputs of a neuromorphic DVS. Also, the dataset used in this work is limited in terms of variation in the inputs as well as the size of the training and testing corpora. Hence, we would like to further verify the effect of the DoG filters on DVS spike responses of larger and more complex datasets.
Recent advances in pre-trained language models (PLMs) have created new state-of-the-art results on many natural language processing (NLP) tasks. While scaling up PLMs with billions or trillions of parameters [1], [2], [3], [4], [5] is a well-proven way to improve their capacity, it is more important to explore energy-efficient approaches that build PLMs with fewer parameters and less computation cost while retaining high model capacity.
Towards this direction, a few works have significantly improved the efficiency of PLMs. The first is RoBERTa [1], which improves model capacity with a larger batch size and more training data. Based on RoBERTa, DeBERTa [2] further improves pre-training efficiency by incorporating disentangled attention, an improved relative-position encoding mechanism. By scaling up to 1.5B parameters, about an eighth of the parameters of xxlarge T5 [3], DeBERTa surpassed human performance on the SuperGLUE [4] leaderboard for the first time. The second new pre-training approach to improve efficiency is replaced token detection (RTD), proposed by ELECTRA [5]. Unlike BERT [6], which uses a transformer encoder to predict corrupted tokens with masked language modeling (MLM), RTD uses a generator to generate ambiguous corruptions and a discriminator to distinguish the ambiguous tokens from the original inputs, similar to Generative Adversarial Networks (GANs). The effectiveness of RTD is also verified by follow-up works, including CoCo-LM [7], XLM-E [8], and SmallBenchNLP [9].
In this paper, we explore two methods for improving the efficiency of pre-training DeBERTa. Following ELECTRA-style training, we replace MLM in DeBERTa with RTD, where the model is trained as a discriminator to predict whether a token in the corrupted input is original or replaced by a generator. We show that DeBERTa trained with RTD significantly outperforms the model trained using MLM.
The second is a new embedding-sharing method. In ELECTRA, the discriminator and the generator share the same token embeddings. However, our analysis shows that such sharing hurts training efficiency and model performance, since the training losses of the discriminator and the generator pull the token embeddings in different directions. This is because the two training objectives are very different: the MLM used for training the generator tries to pull semantically similar tokens close to each other, while the RTD of the discriminator tries to discriminate semantically similar tokens, pulling their embeddings as far apart as possible to optimize binary classification accuracy. This creates the "tug-of-war" dynamics illustrated in [1]. On the other hand, we show that using separate embeddings for the generator and the discriminator results in significant performance degradation when we fine-tune the discriminator on downstream tasks, indicating the merit of embedding sharing; e.g., the embeddings of the generator are beneficial for producing a better discriminator, as argued in [2]. As a tradeoff, we propose a new gradient-disentangled embedding sharing (GDES) method in which the generator shares its embeddings with the discriminator but the gradients in the discriminator are stopped from back-propagating to the generator embeddings, avoiding the tug-of-war dynamics. We empirically demonstrate that GDES improves both pre-training efficiency and the quality of the pre-trained models.
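The core of GDES can be sketched with a stop-gradient re-parameterisation, as below. This is a minimal PyTorch illustration of the idea as described above; the class and variable names are ours, not DeBERTaV3's actual implementation.

```python
import torch.nn as nn

class GDESEmbedding(nn.Module):
    """Gradient-disentangled embedding sharing: the discriminator sees the
    generator's embeddings (so sharing is preserved), but its RTD loss only
    updates a residual `delta`, never the generator's table."""
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.gen_emb = nn.Embedding(vocab_size, hidden_size)  # trained by MLM
        self.delta = nn.Embedding(vocab_size, hidden_size)    # trained by RTD
        nn.init.zeros_(self.delta.weight)

    def forward(self, input_ids, for_discriminator=False):
        if for_discriminator:
            # detach() stops discriminator gradients from reaching gen_emb,
            # which removes the tug-of-war between the MLM and RTD losses.
            return self.gen_emb(input_ids).detach() + self.delta(input_ids)
        return self.gen_emb(input_ids)  # generator path: plain shared table
```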
We pre-train three variants of DeBERTaV3 models, i.e., DeBERTaV3large, DeBERTaV3base and DeBERTaV3small. We evaluate them on various representative natural language understanding (NLU) benchmarks and set new state-of-the-art numbers among models with a similar model structure. For example, DeBERTaV3large surpasses previous SOTA models with a similar model structure on the GLUE [1] benchmark by an average score of +1.37%, which is significant. DeBERTaV3base achieves a 90.6% accuracy score on the MNLI-matched [2] evaluation set and an 88.4% F1 score on the SQuAD v2.0 [3] evaluation set, improving over DeBERTabase by 1.8% and 2.2%, respectively. Without knowledge distillation, DeBERTaV3small surpasses previous SOTA models with a similar model structure on both the MNLI-matched and SQuAD v2.0 evaluation sets by more than 1.2% in accuracy and 1.3% in F1, respectively. We also train DeBERTaV3base on the CC100 [4] multi-lingual data using a similar setting as XLM-R [4] but with only a third of the training passes; we denote this model mDeBERTabase. Under the cross-lingual transfer setting, mDeBERTabase achieves a 79.8% average accuracy score on the XNLI [6] task, outperforming XLM-Rbase and mT5base [7] by 3.6% and 4.4%, respectively. This makes mDeBERTa the best model among multi-lingual models with a similar model structure. All these results strongly demonstrate the efficiency of DeBERTaV3 models and set a good base for future exploration towards more efficient PLMs.
In this paper we explored methods to further improve the pre-training efficiency of PLMs. We started by combining DeBERTa with ELECTRA, which showed a significant performance jump. Next, we performed extensive analysis and experiments to understand the interference between the generator and the discriminator, well known as the "tug-of-war" dynamics. We then proposed gradient-disentangled embedding sharing as a new building block of DeBERTaV3 to avoid the "tug-of-war" issue and achieve better pre-training efficiency.
We evaluate DeBERTaV3 on a broad range of representative NLU tasks and show significant performance improvements over previous SOTA models; e.g., DeBERTaV3large outperforms other models with a similar model structure by more than 1.37% on the GLUE average score, and mDeBERTabase outperforms XLM-Rbase by 3.6% in cross-lingual transfer accuracy on the XNLI task. Our results demonstrate the efficiency of all the DeBERTaV3 models and make DeBERTaV3 the new state-of-the-art PLM for natural language understanding at multiple model sizes, i.e., Large, Base, and Small. Meanwhile, this work clearly shows a huge potential to further improve models' parameter efficiency and provides some direction for future studies of far more parameter-efficient pre-trained language models.
Huge language models (LMs) such as BERT, GPT-3, Jurassic-1, PaLM, and others have taken AI by storm, with the promise of serving as versatile, general-purpose foundations for many applications. Indeed, partly for this reason, they have been rebranded by some as "foundation models". The term is meant broadly, covering language models as well as models that were trained on more than just text, and although such multimodal models are not the focus of this paper, there is another reason to take the term "language model" with a grain of salt. While LMs indeed model syntax and other linguistic elements, their most striking feature is that they model the world, as described by the data on which they were trained. And so LMs really serve as a textual gateway to the universe of knowledge, and perhaps should instead be called "language and knowledge" models.
When viewed this way, it becomes clear that, despite their value, current LMs have inherent limitations. While versatile and impressive, the output of even huge LMs is in many cases wrong, and often ridiculously so. Here is a sample output of GPT-3 on some simple queries. (To be clear, this is not a critique of GPT-3 specifically; other LMs, including our own Jurassic-1, exhibit similar silliness.)
For example, LMs can struggle to understand that there are no US cities with more than 20m citizens or that a math teacher is a person; they don't know what today's date is, nor can they engage in even simple (e.g., mathematical) reasoning.
When you look for the root cause, you realize the core limitations of LMs: They don’t have access to all relevant knowledge, and neural models are ill-suited for certain types of calculation. More specifically:
1. Lack of access to current information. Certain data constantly change – the exchange rate between the dollar and the Moroccan Dirham, current COVID numbers, the stock price of AAPL, the weather in Vancouver (OK, not so much), or even the current date. It is impossible, by their design, for pretrained language models to keep up with this dynamic information.
2. Lack of access to proprietary information sources. As an important special case of 1, the models don't have access to proprietary information, such as the client roster in a company's database or the state of an online game.
3. Lack of reasoning. Certain reasoning is beyond the reach of the neural approach and requires a dedicated reasoning process. We saw above the classic example of arithmetic reasoning. GPT-3 and Jurassic-1 perform well on 2-digit addition, which is impressive, but confidently spit out nonsensical answers on 4-digit additions. With increased training time, better data, and larger models, the performance of LMs will improve, but will not reach the robustness of an HP calculator from the 1970s. And mathematical reasoning is just the tip of an iceberg.
4. Model explosion. Today's LMs' zero-shot performance trails that of fine-tuned models. One can fine-tune the LM to a specific task, but then lose versatility. Contemporary efforts to mitigate the problem focus on training a huge LM jointly on many sets of curated NLP tasks in a massive multi-task setting (several leading studies reaching \(100+\) tasks). These formidable efforts are effective; the resulting models exhibit versatility and high performance when encountering inputs resembling those of the curated tasks. But the performance of these models on tasks that are not close enough to those included in the curated tasks can significantly deteriorate (for example, perplexity degrades significantly). It is not practical to fine-tune and serve multiple large models. Nor can one further tune a multi-task-trained LM on a new task that hadn't been covered in its training; due to catastrophic forgetting, adding the new task necessitates retraining on the entire task set. Given the cost of training such models, this is clearly infeasible to do repeatedly.
Despite all these shortcomings, large language models are an essential backbone of any future AI system. So the question is how to have our cake and eat it too, enjoying the benefits of self-supervised deep language models without suffering these drawbacks. The solution we offer takes the form of a flexible architecture dubbed the Modular Reasoning, Knowledge and Language (MRKL, pronounced “miracle”) system, whose high-level design is depicted below.
Thus a MRKL system consists of an extendable set of modules, which we term 'experts', and a router that routes every incoming natural language input to a module that can best respond to the input (the output of that module can be the output of the MRKL system, or be routed to another module). These modules can be:
Safe fallback: In case the input doesn't match any existing expert module, the router sends the input directly to the general-purpose huge LM (see the routing sketch after this list).
Robust extensibility: Since each expert is trained independently, we are able to cheaply add new capabilities while guaranteeing that they do not compromise the performance of existing ones. The only component that requires retraining is the router, which is a relatively lightweight task.
Interpretability: When the router invokes a specific module, that often has the side benefit of providing a rationale for the MRKL system's output ("\(1+1=2\) because the calculator said so"); such explanations are crucially lacking in existing language models.
Up-to-date information: The integration of external APIs allows the MRKL system to hook into dynamic knowledge bases and correctly answer inputs that static models cannot.
Proprietary knowledge: Access to proprietary databases and other information sources.
Compositionality: By routing compounded multi-hop inputs to different experts we are able to naturally integrate their responses and correctly address complex inputs.
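The routing logic itself can be summarised in a few lines. The sketch below is a hypothetical illustration of the dispatch described above, not AI21's implementation; `classify`, the expert registry, and the fallback LM call are all assumed names.

```python
from typing import Callable, Dict

def mrkl_route(query: str,
               experts: Dict[str, Callable[[str], str]],
               classify: Callable[[str], str],
               llm: Callable[[str], str]) -> str:
    """Dispatch an input to the expert best suited to answer it; fall back
    to the general-purpose LM when no expert matches (the "safe fallback"
    property above)."""
    expert_name = classify(query)          # trained router, e.g. a classifier
    handler = experts.get(expert_name)
    if handler is None:
        return llm(query)                  # safe fallback
    return handler(query)                  # e.g. experts["calculator"]("123+456")
```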
We conducted our experiments with the 7B-parameter J1-large model using prompt-tuning with 10 prompt tokens. Across all experiments, we set the learning rate to \(lr=0.3\) and used linear decay. The batch size was set to 32 and we used a short warm-up depending on the number of steps in each experiment.
Experiments 1 and 2 below were trained for 3000 steps, and the reported results are the test accuracy evaluated on the final model. For the remaining experiments we used linear weight decay (\(0.001\)), which we found to be crucial for the model's performance, and selected the best checkpoint using a validation set. Each experiment was run 3-5 times, and results show mean \(\pm\) standard deviation across these runs.
In all experiments we verified that there was no overlap between the problems included in the training set and those included in the test set. This also includes avoiding cases with the same underlying arithmetic expression, but using different wordings for training and testing. For a detailed description of the sizes of the data splits see Section .
In the following results we report the accuracy achieved in different experiments, by which we mean the percentage of problems in the relevant test set on which the system gave the correct answer. (We note again that, since the actual calculation is done by the calculator module, all errors are due to passing the wrong operations or operands to the calculator.)
This paper introduces the concept of Modular Reasoning, Knowledge and Language (MRKL) systems, which embraces large language models (LMs) and augments them with an easily extensible set of external knowledge and reasoning modules. This flexible, neuro-symbolic design retains all the advantages of modern LMs but avoids their limitations: a lack of current and/or proprietary information, and an inability to reason symbolically when needed. We discussed some of the technical challenges in implementing a MRKL system, with a special focus on how to cross the neuro-symbolic divide, which we did by looking at how Jurassic-X, AI21 Labs' implementation of a MRKL system, was trained to handle basic arithmetic reliably.
How does ZeRO-Offload scale the trainable model size compared to existing multi-billion parameter training solutions on a single GPU/DGX-2 node?
What is the training throughput of ZeRO-Offload on a single GPU/DGX-2 node?
How does the throughput of ZeRO-Offload scale on up to 128 GPUs?
What is the impact of our CPU-Adam and delayed parameter update (DPU) on improving throughput, and does DPU change model convergence?
We presented ZeRO-Offload, a powerful GPU-CPU hybrid DL training technology with high compute efficiency and near-linear throughput scalability, which allows data scientists to train models with multi-billion parameters even on a single GPU, without requiring any model refactoring. We open-sourced ZeRO-Offload as part of the DeepSpeed library (www.deepspeed.ai) with the hope of democratizing large model training, allowing data scientists everywhere to harness the potential of truly massive DL models.
Quantum teleportation [1], [2] is a strikingly curious quantum phenomenon with myriad applications ranging from secure quantum communication to distributed quantum computing [3], [4], [5], [6], [7]. First presented by Bennett et al. [1], quantum teleportation allows one to recreate an arbitrary qubit from just two bits of classical information. The precondition is that the two communicating parties share an entangled pair of qubits. Entanglement, essentially, transmits the raw values of the amplitudes present in the arbitrary qubit, and the classical bits provide a final "correction" to these values. Quantum teleportation has proven to be an invaluable tool in quantum information science [9], [10], [11]. It enables the development of quantum repeaters [3], [13], a pivotal technology for the establishment of quantum communication networks [14] and, more generally, the quantum Internet. Quantum teleportation has been implemented in lab settings using a number of different resources [15], [16].
It is known that, given a pre-shared Bell pair, the teleportation of an unknown qubit requires the transmission of two classical bits [1]. More generally, the teleportation of an \(N\)-state system requires \(2\log_2 N\) classical bits [2]. This, however, assumes that the sender has access to only one copy of the arbitrary unknown qubit that she is teleporting. Kak investigated the minimum number of classical bits required for teleportation [3] and, in turn, proposed three variations on Bennett's protocol that require fewer than two classical bits. Two of these variations, however, use a non-traditional setup where the entangled qubits are not pre-shared between the communicating parties but transmitted along with the classical bits of information. This change in setup can be looked upon as an encryption system where the classical bit of information acts as the encryption key for the qubit, and multipath routing can be used for their transmission. From the viewpoint of teleportation, nevertheless, if the qubits are to be transmitted along with classical bits, then one could transmit the arbitrary qubit directly at that time without resorting to teleportation.
In this paper, we consider the case when the sender has more than one copy of an (unknown) arbitrary qubit \(\mathinner {|{\phi }\rangle }=a\mathinner {|{0}\rangle }+b\mathinner {|{1}\rangle }\). This may happen in several practical situations where the unknown qubit is either the result of a periodic or on-demand natural process or the output of a quantum algorithm, and one may need to send this (unmeasured) output to the receiver for further processing. We show that the teleportation of such a qubit requires the transmission of only one classical bit with a probability greater than one-half. To do so, we propose a reset procedure that allows the recreation of the original shared Bell state using only local operations at the sender's end, with no consumption of resources at the receiver's end. In the proposed reset procedure, as in the conventional teleportation protocol, we begin with a maximally entangled pair of qubits. In the first stage of the protocol, if this entangled pair enters the desired state, we move on to the second stage. If, on the other hand, the entangled pair ends up in an undesired state, we reset it back to the original state using only local operations at the sender's end. The cost of doing so is one copy of the arbitrary qubit, \(\mathinner {|{\phi }\rangle }\).
The probability of success of a reset attempt varies with the values of \(a\) and \(b\), and is given by \(2|ab|^2\) for the first reset attempt. Since \(|ab|^2=|a|^2|b|^2 = |a|^2(1-|a|^2) = |a|^2-|a|^4\) and \(0\le |a|^2\le 1\), by plotting the function (figure REF) we see that \(|ab|^2\le \frac{1}{4}\) and hence \(2|ab|^2\le \frac{1}{2}\). <FIGURE>
For example, when \(a=\frac{i}{\sqrt{2}}\) and \(b=\frac{1}{2}+\frac{i}{2}\), the probability that the first reset attempt will succeed is \(\frac{1}{2}\). After a successful reset, Alice makes a second attempt to teleport the qubit. The probability that the entangled qubits will collapse into the desirable state (\(a\mathinner {|{00}\rangle }+b\mathinner {|{11}\rangle }\)) is again \(\frac{1}{2}\). Therefore, with probability \(\frac{1}{2}+\frac{1}{4}\) Alice would transmit just one bit for teleportation, but she must have at least three copies of \(\mathinner {|{\phi }\rangle }\) at her disposal.
Consequently, in general, considering only the first reset attempt and an unknown qubit \(\mathinner {|{\phi }\rangle }\), the probability of ending up in the desired state (\(a\mathinner {|{00}\rangle }+b\mathinner {|{11}\rangle }\)), that is, of a successful teleportation with only one classical bit, is \(\frac{1}{2}+\frac{1}{2}\cdot 2|ab|^2=\frac{1}{2}+|ab|^2\). Note that this probability is always \(\frac{1}{2}\) in Bennett et al.'s quantum teleportation protocol [1], which always requires two bits of information to be transmitted.
Alice, however, does require the values of \(a\) and \(b\) for reset attempts beyond the first one. This latter case is useful when the precision required for a classical representation of the \(a\) and \(b\) values exceeds the bandwidth or resources available on the classical channel. In other words, if the values of \(a\) and \(b\) are known, Alice can pursue reset attempts until one succeeds. Note that once a reset attempt succeeds, the probability that we will end up in the desired entangled state is again \(\frac{1}{2}\). Therefore, the expected number of attempts, given a successful reset, is 2.
The reset procedure presented above, when successful, reduces the teleportation protocol to a one-stage protocol under the standard setting of pre-shared Bell states. Furthermore, Alice knows at each stage whether the reset has succeeded or not. If the values of \(a\) and \(b\) are not known and the reset fails, she can abandon that particular pair of entangled qubits and use another. The computational burden of teleportation with reset is \(H(T)=\left[(\tfrac{1}{2}+|ab|^2)\cdot 1+(\tfrac{1}{2}-|ab|^2)\cdot 2\right] \mbox{ bits.}\)
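A few lines of code make the bound on \(H(T)\) concrete. This is a simple numeric check of the expression above, not part of the original analysis.

```python
import numpy as np

def expected_bits(a_amp):
    """Expected classical bits H(T) for teleportation with reset, where
    a_amp is the amplitude of |0> in the unknown qubit (|b|^2 = 1 - |a|^2):
    one bit with probability 1/2 + |ab|^2, two bits otherwise."""
    a2 = abs(a_amp) ** 2
    ab2 = a2 * (1 - a2)                  # |ab|^2 = |a|^2 (1 - |a|^2)
    return (0.5 + ab2) * 1 + (0.5 - ab2) * 2

print(expected_bits(1 / np.sqrt(2)))     # 1.25: balanced qubit, best case
print(expected_bits(1.0))                # 1.5:  |ab|^2 = 0, worst case
```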
It is easy to see that \(1.25\le H(T)\le 1.5\) depending on the values of \(a\) and \(b\). For comparison, one of the protocols by Kak [1] reduces the computational burden to 1.5 bits when the Bell states are pre-shared. The cost of reducing the computational burden to 1.25 bits therefore comes from the expenditure of extra copies of the unknown qubit \(\mathinner {|{\phi }\rangle }\).
The coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was identified in the Wuhan city of China in December 2019 [1]. It is categorized as an infectious disease and spreads among people through close contact with infected persons, generally via small droplets produced by coughing, sneezing, or talking, and through contaminated surfaces. On March 11, 2020, the World Health Organization (WHO) declared COVID-19 a pandemic. In India, the first case of COVID-19 was reported in Kerala on January 30, 2020, and the disease gradually spread throughout India, especially in urban areas, as India witnessed the first wave of COVID-19. India witnessed a second wave in March 2021, which was much more devastating than the first, with shortages of hospital beds, vaccines, oxygen cylinders and other medicines in parts of the country. To fight COVID-19, the country has vaccination, herd immunity, and epidemiological interventions as a few possible options. In the early stage of COVID-19, India imposed complete as well as partial lockdowns as epidemiological interventions during the first wave, which slowed the transmission rate, delayed the peak, and resulted in a smaller number of COVID-19 cases. India is the second most populous country in the world, where 68.84% and 31.16% of India's population live in rural and urban areas respectively. The population density in northeast India is low in comparison to other states of India. The chance of getting infected depends on the spatial distance between contacts, and a low-density population is less prone to infection than a high-density one. Individual personal behavior (social distancing, frequent hand sanitation, wearing a mask, etc.) also plays a key role in controlling the spread of COVID-19.
Prediction of new COVID-19 cases per day will help administrators and planners take proper decisions and make effective policy to tackle the pandemic situation. Epidemiological models are very helpful for understanding the trend of COVID-19 spread and useful for predicting the spread rate of the disease, its duration, and its peak. They can be used for short-term and long-term predictions of new confirmed COVID-19 cases per day, which may be used in decision-making to optimize possible controls of the infectious disease. In the literature, several mathematical models for infectious diseases have been introduced, such as logistic models [1], generalized growth models [2], Richards models [3], sub-epidemic wave models [4], the Susceptible-Infected-Recovered (SIR) model [5], and the Susceptible-Exposed-Infectious-Removed (SEIR) model. The SIR model is a compartmental model that considers the whole population as a closed population and divides it into susceptible, infected, and recovered compartments. Each infected person infects, on average, \(R_0\) other persons, where \(R_0\) is known as the basic reproduction number. Recently, some works have been reported in the literature using the SIR model and its variants to predict the COVID-19 outbreak [6], [7], [8], [9], [10]. These epidemiological models are good for understanding the trend of COVID-19 spread but are designed based on several assumptions that would not generally hold for real-life data [11]. They are unreliable due to the complex trend of the spread of the infection, as it depends on population density, travel, and individual social aspects such as cultural and life styles. Therefore, there is a need for deep learning approaches to accurately predict the COVID-19 trends in India. In deep learning, the convolutional neural network (CNN) [12] is one form of deep learning architecture for processing data that has a grid-like topology. This includes time-series data, which can be considered a 1D grid of samples taken at regular time intervals, and image data, considered a 2D grid of pixels. A typical end-to-end CNN consists of different layers such as convolution, activation, max-pooling, and softmax layers.
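For reference, the SIR dynamics mentioned above can be written down and integrated in a few lines. The parameter values below are illustrative placeholders (an assumed \(R_0\) of 2.5), not values fitted to Indian data.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """Classic SIR model: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
    dR/dt = gamma*I, with basic reproduction number R0 = beta / gamma."""
    S, I, R = y
    N = S + I + R
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

# Assumed parameters: 10-day infectious period (gamma = 0.1) and R0 = 2.5,
# so beta = R0 * gamma = 0.25; compartments are population fractions.
t = np.linspace(0, 180, 181)
solution = odeint(sir, [0.999, 0.001, 0.0], t, args=(0.25, 0.1))
S, I, R = solution.T   # epidemic curves over 180 days
```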
A recurrent neural network (RNN) [1], derived from the feedforward neural network, can use its internal state (memory) to process variable-length sequences of data, making it suitable for sequential data. Long Short-Term Memory (LSTM) was introduced by Hochreiter and Schmidhuber [2]; it overcomes the vanishing and exploding gradient problems of RNNs and can capture long-range dependencies, which has proved very promising for modelling sequential data. A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell. For a given input sequence \( x=(x_{1}, x_{2},\ldots, x_{T}) \) from time \( t=1 \) to \( T \), an LSTM computes an output sequence \( y=(y_{1},y_{2},\ldots, y_{T}) \), mathematically represented as [2]: \(i_{t}=\sigma (\mathbf {W}_{ix}x_{t}+\mathbf {W}_{im}m_{t-1}+\mathbf {W}_{ic}c_{t-1}+b_{i})\) \(f_{t}=\sigma (\mathbf {W}_{fx}x_{t}+\mathbf {W}_{fm}m_{t-1}+\mathbf {W}_{fc}c_{t-1}+b_{f})\) \(c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot g(\mathbf {W}_{cx}x_{t}+\mathbf {W}_{cm}m_{t-1}+b_{c})\) \(o_{t}=\sigma (\mathbf {W}_{ox}x_{t}+\mathbf {W}_{om}m_{t-1}+\mathbf {W}_{oc}c_{t-1}+b_{o})\) \(m_{t}=o_{t}\odot h(c_{t})\) \(y_{t}=\phi (\mathbf {W}_{ym}m_{t}+b_{y})\)
From Equation REF to Equation REF, \( i, o, f \) and \( c \) represent the input gate, output gate, forget gate and cell activation vector respectively, and \( m \) denotes the hidden state vector, also known as the output vector of the LSTM unit. \( \mathbf {W} \) denotes a weight matrix; for example, \( \mathbf {W}_{ix} \) is the weight matrix from the input gate to the input. The symbol \( \odot \) stands for element-wise multiplication, \(b\) denotes a bias term, \(g\) and \(h\) are the activation functions at the input and output respectively, and \( \sigma \) represents the logistic sigmoid function.
An LSTM network with multiple layers can map an input sequence to a vector of fixed dimensionality, from which a deep LSTM decodes the target sequence; such a deep LSTM essentially behaves like a recurrent neural network language model, except that it is conditioned on the input sequence. LSTMs can solve problems with long-term dependencies, which may be obscured by the many short-term dependencies in a dataset. LSTMs have the ability to learn successfully on data having a long range of temporal dependencies because of the time lag between the inputs and their corresponding outputs [1]. LSTMs can be used for predicting time series and are beneficial for sequential data [2].
Deep learning models such as LSTMs and CNNs are well suited for understanding and predicting the dynamical trend of COVID-19 spread and have recently been used for prediction by several researchers [1], [2], [3], [4], [5], [6], [7]. Chandra et al. [8] used the LSTM and its variants for ahead-of-time prediction of COVID-19 spread in India, splitting the training and testing data statically and dynamically. LSTMs were used to model COVID-19 transmission in Canada by Chimmula & Zhang [9], and the results suggest linear transmission in Canada. Arora et al. [10] performed forecasting of COVID-19 cases for India using LSTM variants and categorized the Indian states into different zones based on COVID-19 cases.
In this paper, we employ the vanilla LSTM, stacked LSTM, ED-LSTM, Bi-LSTM, CNN, and hybrid CNN+LSTM model to capture the dynamic trend of COVID-19 spread and predict the COVID-19 daily confirmed cases for 7, 14 and 21 days for India and its four most affected states: Maharashtra, Kerala, Karnataka, and Tamil Nadu. To demonstrate the performance of deep learning models, RMSE and MAPE errors are computed on the testing data. The flowchart of the model is represented in Figure REF .
The rest of the manuscript is organized as follows. Section describes the deep learning models along with the experimental setup and evaluation metrics. In Section , we present the COVID-19 dataset and the experimental results and discussion. Finally, the conclusion is given in Section . <FIGURE>
The COVID-19 outbreak trend is highly dynamic and depends on the various intervention strategies imposed. To capture this complex trend, in this study we proceed with the following steps during training, testing and forecasting.
We used early COVID-19 data up to July 10, 2021, and split the COVID-19 time series into training and testing data by taking the last 20 days' data as testing data and the remaining data as training data. To avoid inconsistency in the COVID-19 time series, the data are normalized to the interval [0, 1] using the 'MinMaxScaler' function. The COVID-19 time series is reshaped into the input shape by taking a time step (time-lag), or observation window, of 15, with the number of features equal to one for the univariate model. An observation window of 15 means that we use the previous 15 days of COVID-19 time series data to predict the next day, that is, the 16th day. In a univariate model the input contains only one feature. Further, we train and test the recurrent and convolutional neural network approaches on the COVID-19 time series and set up the models with hyperparameters chosen through manual search. COVID-19 daily confirmed case predictions are performed up to July 17, 2021 (7 days), up to July 24, 2021 (14 days) and up to July 31, 2021 (21 days) from July 10, 2021 using vanilla LSTM, stacked LSTM, ED-LSTM, Bi-LSTM, CNN, and hybrid CNN+LSTM for India and its four most affected states Maharashtra, Kerala, Karnataka and Tamil Nadu. The experimental work is summarized in Figure 1.
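The sketch below shows what this preprocessing and one of the models (a stacked LSTM) might look like in Keras. Layer sizes, epochs, and the commented-out file name are illustrative assumptions, not the tuned settings from this study; MinMaxScaler comes from scikit-learn.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW = 15  # observation window: the previous 15 days predict day 16

def make_windows(series, window=WINDOW):
    """Reshape a univariate series into (samples, window, 1) training pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

# daily_cases = np.loadtxt("daily_confirmed_cases.csv")   # placeholder input
# scaler = MinMaxScaler()                                  # normalize to [0, 1]
# scaled = scaler.fit_transform(daily_cases.reshape(-1, 1)).ravel()
# X, y = make_windows(scaled)

# A stacked-LSTM variant: two LSTM layers feeding a single-output head.
model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(WINDOW, 1)),
    LSTM(32),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X, y, epochs=100, batch_size=16, validation_split=0.1)
```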
The RNN and CNN approaches, viz. vanilla LSTM, stacked LSTM, ED-LSTM, Bi-LSTM, CNN, and hybrid CNN+LSTM, have been implemented in Python using the Keras module of TensorFlow, with predictions made using a univariate approach.
The COVID-19 outbreak is a potential threat due to its dynamical behaviour, and is all the more threatening in a country like India because it is very densely populated. Researchers are seeking new approaches to understanding COVID-19 dynamics that overcome the limitations of existing epidemiological models. In this study, we designed vanilla LSTM, stacked LSTM, ED-LSTM, Bi-LSTM, CNN, and hybrid CNN+LSTM models to capture the complex dynamical trends of COVID-19 spread and performed forecasting of COVID-19 confirmed cases for 7, 14 and 21 days for India and its four most affected states: Maharashtra, Kerala, Karnataka, and Tamil Nadu. The RMSE and MAPE errors on the testing data are computed to demonstrate the relative performance of the deep learning models. The predicted COVID-19 confirmed cases for 7, 14, and 21 days for India and these states, along with confidence intervals, show that the daily confirmed cases predicted by most of the models studied are very close to the actual daily confirmed cases. The stacked LSTM and hybrid CNN+LSTM models perform best among the six models. These accurate predictions can help governments take decisions accordingly and create more infrastructure where required.
The constraint satisfaction problem (CSP) is the widely studied combinatorial problem of determining whether a set of constraints admits at least one solution. It is common to parameterise this problem by a set of relations (a constraint language) which determines the allowed types of constraints; by choosing different languages one can model different types of problems. Finite-domain languages, e.g., make it possible to formulate Boolean satisfiability problems and coloring problems, while infinite-domain languages are frequently used to model classical qualitative reasoning problems such as Allen's interval algebra and the region connection calculus (RCC). Under the lens of classical complexity a substantial amount is known: every finite-domain CSP is either tractable or NP-complete [1], [2], and for infinite domains there exists a wealth of dichotomy results separating tractable from intractable cases [3].
The vast expressibility of infinite-domain CSPs makes the search for efficient solution methods extremely worthwhile. While worst-case complexity results indicate that many interesting problems should be insurmountably hard to solve, they are nevertheless solved in practice on a regular basis. The discrepancy between theory and practice is often explained by the existence of "hidden structure" in real-world problems [1]. If such a hidden structure exists, then it may be exploited and offer a way of constructing improved constraint solvers. To this end, backdoors have been proposed as a concrete way of exploiting this structure. A backdoor represents a "short cut" to solving a hard problem instance and may be seen as a measure of how close a problem instance is to being polynomial-time solvable [2]. The existence of a backdoor then allows one to solve a hard problem by brute-force enumeration of assignments to the (hopefully small) backdoor and then solving the resulting problems in polynomial time. This approach has been highly successful: applications can be found in, e.g., (quantified) propositional satisfiability [3], [4], abductive reasoning [5], argumentation [6], planning [7], logic [8], and answer set programming [9]. Williams et al. (2003) argue that backdoors may explain why SAT solvers occasionally fail to solve randomly generated instances with only a handful of variables but succeed in solving real-world instances containing thousands of variables. This argument appears increasingly relevant since modern SAT solvers frequently handle real-world instances with millions of variables. Might it be possible to make similar headway for infinite-domain CSP solvers? For example, can solvers in qualitative reasoning (see, e.g., the survey [10]) be analysed in a backdoor setting? Or are the various problems under consideration so different that a general backdoor definition does not make sense?
We begin by recapitulating the standard definition of backdoors for finite-domain CSPs. Let \(\alpha \colon X \rightarrow D\) be an assignment. For a \(k\)-ary constraint \(c = R(x_1, \ldots , x_k)\) we denote by \(c_{\mid \alpha }\) the constraint over the relation \(R_0\) and with scope \(X_0\) obtained from \(c\) as follows: \(R_0\) is obtained from \(R\) by
i
7ca70b25-85e0-42d1-b333-7423b5660a19
(i) removing \((d_1 ,\ldots , d_k)\) from \(R\) if there exists \(1 \le i \le k\) such that \(x_i \in X\) and \(\alpha (x_i) \ne d_i\) , and (ii) removing from all remaining tuples all coordinates \(d_i\) with \(x_i \in X\) .
i
5f7fde02-17b9-42ab-810b-0331f79a5dd9
The scope \(X_0\) is obtained from \(x_1, \ldots , x_k\) by removing every \(x_i \in X\) . For a set \(C\) of constraints we define \(C_{\mid \alpha }\) as \(\lbrace c_{\mid \alpha } \colon c \in C\rbrace \) . We now have everything in place to define the standard notion of a (strong) backdoor, in the context of Boolean satisfiability problems and finite-domain CSPs.
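Before stating that definition, the restriction operation just described can be made concrete with a minimal Python sketch. It assumes relations are stored extensionally as finite sets of tuples (the names and the representation are ours, for illustration only):

```python
def restrict(constraint, alpha):
    """Compute c|alpha for c = R(x1, ..., xk) and a partial assignment alpha.

    `constraint` is a pair (scope, relation), where scope is a tuple of
    variables and relation is a set of tuples over the domain; `alpha` is a
    dict mapping the assigned variables to domain values.
    """
    scope, relation = constraint
    keep = [i for i, x in enumerate(scope) if x not in alpha]  # unassigned positions
    new_scope = tuple(scope[i] for i in keep)
    new_relation = {
        tuple(t[i] for i in keep)  # drop coordinates of assigned variables
        for t in relation
        # keep only tuples that agree with alpha on the assigned variables
        if all(alpha[x] == t[i] for i, x in enumerate(scope) if x in alpha)
    }
    return new_scope, new_relation
```

For instance, restricting the equality constraint (("x", "y"), {(0, 0), (1, 1)}) over the domain {0, 1} by alpha = {"x": 0} yields (("y",), {(0,)}), i.e., a unary constant constraint on y.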
i
46e2649d-0f55-4b7c-8a7f-ed24c7df3870
Definition 1 (See, for instance, [1]} or [2]}) Let \({\mathcal {H}}\) be a set of CSP instances. An \({\mathcal {H}}\) -backdoor for a CSP\((\Gamma _D)\) instance \((V,C)\) is a set \(B \subseteq V\) such that \((V \setminus B, C_{\mid \alpha }) \in {\mathcal {H}}\) for each \(\alpha \colon B \rightarrow D\) .
i
59a031c9-a2be-4e64-832d-dd5d11e1d657
In practice, \({\mathcal {H}}\) is typically defined as a polynomial-time solvable subclass of CSP and one is thus interested in finding a backdoor into the tractable class \({\mathcal {H}}\) . If the CSP instance \(I\) has a backdoor of size \(k\) , then it can be solved in \(|D|^k \cdot {\rm poly}(||I||)\) time. This is an exponential running time with the advantageous feature that it is exponential not in the instance size \(||I||\) , but in the domain size and backdoor set size only.
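The resulting brute-force scheme is easy to sketch. The code below assumes the `restrict` helper from the earlier sketch and a placeholder `solve_tractable` routine standing in for a polynomial-time solver for \({\mathcal {H}}\) (both names are ours):

```python
from itertools import product

def solve_with_backdoor(variables, constraints, domain, backdoor, solve_tractable):
    """Decide satisfiability given a strong backdoor set `backdoor`.

    Enumerates all |D|^|B| assignments to the backdoor variables; each
    reduced instance lies in the tractable class H by the backdoor property.
    """
    rest = [v for v in variables if v not in backdoor]
    for values in product(domain, repeat=len(backdoor)):
        alpha = dict(zip(backdoor, values))
        reduced = [restrict(c, alpha) for c in constraints]
        if any(len(rel) == 0 for _, rel in reduced):
            continue  # alpha already falsifies some constraint
        if solve_tractable(rest, reduced):  # polynomial time by assumption
            return True
    return False
```

The loop runs \(|D|^k\) times for a backdoor of size \(k\) , and each iteration costs polynomial time, matching the bound above.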
i
cd640a51-a7d8-4230-b0f5-e410422478b6
Example 2 Let us first see why Definition REF is less impactful for infinite-domain CSPs. Naturally, the most obvious problem is that, even for a fixed \(B \subseteq V\) , one needs to consider infinitely many functions \(\alpha \colon B \rightarrow D\) , and there is thus no general argument which resolves the backdoor evaluation problem. However, even for a fixed assignment \(\alpha \colon B \rightarrow D\) we may run into severe problems. Consider a single equality constraint of the form \((x = y)\) and an assignment \(\alpha \) where \(\alpha (x) = 0\) but where \(\alpha \) is not defined on \(y\) . Then \((x =y)_{\mid \alpha } = \lbrace (0)\rbrace \) , i.e., the constant 0 relation, which is not an equality relation. Similarly, consider a constraint \(X r Y\) where \(r\) is a basic relation in RCC-5. Regardless of \(r\) , assigning a fixed region to \(X\) but not to \(Y\) results in a CSP instance which is not included in any tractable subclass of RCC-5 (and is not even an RCC-5 instance).
i
1d1636cd-8fbd-4835-bdf7-e1da91877a7d
Hence, the usual definition of a backdoor fails to account for a fundamental difference between finite- and infinite-domain CSPs: assignments to variables are typically much less important than the relations between variables.
i
0975d4ef-7d5a-41de-a641-0c0ef8d6d624
Stylish Chinese font generation has attracted increasing attention in recent years [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, since it has a wide range of applications, including but not limited to the automatic generation of artistic Chinese calligraphy [13]}, art font design [14]}, and personalized style generation of Chinese characters [15]}.
i
7bb4b8b2-f4ba-4265-997d-8dfd144c6646
The existing Chinese font generation methods can generally be divided into two categories. The first category first extracts explicit features, such as the strokes and radicals of Chinese characters, and then utilizes traditional machine learning methods to generate new characters [1]}, [2]}. The quality of feature extraction plays a central role in this category of methods. However, the feature extraction procedure is usually hand-crafted, and thus costly in time and effort.
i
73f61471-3f28-4098-9980-07444e7ae47c
The second category of Chinese font generation methods has been studied recently in [1]}, [2]}, [3]}, [4]}, [5]}, [6]} with the development of deep learning [7]}, particularly generative adversarial networks (GAN) [8]}. Due to the powerful expressivity and approximation ability of deep neural networks, the feature extraction and generation procedures can be combined into one, so methods in this category can usually be trained end-to-end. Instead of using the stroke or radical features of Chinese characters, these methods typically regard Chinese characters directly as images and recast the Chinese font generation problem as an image style translation problem [9]}, [10]}, for which GAN and its variants are the principal techniques. However, it is well known that GAN often suffers from mode collapse [8]}, that is, the generator produces the same patterns for different inputs. This issue significantly degrades the diversity and quality of the generated results (see Figure REF below). When adapted to Chinese font generation, mode collapse occurs even more frequently because many Chinese characters have very similar strokes. <FIGURE>
i
488fcd39-2b63-49a4-95f2-7a83c116d976
Due to the artificial nature of Chinese characters, explicit stroke information contains a large amount of the mode information of a Chinese character (see Figure REF ). This is very different from natural images, which are usually regarded as being generated according to some probability distribution over a latent space. Inspired by this observation, in this paper we first introduce a one-bit stroke encoding to preserve the key mode information of a Chinese character, then propose a stroke-encoding reconstruction loss to reconstruct the stroke encoding of the generated character so that this key mode information is well preserved, and finally incorporate both into CycleGAN for Chinese font generation [1]}. We therefore call the suggested model StrokeGAN. The contributions of this paper can be summarized as follows:
i
8c280f25-a199-4268-b2b2-194a22233736
We propose an effective method called StrokeGAN for the generation of Chinese fonts from unpaired data. Our main idea is first to introduce a one-bit stroke encoding to capture the mode information of Chinese characters and then to incorporate it into the training of CycleGAN [1]}, with the purpose of alleviating the mode collapse issue of CycleGAN and thus improving the diversity of the generated characters. In order to preserve the stroke encoding, we introduce a stroke-encoding reconstruction loss into the training of CycleGAN. By the use of this one-bit stroke encoding and the associated reconstruction loss, StrokeGAN effectively alleviates the mode collapse issue for Chinese font generation, as shown in Figure REF . The effectiveness of StrokeGAN is verified on a set of Chinese character datasets with 9 different fonts (see Figure REF ): a handwriting font, 3 standard printing fonts, and 5 pseudo-handwriting fonts. Compared to CycleGAN for Chinese font generation [1]}, StrokeGAN generates Chinese characters with higher quality and better diversity; in particular, strokes are better preserved. Besides CycleGAN [1]}, our method also outperforms other state-of-the-art methods, including zi2zi [4]} and the Chinese typography transfer (CTT) method [5]}, both of which use paired data, in terms of generating Chinese characters with higher quality and accuracy. Some characters generated by our method in 9 different fonts can be found in Figure REF . It can be observed that the characters generated by StrokeGAN are very realistic. <FIGURE><FIGURE>
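To make the role of the stroke-encoding reconstruction loss concrete, the following is a minimal PyTorch sketch. The component names (`generator`, `stroke_head`) are our illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

# `generator` translates a source-font character image into the target font;
# `stroke_head` predicts the one-bit stroke encoding (one bit per basic
# stroke type) from a generated image. Both are assumed components.
bce = nn.BCEWithLogitsLoss()

def stroke_reconstruction_loss(generator, stroke_head, src_img, stroke_code):
    """Penalise generated characters whose predicted stroke encoding deviates
    from the ground-truth one-bit encoding `stroke_code` (a 0/1 float tensor)."""
    fake = generator(src_img)        # translated character image
    logits = stroke_head(fake)       # predicted stroke bits, as logits
    return bce(logits, stroke_code)  # stroke-encoding reconstruction loss
```

In training, a term of this form would be added, with a weighting hyperparameter, to CycleGAN's adversarial and cycle-consistency losses; the exact weighting used in StrokeGAN is not specified here.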
i
b32349e2-f601-4ad0-97f6-349c05d08566
In recent years, many generation methods for stylish Chinese fonts have been suggested in the literature [1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]} with the development of deep learning. In [1]}, the authors adapted the pix2pix model [10]}, developed for image style translation, to Chinese font generation and proposed the zi2zi method based on paired training data, that is, with a one-to-one correspondence between characters in the source (input) style domain and the target (output) style domain. A similar idea was extended to generate Chinese characters from one font to multiple fonts in [4]}. Besides [1]} and [4]}, other Chinese font generation methods based on paired data were suggested in [2]}, [6]}, [5]}. However, building paired training data is usually labour-intensive. To overcome this challenge, [3]} adapted CycleGAN [18]}, developed for image style translation, to Chinese font generation based on unpaired training data. Yet the CycleGAN-based method suggested in [3]} (called CCG-CycleGAN) may suffer from mode collapse [20]}. When mode collapse occurs, the generator produces fewer patterns for different inputs, which significantly degrades the diversity and quality of the generated results.
w
47b877e7-716f-43e1-877c-bbd5728a0aaa
Motivated by the observation from traditional Chinese character generation and recognition methods (see [1]}, [2]}) that explicit stroke features provide much of the mode information of a Chinese character, in this paper we incorporate such stroke information into the training of CycleGAN for Chinese font generation [3]} to tackle the mode collapse issue, via a one-bit stroke encoding and an associated stroke-encoding reconstruction loss. The very recent papers [4]}, [5]}, [6]} also incorporated stroke or radical information of Chinese characters into Chinese font generation. Their main idea is first to utilize a deep neural network to extract the strokes or radicals of Chinese characters and then to merge them with another deep neural network, which is very different from our use of a very simple one-bit stroke encoding. As our numerical experiments below show, this one-bit stroke encoding is very effective.
w
0df8a46c-f781-4c3e-b49c-531d3fe88b1f
The rest of this paper is organized as follows. In Section 2, we present some preliminaries. In Section 3, we introduce the proposed method in detail. In Section 4, we provide a series of experiments to demonstrate the effectiveness of the proposed method. We conclude this paper in Section 5. <FIGURE>
w
9563e0fe-8f00-401f-9bed-54f8ce16cade
In this section, we provide a series of experiments to demonstrate the effectiveness of the suggested StrokeGAN. All experiments were carried out in a PyTorch environment running Linux, with an AMD Ryzen 7 2700X eight-core (16-thread) CPU and a GeForce RTX 2080 GPU. Our code is available at https://github.com/JinshanZeng/StrokeGAN.
m
2f2ebd63-9938-4398-879c-47378eaf96b8
This paper proposes an effective Chinese font generation method called StrokeGAN, which incorporates a one-bit stroke encoding into CycleGAN to tackle the mode collapse issue. The key intuition behind our idea is that, unlike natural images, the stroke encodings of Chinese characters contain a large amount of their mode information. A new stroke-encoding reconstruction loss was introduced to enforce a faithful reconstruction of the stroke encoding and thus preserve the mode information of Chinese characters. Besides the commonly used content accuracy, a crowdsourcing recognition accuracy and a stroke error are also introduced to evaluate the performance of our method. The effectiveness of StrokeGAN is demonstrated on a series of Chinese font generation tasks over 9 datasets with different fonts, in comparison with CycleGAN and two existing methods based on paired data. The experimental results show that StrokeGAN better preserves the stroke modes of Chinese characters and generates very realistic characters of higher quality. Besides Chinese font generation, our idea of the one-bit stroke encoding can easily be adapted to other deep generative models and applied to font generation for other languages such as Korean and Japanese.
d
f917755b-f95a-4bec-9050-6d69aedf46ac
Data analysis is a critical and dominant stage of the machine learning lifecycle. Once the data is collected, most of the work goes into studying and wrangling the data to make it fit for training. A highly experimental phase follows, in which a model is selected and tuned for optimal performance. The final model is then productionised and monitored constantly to detect data drifts and drops in performance [1]}, [2]}, [3]}, [4]}.
i
71af00c8-de7f-413f-9042-649c46389d2e
When compared to traditional software, the feedback loop of a machine learning system is longer. While traditional software primarily experiences change in code, a machine learning system matures through changes in data, model & code [1]}. Given the highly tangled nature of machine learning systems, a change in any of the stages of the lifecycle triggers a ripple effect throughout the entire pipeline [2]}. Testing such changes also becomes challenging since all three components need to be tested. Besides the traditional test suites, a full training-testing cycle is required which incurs time, resource and financial costs. The surrounding infrastructure of a machine learning pipeline becomes increasingly complex as we move towards a productionised model. Thus catching potential problems in the early, upstream phase of data analysis becomes extremely valuable as fixes are faster, easier and cheaper to implement.
i
2a86a1cc-a0a3-4edd-bb62-bf7340cb7d55
AI has had a significant impact on the technology sector due to the presence of large quantities of unbiased data [1]}. But AI's true potential lies in its application in critical sectors such as healthcare, wildlife preservation, autonomous driving, and criminal justice system [2]}. Such high-risk domains almost never have an existing dataset and require practitioners to collect data. Once the data is collected, it is often small and highly biased. While AI research is primarily dominated by model advancements, this new breed of high-stakes AI supports the need for a more data-centric approach to AI [3]}, [4]}, [5]}.
i
5627f9ed-a59e-4824-8260-924b06f6c56f
Since the study of software systems with machine learning components is a fairly young discipline, resources are lacking to aid practitioners in their day-to-day activities. The highly data-driven nature of machine learning makes data equivalent to code in traditional software. The notion of code smells is critical in software engineering to identify early indications of potential bugs, sources of technical debt and weak design choices. Code smells have existed for over 30 years. A large body of scientific work has catalogued the different smells, the context in which they occur and their potential side-effects. To the best of our knowledge, such a catalogue however does not exist for data science.
i
bf78b321-0601-405f-89ed-7a85e8493a9c
RQ1. What are the recurrent data quality issues that appear in public datasets? Analogous to code smells, we introduce the notion of data smells. Data smells are anti-patterns in datasets that indicate early signs of problems or technical debt. RQ2. What is the prevalence of such data quality issues in public datasets? We create a catalogue of 14 data smells by analysing 25 popular public datasets (our analysis of the datasets can be found on Figshare: https://figshare.com/s/fd608796dd65f0808e7e). The catalogue also presents real-world examples of the smells along with refactoring suggestions to circumvent the problems. Additionally, we plan to publish the catalogue online under a Creative Commons license in the hope that students and practitioners find it valuable.
i
419fba26-a9ef-4ae3-b3a1-563a2b9feda6
The remainder of the paper is structured as follows. Section  provides an overview of related concepts and prior work. The methodology followed by this paper is presented in Section , followed by the results in Section . The paper concludes with a discussion of the results, limitations, and future work. <FIGURE>
i
898863d5-ced0-467c-91b3-c12fe0bc5c13
Code smells were originally proposed by Kent Beck in the 1990s and later popularised by [1]} in his book Refactoring [1]}, [3]}. Code smells are indications of potential problems in the code that require engineers to investigate further. Common code smells include bloated code such as large classes and long methods, redundant code such as duplicate code and dead code paths, and excessive coupling such as feature envy [1]}. Code smells have been widely adopted by the software engineering community to improve the design and quality of codebases. The notion of code smells has also been extended to other areas such as testing [5]}, [6]}, [7]}, bug tracking [8]}, code review [9]}, and database management systems [10]}, [11]}, [12]}. Code smells, however, still suffer from a lack of generalisability over a large population, as most smells are subjective to the developer, team, or organisation.
w
2776f17f-11e2-4970-aef4-5fd84cbdd92c
Data validation is a well-established field of research with roots in Database Management Systems (DBMS). With the wide adoption of data-driven decision-making by businesses, significant efforts have been made towards automated data cleaning and quality assurance [1]}, [2]}, [3]}, [4]}. In the context of machine learning, several tools and techniques have been proposed for improving data quality and automating data validation [5]}, [6]}, [7]}, [8]}, [9]}, [10]}. [11]} present a data linting tool in the context of Deep Neural Networks (DNNs). The tool checks the training data for potential errors at both the dataset and feature level. The paper presents empirical evidence of applying the linter to over 600 open source datasets from Kaggle, along with several proprietary Google datasets. The results indicate that such a tool is useful for new machine learning practitioners and for educational purposes [11]}. Although there is some overlap between the data linter by [11]} and our data smells project, we argue that [11]} did not follow a systematic approach to collecting the linting rules. Our work is complementary to data linters, as our approach exhaustively extracts potential data quality issues from datasets. Our catalogue of data smells can be seen as a framework for systematically extending existing data linting and validation tools, or for creating new ones.
w
7d47baf7-2fab-4d59-85b6-1685f032f042
AI engineering is a relatively young discipline of software engineering (SE) research. The primary focus of the field is to compare and contrast machine learning systems with traditional software systems and to adopt best practices from the SE community. The seminal paper by [1]} was the first to recognise that machine learning systems accumulate technical debt faster than traditional software [1]}. This accelerated rate of technical debt accumulation is due to the highly tangled nature of machine learning models and their data. Machine learning is data-centric, as each problem, which requires a new dataset or a combination of existing ones, needs to be addressed individually [1]}, [4]}, [5]}, [6]}, [7]}, [8]}.
w
db76fec5-9630-407b-afc9-6bb8e1101698
Data scientists spend the majority of their time working with data yet, unlike software engineers, lack tools that can aid them in their analysis [1]}, [2]}. This study proposes a catalogue of data smells that can benefit practitioners and be used as a framework for the development of tools in the future. <TABLE>
w
0c952056-114c-4fd8-9bac-e2ad108a12d9
This section presents the results obtained from the analysis of public datasets. The most recurrent data quality issues are presented first. A catalogue of data smells showing the prevalence of such data quality issues is presented next (see RQ1 and RQ2 in Section ). This study analysed 25 public datasets, from which 14 data smells were discovered. We group the smells into 4 distinct categories based on their similarity, as listed below.
r
d7cc7695-fab0-44f7-9cd7-6de163df70bd
Redundant value smells: smells which occur due to the presence of features that do not contribute any new information.
Categorical value smells: smells which occur due to the presence of features containing categorical data.
Missing value smells: smells which occur due to the absence of values in a dataset.
String value smells: smells which occur due to the presence of features containing string-type data.
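The following pandas sketch illustrates one simple heuristic check per category; the thresholds and checks are our own illustrations and do not cover the full catalogue of 14 smells:

```python
import pandas as pd

def quick_smell_report(df: pd.DataFrame) -> dict:
    """Flag simple instances of the four smell categories in a DataFrame."""
    report = {}
    # Redundant value smells: constant columns contribute no new information.
    report["constant_columns"] = [
        c for c in df.columns if df[c].nunique(dropna=False) <= 1
    ]
    # Categorical value smells: low-cardinality string columns left untyped.
    report["likely_categorical"] = [
        c for c in df.select_dtypes(include="object")
        if df[c].nunique() < 0.05 * len(df)
    ]
    # Missing value smells: columns containing any missing entries.
    report["missing_values"] = df.columns[df.isna().any()].tolist()
    # String value smells: string columns with stray leading/trailing spaces.
    report["untrimmed_strings"] = [
        c for c in df.select_dtypes(include="object")
        if df[c].dropna().astype(str).str.strip().ne(df[c].dropna().astype(str)).any()
    ]
    return report
```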
r