_id
stringlengths
36
36
text
stringlengths
5
665k
marker
stringlengths
3
6
marker_offsets
sequence
label
stringlengths
28
32
2cdb2102-120e-4757-bcad-3f0490723521
Statistical analysis with many features is an increasingly common practice. It is tedious to perform model diagnostics when association with a large number of features are being explored and for this reason, model diagnostics is often overlooked. As we have illustrated in sections and , the proposed maximal permutation test can be robust to outliers and offers a general blackbox method for making a decision about association without necessarily performing such diagnostics. We employed the maximal permutation test here using chi-square test as the underlying test at each cutpoint. For association of NSCLC recurrence with preoperative levels of the HE4 marker, Figure REF shows a plot of \(p-\) values obtained at different cut points for the original sequence of the data. This process is repeated for the permuted sequences to obtain the permutation distribution of the test statistic. For comparison, we also report \(p\) -values based on [1]}, [2]} and modified Bonferroni [3]} approaches. Note that the [2]} adjustment is known to be similar to the [1]} adjustment for larger \(p\) -values. As we noted in our simulation studies, these adjustments are often overly conservative and have less power. For association with NSCLC recurrence, the proposed maximal permutation test reports a \(p\) -value of \(0.008\) for human epididymis secretory protein 4 (HE4) and \(p\) -value of \(0.05\) for Carcino Embryonic Antigen (CEA). respectively. After adjusting for multiplicity by the [6]} False Discovery Rate (FDR) approach, these \(p\) -values are respectively \(0.08\) and \(0.16\) but the multiplicity adjustment maintains the ordering of the \(p\) -values, and biomarkers HE4 and CEA still remain at the top among 10 markers ranked by adjusted \(p-\) value.
[2]
[ [ 956, 959 ], [ 1016, 1019 ] ]
https://openalex.org/W2028896946
993b5008-eb04-4f2f-a2d3-5e829c215553
Statistical analysis with many features is an increasingly common practice. It is tedious to perform model diagnostics when association with a large number of features are being explored and for this reason, model diagnostics is often overlooked. As we have illustrated in sections and , the proposed maximal permutation test can be robust to outliers and offers a general blackbox method for making a decision about association without necessarily performing such diagnostics. We employed the maximal permutation test here using chi-square test as the underlying test at each cutpoint. For association of NSCLC recurrence with preoperative levels of the HE4 marker, Figure REF shows a plot of \(p-\) values obtained at different cut points for the original sequence of the data. This process is repeated for the permuted sequences to obtain the permutation distribution of the test statistic. For comparison, we also report \(p\) -values based on [1]}, [2]} and modified Bonferroni [3]} approaches. Note that the [2]} adjustment is known to be similar to the [1]} adjustment for larger \(p\) -values. As we noted in our simulation studies, these adjustments are often overly conservative and have less power. For association with NSCLC recurrence, the proposed maximal permutation test reports a \(p\) -value of \(0.008\) for human epididymis secretory protein 4 (HE4) and \(p\) -value of \(0.05\) for Carcino Embryonic Antigen (CEA). respectively. After adjusting for multiplicity by the [6]} False Discovery Rate (FDR) approach, these \(p\) -values are respectively \(0.08\) and \(0.16\) but the multiplicity adjustment maintains the ordering of the \(p\) -values, and biomarkers HE4 and CEA still remain at the top among 10 markers ranked by adjusted \(p-\) value.
[6]
[ [ 1494, 1497 ] ]
https://openalex.org/W2110065044
5ad33977-a30a-4d92-968e-afd1ac764834
Note that CLIQUE is decision problem whose proof certificate would be a set of vertices in \(G\) on which the induced subgraph is claimed to be a cliqueA clique is a complete subgraph. of size at least k. Verifying that claim should be as easy as verifying that all vertices are valid, pairwise-adjacent in \(G\) and that their count is at least k. All of that can be done in polynomial time making CLIQUE an NP problem. In fact, it was also proven to be NP-complete [1]}. Now let's bring back the halting problem. HALT is surely not in NP because it is undecidable, hence undecidable by an NTM. Despite the apparent irrelevance of HALT to CLIQUE, we can, in fact, find a reduction from CLIQUE to HALT.
[1]
[ [ 469, 472 ] ]
https://openalex.org/W2401610261
c59a285e-8056-4fea-80cd-ccde8905af36
In the field of theoretical complexity, SAT holds a unique place for several reasons including the fact that it has the simplest formulation of all other NP-complete problems, and it was the very first to be proven NP-complete along with 21 graph and combinatorial problems by [1]}.
[1]
[ [ 277, 280 ] ]
https://openalex.org/W2401610261
bfaed30b-9504-456e-a27d-420940471594
The good news is there are algorithms that can transform an arbitrary Boolean formula into an equisatisfiable one in CNF in polynomial time such as the one devised by Karp [1]}. However, the bad news is that no similar polynomial transformation is known for DNF, which is the much easier one to solve. In fact, the existence of such reduction would imply \(\text{P}=\text{NP}\) . What makes efficient CNF reductions useful is that we can now transform the SAT problem into CNF-SAT without making extra assumptions about its complexity class since the conversion is PTIME anyways, so rest assured; CNF-SAT is just as hard! That is why SAT is generally used to mean CNF-SAT.
[1]
[ [ 172, 175 ] ]
https://openalex.org/W2401610261
5d39ad84-09cf-4380-b31e-0d37654c9b49
One of the most common feasible SATs is 2-SAT which is a special case of CNF-SAT where each clause contains at most 2 literals. This variant can be efficiently solved by first writing each clause in its implicative normal form that is as simple as writing each disjunction as an equivalent implication. For instance, \(x \vee y\) can be written as \(\bar{x} \Rightarrow y\) or \(\bar{y} \Rightarrow x\) . Now, since all clauses are in conjunction, we can just chain all implications together and see if we obtain an implicative contradiction in form of \(x \Rightarrow ... \Rightarrow \bar{x}\) AND \(\bar{x} \Rightarrow ... \Rightarrow x\) , in which case the formula would be unsatisfiable. Performing the latter check has a linear-time solution by [1]} through constructing a implication graph where implications between literals correspond to edges between vertices. The algorithms is based on graph theory concepts such as strongly connected components and topological sorting (for finding an assignment) which are defined as follows.
[1]
[ [ 754, 757 ] ]
https://openalex.org/W2031823362
2cbd32f0-6fb7-4f7b-943a-d5fe9db8909c
The following proof of CLT is based on the works of [1]}, [2]} and it uses a few concepts that were introduced earlier in this paper.
[1]
[ [ 52, 55 ] ]
https://openalex.org/W2061344791
b60149ee-dc9f-490d-ae4f-2c21772af185
The large model has recently been extremely successful in machine learning and data mining with the growth of data volume. A great number of complex deep neural networks [1]}, [2]}, [3]}, [4]}, [5]} have been devised to solve real-world applications problem. However, many practical applications can only provide limited computing resources, i.e. limited storage devices and unaccelerated hardware such as CPUs. These constrain the model complexity and make model training extremely time-consuming under the huge amount of training datasets. To solve these issues, this paper explores to accelerate model training and reduce model storage costs simultaneously for large-scale machine learning.
[3]
[ [ 182, 185 ] ]
https://openalex.org/W2163605009
8cbd988f-ec70-42d5-b6b1-a344feecf2d7
The large model has recently been extremely successful in machine learning and data mining with the growth of data volume. A great number of complex deep neural networks [1]}, [2]}, [3]}, [4]}, [5]} have been devised to solve real-world applications problem. However, many practical applications can only provide limited computing resources, i.e. limited storage devices and unaccelerated hardware such as CPUs. These constrain the model complexity and make model training extremely time-consuming under the huge amount of training datasets. To solve these issues, this paper explores to accelerate model training and reduce model storage costs simultaneously for large-scale machine learning.
[5]
[ [ 194, 197 ] ]
https://openalex.org/W2970302319
cc417771-64ca-43b8-ae35-db1bedae44a5
Distributed training offers a potential solution to solve the issue of long training time [1]}, [2]}, [3]}, [4]}, [5]}. The data parallelism [6]}, [7]} is one of the most popular framework in distributed training. As shown in Fig. REF , data parallelism framework has one or more computer workers connected via some communication networks, while multiple model replicas are trained in parallel on each worker. A global parameter server ensures the consistency among replicas of each worker by collecting all gradients computed from different workers and then averaging them to update parameters. The goal is to optimize a global objective function formed by the average of a series of local loss functions derived from local computation on each worker. <FIGURE>
[2]
[ [ 96, 99 ] ]
https://openalex.org/W1442374986
9c663190-92aa-446b-a462-3b8671637cff
Distributed training offers a potential solution to solve the issue of long training time [1]}, [2]}, [3]}, [4]}, [5]}. The data parallelism [6]}, [7]} is one of the most popular framework in distributed training. As shown in Fig. REF , data parallelism framework has one or more computer workers connected via some communication networks, while multiple model replicas are trained in parallel on each worker. A global parameter server ensures the consistency among replicas of each worker by collecting all gradients computed from different workers and then averaging them to update parameters. The goal is to optimize a global objective function formed by the average of a series of local loss functions derived from local computation on each worker. <FIGURE>
[7]
[ [ 147, 150 ] ]
https://openalex.org/W2963964896
22e1f670-a0e0-4d56-bf32-22fab5d7b2a6
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[3]
[ [ 648, 651 ] ]
https://openalex.org/W2963228337
b12f3a87-aa47-4a82-9b5d-f4823be3d672
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[4]
[ [ 654, 657 ] ]
https://openalex.org/W3023944502
6711b625-2c77-4152-b478-52fa92f60d5f
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[8]
[ [ 1110, 1113 ] ]
https://openalex.org/W2107438106
0de9f2a3-ee25-4a99-963d-1272d29fd916
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[10]
[ [ 1367, 1371 ] ]
https://openalex.org/W2469490737
8950c06d-ae0e-4c14-bfe1-64c4dc1d40dd
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[11]
[ [ 1382, 1386 ] ]
https://openalex.org/W2769644379
b3862cf3-88c3-4147-8c2a-42121888cd75
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[12]
[ [ 1543, 1547 ] ]
https://openalex.org/W2407022425
3a3d958b-a3f3-486e-96c6-122c791672a4
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[13]
[ [ 1570, 1574 ] ]
https://openalex.org/W2964004663
61e4636b-184d-4bad-b1b1-c233dde82eac
In the above distributed framework, time consumption is mainly caused by computation and communication. While increasing the number of workers helps to reduce computation time, the communication overhead for exchanging training information, e.g., stochastic gradients among different workers, is a key bottleneck for training efficiency, especially for high delay communication network. Even worse, some slow workers can adversely affect the progress of fast workers, leading to a drastic slowdown of the overall convergence process [1]}. Asynchronous communication [2]}, alleviate the negative effect of slow machines and decentralized algorithms [3]}, [4]} get rid of the dependency on high communication costs on the central node. But, the parameter variance among workers may deteriorate model accuracy[5]}. In this paper, we concentrate on the data parallelism framework which belongs to synchronous centralized algorithms. There are many types of research focusing on how to save communication costs. Some studies [6]}, [7]} propose to reduce the number of training rounds. For example, one may use SVRG [8]}, [9]} to periodically calculate an accurate gradient estimate to reduce the variance introduced by stochastic sampling, which, however, is an operation with high calculation costs. Other methods focus on reducing the precision of gradients. DoReFa-Net [10]} and QSGD [11]} quantize gradients into fixed-point numbers, so that much fewer bits are needed to be transmitted. More aggressive quantization methods, such as 1-bit-sgd [12]} and ternary gradient [13]}, sacrifice a certain degree of expression power to achieve the purpose of reducing communication costs. [14]} studies the double squeeze of the gradient that not only the local gradient is compressed but also the synchronous gradient.
[14]
[ [ 1680, 1684 ] ]
https://openalex.org/W2946815927
b00784d0-52a4-4ec1-a6c0-89fd2b6e92e3
The size of the model is not only a determinant of memory usage but also an important factor of save the communication cost in distributed training. Although in data parallelism, we transmit the gradient instead of the model parameters. But for a sparse model , we can avoid transmitting unnecessary gradients corresponding to the parameters that always be 0. The combination of the above bandwidth reduction methods and parallel stochastic gradient descent (PSGD) has been demonstrated to be effective for model training without model sparsity constraints. Therefore, inspired by some online algorithms such as adaptive composite mirror descent and adaptive regularized dual averaging [1]}, which perform well in generating sparse models, we design two corresponding communication efficient sparse model training algorithms for distributed framework, named quantized composite mirror descent adaptive subgradient (QCMD adagrad) and quantized regularized dual averaging adaptive subgradient (QRDA adagrad).
[1]
[ [ 686, 689 ] ]
https://openalex.org/W2146502635
c74e6473-3fef-4fb8-8d3a-03d18e6cc9e0
To be specific, we define the distributed objective not only under a nonsmooth constraint to keep the model sparse, but also construct a proximal function base on a quantized gradient to achieve a balance between communication costs, accuracy, and model sparsity. The quantization is conducted not only for local gradients computed by all local workers but also for the aggregated gradient like the double squeeze [1]}. We proof the convergence rate for distributed QCMD adagrad and QRDA adagrad is \(O(\frac{1}{\sqrt{T}})\) . Besides, our theoretical analysis shows that quantization introduces additional noise which affects the model convergence and sparsity. Hence, we apply a threshold quantization method with small error to the gradient to reduce the influence of noise on model training.
[1]
[ [ 414, 417 ] ]
https://openalex.org/W2946815927
37c5a649-b7d3-4df2-b36c-89ea4140e3f6
Gradients sparsification. Gradient sparsification [1]} imposes sparsity onto gradients, where only a small fraction of elements of gradients are exchanged across workers based on their importance. Lin et al. [2]} find most of the gradient exchange in distributed SGD are redundant, and propose a deep gradient compression method to cut the gradient size of ResNet-50 from 97MB to 0.35MB, and that of DeepSpeach from 488MB to 0.74MB. Aji et al. [3]} sparsify gradients by removing the \(R\%\) smallest gradients regarding absolute value. Simlarly, [4]} propose sparsified SGD with top k-sparsification. Wangni et al. [5]} analyse how to achieve the optimal sparseness under certain variance constraints and the unbiasedness of the gradient vector is kept after sparsification.
[1]
[ [ 50, 53 ] ]
https://openalex.org/W2407022425
4f10bdc4-0fb8-4c75-97e7-7738642f374b
Gradients sparsification. Gradient sparsification [1]} imposes sparsity onto gradients, where only a small fraction of elements of gradients are exchanged across workers based on their importance. Lin et al. [2]} find most of the gradient exchange in distributed SGD are redundant, and propose a deep gradient compression method to cut the gradient size of ResNet-50 from 97MB to 0.35MB, and that of DeepSpeach from 488MB to 0.74MB. Aji et al. [3]} sparsify gradients by removing the \(R\%\) smallest gradients regarding absolute value. Simlarly, [4]} propose sparsified SGD with top k-sparsification. Wangni et al. [5]} analyse how to achieve the optimal sparseness under certain variance constraints and the unbiasedness of the gradient vector is kept after sparsification.
[2]
[ [ 208, 211 ] ]
https://openalex.org/W2963803379
621cf137-1a42-4d9b-aea6-e7b0452c7b56
Gradients sparsification. Gradient sparsification [1]} imposes sparsity onto gradients, where only a small fraction of elements of gradients are exchanged across workers based on their importance. Lin et al. [2]} find most of the gradient exchange in distributed SGD are redundant, and propose a deep gradient compression method to cut the gradient size of ResNet-50 from 97MB to 0.35MB, and that of DeepSpeach from 488MB to 0.74MB. Aji et al. [3]} sparsify gradients by removing the \(R\%\) smallest gradients regarding absolute value. Simlarly, [4]} propose sparsified SGD with top k-sparsification. Wangni et al. [5]} analyse how to achieve the optimal sparseness under certain variance constraints and the unbiasedness of the gradient vector is kept after sparsification.
[3]
[ [ 444, 447 ] ]
https://openalex.org/W3101036738
b8c995ae-f5a7-49b8-b9d9-e094003f630d
Gradients sparsification. Gradient sparsification [1]} imposes sparsity onto gradients, where only a small fraction of elements of gradients are exchanged across workers based on their importance. Lin et al. [2]} find most of the gradient exchange in distributed SGD are redundant, and propose a deep gradient compression method to cut the gradient size of ResNet-50 from 97MB to 0.35MB, and that of DeepSpeach from 488MB to 0.74MB. Aji et al. [3]} sparsify gradients by removing the \(R\%\) smallest gradients regarding absolute value. Simlarly, [4]} propose sparsified SGD with top k-sparsification. Wangni et al. [5]} analyse how to achieve the optimal sparseness under certain variance constraints and the unbiasedness of the gradient vector is kept after sparsification.
[4]
[ [ 548, 551 ] ]
https://openalex.org/W2890924858
dbfde172-6b77-4491-8e9a-1b4009e78ad4
Gradients sparsification. Gradient sparsification [1]} imposes sparsity onto gradients, where only a small fraction of elements of gradients are exchanged across workers based on their importance. Lin et al. [2]} find most of the gradient exchange in distributed SGD are redundant, and propose a deep gradient compression method to cut the gradient size of ResNet-50 from 97MB to 0.35MB, and that of DeepSpeach from 488MB to 0.74MB. Aji et al. [3]} sparsify gradients by removing the \(R\%\) smallest gradients regarding absolute value. Simlarly, [4]} propose sparsified SGD with top k-sparsification. Wangni et al. [5]} analyse how to achieve the optimal sparseness under certain variance constraints and the unbiasedness of the gradient vector is kept after sparsification.
[5]
[ [ 617, 620 ] ]
https://openalex.org/W2963540381
521ff7f4-4de9-47d2-a5e7-ddc216381097
Gradients quantization. Gradient quantization replaces the original gradient with a small number of fixed values. Quantized SGD (QSGD) [1]} adjusts the number of bits of the exchanging gradients to balance the bandwidth and accuracy. More aggressive quantization methods, such as the binary representation [2]}, cut each component of the gradient to its sign. TernGrad [3]} uses three numerical levels {-1, 0, 1} and a scaler, e.g. maximum norm or \(l_2\) norm of the gradient, to replace the full precision gradient. This aggressive method can be regarded as a variant of QSGD. To reduce the influence of noise introduced by the aggressive quantization, Wu et al. [4]} utilize the accumulated quantization error to compensate the quantized gradient. Several applications, such as federated machine learning (ML) at the wireless edge [5]}, benefit from the error compensation. In addition, [6]} introduce a family of adaptive gradient quantization schemes which can enable linear convergence in any norm for gradient-descent-type algorithms. [7]} propose a quantized Newton’s method which is suitable for ill-conditioned but low-dimensional problems, as it reduces the communication complexity by a trade-off between the dependency on the dimension of the input features and its condition number. In [7]}, lattice quantization [9]} that reduces the variance is adapted to quantize the covariance matrix.
[1]
[ [ 135, 138 ] ]
https://openalex.org/W2769644379
8c53a06d-f02e-4c56-9387-9049741bb447
Gradients quantization. Gradient quantization replaces the original gradient with a small number of fixed values. Quantized SGD (QSGD) [1]} adjusts the number of bits of the exchanging gradients to balance the bandwidth and accuracy. More aggressive quantization methods, such as the binary representation [2]}, cut each component of the gradient to its sign. TernGrad [3]} uses three numerical levels {-1, 0, 1} and a scaler, e.g. maximum norm or \(l_2\) norm of the gradient, to replace the full precision gradient. This aggressive method can be regarded as a variant of QSGD. To reduce the influence of noise introduced by the aggressive quantization, Wu et al. [4]} utilize the accumulated quantization error to compensate the quantized gradient. Several applications, such as federated machine learning (ML) at the wireless edge [5]}, benefit from the error compensation. In addition, [6]} introduce a family of adaptive gradient quantization schemes which can enable linear convergence in any norm for gradient-descent-type algorithms. [7]} propose a quantized Newton’s method which is suitable for ill-conditioned but low-dimensional problems, as it reduces the communication complexity by a trade-off between the dependency on the dimension of the input features and its condition number. In [7]}, lattice quantization [9]} that reduces the variance is adapted to quantize the covariance matrix.
[2]
[ [ 306, 309 ] ]
https://openalex.org/W2407022425
0e4a551b-9331-4ef4-904c-3f26e3d5747a
Gradients quantization. Gradient quantization replaces the original gradient with a small number of fixed values. Quantized SGD (QSGD) [1]} adjusts the number of bits of the exchanging gradients to balance the bandwidth and accuracy. More aggressive quantization methods, such as the binary representation [2]}, cut each component of the gradient to its sign. TernGrad [3]} uses three numerical levels {-1, 0, 1} and a scaler, e.g. maximum norm or \(l_2\) norm of the gradient, to replace the full precision gradient. This aggressive method can be regarded as a variant of QSGD. To reduce the influence of noise introduced by the aggressive quantization, Wu et al. [4]} utilize the accumulated quantization error to compensate the quantized gradient. Several applications, such as federated machine learning (ML) at the wireless edge [5]}, benefit from the error compensation. In addition, [6]} introduce a family of adaptive gradient quantization schemes which can enable linear convergence in any norm for gradient-descent-type algorithms. [7]} propose a quantized Newton’s method which is suitable for ill-conditioned but low-dimensional problems, as it reduces the communication complexity by a trade-off between the dependency on the dimension of the input features and its condition number. In [7]}, lattice quantization [9]} that reduces the variance is adapted to quantize the covariance matrix.
[3]
[ [ 369, 372 ] ]
https://openalex.org/W2964004663
95f6214f-46a5-4f50-a2bd-f4145ce2149c
Gradients quantization. Gradient quantization replaces the original gradient with a small number of fixed values. Quantized SGD (QSGD) [1]} adjusts the number of bits of the exchanging gradients to balance the bandwidth and accuracy. More aggressive quantization methods, such as the binary representation [2]}, cut each component of the gradient to its sign. TernGrad [3]} uses three numerical levels {-1, 0, 1} and a scaler, e.g. maximum norm or \(l_2\) norm of the gradient, to replace the full precision gradient. This aggressive method can be regarded as a variant of QSGD. To reduce the influence of noise introduced by the aggressive quantization, Wu et al. [4]} utilize the accumulated quantization error to compensate the quantized gradient. Several applications, such as federated machine learning (ML) at the wireless edge [5]}, benefit from the error compensation. In addition, [6]} introduce a family of adaptive gradient quantization schemes which can enable linear convergence in any norm for gradient-descent-type algorithms. [7]} propose a quantized Newton’s method which is suitable for ill-conditioned but low-dimensional problems, as it reduces the communication complexity by a trade-off between the dependency on the dimension of the input features and its condition number. In [7]}, lattice quantization [9]} that reduces the variance is adapted to quantize the covariance matrix.
[4]
[ [ 666, 669 ] ]
https://openalex.org/W2964163156
29a4b376-89cd-42d3-a67f-c50a44b0c805
Gradients quantization. Gradient quantization replaces the original gradient with a small number of fixed values. Quantized SGD (QSGD) [1]} adjusts the number of bits of the exchanging gradients to balance the bandwidth and accuracy. More aggressive quantization methods, such as the binary representation [2]}, cut each component of the gradient to its sign. TernGrad [3]} uses three numerical levels {-1, 0, 1} and a scaler, e.g. maximum norm or \(l_2\) norm of the gradient, to replace the full precision gradient. This aggressive method can be regarded as a variant of QSGD. To reduce the influence of noise introduced by the aggressive quantization, Wu et al. [4]} utilize the accumulated quantization error to compensate the quantized gradient. Several applications, such as federated machine learning (ML) at the wireless edge [5]}, benefit from the error compensation. In addition, [6]} introduce a family of adaptive gradient quantization schemes which can enable linear convergence in any norm for gradient-descent-type algorithms. [7]} propose a quantized Newton’s method which is suitable for ill-conditioned but low-dimensional problems, as it reduces the communication complexity by a trade-off between the dependency on the dimension of the input features and its condition number. In [7]}, lattice quantization [9]} that reduces the variance is adapted to quantize the covariance matrix.
[5]
[ [ 835, 838 ] ]
https://openalex.org/W2907379776
474b38ec-5c43-41b7-b978-650b4783a211
Gradients quantization. Gradient quantization replaces the original gradient with a small number of fixed values. Quantized SGD (QSGD) [1]} adjusts the number of bits of the exchanging gradients to balance the bandwidth and accuracy. More aggressive quantization methods, such as the binary representation [2]}, cut each component of the gradient to its sign. TernGrad [3]} uses three numerical levels {-1, 0, 1} and a scaler, e.g. maximum norm or \(l_2\) norm of the gradient, to replace the full precision gradient. This aggressive method can be regarded as a variant of QSGD. To reduce the influence of noise introduced by the aggressive quantization, Wu et al. [4]} utilize the accumulated quantization error to compensate the quantized gradient. Several applications, such as federated machine learning (ML) at the wireless edge [5]}, benefit from the error compensation. In addition, [6]} introduce a family of adaptive gradient quantization schemes which can enable linear convergence in any norm for gradient-descent-type algorithms. [7]} propose a quantized Newton’s method which is suitable for ill-conditioned but low-dimensional problems, as it reduces the communication complexity by a trade-off between the dependency on the dimension of the input features and its condition number. In [7]}, lattice quantization [9]} that reduces the variance is adapted to quantize the covariance matrix.
[6]
[ [ 891, 894 ] ]
https://openalex.org/W2917189468
924af0ed-4beb-4e75-8577-20112bb31fa7
Stochastic optimization. In the modern implementation of large-scale machine learning algorithms, stochastic gradient descent (SGD) is commonly used as the optimization method in distributed training frameworks because of its universality and high computational efficiency for each iteration. SGD intrinsically implies gradient noise, which helps to escape saddle points for non-convex problems, like neural networks [1]}, [2]}. However, when producing a sparse model, simply adding a subgradient of the \(l_1\) penalty to the gradient of the loss does not essentially produce parameters that are exactly zero. More sophisticated approaches such as composite mirror descent [3]}, [4]} and regularized dual averaging [5]} do succeed in introducing sparsity, but the sparsity of model is limited. Their adaptive subgradient extensions (Adagrad) with \(l_1\) regularization [6]} produce even better accuracy vs. sparsity tradeoffs. Compared with SGD which is very sensitive to the learning rate, Adagrad [6]} dynamically incorporates knowledge of the geometry of the data curvature of the loss function to adjust the learning rate of gradients. As a result, it requires no manual tuning of learning rate and appears robust to noisy gradient information and large-scale high-dimensional machine learning. <FIGURE>
[1]
[ [ 417, 420 ] ]
https://openalex.org/W2963092340
fc08db88-1cfe-46a2-94dc-cd9c894d87c5
Stochastic optimization. In the modern implementation of large-scale machine learning algorithms, stochastic gradient descent (SGD) is commonly used as the optimization method in distributed training frameworks because of its universality and high computational efficiency for each iteration. SGD intrinsically implies gradient noise, which helps to escape saddle points for non-convex problems, like neural networks [1]}, [2]}. However, when producing a sparse model, simply adding a subgradient of the \(l_1\) penalty to the gradient of the loss does not essentially produce parameters that are exactly zero. More sophisticated approaches such as composite mirror descent [3]}, [4]} and regularized dual averaging [5]} do succeed in introducing sparsity, but the sparsity of model is limited. Their adaptive subgradient extensions (Adagrad) with \(l_1\) regularization [6]} produce even better accuracy vs. sparsity tradeoffs. Compared with SGD which is very sensitive to the learning rate, Adagrad [6]} dynamically incorporates knowledge of the geometry of the data curvature of the loss function to adjust the learning rate of gradients. As a result, it requires no manual tuning of learning rate and appears robust to noisy gradient information and large-scale high-dimensional machine learning. <FIGURE>
[2]
[ [ 423, 426 ] ]
https://openalex.org/W2963317585
84427a79-46da-4c1d-824d-9b6f88fbdd5f
Stochastic optimization. In the modern implementation of large-scale machine learning algorithms, stochastic gradient descent (SGD) is commonly used as the optimization method in distributed training frameworks because of its universality and high computational efficiency for each iteration. SGD intrinsically implies gradient noise, which helps to escape saddle points for non-convex problems, like neural networks [1]}, [2]}. However, when producing a sparse model, simply adding a subgradient of the \(l_1\) penalty to the gradient of the loss does not essentially produce parameters that are exactly zero. More sophisticated approaches such as composite mirror descent [3]}, [4]} and regularized dual averaging [5]} do succeed in introducing sparsity, but the sparsity of model is limited. Their adaptive subgradient extensions (Adagrad) with \(l_1\) regularization [6]} produce even better accuracy vs. sparsity tradeoffs. Compared with SGD which is very sensitive to the learning rate, Adagrad [6]} dynamically incorporates knowledge of the geometry of the data curvature of the loss function to adjust the learning rate of gradients. As a result, it requires no manual tuning of learning rate and appears robust to noisy gradient information and large-scale high-dimensional machine learning. <FIGURE>
[4]
[ [ 681, 684 ] ]
https://openalex.org/W131378802
d26b471f-2c43-4a02-a100-4233fc3a23d9
Stochastic optimization. In the modern implementation of large-scale machine learning algorithms, stochastic gradient descent (SGD) is commonly used as the optimization method in distributed training frameworks because of its universality and high computational efficiency for each iteration. SGD intrinsically implies gradient noise, which helps to escape saddle points for non-convex problems, like neural networks [1]}, [2]}. However, when producing a sparse model, simply adding a subgradient of the \(l_1\) penalty to the gradient of the loss does not essentially produce parameters that are exactly zero. More sophisticated approaches such as composite mirror descent [3]}, [4]} and regularized dual averaging [5]} do succeed in introducing sparsity, but the sparsity of model is limited. Their adaptive subgradient extensions (Adagrad) with \(l_1\) regularization [6]} produce even better accuracy vs. sparsity tradeoffs. Compared with SGD which is very sensitive to the learning rate, Adagrad [6]} dynamically incorporates knowledge of the geometry of the data curvature of the loss function to adjust the learning rate of gradients. As a result, it requires no manual tuning of learning rate and appears robust to noisy gradient information and large-scale high-dimensional machine learning. <FIGURE>
[6]
[ [ 873, 876 ], [ 1003, 1006 ] ]
https://openalex.org/W2146502635
601f5b76-27b2-4d01-abce-efb8361f3217
Stochastic Optimization [1]} is a popular approach in training large-scale machine learning models. We denote the objective function as \(f({\bf x})\) , depending on the model parameter \({\bf x}\in \chi \) . The dataset could be split into \(M\) workers, where each has \(N\) data samples. We denote \(f_{m,n}\) as the loss function associated with the \(n^{th}\) sample on the \(m^{th}\) worker, and \(f({\bf x})\) is an average over all loss functions on all workers. \(f({\bf x})=\frac{1}{M} \frac{1}{N} \sum _{m,n} f_{m,n} ({\bf x})\)
[1]
[ [ 24, 27 ] ]
https://openalex.org/W3101017602
6ae8c60e-8114-4122-a3b5-3527f47ae1f5
Generate Sparse Model. Vanilla SGD is not particularly effective at producing a sparse model, even adding a subgradient of \(l_1\) penalty to the gradient of the loss. Some other approaches such as proximal gradient descent (also named composite mirror descent) [1]}, [2]} and regularized dual averaging [3]} introduce limited sparsity with \(l_1\) regularization. Their adaptive subgradient extensions, CMD adagrad and RDA adagrad, [4]} produce better accuracy vs. sparsity tradeoffs.
[2]
[ [ 269, 272 ] ]
https://openalex.org/W131378802
2b504835-a9f2-4f5a-b52a-9c4d4138919f
Generate Sparse Model. Vanilla SGD is not particularly effective at producing a sparse model, even adding a subgradient of \(l_1\) penalty to the gradient of the loss. Some other approaches such as proximal gradient descent (also named composite mirror descent) [1]}, [2]} and regularized dual averaging [3]} introduce limited sparsity with \(l_1\) regularization. Their adaptive subgradient extensions, CMD adagrad and RDA adagrad, [4]} produce better accuracy vs. sparsity tradeoffs.
[4]
[ [ 435, 438 ] ]
https://openalex.org/W2146502635
b76e5cde-dc72-486b-af10-3687958b49fe
The original proximal gradient method [1]} employs an immediate trade-off among the current gradient \(f_t^{\prime }({\bf x}_t)\) , the regularizer \(\phi \) and proximal function \(\psi \) . The proximal function \(\psi \) aims to keep \({\bf x}\) staying close to \({\bf x}_t\) and sometimes be simply set to \(\psi ({\bf x})=\frac{1}{2}||{\bf x}-{\bf x}_t||^2_2\) ). It makes the model parameters meet the assumption of steady change. To achieve better bound of regret, the CMD adagrad adopts \(\psi _t\) which varies with \(t\) . The update for CMD adagrad amounts to solving \({\bf x}_{t+1}=\arg \min _{{\bf x}\in \chi }{\eta f^{\prime }_t({\bf x}_t)^\top }{\bf x}+\eta \phi ({\bf x})+B_{\psi _t}({\bf x},{\bf x}_t)\)
[1]
[ [ 38, 41 ] ]
https://openalex.org/W131378802
455b6990-21b1-4d5b-a514-95823d675e75
Although gradient quantization can reduce the cost of gradient communication in distributed training, it also introduces additional errors, which affect the convergence of the model and the sparsity of parameters in QCMD adagrad and QRDA adagrad. As an unbiased gradient quantization method, TernGrad [1]} has already made a good balance between the encoding cost and accuracy of the general model. However, when it comes to the sparse model, a large quantization error still leads to slower convergence of the \(l_1\) norm as a part of the objective function, which affects the sparsity of the model. In order to mitigate this problem, we apply the threshold quantization method to the QCMD adagrad and QRDA adagrad.
[1]
[ [ 301, 304 ] ]
https://openalex.org/W2964004663
f631a603-46e4-4328-b3d3-4410dbfb1bc0
Threshold quantization is an existing quantization method used for model quantization in [1]}. We apply it to gradient quantization since it produces less error than Terngrad[2]}. In this section, we use \({\bf v}^t\) to represent the (stochastic) gradient in the \(t_{th}\) iteration. Fig. REF gives a brief explanation of threshold quantization, and more analysis is provided below. <FIGURE>
[1]
[ [ 89, 92 ] ]
https://openalex.org/W2405920868
4d25e6be-db34-4a69-937b-f465f7a636ad
Threshold quantization is an existing quantization method used for model quantization in [1]}. We apply it to gradient quantization since it produces less error than Terngrad[2]}. In this section, we use \({\bf v}^t\) to represent the (stochastic) gradient in the \(t_{th}\) iteration. Fig. REF gives a brief explanation of threshold quantization, and more analysis is provided below. <FIGURE>
[2]
[ [ 174, 177 ] ]
https://openalex.org/W2964004663
783b3c03-1318-475f-a97c-fb70c933cdd0
We try to improve the computational efficiency of the threshold quantization procedure without harming the optimality of the coding described in Eq. (REF ). This improvement begins with the assumption that the gradient follows the Gaussian distribution. In case of \({\bf v}^t_i\) follows \(N(0,\sigma ^2)\) , Li and Liu [1]} have given an approximate solution for the optimal threshold \(\triangle ^*\) by \(0.6\sigma \) which equals \(0.75\cdot \mathbb {E}(|{\bf v}^t_i|)\approx \frac{0.75}{d}\sum _{i=1}^{d}|{\bf v}_i^t|\) . We also find in our experiment that most of the gradients satisfy the previous assumption. Fig. REF shows the experimental result for training AlexNet [2]} on two workers. The left column visualizes the first convolutional layer and the right one visualizes the first fully-connected layer. The distribution of the original floating gradients is close to the Gaussian distribution for both convolutional and fully-connected layers. Based on this observation, we simply use \(\frac{0.75}{d}\sum _{i=1}^{d}|{\bf v}_i^t|\) to approximate the optimal threshold to avoid the expensive cost of solving the optimal threshold \(\Delta ^*\) every iteration.
[1]
[ [ 322, 325 ] ]
https://openalex.org/W2405920868
b0f7eca2-ce9b-43d9-9dae-f2b50bee0597
We try to improve the computational efficiency of the threshold quantization procedure without harming the optimality of the coding described in Eq. (REF ). This improvement begins with the assumption that the gradient follows the Gaussian distribution. In case of \({\bf v}^t_i\) follows \(N(0,\sigma ^2)\) , Li and Liu [1]} have given an approximate solution for the optimal threshold \(\triangle ^*\) by \(0.6\sigma \) which equals \(0.75\cdot \mathbb {E}(|{\bf v}^t_i|)\approx \frac{0.75}{d}\sum _{i=1}^{d}|{\bf v}_i^t|\) . We also find in our experiment that most of the gradients satisfy the previous assumption. Fig. REF shows the experimental result for training AlexNet [2]} on two workers. The left column visualizes the first convolutional layer and the right one visualizes the first fully-connected layer. The distribution of the original floating gradients is close to the Gaussian distribution for both convolutional and fully-connected layers. Based on this observation, we simply use \(\frac{0.75}{d}\sum _{i=1}^{d}|{\bf v}_i^t|\) to approximate the optimal threshold to avoid the expensive cost of solving the optimal threshold \(\Delta ^*\) every iteration.
[2]
[ [ 683, 686 ] ]
https://openalex.org/W2163605009
3c1b956a-4779-49a2-bb9b-890a701827e8
To obtain the regret bound for QCMD adagrad and QRDA adagrad, we provide the follow lemma, which has been proved in Lemma 4 of [1]}. For any \(\delta \ge 0\) , the Mahalanobis norm \(||\cdot ||_{\psi _t}=\sqrt{<\cdot ,{\bf H}_t\cdot >}\) and \(||\cdot ||_{\psi ^*_t}=\sqrt{<\cdot ,\frac{1}{{\bf H}_t}\cdot >}\) be the associated dual norm, we have \(\begin{split}\frac{1}{2}\sum _{t=1}^T||{\bf q}_t||^2_{\psi ^*_t}\le \sum _{i=1}^d ||{\bf q}_{1:T,i}||_2\end{split}\)
[1]
[ [ 127, 130 ] ]
https://openalex.org/W2146502635
7853b279-b5a4-48c0-b288-1a6ad73964bb
Baselines. We compare QCMD adagrad and QRDA adagrad with several sparse model distributed optimization methods, including 32 bits Prox-gd[1]}, 32 bits CMD adagrad[2]}, 32 bits RDA adagrad[2]} and their corresponding ternary variants[4]}. \({\dagger }\) is to mark the methods that only the local gradients are quantized but the synchronous gradient.
[1]
[ [ 137, 140 ] ]
https://openalex.org/W131378802
615cc396-95b9-4d73-8f6d-abd2f77ba0b3
Baselines. We compare QCMD adagrad and QRDA adagrad with several sparse model distributed optimization methods, including 32 bits Prox-gd[1]}, 32 bits CMD adagrad[2]}, 32 bits RDA adagrad[2]} and their corresponding ternary variants[4]}. \({\dagger }\) is to mark the methods that only the local gradients are quantized but the synchronous gradient.
[2]
[ [ 162, 165 ], [ 187, 190 ] ]
https://openalex.org/W2146502635
b1a3ae54-eb4b-456f-bf25-570eee88fb00
Baselines. We compare QCMD adagrad and QRDA adagrad with several sparse model distributed optimization methods, including 32 bits Prox-gd[1]}, 32 bits CMD adagrad[2]}, 32 bits RDA adagrad[2]} and their corresponding ternary variants[4]}. \({\dagger }\) is to mark the methods that only the local gradients are quantized but the synchronous gradient.
[4]
[ [ 232, 235 ] ]
https://openalex.org/W2964004663
a4da57b4-ffcc-4ff8-8a9b-dee21cefa9ef
Implementation Details. All the experiments are carried out in a distributed framework that the network bandwidth is 100MBps. For the linear models training, each worker only utilizes the CPUs. For the training of convolutional neural networks, each worker is allocated with 1 NVIDIA Tesla P40 GPU. The methods are evaluated on four publicly available datasets. news20 and rcv1 are text datasets with a high dimension of input features from LIBSVM [1]}. MNIST [2]} is for handwritten digits classification problem and CIFAR10 [3]} is for image classification problem. Table REF shows the details of datasets. For news20 and rcv1, the \(\ell _1\) norm regularized logistic regression model are trained. As for multi-classification problems, we train LeNet for MNIST[2]} and AlexNet [5]} for CIFAR10. To generate a network with large sparsity, the batch normalization layer is added before each convolution layer and fully connected layer. The code is implemented via tensorflow. Experimental results are averaged over 5 runs with a random initialization seed. <TABLE><TABLE><FIGURE>
[2]
[ [ 460, 463 ], [ 766, 769 ] ]
https://openalex.org/W2112796928
8ae00263-b104-409b-b4f3-af9bbdfd0591
Implementation Details. All the experiments are carried out in a distributed framework that the network bandwidth is 100MBps. For the linear models training, each worker only utilizes the CPUs. For the training of convolutional neural networks, each worker is allocated with 1 NVIDIA Tesla P40 GPU. The methods are evaluated on four publicly available datasets. news20 and rcv1 are text datasets with a high dimension of input features from LIBSVM [1]}. MNIST [2]} is for handwritten digits classification problem and CIFAR10 [3]} is for image classification problem. Table REF shows the details of datasets. For news20 and rcv1, the \(\ell _1\) norm regularized logistic regression model are trained. As for multi-classification problems, we train LeNet for MNIST[2]} and AlexNet [5]} for CIFAR10. To generate a network with large sparsity, the batch normalization layer is added before each convolution layer and fully connected layer. The code is implemented via tensorflow. Experimental results are averaged over 5 runs with a random initialization seed. <TABLE><TABLE><FIGURE>
[3]
[ [ 526, 529 ] ]
https://openalex.org/W3118608800
adf8e35c-e152-4880-8564-1cfa740752b2
Implementation Details. All the experiments are carried out in a distributed framework that the network bandwidth is 100MBps. For the linear models training, each worker only utilizes the CPUs. For the training of convolutional neural networks, each worker is allocated with 1 NVIDIA Tesla P40 GPU. The methods are evaluated on four publicly available datasets. news20 and rcv1 are text datasets with a high dimension of input features from LIBSVM [1]}. MNIST [2]} is for handwritten digits classification problem and CIFAR10 [3]} is for image classification problem. Table REF shows the details of datasets. For news20 and rcv1, the \(\ell _1\) norm regularized logistic regression model are trained. As for multi-classification problems, we train LeNet for MNIST[2]} and AlexNet [5]} for CIFAR10. To generate a network with large sparsity, the batch normalization layer is added before each convolution layer and fully connected layer. The code is implemented via tensorflow. Experimental results are averaged over 5 runs with a random initialization seed. <TABLE><TABLE><FIGURE>
[5]
[ [ 783, 786 ] ]
https://openalex.org/W2163605009
412d57e7-eb1b-4ea2-9c15-3b12ad979b81
\(\frac{1}{2}\sum _{t=1}^{T}\mathbb {E}_{\bf q}||{\bf q}_t||^2_{\psi ^*_t}\le \sum _{i=1}^{d}\mathbb {E}_{\bf q}||{\bf q}_{1:T,i}||_2\) has been proved in Lemma 4 of [1]}, then \(\begin{split}&\quad \sum _{t=1}^{T}\mathbb {E}_{\bf q}[f_t({\bf x}_{t+1})+\phi ({\bf x}_{t+1})-f_t({\bf x}^*)-\phi ({\bf x}^*)]\\&\le \sum _{i=1}^{d}\mathbb {E}_{\bf q}||{\bf q}_{1:T,i}||_2+\frac{1}{2\eta }\max _{t\le T}||{\bf x}^*-{\bf x}_t||^2_\infty \sum _{i=1}^{d}\mathbb {E}_{\bf q}||{\bf q}_{1:T,i}||_2\\\end{split}\)
[1]
[ [ 167, 170 ] ]
https://openalex.org/W2146502635
8c0b0b76-05f2-48d0-9212-45848e333baf
Much is known about \(\mathrm {ex}(n,\mathcal {F})\) for graphs, that is, when \(r=2\) . For example, Turán [1]} determined \(\mathrm {ex}(n,K_{\ell }^2)\) for all \(n> \ell > 2\) (where, more generally, \(K^r_\ell \) denotes the complete \(r\) -graph on \(\ell \) vertices). Also, the Erdős–Stone–Simonovits theorem [2]}, [3]} determines the Turán density for every family \(\mathcal {F}\) of graphs; namely, it holds that \(\pi (\mathcal {F})=\min \lbrace 1-1/\chi (F):F\in \mathcal {F}\rbrace \) , where \(\chi (F)\) denotes the chromatic number of the graph \(F\) .
[3]
[ [ 328, 331 ] ]
https://openalex.org/W2111007781
bf91516d-6b74-4c88-b5ac-9e00b28dc14c
of possible \(r\) -graph Turán densities is also very difficult for \(r\ge 3\) . (For \(r=2\) , we have by the Erdős–Stone–Simonovits theorem [1]}, [2]} that \(\Pi _{\infty }^{(2)} = \Pi _{\mathrm {fin}}^{(2)}= \lbrace 1\rbrace \cup \left\lbrace 1-{1}/{k} \colon \mbox{integer $k\ge 1$}\right\rbrace \) .)
[2]
[ [ 149, 152 ] ]
https://openalex.org/W2111007781
94914ce0-37c0-4487-b958-12c5175bba33
One of the earliest results on this direction is the theorem of Erdős [1]} from the 1960s that \(\Pi _{\infty }^{(r)} \cap (0, r!/r^r) = \emptyset \) for every integer \(r\ge 3\) . However, our understanding of the locations and the lengths of other maximal intervals avoiding \(r\) -graph Turán densities and the right accumulation points of \(\Pi _{\infty }^{(r)}\) (the so-called jump problem) is very limited; for some results in this direction see e.g. [2]}, [3]}, [4]}, [5]}, [6]}.
[1]
[ [ 70, 73 ] ]
https://openalex.org/W2110354774
d2f5c657-08b3-482a-9b18-2c18305159d8
One of the earliest results on this direction is the theorem of Erdős [1]} from the 1960s that \(\Pi _{\infty }^{(r)} \cap (0, r!/r^r) = \emptyset \) for every integer \(r\ge 3\) . However, our understanding of the locations and the lengths of other maximal intervals avoiding \(r\) -graph Turán densities and the right accumulation points of \(\Pi _{\infty }^{(r)}\) (the so-called jump problem) is very limited; for some results in this direction see e.g. [2]}, [3]}, [4]}, [5]}, [6]}.
[2]
[ [ 460, 463 ] ]
https://openalex.org/W2168244850
ed06b090-1601-4758-a0c0-94f82fca354c
One of the earliest results on this direction is the theorem of Erdős [1]} from the 1960s that \(\Pi _{\infty }^{(r)} \cap (0, r!/r^r) = \emptyset \) for every integer \(r\ge 3\) . However, our understanding of the locations and the lengths of other maximal intervals avoiding \(r\) -graph Turán densities and the right accumulation points of \(\Pi _{\infty }^{(r)}\) (the so-called jump problem) is very limited; for some results in this direction see e.g. [2]}, [3]}, [4]}, [5]}, [6]}.
[3]
[ [ 466, 469 ] ]
https://openalex.org/W2030153168
4c9371d4-d653-49e8-a533-4da9982e0e88
One of the earliest results on this direction is the theorem of Erdős [1]} from the 1960s that \(\Pi _{\infty }^{(r)} \cap (0, r!/r^r) = \emptyset \) for every integer \(r\ge 3\) . However, our understanding of the locations and the lengths of other maximal intervals avoiding \(r\) -graph Turán densities and the right accumulation points of \(\Pi _{\infty }^{(r)}\) (the so-called jump problem) is very limited; for some results in this direction see e.g. [2]}, [3]}, [4]}, [5]}, [6]}.
[4]
[ [ 472, 475 ] ]
https://openalex.org/W1978688508
d4f4797b-548d-4149-8534-e8764d1268cb
It is known that the set \(\Pi _{\infty }^{(r)}\) is the topological closure of \(\Pi _{\mathrm {fin}}^{(r)}\) (see [1]}) and thus the former set is determined by the latter. In order to show that the set \(\Pi _{\mathrm {fin}}^{(r)}\subseteq [0,1]\) has rich structure for each \(r\ge 3\) , the second author proved in [1]} that, for every minimal \(r\) -graph pattern \(P\) , there is a finite family \(\mathcal {F}\) of \(r\) -graphs such that the maximum \(\mathcal {F}\) -free graphs are precisely the maximum \(r\) -graphs that can be obtained by taking blowups of \(P\) and using recursion. (See Section REF for all formal definitions.) In particular, the maximum asymptotic edge density obtainable this way from the pattern \(P\) is an element of \(\Pi _{\mathrm {fin}}^{(r)}\) .
[1]
[ [ 118, 121 ], [ 323, 326 ] ]
https://openalex.org/W2153199929
56f959f4-b28d-48bb-a1a8-b223377b35f4
Another factor that makes the hypergraph Turán problem difficult is that some families may have many rather different (almost) extremal configurations. A series of recent papers [1]}, [2]}, [3]}, [4]} (discussed in more detail in Sections REF and REF ) concentrated on exhibiting examples for which the richness of extremal configurations can be proved.
[2]
[ [ 184, 187 ] ]
https://openalex.org/W4213296153
8d69ab8c-9d6e-47b4-9143-f488e15a9f2c
Our paper contributes further to this line of research. The new results proved here are, informally speaking, as follows. Our main result, on which all new constructions are based, is Theorem REF . It extends [1]} to the case when there is a finite set \(\lbrace P_i:i\in I\rbrace \) of minimal patterns (instead of just one) and we can mix them in any way when using recursion. By applying Theorem REF , we present two examples of a 3-graph family \(F̏\) with a rich set of (almost) extremal configurations. The first one (given by Theorem REF ) has the property that the set of maximum \(F̏\) -free 3-graphs on \([n]\) has exponentially many in \(n\) non-isomorphic hypergraphs and, moreover, the Turán problem for \(F̏\) is not finitely stable, that is, roughly speaking, there is no bounded number of constructions such that every almost maximum \(F̏\) -free 3-graph is close to one of them. The second finite family \(F̏\) of 3-graphs (given by Corollary REF ) satisfies the property that the limit set of possible densities of the shadows of asymptotically maximum \(F̏\) -free 3-graphs is a Cantor-like set of positive Hausdorff dimension.
[1]
[ [ 209, 212 ] ]
https://openalex.org/W2153199929
b1a1cd2a-2da9-453c-abbe-0f069254e6d1
If \(P_I=\lbrace P\rbrace \) consists of a single pattern \(P\) then we always have to use this pattern \(P\) and the definition of a \(P_{I}\) -mixing construction coincides with the definition of a \(P\) -construction from [1]}. For brevity, we abbreviate \(\Lambda _{P}:=\Lambda _{\lbrace P\rbrace }\) , \(\lambda _{P}:=\lambda _{\lbrace P\rbrace }\) , etc.
[1]
[ [ 228, 231 ] ]
https://openalex.org/W2153199929
67d12b7e-bb10-4560-b569-b557228b6d0e
In the case of a single pattern (when \(|I| = 1\) ), Theorem REF gives [1]}. As we will see in Lemma REF , it holds that \(\lambda _{P_I}=\max \lbrace \lambda _{P_i}:i\in I\rbrace \) and thus we do not increase the set of obtainable Turán densities by mixing patterns. The main purpose of Theorem REF is to show that some finite Turán problems have rich sets of (almost) extremal graphs. In this paper, we present two applications of Theorem REF of this kind as follows.
[1]
[ [ 72, 75 ] ]
https://openalex.org/W2153199929
b2dd4cdf-9afa-4ade-bfec-e9523a5a1832
The first stability theorem, which says that \(K_{\ell }^2\) is stable for all integers \(\ell \ge 3\) , was established independently by Erdős [1]} and Simonovits \(\cite {SI68}\) . In fact, the classical Erdős–Stone–Simonovits theorem [2]}, [3]} and Erdős–Simonovits stability theorem [1]}, [5]} imply that every family of graphs is stable.
[2]
[ [ 238, 241 ] ]
https://openalex.org/W2111007781
3c4aa691-abb5-4281-9309-189f393355b3
For hypergraphs, there are some conjectures on the Turán density of various concrete families which, if true, imply that these families are not stable. One of the most famous examples in this regard is the tetrahedron \(K_{4}^3\) whose conjectured Turán density is \(5/9\) . If this conjecture is true then the constructions by Brown [1]} (see also [2]}, [3]}, [4]}) show that \(\xi (K_4^3)=\infty \) . A similar statement applies to some other complete 3-graphs \(K_\ell ^3\) ; we refer the reader to [5]}, [6]} for details. Another natural example of conjectured infinite stability number is the Erdős–Sós Conjecture on triple systems with bipartite links; we refer the reader to [7]} for details.
[5]
[ [ 503, 506 ] ]
https://openalex.org/W2498848698
3cab8631-f419-4d6e-9e93-496ee5427452
For hypergraphs, there are some conjectures on the Turán density of various concrete families which, if true, imply that these families are not stable. One of the most famous examples in this regard is the tetrahedron \(K_{4}^3\) whose conjectured Turán density is \(5/9\) . If this conjecture is true then the constructions by Brown [1]} (see also [2]}, [3]}, [4]}) show that \(\xi (K_4^3)=\infty \) . A similar statement applies to some other complete 3-graphs \(K_\ell ^3\) ; we refer the reader to [5]}, [6]} for details. Another natural example of conjectured infinite stability number is the Erdős–Sós Conjecture on triple systems with bipartite links; we refer the reader to [7]} for details.
[6]
[ [ 509, 512 ] ]
https://openalex.org/W2004751842
41af65bd-9a4e-46f0-8810-cd55c94f4542
For hypergraphs, there are some conjectures on the Turán density of various concrete families which, if true, imply that these families are not stable. One of the most famous examples in this regard is the tetrahedron \(K_{4}^3\) whose conjectured Turán density is \(5/9\) . If this conjecture is true then the constructions by Brown [1]} (see also [2]}, [3]}, [4]}) show that \(\xi (K_4^3)=\infty \) . A similar statement applies to some other complete 3-graphs \(K_\ell ^3\) ; we refer the reader to [5]}, [6]} for details. Another natural example of conjectured infinite stability number is the Erdős–Sós Conjecture on triple systems with bipartite links; we refer the reader to [7]} for details.
[7]
[ [ 683, 686 ] ]
https://openalex.org/W2029849517
8f18603b-0365-4a63-94d2-ac5021a34c45
Despite these old conjectures, no finite family with more than one asymptotic Turán extremal construction was known until recently. In [1]}, Mubayi and the first author constructed the first finite non-stable family \(F̏\) of 3-graphs; in fact, their family satisfies \(\xi (F̏)=2\) . Further, in [2]}, Mubayi, Reiher, and the first author found, for every integer \(t\ge 3\) , a finite family \(F̏_t\) of 3-graphs with \(\xi (F̏_t)= t\) . Their construction was extended to families of \(r\) -graphs for every integer \(r \ge 4\) in [3]}. In [4]}, Hou, Li, Mubayi, Zhang, and the first author constructed a finite family \(F̏\) of 3-graphs such that \(\xi (F̏)=\infty \) .
[1]
[ [ 135, 138 ] ]
https://openalex.org/W4213296153
68d57c6c-24ca-4964-bad2-3452850a85d2
Note that it is possible that \(\xi (F̏)=1\) but there are many maximum \(F̏\) -free \(r\) -graphs of order \(n\) . For example, if \(k\ge 5\) is odd and we forbid the star \(K_{1,k}^2\) (where, more generally, \(K_{k_1,\ldots ,k_\ell }^2\) denotes the complete \(\ell \) -partite graph with part sizes \(k_1,\ldots ,k_\ell \) ), then the extremal graphs on \(n\ge r\) vertices are precisely \((k-1)\) -regular graphs and, as it is easy to see, there are exponentially many in \(n\) such non-isomorphic graphs. For \(r\) -graphs with \(r\ge 3\) , a similar conclusion for an infinite sequence of \(n\) can be achieved by forbidding e.g. two \(r\) -edges intersecting in \(r-1\) vertices: if a sufficiently large \(n\) satisfies the obvious divisibility conditions then by the result of Keevash [1]} there are \(\exp (\Omega (n^{r-1}\log n))\) extremal \(r\) -graphs on \([n]\) , namely designs where each \((r-1)\) -set is covered exactly once. While the above Turán problems are degenerate (i.e. have the Turán density 0), a non-degenerate example for graphs can be obtained by invoking a result of Simonovits [2]}, a special case of which is that every maximum graph of order \(n\rightarrow \infty \) without \(K_{1,t,t}^2\) can be obtained from a complete bipartite graph \(K_{a,n-a}\) with \(a=(1/2+o(1))n\) by adding a maximum \(K_{1,t}^2\) -free graph into each part. Very recently, Balogh, Clemen, and Luo [3]} found a single 3-graph \(F\) with \(\pi (F)>0\) and with \(\exp (\Omega (n^{2}\log n))\) non-isomorphic extremal constructions on \(n\) vertices for an infinite sequence of \(n\) . Note that all families in this paragraph are 1-stable.
[1]
[ [ 803, 806 ] ]
https://openalex.org/W3037839091
3a9d7b06-dcbd-44fd-9640-6f89ac1355bd
The feasible region unifies and generalizes asymptotic versions of some classical problems such as the Kruskal–Katona theorem [1]}, [2]} and the Turán problem. It was introduced in [3]} to understand the extremal properties of \(\mathcal {F}\) -free hypergraphs beyond just the determination of \(\pi (\mathcal {F})\) .
[1]
[ [ 126, 129 ] ]
https://openalex.org/W1534511119
38a28128-7184-4ab0-bbb2-a9d101ce0bf0
For most families \(\mathcal {F}\) that were studied before, the set \(M(\mathcal {F})\) has size one, i.e. the function \(g(\mathcal {F})\) attains its maximum at only one point. In general, the set \(M(\mathcal {F})\) is not necessarily a single point, and, in fact, trying to understand how complicated \(M(\mathcal {F})\) can be is one of the motivations for constructions in [1]}, [2]}, [3]}. Indeed, the construction in [2]} shows that there exists a finite family \(\mathcal {F}\) of 3-graphs for which \(M(\mathcal {F})\) has size exactly two, the constructions in [3]} show that for every positive integer \(t\) there exists a finite family \(\mathcal {F}\) of 3-graphs for which \(M(\mathcal {F})\) has size exactly \(t\) , and the constructions in [1]} show that there exists a finite family \(\mathcal {F}\) of 3-graphs for which \(M(\mathcal {F})\) is a non-trivial interval.
[2]
[ [ 391, 394 ], [ 431, 434 ] ]
https://openalex.org/W4213296153
9a1234d6-3b3a-4e78-b9ad-a900d149c2c6
see also Lemma REF that relates \(\lambda _E\) and \(\lambda _{P}\) . The special case of (REF ) when \(E\) is an \(r\) -graph (i.e. \(E\) consists of simple sets) is a generalisation of the well-known hypergraph Lagrangian (see e.g. [1]}, [2]}) that has been successfully applied to Turán-type problems, with the basic idea going back to Motzkin and Straus [3]}.
[1]
[ [ 238, 241 ] ]
https://openalex.org/W2168244850
f76ba810-4fb6-4205-abfa-9f0e6c6aa686
see also Lemma REF that relates \(\lambda _E\) and \(\lambda _{P}\) . The special case of (REF ) when \(E\) is an \(r\) -graph (i.e. \(E\) consists of simple sets) is a generalisation of the well-known hypergraph Lagrangian (see e.g. [1]}, [2]}) that has been successfully applied to Turán-type problems, with the basic idea going back to Motzkin and Straus [3]}.
[2]
[ [ 244, 247 ] ]
https://openalex.org/W1978688508
75f9d0ba-94a6-49c8-b57b-509658c97e65
see also Lemma REF that relates \(\lambda _E\) and \(\lambda _{P}\) . The special case of (REF ) when \(E\) is an \(r\) -graph (i.e. \(E\) consists of simple sets) is a generalisation of the well-known hypergraph Lagrangian (see e.g. [1]}, [2]}) that has been successfully applied to Turán-type problems, with the basic idea going back to Motzkin and Straus [3]}.
[3]
[ [ 362, 365 ] ]
https://openalex.org/W2027470314
081d5ba2-57a8-451c-9416-e88571ae0219
Lemma 7 (Lemma 16 in [1]}) Let \(P=(m,E,R)\) be a minimal pattern. If distinct \(j,k \in [m]\) satisfy \(L_{E}(j)\subseteq L_{E}(k)\) , then \(j\in R\) , \(k\notin R\) , and \(L_E(j) \ne L_E(k)\) . In particular, no two vertices in \([m]\) have the same links in \(E\) .
[1]
[ [ 21, 24 ] ]
https://openalex.org/W2153199929
d0a92300-c0ae-4385-b8c0-89e55502cae0
We will also need the following result from [1]}, which characterizes the patterns whose Lagrangian is 1.
[1]
[ [ 44, 47 ] ]
https://openalex.org/W2153199929
baa260ca-626a-4986-af3f-aa371d3d9945
Lemma 8 (Lemma 12 in [1]}) An \(r\) -graph pattern \(P = (m,E,P)\) satisfies \(\lambda _{P}=1\) if and only if at least one of the following holds:
[1]
[ [ 21, 24 ] ]
https://openalex.org/W2153199929
055d970a-c2c4-4a75-8249-9e8d02c6370d
We will need the following result from [1]}, which extends some classical results (see e.g. [2]}) about the Lagrangian of hypergraphs.
[1]
[ [ 39, 42 ] ]
https://openalex.org/W2153199929
584ace5a-7ac3-4426-9b0b-d1234feb9bea
We will need the following result from [1]}, which extends some classical results (see e.g. [2]}) about the Lagrangian of hypergraphs.
[2]
[ [ 92, 95 ] ]
https://openalex.org/W1978688508
f6d0b730-e5ae-40ed-bb49-ab3d351006f4
Lemma 9 (Lemma 14 in [1]}) Let \(P=(m,E,R)\) be a proper \(r\) -graph pattern and let \(f({\bf x}):= \lambda _{E}({\bf x}) + \lambda _{P} \sum _{j\in R} x_j^r\)
[1]
[ [ 21, 24 ] ]
https://openalex.org/W2153199929
4646a217-f864-464c-a66e-f7e00e2718e6
The main idea of the proof of Theorem  REFREF is similar to the proof of Theorem 3 in [1]}. The starting point is the easy observation (Lemma REF ) that by forbidding \(F̏_\infty \) we restrict ourselves to subgraphs of \(P_{I}\) -mixing constructions; thus Part REF of Theorem REF would trivially hold if infinite forbidden families were allowed. Our task is to show that, for some large \(M\) , the finite subfamily \(F̏_M\) of \(F̏_\infty \) still has the above properties. The Strong Removal Lemma of Rödl and Schacht [2]} (stated as Lemma REF here) implies that for every \(\varepsilon >0\) there is \(M\) such that every \(F̏_M\) -free \(r\) -graph with \(n\ge M\) vertices can be made \(F̏_\infty \) -free by removing at most \(\frac{c_0}{2} {n\atopwithdelims ()r}\) edges. It follows that every maximum \(F̏_M\) -free \(r\) -graph \(G\) on \([n]\) is \(c_0 {n\atopwithdelims ()r}\) -close in the edit distance to a \(P_{I}\) -mixing construction (see Lemma REF ), where \(c_0>0\) can be made arbitrarily by choosing \(M\) large. Then our key Lemma REF (which heavily relies on another important result, the existence of a “rigid” \(F\in \Sigma P_I\) as proved in Lemma REF ) shows via stability-type arguments that some small constant \(c_0>0\) (independent of \(n\) ) suffices to ensure that there is a partition \(V(G)=V_1\cup \dots \cup V_{m_i}\) for some \(i\in I\) such that \(G\setminus (\bigcup _{j\in R_i} G[V_j])=E(\!(V_1,\dots ,V_{m_i})\!)\) , that is, \(G\) follows exactly the bottom level of some \(P_i\) -construction (but nothing is stipulated about what happens inside the recursive parts \(V_j\) ). The maximality of \(G\) implies that each \(G[V_j]\) with \(j\in R_i\) is maximum \(F̏_M\) -free (see Lemma REF ), allowing us to apply induction.
[1]
[ [ 87, 90 ] ]
https://openalex.org/W2153199929
3222ecc4-42b0-4a7d-b813-c2c109f2d94c
Part REF of Theorem REF (which has no direct analogue in [1]}) is needed in those applications where we have to analyse almost extremal constructions. It does not directly follow from Lemma REF (i.e. from the Removal Lemma), since the same constant \(M\) in Theorem  REFREF has to work for every \(\epsilon >0\) . Similarly to Part REF , the key idea here that, once we forced our \(F̏_M\) -free graph \(G\) on \([n]\) to be \(c_0{n\atopwithdelims ()r}\) -close to a \(P_I\) -mixing construction for some sufficiently small \(c_0>0\) (but independent of \(\varepsilon \) ) then we can further bootstrap this to the required \(\varepsilon {n\atopwithdelims ()r}\) -closeness by stability-type arguments.
[1]
[ [ 59, 62 ] ]
https://openalex.org/W2153199929
b84eb5f1-9ecc-402d-8002-401a4d581183
Many simple lemmas that are needed for our proof can be borrowed from [1]}, verbatim or with some minor modifications. However, new challenges arise to accommodate our situation \(|I| \ge 2\) and some new ideas are required here.
[1]
[ [ 70, 73 ] ]
https://openalex.org/W2153199929
beb48535-bdeb-4335-b492-5fbaa447d85e
Informally speaking, the following lemma (which a routine generalization of [1]}) implies, among other things, that all part ratios of bounded height in a \(P_{I}\) -mixing construction with large minimum degree approximately follow some optimal vectors. In particular, for each \(i\in I^{\prime }\) , the set \(X̏_i\) consists precisely of optimal limiting bottom ratios that lead to asymptotically maximum \(P_{I}\) -mixing constructions with base pattern \(P_i\) . Recall that \(\beta >0\) is the constant that satisfies Part REF of Lemma REF for every \(i\in I\) while \(X̏_i\) is the set of \(P_i\) -optimal vectors.
[1]
[ [ 76, 79 ] ]
https://openalex.org/W2153199929
7ba44649-b608-47d6-a571-1efb45b41d56
The following key lemma generalizes [1]} by allowing more than one pattern. Its proof requires some new ideas. For example, a trick that was used a number of times in [1]}, in particular when proving Lemma 17 there, is that any embedding of a maximum \(P\) -construction \(G\) into another \(P\) -construction is induced (that is, non-edges are mapped to non-edges). However, a maximum \(P_I\) -mixing construction whose base has to be \(P_i\) for given \(i\in I^{\prime }\) need not be maximum (nor even maximal) among all \(P_I\) -mixing constructions and some different arguments are required.
[1]
[ [ 36, 39 ], [ 167, 170 ] ]
https://openalex.org/W2153199929
4f628990-b806-4b69-b4dc-2f81944e7f8f
where the stated three terms come from the following arguments. The first term accounts for the triples containing \(x_1\) in the Lagrange polynomial \(\lambda _{\overline{D}}({\bf x})\) . The link graph \(L_D(1)\) is just a perfect matching \(M\) on \(\lbrace 2,\dots ,t\rbrace \) (because \(D\) is a Steiner triple system) and receives total weight \(1-x_i\) . As it is well-known (see e.g. [1]}), the Largrangian of a graph is maximised by putting the uniform weight on a maximum clique which, for the complement \(L_{\overline{D}}(1)\) of a perfect matching, has size \(s:=(t-1)/2\) . Thus \((1-x_1)^{-2}\sum _{ij\in L_D(1)} x_ix_j\le {s\atopwithdelims ()2}/s^2=\frac{s-1}{2s}=\frac{t-3}{2(t-1)}\) , giving the first term. The second term just upper bounds the Lagrangian of \({\overline{D}}-1={\overline{D}}[\lbrace 2,\ldots ,t\rbrace ]\) by the Largrangian of the complete 3-graph on \(t-1\) vertices, scaling the result by the cube of the total weight \(1-x_1\) . The third term uses the fact that the sum of cubes of non-negative entries with sum \(1-x_1\) is maximised when we put all weight on a single element.
[1]
[ [ 398, 401 ] ]
https://openalex.org/W2027470314
f0e0286a-b8d3-43e5-8f49-cd414dbf6e88
Proof of Theorem REF . Keevash [1]} proved that if \(t\rightarrow \infty \) is 1 or 3 modulo 6 then the number of Steiner triples systems on \([t]\) is \((t/\mathrm {e}^2+o(1))^{t^2/6}\) . Note that the function \(F\) assumes at most \(t^{9t}\) values while each isomorphism class of \(\mathcal {ST\!S}_t\) has at most \(t!\) elements. Thus we can fix a sufficiently large \(t\) and a subset \(I\subseteq \mathcal {ST\!S}_t\) consisting of non-isomorphic 3-graphs such that \(F\) is constant on \(I\) while \(|I|\ge \frac{(t/\mathrm {e}^2+o(1))^{t^2/6}}{t!\, t^{9t}}> t!.\)
[1]
[ [ 32, 35 ] ]
https://openalex.org/W3037839091
66d52ef7-82e9-4b65-bc19-a04d89a717db
The study of water waves has its root in Newton [1]}, Stokes [2]} and Levi-Civita [3]}. Local wellposedness of gravity water waves in the Euclidean space was first shown by Nalimov [4]} and Shinbrot [5]}, assuming the Taylor sign condition [6]}. Then Wu [7]}, [8]} dropped this condition, only assuming that the interface is non self-intersecting. All these results are local, valid only for a time period inversely proportional to the size of the initial data.
[1]
[ [ 48, 51 ] ]
https://openalex.org/W2945914100
d469d1be-521e-449f-acee-bc854c5a4f78
The study of water waves has its root in Newton [1]}, Stokes [2]} and Levi-Civita [3]}. Local wellposedness of gravity water waves in the Euclidean space was first shown by Nalimov [4]} and Shinbrot [5]}, assuming the Taylor sign condition [6]}. Then Wu [7]}, [8]} dropped this condition, only assuming that the interface is non self-intersecting. All these results are local, valid only for a time period inversely proportional to the size of the initial data.
[2]
[ [ 61, 64 ] ]
https://openalex.org/W3216036906
c6e19af4-e2fa-47db-8eaa-82a3453c11cb
The study of water waves has its root in Newton [1]}, Stokes [2]} and Levi-Civita [3]}. Local wellposedness of gravity water waves in the Euclidean space was first shown by Nalimov [4]} and Shinbrot [5]}, assuming the Taylor sign condition [6]}. Then Wu [7]}, [8]} dropped this condition, only assuming that the interface is non self-intersecting. All these results are local, valid only for a time period inversely proportional to the size of the initial data.
[7]
[ [ 254, 257 ] ]
https://openalex.org/W2916835505
36ea1ee4-27e0-411d-b6c4-93f149e38e19
The study of water waves has its root in Newton [1]}, Stokes [2]} and Levi-Civita [3]}. Local wellposedness of gravity water waves in the Euclidean space was first shown by Nalimov [4]} and Shinbrot [5]}, assuming the Taylor sign condition [6]}. Then Wu [7]}, [8]} dropped this condition, only assuming that the interface is non self-intersecting. All these results are local, valid only for a time period inversely proportional to the size of the initial data.
[8]
[ [ 260, 263 ] ]
https://openalex.org/W1608561939
a425517d-34af-4478-ae46-604153a021ca
Global wellposedness of gravity water waves in three dimensions for small data was shown by Germain–Masmoudi–Shatah [1]} and Wu [2]}. The same problem in two dimensions is harder, due to weaker decay of the solution. The first result in this vein is that of almost global wellposedness by Hunter–Ifrim–Tataru [3]} and Wu [4]}, who showed a lifespan exponential in terms of the reciprocal of the size of the initial data. Later global wellposedness was obtained by Alazard–Delort [5]}, Ionescu–Pusateri [6]} and Ifrim–Tataru [7]}.
[1]
[ [ 116, 119 ] ]
https://openalex.org/W2592223562
9ca5a85e-fe0e-4d7c-9922-e05540ce3741
Global wellposedness of gravity water waves in three dimensions for small data was shown by Germain–Masmoudi–Shatah [1]} and Wu [2]}. The same problem in two dimensions is harder, due to weaker decay of the solution. The first result in this vein is that of almost global wellposedness by Hunter–Ifrim–Tataru [3]} and Wu [4]}, who showed a lifespan exponential in terms of the reciprocal of the size of the initial data. Later global wellposedness was obtained by Alazard–Delort [5]}, Ionescu–Pusateri [6]} and Ifrim–Tataru [7]}.
[3]
[ [ 309, 312 ] ]
https://openalex.org/W2275878094
f430e296-93b0-40e0-8d1c-93444df16a58
Global wellposedness of gravity water waves in three dimensions for small data was shown by Germain–Masmoudi–Shatah [1]} and Wu [2]}. The same problem in two dimensions is harder, due to weaker decay of the solution. The first result in this vein is that of almost global wellposedness by Hunter–Ifrim–Tataru [3]} and Wu [4]}, who showed a lifespan exponential in terms of the reciprocal of the size of the initial data. Later global wellposedness was obtained by Alazard–Delort [5]}, Ionescu–Pusateri [6]} and Ifrim–Tataru [7]}.
[4]
[ [ 321, 324 ] ]
https://openalex.org/W3099581489
937636ec-a003-4f5d-afb4-49911938773f
Global wellposedness of gravity water waves in three dimensions for small data was shown by Germain–Masmoudi–Shatah [1]} and Wu [2]}. The same problem in two dimensions is harder, due to weaker decay of the solution. The first result in this vein is that of almost global wellposedness by Hunter–Ifrim–Tataru [3]} and Wu [4]}, who showed a lifespan exponential in terms of the reciprocal of the size of the initial data. Later global wellposedness was obtained by Alazard–Delort [5]}, Ionescu–Pusateri [6]} and Ifrim–Tataru [7]}.
[5]
[ [ 479, 482 ] ]
https://openalex.org/W2964260464
5101cfec-5d14-45ba-a820-feb87dd5525a
Global wellposedness of gravity water waves in three dimensions for small data was shown by Germain–Masmoudi–Shatah [1]} and Wu [2]}. The same problem in two dimensions is harder, due to weaker decay of the solution. The first result in this vein is that of almost global wellposedness by Hunter–Ifrim–Tataru [3]} and Wu [4]}, who showed a lifespan exponential in terms of the reciprocal of the size of the initial data. Later global wellposedness was obtained by Alazard–Delort [5]}, Ionescu–Pusateri [6]} and Ifrim–Tataru [7]}.
[6]
[ [ 502, 505 ] ]
https://openalex.org/W2141636416
2cc19dad-bccb-42e3-bc31-08d9073fb58e
Global wellposedness of gravity water waves in three dimensions for small data was shown by Germain–Masmoudi–Shatah [1]} and Wu [2]}. The same problem in two dimensions is harder, due to weaker decay of the solution. The first result in this vein is that of almost global wellposedness by Hunter–Ifrim–Tataru [3]} and Wu [4]}, who showed a lifespan exponential in terms of the reciprocal of the size of the initial data. Later global wellposedness was obtained by Alazard–Delort [5]}, Ionescu–Pusateri [6]} and Ifrim–Tataru [7]}.
[7]
[ [ 524, 527 ] ]
https://openalex.org/W2963213150
72b21fa0-8f29-4aed-a9af-a1c98959e4d2
Of the results above, the local ones assume only unweighted Sobolev norms of the initiial data, while the global ones also presuppose that the initial data decays far away from the origin by requiring it to be also small in a weighted Sobolev space, which allows for \(1/t\) decay of the solution, and with more careful analysis, closes the estimates needed for global existence. Without that assumption of locality, only part of the argument survives, giving a lifespan of \(\epsilon ^{-2}\) in three dimensions. In two dimensions additional integrability in the equations can be exploited to extend the lifespan to \(\epsilon ^{-3}\) , both in the Euclidean case [1]} and in the periodic case [2]}. In [3]} the author combined the energy estimates and Strichartz estimates to extend the lifespan of three dimensional water wavaes to an almost global one.
[1]
[ [ 667, 670 ] ]
https://openalex.org/W3093237945
5b161596-97ae-431b-ac9a-c06885edcc5c
Of the results above, the local ones assume only unweighted Sobolev norms of the initiial data, while the global ones also presuppose that the initial data decays far away from the origin by requiring it to be also small in a weighted Sobolev space, which allows for \(1/t\) decay of the solution, and with more careful analysis, closes the estimates needed for global existence. Without that assumption of locality, only part of the argument survives, giving a lifespan of \(\epsilon ^{-2}\) in three dimensions. In two dimensions additional integrability in the equations can be exploited to extend the lifespan to \(\epsilon ^{-3}\) , both in the Euclidean case [1]} and in the periodic case [2]}. In [3]} the author combined the energy estimates and Strichartz estimates to extend the lifespan of three dimensional water wavaes to an almost global one.
[2]
[ [ 697, 700 ] ]
https://openalex.org/W2898175050