Columns: text — string (lengths 301–426); source — string (3 classes); __index_level_0__ — int64 (0–404k)
will usually, but not always, result in an optimal solution path. Often, but not always, fewer nodes will be expanded and less arithmetic effort required than if we used ĥ(n) = √(x² + y²). Thus we see that the formulation presented uses one function, ĥ, to embody in a formal theory all knowledge available from the problem domain. The selection of ĥ,
ieee_xplore
15,770
difference favors the learning process by increasing the synaptic plasticity of certain groups of neurons. We also note an interesting point presented in [Perlovsky 2009] associating certain emotions with the need for cognition, i.e., emotions play the role of reinforcement signals which drive the
ieee_xplore
16,300
tion algorithm is required for comparison. Further, while the presented theory works for complex-valued measurement operators, it is not straightforward to extend the current CNN architecture to complex-valued images. Any potential solution to this problem (e.g., using the modulus, or splitting the
ieee_xplore
16,476
sparse coding solver is not feed-forward, i.e., it is an iterative algorithm. On the contrary, our non-linear operator is fully feed-forward and can be computed efficiently. If we set f₂ = 1, then our non-linear operator can be considered as a pixel-wise fully-connected layer. It is worth noting that
ieee_xplore
16,900
Set5 are 32.57 dB, which is slightly higher than the 32.52 dB reported in Section 4.1. This indicates that a reasonably ... Fig. 4. Training with the much larger ImageNet dataset improves the performance over the use of 91 images. Fig. 5. The figure shows the first-layer filters trained on ImageNet with an
ieee_xplore
16,940
PSNR 3 30.39 31.42 31.84 32.28 31.92 32.59 32.75 4 28.42 - 29.61 30.03 29.69 30.28 30.49 2 0.9299 - 0.9490 0.9511 0.9499 0.9544 0.9542 SSIM 3 0.8682 0.8821 0.8956 0.9033 0.8968 0.9088 0.9090 4 0.8104 - 0.8402 0.8541 0.8419 0.8603 0.8628 2 6.10 - 7.84 6.87 8.09 8.48 8.05
ieee_xplore
16,970
IFC 3 3.52 3.16 4.40 4.14 4.52 4.84 4.58 4 2.35 - 2.94 2.81 3.02 3.26 3.01 2 36.73 - 42.90 39.49 43.28 44.58 41.13 NQM 3 27.54 27.29 32.77 32.10 33.10 34.48 33.21 4 21.42 - 25.56 24.99 25.72 26.97 25.96 2 50.06 - 58.45 57.15 58.61 60.06 59.49 WPSNR 3 41.65 43.64 45.81 46.22 46.02 47.17 47.10
ieee_xplore
16,971
PSNR 3 27.54 28.31 28.60 28.94 28.65 29.13 29.30 4 26.00 - 26.81 27.14 26.85 27.32 27.50 2 0.8687 - 0.8993 0.9026 0.9004 0.9056 0.9067 SSIM 3 0.7736 0.7954 0.8076 0.8132 0.8093 0.8188 0.8215 4 0.7019 - 0.7331 0.7419 0.7352 0.7491 0.7513 2 6.09 - 7.59 6.83 7.81 8.11 7.76
ieee_xplore
16,979
IFC 3 3.41 2.98 4.14 3.83 4.23 4.45 4.26 4 2.23 - 2.71 2.57 2.78 2.94 2.74 2 40.98 - 41.34 38.86 41.79 42.61 38.95 NQM 3 33.15 29.06 37.12 35.23 37.22 38.24 35.25 4 26.15 - 31.17 29.18 31.27 32.31 30.46 2 47.64 - 54.47 53.85 54.57 55.62 55.39 WPSNR 3 39.72 41.66 43.22 43.56 43.36 44.25 44.32
ieee_xplore
16,980
4 35.71 - 37.75 38.26 37.85 38.72 38.87 2 0.9813 - 0.9886 0.9890 0.9888 0.9896 0.9897 MSSSIM 3 0.9512 0.9595 0.9643 0.9653 0.9647 0.9669 0.9675 4 0.9134 - 0.9317 0.9338 0.9326 0.9371 0.9376 TABLE 4 The Average Results of PSNR (dB), SSIM, IFC, NQM, WPSNR (dB) and MSSSIM on the BSD200 Dataset
ieee_xplore
16,981
SSIM 3 0.7469 0.7729 0.7823 0.7881 0.7843 0.7945 0.7971 4 0.6727 - 0.7037 0.7093 0.7060 0.7171 0.7184 2 5.30 - 7.10 6.33 7.28 7.51 7.21 IFC 3 3.05 2.77 3.82 3.52 3.91 4.07 3.91 4 1.95 - 2.45 2.24 2.51 2.62 2.45 2 36.84 - 41.52 38.54 41.72 42.37 39.66 NQM 3 28.45 28.22 34.65 33.45 34.81 35.58 34.72
ieee_xplore
16,983
4 21.72 - 25.15 24.87 25.27 26.01 25.65 2 46.15 - 52.56 52.21 52.69 53.56 53.58 WPSNR 3 38.60 40.48 41.39 41.62 41.53 42.19 42.29 4 34.86 - 36.52 36.80 36.64 37.18 37.24 2 0.9780 - 0.9869 0.9876 0.9872 0.9883 0.9883 MSSSIM 3 0.9426 0.9533 0.9575 0.9588 0.9581 0.9609 0.9614
ieee_xplore
16,984
quences is the following: We imagine a number of possible states a₁, a₂, ..., a_m. For each state only certain symbols from the set S₁, ..., S_n can be transmitted (different subsets for the different states). When one of these has been transmitted the state changes to a new state depending both on
ieee_xplore
17,259
abilities p(i, j), i.e., the relative frequency of the digram ij. The letter frequencies p(i) (the probability of letter i), the transition probabilities p_i(j) and the digram probabilities p(i, j) are related by the following formulas: p(i) = Σ_j p(i, j) = Σ_j p(j, i); p(i, j) = p(i) p_i(j); Σ_j p_i(j) = Σ_i p(i) = Σ_{i,j} p(i, j) = 1.
ieee_xplore
17,283
or of transition probabilities p_{i₁ i₂ ... i_{n−1}}(i_n) is required to specify the statistical structure. (D) Stochastic processes can also be defined which produce a text consisting of a sequence of "words." Suppose there are five letters A, B, C, D, E and 16 "words" in the language with associated
ieee_xplore
17,288
The entropy in the case of two possibilities with probabilities p and q = 1 − p, namely H = −(p log p + q log q), is plotted in Fig. 7 as a function of p. The quantity H has a number of interesting properties which further substantiate it as a reasonable measure of choice or information. [Fig. 7: H as a function of p]
ieee_xplore
17,349
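A quick numeric check of the two-possibility entropy H = −(p log p + q log q) from the excerpt above; a minimal sketch only (the base-2 logarithm and the helper name binary_entropy are choices made here, not taken from the source). H is 0 at p = 0 or p = 1 and peaks at 1 bit for p = 0.5.

import numpy as np

def binary_entropy(p):
    # H(p) = -(p*log2(p) + q*log2(q)) with q = 1 - p, in bits; 0*log(0) is taken as 0.
    q = 1.0 - p
    terms = [x * np.log2(x) for x in (p, q) if x > 0]
    return -sum(terms)

for p in (0.0, 0.1, 0.5, 0.9):
    print(f"p={p:.1f}  H={binary_entropy(p):.4f} bits")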
while H(x) = −Σ_{i,j} p(i, j) log Σ_j p(i, j) and H(y) = −Σ_{i,j} p(i, j) log Σ_i p(i, j). It is easily shown that H(x, y) ≤ H(x) + H(y), with equality only if the events are independent (i.e., p(i, j) = p(i)p(j)). The uncertainty of a joint event is less than or equal to the sum of the
ieee_xplore
17,354
individual uncertainties. 4. Any change toward equalization of the probabilities p₁, p₂, ..., p_n increases H. Thus if p₁ < p₂ and we increase p₁, decreasing p₂ an equal amount so that p₁ and p₂ are more nearly equal, then H increases. More generally, if we perform any "averaging" operation on the p_i of the form
ieee_xplore
17,355
including, p_s. We first encode into a binary system. The binary code for message s is obtained by expanding P_s as a binary number. The expansion is carried out to m_s places, where m_s is the integer satisfying: log₂(1/p_s) ≤ m_s < 1 + log₂(1/p_s). Thus the messages of high probability are represented by short codes and
ieee_xplore
17,412
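The excerpt above fixes the code length m_s as the integer with log2(1/p_s) ≤ m_s < 1 + log2(1/p_s), i.e. m_s = ceil(log2(1/p_s)). A small sketch of that length rule under assumed message probabilities (the binary expansion of the cumulative probability P_s itself is not reproduced here):

import math

def code_length(p):
    # Smallest integer m with log2(1/p) <= m < 1 + log2(1/p).
    return math.ceil(math.log2(1.0 / p))

probs = [0.5, 0.25, 0.125, 0.125]                  # hypothetical message probabilities
lengths = [code_length(p) for p in probs]
print(lengths)                                      # [1, 2, 3, 3]: likely messages get short codes
print(sum(p * m for p, m in zip(probs, lengths)))   # average length 1.75, equal to the entropy here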
−Σ p_s log p_s ≤ H′ < 1/N − Σ p_s log p_s. As N increases, −Σ p_s log p_s approaches H, the entropy of the source, and H′ approaches H. We see from this that the inefficiency in coding, when only a finite delay of N symbols is used, need not be greater than 1/N plus the difference between
ieee_xplore
17,417
the rate C, is the following (found by a method due to R. Hamming): Let a block of seven symbols be X₁, X₂, ..., X₇. Of these, X₃, X₅, X₆ and X₇ are message symbols and chosen arbitrarily by the source. The other three are redundant and calculated as follows: X₄ is chosen to make α = X₄ + X₅ + X₆ + X₇ even; X₂ is chosen to make
ieee_xplore
17,541
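A sketch of the parity construction in the excerpt above. The rule for X4 is quoted from the fragment; the rules for X2 and X1 are the standard Hamming (7,4) checks and are assumed here because the fragment is cut off mid-sentence.

def hamming_7_4(x3, x5, x6, x7):
    # x3, x5, x6, x7 are the freely chosen message bits; x4, x2, x1 are redundant
    # bits chosen so that each check sums to an even number.
    x4 = (x5 + x6 + x7) % 2      # makes x4 + x5 + x6 + x7 even (rule quoted in the text)
    x2 = (x3 + x6 + x7) % 2      # makes x2 + x3 + x6 + x7 even (assumed standard rule)
    x1 = (x3 + x5 + x7) % 2      # makes x1 + x3 + x5 + x7 even (assumed standard rule)
    return [x1, x2, x3, x4, x5, x6, x7]

print(hamming_7_4(1, 0, 1, 1))   # [0, 1, 1, 0, 0, 1, 1]; all three checks are even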
where b_{ij}^{(1)}, b_{ij}^{(2)}, ..., b_{ij}^{(m)} are the lengths of the symbols which may be chosen in state i and lead to state j. These are linear difference equations and the behavior as L → ∞ must be of the type N_j = A_j W^L. Substituting in the difference equation: A_j W^L = Σ_{i,s} A_i W^{L − b_{ij}^{(s)}}
ieee_xplore
17,545
H = K[Σ p_i log Σ n_i − Σ p_i log n_i] = −K Σ p_i log (n_i / Σ n_i) = −K Σ p_i log p_i. If the p_i are incommeasurable, they may be approximated by rationals and the same expression must hold by our continuity assumption. Thus the expression holds in general. The choice of coefficient K is a matter of con-
ieee_xplore
17,553
Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia (e-mail: zonghan.wu-3@student.uts.edu.au; fengwen.chen@student.uts.edu.au; guodong.long@uts.edu.au; chengqi.zhang@uts.edu.au). Shirui Pan is with the Faculty of Information Technology, Monash Univer-
ieee_xplore
17,582
learning algorithms (e.g., support vector machines for classification). Meanwhile, GNNs are deep learning models aiming at addressing graph-related tasks in an end-to-end manner. Many GNNs explicitly extract high-level representations. The main distinction between GNNs and network embedding is that
ieee_xplore
17,622
v + αW₁ σ(W₂ [x_v, Σ_{u∈N(v)} [h_u^{(t−1)}, x_u]]) (3), where α is a hyperparameter and h_v^{(0)} is initialized randomly. While conceptually important, SSE does not theoretically prove that the node states will gradually converge to fixed points by applying (3) repeatedly. V. CONVOLUTIONAL GRAPH NEURAL NETWORKS
ieee_xplore
17,682
multiple channels. The graph convolutional layer of Spectral CNN is defined as H^{(k)}_{:,j} = σ(Σ_{i=1}^{f_{k−1}} U Θ^{(k)}_{i,j} U^T H^{(k−1)}_{:,i}), j = 1, 2, ..., f_k (6), where k is the layer index, H^{(k−1)} ∈ R^{n×f_{k−1}} is the input graph signal, H^{(0)} = X, f_{k−1} is the number of input channels, f_k is the number of output channels, and Θ^{(k)}
ieee_xplore
17,697
same dimension as the input feature matrix X and is not a function of its previous hidden representation matrix H^{(k−1)}. DCNN concatenates H^{(1)}, H^{(2)}, ..., H^{(K)} together as the final model outputs. As the stationary distribution of a diffusion process is a summation of power series of probability tran-
ieee_xplore
17,724
lutional layers to learn temporal and spatial dependencies, respectively. Assume that the input to an STGNN is a tensor X ∈ R^{T×n×d}; the 1-D CNN layer slides over X[:, i, :] along the time axis to aggregate temporal information for each node, while the graph convolutional layer operates on X[i, :, :]
ieee_xplore
17,862
(line 1.6 is modified as follows: ; continue from 1.0). It is argued that whenever the action's weights converge one has a stable control, and such a training procedure eventually finds the optimal control sequence. While theory behind classical dynamic programming demands choosing the optimal vector
ieee_xplore
18,165
method provides a good detection rate in the case of a Denial of Service (DoS) attack and achieves a good detection rate in the case of U2R and R2L attacks. However, the precision of Probe, U2R and R2L is 84.2%, 25.0% and 89.4%, respectively. In other words, the method provided by the essay leads
ieee_xplore
18,337
RNN for intrusion detection, but the dataset used was the KDD 99 Cup. Experiment accuracy, recall and precision of Probe was 96.6%, 97.8% and 88.3%, respectively; DoS was 97.4%, 97.05% and 99.9%, respectively; U2R was 86.5%, 62.7% and 56.1%, respectively; and R2L was 29.73%, 28.81% and 94.1%.
ieee_xplore
18,430
Digital Object Identifier 10.1109/COMST.2020.2965856. ... hitherto unexplored services as well as scenarios of future wireless networks. Index Terms—Machine learning (ML), future wireless network, deep learning, regression, classification, clustering, network association, resource allocation. NOMENCLATURE
ieee_xplore
18,498
parameters. Let us assume having N random training samples and M independent variables, formulated as {y_n, x_{n1}, x_{n2}, ..., x_{nM}}, n = 1, 2, ..., N. Then the linear regression function can be formulated as: y_n = ε₀ + ε₁x_{n1} + ε₂x_{n2} + ... + ε_M x_{nM} + e_n (1), where ε₀ is termed as the regression intercept, while e_n is
ieee_xplore
18,686
the error term and n = 1, 2, ..., N. Hence, Eq. (1) can be rewritten in the form of a matrix as y = Xε + e, where y = [y₁, y₂, ..., y_N]^T is an observation vector of the dependent variable and e = [e₁, e₂, ..., e_N]^T, while ε = [ε₀, ε₁, ..., ε_M]^T and X represents the observation matrix of independent variables, given by:
ieee_xplore
18,687
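The two rows above give the linear model y_n = ε₀ + ε₁x_{n1} + ... + ε_M x_{nM} + e_n and its matrix form y = Xε + e. A minimal sketch of estimating ε by ordinary least squares; the estimator is not specified in the fragment, and numpy's lstsq is used here as one common choice on synthetic data:

import numpy as np

rng = np.random.default_rng(0)
N, M = 100, 3                                               # N samples, M independent variables
true_eps = np.array([1.0, 2.0, -0.5, 0.3])                  # [intercept, eps_1, ..., eps_M]
X = np.hstack([np.ones((N, 1)), rng.normal(size=(N, M))])   # column of ones carries the intercept
y = X @ true_eps + 0.1 * rng.normal(size=N)                 # error term e

eps_hat, *_ = np.linalg.lstsq(X, y, rcond=None)             # least-squares estimate of the coefficients
print(eps_hat)                                              # close to true_eps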
SVM based classification can be formulated as the following optimization problem: max_{ω,b} min_{n=1,...,N} y_n((ω/∥ω∥)^T x_n + b/∥ω∥) s.t. y_n(ω^T x_n + b) ≥ γ, n = 1, 2, ..., N, ∥ω∥ = 1 (10), where we have γ = min_{n=1,...,N} y_n((ω/∥ω∥)^T x_n + b/∥ω∥). After some further mathematical manipulations, the problem in (10)
ieee_xplore
18,732
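The inner quantity in (10) is the geometric margin γ = min_n y_n((ω/∥ω∥)^T x_n + b/∥ω∥). A small sketch that evaluates it for a hand-picked hyperplane on toy, linearly separable data (both the data and the hyperplane are hypothetical):

import numpy as np

def geometric_margin(w, b, X, y):
    # gamma = min_n y_n * ((w/||w||)^T x_n + b/||w||), labels y_n in {-1, +1}
    return np.min(y * (X @ w + b) / np.linalg.norm(w))

X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
print(geometric_margin(np.array([1.0, 1.0]), 0.0, X, y))   # 3/sqrt(2) ~= 2.121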
p(y_k | x₁, ..., x_M) = p(y_k) p(x₁, ..., x_M | y_k) / p(x₁, ..., x_M) (13), where p(y_k | x₁, ..., x_M) is the posterior probability, whilst p(y_k) is the prior probability of y_k. Given that x_i is conditionally independent of x_j for i ≠ j, we have: p(y_k | x₁, ..., x_M) = (p(y_k) / p(x₁, ..., x_M)) ∏_{m=1}^{M} p(x_m | y_k) (14)
ieee_xplore
18,768
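Eqs. (13)-(14) factor the posterior as the prior times a product of per-feature likelihoods under the conditional-independence assumption. A toy sketch with binary features and made-up probability tables (class priors and p(x_m = 1 | y_k) below are hypothetical):

import numpy as np

prior = np.array([0.6, 0.4])            # p(y_k) for two classes
p_feat = np.array([[0.9, 0.2, 0.5],     # p(x_m = 1 | y_0) for three features
                   [0.3, 0.7, 0.5]])    # p(x_m = 1 | y_1)

def posterior(x):
    # Eq. (14): p(y_k | x) proportional to p(y_k) * prod_m p(x_m | y_k)
    lik = np.prod(np.where(x == 1, p_feat, 1.0 - p_feat), axis=1)
    unnorm = prior * lik
    return unnorm / unnorm.sum()        # normalizing by p(x_1, ..., x_M)

print(posterior(np.array([1, 0, 1])))   # posterior over the two classes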
final cluster segmentation result by alternating between the following two steps. Step 1: In iterative round r, assign each sample to a cluster. For n = 1, 2, ..., N and i, k = 1, 2, ..., K, if we have: s_i^{(r)} = {x_n : ∥x_n − μ_i^{(r)}
ieee_xplore
18,792
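A sketch of the two alternating k-means steps around the assignment rule s_i^(r) quoted above: Step 1 assigns each sample to its nearest centroid, Step 2 recomputes each centroid as its cluster mean. The data and initial centroids are made up, and empty clusters are not handled:

import numpy as np

def assign_step(X, mu):
    # Step 1: put each sample x_n into the cluster whose centroid mu_i is closest.
    dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)  # (N, K) distances
    labels = dists.argmin(axis=1)
    return [X[labels == i] for i in range(mu.shape[0])]

def update_step(clusters):
    # Step 2: recompute each centroid as the mean of its assigned samples.
    return np.vstack([c.mean(axis=0) for c in clusters])

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
mu = np.array([[1.0, 1.0], [4.0, 4.0]])
print(update_step(assign_step(X, mu)))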
related. Similarly, each succeeding component tends to have the next highest variance. These principal components can be generated by invoking the eigenvectors of the normalized covariance matrix. Specifically, let us consider N training samples {x₁, x₂, ..., x_N}, where x_n = [x_{n1}, x_{n2}, ..., x_{nM}]^T is
ieee_xplore
18,832
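A minimal sketch of the procedure described above: principal components obtained as eigenvectors of the centered covariance matrix and ordered so that the first component carries the highest variance. The data below are random placeholders:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # N samples x_n, each with M = 5 entries
Xc = X - X.mean(axis=0)                  # center the data
C = np.cov(Xc, rowvar=False)             # M x M covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns eigenvalues in ascending order
order = eigvals.argsort()[::-1]          # reorder: highest variance first
components = eigvecs[:, order]
scores = Xc @ components[:, :2]          # project onto the top two principal components
print(eigvals[order][:2], scores.shape)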
a reduced spatial resolution. At this stage, the size of the temporal dimension is already relatively small (3 for gray, gradient-x, gradient-y, and 2 for optflow-x and optflow-y), so we perform convolution only in the spatial dimension at this layer. The size of the convolution kernel used is 7 × 4 so
ieee_xplore
19,186
has very recently emerged. This field addresses a broad range of problems of significant practical interest, namely, the recovery of a data matrix from what appears to be incomplete, and perhaps even corrupted, information. In its simplest form, the problem is to recover a matrix from a small sample of
ieee_xplore
19,284
• System identification. In control, one would like to fit a discrete-time linear time-invariant state-space model x(t+1) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) to a sequence of inputs u(t) ∈ R^m and outputs y(t) ∈ R^p, t = 0, ..., N. The vector x(t) ∈ R^n is the state of the system at time t, and n is the order of
ieee_xplore
19,318
Even with the information that the unknown matrix M has low rank, this problem may be severely ill posed. Here is an example that shows why: let x be a vector in R^n and consider the n × n rank-1 matrix M = e₁x*, whose first row is (x₁, x₂, x₃, ..., x_{n−1}, x_n) and whose remaining rows are all zero.
ieee_xplore
19,343
reasonable random matrix models. There is, however, another condition that states the singular values of the unknown matrix cannot be too large or too small (the ratio between the top and lowest value must be bounded). This algorithm 1) trims each row and column with too many entries, i.e., replaces the
ieee_xplore
19,382
F, and since (III.6) gives ∥H_Ω∥_F ≤ 2δ, it suffices to bound ∥H_{Ω^c}∥_F. Note that by the Pythagorean identity, we have ∥H_{Ω^c}∥²_F = ∥P_T(H_{Ω^c})∥²_F + ∥P_{T⊥}(H_{Ω^c})∥²_F (III.7), and it is thus sufficient to bound each term in the right-hand side. We start with the second term. Let Λ be a dual
ieee_xplore
19,429
and let Ω be the range of P_Ω) defined by A := P_Ω P_T. Then assuming that the operator A*A = P_T P_Ω P_T mapping T onto T is invertible (which is the case under the hypotheses of Theorem 7), the least squares solution is given by M_Oracle := (A*A)^{−1} A*(Y) = M + (A*A)^{−1} A*(Z). (III.11)
ieee_xplore
19,439
(III.2) for some value of μ, and thus one could use (IV.1) to solve (III.2) by searching for the value of μ(δ) giving ∥P_Ω(M̂ − Y)∥_F = δ (assuming ∥P_Ω(Y)∥_F > δ). We use (IV.1) because it works well in practice and because the FPC algorithm solves (IV.1) nicely and accurately. We also
ieee_xplore
19,459
switches to monitor the information flow), the security of such networks is a big concern, especially for the applications where confidentiality has prime importance. Therefore, in order to operate WSNs in a secure way, any kind of intrusions should be detected before attackers can harm the network (i.e.,
ieee_xplore
19,489
sensor nodes) and/or information destination (i.e., data sink or base station). In this article, a survey of the state-of-the-art in Intrusion Detection Systems (IDSs) that are proposed for WSNs is presented. Firstly, detailed information about IDSs is provided. Secondly, a brief survey of IDSs proposed
ieee_xplore
19,490
with splitting themselves to get better clustering. Users need to specify a range of cluster numbers in which the true cluster number reasonably lies and then a model selection criterion, such as BIC or AIC, is used to do the splitting process. Although
ieee_xplore
19,819
for our proposed U-k-means clustering method. In Eq. (6), Σ_{s=1}^{c} α_s ln α_s is the weighted mean of ln α_k with the weights α₁, ..., α_c. For the k-th mixing proportion α_k^{(t)}, if ln α_k^{(t)} is less than the weighted mean, then the new mixing proportion α_k^{(t+1)} will become smaller than the old α_k^{(t)}. That is,
ieee_xplore
19,856
free parameters is not sufficient for the optimal solution. Instead, the single time direction networks try to make a tradeoff between "remembering" the past input information, which is useful for regression (classification), and "knowledge combining" of currently available input information. This results
ieee_xplore
20,032
recognition rate of 65.28% and is worse than the forward RNN structure using one segment delay. The bidirectional recurrent neural network (BRNN) structure results in the best performance (68.53%). III. PREDICTION ASSUMING DEPENDENT OUTPUTS In the preceding section, we have estimated the conditional
ieee_xplore
20,075
domain documents focus on different topics. Given specific domains D_S and D_T, when the learning tasks T_S and T_T are different, then either 1) the label spaces between the domains are different, i.e., Y_S ≠ Y_T, or 2) the conditional probability distributions between the domains are different, i.e., P(Y
ieee_xplore
20,193
₁, b₂, ..., b_s} are learned on the source domain data by solving the optimization problem (2) as shown as follows: min_{a,b} Σ_i ∥x_{S_i} − Σ_j a_{S_i}^j b_j∥²₂ + β∥a_{S_i}∥₁ s.t. ∥b_j∥₂ ≤ 1, ∀j ∈ 1, ..., s (2). In this equation, a_{S_i}^j is a new representation of basis b_j for
ieee_xplore
20,251
each problem can be solved by a linear classifier, as follows: f_l(x) = sgn(w_l^T · x), l = 1, ..., m. SCL can learn a matrix W = [w₁ w₂ ... w_m] of parameters. In the third step, singular value decomposition (SVD) is applied to the matrix W = [w₁ w₂ ... w_m]. Let W = UDV^T; then θ = U^T_{[1:h,:]}
ieee_xplore
20,317
approach, the feature-representation-transfer approach, the parameter-transfer approach, and the relational-knowledge-transfer approach, respectively. The former three contexts have an i.i.d. assumption on the data while the last context deals with transfer learning on relational data. Most of these
ieee_xplore
20,397
model can share them. On the other hand, for all approaches, now different models (i.e., different parameter sets) are fully independent. There are no caches for passing kernel elements from one model to another. VI. DISCUSSION AND CONCLUSION We note that a difference between all-together methods is
ieee_xplore
20,541
when accessed by the CPU (host). This architecture allows for two levels of parallelism: instruction (memory) level (i.e., MPs) and thread level (SPs). This SIMT (Single Instruction, Multiple Threads) architecture allows for thousands or tens of thousands of threads to be run concurrently, which is
ieee_xplore
20,662
bin-to-bin scores, filled with a single learnable parameter: S̄_{i,N+1} = S̄_{M+1,j} = S̄_{M+1,N+1} = z ∈ R. (8) While keypoints in A will be assigned to a single keypoint in B or the dustbin, each dustbin has as many matches as there are keypoints in the other set: N, M for dustbins in A, B respectively. We denote as a = [
ieee_xplore
20,838
the negative log-likelihood of the assignment P̄: Loss = −Σ_{(i,j)∈M} log P̄_{i,j} − Σ_{i∈I} log P̄_{i,N+1} − Σ_{j∈J} log P̄_{M+1,j}. (10) This supervision aims at simultaneously maximizing the precision and the recall of the matching. 3.4. Comparisons to related work The SuperGlue architecture is equivariant to permutation
ieee_xplore
20,845
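A sketch of the loss in Eq. (10): the negative log-likelihood over the ground-truth matches M plus dustbin terms for the unmatched keypoints I and J. The tiny augmented assignment matrix and the match sets below are hypothetical:

import numpy as np

def matching_nll(P, matches, unmatched_A, unmatched_B):
    # P has shape (M+1, N+1); the last row and column are the dustbins.
    M, N = P.shape[0] - 1, P.shape[1] - 1
    loss = -sum(np.log(P[i, j]) for i, j in matches)      # (i, j) in the ground-truth matches
    loss -= sum(np.log(P[i, N]) for i in unmatched_A)     # i in I: A-keypoints sent to the dustbin
    loss -= sum(np.log(P[M, j]) for j in unmatched_B)     # j in J: B-keypoints sent to the dustbin
    return loss

P = np.array([[0.7, 0.1, 0.2],
              [0.1, 0.2, 0.7],
              [0.2, 0.7, 0.1]])
print(matching_nll(P, matches=[(0, 0)], unmatched_A=[1], unmatched_B=[1]))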
Local features | Matcher | Pose estimation AUC @5° @10° @20° | P | MS: ORB | NN + GMS | 5.21 13.65 25.36 | 72.0 | 5.7; D2-Net | NN + mutual | 5.25 14.53 27.96 | 46.7 | 12.0; ContextDesc | NN + ratio test | 6.64 15.01 25.75 | 51.2 | 9.2; SIFT | NN + ratio test | 5.83 13.06 22.47 | 40.3 | 1.0; SIFT | NN + NG-RANSAC | 6.19 13.80 23.73 | 61.9 | 0.7
ieee_xplore
20,871
NN + OANet | 6.00 14.33 25.90 | 38.6 | 4.2; SuperGlue | 6.71 15.70 28.67 | 74.2 | 9.8; SuperPoint | NN + mutual | 9.43 21.53 36.40 | 50.4 | 18.8; NN + distance + mutual | 9.82 22.42 36.83 | 63.9 | 14.6; NN + GMS | 8.39 18.96 31.56 | 50.3 | 19.0; NN + PointCN | 11.40 25.47 41.41 | 71.8 | 25.5; NN + OANet | 11.76 26.90 43.85 | 74.0 | 25.7
ieee_xplore
20,872
analogy that SuperGlue "glues" together local features. Local features | Matcher | Pose estimation AUC @5° @10° @20° | P | MS: ContextDesc | NN + ratio test | 20.16 31.65 44.05 | 56.2 | 3.3; SIFT | NN + ratio test | 15.19 24.72 35.30 | 43.4 | 1.7; NN + NG-RANSAC | 15.61 25.28 35.87 | 64.4 | 1.9; NN + OANet | 18.02 28.76 40.31 | 55.0 | 3.7
ieee_xplore
20,888
2) crowding distance (i_distance). We now define a partial order ≺_n as: i ≺_n j if (i_rank < j_rank), or (i_rank = j_rank and i_distance > j_distance). That is, between two solutions with differing nondomination ranks, we prefer the solution with the lower (better) rank. Otherwise, if both solutions belong to the same front, then we prefer the solution that is located in a lesser crowded region.
ieee_xplore
20,990
A^d_{2^j} f (16). Equation (16) shows that A^d_{2^j} f can be computed by convolving ... the output. All the discrete approximations A^d_{2^j} f, for j
ieee_xplore
22,864
Fig. 10. (a) Decomposition of the frequency support of the image A^d_{2^{j+1}} f into A^d_{2^j} f and the detail images D_{2^j} f. The image A^d_{2^j} f corresponds to the lower horizontal and vertical frequencies of A^d_{2^{j+1}} f. D^1_{2^j} f gives the vertical high frequencies and horizontal low frequencies, D^2_{2^j} f the horizontal high
ieee_xplore
22,957
well to the challenges associated with Big Data velocity; its incremental learning nature alleviates challenges of data availability, real-time processing, i.i.d., and concept drift. For example, this paradigm could be used to handle stock data prediction due to the ever-changing and rapidly evolving
ieee_xplore
23,387
age by up to a factor of 1.5 in the HSV color space. 2.3. Inference Just like in training, predicting detections for a test image only requires one network evaluation. On PASCAL VOC the network predicts 98 bounding boxes per image and class probabilities for each box. YOLO is extremely fast at test
ieee_xplore
23,531
it is classified based on the type of error: • Correct: correct class and IOU > .5 • Localization: correct class, .1 < IOU < .5 • Similar: class is similar, IOU > .1. [Error breakdown: Fast R-CNN: Correct 71.6%, Loc 8.6%, Sim 4.3%, Other 1.9%, Background 13.6%; YOLO: Correct 65.5%, Loc 19.0%, Sim 6.75%, Other 4.0%, Background 4.75%]
ieee_xplore
23,572
dicted by YOLO and the overlap between the two boxes. The best Fast R-CNN model achieves a mAP of 71.8% on the VOC 2007 test set. When combined with YOLO, its mAP [table: model / mAP / Combined / Gain] Fast R-CNN: 71.8 / - / -; Fast R-CNN (2007 data): 66.9 / 72.4 / .6; Fast R-CNN (VGG-M): 59.2 / 72.4 / .6; Fast R-CNN (CaffeNet): 57.1 / 72.1 / .3
ieee_xplore
23,577
train-time region proposals (method, # boxes) | test-time region proposals (method, # proposals) | mAP (%): SS 2,000 | SS 2,000 | 58.7; EB 2,000 | EB 2,000 | 58.6; RPN+ZF, shared 2,000 | RPN+ZF, shared 300 | 59.9; ablation experiments follow below: RPN+ZF, unshared 2,000 | RPN+ZF, unshared 300 | 58.7; SS 2,000 | RPN+ZF 100 | 55.1
ieee_xplore
23,757
SS 2,000 | RPN+ZF 300 | 56.8; SS 2,000 | RPN+ZF 1,000 | 56.3; SS 2,000 | RPN+ZF (no NMS) 6,000 | 55.2; SS 2,000 | RPN+ZF (no cls) 100 | 44.6; SS 2,000 | RPN+ZF (no cls) 300 | 51.4; SS 2,000 | RPN+ZF (no cls) 1,000 | 55.8; SS 2,000 | RPN+ZF (no reg) 300 | 52.1; SS 2,000 | RPN+ZF (no reg) 1,000 | 51.3; SS 2,000 | RPN+VGG 300 | 59.2
ieee_xplore
23,758
RPN 300 07++12: mAP 70.4 | 84.9 79.8 74.3 53.9 49.8 77.5 75.9 88.5 45.6 77.1 55.3 86.9 81.7 80.9 79.6 40.1 72.6 60.9 81.2 61.5. RPN 300 COCO+07++12: mAP 75.9 | 87.4 83.6 76.8 62.9 59.6 81.9 82.0 91.3 54.9 82.6 59.0 89.0 85.5 84.7 84.1 52.2 78.9 65.5 85.4 70.2. For RPN, the train-time proposals for Fast R-CNN are 2,000. TABLE 6
ieee_xplore
23,774
For RPN, the train-time proposals for Fast R-CNN are 2,000. RPN* denotes the unshared feature version. TABLE 8 Detection Results of Faster R-CNN on PASCAL VOC 2007 Test Set Using Different Settings of Anchors. settings | anchor scales | aspect ratios | mAP (%): 1 scale, 1 ratio | 128² | 1:1 | 65.8; | 256² | 1:1 | 66.7
ieee_xplore
23,779
Assume that X ∈ R^{m×n} and the rank of X is r, i.e., rank(X) = r. The SVD of X is computed as X = UΣV^T (II.9), where U ∈ R^{m×r} with U^T U = I and V ∈ R^{n×r} with V^T V = I. The columns of U and V are called left and right singular vectors of X, respectively. Additionally, Σ is a diagonal matrix and its elements are composed of the singular
ieee_xplore
23,923
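A sketch of the thin SVD X = UΣV^T from Eq. (II.9), checking the orthonormality conditions U^T U = I and V^T V = I stated in the excerpt; numpy's svd is used and the example matrix is arbitrary:

import numpy as np

X = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])     # m x n with rank r = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)        # thin SVD
r = np.linalg.matrix_rank(X)
U_r, S_r, V_r = U[:, :r], np.diag(s[:r]), Vt[:r, :].T   # U in R^{m x r}, V in R^{n x r}

print(np.allclose(X, U_r @ S_r @ V_r.T))                # True: X is reconstructed exactly
print(np.allclose(U_r.T @ U_r, np.eye(r)), np.allclose(V_r.T @ V_r, np.eye(r)))  # True True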
Clearly, d_{l_t} is orthogonal to R_{t+1}, and then ∥R_t∥² = |⟨R_t, d_{l_t}⟩|² + ∥R_{t+1}∥² (IV.6). For the n-th iteration, the representation residual ∥R_n∥₂ ≤ τ, where τ is a very small constant, and the probe sample y can be formulated as: y = Σ_{j=1}^{n−1} ⟨R_j, d_{l_j}⟩ d_{l_j} + R_n (IV.7). If the representation residual is small enough, the probe
ieee_xplore
23,965
Step 1: Compute σ_t exploiting Eq. V.8 and σ_t ← mid(σ_min, σ_t, σ_max), where mid(·, ·, ·) denotes the middle value of the three parameters. Step 2: While Eq. V.9 is not satisfied, do σ_t ← γσ_t end. Step 3: z_{t+1} = (z_t − σ_t∇G(z_t))_+ and t = t + 1. End. Output: z_{t+1}, α. B. INTERIOR-POINT METHOD BASED SPARSE REPRESENTATION STRATEGY
ieee_xplore
23,995
gate gradient algorithm, and then the direction of linear search [Δα, Δσ] is obtained. Second, the Lagrange dual of problem III.12 is used to construct the dual feasible point and duality gap: a) The Lagrangian function and Lagrange dual of problem III.12 are constructed. The Lagrangian function is
ieee_xplore
24,005
determine an optimal step size of the Newton linear search. The stopping condition of the backtracking linear search is G(α + η^t Δα, σ + η^t Δσ) > G(α, σ) + ρη^t ∇G(α, σ)[Δα, Δσ] (V.20), where ρ ∈ (0, 0.5) and η^t ∈ (0, 1) is the step size of the Newton linear search. Finally, the termination condition of the Newton linear
ieee_xplore
24,008
and dual problems in III.12. First, an auxiliary variable is introduced to convert the problem in III.12 into a constrained problem with the form of problem V.22. Subsequently, the alternating direction method is used to efficiently address the sub-problems of problem V.22. By introducing the auxiliary
ieee_xplore
24,016
First, the first optimization problem V.24(a) is considered: arg min L(s, α^t, λ^t) = (1/2τ)∥s∥² + ∥α^t∥₁ − (λ^t)^T(s + Xα^t − y) + (µ/2)∥s + Xα^t − y∥²₂ = (1/2τ)∥s∥² − (λ^t)^T s + (µ/2)∥s + Xα^t − y∥²₂ + ∥α^t∥₁ − (λ^t)^T(Xα^t − y) (V.25). Then, it is known that the solution of problem V.25 with respect to s is given by s^{t+1} = (τ/(1 + µτ))(λ^t − µ(y − Xα^t)) (V.26)
ieee_xplore
24,019
(3) if −λ ≤ s_j ≤ λ, then α*_j = 0. So the solution of problem VI.6 is summarized as: α*_j = s_j − λ if s_j > λ; α*_j = s_j + λ if s_j < −λ; α*_j = 0 otherwise (VI.7). The equivalent expression of the solution is α* = shrink(s, λ), where the j-th component of shrink(s, λ) is shrink(s, λ)_j = sign(s_j) max{|s_j| − λ, 0}. The operator
ieee_xplore
24,036
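The shrink (soft-thresholding) operator of Eq. (VI.7) in a few lines; shrink(s, λ)_j = sign(s_j) · max{|s_j| − λ, 0}:

import numpy as np

def shrink(s, lam):
    # Component-wise soft thresholding.
    return np.sign(s) * np.maximum(np.abs(s) - lam, 0.0)

print(shrink(np.array([2.5, -0.3, 0.8, -1.7]), 1.0))   # [ 1.5  0.   0.  -0.7]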
f(α) ≈ (1/2)∥Xα^t − y∥²₂ + (α − α^t)^T X^T(Xα^t − y) + (1/2τ)(α − α^t)^T(α − α^t) = Q_t(α, α^t) (VI.11). Thus problem VI.8 using the proximal algorithm can be successively addressed by α^{t+1} = arg min Q_t(α, α^t) + λ∥α∥₁ (VI.12). Problem VI.12 is reformulated to a simple form of problem VI.6 by Q_t(α, α^t) = (1/2)∥Xα^t − y∥²₂ + (α − α^t)^T X^T(Xα^t − y) + (1/
ieee_xplore
24,040
imate the Hessian matrix of f(α), i.e., L(f) = 2λ_max(X^T X). Thus, problem VI.8 can be converted to the problem below: f(α) ≈ (1/2)∥Xα^t − y∥²₂ + (α − α^t)^T X^T(Xα^t − y) + (L/2)(α − α^t)^T(α − α^t) = P_t(α, α^t) (VI.15), where the solution can be reformulated as α^{t+1} = arg min (L/2)∥α − θ(α^t)∥²₂ + λ∥α∥₁ (VI.16), where θ(α^t) = α^t − (1/L)X^T(Xα^t − y).
ieee_xplore
24,046
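A sketch of the proximal iteration implied by Eqs. (VI.15)-(VI.16): α^{t+1} = shrink(θ(α^t), λ/L) with θ(α^t) = α^t − (1/L)X^T(Xα^t − y) and L = 2λ_max(X^T X). The problem instance, λ, and the iteration count are arbitrary illustrative choices:

import numpy as np

def ista(X, y, lam, n_iter=2000):
    # Minimizes (1/2)||X a - y||_2^2 + lam * ||a||_1 by iterative shrinkage.
    L = 2.0 * np.linalg.eigvalsh(X.T @ X).max()          # L = 2 * lambda_max(X^T X)
    alpha = np.zeros(X.shape[1])
    for _ in range(n_iter):
        theta = alpha - (X.T @ (X @ alpha - y)) / L      # gradient step
        alpha = np.sign(theta) * np.maximum(np.abs(theta) - lam / L, 0.0)  # shrink
    return alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))
truth = np.zeros(100)
truth[[3, 27, 61]] = [1.0, -2.0, 1.5]
alpha = ista(X, X @ truth, lam=0.1)
print(sorted(np.argsort(np.abs(alpha))[-3:]))            # largest entries tend to land on [3, 27, 61]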
tolerance ε = 10⁻⁵. Step 1: λ_t = max{γ∥X^T y_t∥_∞, λ}. Step 2: Exploit the shrinkage operator to solve problem VI.14, i.e., α_{i+1} = shrink(α_i − τ_i X^T(Xα_i − y), λ_t τ_i). Step 3: Update the value of 1/τ_{i+1} using Eq. VI.20. Step 4: If ∥α_{i+1} − α_i∥/∥α_i∥ ≤ ε, go to step 5; otherwise, return to step 2 and set i = i + 1. Step 5: y_{t+1} = y − Xα_{t+1}.
ieee_xplore
24,060
= R_{λ,1/2}(α + τX^T(y − Xα)) (VI.26) can be obtained, which is well-defined. θ(α) = α + τX^T(y − Xα) is denoted, and the resolvent operator can be explicitly expressed as: R_{λ,1/2}(x) = (f_{λ,1/2}(x₁), f_{λ,1/2}(x₂), ···, f_{λ,1/2}(x_N))^T (VI.27), where f_{λ,1/2}(x_i) = (2/3)x_i(1 + cos((2π/3) − (2/3)g_λ(x_i))) and g_λ(x_i) = arccos((λ/8)(|x_i|/3)^{−3/2}) (VI.28)
ieee_xplore
24,069
Input: Probe sample y, the measurement matrix X. Initialization: t = 0, ε = 0.01, τ = (1 − ε)/∥X∥². While not converged do Step 1: Compute θ(α_t) = α_t + τX^T(y − Xα_t). Step 2: Compute λ_t = (√96/(9τ))|[θ(α_t)]_{k+1}|^{3/2} as in Eq. VI.31. Step 3: Apply the half proximal thresholding operator to obtain the representation solution α_{t+1} =
ieee_xplore
24,076
Specifically, the sparse representation problem III.9 can be viewed as an equality constrained problem, and the equivalent problem III.12 is an unconstrained problem which augments the objective function of problem III.9 with a weighted constraint function. In this section, the augmented Lagrangian
ieee_xplore
24,078
(λ/2)∥y − Xα∥²₂ s.t. y − Xα = 0 (VI.32). Then, a new optimization problem VI.32 with the form of the Lagrangian function is reformulated as arg min L_λ(α, z) = ∥α∥₁ + (λ/2)∥y − Xα∥²₂ + z^T(y − Xα) (VI.33), where z ∈ R^d is called the Lagrange multiplier vector or dual variable and L_λ(α, z) is denoted as the augmented Lagrangian
ieee_xplore
24,080
where µ ∈ R^d is a Lagrangian multiplier and τ is a penalty parameter. Finally, the dual optimization problem VI.43 is solved, and a similar alternating minimization idea of PALM can also be applied to problem VI.43, that is, z^{t+1} = arg min_{z∈B₁^∞} L_τ(λ^t, z, µ^t)
ieee_xplore
24,088
indices of all the samples in X_Λ are all included in the support set Λ. If we analyze the KKT optimality condition for problem III.12, we can obtain the following two equivalent conditions of problem VII.1, i.e., X_Λ^T(y − Xα) = λu; ∥X_{Λ^c}^T(y − Xα)∥_∞ ≤ λ (VII.2), where Λ^c denotes the complementary set of the set Λ.
ieee_xplore
24,112
p_i = x_i^T(Xα − y), q_i = x_i^T Xδ, r_i = (1 − σ)w_i + σŵ_i and s_i = ŵ_i − w_i. Thus, at the l-th stage (if (X_Λ^T X_Λ)^{−1} exists), the update direction of the homotopy algorithm can be computed by δ_l = (X_Λ^T X_Λ)^{−1}(W − Ŵ)u on Λ, and 0 otherwise (VII.17). The step size which can lead to a critical point can be computed by τ*_l = min(τ⁺
ieee_xplore
24,144
framework of dictionary learning can be generally formulated as an optimization problem: arg min_{D∈Ω, x_i} {(1/N) Σ_{i=1}^{N} ((1/2)∥y_i − Dx_i∥²₂ + λP(x_i))} (VIII.1), where Ω = {D = [d₁, d₂, ···, d_M] : d_i^T d_i = 1, i = 1, 2, ···, M} (M here may not be equal to N), N denotes the number of the known data set (e.g., training samples in
ieee_xplore
24,169
problem VIII.2 is converted to arg min_X ∥Y − DX∥²_F s.t. ∥x_i∥₀ ≤ k, i = 1, 2, ···, N (VIII.3), which is called sparse coding, and k is the limit of sparsity. Then, its subproblem is considered as follows: arg min_{x_i} ∥y_i − Dx_i∥²₂ s.t. ∥x_i∥₀ ≤ k, i = 1, 2, ···, N, where we can iteratively resort to the classical sparse repre-
ieee_xplore
24,192
⟨D, C, X⟩ = arg min_{D,C,X} ∥[Y; √µH] − [D; √µC]X∥²_F + η∥C∥²_F s.t. ∥x_i∥₀ ≤ k (VIII.12). In consideration of the KSVD algorithm, each column of the dictionary will be normalized to an l₂-norm unit vector and [D; √µC] will also be normalized; then the penalty term ∥C∥²_F will be dropped out and problem VIII.12 will be
ieee_xplore
24,229
KSVD algorithm is applied to update Z atom by atom and compute X. Thus Z and X can be obtained. Then, the LC-KSVD algorithm normalizes the dictionary D, transform matrix A, and classifier parameter C by D′ = [d′₁, d′₂, ···, d′_M] = [d₁/∥d₁∥, d₂/∥d₂∥, ···, d_M/∥d_M∥], A′ = [a′₁, a′₂, ···, a′_M] = [a₁/∥d₁∥, a₂/∥d₂∥, ···, a_M
ieee_xplore
24,245
function for learning a structured discriminative dictionary, which is used for pattern classification. The general model of FDDL is formulated as J_{(D,X)} = arg min_{D,X} {f(Y, D, X) + µ∥X∥₁ + ηg(X)} (VIII.22), where Y is the matrix composed of input data, D is the
ieee_xplore
24,251
be sparsely represented over a dictionary D, i.e., the solution of the following problem is sufficiently sparse: arg min_α ∥α∥₀ s.t. Dα = z (VIII.34). An equivalent problem can be reformulated for a proper value of λ, i.e., arg min_α ∥Dα − z∥²₂ + λ∥α∥₀ (VIII.35)
ieee_xplore
24,312
practice, we do this by saving the features learned (e.g., at regular intervals during training, to perform early stopping) and training a cheap classifier on top (such as a linear classifier). However, training the final classifier can be a substantial computational overhead (e.g., supervised fine
ieee_xplore
24,935