Columns: text (string, lengths 301 to 426), source (string, 3 classes), __index_level_0__ (int64, 0 to 404k)
capturing the variations in the target output random variables of interest (e.g., classes). The optimization effect is more difficult to tease out because the top two layers of a deep neural net can just overfit the training set whether the lower layers compute useful features or not, but there are
ieee_xplore
24,959
these factors would typically be relevant for any particular example, justifying sparsity of representation. These factors are expected to be related to simple (e.g., linear) dependencies, with subsets of these explaining different random variables of interest (inputs, tasks) and varying in struc-
ieee_xplore
25,078
domain, while only unlabeled data D_T are available in the target domain. More specifically, let the source domain data be D_S = {(x_S1, y_S1), . . . , (x_Sn1, y_Sn1)}, where x_Si ∈ X is the input and y_Si ∈ Y is the corresponding output. Similarly, let the target domain data be D_T = {x_T1, . . . , x_Tn2}, where the
ieee_xplore
25,388
the distance (measured w.r.t. the MMD) between the projected source and target domain data while maximizing the embedded data variance. By virtue of the kernel trick, it can be shown that the MMD distance in Section III-A can be written as tr(KL), where K = [φ(x_i)^T φ(x_j)], and L_ij = 1/n1^2 if x_i, x_j ∈ X_S,
ieee_xplore
25,420
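The tr(KL) identity quoted above can be checked numerically. A minimal sketch, assuming the standard MMD coefficient matrix (+1/n1^2 within source, +1/n2^2 within target, -1/(n1*n2) across domains; the fragment shows only the within-source case) and a hypothetical helper `mmd_tr_KL`:

```python
import numpy as np

# Hedged sketch: empirical MMD expressed as tr(KL). The coefficient matrix L
# is assumed to be +1/n1^2 within source, +1/n2^2 within target, and
# -1/(n1*n2) across domains.
def mmd_tr_KL(Xs, Xt, kernel):
    n1, n2 = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    n = n1 + n2
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    L = np.full((n, n), -1.0 / (n1 * n2))
    L[:n1, :n1] = 1.0 / n1 ** 2
    L[n1:, n1:] = 1.0 / n2 ** 2
    return float(np.trace(K @ L))

linear = lambda a, b: float(np.dot(a, b))
Xs = np.array([[0.0, 1.0], [1.0, 0.0]])
Xt = np.array([[2.0, 1.0], [1.0, 2.0]])
# With a linear kernel this reduces to the squared distance between the
# empirical means: ||(0.5, 0.5) - (1.5, 1.5)||^2 = 2
print(mmd_tr_KL(Xs, Xt, linear))
```

With a linear kernel the trace collapses to the squared distance between the domain means, which makes the example easy to verify by hand.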
Σ_{(i,j)∈N} m_ij ||[W^T K]_i − [W^T K]_j||^2 = tr(W^T K 𝓛 K W). (11)

B. Formulation and Optimization Procedure

Combining all three objectives, we thus want to find a W that maximizes (10) while simultaneously minimizing (5) and (11). The final optimization problem can be written as

min_W tr(W^T K L K W) + μ tr(W^T W) + (λ/n^2) tr(W^T K 𝓛 K W)
ieee_xplore
25,461
208 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 22, NO. 2, FEBRUARY 2011

[Figure: (a) AED (unit: m) versus number of dimensions, comparing KPCA, SSTCA, TCA, and SSA; (b) AED (unit: m) versus number of unlabeled target-domain data used for training, including RLSR.]
ieee_xplore
25,548
72.33 (2.32) 75.86 (1.55) 76.70 (1.05) 81.59 (1.36) 90.51 (0.70) TCA linear 10 70.41 (6.84) 87.67 (2.12) 82.83 (3.07) 84.51 (5.04) 89.79 (2.54) 88.67 (3.01) kernel 20 69.04 (5.07) 81.52 (8.86) 83.86 (3.24) 82.23 (2.64) 89.73 (4.04) 92.03 (2.05) 30 69.01 (2.39) 77.26 (15.81)
ieee_xplore
25,565
84.44 (2.29) 79.81 (4.20) 91.81 (2.38) 92.23 (2.41) Laplacian 10 69.12 (13.27) 78.68 (8.23) 79.58 (8.12) 69.02 (5.31) 82.29 (2.57) 90.01 (1.02) kernel 20 69.37 (11.22) 79.57 (4.31) 78.71 (9.22) 74.71 (3.01) 85.59 (1.59) 92.68 (1.12) 30 68.58 (11.21) 80.43 (4.64) 76.69 (9.52)
ieee_xplore
25,566
74.40 (2.49) 87.89 (2.22) 92.63 (0.84) RBF 10 74.88 (3.51) 82.51 (7.65) 78.34 (6.58) 81.65 (4.08) 82.69 (2.24) 89.15 (0.69) kernel 20 72.60 (5.60) 77.47 (2.62) 78.09 (6.88) 79.54 (1.89) 83.51 (3.32) 90.77 (0.83) 30 71.64 (5.49) 77.62 (3.75) 80.11 (7.73) 79.50 (1.91) 83.71 (2.27)
ieee_xplore
25,567
91.58 (0.64) SSTCA linear 10 68.64 (3.00) 75.11 (11.93) 81.46 (3.59) 73.75 (7.55) 85.99 (3.22) 91.38 (2.22) kernel 20 64.28 (3.48) 60.69 (14.87) 77.45 (5.30) 78.19 (4.17) 86.71 (3.36) 91.81 (2.13) 30 65.08 (3.12) 66.30 (16.74) 77.98 (4.19) 72.79 (5.75) 85.81 (3.23) 93.38 (2.02)
ieee_xplore
25,568
Laplacian 10 75.29 (3.92) 79.84 (5.03) 72.70 (10.69) 73.10 (2.10) 85.26 (2.25) 91.72 (0.64) kernel 20 71.99 (4.73) 81.77 (3.44) 72.99 (9.99) 74.94 (2.28) 84.28 (1.27) 92.47 (0.74) 30 69.71 (4.99) 82.09 (4.42) 72.34 (10.82) 74.67 (1.79) 85.30 (1.80) 92.73 (0.76) RBF 10
ieee_xplore
25,569
73.76 (3.25) 74.50 (7.85) 78.51 (7.50) 77.61 (1.49) 83.09 (0.0287) 90.35 (1.18) kernel 20 70.87 (7.51) 75.49 (6.67) 79.28 (7.20) 79.46 (1.27) 80.02 (0.0287) 90.62 (0.83) 30 70.16 (5.98) 77.03 (5.56) 79.06 (7.60) 79.88 (1.52) 81.30 (0.0287) 90.21 (0.96) KPCA linear 10 68.66 (6.59)
ieee_xplore
25,570
88.26 (5.85) 68.59 (10.00) 81.42 (6.67) 87.33 (3.56) 91.24 (1.84) kernel 20 69.18 (6.27) 82.59 (7.07) 71.46 (7.41) 80.22 (3.81) 89.49 (3.34) 93.44 (1.92) 30 70.55 (2.81) 80.94 (11.63) 78.90 (8.33) 77.92 (4.32) 91.36 (1.51) 93.66 (1.81) Laplacian 10 44.43 (8.01) 81.52 (9.00)
ieee_xplore
25,571
54.42 (7.33) 80.37 (0.0252) 58.87 (4.97) 58.47 (2.35) kernel 20 49.08 (10.46) 55.67 (6.35) 50.42 (1.01) 72.67 (0.0252) 75.71 (6.83) 73.94 (3.75) 30 45.24 (8.17) 63.13 (7.76) 50.43 (1.03) 69.36 (0.0252) 75.07 (10.64) 74.18 (4.24) RBF 10 53.82 (6.23) 78.50 (4.23) 51.64 (2.11)
ieee_xplore
25,572
79.92 (0.0252) 57.84 (3.74) 56.82 (2.03) kernel 20 47.66 (8.19) 60.94 (10.97) 50.49 (1.00) 79.37 (0.0252) 67.73 (5.53) 62.36 (3.84) 30 47.82 (8.37) 69.13 (9.66) 51.86 (3.82) 72.31 (0.0252) 67.66 (4.48) 64.76 (5.14) SCL all+50 68.29 (1.22) 72.38 (2.36) 75.87 (1.48) 76.73 (1.00)
ieee_xplore
25,573
81.60 (1.35) 90.61 (0.64) KMM linear kernel all 69.81 (1.27) 72.86 (1.53) 75.29 (1.85) 76.38 (1.32) 78.17 (1.29) 88.06 (1.33) Laplacian kernel all 69.64 (1.27) 73.10 (1.67) 76.62 (1.23) 75.83 (1.27) 77.81 (1.21) 85.92 (0.70) RBF kernel all 69.65 (1.24) 73.07 (1.48) 76.63 (1.14)
ieee_xplore
25,574
fects of randomness are seen to be controlled. Some results of the particle swarm optimizer, using modifications derived from the analysis, are presented; these results suggest methods for altering the original algorithm in ways that eliminate some problems and increase the optimization power of the particle swarm. II. A
ieee_xplore
25,893
choice. The probabilities for messages are implicitly determined by stating our a priori knowledge of the enemy's language habits, the tactical situation (which will influence the probable content of the message) and any special information we may have regarding the cryptogram.
ieee_xplore
26,143
integers. Thus, for d = 5, we might have 2 3 1 5 4 as the permutation. This means that: m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 . . . becomes m2 m3 m1 m5 m4 m7 m8 m6 m10 m9 . . . . Sequential application of two or more transpositions will be called compound transposition. If the periods are d1, d2, . . . , ds it is clear that the result is
ieee_xplore
26,149
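The periodic transposition described in the fragment above is easy to state in code. A minimal sketch (the function names are mine, not from the text), using the d = 5 example permutation:

```python
# Sketch of a periodic transposition cipher (function names are mine).
def transpose(msg, perm):
    # perm is 1-indexed; the text's d = 5 example is (2, 3, 1, 5, 4)
    d = len(perm)
    full = len(msg) - len(msg) % d
    out = []
    for start in range(0, full, d):
        block = msg[start:start + d]
        out.extend(block[p - 1] for p in perm)
    out.extend(msg[full:])  # trailing partial block left unchanged
    return ''.join(out)

def compound(msg, perms):
    # sequential application of transpositions: compound transposition
    for perm in perms:
        msg = transpose(msg, perm)
    return msg

# m1..m10 -> m2 m3 m1 m5 m4 m7 m8 m6 m10 m9, as in the text
print(transpose("abcdefghij", (2, 3, 1, 5, 4)))  # bcaedghfji
```

Applying `compound` with periods d1 and d2 yields a single transposition whose period divides lcm(d1, d2), which is the direction the truncated sentence above is heading.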
6. Matrix System. One method of n-gram substitution is to operate on successive n-grams with a matrix having an inverse. The letters are assumed numbered from 0 to 25, making them elements of an algebraic ring. From the n-gram m1 m2 . . . mn of message, the matrix a_ij gives an n-gram of cryptogram e_i = Σ_j a_ij m_j,
ieee_xplore
26,159
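The matrix system above can be sketched directly: letters become residues mod 26 and each n-gram is multiplied by an invertible matrix. The 2×2 matrix below is a textbook illustration chosen by me, not taken from the text:

```python
import numpy as np

# Sketch of the matrix (n-gram) substitution: letters 0..25, each message
# n-gram m mapped to e = A m (mod 26). A is chosen invertible mod 26;
# A_inv is its inverse mod 26, so A @ A_inv == I (mod 26).
A = np.array([[3, 3], [2, 5]])
A_inv = np.array([[15, 17], [20, 9]])

def apply_matrix(M, text):
    # text length is assumed to be a multiple of the block size
    nums = [ord(c) - ord('a') for c in text]
    n = M.shape[0]
    out = []
    for i in range(0, len(nums), n):
        block = np.array(nums[i:i + n])
        out.extend(int(v) for v in (M @ block) % 26)
    return ''.join(chr(ord('a') + v) for v in out)

cipher = apply_matrix(A, "help")
print(cipher, apply_matrix(A_inv, cipher))  # hiat help
```

Deciphering applies the inverse matrix to each cryptogram n-gram, which is why the matrix must have an inverse in the ring of residues mod 26.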
674 BELL SYSTEM TECHNICAL JOURNAL

be the same as the M space, i.e. that the system be endomorphic. The fractional transposition is as homogeneous as the ordinary transposition without being endomorphic. The proper definition is the following: A cipher T is pure if for every T_i, T_j, T_k there is a T_s such that
ieee_xplore
26,208
identifying B with the message gives the second result. The last result follows from H_E(M) ≤ H_E(K, M) = H_E(K) + H_{E,K}(M) and the fact that H_{E,K}(M) = 0 since K and E uniquely determine M. Since the message and key are chosen independently we have: H(M, K) = H(M) + H(K). Furthermore, H(M, K) = H(E, K) = H(E) + H_E(K),
ieee_xplore
26,325
tions of the system. The lower limit is achieved if all the systems R, S, . . . , U go to completely different cryptogram spaces. This theorem is also proved by the general inequalities governing equivocation, H_A(B) ≤ H(B) ≤ H(A) + H_A(B). We identify A with the particular system being used and B with the key
ieee_xplore
26,333
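The equivocation inequalities quoted above, H_A(B) ≤ H(B) ≤ H(A) + H_A(B), can be spot-checked on any small joint distribution. A minimal sketch, with a joint distribution I chose arbitrarily for illustration:

```python
import math

# Spot-check H_A(B) <= H(B) <= H(A) + H_A(B) on an arbitrary joint
# distribution P over pairs (a, b).
P = {('a1', 'b1'): 0.4, ('a1', 'b2'): 0.1,
     ('a2', 'b1'): 0.2, ('a2', 'b2'): 0.3}

def H(dist):
    # Shannon entropy in bits of a dict of probabilities
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

PA, PB = {}, {}
for (a, b), p in P.items():
    PA[a] = PA.get(a, 0.0) + p
    PB[b] = PB.get(b, 0.0) + p

H_A_of_B = H(P) - H(PA)  # conditional entropy H_A(B) = H(A, B) - H(A)
print(H_A_of_B, H(PB), H(PA) + H_A_of_B)  # the three terms of the chain
```

The first inequality says conditioning never increases entropy; the second follows from H(B) ≤ H(A, B) = H(A) + H_A(B).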
for 0 and 1, and successive letters chosen independently. We have H_E(M) = H_E(K) = −Σ P(E) P_E(K) log P_E(K). The probability that E contains exactly s 0's in a particular permutation is: ½ (p^s q^(N−s) + q^s p^(N−s))
ieee_xplore
26,343
692 BELL SYSTEM TECHNICAL JOURNAL

m keys from high probability messages each with probability f. Hence the equivocation is: H_E(K) = Σ_m (k choose m) (S/T)^m (1 − S/T)^(k−m) log m. We wish to find a simple approximation to this when k is large. If the expected value of m, namely m̄ = Sk/T, is ≫ 1, the variation of log m
ieee_xplore
26,355
each L, weighted in accordance with its P_i. The mean equivocation characteristic will be a line somewhere in the midst of these ridges and may not give a very complete picture of the situation. This is shown in Fig. 11. A similar effect occurs if the system is not pure but made up of several systems
ieee_xplore
26,430
secrecy" afforded by the system. For a simple substitution on English the work and equivocation characteristics would be somewhat as shown in Fig. 12. The dotted portion of

[Figure] Fig. 12-Typical work and equivocation characteristics.
ieee_xplore
26,438
therefore be complex in the k_j, and involve many of them. Otherwise the enemy can solve the simple ones and then the more complex ones by substitution. From the point of view of increasing confusion, it is desirable to have the f_i involve several m_i, especially if these are not adjacent and hence less
ieee_xplore
26,521
(e-mail: cheewooi@iastate.edu; liu@iastate.edu; gmani@iastate.edu). Digital Object Identifier 10.1109/TPWRS.2008.2002298. Since the 1970s, the control center framework has gradually evolved from a closed monolithic structure to a more open networked environment. With the recent trend of using stan-
ieee_xplore
26,573
, represents the level of impact on a power system when a substation is removed, i.e., electrically disconnected, by switching actions due to the attack. The impact caused by an attack through an access point will be evaluated by a logic- and power flow-based procedure. The steady state probabilities
ieee_xplore
26,638
bound on transmission rate of 2, 3, and 4 b/s/Hz. It follows also from the above that there is a fundamental tradeoff between constellation size, diversity, and the transmission rate. We relate this tradeoff to the trellis complexity of the code. Lemma 3.3.1: The constraint length of an -space–time
ieee_xplore
27,246
uses the RBF kernel, whose two optimal hyperparameters σ and λ (the regularization parameter to balance the training and testing errors) can be determined by fivefold cross validation in the ranges σ = [2^-3, 2^-2, . . . , 2^4] and λ = [10^-2, 10^-1, . . . , 10^4]. 4) For the 1-D CNN, we use one convolutional block,
ieee_xplore
27,687
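The fivefold cross-validation grid described above can be sketched as follows. This is a stand-in under stated assumptions: kernel ridge regression on synthetic data, with only the σ and λ ranges and the fivefold scheme taken from the text; it is not the paper's actual model or data:

```python
import itertools
import numpy as np

def rbf_kernel(X, Y, sigma):
    # pairwise RBF kernel exp(-||x - y||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def cv_error(X, y, sigma, lam, folds=5):
    # fivefold cross-validation MSE of kernel ridge regression
    idx = np.arange(len(X))
    errs = []
    for f in range(folds):
        test = idx % folds == f
        Xtr, ytr, Xte, yte = X[~test], y[~test], X[test], y[test]
        K = rbf_kernel(Xtr, Xtr, sigma)
        alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
        pred = rbf_kernel(Xte, Xtr, sigma) @ alpha
        errs.append(float(np.mean((pred - yte) ** 2)))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)

sigmas = [2.0 ** k for k in range(-3, 5)]  # 2^-3 ... 2^4
lams = [10.0 ** k for k in range(-2, 5)]   # 10^-2 ... 10^4
best = min(itertools.product(sigmas, lams), key=lambda p: cv_error(X, y, *p))
print("selected (sigma, lambda):", best)
```

Exhaustive search over the 8 × 7 grid is cheap here; for larger grids one would typically use a library grid-search utility instead.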
including a 1-D convolutional layer with a filter size of 128, a BN layer, a ReLU activation layer, and a softmax layer with the size of P, where P denotes the dimension of network output. 3 https://www.csie.ntu.edu.tw/~cjlin/libsvm/ TABLE IV GENERAL NETWORK CONFIGURATION IN EACH LAYER OF OUR FUNET.
ieee_xplore
27,688