The National Defence Academy (NDA) 2016 examination is an all-India entrance examination conducted by the Union Public Service Commission (UPSC) to recruit candidates for the Indian Defence forces, which include the Indian Army, Indian Navy, and Indian Air Force, after class 12. The NDA examination is held twice a year: NDA 2016 (I) in April 2016 and NDA 2016 (II) in September 2016.

Important dates for NDA-I 2016
• Notification available from 2nd January 2016.
• Last date of registration: 29th January 2016, till 11:59 PM.
• Exam held on 17th April 2016.

Important dates for NDA-II 2016
• Notification available from 18th June 2016.
• Last date of registration: 15th July 2016, till 11:59 PM.
• Exam held on 18th September 2016.

Eligibility criteria
• The applicant must be a citizen of India, Nepal, or Bhutan, or a Tibetan migrant who came to India before 1st January 1962 with the intention of settling permanently in India.
• Only unmarried candidates are eligible to appear in the NDA 2016 exam.

Age limit
• Only candidates born not earlier than 2nd July 1997 and not later than 1st July 2000 are eligible to apply.

Educational qualifications
For the Indian Army wing:
• Interested candidates must have passed class 12 from a recognized board/university; candidates appearing in class 12 or an equivalent examination may also apply.
For the Air Force and Indian Navy:
• Candidates must have passed, or be appearing in, class 12 with Mathematics and Physics from a recognized board or university.

Application fee
• Candidates need to pay ₹100 for the application form.

Physical standards
• Candidates must be physically fit and must meet all the prescribed physical standards.

Exam pattern for NDA 2016
The NDA exam comprises two papers:
• Mathematics (Code 01): 2 1/2 hours, maximum 300 marks
• General Ability Test (Code 02): 2 1/2 hours, maximum 600 marks
• Total: 900 marks

General Ability Test
• The test consists of two sections, Section A and Section B.
• Section A contains English questions, and Section B contains general knowledge questions.

Mathematics
• The Mathematics paper carries 300 marks and consists of MCQs based on the syllabus of classes 10 and 11.
• There is negative marking for wrong responses.

Selection process
• Candidates are selected on the basis of their scores in the written test as well as the SSB interview, which generally lasts 4 to 5 days.
• Candidates who qualify in the written exam are called for the SSB interview; after all interview rounds are cleared, the final merit list is prepared, and selection is completed after the medical examination and document verification.

Result
• The result is declared about one month after the examination; candidates can check their result on the official website of UPSC.
• All information regarding the NDA I and NDA II results will be updated on this page.
auto_math_text
web
# Extracting quantum coherence via steering

## Abstract

As a precious resource for quantum information processing, quantum coherence can be created remotely if the two sites involved are quantum correlated. It can be expected that the amount of coherence created should depend on the amount of the shared quantum correlation, which is also a resource. Here, we establish an operational connection between coherence induced by steering and the quantum correlation. We find that the steering-induced coherence, quantified for example by the relative entropy of coherence or the trace norm of coherence, is bounded from above by a known quantum correlation measure, the one-sided measurement-induced disturbance. The condition under which the induced coherence saturates this upper bound varies for different measures of coherence. The tripartite scenario is also studied, and a similar conclusion is obtained. Our results provide operational connections between local and non-local resources in quantum information processing.

## Introduction

Quantum coherence, being at the heart of quantum mechanics, plays a key role in quantum information processing such as quantum algorithms1 and quantum key distribution2. Inspired by the recently proposed resource theory of quantum coherence3,4, research has focused on the quantification5,6 and evolution7,8 of quantum coherence, as well as its operational meaning5,9 and role in quantum information tasks10,11,12. When multipartite systems are considered, coherence is closely related to well-established quantum information resources, such as entanglement13 and discord-type quantum correlations14. It has been shown that the coherence of an open system is frozen under the same dynamical conditions where discord-type quantum correlations freeze15. Further, discord-type quantum correlation can be interpreted as the minimum coherence of a multipartite system on a tensor-product basis16. An operational connection between local coherence and non-local quantum resources (including entanglement17 and discord18) has been presented. It is shown that entanglement or discord between a coherent system and an incoherent ancilla can be built by using incoherent operations, and that the generated entanglement or discord is bounded from above by the initial coherence. The converse procedure is of equal importance: to extract coherence locally from a spatially separated but quantum correlated bipartite state. The extraction of coherence with the assistance of a remote party has been studied in the asymptotic limit19. In this paper, we ask how to extract coherence locally from a single copy of a bipartite state. Quantum steering has long been noted as a distinct nonlocal quantum effect20 and has attracted recent research interest both theoretically and experimentally21,22,23,24,25,26,27,28,29,30,31. It demonstrates that Alice can remotely change Bob's state by her local selective measurement if they are correlated, and it is hence a natural candidate for accomplishing the task of remote coherence extraction. In this paper, we study the coherence extraction induced by quantum steering and the quantum correlation involved.
Precisely, we introduce the quantity of steering-induced coherence (SIC) for bipartite quantum states. Here Bob is initially in an incoherent state but is quantum correlated with Alice. Alice's local projective measurement can thus steer Bob to a new state which might be coherent. The SIC is then defined as the maximal average coherence of Bob's steered states that can be created by Alice's selective projective measurement. When there is no obvious incoherent basis for Bob (for example, when Bob's system is a polarized photon), the definition can be generalized to an arbitrary bipartite system, where Bob's incoherent basis is chosen as the eigenbasis of his reduced state. In this case, the SIC can be considered a basis-free measure of Bob's coherence. The main result of this paper is to build an operational connection between the SIC and the quantum correlation shared between Alice and Bob. We prove that the SIC cannot surpass the initially shared B-side quantum correlation, quantified by a known quantum correlation measure called the measurement-induced disturbance (MID)32. States whose relative-entropy SIC can reach its upper bound are identified as maximally correlated states. For two-qubit states, while the trace-norm SIC can always reach the corresponding trace-norm B-side MID, we find an example of a two-qubit state whose relative-entropy SIC is strictly less than its relative-entropy B-side MID. This indicates that the condition for the SIC to reach the upper bound strongly depends on the measure of coherence. We further generalize the results to a tripartite scenario, where Alice can induce entanglement between Bob and Charlie in a controlled way. Since the coherence of a single party is generally more robust than quantum correlations involving two parties, our work provides a way to "store" quantum correlation as coherence. Besides, the coherent state induced by steering can be widely used for quantum information processing. Our results establish an intrinsic connection between coherence and quantum correlation via steering.

## Results

### Coherence and measurement-induced disturbance

A state is said to be incoherent on the reference basis Ξ if it can be written as3 . Let be the set of incoherent states on basis Ξ. The incoherent completely positive trace-preserving (ICPTP) channel is defined as , where the Kraus operators Kn satisfy . According to ref. 3, a proper coherence measure of a quantum state ρ on a fixed reference basis Ξ should satisfy the following three conditions. (C1) C(ρ, Ξ) = 0 iff . (C2) Monotonicity under selective measurements on average: , satisfying and , where , occurring with probability , is the state corresponding to outcome n. (C3) Convexity: . A candidate coherence measure is the minimum distance between ρ and the set of incoherent states, , where is a distance measure on quantum states and satisfies the following five conditions. (D1) D(ρ, σ) = 0 iff ρ = σ. (D2) Monotonicity under selective measurements on average: . (D3) Convexity: . (D4) , , where U is a unitary operation and denotes the projective measurement on basis Ξ: . (D5) . Conditions (D1-D3) ensure that (C1-C3) are satisfied by the coherence measure defined in Eq. (3). When (D4) is satisfied, the coherence of ρ on the reference basis Ξ can be written as . As proved in ref. 3, the relative entropy and the l1 matrix norm satisfy all the conditions (D1-D4), which makes the corresponding coherence measures and satisfy the conditions (C1-C3). As discovered recently33, the trace-norm distance does not satisfy (D2). Introduced in ref. 32, the MID characterizes the quantumness of correlations.
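Before turning to the MID, a small numerical illustration may help. The snippet below is added here for concreteness and is not part of the original paper; it evaluates the two coherence measures just mentioned, the l1-norm and the relative entropy of coherence, for an arbitrary single-qubit state on a fixed reference basis, assuming only NumPy.

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence: sum of absolute values of the off-diagonal elements."""
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def relative_entropy_coherence(rho):
    """Relative entropy of coherence: S(diag(rho)) - S(rho), in bits."""
    def entropy(m):
        w = np.linalg.eigvalsh(m)
        w = w[w > 1e-12]
        return -np.sum(w * np.log2(w))
    return entropy(np.diag(np.diag(rho))) - entropy(rho)

# arbitrary single-qubit example, written in the chosen reference basis
rho = np.array([[0.6, 0.2],
                [0.2, 0.4]], dtype=complex)
print(l1_coherence(rho))                # 0.4
print(relative_entropy_coherence(rho))  # ~0.12
```

Both quantities vanish for a diagonal (incoherent) state, as required by condition (C1).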
MID of a bipartite system ρ is defined as the minimum disturbance caused by local projective measurements that do not change the reduced states and where the infimum is taken over projective measurements which satisfy and , and is a distance on quantum states, which satisfies conditions (D1-D5) and further (D6) . It can be checked that (D6) can be satisfied by relative entropy but not satisfied by l1-norm. Comparing Eq. (5) with Eq. (4), we find MID is just the coherence of the bipartite state ρ on the local eigenbasis . For later convenience, we introduce B-side MID as goes to zero for B-side classical states, which can be written as , while is strictly positive for if . Notice that for one do not have a coherence interpretation. ### Definition of steering-induced coherence As shown in Fig. 1, Alice and Bob initially share a quantum correlated state ρ, and Bob’s reduced state ρB is incoherent on his own basis. Now Alice implements a local projective measurement on basis ΞA. When she obtains the result i (which happens with probability ), Bob is “steered” to a coherent state . We introduce the concept of SIC for characterizing Alice’s ability to create Bob’s coherence on average using her local selective measurement. ### Definition (Steering-induced coherence, SIC) For a bipartite quantum state ρ, Alice implements projective measurement on basis (). With probability , she obtains the result , which steers Bob’s state to . Let () be the eigenbasis of reduced states ρB. The steering-induced coherence is defined as the maximum average coherence of Bob’s steered states on the reference basis where the maximization is taken over all of Alice’s projective measurement basis ΞA, and the infimum over is taken when ρB is degenerate and hence is not unique. Since Bob’s initial state ρB is incoherent on its own basis , the SIC describes the maximum ability of Alice’s local selective measurement to create Bob’s coherence on average. We verify the following properties for . (E1) , and iff ρ is a B-side classical state. (E2) Non-increasing under Alice’s local completely-positive trace-preserving channel: . (E3) Monotonicity under Bob’s local selective measurements on average: satisfying , where and . (E4) Convexity: . Proof. Condition (E1) can be proved using the method in ref. 31, where it is proved that vanishes iff ρ is a B-side classical state. (E2) is verified by noticing that the local channel can not increase the set of Bob’s steered states, and hence the optimal steered states may not be steered to after the action of channel . The conditions (E3) and (E4) are directly derived from conditions (C2) and (C3) for coherence. ### Relation between SIC and MID Intuitively, Alice’s ability to extract coherence on Bob’s side should depend on the quantum correlation between them. The following theorem gives a quantitative relation between the SIC and quantum correlation measured by B-side MID . Theorem 1. When the distance measure in the definition of MID and coherence satisfies conditions (D1-D6), the SIC is bounded from above by the B-side MID, i.e., Proof. We start with the situation that ρB is non-degenerate and hence one do not need to take the infimum in Eqs (5) and (7). By definition, we have where . After Alice implements a selective measurement on basis ΞA, the average coherence of Bob’s state becomes The second equality holds because (condition (D5)) and . Since selective measurement does not increase the state distance (condition (D2)), we have , and hence Eq. (8) holds. 
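Before generalizing, a quick numerical sanity check of this bound can be made for the non-degenerate case just treated. The sketch below is not from the paper: it estimates the relative-entropy SIC of a two-qubit state by randomly sampling Alice's projective measurement bases and compares the result with the relative-entropy B-side MID computed on Bob's eigenbasis; only NumPy is assumed, and the test state is an arbitrary pure example.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

def partial_trace_A(rho):
    """Trace out Alice's qubit from a two-qubit density matrix (A first in the kron ordering)."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)

def dephase(rho, basis):
    """Drop off-diagonal terms of rho in the orthonormal basis given by the columns of `basis`."""
    return sum(np.outer(b, b.conj()) @ rho @ np.outer(b, b.conj()) for b in basis.T)

def b_side_mid(rho):
    """Relative-entropy B-side MID: S(Pi_B(rho)) - S(rho), dephasing B in the eigenbasis of rho_B."""
    _, basis = np.linalg.eigh(partial_trace_A(rho))
    projs = [np.kron(np.eye(2), np.outer(b, b.conj())) for b in basis.T]
    dephased = sum(P @ rho @ P for P in projs)
    return vn_entropy(dephased) - vn_entropy(rho)

def sic_relative_entropy(rho, n_samples=2000, seed=0):
    """Estimate the relative-entropy SIC by randomly sampling Alice's projective bases."""
    rng = np.random.default_rng(seed)
    _, basis_b = np.linalg.eigh(partial_trace_A(rho))
    best = 0.0
    for _ in range(n_samples):
        theta = np.arccos(rng.uniform(-1, 1))
        phi = rng.uniform(0, 2 * np.pi)
        a0 = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
        a1 = np.array([-np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)])
        avg = 0.0
        for a in (a0, a1):
            Pa = np.kron(np.outer(a, a.conj()), np.eye(2))
            p = np.real(np.trace(Pa @ rho))
            if p < 1e-9:
                continue
            steered = partial_trace_A(Pa @ rho @ Pa) / p
            avg += p * (vn_entropy(dephase(steered, basis_b)) - vn_entropy(steered))
        best = max(best, avg)
    return best

# arbitrary example: a pure, non-maximally entangled two-qubit state
psi = np.sqrt(0.7) * np.array([1, 0, 0, 0]) + np.sqrt(0.3) * np.array([0, 0, 0, 1])
rho = np.outer(psi, psi.conj())
print(sic_relative_entropy(rho), "<=", b_side_mid(rho))   # both close to ~0.88 bits here
```

For this pure state the sampled SIC approaches the MID value, i.e. the bound is numerically saturated, consistent with the discussion of maximally correlated states below.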
The generalization to degenerate state is straightforward. We choose to reach the infimum of , which may not be the optimal eigen-basis for . Hence we have . According to ref. 17, the coherence of a quantum system B can in turn be transferred to the entanglement between the system and an ancilla C by incoherent operations. The established entanglement, measured by the minimum distance between the state ρBC and a separable state as , is bounded from above by the initial coherence of B. Here is the set of separable states and the state distance D is required not to increase under trace-preserving channels , which is automatically satisfied when we combine conditions (D2) and (D3). This leads to the three-party protocol as shown in Fig. 2, where Alice’s local selective measurement can create entanglement between Bob and Charlie. In this protocol, Bob and Charlie try to build entanglement between them from a product state , but are limited to use incoherent operations. Since ρB is incoherent on his eigenbasis , Bob and Charlie can build only classically correlated state without Alice’s help. Now Alice implement projective measurement and on the outcome i, the state shared between Bob and Charlie is steered to which can be entangled. The following corollary of theorem 1 gives the upper bound of the steering-induced entanglement. Corollary 1 Alice, Bob and Charlie share a tripartite state ρ, which is prepared from the product state using an ICPTP channel on BC: . Here is the reference basis of coherence. Alice’s local selective measurement can establish entanglement between Bob and Charlie, and the established entanglement on average is bounded from above by the initial B-side MID between Alice and Bob Proof. Before Alice implement the measurement, the state shared between Bob and Charlie is incoherent on basis and hence can be written as . Apparently, , so Bob and Charlie is classically correlated. On the measurement outcome i, the entanglement between Bob and Charlie becomes which satisfies . Notice that and hence . Eq. (11) is arrived by noticing that from theorem 1. Now we consider a general tripartite state ρ. If the reduced state is non-degenerate, one can follow the same steps and prove that whenever ρBC is incoherent on basis . Here is the {BC}-side MID between Alice and the combination of Bob and Charlie. However, when ρBC is degenerate, the condition that the tripartite state ρ is prepared from by an ICPTP channel on BC is stringent. For example, the state where , with ρBC incoherent on basis , violates Eq. (12), since but the left-hand-side reaches unity for Alice’s measurement basis . It indicate that the state ρX can not be prepared from a product state in the form using only incoherent operations. ### States to reach the upper bound According to theorem 1, Bob’s maximal coherence that can be extracted by Alice’s local selective measurement is bounded from above by the initial quantum correlation between them. Since the relative entropy is the only distance measure found to date which satisfies all the conditions (D1-D6), we employ relative entropy as the distance in the definition of coherence and MID, and discuss the states which can reach the upper bound of theorem 1. Theorem 2. The SIC can reach B-side MID for maximally correlated states . Proof. Any maximally correlated state can be written in a pure state decomposition form with and . Here has eigenbasis . In order to calculate the B-side MID, we consider Bob’s projective measurement , which takes the bipartite state to . 
Apparently, . By definition, we have . In order to extract the maximum average coherence on Bob's side, Alice measures her quantum system on basis , where , and dA is the dimension of A. On the measurement result k, Bob's state is steered to , where , which happens with probability . Apparently, and hence . Meanwhile, we have . The coherence of the steered state is then for any outcome k. Therefore we arrive at Eq. (13). Any pure bipartite state can be written in a Schmidt decomposition form and hence belongs to the set of maximally correlated states. As introduced in ref. 17, a maximally correlated state ρmc is prepared from a product state using an incoherent unitary operator, and its entanglement E(ρmc) can reach the initial coherence of ρB. Further, for maximally correlated states, one can check the equality . Therefore, ρmc can be used in a scenario where coherence is precious and entanglement is not as robust as single-party coherence. Precisely, consider the situation where Alice and Bob share a maximally correlated state but do not need to use it immediately. To store the resource for later use, Alice can transfer the entanglement between them into Bob's coherence using her local selective measurement. Bob stores his coherent state as well as Alice's measurement results. When required, Bob can perfectly retrieve the entanglement by preparing a maximally correlated state using only incoherent operations.

### Two-qubit case: relation between the l1-norm SIC and the trace-norm B-side MID

One cannot define MID based on the l1-norm distance, since it does not satisfy (D6) in general. However, it can be checked that for single-qubit states ρB and σB, 34, where rρ and rσ are the Bloch vectors of ρB and σB respectively. Hence the l1-norm of coherence for a single-qubit state ρB can be written as . Besides, Dt, which satisfies condition (D6), is suitable as a distance measure for MID. Therefore, when Bob's particle is a qubit, it is meaningful to study the relation between the l1-norm SIC and the trace-norm B-side MID. Now we consider a two-qubit state ρ, employ in the definition of as in Eq. (7), and prove the following theorem.

Theorem 3. For a two-qubit state ρ, we have .

Proof. A two-qubit state can be written as , where the coefficient matrix can be written in the block form . For the non-degenerate case b ≠ 0, we choose the eigenbasis of ρB as the basis of the density matrix and hence b = (0, 0, b3). Further, a proper basis of qubit A is chosen such that the matrix T is in a triangular form with . We calculate the explicit forms of and and obtain . For the degenerate case b = 0, we can always choose a proper local basis such that T is diagonal. Here we impose T11 ≥ T22 ≥ T33 without loss of generality. Direct calculations lead to .

We check that, for the state , we have , but according to theorem 3, . This means that the relative entropy of coherence and the l1-norm of coherence are truly different measures of coherence.

## Discussion

In this paper, we have introduced the notion of the SIC, which characterizes the power of Alice's selective measurement to remotely create quantum coherence on Bob's side. A quantitative connection has been built between the SIC and the initially shared quantum correlation measured by the B-side MID. We show that the SIC is always less than or equal to the B-side MID. Our results are also generalized to a tripartite scenario where Alice can build entanglement between Bob and Charlie in a controlled way. Next, we discuss a potential application of the SIC in secret sharing.
Suppose Alice and Bob share a two-qubit state , whose SIC reaches unity. When Alice measures her state on different basis, Bob’s state is steered to, e.g., or with . The coherence of states in reach unity on basis and vise visa. Consequently, when we measure the states in the set on basis , the outcome is completely random. It is essential to quantum secret sharing using . In this sense, the SIC is potentially related to the ability for Alice to share secret with Bob. Coherence and various quantum correlations, such as entanglement and discord-like correlations, are generally considered as resources in the framework of resource theories9,35. By coining the concept of SIC, we present an operational interpretation between measures of those two resources, SIC and MID, and open the avenue to study their (ir)reversibility. The applications of various coherence quantities like SIC in many-body systems, as in the case of entanglement36,37,38, can be expected. How to cite this article: Hu, X. and Fan, H. Extracting quantum coherence via steering. Sci. Rep. 6, 34380; doi: 10.1038/srep34380 (2016). ## References 1. 1 Shor, P. Algorithms for quantum computation: discrete logarithms and factoring. In Goldwasser, S. (ed.) Proceedings. 35th Annual Symposium on Foundations of Computer Science (Cat. No. 94CH35717), 124–34 (IEEE Comput. Soc. Tech. Committee on Math. Found. Comput, 1994). Proceedings 35th Annual Symposium on Foundations of Computer Science, 20–22 Nov. 1994, Santa Fe, NM, USA. 2. 2 Bennett, C. H. Quantum cryptography using any two nonorthogonal states. Phys. Rev. Lett. 68, 3121–3124 (1992). 3. 3 Baumgratz, T., Cramer, M. & Plenio, M. B. Quantifying coherence. Phys. Rev. Lett. 113, 140401 (2014). 4. 4 Brandão, F. G. S. L. & Gour, G. Reversible framework for quantum resource theories. Phys. Rev. Lett. 115, 070503 (2015). 5. 5 Yuan, X., Zhou, H., Cao, Z. & Ma, X. Intrinsic randomness as a measure of quantum coherence. Phys. Rev. A 92, 022124 (2015). 6. 6 Girolami, D. Observable measure of quantum coherence in finite dimensional systems. Phys. Rev. Lett. 113, 170401 (2014). 7. 7 Ćwikliński, P., Studziński, M., Horodecki, M. & Oppenheim, J. Limitations on the evolution of quantum coherences: Towards fully quantum second laws of thermodynamics. Phys. Rev. Lett. 115, 210403 (2015). 8. 8 Zhou, Z.-Q., Huelga, S. F., Li, C.-F. & Guo, G.-C. Experimental detection of quantum coherent evolution through the violation of leggett-garg-type inequalities. Phys. Rev. Lett. 115, 113002 (2015). 9. 9 Winter, A. & Yang, D. Operational resource theory of coherence (2016). 10. 10 Leung, D., Li, K., Smith, G. & Smolin, J. A. Maximal privacy without coherence. Phys. Rev. Lett. 113, 030502 (2014). 11. 11 Yadin, B. & Vedral, V. A general framework for quantum macroscopicity in terms of coherence. Phys. Rev. A 93, 022122 (2016). 12. 12 Leung, D. & Yu, N. Maximum privacy without coherence, zero-error. ArXiv:1509.01300. 13. 13 Horodecki, R., Horodecki, P., Horodecki, M. & Horodecki, K. Quantum entanglement. Rev. Mod. Phys. 81, 865–942 (2009). 14. 14 Modi, K., Brodutch, A., Cable, H., Paterek, T. & Vedral, V. The classical-quantum boundary for correlations: Discord and related measures. Rev. Mod. Phys. 84, 1655–1707 (2012). 15. 15 Bromley, T. R., Cianciaruso, M. & Adesso, G. Frozen quantum coherence. Phys. Rev. Lett. 114, 210401 (2015). 16. 16 Yao, Y., Xiao, X., Ge, L. & Sun, C. P. Quantum coherence in multipartite systems. Phys. Rev. A 92, 022112 (2015). 17. 17 Streltsov, A., Singh, U., Dhar, H. S., Bera, M. N. 
& Adesso, G. Measuring quantum coherence with entanglement. Phys. Rev. Lett. 115, 020403 (2015). 18. 18 Ma, J., Yadin, B., Girolami, D., Vedral, V. & Gu, M. Converting coherence to quantum correlations. Phys. Rev. Lett. 116, 160407 (2016). 19. 19 Chitambar, E. et al. Assisted distillation of quantum coherence. Phys. Rev. Lett. 116, 070402 (2016). 20. 20 Schrödinger, E. Proc. Cambridge Philos. Soc. 31, 555 (1935). 21. 21 Wiseman, H. M., Jones, S. J. & Doherty, A. C. Steering, entanglement, nonlocality, and the einstein-podolsky-rosen paradox. Phys. Rev. Lett. 98, 140402 (2007). 22. 22 Saunders, D. J., Jones, S. J., Wiseman, H. M. & Pryde, G. J. Experimental epr-steering using bell-local states. Nat. Phys. 6, 845 (2010). 23. 23 Händchen, V. et al. Observation of one-way einstein-podolsky-rosen steering. Nat. Photon. 6, 596 (2012). 24. 24 Skrzypczyk, P., Navascués, M. & Cavalcanti, D. Quantifying einstein-podolsky-rosen steering. Phys. Rev. Lett. 112, 180404 (2014). 25. 25 Verstraete, F. Ph.D. thesis, Katholieke Universiteit Leuven (2002). 26. 26 Shi, M., Yang, W., Jiang, F. & Du, J. J. Phys. A: Math. Theor. 44, 415304 (2011). 27. 27 Jevtic, S., Pusey, M., Jennings, D. & Rudolph, T. Quantum steering ellipsoids. Phys. Rev. Lett. 113, 020402 (2014). 28. 28 Milne, A., Jevtic, S., Jennings, D., Wiseman, H. & Rudolph, T. New Journal of Physics 16, 083017 (2014). 29. 29 Milne, A., Jennings, D. & Rudolph, T. Geometric representation of two-qubit entanglement witnesses. Phys. Rev. A 92, 012311 (2015). 30. 30 Hu, X. & Fan, H. Effect of local channels on quantum steering ellipsoids. Phys. Rev. A 91, 022301 (2015). 31. 31 Hu, X., Milne, A., Zhang, B. & Fan, H. Quantum coherence of steered states. Scientific Reports 6 (2016). 32. 32 Luo, S. Using measurement-induced disturbance to characterize correlations as classical or quantum. Phys. Rev. A 77, 022301 (2008). 33. 33 Xiao-Dong Yu, G. F. X. D. M. T. & Da-Jian Zhang . An alternative framework for quantifying coherence. ArXiv:1606.03181. 34. 34 Shao, L.-H., Xi, Z., Fan, H. & Li, Y. Fidelity and trace-norm distances for quantifying coherence. Phys. Rev. A 91, 042120 (2015). 35. 35 Brandäo, F. G. S. L. & Gour, G. The general structure of quantum resource theories. Phys. Rev. Lett. 115, 070503 (2015). 36. 36 Cui, J. et al. Quantum phases with differing computational power. Nature Commun. 3, 812 (2012). 37. 37 Cui, J. et al. Local characterization of 1d topologically ordered states. Phys. Rev. B 88, 125117 (2013). 38. 38 Franchini, F. et al. Local convertibility and the quantum simulation of edge states in many-body systems. Phys. Rev. X 4, 041028 (2014). ## Acknowledgements This work was supported by NSFC under Grant Nos 11447161, 11504205 and 91536108, the Fundamental Research Funds of Shandong University under Grant No. 2014TB018, the National Key Basic Research Program of China under Grant No. 2015CB921003, and Chinese Academy of Sciences Grant No. XDB01010000. ## Author information Authors ### Contributions X.H. and H.F. contributed the idea. X.H. performed the calculations. X.H. and H.F. wrote the paper. All authors reviewed the manuscript and agreed with the submission. ### Corresponding author Correspondence to Xueyuan Hu. ## Ethics declarations ### Competing interests The authors declare no competing financial interests. ## Rights and permissions Reprints and Permissions Hu, X., Fan, H. Extracting quantum coherence via steering. Sci Rep 6, 34380 (2016). 
https://doi.org/10.1038/srep34380
auto_math_text
web
Volume 364 - European Physical Society Conference on High Energy Physics (EPS-HEP2019) - Detector R&D and Data Handling
Real-time alignment and temperature dependency of the LHCb Vertex Detector
B. Mitreska* on behalf of the LHCb collaboration
*corresponding author
Full text: Not available
Abstract
The accuracy of the LHCb Vertex Locator (VELO) position has ensured excellent detector performance, with a track reconstruction efficiency above 98% and a vertex resolution along the beam axis of about 70 $\mu$m. The real-time alignment and calibration procedure developed by the LHCb experiment for Run 2 (2015-2018) for the full detector, including the VELO, provided extremely stable conditions during the full data-taking period. In 2010, a significant shrinkage of the VELO modules was observed at the operating temperature of -30 degrees with respect to survey measurements made at ambient temperature. This has since been confirmed by laboratory measurements on a single module at different temperatures. In a recent study, using a dedicated LHCb data sample taken over a range of VELO temperatures, the variation of the detector position as a function of temperature has been evaluated. An overview of the VELO alignment procedure and its performance during Run 2 will be presented, with an emphasis on the study of the temperature dependence.
Open Access
auto_math_text
web
# Masking, Limiting, and Related Functions

There are filters which change the video in various ways, and then there are ways to change the filtering itself. There are likely hundreds of different techniques at your disposal for various situations, using masks to protect details from smoothing filters, blending two clips with different filtering applied, and countless others—many of which haven’t been thought of yet. This article will cover:

• Limiting
• Reference clips
• Expressions and Lookup Tables
• Runtime functions
• Pre-filtering

Masking refers to a broad set of techniques used to merge multiple clips. Usually one filtered clip is merged with a source clip according to an overlay mask clip. A mask clip specifies the weight for each individual pixel according to which the two clips are merged; see MaskedMerge for details. In practice, masks are usually used to protect details, texture, and/or edges from destructive filtering effects like smoothing; this is accomplished by masking the areas to protect, e.g. with an edgemask, and merging the filtered clip with the unfiltered clip according to the mask, such that the masked areas are taken from the unfiltered clip, and the unmasked areas are taken from the filtered clip. In effect, this applies the filtering only to the unmasked areas of the clip, leaving the masked details/edges intact.

Mask clips are usually grayscale, i.e. they consist of only one plane and thus contain no color information. In VapourSynth, such clips use the color family GRAY and one of these formats: GRAY8 (8 bits integer), GRAY16 (16 bits integer), or GRAYS (single precision floating point).

#### std.MaskedMerge

This is the main function for masking that performs the actual merging. It takes three clips as input: two source clips and one mask clip. The output will be a convex combination of the input clips, where the weights are given by the brightness of the mask clip. The following formula describes these internals for each pixel:

$$\mathrm{output} = \mathrm{clip~a} \times \frac{\mathit{max~value} - \mathrm{mask}}{\mathit{max~value}} + \mathrm{clip~b} \times \frac{\mathrm{mask}}{\mathit{max~value}}$$

where $$\mathit{max~value}$$ is 255 for 8-bit. In simpler terms: for brighter areas in the mask, the output will come from clip b, and for the dark areas, it’ll come from clip a. Grey areas result in an average of clip a and clip b. If premultiplied is set to True, the equation changes as follows:

$$\mathrm{output} = \mathrm{clip~a} \times \frac{\mathit{max~value} - \mathrm{mask}}{\mathit{max~value}} + \mathrm{clip~b}$$

Building precise masks that cover exactly what you want is often rather tricky. VapourSynth provides basic tools for manipulating masks that can be used to bring them into the desired shape:

#### std.Minimum/std.Maximum

The Minimum/Maximum operations replace each pixel with the smallest/biggest value in its 3x3 neighbourhood. The 3x3 neighbourhood of a pixel consists of the 8 pixels directly adjacent to the pixel in question plus the pixel itself.

(Illustration of the 3x3 neighbourhood.)

The Minimum/Maximum filters look at the 3x3 neighbourhood of each pixel in the input image and replace the corresponding pixel in the output image with the brightest (Maximum) or darkest (Minimum) pixel in that neighbourhood. Maximum generally expands/grows a mask because all black pixels adjacent to white edges will be turned white, whereas Minimum generally shrinks the mask because all white pixels bordering on black ones will be turned black. See the next section for usage examples.
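As a quick aside before those usage examples, the following NumPy sketch (purely illustrative, not how the VapourSynth filters are implemented internally) shows the effect of taking the minimum or maximum over each pixel's 3x3 neighbourhood on a small binary mask:

```python
import numpy as np

def neighborhood_op(mask, op):
    """Apply min/max over each pixel's 3x3 neighbourhood (edges handled by padding)."""
    pad = np.pad(mask, 1, mode='edge')
    stacked = np.stack([pad[y:y + mask.shape[0], x:x + mask.shape[1]]
                        for y in range(3) for x in range(3)])
    return op(stacked, axis=0)

mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 255                       # a 3x3 white square

grown  = neighborhood_op(mask, np.max)     # like std.Maximum: the square grows to 5x5
shrunk = neighborhood_op(mask, np.min)     # like std.Minimum: the square shrinks to 1x1
```

Chaining several such passes, as the dehalo example below does with vsutil.iterate, is the same grow/shrink idea applied repeatedly.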
Side note: In general image processing, these operations are known as Erosion (Minimum) and Dilation (Maximum). Maximum/Minimum actually implement only a specific case where the structuring element is a 3x3 square. The built-in morpho plug-in implements the more general case in the functions morpho.Erode and morpho.Dilate which allow finer control over the structuring element. However, these functions are significantly slower than std.Minimum and std.Maximum. TODO #### std.Binarize Split the luma/chroma values of any clip into one of two values, according to a fixed threshold. For instance, binarize an edgemask to white when edge values are at or above 24, and set values lower to 0: mask.std.Binarize(24, v0=0, v1=255) For methods of creating mask clips, there are a few general categories… These are used for normal edge detection, which is useful for processing edges or the area around them, like anti-aliasing and deringing. The traditional edge detection technique is to apply one or more convolutions, focused in different directions, to create a clip containing what you might call a gradient vector map, or more simply a clip which has brighter values in pixels where the neighborhood dissimilarity is higher. Some commonly used examples would be Prewitt (core), Sobel (core), and kirsch (kagefunc). There are also some edge detection methods that use pre-filtering when generating the mask. The most common of these would be TCanny, which applies a Gaussian blur before creating a 1-pixel-thick Sobel mask. The most noteworthy pre-processed edge mask would be kagefunc’s retinex_edgemask filter, which at least with cartoons and anime, is unmatched in its accuracy. This is the mask to use if you want edge masking with ALL of the edges and nothing BUT the edges. Another edge mask worth mentioning is the mask in dehalohmod, which is a black-lineart mask well-suited to dehalo masking. Internally it uses a mask called a Camembert to generate a larger mask and limits it to the area affected by a line-darkening script. The main mask has no name and is simply dhhmask(mode=3). The range mask (or in masktools, the “min/max” mask) also fits into this category. It is a very simple masking method that returns a clip made up of the maximum value of a range of neighboring pixels minus the minimum value of the range, as so: clipmax = core.std.Maximum(clip) clipmin = core.std.Minimum(clip) minmax = core.std.Expr([clipmax, clipmin], 'x y -') The most common use of this mask is within GradFun3. In theory, the neighborhood variance technique is the perfect fit for a debanding mask. Banding is the result of 8 bit color limits, so we mask any pixel with a neighbor higher or lower than one 8 bit color step, thus masking everything except potential banding. But alas, grain induces false positives and legitimate details within a single color step are smoothed out, therefore debanding will forever be a balancing act between detail loss and residual artifacts. #### Example: Build a simple dehalo mask Suppose you want to remove these halos: Screenshot of the source. Point-enlargement of the halo area. (Note that the images shown in your browser are likely resized poorly; you can view them at full size in this comparison.) Fortunately, there is a well-established script that does just that: DeHalo_alpha. However, we must be cautious in applying that filter, since, while removing halos reliably, it’s extremely destructive to the lineart as well. 
Therefore we must use a dehalo mask to protect the lineart and limit the filtering to halos. A dehalo mask aims to cover the halos but exclude the lines themselves, so that the lineart won’t be blurred or dimmed. In order to do that, we first need to generate an edgemask. In this example, we’ll use the built-in Sobel function. After generating the edge mask, we extract the luma plane:

```python
mask = core.std.Sobel(src, 0)
luma = vsutil.get_y(mask)  # extract the luma plane of the edge mask
```

Next, we expand the mask twice, so that it covers the halos. vsutil.iterate is a function in vsutil which applies the specified filter a specified number of times to a clip—in this case it runs std.Maximum 2 times.

```python
mask_outer = vsutil.iterate(luma, core.std.Maximum, 2)
```

Now we shrink the expanded clip back to cover only the lineart. Applying std.Minimum twice would shrink it back to the edge mask’s original size, but since the edge mask covers part of the halos too, we need to erode it a little further. The reason we use mask_outer as the basis and shrink it thrice, instead of using mask and shrinking it once, which would result in a similar outline, is that this way, small adjacent lines with gaps in them (i.e. areas of fine texture or details), such as the man’s eyes in this example, are covered up completely, preventing detail loss.

```python
mask_inner = vsutil.iterate(mask_outer, core.std.Minimum, 3)
```

Now we subtract the inner mask, covering only the lineart, from the outer mask, covering the halos and the lineart. This yields a mask covering only the halos, which is what we originally wanted:

```python
halos = core.std.Expr([mask_outer, mask_inner], 'x y -')
```

Next, we do the actual dehaloing:

```python
dehalo = hf.DeHalo_alpha(src)
```

Lastly, we use MaskedMerge to merge only the filtered halos into the source clip, leaving the lineart mostly untouched:

```python
masked_dehalo = core.std.MaskedMerge(src, dehalo, halos)
```

A diff(erence) mask is any mask clip generated from the difference of two clips. There are many different ways to use this type of mask: limiting a difference to a threshold, processing a filtered difference itself, or smoothing → processing the clean clip → overlaying the original grain. They can also be used in conjunction with line masks, for example: kagefunc’s hardsubmask uses a special edge mask with a diff mask, and uses core.misc.Hysteresis to grow the line mask into the diff mask.
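A minimal sketch of the basic diff-mask idea (not taken from any of the scripts mentioned; src and filtered are assumed 8-bit placeholder clips and 30 is an arbitrary threshold):

```python
import vapoursynth as vs
core = vs.core

# src: the unfiltered clip; filtered: e.g. a blurred or rescaled version of it (both assumed 8 bit)
diff = core.std.Expr([src, filtered], 'x y - abs')      # per-pixel absolute difference
diff_mask = core.std.Binarize(diff, threshold=30)       # keep only differences above an arbitrary threshold
diff_mask = core.std.Maximum(diff_mask).std.Inflate()   # grow slightly so the mask covers the whole artifact
merged = core.std.MaskedMerge(src, filtered, diff_mask)
```

Thresholding and growing the difference this way keeps the mask limited to the areas the filter actually changed.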
#### Example: Create a descale mask for white non-fading credits with extra protection for lines (16 bit input)

```python
src16 = kgf.getY(last)
src32 = fvf.Depth(src16, 32)

standard_scale = core.resize.Spline36(last, 1280, 720, format=vs.YUV444P16, resample_filter_uv='spline16')

inverse_scale = core.descale.Debicubic(src32, 1280, 720)
inverse_scale = fvf.Depth(inverse_scale, 16)

# absolute error of descaling
error = core.resize.Bicubic(inverse_scale, 1920, 1080)
error = core.std.Expr([src, error], 'x y - abs')

# create a light error mask to protect smaller spots against halos, aliasing and rings
error_light = core.std.Maximum(error, coordinates=[0,1,0,1,1,0,1,0])
error_light = core.std.Expr(error_light, '65535 x 1000 / /')
error_light = core.resize.Spline36(error_light, 1280, 720)

# create a large error mask for credits, limiting the area to white spots
# masks are always full-range, so manually set fulls/fulld to True or range_in/range to 1 when changing bitdepth
credits = core.std.Expr([src16, error], 'x 55800 > y 2500 > and 255 0 ?', vs.GRAY8)
credits = core.resize.Bilinear(credits, 1280, 720)
credits = core.std.Maximum(credits).std.Inflate().std.Inflate()
credits = fvf.Depth(credits, 16, range_in=1, range=1)

descale_mask = core.std.Expr([error_light, credits], 'x y -')

output = muvf.MergeChroma(output, standard_scale)
```

## Single and multi-clip adjustments with std.Expr and friends

VapourSynth’s core contains many such filters, which can manipulate one to three different clips according to a math function. Most, if not all, can be done (though possibly slower) using std.Expr, which will be covered at the end of this sub-section.

#### std.MakeDiff and std.MergeDiff

Subtract or add the difference of two clips, respectively. These filters are peculiar in that they work differently in integer and float formats, so for more complex filtering float is recommended whenever possible. In 8 bit integer format where neutral luminance (gray) is 128, the function is $$\mathrm{clip~a} - \mathrm{clip~b} + 128$$ for MakeDiff and $$\mathrm{clip~a} + \mathrm{clip~b} - 128$$ for MergeDiff, so pixels with no change will be gray. The same is true of 16 bit and 32768. The float version is simply $$\mathrm{clip~a} - \mathrm{clip~b}$$ so in 32 bit the difference is defined normally, negative for dark differences, positive for bright differences, and null differences are zero. Since overflowing values are clipped to 0 and 255, changes greater than 128 will be clipped as well. This can be worked around by re-defining the input clip as so:

```python
smooth = core.bilateral.Bilateral(src, sigmaS=6.4, sigmaR=0.009)
noise = core.std.MakeDiff(src, smooth)   # subtract filtered clip from source, leaving the filtered difference
smooth = core.std.MakeDiff(src, noise)   # subtract diff clip to prevent clipping (doesn't apply to 32 bit)
```

#### std.Merge

This function is similar to MaskedMerge, the main difference being that a constant weight is supplied instead of a mask clip to read the weight from for each pixel. The formula is thus just as simple:

$$\mathrm{output} = \mathrm{clip~a} \times (1 - \mathrm{weight}) + (\mathrm{clip~b} \times \mathrm{weight})$$

It can be used to perform a weighted average of two clips or planes. TODO

#### std.Lut and std.Lut2

May be slightly faster than Expr in some cases, otherwise they can’t really do anything that Expr can’t. You can substitute a normal Python function for the RPN expression, though, so you may still find it easier. See link for usage information.
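For instance, a simple sketch of the Python-function form (an arbitrary 0.9 exponent on an assumed 8-bit clip, not a recommendation):

```python
# brighten the luma plane of an 8-bit clip with a power-law curve via a lookup table
def curve(x):
    return min(255, round((x / 255) ** 0.9 * 255))

adjusted = core.std.Lut(clip, planes=[0], function=curve)
```

Because the table is precomputed once per clip, the per-frame cost is just an array lookup, which is where the potential speed advantage over Expr comes from.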
TODO

TODO

## Runtime filtering with FrameEval

TODO

#### Example: Strong smoothing on scene changes (i.e. for MPEG-2 transport streams)

```python
from functools import partial

src = core.d2v.Source()
src = ivtc(src)
src = haf.Deblock_QED(src)
ref = core.rgvs.RemoveGrain(src, 2)

# xvid analysis is better in lower resolutions
first = core.resize.Bilinear(ref, 640, 360).wwxd.WWXD()
# shift by one frame
last = core.std.DuplicateFrames(first, src.num_frames - 1).std.DeleteFrames(0)

def shiftback(n, f):
    both = f[0].copy()
    if f[1].props.SceneChange == 1:
        both.props.SceneChange = 1
    return both

# copy prop to last frame of previous scene
propclip = core.std.ModifyFrame(first, clips=[first, last], selector=shiftback)

def scsmooth(n, f, clip, ref):
    if f.props.SceneChange == 1:
        clip = core.dfttest.DFTTest(ref, tbsize=1)
    return clip

out = core.std.FrameEval(src, partial(scsmooth, clip=src, ref=ref), prop_src=propclip)
```

## Pre-filters

TODO

#### Example: Deband a grainy clip with f3kdb (16 bit input)

```python
src16 = last
src32 = fvf.Depth(last, 32)
# I really need to finish zzfunc.py :<

# 8-16 bit MakeDiff and MergeDiff are limited to 50% of full-range, so float is used here
clean32 = core.std.Convolution(src32, [1,2,1,2,4,2,1,2,1]).std.Convolution([1]*9, planes=[0])
grain = core.std.Expr([src32, clean32], 'x y - 0.5 +')
clean = fvf.Depth(clean32, 16)

deband = core.f3kdb.Deband(clean, 16, 40, 40, 40, 0, 0, keep_tv_range=True, output_depth=16)
# limit the debanding: f3kdb becomes very strong on the smoothed clip (or rather, becomes more efficient)
```
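The snippet ends before the grain is restored; one plausible continuation (an assumption about the intended next step, not part of the original guide) is to bring the debanded clip back to 32-bit float and add the stored difference back on top, undoing the +0.5 offset:

```python
# merge the original grain back onto the debanded clip (float math, so no 50% clipping limit)
deband32 = fvf.Depth(deband, 32)
regrained = core.std.Expr([deband32, grain], 'x y + 0.5 -')  # undo the +0.5 offset used when storing the grain
output = fvf.Depth(regrained, 16)
```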
auto_math_text
web
# Search for Excited Neutrinos at HERA
Abstract: A search for excited neutrinos produced in electron-proton collisions is performed using a data sample corresponding to an integrated luminosity of 114 $pb^{-1}$ recently collected by the H1 detector at HERA. In the absence of a signal, the measurement is interpreted within a minimal model parameterised in terms of couplings and a compositeness scale. New parameter regions, beyond the sensitivities of other colliders, are explored by the present preliminary analysis.
Document type: Conference papers
Cited literature [1 references]
http://hal.in2p3.fr/in2p3-00126056
Contributor: Dominique Girod
Submitted on: Tuesday, January 23, 2007 - 2:56:30 PM
Last modification on: Thursday, January 18, 2018 - 2:21:50 AM
Long-term archiving on: Tuesday, April 6, 2010 - 10:29:54 PM
### Identifiers
• HAL Id: in2p3-00126056, version 1
### Citation
C. Diaconu. Search for Excited Neutrinos at HERA. 14th International Workshop on Deep Inelastic Scattering (DIS 2006), Apr 2006, Tsukuba, Japan. ⟨in2p3-00126056⟩
auto_math_text
web
Robert W. Childress 13 posts World's most average software developer, learning low level coding for the good of the world. D3D Unproject Screen Coords -- and ray casting? Quick sanity check for my mouse selection process and a question about ray casting in general: I'm calling XMVector3Unproject twice, once with z dimension at ViewportMinZ (0.0) and once at ViewportMaxZ (1.0). I find the diff between near plane z and target entity z (which is fixed and known for now), turn that into a ratio over the total z distance between near and far plane, then multiply that ratio by the differences in x,y between the near and far plane to find the x,y at the target z depth. It looks something like this: vec3 pos = {0}; pos.z = (zDepth - rayPoints[0].z); vec3 diff = rayPoints[1] - rayPoints[0]; ASSERT(diff.z != 0.0f); float zRatio = pos.z/diff.z; ASSERT(zRatio >= 0.0f && zRatio <= 1.0f); pos.xy = rayPoints[0].xy + zRatio*diff.xy; return pos; This works splendidly for my current wonky prototyping, but I made this up on my own instead of copying something someone else did so I'm checking whether I should be doing this differently? ALSO, for ray casting in general (for when I don't know the depth of my target), would I do the same thing as I'm doing here but then brute force iterate over the positions of all entities in the scene testing their z depths to see if they're in the ray to aggregate a collection of entities that the ray passes through? Or is there a better approach? Thanks, Robert 188 posts / 1 project D3D Unproject Screen Coords -- and ray casting? Edited by Dawoodoz on Assert is not useful for division by zero checks, because of the low chance of being caught in debug mode. The best way to prevent rare runtime exceptions from causing crashes at the end user, is by handling all the cases to begin with and testing the edge cases in regression tests. Conditional move instructions only take one CPU cycle, so don't be afraid of using the ? operator in C/C++ when handling special cases. No branching will be performed even if you write it as a trivial if statement with an assignment inside. The standard way of making ray casts is by having broad-phases with height-fields (ground), grids (dense) and trees (sparse), so that one can skip whole groups of objects that have a combined hull not intersecting with the line. No need to involve the view frustum or depth buffer, because the line might have nothing to do with the player's line of sight and you will end up writing a general intersection eventually. You also need to filter line intersections based on item groups to allow passing through different items based on what is being tested with the ray intersection. Usually done efficiently using bit masks. If you then have a group of items containing glass and grass, their bit masks are combined with bitwise or operations. Then sight rays may have a bit mask checking for the grass and concrete bits, masking the item group with bitwise and. If the result is not zero, then one of the occluding groups was present (grass) and the group will check the items and sub-groups for their masks before performing the line intersections on individual items. 
```c
static const uint32_t group_glass = 1u << 0;
static const uint32_t group_grass = 1u << 1;
static const uint32_t group_stone = 1u << 2;

static const uint32_t groupMask_occluding = group_grass | group_stone;
static const uint32_t groupMask_bulletHit = group_glass | group_stone;
static const uint32_t groupMask_bulletStop = group_stone;
```
If you are using it for bullets or checking where one can walk for AI, you also need a thickness on the line. This can be added to the thickness of any convex shape being intersected.
auto_math_text
web
2 Replies Latest reply on Jun 15, 2010 12:54 AM by undraw # Uniformity Correction Are there any uniformity correction feature/API supported on ATI video cards? Hello all, I'm a new member here and I just wanted to ask if there are any uniformity correction API's on ATI video cards? • ###### Uniformity Correction Hi, Gamma correction, gamma nonlinearity, gamma encoding, or often simply gamma, is the name of a nonlinear operation used to code and decode luminance or tristimulus values in video or still image systems.[1] Gamma correction is, in the simplest cases, defined by the following power-law expression: $V_{\text{out}} = {V_{\text{in}}}^{\gamma}$ where the input and output values are non-negative real values, typically in a predetermined range such as 0 to 1. A gamma value $\gamma < 1\,$ is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely a gamma value $\gamma > 1\,$ is called a decoding gamma and the application of the expansive power-law nonlinearity is called gamma expansion. Regards. • ###### Uniformity Correction Gamma is set in the video card's LUT, however, it cannot compensate for nonuniformity if the monitor would have hardware inconsistencies. Example: the upper left block of the screen somewhat displays darker than the other blocks... This can be corrected by hardware (by the monitor manufacturer), however, there may still be some inconsistencies and the upper left block would still be darker... Are there any API's that would adjust all RGB values (possibly inside framebuffer) that would compensate for this nonuniformity in the upper left block of pixels alone? so that when the monitor receives the framebuffer, it would display higher values for the upper left block, and thus would increase the screen output for the block resulting in a uniform screen luminance... Thank you very much
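There is no standard driver API named in this thread for per-region correction, but conceptually the compensation being asked about is just a per-pixel gain map multiplied into the framebuffer before display (for example in a post-processing shader). A rough NumPy sketch of the idea, with an entirely made-up gain map that boosts the darker upper-left block:

```python
import numpy as np

height, width = 1080, 1920
frame = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)  # stand-in framebuffer

# hypothetical correction map: unity gain everywhere, ~8% boost in the darker upper-left block
gain = np.ones((height, width, 1), dtype=np.float32)
gain[:height // 2, :width // 2] = 1.08

corrected = np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```

In practice the gain map would come from a measured luminance profile of the panel rather than a fixed constant, and applying it in the GPU pipeline avoids touching the framebuffer on the CPU.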
auto_math_text
web
# Design and Characteristics of an Ultrasonic Linear Motor Using an $L_1$-$B_4$ Sandwich-Type Vibrator
• Published : 2000.11.01
#### Abstract
An ultrasonic linear motor consists of a slider and an ultrasonic vibrator which generates elliptical oscillations. The ultrasonic linear motor fabricated in this paper uses the first longitudinal (L1) and fourth bending (B4) vibration modes. In order to lower the driving voltage and improve the lifetime of the ultrasonic motor, we used stacked piezoceramics. The stacked piezoceramics are bonded to an aluminum elastic material. The finite element method was used to optimize the dimensions of the ultrasonic vibrator and the direction of the vibratory displacement. As a result of evaluating the characteristics of the ultrasonic linear motor, the no-load velocity was 0.204 m/s when the applied voltage was 70 $V_{rms}$ at the resonance frequency.
auto_math_text
web
# KDF3 - implementation explanation
An old project that I did not write uses the KDF3 algorithm, but I am struggling to find an explanation of how this key derivation function works. I know that the algorithm is specified in ISO 18033-2:2006, but the standard itself is quite expensive to obtain. Can someone explain it to me?
• Some suggestions: 1) if you have to maintain compatibility, ask your employer to buy the standard from your national standardization organ, or you can try local libraries where there often is a section for inter/intra-national standards and academic journals. 2) Consider functionally replacing the component with better alternatives such as ECIES (available for free from secg.org) or Curve-25519/448 (tools.ietf.org/html/rfc7748). – DannyNiu Feb 14 at 9:41
• Thanks for your suggestion. I do not plan to modify the current implementation. The algorithm is implemented on an embedded system with not much horsepower, so I don't think that asymmetric/ECC cryptography would be a good idea. – Bastien Feb 14 at 10:36
• There seems to be a well written version in BASIC (of all things) here. And here it is in Python. Looks similar, but pay attention to the pAmt and its size; I'm not 100% sure if that's correct in the Python code. – Maarten Bodewes Feb 14 at 13:15
auto_math_text
web
# ${{\boldsymbol H}^{0}}$ SIGNAL STRENGTHS IN DIFFERENT CHANNELS

The ${{\mathit H}^{0}}$ signal strength in a particular final state ${{\mathit x}}{{\mathit x}}$ is given by the cross section times branching ratio in this channel normalized to the Standard Model (SM) value, $\sigma$ $\cdot{}$ B( ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit x}}{{\mathit x}}$ ) $/$ ($\sigma$ $\cdot{}$ B( ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit x}}{{\mathit x}}$ ))$_{{\mathrm {SM}}}$, for the specified mass value of ${{\mathit H}^{0}}$. For the SM predictions, see DITTMAIER 2011, DITTMAIER 2012, and HEINEMEYER 2013A. Results for fiducial and differential cross sections are also listed below.

# ${{\boldsymbol c}}{{\overline{\boldsymbol c}}}$ Final State

VALUE: $\bf{<110}$ | CL%: 95 | DOCUMENT ID: AABOUD 2018M (1) | TECN: ATLS | COMMENT: ${{\mathit p}}{{\mathit p}}$, 13 TeV

(1) AABOUD 2018M use 36.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\mathit E_{{\mathrm {cm}}}$ = 13 TeV. The upper limit on ${\mathit \sigma (}$ ${{\mathit p}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit Z}}{{\mathit H}^{0}}{)}\cdot{}$B( ${{\mathit H}^{0}}$ $\rightarrow$ ${{\mathit c}}{{\overline{\mathit c}}}$ ) is 2.7 pb at 95$\%$ CL. The quoted values are given for ${\mathit m}_{{{\mathit H}^{0}}}$ = 125 GeV.

References:
AABOUD 2018M, PRL 120 211802: Search for the Decay of the Higgs Boson to Charm Quarks with the ATLAS Experiment
auto_math_text
web
/ nucl-ex CERN-PH-EP-2015-254 Direct photon production in Pb-Pb collisions at $\sqrt{s_\rm{NN}}$ = 2.76 TeV Pages: 25 Abstract: Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{_{\mathrm{NN}}}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_\mathrm{T} < 14$ GeV$/c$. Photons were detected with the highly segmented electromagnetic calorimeter PHOS and via conversions in the ALICE detector material with the $e^+e^-$ pair reconstructed in the central tracking system. The results of the two methods were combined and direct photon spectra were measured for the 0-20%, 20-40%, and 40-80% centrality classes. For all three classes, agreement was found with perturbative QCD calculations for $p_\mathrm{T} \gtrsim 5$ GeV$/c$. Direct photon spectra down to $p_\mathrm{T} \approx 1$ GeV$/c$ could be extracted for the 20-40% and 0-20% centrality classes. The significance of the direct photon signal for $0.9 < p_\mathrm{T} < 2.1$ GeV$/c$ is $2.6\sigma$ for the 0-20% class. The spectrum in this $p_\mathrm{T}$ range and centrality class can be described by an exponential with an inverse slope parameter of $(297 \pm 12^\mathrm{stat}\pm 41^\mathrm{syst})$ MeV. State-of-the-art models for photon production in heavy-ion collisions agree with the data within uncertainties. Note: *Temporary entry*; 25 pages, 6 captioned figures, 2 tables, authors from page 20, submitted to PLB, figures at http://aliceinfo.cern.ch/ArtSubmission/node/1887 Total numbers of views: 490 Numbers of unique views: 260
auto_math_text
web
# Arc-tunable Weyl Fermion metallic state in Mo$_x$W$_{1-x}$Te$_2$

## Abstract

Weyl semimetals may open a new era in condensed matter physics because they provide the first example of Weyl fermions, realize a new topological classification even though the system is gapless, exhibit Fermi arc surface states and demonstrate the chiral anomaly and other exotic quantum phenomena. So far, the only known Weyl semimetals are the TaAs class of materials. Here, we propose the existence of a tunable Weyl metallic state in Mo$_x$W$_{1-x}$Te$_2$ via our first-principles calculations. We demonstrate that a 2% Mo doping is sufficient to stabilize the Weyl metal state not only at low temperatures but also at room temperature. We show that, within a moderate doping regime, the momentum space distance between the Weyl nodes and hence the length of the Fermi arcs can be continuously tuned from zero up to a finite fraction of the Brillouin zone size by changing the Mo concentration, thus increasing the topological strength of the system. Our results provide an experimentally feasible route to realizing Weyl physics in the layered compound Mo$_x$W$_{1-x}$Te$_2$, where non-saturating magneto-resistance and pressure-driven superconductivity have been observed.

In 1929, H. Weyl noted that the Dirac equation takes a simple form if the mass term is set to zero Weyl (): $H = \pm v\,\boldsymbol{\sigma}\cdot\mathbf{k}$, with $\boldsymbol{\sigma} = (\sigma_{x},\sigma_{y},\sigma_{z})$ being the conventional Pauli matrices. Such a particle, the Weyl fermion, is massless but is associated with a definite chirality. Weyl fermions may be thought of as the basic building blocks for a Dirac fermion. They have played a vital role in quantum field theory but they have not been found as fundamental particles in vacuum. A Weyl semimetal is a solid state crystal that hosts Weyl fermions as its low-energy quasiparticle excitations Weyl (); Herring (); Abrikosov (); Volovik2003 (); Murakami2007 (); Wan2011 (); Ran (); Balents_viewpoint (); Burkov2011 (); Hosur (); Ojanen (); Ashvin (); Hasan2010 (); Qi2011 (); TI_book_2014 (); Hasan_Na3Bi (). Weyl semimetals have attracted intense research interest not only because they provide the only known example of a Weyl fermion in nature, but also because they can be characterized by a set of topological invariants even though the system is not a (topological) insulator. In a Weyl semimetal, a Weyl fermion is associated with an accidental degeneracy of the band structure. Away from the degeneracy point, the bands disperse linearly and the spin texture is chiral, giving rise to a quasiparticle with a two-component wavefunction, a fixed chirality and a massless, linear dispersion. Weyl fermions have distinct chiralities, either left-handed or right-handed. In a Weyl semimetal crystal, the chiralities of the Weyl nodes give rise to topological charges, which can be understood as monopoles and anti-monopoles of Berry flux in momentum space. Remarkably, the topological charges in a Weyl semimetal are protected only by the translational invariance of the crystal. The band structure degeneracies in Weyl semimetals are uniquely robust against disorder, in contrast to the Dirac nodes in graphene, topological insulators and Dirac semimetals, which depend on additional symmetries beyond the translational symmetry Hasan2010 (); Qi2011 (); Hasan_Na3Bi (); TI_book_2014 (); Graphene (). As a result, the Weyl fermion carriers are expected to transmit electrical currents effectively.
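To make the statement about chirality concrete, here is a minimal numerical sketch (not taken from the paper) for a generic linearized two-band Weyl node, $H(\mathbf{k}) = \sum_{ij} k_i A_{ij} \sigma_j$: the chiral charge, i.e. the strength of the Berry-flux monopole, is simply sign(det $A$), and the spectrum is linear and gapless away from the node. The velocity matrices below are illustrative assumptions.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def weyl_hamiltonian(k, A):
    """H(k) = sum_ij k_i A_ij sigma_j for a generic linearized Weyl node."""
    d = A.T @ np.asarray(k, dtype=float)  # coefficients of the Pauli matrices
    return sum(di * si for di, si in zip(d, paulis))

def chirality(A):
    """Chiral charge (Berry-flux monopole strength) of the node: sign(det A)."""
    return int(np.sign(np.linalg.det(A)))

# Illustrative velocity matrices (assumptions, not values from the paper).
A_right = np.diag([1.0, 1.0, 1.0])    # H = k . sigma, chirality +1
A_left  = np.diag([1.0, 1.0, -1.0])   # one velocity component flipped, chirality -1

for name, A in (("right-handed", A_right), ("left-handed", A_left)):
    k = np.array([0.01, 0.02, 0.03])
    evals = np.linalg.eigvalsh(weyl_hamiltonian(k, A))
    print(f"{name}: chirality = {chirality(A):+d}, "
          f"E(k) = {evals.round(4)} (linear, gapless away from k = 0)")
```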
Moreover, the transport properties of Weyl semimetals are predicted to show many exotic phenomena including the negative magnetoresistance due to the chiral anomaly known from quantum field theory, non-local transport and quantum oscillations where electrons move in real space between opposite sides of a sample surface Hosur (); Ojanen (); Ashvin (). These novel properties suggest Weyl semimetals as a flourishing field of fundamental physics and future technology. The separation of the opposite topological charges in momentum space leads to surface state Fermi arcs which form an anomalous band structure consisting of open curves that connect the projections of opposite topological charges on the boundary of a bulk sample. Without breaking symmetries, the only way to destroy the topological Weyl phase is to annihilate Weyl nodes with opposite charges by bringing them together in space. Thus the length of the Fermi arc provides a measure of the “topological strength” of a Weyl state. For many years, research on Weyl semimetals has been held back due to the lack of experimentally feasible candidate materials. Recently, it was proposed that a family of isostructural compounds, TaAs, NbAs, TaP and NbP, are Weyl semimetals Huang2015 (); Hasan_TaAs (); NbAs_Hasan (); Weng2015 (). Shortly after the theoretical prediction, the first Weyl semimetal was experimentally discovered in TaAs Hasan_TaAs (). So far, the TaAs class of four iso-electronic compounds remains to be only experimentally realized Weyl semimetals Hasan_TaAs (); TaAs_Ding (); TaAs_Ding_2 (); NbAs_Hasan (). Tungsten ditelluride, WTe, has an inversion symmetry breaking crystal structure, and exhibits a compensated semi-metallic ground state WT-tran-1 (); WT-ARPES-1 (); WT-ARPES-2 (); WT-ARPES-3 (). The coexistence of inversion symmetry breaking and semimetallic transport behavior resembles the properties of TaAs and hence suggests a possible Weyl semimetal state. Here, we propose a tunable Weyl metallic state in Mo-doped WTe via our first-principles calculation, where the length of the Fermi arc and hence the topological strength of the system can be adiabatically tuned as a function of Mo doping. A very recent paper WT-Weyl () predicted the Weyl state in pure WTe but the separation between Weyl nodes was reported to be beyond spectroscopic experimental resolution. We demonstrate that a 2% Mo doping is sufficient to stabilize the Weyl metal state not only at low temperatures but also at room temperatures. We show that, within a moderate doping regime, the momentum space distance between the Weyl nodes and hence the length of the Fermi arcs can be continuously tuned from zero to of the Brillouin zone size via changing Mo concentration, thus increasing the topological strength of the system. Our results present a tunable topological Weyl system, which is not known to be possible in the TaAs class of Weyl semimetals. We computed the electronic structures using the projector augmented wave method PAW-1 (); PAW-2 () as implemented in the VASP package VASP () within the generalized gradient approximation (GGA) schemes Perdew (). For WTe, experimental lattice constants were used WT-Structure (). For MoTe, we assumed that it has the same crystal structure as WTe and calculated the lattice constants self-consistently (). A MonkhorstPack -point mesh was used in the computations. The spin-orbit coupling effects were included in calculations. 
In order to systematically calculate the surface and bulk electronic structure, we constructed a first-principles tight-binding model Hamilton for both WTe and MoTe, where the tight-binding model matrix elements were calculated by projecting onto the Wannier orbitals wan-1 (); wan-2 (); wan-3 (), which used the VASP2WANNIER90 interface Franchini (). The electronic structure of the samples with finite dopings was calculated by a linear interpolation of tight-binding model matrix elements of WTe and MoTe. The surface state electronic structure was calculated by the surface Green’s function technique, which computes the spectral weight near the surface of a semi-infinite system. We used W (Mo) and orbitals and Te orbitals to construct Wannier functions without using the maximizing localization procedure. WTe crystalizes in an orthorhombic Bravais lattice, space group (31). In this structure, each tungsten layer is sandwiched by two tellurium layers and form strong ionic bonds. The left panel of Fig. 1(a) shows a top view of the lattice. It can be seen that the tungsten atom is shifted away from the center of the hexagon formed by the tellurium atoms. This makes the in-plane lattice constant along the direction () longer than that of along the direction (). The WTe sandwich stacks along the out of plane direction, with van der Waals bonding between layers (the right panel of Fig. 1(a)). We used the experimental lattice constants reported in Ref. WT-Structure (), Å, Å, Å. The bulk Brillouin zone (BZ) and the (001) surface BZ are shown in Fig. 1(b), where high symmetry points are noted. In Fig. 1(c), we show the bulk band structure of WTe along important high symmetry directions. Our calculation shows that there is a continuous energy gap near the Fermi level, but the conduction and valence bands have a finite overlap in energy. The band gap along the direction is much smaller than that of along the direction or direction, consistent with the fact that the lattice constant is much smaller than and . At the Fermi level, our calculation reveals a hole pocket and an electron pocket along the direction (in Fig. 1(c)), which agrees with previous calculation and photoemission results WT-tran-1 (); WT-ARPES-1 (); WT-ARPES-2 (); WT-ARPES-3 (). We also calculated the band structure of MoTe by assuming that it is in the same crystal structure. As shown in Fig. 1(d), the general trend is that the bands are “pushed” closer to the Fermi level. For example, in MoTe, there are bands crossing the Fermi level even along the and directions. We emphasize that, according to available literature WT-Structure (); MT-Structure (), MoTe has a different crystal structure, either hexagonal MT-Structure () or monoclinic WT-Structure () both of which have inversion symmetry, but not orthorhombic. Thus in our calculation we assumed that MoTe has the orthorhombic crystal structure as WTe and obtained the lattice constants and atomic coordinates from first-principle calculations. Very recently, a paper MT-SC () claimed that MoTe can be grown in the orthorhombic structure. This still needs to be further confirmed. We now calculate the band structure of pure WTe throughout the bulk BZ based on the lattice constants reported in WT-Structure (). Our results show that pure WTe has a continuous energy gap throughout the bulk BZ without any Weyl nodes. The point that corresponds to the minimal gap is found to be close to the (Fig. 2(a)) axis. The minimal gap of WTe is meV (Fig. 2(c)). 
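The doping-dependent electronic structure described above rests on a simple step: the Wannier tight-binding matrix elements of the two end compounds are interpolated linearly in the Mo concentration x. Below is a minimal sketch of that step; the hopping dictionaries and the toy two-orbital model are placeholders standing in for the actual multi-orbital wannier90 output, not the authors' code.

```python
import numpy as np

def interpolate_hoppings(h_wte2: dict, h_mote2: dict, x: float) -> dict:
    """Virtual-crystal-style interpolation of Wannier hoppings for Mo_x W_(1-x) Te_2.

    h_wte2 / h_mote2 map a lattice vector R (tuple of ints) to a hopping matrix H(R);
    both models are assumed to share the same Wannier basis and the same R grid.
    """
    return {R: (1.0 - x) * h_wte2[R] + x * h_mote2[R] for R in h_wte2}

def bloch_hamiltonian(hoppings: dict, k: np.ndarray) -> np.ndarray:
    """H(k) = sum_R exp(2*pi*i k.R) H(R), with k in reduced (fractional) coordinates."""
    dim = next(iter(hoppings.values())).shape[0]
    hk = np.zeros((dim, dim), dtype=complex)
    for R, hr in hoppings.items():
        hk += np.exp(2j * np.pi * np.dot(k, R)) * hr
    return hk

# Toy two-orbital models standing in for the real Wannier Hamiltonians (assumption).
rng = np.random.default_rng(0)
def toy_model(scale: float) -> dict:
    onsite = scale * np.diag([0.2, -0.2])
    hop = 0.1 * scale * rng.standard_normal((2, 2))
    return {(0, 0, 0): onsite, (1, 0, 0): hop, (-1, 0, 0): hop.conj().T}

h_w, h_mo = toy_model(1.0), toy_model(0.8)
for x in (0.0, 0.2, 1.0):
    hk = bloch_hamiltonian(interpolate_hoppings(h_w, h_mo, x), np.array([0.25, 0.0, 0.0]))
    print(f"x = {x:.1f}: bands at k = (0.25, 0, 0) ->", np.linalg.eigvalsh(hk).round(3))
```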
We note that the discrepancy between our results and Ref. WT-Weyl () is due to the slightly different values of the lattice constants WT-Structure (); WT-Structure-2 (). The lattice constants used in Ref. WT-Weyl () were at low temperatures WT-Structure-2 (). Thus the results WT-Weyl () better refelct the groundstate () of WTe. We used the lattice constants at room temperatures WT-Structure (), so our results correspond to the state of WTe at elevated temperatures. The difference between our results and Ref. WT-Weyl () shows from another angle that WTe is very close to the phase boundary between the Weyl state and the fully gapped state. For many purposes, it is favorable to have the Weyl state in a material robust at elevated (room) temperatures. Here, we use the room temperature lattice constants for all of our calculations at all Mo concentrations. We also note that the very small difference of the lattice constant value does not play a role except for undoped or very lightly doped samples , where the separation of the Weyl nodes is beyond experimental resolution anyway. We propose Mo doped WTe, MoWTe, as an experimental feasible platform to realize Weyl state in this compound. We have shown that pure WTe is very close to the phase transition boundary. Therefore, the splitting between the Weyl nodes would be beyond experimental resolution. On the other hand, another very recent paper proposed a Weyl state in pure MoTe MT-Weyl (), but as shown above the existence of the orthorhombic MoTe needs to be confirmed. By contrast, we show that the moderately Mo doped WTe sample have a number of advantages, making it experimentally feasible. First, pure MoTe has many irrelevant band crossing the Fermi level along the and directions, whereas the band structure of moderately Mo-doped system is as clean as pure WTe (Figs. 1(c-e)). Second, as we will show below, a moderate Mo doping leads to a space separation of the Weyl nodes that is similarly large as pure MoTe. Therefore, we propose the Mo- doped WTe as a better platform for studying Weyl physics. Fig. 2(e) shows the evolution of the space distance between a pair of Weyl nodes as a function of Mo concentration. Our calculation shows that a 2% Mo doping is sufficient to stabilize the system in the Weyl metal state. Also, the distance between the Weyl nodes increases rapidly at the small doping regime. At a moderate doping , the space distance is found to be as large as . As one further increases the doping concentration, the distance seems saturated. The distance is about at . The energy difference between the pair of Weyl nodes is shown in Fig. 2(g). In Fig. 2(d) we show the dispersion along the momentum space cut that goes through the direct pair of Weyl nodes as defined in Fig. 2(b). It can be seen clearly that two singly generate bands, b2 and b3, cross each other and form the two Weyl nodes with opposite chiralities. We name the Weyl node at lower energy as W1 and the Weyl node at higher energy as W2. Another useful quantity is the energy difference between the extrema of these two bands. This characterizes the magnitude of the band inversion, as shown in Fig. 2(f). It is interesting to note that, in contrast to the space distance between the Weyl nodes (Fig. 2(e)), the energy difference between the Weyl nodes (Fig. 2(g)) and the band inversion energy (Fig. 2(f)) does not show signs of saturation as one increases the Mo concentration up to . In Fig. 2(h), we show a schematic for the distribution of the Weyl nodes in Mo doped WTe. 
We observe a pair of Weyl nodes in each quadrant of the plane. Thus in total there are 4 pairs of Weyl nodes on the plane. A critical signature of a Weyl semimetal/metal is the existence of Fermi arc surface states. We present calculations of the (001) surface states in Fig. 3. We choose the 20% Mo-doped system, MoWTe. Figure 3(a) shows the surface energy dispersion along the momentum space cut that goes through the direct pair of Weyl nodes, W1(-) and W2(+), which arises from a single band inversion. Our calculation (Fig. 3(a)) clearly shows the topological Fermi arc surface state, which connects the direct pair of Weyl nodes. The Fermi arc is found to terminates directly onto the projected Weyl nodes. In addition, we also observe a normal surface state, which avoids the Weyl node and merges into the bulk band continuum. Because the W1 and W2 Weyl nodes have different energies, and because W1 is a type II Weyl cone WT-Weyl (), constant energy maps always have finite Fermi surfaces. Hence, visualizing Fermi arc connectivity in constant energy maps is not straightforward. Instead of a constant energy map, it is possible to use a varying-energy map, i.e. Energy, so that there are no bulk states on this varying-energy map at all points except the Weyl nodes. Figure 3(f) shows the calculated surface and bulk electronic structure on such a varying-energy map in the vicinity of a pair of Weyl nodes. A Fermi arc that connects the pair of Weyl nodes can be clearly seen. We study the effect of surface perturbations. The existence of Weyl nodes and Fermi arcs are guaranteed by the system’s topology whereas the details of the surface states can change under surface perturbations. In order to do so, we change the surface on-site potentials of the system. Physically, the surface potentials can be changed by surface deposition or applying an electric field on the surface. Figure 3(b) shows the surface band structure with the surface on-site energy increased by 0.02 eV. We find that the normal surface state moves further away from the Weyl nodes whereas the topological Fermi arc does not change significantly. Figure 3(c) shows the surface band structure with the surface on-site energy decreased by 0.11 eV. The normal surface states disappear. The Fermi arc also changes significantly. Instead of directly connecting the two Weyl nodes in Figs. 3(a and b), a surface state stems from each Weyl node and disperses outside the window. We note that the surface states in Fig. 3(c) are still topological and are still arcs because they terminate directly onto the projected Weyl nodes. We illustrate the two types of Fermi arc connectivity in Figs. 3(d and e). Figure 3(d) corresponds to the case in Figs. 3(a and b), where a Fermi arc directly connects the pair of Weyl nodes in a quadrant. Figure 3(e) corresponds to Fig. 2(d). In this case, Fermi arcs connect Weyl nodes in two different quadrants across the line. The nontrivial topology in a Weyl semimetal requires that there must be Fermi arc(s) terminating onto each projected Weyl node with a nonzero projected chiral charge and that the number of Fermi arcs associated with a projected Weyl node must equal its projected chiral charge. On the other hand, the pattern of connectivity can vary depending on details of the surface. 
The observed different Fermi arc connectivity patterns as a function of surface on-site potential provide an explicit example of both the constraints imposed and the freedoms allowed to the Fermi arc electronic structure by the nontrivial topology in a Weyl semimetal. We further study the surface states via bulk boundary correspondence. We note that except Figs. 3(b-e), all other figures corresponds to the case without additional changes to the onsite energy. Specifically, we choose a closed loop in () space as shown in Fig. 3(j). As we mentioned above, the conduction and valence bands only touch at the eight Weyl nodes. Thus as long as the loop chosen does not go through these Weyl nodes, there is a continuous bulk energy gap along the loop. In the bulk BZ, the chosen rectangular loop corresponds to a rectangular pipe along the direction. Then topological band theory requires that the net chiral charge of the Weyl nodes that are enclosed by the pipe equals the Chern number on this manifold, which further equals the net number of chiral edge modes along the loop. For example, the rectangular loop in Fig. 3(j) encloses a W1(-) and a W2(+), which leads to a net chiral charge zero. The energy dispersion along this rectangular loop is shown in Fig. 3(g). It can be seen that the bands are fully gapped without any surface states along . Along , there are two surface states, SS1 and SS2, both of which connect the band gap. Interestingly, we note that these two surface bands are counter-propagating although they seem to have the same sign of Fermi velocity. This is because the continuous energy gap is highly “tilted”. If we “tilt” the energy gap back to being horizontal, then it can be clearly seen that the two surface bands are counter-propagating, which means that the net number of chiral edge modes is zero. Similarly, we can choose other loops. For example, we choose another rectangular loop that encloses only the W2(+) Weyl node. Because there are no surface states along the two horizontal edges and the vertical edge to the right, we only need to study the vertical edge to the left, that is the . The enclosed net chiral charge is , which should equal the net number of chiral edge mode along . The band structure along this line is shown in Fig. 3(h). We see that while the surface band SS1 still connects across the band gap, SS2 starts from and ends at the conduction bands. Therefore, SS1 contributes one net chiral edge mode whereas SS2 contributes zero net chiral edge mode. Hence there is one net chiral edge mode along the this rectangular loop. By the same token, we can choose the loop , which does not enclose any Weyl node. Consistently, as shown in Fig. 3(i), along , SS1 does not appear along this line and SS2 does not connect across the band gap. Hence the net number of chiral edge mode is also zero along this rectangular loop. we study the constant energy contours of the surface states. We emphasize that (1) there is a significant energy offset between the W1 and W2 Weyl nodes, and that (2) the W1 Weyl cones are the type II Weyl cone WT-Weyl (), which means that at the Weyl node energy, its constant energy contour consists of an electron and a hole pockets touching at a point, the Weyl node. These two properties are very different from the ideal picture, where all Weyl cones are normal rather than type II and their nodes are all at the same energy. We show below that these two properties make the surface states’ constant energy contours quite different from what one would expect naively. 
Fig. 4(d) shows the calculated constant energy contour within the top half of the surface BZ at energy , which is between the W1 and W2 nodes in energy. We see three bulk pockets. The corresponding schematic is shown in Fig. 4(a). Specifically, we see a big pocket closer to the point, which encloses two W1 Weyl nodes with opposite chiral charges. We also see two separate small pockets closer to the , each of which encloses a W2 Weyl node. As for the surface states, from Figs. 4(d and e) we see a surface state band that connects the two small pockets, each of which encloses a W2 Weyl node. This is quite counter-intuitive because we know that the Fermi arc connects the direct pair of Weyl nodes, namely a W1(-) and a W2(+) or vice versa. We show that there is no discrepancy. Specifically, we show that the surface band seen in the constant energy contours is exactly the Fermi arc that connects the W1(-) and the W2(+) Weyl nodes seen in Fig. 4(c). To do so, we consider the constant energy contours at two different energies, and . According to the energy dispersion (Fig. 4(c)), we see that the big bulk pocket in the constant energy map is electron-like while the two small bulk pockets are hole-like. Thus as we increase the energy from to , the big pocket should expand whereas the two small pockets should shrink, as shown in Fig. 4(b). The surface state band keeps connecting the two small pockets as one changes the energy. This evolution is shown by real calculations in Figs. 4(e and f). The orange line in Fig. 4(b) connects the W1(-) and W2(+) Weyl nodes. At each energy, the surface state band crosses the orange line at a specific point. By picking up the crossing points at different energies, we can reconstruct the Fermi arc that connect the W1(-) and W2(+) Weyl nodes shown in Fig. 4(c). Therefore, from our systematic studies above, we show that the Fermi arc connectivity means the pattern in which the surface state connects the Weyl nodes. This is defined on a varying-energy () map where the chosen map crosses the bulk bands only at the Weyl nodes. If there is no significant energy offset between Weyl nodes and if all Weyl cones are normal rather than type II, then the connectivity can also be seen in a constant energy contour. However, in our case here, one needs to be careful with the simplified ideal picture, that is to study the Fermi arc connectivity from the constant energy contour. Because of the energy offset between the Weyl nodes and because of the existence of type II Weyl cones, how surface bands connect different bulk pockets in a constant energy contour does not straightforwardly show the Fermi arc connectivity. Finally, we discuss the tunability of the length of the Fermi arcs as a function of Mo concentration in our MoWTe system. (1) The undoped sample is fully gapped according to our calculations (Fig. 4(g)). (2) A very small Mo concentration () will drive the system to the critical point, where the conduction and valence bands just touch each other Fig. 4(h). The length of the Fermi arc is zero, and hence the system is at the critical point. (3) As one further increases the Mo concentration , the touching point splits into a pair of Weyl nodes with opposite chiralities (Fig. 4(i)). The Weyl nodes are connected by a Fermi arc. A way to gap the system without breaking any symmetry is to annihilate pairs of Weyl nodes with opposite chiralities. In order to do so, one needs to overcome the momentum space separation between the Weyl nodes to bring them together in space. 
For a sample with a given Mo concentration , this may be achieved by applying external hydrostatic pressure. Thus the length of the Fermi arc provides a measure of the system’s topological strength. Such a tunability is not known in the TaAs class of Weyl system. This highlights the uniqueness of our work. Acknowledgements: T.R.C. and H.T.J. are supported by the National Science Council, Academia Sinica, and National TsingHua University, Taiwan. We also thank NCHC, CINC-NTU, and NCTS, Taiwan for technical support. Work at Princeton University were supported by the Gordon and Betty Moore Foundations EPiQS Initiative through Grant GBMF4547 (Hasan). Work at National University of Singapore were supported by the National Research Foundation, Prime Minister’s Office, Singapore under its NRF fellowship (NRF Award No. NRF-NRFF2013-03). The work at Northeastern University was supported by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences grant number DE-FG02-07ER46352, and benefited from Northeastern University’s Advanced Scientific Computation Center (ASCC) and the NERSC Supercomputing Center through DOE grant number DE-AC02-05CH11231. We thank B. A. Bernevig, Chen Fang, and Titus Neupert for discussions. ### Footnotes 1. Corresponding authors (emails): nilnish@gmail.com and mzhasan@princeton.edu ### References 1. H. Weyl, Z. Phys. , 330 (1929). 2. C. Herring, Phys. Rev. , 365 (1937). 3. A. A. Abrikosov, S. D. Beneslavskii, Some Properties of Gapless Semiconductors of the Second Kind. J. Low Temp. Phys. , 141 (1972). 4. G. E. Volovik, The Universe in a Helium Droplet (Oxford University Press, 2003). 5. S. Murakami, New J. Phys. 9, 356 (2007). 6. X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Phys. Rev. B 83, 205101 (2011). 7. K.-Y. Yang, Y.-M. Lu, and Y. Ran, Phys. Rev. B , 075129 (2011). 8. L. Balents, Physics 4, 36 (2011). 9. A. A. Burkov and L. Balents, Phys. Rev. Lett. 107, 127205 (2011). 10. P. Hosur, Phys. Rev. B , 195102 (2012). 11. T. Ojanen, Phys. Rev. B , 245112 (2013). 12. A. C. Potter, I. Kimchi, and A. Vishwanath, Nat. Commun. , 5161 (2014). 13. M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010). 14. X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011). 15. S.-Y. Xu, C. Liu, S. K. Kushwaha, R. Sankar, J. W. Krizan, I. Belopolski, M. Neupane, G. Bian, N. Alidoust, T.-R. Chang et al., Science , 294 (2015). 16. M. Z. Hasan, S.-Y. Xu, and M. Neupane, Topological Insulators, Topological Crystalline Insulators, Topological Kondo Insulators, and Topological Semimetals. in in Topological Insulators: Fundamentals and Perspectives edited by F. Ortmann, S. Roche, S. O. Valenzuela (John Wiley & Sons, 2015). 17. A. K. Geim and K. S. Novoselov, Nature Mater. , 183 (2007). 18. S.-M. Huang, S.-Y. Xu, I. Belopolski, C.-C. Lee, G. Chang, B. Wang, N. Alidoust, G. Bian, M. Neupane, C. Zhang et al., Nature Commun. , 7373 (2015). 19. S.-Y Xu, I. Belopolski, N. Alidoust, M. Neupane, G. Bian, C. Zhang, R. Sankar, G. Chang, Z. Yuan, C.-C. Lee et al., Science , 613 (2015). 20. S.-Y. Xu, N. Alidoust, I. Belopolski, Z. Yuan, G. Bian, T.-R. Chang, H. Zheng, V. N. Strocov, D. S. Sanchez, G. Chang et al., Nat. Phys. doi:10.1038/nphys3437 (2015). 21. H. Weng, C. Fang, Z. Fang, A. Bernevig, and X. Dai, Phys. Rev. X , 011029 (2015). 22. B. Q. Lv, H. M. Weng, B. B. Fu, X. P. Wang, H. Miao, J. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. Chen et al., Phys. Rev. X , 031013 (2015). 23. B. Q. Lv, N. Xu, H. M. Weng, J. Z. Ma, P. Richard, X. C. Huang, L. X. Zhao, G. F. 
Chen, C. E. Matt, F. Bisti et al., Nat. Phys. doi:10.1038/nphys3426 (2015).
24. M. N. Ali, J. Xiong, S. Flynn, J. Tao, Q. D. Gibson, L. M. Schoop, T. Liang, N. Haldolaarachchige, M. Hirschberger, N. P. Ong, and R. J. Cava, Nature, 205 (2014).
25. I. Pletikosić, M. N. Ali, A. V. Fedorov, R. J. Cava, and T. Valla, Phys. Rev. Lett., 216601 (2014).
26. J. Jiang, F. Tang, X. C. Pan, H. M. Liu, X. H. Niu, Y. X. Wang, D. F. Xu, H. F. Yang, B. P. Xie, F. Q. Song et al., http://arxiv.org/abs/1503.01422 (2015).
27. Y. Wu, N. H. Jo, M. Ochi, L. Huang, D. Mou, S. L. Bud'ko, P. C. Canfield, N. Trivedi, R. Arita, and A. Kaminski, http://arxiv.org/abs/1506.03346 (2015).
28. A. A. Soluyanov, D. Gresch, Z. Wang, Q. Wu, M. Troyer, X. Dai, and B. A. Bernevig, http://arxiv.org/abs/1507.01603 (2015).
29. P. E. Blöchl, Phys. Rev. B, 17953 (1994).
30. G. Kresse and D. Joubert, Phys. Rev. B, 1758 (1999).
31. G. Kresse and J. Furthmüller, Computational Materials Science, 15 (1996).
32. J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett., 3865 (1996).
33. B. Brown, The Crystal Structures of WTe$_2$ and High-Temperature MoTe$_2$. Acta Cryst., 268 (1966).
34. N. Marzari and D. Vanderbilt, Phys. Rev. B, 12847 (1997).
35. I. Souza, N. Marzari, and D. Vanderbilt, Phys. Rev. B, 035109 (2001).
36. A. A. Mostofi, J. R. Yates, Y. S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun., 685 (2008).
37. C. Franchini, R. Kováčik, M. Marsman, S. S. Murthy, J. He, C. Ederer, and G. Kresse, J. Phys.: Condens. Matter, 235602 (2012).
38. D. Puotinen, R. E. Newnham, The crystal structure of MoTe$_2$. Acta Cryst., 897 (1961).
39. Y. Qi, P. G. Naumov, M. N. Ali, C. R. Rajamathi, O. Barkalov, Y. Sun, C. Shekhar, S.-C. Wu, V. Süß, M. Schmidt et al., http://arxiv.org/abs/1508.03502 (2015).
40. A. Mar, S. Jobic, J. A. Ibers, J. Am. Chem. Soc., 8963 (1992).
41. Y. Sun, S.-C. Wu, M. N. Ali, C. Felser, and B. Yan, http://arxiv.org/abs/1508.03501 (2015).
auto_math_text
web
11-11-2021 | Regular Paper | Open Access

# Gaze-driven placement of items for proactive visual exploration

Journal: Journal of Visualization

Authors: Shigeo Takahashi, Akane Uchita, Kazuho Watanabe, Masatoshi Arikawa

## 1 Introduction

Item layout significantly influences the ability to attract the visual attention of viewers when advertising and encouraging the extensive use of products. Digital signage technologies successfully help us enhance such visual representation in this era of multimedia communication. Typical examples include digital displays of products on contemporary vending machines (Fig. 1c) and the exhibition of visual art on information walls (Fig. 1d). While this style of visual presentation successfully facilitates improved placement of items for enhanced impact, the strategy for arranging the associated digital contents still needs to be further investigated. Optimal placements of items have been explored for a long time, even in our daily life. For example, the layout of items significantly impacts sales results in retail stores, such as grocery shops and supermarkets. In this case, we need to attract customers with the layout of products by considering general co-purchasing rules. This is because we usually do not have any prior knowledge about the specific preferences of the respective customers nor do we instantly change the placements of products themselves even when we know their favorites. The layout strategy is different in online shops, in which the layouts of products change according to the preferences of the customers. This is accomplished by accumulating co-purchasing data as a product-customer matrix and extracting characteristic co-purchasing patterns to meet the requirements of each customer. Employing digital display devices poses another unique technical problem. This is because we can freely change the placement of items without sufficient knowledge about the preferred choices of each customer. Thus, we need to provide users with an opportunity to explore their favorite items by suggesting related items even when they do not explicitly identify which specific items are the best. This visual exploration should be sufficiently proactive in that the item placement faithfully simulates the ongoing search context of an individual viewer. In this case, the number of items prepared for exhibition usually exceeds the number of cells in the digital display. This means that we have to carefully select a set of visible items by replacing unwanted items with possibly preferred ones according to the search context. This consideration motivated us to pursue a novel approach for dynamically rearranging item layouts by interacting with the customers through the display. In this study, we employ an eye-tracking interface to explore the personal preferences of viewers by respecting the underlying context in the search for items of interest. The eye-tracking technology helps us infer the preferences of the viewers by analyzing how they focus their visual attention on the layout of items. In practice, eye-tracking input devices have become popular for controlling multimedia contents, including video games, interactive controllers, and communication tools.
This type of visual interaction is an important trigger to dynamically update the digital display of items by respecting the underlying search context of the viewer. Besides, we also try to replace current sets of items with favorite ones specific to the viewer by predicting the search path in the configuration space of the items. In this paper, we advance our previous work (Takahashi et al. 2020) by conducting extensive user studies to adequately justify our design criteria in the placement of items. In addition to this novel contribution, we implement sophisticated association rules among items to dynamically update their placement by incorporating text mining techniques based on matrix factorization. In this paper, we first introduce the static placement of items as an optimization problem and important aesthetic criteria as constraints. We then describe the dynamic version so that we can rearrange the items according to the search context of viewers. To guide dynamic item arrangement, we employ an eye-tracker that takes spatiotemporal eye gaze distribution as input. We also construct a context map to retain the association rules among items by applying topic-based mining techniques to annotated texts associated with the items. This allows us to instantly infer the following favorite items from the most focused one by investigating its spatial neighbors on the context map. We demonstrate the feasibility of the proposed approach by simulating a virtual vending system and a digital information wall. We conduct extensive user studies to evaluate the validity of the respective layout constraints incorporated in our approach. This includes an investigation of design criteria for both static and dynamic placement through online questionnaires and eye-tracking experiments. We also compare association rules among items obtained using our context map and those reproduced by conventional recommendation systems based on co-purchasing data. In summary, our technical contribution lies in developing a new scheme for proactively exploring favorite items among possible choices through digital signage technology. More specifically, our advancements can be listed as follows: • Formulating the optimal static placement of items quantitatively as a constrained optimization problem • Developing gaze-driven interaction to improve the dynamic placement of items by respecting the context in the search for favorites • Introducing matrix factorization into topic-based text mining to enhance the quality of association rules among items • Conducting user studies to justify the proposed design criteria for the placement of items The remainder of this paper is organized as follows. We first provide a survey of previous work related to our approach in Sect.  2. We then formulate the static placement of items as a constrained optimization problem in Sect.  3. We detail how we can extend our formulation for static placement to the dynamic version by taking as input spatiotemporal eye gaze distribution. We also introduce text mining techniques to construct a context map so that we can infer a set of association rules among items by respecting the underlying search context. After demonstrating our experiment results, we justify the design criteria through user studies and discuss the possible limitations and future extensions of our approach in Sect.  6. Finally, we conclude this paper in Sect.  7. 
## 2 Related work We survey previous studies related to ours by categorizing them into pattern layout optimization, gaze-driven interaction, and machine learning techniques. ### 2.1 Pattern layout optimization Designing aesthetic layouts of visual elements poses a vital problem in enhancing the readability of visual information. This is true for visual exhibitions based on digital signage technologies, including interactive guide maps (Li et al. 2020) and large information walls (Chen et al. 2021). Color map design has been another essential target for optimization (Wang et al. 2021) to improve perceptual experiences in visual interaction. On the other hand, to design schematic pattern layouts, mathematical programming techniques are advantageous; they effectively facilitate the clear distinction between soft and hard constraints. For example, we can employ linear programming (LP) to draw bundled parallel coordinate plots (Zhou et al. 2008), consistently distorted 2D residential maps (Maruyama et al. 2019), and 3D urban maps free of route occlusions (Hirono et al. 2013). We can also apply integer programming (IP) to the design of orthogonal networks (Eiglsperger et al. 2000; Yoghourdjian et al. 2016), schematic metro maps (Nöllenburg and Wolff 2011; Wu et al. 2013), and Sankey diagrams (Zarate et al. 2018). The combination of linear and quadratic programming has been successfully introduced to eliminate unwanted conflicts among visual elements (Meulemans 2019). Constrained optimization techniques were also employed to align visual elements in a grid layout. Gomez-Nieto et al. ( 2013) formulated the problem of arranging thumbnail video snapshots on a grid as an IP problem and later combined it with multidimensional projection techniques (Gomez-Nieto et al. 2016). Strong and Gong ( 2014) invented a Self-Sorting Map for aligning multimedia elements by referring to their mutual dissimilarity. Liu et al. ( 2018) developed a constrained multidimensional scaling (MDS) solver to faithfully retain the underlying grid in the placement of items. Pan et al. ( 2019) introduced an approach for generating a tree-based layout of items by adaptively partitioning the screen space. ### 2.2 Gaze-driven interaction Eye-tracker equipment has been commonly employed to evaluate the quality of visual information and its associated visual interfaces (Andrienko et al. 2012). This research trend inspired the development of gaze-based interactive systems for the analysis of the temporal transition of areas-of-interest (AOIs) as trees (Tsang et al. 2010), river styles (Burch et al. 2013), space-time 3D cubes (Kurzhals et al. 2014), circular transition diagrams (Blascheck et al. 2013), and video editing operations (Jain et al. 2015). Burch et al. ( 2020) recently presented an exciting concept of an attention map, in which a set of visually attended regions are cropped and rearranged in a tag-cloud style. Readers can refer to a detailed survey on visualization of eye-tracking data (Blascheck et al. 2014). Recently, eye-trackers have become the standard interface for interacting with visual displays due to their high cost-effectiveness. Video games enabled by eye-tracking technology (Gomez and Gellersen 2019; Smith and Graham 2006; Sundstedt 2010) are a typical example. Gaze-driven interaction has also been introduced to enhance the understanding of data for possible use in information visualization (Streit et al. 2009), graph visualization (Okoe et al. 2014), visual analytics (Silva et al. 
2019), and annotated map visualization (Göbel et al. 2018). Eye-tracking technology is also beneficial for exploring virtual 3D space, including urban exploration (Baldauf et al. 2010) and fly-through travels (Qian and Teather 2018). Duchowski ( 2018) provides a survey of state-of-the-art techniques for gaze-based interactions. Although several user studies have explored the potential design of the gaze-adaptive interface for visualization (Raptis et al. 2017; Steichen et al. 2014a, b), practical system design for the support of digital signage technology remains to be pursued. Our challenge may have some connection with a gaze-driven interface for optimizing the layout of floor plans (Alghofaili et al. 2019), as this technique employs a relatively simple energy-minimization approach. Furthermore, respecting the underlying context of searching for superior objects is crucial, especially when presenting animated visual displays (Li et al. 2019). To the best of our knowledge, this work is the first to design a gaze-adaptive interface that respects the ongoing context in the search for items of interest. ### 2.3 Machine learning techniques Machine learning techniques often give us valuable insights into the relationships among target items by analyzing available data. Among them, Topic Modeling enables us to effectively discover important topics shared by a corpus of documents and often plays a crucial role in visualizing texts in terms of such extracted topics. Lee et al. ( 2012) introduced the topic analysis called latent Dirichlet allocation (LDA) (Blei et al. 2003) to develop an interactive system for visually understanding clusters of documents. In place of LDA, Kim et al. ( 2015) applied non-negative matrix factorization (NMF) (Kim and Park 2011; Lee and Seung 1999) to larger sets of documents in the context of interactive topic analysis and further enhanced the scalability of the system through the hierarchical representation of topics (Kim et al. 2020). A scheme called TopicLens (Kim et al. 2017) offered a lens interface for dynamically exploring document data, where t-distributed stochastic neighbor embedding (t-SNE) (van der Maaten and Hinton 2008) was employed as a 2D embedding algorithm together with NMF. El-Assady et al. ( 2018) addressed the problem of analyzing the thematic composition of document corpora and then enhanced it to implement an incremental hierarchical algorithm for the topic modeling process (El-Assady et al. 2019). Matrix factorization techniques also serve as a basis for implementing recommendation systems (Koren et al. 2009) if we have easy access to extensive training datasets, such as customer-product matrices. In this study, we expect that our approach will effectively support the collection of such data while maximally respecting the search context of the users at the same time. This research compares topic analysis techniques based on LDA and NMF and employs the NMF-based approach as our text mining technique (Sect.  5). ## 3 Optimizing static placement In this study, we assume that items are arranged in a grid on a digital display. We formulate this layout as an IP problem in which a binary variable represents the existence of each item in a specific cell on a grid. ### 3.1 Basic formulation Suppose that we place items on a $$P \times Q$$ grid of cells where $$i (= 1,\ldots ,P)$$ and $$j (= 1,\ldots ,Q)$$ correspond to the row and column IDs in a grid, respectively. 
Here, we introduce a binary variable, $$X^{i,j}_k$$, to represent the existence of the k-th item ( $$k = 1,\ldots ,N$$) in the cell of the i-th row and j-th column on a grid. This implies that $$X^{i,j}_k = 1$$ if the ( ij)-cell contains the k-th item; otherwise, $$X^{i,j}_k = 0$$. In this setup, we need to prepare $$P \times Q \times N$$ binary variables, $$X^{i,j}_k (i = 1, \ldots , P; j = 1, \ldots , Q; k = 1, \ldots , N)$$, to completely describe the placement of items. Figure  2 shows a $$P = 2 \times Q = 3$$ grid of cells as our running example, where we place numbers ranging from 1 to $$N = 9$$. For example, we can mathematically express the placement of numbers in Fig.  2 as $$X^{1,1}_7 = 1, X^{1,2}_1 = 1, X^{1,3}_8 = 1, X^{2,1}_3 = 1, X^{2,2}_9 = 1, X^{2,3}_5 = 1$$, and the other binary variables, $$X^{i,j}_k (i = 1,2; j=1,2,3; k=1,\ldots ,9)$$, equal 0. Here, we can impose several known constraints as equations and inequalities. First of all, a trivial constraint can be given by \begin{aligned} \sum _{k=1}^N X^{i,j}_k = 1 \quad (i = 1,\ldots ,P; j=1,\ldots ,Q), \end{aligned} for each cell because every grid cell retains only one single item out of N choices. We can introduce additional constraints as \begin{aligned} L_k \le \sum _{i=1}^P \sum _{j=1}^Q X^{i,j}_k \le U_k \nonumber \end{aligned} if we bind the number of appearances for the k-th item within $$[L_k, U_k]$$. For example, we can set $$\sum _{i=1}^P \sum _{j=1}^Q X^{i,j}_k = 1$$ if we place the k-th item only once on the $$P \times Q$$ grid. ### 3.2 Design criteria for static placement In addition to the basic formulation described above, we incorporate several criteria for designing the static placement of items as follows: (S1) Aligning items of the same size in a specific row (S2) Placing two items of the same category next to each other (S3) Arranging the same items in a sub-matrix These criteria were inspired by understanding the underlying rules in placing items in actual vending machines and information walls through observation. We will explain how we can implement each criterion in the remainder of this section. We will report the validity of these design criteria obtained from user studies later in this paper. Recall that, as described earlier, we have more items to exhibit than the number of available cells in the display. We determine which items should be visible in the optimization process (See Sect.  4.2). (S1) Aligning items of the same size in a specific row The criterion (S1) helps us maximize space efficiency in the layout of items by arranging them compactly according to their sizes and has commonly been employed in the display interface in contemporary vending machines. This design rule can be accomplished by intentionally restricting the available cells for a specific item. For example, in the case of Fig.  2, we can fix 7 at (1, 1)-cell by setting $$X^{1,1}_7 = 1$$ and $$X^{1,1}_k = 0$$ for ( $$k = 1,\ldots ,6,8,9$$). Conversely, imposing the condition $$X^{1,1}_7 = 0$$ lets us explicitly exclude 7 from (1, 1)-cell. This naturally inspires us to specifically reject the k-th item from the i-th row of the $$P \times Q$$ grid by introducing multiple constraints, i.e., $$X^{i,j}_k = 0$$ for $$j=1,\ldots ,Q$$. This formulation can be applied when we need to align items of a specific size in a column or a row. Thus, if we know the sizes of items beforehand, we can intentionally align items of the same size in a specific row or column. 
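As a concrete illustration of the formulation above, the sketch below sets up the binary variables $$X^{i,j}_k$$ with an off-the-shelf MILP modeler (PuLP is used here purely as an example; the paper does not name its solver), imposes the one-item-per-cell constraint and the appearance bounds $$[L_k, U_k]$$, and restricts one item to the bottom row in the spirit of criterion (S1). The grid size, item count, and priority values are made up for the demo.

```python
import pulp

P, Q, N = 2, 3, 9                            # grid rows, columns, number of items (toy values)
alpha = [0.1 * (k + 1) for k in range(N)]    # made-up priority values, one per item
L = [0] * N                                  # lower bounds on appearances
U = [2] * N                                  # upper bounds on appearances

prob = pulp.LpProblem("static_item_placement", pulp.LpMaximize)
X = {(i, j, k): pulp.LpVariable(f"X_{i}_{j}_{k}", cat="Binary")
     for i in range(P) for j in range(Q) for k in range(N)}

# Objective: favor items with high priority values (simplified form of Eq. (3)).
prob += pulp.lpSum(alpha[k] * X[i, j, k]
                   for i in range(P) for j in range(Q) for k in range(N))

# Every cell holds exactly one item.
for i in range(P):
    for j in range(Q):
        prob += pulp.lpSum(X[i, j, k] for k in range(N)) == 1

# Appearance bounds: L_k <= sum_ij X_ijk <= U_k.
for k in range(N):
    total_k = pulp.lpSum(X[i, j, k] for i in range(P) for j in range(Q))
    prob += total_k >= L[k]
    prob += total_k <= U[k]

# (S1)-style restriction: forbid item 0 everywhere except the bottom row.
for i in range(P - 1):
    for j in range(Q):
        prob += X[i, j, 0] == 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
layout = [[next(k for k in range(N) if X[i, j, k].value() > 0.5) for j in range(Q)]
          for i in range(P)]
print(layout)
```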
Figure  3 presents such an example, where we successfully restrict small cans to appear in the bottom row only by rejecting cans of other sizes. (S2) Placing two items of the same category next to each other The layout rule (S2) plays an essential role in promoting users’ favorite items. This is because they are likely to identify what kinds of items are present in the layout at first glance and identify their most preferred items in the neighborhood of relevant ones. This design principle also inspires them to explore their favorites even when they have not yet decided what they really want. We often respect this convention for arranging items to intentionally draw more attention from viewers. Examples include arrangements of products on supermarket shelves, where we are instructed to place items in the same category next to each other. Let us explain our formulation with the $$P \times Q$$ grid case, where we intended to place the two numbers k and l so that they are horizontally adjacent to each other. One simple case is to place k-th and l-th items in ( ij)- and $$(i,j+1)$$-cells, respectively, which results in equation $$X^{i,j}_k + X^{i,j+1}_l = 2$$. We can rewrite this constraint by introducing an additional binary variable, $$\chi ^{i,j}_{k,l}$$, as: \begin{aligned} 2 - G(1 - \chi ^{i,j}_{k,l}) \le X^{i,j}_k + X^{i,j+1}_l \le 2 + G(1 - \chi ^{i,j}_{k,l}), \end{aligned} (1) where $$G$$ represents a large constant value and is set to be 128 by default in our implementation. This condition holds if $$\chi ^{i,j}_{k,l} = 1$$. Of course, we can exchange the positions of k and l. We can implement this by introducing another binary variable, $$\chi ^{i,j}_{l,k}$$, and imposing a similar condition as Eq. ( 1). If we do not care about the order of the two items in ( ij)- and $$(i,j+1)$$-cells, we can write the condition as $$\chi ^{i,j}_{k,l}+\chi ^{i,j}_{l,k}=1$$. Since our ultimate goal is to place the two numbers next to each other in a row, we can write the condition by enumerating all possible cases in the $$P \times Q$$ grid, as follows: \begin{aligned} \sum _{i=1}^P \sum _{j=1}^{Q-1} ( \chi ^{i,j}_{k,l} + \chi ^{i,j}_{l,k} ) = 1. \nonumber \end{aligned} In our implementation, we encoded this type of requirement as a soft constraint by summing up $$w\bigl ( \sum _{i=1}^P \sum _{j=1}^{Q-1} ( \chi ^{i,j}_{k,l} + \chi ^{i,j}_{l,k} ) \bigr )$$ to the overall objective function to be maximized. $$w$$ indicates a specific weight value for this constraint and can be adjusted according to the degree of requirement. A similar formulation can be employed to align two items next to each other in a column. Figure  3a shows an example in which drinks of the same category are placed next to each other. In practice, we can find juice bottles, black tea bottles, and small coffee cans next to each other in a row, while oolong tea bottles and cans are aligned vertically adjacent to each other. (S3) Arranging the same items in a sub-matrix The placement guideline (S3) successfully prevents the user from being distracted by identical items repeatedly scattered separately over the display. It is also better to place important items in a block to draw the user’s attention using a proposed layout strategy that is aesthetically pleasing. The most common scenario associated with this guideline is to line up the same items in multiple cells in a group. Suppose that we want to line up the k-th item m times in a row in the $$P \times Q$$ grid. 
Here, we can identify a set of possible sequences of m cells as ( ij)-, $$\cdots$$, and $$(i,j+m-1)$$-cells where $$i = 1,\ldots ,P$$ and $$j = 1,\ldots ,Q-m+1$$. This observation leads us to the formulation for the sequence of cells as \begin{aligned} m - G(1 - \chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{m}\;\text{ times }}}) \le \sum _{\delta=1}^m X^{i,j+{\delta-1}}_k \le m + G(1 - \chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{m}\;\text{ times }}}), \nonumber \end{aligned} again by introducing a new binary variable, $$\chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{m}\;\text{ times }}}$$. In our formulation, we simultaneously impose a hard constraint to prohibit the sequence of the k-th items from splitting into multiple separate groups by imposing the following constraints: \begin{aligned} \sum _{i=1}^{P} \sum _{j=1}^Q X^{i,j}_k = m \quad \text{ and } \quad \sum _{i=1}^P \sum _{j=1}^{Q-m+1} \chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{m}\;\text{ times }}} = 1. \nonumber \end{aligned} Fig.  3b exhibits an example of three mineral bottles lined up in a row. This can be easily extended to arrange the same item in an $$n \times m$$ sub-matrix in the $$P \times Q$$ grid by introducing the constraints \begin{aligned} \sum _{i=1}^{P} \sum _{j=1}^Q X^{i,j}_k = n \times m \quad \text{ and } \quad \sum _{i=1}^{P-n+1} \sum _{j=1}^{Q-m+1} \chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{n \times m}\;\text{ times }}} = 1, \nonumber \end{aligned} where \begin{aligned} n \times m - G(1 - \chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{n \times m}\;\text{ times }}}) \le \sum _{\gamma=1}^n \sum _{\delta=1}^m X^{i+\gamma-1,j+{\delta-1}}_k \le n \times m + G(1 - \chi ^{i,j}_{\underbrace{k,\ldots ,k}_{{n \times m}\;\text{ times }}}). \end{aligned} (2) See Fig.  7 for placements of items in sub-matrix forms, where each block of images is replaced with a larger image. ## 4 Gaze-driven dynamic placement Improving the static placement of items is essential to draw attention to specific items, especially for print media, such as newspapers and magazines. However, contemporary digital signage technologies promote superior demonstration of items by dynamically updating the selection and arrangement of items on display devices. This is crucial because we need to guide item selection for users by customizing item placement according to the intermediate context of searching for their favorite items. In our research, we employ an eye-tracking device to understand the current focus on the items by analyzing the spatiotemporal distribution of the visual attention of viewers. We then adaptively rearrange the layout of items to maximally respect the ongoing interest in searching for their preferred items. In this section, we explain a technique for retrieving the temporal change in the spatial distribution of visual attention on the display and updating the associated placement of items accordingly. In the next section, we will detail our approach for replacing visible items with those hidden behind the display as well as rearranging them by referring to the ongoing search context. ### 4.1 Computing the distribution of visual attention An eye-tracker assists our investigation of the spatiotemporal movement of eye gaze points to understand how viewers focus on specific areas of interest in a scene. In our approach, we make maximum use of the eye-tracker for identifying which subset of items from the visible item set draws the most visual attention. This scheme allows us to improve the placement of items based on the distribution of such gaze points. 
In practical scenarios, we compute the spatiotemporal distribution of visual attention by convolving each gaze point in the sequence with a Gaussian weighting kernel (Daae Lampe and Hauser 2011). We often visualize the spatial distribution of visual attention as a colored map called a heatmap, in which the color changes from blue to green to red as the degree of attention increases. Figure  1a shows the placement of drinks with the overlaid heatmap, where cola bottles around the top right corner attracted the most visual attention of the viewer. Moreover, we visualize the spatiotemporal trajectory of the eye gazes as gaze plots in the figure, where the gaze point is represented as larger if it was a more recent one. In our heatmap computation, we incorporate gaze points recorded within the last two seconds, and the Gaussian weight is scaled down linearly in terms of the time from the present. ### 4.2 Updating the priority values of items In our problem setup, our challenge is to replace the items on display and rearrange them according to the preferences of viewers by analyzing their ongoing context while searching for their favorites. This means that we want to maximize their satisfaction by presenting their favorite items on display and hiding unwanted ones through dynamic rearrangement. This is accomplished by maximizing the objective function that includes the weighted sum of binary variables assigned to the following items \begin{aligned} \sum _{k=1}^N \alpha _k (\sum _{i=1}^P \sum _{j=1}^Q X^{i,j}_k ), \end{aligned} (3) where $$\alpha _k \, (k = 1,\ldots ,N)$$ is the priority value of the k-th item. Here, we assume that each priority value $$\alpha _k$$ is normalized into [0, 1]. We can increase Eq. ( 3) as we accommodate more items associated with higher priority values in the grid. This implies that items having low priority values are likely to be excluded from the current list of candidate items for selection and thus become invisible. Specifically, the number of cells dominated by the k-th item depends on the choice of its priority value $$\alpha _k$$ and the limits for the number of its appearances $$[L_k, U_k]$$ mentioned in Sect.  3.1. In our scenario, we expect that items that are initially hidden behind the display will replace visible items when the corresponding priority values become higher. Thus, our task here is to formulate a scheme for plausibly updating the priority values $$\alpha _k \, (k= 1,\ldots ,N)$$ that reflect the individual preferences of the items. For this purpose, we update the priority value of each item by calculating the integral of the heatmap within the area of each cell in the grid. The integrated values are accumulated for each item if the corresponding item dominates multiple cells. Suppose that we obtain the percentage of the entire visual attention for the k-th item as $$r_k$$. In our implementation, we replace $$r_k$$ with an enhanced version, $$r^{\prime }_k$$, by applying a winner-takes-all enhancement to the distribution of the percentage values. This is biologically inspired by the concept of a saliency map (Itti et al. 1998; Koch and Ullman 1987), since we are likely to focus on a unique region of interest that pops up the most from the scene. Finally, we linearly normalize $$r^{\prime }_k \, (k=1,\ldots ,N)$$ so that the maximum value of $$r^{\prime }_k \, (k=1,\ldots ,N)$$ becomes 1.0. These normalized rates of visual attention guide us to find the updated priority values for $$\alpha _k \, (k = 1,\ldots ,N)$$. 
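The attention-to-priority pipeline of Sects. 4.1 and 4.2 can be summarized in a few lines of code: recent gaze points are splatted with a Gaussian kernel whose weight decays linearly with age, the resulting heatmap is integrated over each grid cell, and the per-item shares are sharpened and normalized into priority values $$\alpha_k$$. The kernel width, the two-second window, and the exponential sharpening below are illustrative choices; the paper's winner-takes-all enhancement is not specified in closed form, so the sharpening step is only a stand-in.

```python
import numpy as np

def gaze_heatmap(gaze_points, shape, sigma=25.0, window=2.0, now=None):
    """Accumulate a heatmap from (x, y, t) gaze samples recorded within `window` seconds.

    Each sample contributes a Gaussian bump whose weight shrinks linearly with its age.
    """
    H, W = shape
    now = now if now is not None else max(t for _, _, t in gaze_points)
    ys, xs = np.mgrid[0:H, 0:W]
    heat = np.zeros(shape)
    for x, y, t in gaze_points:
        age = now - t
        if 0.0 <= age <= window:
            weight = 1.0 - age / window
            heat += weight * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat

def item_priorities(heat, cells_of_item, beta=8.0):
    """Integrate the heatmap per item, sharpen the shares, and normalize to a maximum of 1."""
    raw = {k: sum(heat[y0:y1, x0:x1].sum() for (y0, y1, x0, x1) in cells)
           for k, cells in cells_of_item.items()}
    total = sum(raw.values()) or 1.0
    share = {k: v / total for k, v in raw.items()}
    sharpened = {k: np.exp(beta * s) - 1.0 for k, s in share.items()}  # illustrative enhancement
    peak = max(sharpened.values()) or 1.0
    return {k: v / peak for k, v in sharpened.items()}

# Toy usage: two items, each occupying one 100x100-pixel cell of a 200x200 display.
points = [(50, 50, 0.2), (60, 55, 0.9), (150, 150, 1.0)]
heat = gaze_heatmap(points, (200, 200))
cells = {0: [(0, 100, 0, 100)], 1: [(100, 200, 100, 200)]}
print(item_priorities(heat, cells))
```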
### 4.3 Design criteria for dynamic placement We employ the following criteria for controlling the dynamic placement of items. (D1) Fixing primarily focused items to their corresponding cells (D2) Respecting the most focused item to prepare the subsequent placement These two criteria serve as the basis for further elaborating the dynamic updating of item selection and placement, which can be detailed below. (D1) Fixing primarily focused items to their corresponding cells As the first design criteria for dynamic placement, we fix the positions of items that attract more attention to prevent viewers from being surprised to find them suddenly jumping out of their field of view. This is implemented by adding temporary constraints to fix the items at the corresponding cells when computing the subsequent placement. To select such a set of focused items, we first sum up the integrals of the heatmap within the cells associated with the items. We then identify each of them as a focused item if the corresponding sum is more than the predefined threshold. In our implementation, one to three different kinds of items are identified as specifically focused items. This design specification allows viewers to freely focus on their favorite items without being disturbed by other items that will potentially be replaced later. (D2) Respecting the most focused item to prepare the subsequent placement Of course, we need to respect how the viewers pay visual attention to the items on display and maximally consider their choices for preparing the following layout. As described earlier, we compute the degree of visual attention for each item by summing up the heatmap values over the corresponding cells, and we identify the most focused item. By adjusting the priority value associated with the most focused item, we can present this item together with the closely related ones in the subsequent display for the convenience of the viewers. One may feel that handling the most popular item as a particular one could cause an unexpected problem, especially when the first and second top items attract an almost equivalent amount of attention. Nonetheless, such competing items are often likely to be close to each other in the grid placement. This is due to the characteristic in the spatial distribution of the heatmap within a relatively short period. Thus, such items are often in the same category due to the design criterion (S2) employed in the static placement. This indicates that selecting the most attractive item or others close to it does not significantly affect the subsequent placements of items. A major exception will be discussed in Sect.  6.3. ## 5 Context-aware updates of items Our formulation for updating the priority values does not explicitly consider items hidden behind the display. This inspires us to increase the priority values of such invisible items so that they can replace visible ones that are not of interest. For this reason, we want to adequately understand the ongoing context in which the viewers try to find their favorite items. Our solution here is to construct a context map that retains plausible association rules among items in diagram style. This means that the map represents pairwise relationships between the items as a spatial layout of the corresponding points on a 2D diagram. The association rules enable us to instantly discern the preferred items to be exhibited next in the display from the most focused items at present. 
This becomes possible by retrieving the nearest neighbors within the vicinity of the item currently in focus on the map. Successively following the association rules helps us adaptively adjust the priority values of the items according to the history of the most focused item, which is identified by faithfully interpreting the distribution of the viewers’ visual attention. ### 5.1 Topic-based mining of annotated texts It is often the case that the automatic construction of context maps requires considerable effort to reproduce the authentic relationships among items. Undoubtedly, we can take advantage of machine learning techniques if we have easy access to extensive training datasets that reveal how specific items are selected simultaneously. However, we need to provide alternatives because such training data are not immediately available when presenting items to newcomers through display devices. Our choice here is to employ text mining techniques that infer meaningful relationships between pairs of items by taking as input associated annotated texts. It is often effortless to collect explanatory texts about the individual items, such as from internet resources. In particular, we introduce topic-based text analysis to efficiently reproduce association rules among them. Topic modeling (Blei 2012) is a technique for extracting important topics and has been developed in the area of natural language processing. We can characterize the explanatory description for each item as the linear sum of such representative topics, each of which constitutes a weighted sum of basic terms. Suppose that the i-th topic is represented as follows: \begin{aligned} \tau ^{i} = \beta ^{i}_{1} \omega ^{i}_{1} + \beta ^{i}_{2} \omega ^{i}_{2} + \ldots , \qquad (i = 1,\ldots ,R), \end{aligned} where $$\{\omega ^{i}_r\} \, (r = 1,2,\ldots )$$ are the basic terms that constitute the i-th topic, $$\{\beta ^{i}_r\}$$ are the associated weight coefficients, and $$R$$ is the total number of topics. Here, $$R$$ is optimized in a pre-process by finding the best number that maximizes a coherence score during the topic analysis. We specifically focus on nouns as our target terms and excluded other categories of terms such as verbs, adjectives, adverbs, and prepositions. This effectively allows us to extract relationships among items since the nouns refer to names and identities commonly shared by items of a specific category. Once we can extract the specific number of topics $$\{\tau ^{i}\} \, (i = 1,\ldots ,R)$$, we transform the annotated text associated with the k-item as an $$R$$-dimensional feature vector, consisting of $$R$$ coefficients assigned to the respective topics, as \begin{aligned} (\lambda ^{1}_k, \ldots , \lambda ^{R}_k), \qquad (k = 1, \ldots , N). \nonumber \end{aligned} This implies that we represent each item as a feature vector consisting of such coefficients, which allows us to measure the similarity between every pair of items by computing the index of dissimilarity between the corresponding feature vectors. In our approach, we project an $$R$$-dimensional feature vector of each explanatory text onto the 2D space using t-SNE (van der Maaten and Hinton 2008) to construct the context map. The map actually helps us search for a specific number of items related to the one most focused on by referring to their spatial positions in its 2D domain. 
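Such a context map can be prototyped directly with scikit-learn. The sketch below is an illustration rather than the authors' pipeline: it uses TfidfVectorizer, NMF and TSNE (the concrete choices discussed next in this subsection), omits the noun-only filtering of the annotated texts, and the topic count R and t-SNE perplexity are placeholder values.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.manifold import TSNE

def build_context_map(item_texts, R=8, seed=0):
    """item_texts: one annotated description per item.

    Returns the R-dimensional topic coefficients (lambda^1_k, ..., lambda^R_k)
    of each item and their 2D context-map coordinates.
    """
    tfidf = TfidfVectorizer(stop_words="english")  # TF-IDF weighted term matrix
    X = tfidf.fit_transform(item_texts)
    W = NMF(n_components=R, init="nndsvda", random_state=seed).fit_transform(X)
    xy = TSNE(n_components=2, perplexity=5, random_state=seed).fit_transform(W)
    return W, xy

def neighbors_on_map(xy, focused, k=3):
    """Indices of the k items closest to the focused one on the 2D map."""
    d = np.linalg.norm(xy - xy[focused], axis=1)
    d[focused] = np.inf
    return np.argsort(d)[:k]
```

The second function is the retrieval step used later for the design criterion (D4): neighbors are looked up on the projected 2D map rather than in the original high-dimensional feature space.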
In our approach, we employ this 2D projected map not only for visualization purposes but also to retrieve items that are closely relevant to the current most focused item. Refer to Sect. 5.2 for further details.

LDA (Blei et al. 2003) is one of the most widely used techniques in topic modeling. We have previously employed LDA for that purpose (Takahashi et al. 2020); however, as pointed out by Kim et al. (2015), LDA is not scalable enough to handle even medium-sized collections of texts and often requires relatively long computation times. In this study, instead of LDA, we employed NMF as our topic modeling method by referring to the matrix spanned by occurrence histograms of the representative words for the respective documents. We employed term frequency–inverse document frequency (TF-IDF) (Robertson 2004) as a weighting factor to transform the word occurrence matrices before factorizing them in order to properly consider the importance of the respective words in the overall document set. We also found that the NMF-based topic extraction was likely to discriminate between drink categories as faithfully as expected. Figure 4 shows such a comparison between the 2D context maps obtained using LDA and NMF.

### 5.2 Design criteria for context-aware priority updates

As described earlier, our dynamic placement was implemented by adjusting the priority values $$\alpha _k \, (0< \alpha _k < 1, k = 1,\ldots ,N)$$ associated with the given N items. The context map provides us with association rules among the items so that we can select the items that will attract the viewers' attention next. In connection with the association rules based on the context map, we introduce two additional criteria for the design of dynamic placement:

(D3) Proactively replacing visible items with hidden ones

(D4) Prioritizing items closely associated with the one currently in focus

These design criteria permit us to replace the current priority value of the k-th item with the new value after it is normalized. We now describe our implementation of these design criteria for presenting preferred items to viewers.

(D3) Proactively replacing visible items with hidden ones

In our approach, we proactively replace unwanted visible items with those hidden behind the display, because the hidden items may attract more of the viewers' visual attention than the unwanted ones. Another reason is that we want to make the best use of time by presenting as many items as possible within a fixed period. Thus, we update the priority values of items currently visible on the display according to the degree of interest they attract. We also increase the priority values of invisible items without exception to intentionally replace unwanted visible ones in the next update. This strategy will proactively replace current items with invisible ones that can potentially attract more visual attention. However, we still run the potential risk of missing some items of interest if the corresponding priority values are kept relatively low during a specific period of time. This may be inevitable even though we try to maximally bring hidden items into the display by raising their priority. We will discuss this issue later in Sect. 5.3.

(D4) Prioritizing items closely associated with the one currently in focus

At the same time, we faithfully follow the association rules yielded by the context map to select the items to be exhibited next.
More specifically, we activate items that have a strong association with the most focused item by increasing their priority values, regardless of whether they are currently on display or not. For this purpose, we conduct the k-nearest neighbor search around the currently attended item on the context map by calculating the 2D distance from it. Another option is to retrieve the similarity between items by directly computing mutual distances in the original high-dimensional feature space. In our approach, we did not employ this because we may accidentally include irrelevant items in the list of neighbors. In practice, topic-based text mining successfully identifies a group of closely related items as a cluster in the feature space. However, it is still possible that such clusters are intricately embedded in the original high-dimensional space, and items belonging to different clusters happen to be close to each other unexpectedly. This is because each item has a high-dimensional neighborhood, which means that many other items can easily approach it from any direction. The dimensionality reduction based on t-SNE successfully retains the mutual closeness of items in each cluster while keeping different clusters apart. This allows us to successfully retrieve a set of closely related items to the currently focused one from the 2D projected version of the context map. We actually tested the approximate nearest neighbor search (Li et al. 2019) on the high-dimensional context map before applying dimensionality reduction (i.e., t-SNE). However, in this case, we encountered several undesirable cases in which we retrieved irrelevant items. Thus, we decided to employ the configuration of the items after they are projected onto 2D space. ### 5.3 Notes on the initial placement of items As described previously, the design criterion (D3) may risk hiding a particular set of items if their priority values are kept lower than those of other items. We can considerably mitigate this risk with the help of the proposed context-aware adjustment of the priority values for the items. However, it may be possible that items of some specific category remain invisible on the display if their priority values are not fully activated by their related items through the association rules on the context map. This issue can be alleviated by guiding the initial placement so that the visible items represent their different categories. The context map also facilitates the selection of such initial items since the clusters of items on the map effectively characterize such categorization. In practice, in our implementation, we simulate a conventional layout rule that picks up representative items from different categories by raising their initial priority values intentionally. This means that following this rule is equivalent to choosing important items from each cluster on the context map. In this way, devising the initial placement of items allows users to avoid missing their preferred items even when they belong to any category. ## 6 Results In this section, we first demonstrate the experimental results of our approach, then we justify the design criteria for the context-aware placement of items through user studies, and finally, we discuss the feasibility of our formulation. We implemented our prototype system on a MacBook Pro with an Intel Core i7 processor (four cores, 2.3GHz, 512KB L2 Cache per core, and 8MB L3 Cache) and 32GB of RAM. 
The source code was written in C++ using OpenGL/GLUT for UI and drawing items, OpenCV for handling images, and IBM ILOG CPLEX for solving IP optimization problems. Python programs were also combined with the system to conduct a topic-based analysis of annotated texts based on NMF. We introduced a Tobii Pro Nano eye-tracker together with the Tobii SDK library to retrieve the viewer’s gaze positions. The annotated texts associated with the items were collected from Internet resources, such as Wikipedia. ### 6.1 Experimental results In the first demonstration of our approach, we constructed a virtual vending machine of drinks, as exhibited in Fig.  1. We composed three search scenarios to see how the context-aware approach produces different layouts of drinks, as follows: A) from orange juice to mineral water, B) from cola to mineral water, and C) from oolong tea to mineral water. Figure  5a and b present the initial placement of drinks and the corresponding context map computed by topic-based text mining using matrix factorization. We also clarify how our search path can be indicated on the context map according to the three scenarios in Fig.  5b. In the initial placement, we first classified the set of drinks into several categories. We then employed the design criterion (S2) to place drinks from similar categories next to each other, including water (mineral and sparkling), cultured milk (Calpico, etc.), tea (black, green, oolong, etc.), and juice (orange, apple, etc.). Drinks of different sizes are arranged in a vertical alignment (i.e., a bottle and a can of oolong tea). We use the criterion (S1) to restrict small cans and brick cartons to the bottom row. We force bottles with the same drink to line up horizontally with the criterion (S3). In this case, we also bound the numbers of mineral water, sparkling water, cola, cola zero, orange juice, apple juice, oolong tea, and green tea bottles within [2, 4], [1, 2], [2, 3], [1, 2], [1, 2], [0, 2], [1, 2], and [1, 3]. The critical mechanism is that the system waits for the visual attention of viewers to be entirely focused on a small area of the display before updating the placement of items. The system also lists items attracting more attention in the hotlist to record the history of preferred items in the bottom row and next to the output port of the vending machine (Fig.  5a). We smoothly replace items in other cells using a cross-dissolve animation in order not to disturb the viewers’ focus on their preferred items. Figure  6 presents sequences of snapshots to depict how the three scenarios control the dynamic placement of drinks differently. The context map represents the priority value of each drink by coloring its corresponding plots and name labels, so that the color changes from blue to yellow to red as the priority value increases. In Scenario A), we begin by focusing on orange juice in the middle row on the right and replace unattended items with an additional apple juice bottle by following the association rules provided by the context map. Although the layout still kept multiple juice bottles, it soon added mineral water along with sparkling water once the mineral water attracted more attention. In Scenario B), more cola and cola zero bottles were lined up in the first half and still stayed on display even after mineral water bottles attracted more attention. The same effect can be found in Scenario C), where oolong tea, along with other tea bottles, joined the placement. 
Mineral water bottles finally won more cells on the display, while most of the tea bottles remained in the end. These results demonstrated that we could successfully take advantage of the association rules based on the context map to maximally respect the underlying context of searching for favorite items.

We also simulated the digital information wall in which we arranged a series of landscape woodblock prints called Thirty-Six Views of Mt. Fuji painted by the famous Japanese Ukiyo-e artist, Hokusai Katsushika. Figure 7 demonstrates how the paintings are updated dynamically according to the spatiotemporal distribution of visual attention. We supposed a possible scenario in which visitors in a museum freely looked at their favorite paintings on the information wall and simulated how the system dynamically rearranged the paintings when they paid particular attention to the display. We again employed the context map obtained using topic modeling based on matrix factorization to prepare a set of association rules among the paintings. In this setup, we grouped multiple cells into a single block matrix if they retained the same painting.

### 6.2 User evaluation studies

We recruited participants for user studies and asked them to evaluate the validity of each of the design criteria (S1)–(S3) and (D1)–(D4), which were described earlier in this paper. In the user studies, we evaluated each of the design criteria by invalidating each criterion to generate the different placement of items and asked the participants to conduct a side-by-side comparison with the original placement obtained with the complete set of criteria. For the respective criteria for designing the static placement (S1)–(S3), we asked each participant to complete an online questionnaire. We conducted an eye-tracking study to validate the design criteria for the dynamic placement (D1)–(D4). We also compared the association rules suggested by our context map with those obtained with a conventional recommendation system based on matrix factorization.

#### 6.2.1 Justifying the design criteria (S1)–(S3)

Table 1 Side-by-side comparisons between static placements for the criteria (S1)–(S3). Also refer to Figs. 8 and 9

| Design criteria | Without criterion | With criterion | p-value |
| --- | --- | --- | --- |
| (S1) in the $$3 \times 6$$ placement of drinks | 6 (13.6%) | 38 (86.4%) | $$4.72 \times 10^{-7}$$ |
| (S2) in the $$3 \times 6$$ placement of drinks | 11 (25.0%) | 33 (75.0%) | $$6.30 \times 10^{-4}$$ |
| (S3) in the $$3 \times 6$$ placement of drinks | 6 (13.6%) | 38 (86.4%) | $$4.72 \times 10^{-7}$$ |
| (S1) in the $$2 \times 10$$ placement of drinks | 4 (9.1%) | 40 (90.9%) | $$8.53 \times 10^{-9}$$ |
| (S2) in the $$2 \times 10$$ placement of drinks | 5 (11.4%) | 39 (88.6%) | $$7.03 \times 10^{-8}$$ |
| (S3) in the $$2 \times 10$$ placement of drinks | 9 (20.5%) | 35 (79.5%) | $$5.30 \times 10^{-5}$$ |
| (S2) in the $$5 \times 5$$ placement of paintings | 17 (38.6%) | 27 (61.4%) | $$8.71 \times 10^{-2} (> 0.05)$$ |
| (S3) in the $$5 \times 5$$ placement of paintings | 4 (9.1%) | 40 (90.9%) | $$8.53 \times 10^{-9}$$ |

We initiated our user evaluation by assessing the design criteria (S1)–(S3) for the static placement of items through an online questionnaire. We first prepared three pairs of drink placements in $$3 \times 6$$ and $$2 \times 10$$ grids, in each of which we excluded one of the three design criteria (S1)–(S3) for the comparison by the participants.
We also prepared $$5 \times 5$$ grid layouts of paintings; in this case, we tested the design criteria (S2) and (S3) only because all of the paintings are in the same size and aspect ratio. We recruited 44 participants (8 females and 36 males) for the online questionnaire. Their ages ranged from 19 to 65, and more than half were university students (ages 19–24) majoring in computer sciences and relevant fields. For each evaluation task, we requested the participants to conduct side-by-side comparisons between a pair of static placements described above and select the one they liked. We provided participants with information about the aspect in which the two placements were different without informing them about which placement misses the corresponding design principle. Table  1 exhibits the comparison results we collected from the online questionnaire. Figure  8 lists the first three pairs of drink layouts, and Fig.  9 presents the last two pairs of painting placements used in this study. In the first pair in Fig.  8a, drinks of small size are scattered over the arrangement on the left due to the absence of (S1), while they are all aligned in the bottom row on the right. Thus, most of the participants indicated the placement on the right as their favorite choice. Figure  8b presents the side-by-side comparison between the placements of drinks while the left placement lacks the design criterion (S2). As shown in the layout on the right, drinks in the same category are arranged next to each other, including juice bottles, tea bottles, and carbonated drink bottles. On the other hand, tea bottles are irregularly located, and juice bottles are split in the arrangement on the left. The majority of the participants supported the placement on the right also in this case. The last design criterion, (S3), was employed to systematically line up multiple bottles of mineral water, as exhibited on the right, while they were split into multiple blocks on the left. Again, in this case, the design criteria were liked by many participants. Table  1 also demonstrates that these three design criteria collected the majority of votes even in a $$2 \times 10$$ grid placement of items. As for the placement of paintings in Fig.  9a, the design criterion (S2) was only weakly supported. This was probably because fewer participants noticed that the paintings were grouped individually according to the target motifs, such as mountains, seas, ships, and streets. Conversely, the last rule (S3) was explicitly supported because copies of the same painting were aesthetically grouped in a block, as presented in Fig.  9b. In summary, all the three design criteria (S1)–(S3) in both cases were highly evaluated in the user study to varying degrees. We will discuss this issue again later in this section. #### 6.2.2 Justifying the design criteria (D1)–(D4) Evaluating the design criteria in the context of the dynamic placement of items requires us to carefully design an eye-tracking study as a laboratory experiment. For this purpose, we prepared a scenario that guides each participant to watch a specific set of items in a specific order. We then screencast the actual update of the dynamic placement so that the participants could refer to how the items were replaced and rearranged according to their spatiotemporal eye gaze movements. We recruited eight participants with normal vision; all participants were male university students majoring in computer sciences, and their ages ranged from 19 to 24. 
As our running example, we employed the dynamic placement of drinks. We performed an approximately 30–40 minute study for each participant, which consisted of explaining the objectives and overall scenario of the experiment, obtaining a written approval form, calibrating the eye-tracking devices, and performing a preliminary practice followed by four comparison sessions. Each session contained a comparison between a pair of dynamic placements of items, where the participants were guided to look at cola (bottle), oolong tea (bottle), and orange juice (bottle), in that order, until the overall placement of drinks was updated twice for each drink. In each session, we asked the participants to select the better dynamic presentation of drinks while invalidating one of the design criteria (D1)–(D4) in either of the two eye-tracking tasks. Although we informed the participants of how the two placements differed in advance, we did not tell them which placement corresponded to the case in which we invalidated the criterion. This experiment was conducted under the approval of the research ethics committee, and adequate measures against COVID-19 were taken to protect the participants (see Fig. 10). In particular, in this eye-tracking study, each participant was compensated with 1000 JPY because they needed to spend a longer time on their participation.

Table 2 presents the comparison results of this laboratory experiment. Again, all the design criteria were highly supported by the participants. In the comparison task for the design criterion (D1), all the participants liked the focused item to stay in the same cell for a while, as shown in Fig. 11a. In the second comparison for the criterion (D2), most participants rejected the dynamic placement in which we prepared the next update by ignoring the drink that attracted the most visual attention. The participants preferred the criterion (D3) that exhibited various drinks, including those hidden behind the display. Lastly, our context-aware dynamic placement criterion (D4) received higher ratings from many of the participants. The rightmost image in Fig. 11b shows such a case, in which the cola zero bottle and matched carbonated drink bottles were more likely to appear on the display after the participants focused their attention on the cola bottle, especially when it is compared with the case on the left. These observations demonstrate that our design criteria for dynamic placement promote an attractive exhibition of items according to the visual interest of the viewers.

Table 2 Side-by-side comparisons between dynamic placements for the criteria (D1)–(D4)

| Design criteria | Without criterion | With criterion | p-value |
| --- | --- | --- | --- |
| (D1) in the $$3 \times 6$$ placement of drinks | 0 (0.0%) | 8 (100.0%) | $$3.91 \times 10^{-3}$$ |
| (D2) in the $$3 \times 6$$ placement of drinks | 2 (25.0%) | 6 (75.0%) | $$1.45 \times 10^{-1} (> 0.05)$$ |
| (D3) in the $$3 \times 6$$ placement of drinks | 1 (12.5%) | 7 (87.5%) | $$3.52 \times 10^{-2}$$ |
| (D4) in the $$3 \times 6$$ placement of drinks | 1 (12.5%) | 7 (87.5%) | $$3.52 \times 10^{-2}$$ |

#### 6.2.3 Comparison with conventional association rules

Table 3 Side-by-side comparisons between recommended drinks using the matrix factorization and the context map.
The percentage to the right of each recommended drink represents the approval rating in the user study.

| Focused item | Matrix factorization | Context map |
| --- | --- | --- |
| Cola | Energy drink (73.0%), Cola zero (94.6%), Sparkling water (40.5%) | Energy drink (73.0%), Cola zero (94.6%), Lemon black tea (0.0%) |
| Oolong tea | Green tea (75.7%), Barley tea (86.5%), Lemon black tea (13.5%) | Green tea (75.7%), Barley tea (86.5%), Black tea (43.2%) |
| Orange juice | Banana juice (56.8%), Oolong tea (0.0%), Soy milk (0.0%) | Vegetable juice (51.4%), Tomato juice (35.1%), Banana juice (56.8%) |
| Blend coffee | Black coffee (91.9%), Cafe au lait (94.6%), Green tea (2.7%) | Black coffee (91.9%), Cafe au lait (94.6%), Cultured milk soda (0.0%) |
| Mineral water | Sparkling water (94.6%), Soy milk (0.0%), Cola zero (0.0%) | Sparkling water (94.6%), Cola zero (0.0%), Lemon black tea (8.1%) |

For the final user study, we compared association rules among items based on our context map with those obtained by conventional recommendation techniques. For this purpose, we again employed the association rules among drinks as an example. To retrieve the association rules reproduced by the recommendation techniques, we first recruited 30 participants (7 females and 23 males; ages ranged from 19 to 59) who majored in computer sciences and related fields, and asked them to complete an online questionnaire. In the questionnaire, they were requested to answer five questions, including “Which drinks do you like when you are thirsty, tired, etc.,” and they were asked to select a specific set of drinks from 30 available drinks for each question. From this data, we composed a $$30 \times 150$$ matrix, where the matrix is spanned by 150 (= 30 persons $$\times$$ 5 questions) binary vectors representing how the 30 drinks were selected simultaneously for the individual cases. We applied NMF (Koren et al. 2009) to decompose the matrix into representative patterns of drink selections and inferred which drinks should be recommended if one specific drink was chosen. The results are listed in Table 3, which shows the three relevant drinks recommended by the two approaches if we visually focus on a specific drink. As for the approval ratings for the respective drinks in Table 3, we administered another online questionnaire to which we invited 37 participants (7 females and 30 males; ages ranged from 19 to 65) to answer whether they approved of the recommendation for each candidate drink. Comparing the recommended drinks enables us to claim that our context map can produce drink association rules as reasonable as those obtained by conventional recommendation techniques. These results also support the validity of our context-aware dynamic placement of items.

### 6.3 Discussion

As described above, we could confirm that our design choices for static and dynamic placement of items were largely supported by the participants in the user studies. To verify this claim, we computed p-values for each side-by-side comparison between the two placements on the assumption that the probability of choosing either of them is 0.5 (50%) for both. Tables 1 and 2 list the p-values for each comparison. If we employ 0.05 (5%) as a threshold for the significance test, all the biases in the layout selections were statistically significant, except for (S2) in the $$5 \times 5$$ grid placement of paintings and (D2) in the eye-tracking experiment. Even in these exceptional cases, the p-values were relatively close to the threshold for statistically significant biases.
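For reference, the reported p-values are consistent with a one-sided binomial (sign) test against the 50% null hypothesis; the one-sidedness is our inference from the numbers rather than something stated above. The snippet below reproduces one entry each from Tables 1 and 2.

```python
from scipy.stats import binomtest  # SciPy >= 1.7

# 38 of 44 participants preferred the placement with criterion (S1), 3 x 6 grid.
print(binomtest(38, n=44, p=0.5, alternative="greater").pvalue)  # ~4.72e-07
# 8 of 8 participants preferred the dynamic placement with criterion (D1).
print(binomtest(8, n=8, p=0.5, alternative="greater").pvalue)    # ~3.91e-03
```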
This consideration lets us conclude that we can statistically support the proposed design criteria for static and dynamic placement. The proposed design criteria for the dynamic placement of items have limitations and remain to be further elaborated. In the design criterion (D2), we hypothesize that viewers focus on a set of items of the same category in the search for their favorites. However, they may try to intentionally compare items of different categories, especially in the early stage of their visual exploration. In this exceptional case, our system expects viewers to focus on some item first and then shift to another item of a different category. The proposed approach tries to faithfully track such a temporal change in the distribution of visual attention and update the placement of items accordingly. However, in this case, the system will produce different placements according to the order of the two focused items and need more time to reflect the viewer’s intention in the placement for the second item. Exploring multiple items of different categories in our approach is left as our future research. In the proposed scenario, we assume that the users do not decide which items they really want at first and need some kind of association to find their favorites by looking at related items as a hint. We interviewed several participants after the eye-tracking experiment to find out the validity of such an assumption. They mostly liked such an idea of encouraging users to identify what they really want through a visual exploration of items. Furthermore, some participants suggested that we can potentially find more preferred items other than the original one through the proposed gaze-based interaction. Other participants argued that this type of interaction could lead to a novel type of recommendation system that effectively helps us make reasonable decisions. Visually tracking dynamic placements can potentially incur unwanted perceptual stress for users. This perceptual issue has been alleviated, as described previously, by leaving a set of focused items untouched (cf. (D1)) and animating the replacement of other items through a cross-dissolve transition. Moreover, participants in the eye-tracking study also emphasized selecting the proper speed in updating the contents on display. This suggests that users will be comfortable in their exploration if they have sufficient time to look at the items before they are replaced with the next set of items. Pursuing a perceptually plausible timing for updating the placement of items will be an interesting theme, although this is beyond the scope of this research. The proposed approach will inspire us to develop new techniques for arranging visual symbols, such as icons and pictograms. In particular, we can incorporate the underlying spatial layout of such symbols as constraints into our approach to maximally respect their expected placement. An interesting example is aligning cartographic symbols at a grid while respecting their original geographic positions. This type of visualization method is beneficial because the grid layout of items often leads to aesthetically pleasing representations and improves the readability of the associated visual information (Wood et al. 2011; Cano et al. 2015). In particular, our approach facilitates visualizing spatiotemporal trends inherent in the geospatial data while respecting the underlying temporal context. Scalability in terms of the number of items is another critical issue. 
Since we have newly introduced matrix factorization in the topic-based text analysis, we can compose the context map even when we have many more items. This consideration leads us to the idea of first classifying the items into several categories, then selecting the representative items from each group, and finally arranging them in the placement. This hierarchical mechanism provides the viewers with a gaze-driven interface for exploring across multiple levels of detail on demand. We can also strive to pursue the maximum use of our interface to collect different types of data that help us satisfy users' specific preferences. More extensive integration of our approach with state-of-the-art machine learning techniques is also an exciting theme for future research.

## 7 Conclusion

We have presented an approach for optimizing the dynamic placement of items by respecting the ongoing search context of the viewers. Our gaze-driven system first obtained the spatiotemporal distribution of eye gaze points over the display using an eye-tracker device. It then maximally reflected the personal preferences of the viewers by referring to association rules among the items for more proactive visual exploration. We introduced a context map that maintained the association rules as the mutual relationship between items on a 2D diagram. For this purpose, we applied NMF-based topic modeling techniques to annotated texts associated with the items and introduced dimensionality reduction to the corresponding set of high-dimensional feature vectors. We also demonstrated experimental results and justified the choice of our design criteria through user studies, followed by a discussion on the limitations and future extensions of this work.

## Acknowledgements

We thank anonymous reviewers for their valuable comments and suggestions, which helped us improve the manuscript. This work has been partially supported by JSPS KAKENHI (Grant Numbers 19H04120 and 16H02825).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Objective: To reach the estimated 84.1 million U.S. adults with prediabetes, lower-cost alternatives to the National Diabetes Prevention Program (NDPP), which is estimated to cost $500 per person, are needed. In a previous randomized controlled trial, we demonstrated efficacy of a 12-month text message support program (SMS4PreDM) in individuals with prediabetes. We now explore effectiveness in a pragmatic study in a healthcare system, in addition to calculating per-person costs of SMS4PreDM. Research Design and Methods: English- and Spanish-speaking patients with diabetes risks (e.g., A1C 5.7-6.4) were referred by their healthcare providers and offered NDPP classes, SMS4PreDM, or both. This analysis focused on comparing weight outcomes among SMS4PreDM-only participants to a usual care control group of patients with diabetes risks who were not referred. As a pragmatic study, weights for both groups were collected from electronic health records at baseline and 12 months. Rates of achieving ≥3% weight loss were compared using logistic regression, including a sub-analysis by language. Results: Among 183 SMS4PreDM-only participants, 51.4% (N=94) had documented pre- and post-intervention weights and 30.9% (N=29) achieved ≥3% weight loss, compared with 23.4% of 1,871 control patients (p=0.10). English-speakers receiving SMS4PreDM trended toward ≥3% weight loss at a higher rate than English-speaking controls (36.5% vs. 25.6%, p=.07). There was no significant difference among Spanish-speakers. In an intention-to-treat analysis that assumed no weight loss among participants with missing data, there was no significant difference in achieving ≥3% weight loss as compared to controls (20.8% vs. 20.2%, p=.49). Costs of SMS4PreDM were $100.92 per capita. Conclusions: SMS4PreDM trends toward greater achievement of ≥3% weight loss compared to usual care and at a lower cost than the NDPP. However, results may not be sufficiently robust or generalizable to support long-term implementation. Disclosure: H. Fischer: None. S. Raghunath: None. J. Durfee: None. N. Ritchie: None. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered. More information is available at http://www.diabetesjournals.org/content/license.
1. An Untrollable Mathematician
post by Abram Demski 538 days ago | Alex Appel, Sam Eisenstat, Vanessa Kosoy, Jack Gallagher, Jessica Taylor, Paul Christiano, Scott Garrabrant and Vladimir Slepnev like this | 1 comment

Follow-up to All Mathematicians are Trollable. It is relatively easy to see that no computable Bayesian prior on logic can converge to a single coherent probability distribution as we update it on logical statements. Furthermore, the non-convergence behavior is about as bad as could be: someone selecting the ordering of provable statements to update on can drive the Bayesian’s beliefs arbitrarily up or down, arbitrarily many times, despite only saying true things. I called this wild non-convergence behavior “trollability”. Previously, I showed that if the Bayesian updates on the provability of a sentence rather than updating on the sentence itself, it is still trollable. I left open the question of whether some other side information could save us. Sam Eisenstat has closed this question, providing a simple logical prior and a way of doing a Bayesian update on it which (1) cannot be trolled, and (2) converges to a coherent distribution.

2. Where does ADT Go Wrong?
discussion post by Abram Demski 605 days ago | Jack Gallagher and Jessica Taylor like this | 1 comment

3. An Approach to Logically Updateless Decisions
discussion post by Abram Demski 786 days ago | Sam Eisenstat, Jack Gallagher and Scott Garrabrant like this | 4 comments

4. Index of some decision theory posts
discussion post by Tsvi Benson-Tilsen 1012 days ago | Ryan Carey, Jack Gallagher, Jessica Taylor and Scott Garrabrant like this | discuss

5. Logical Inductors that trust their limits
post by Scott Garrabrant 1028 days ago | Jack Gallagher, Jessica Taylor and Patrick LaVictoire like this | 2 comments

Here is another open question related to Logical Inductors. I have not thought about it very long, so it might be easy. Does there exist a logical inductor $$\{\mathbb P_n\}$$ over PA such that for all $$\phi$$:

1. PA proves that $$\mathbb P_\infty(\phi)$$ exists and is in $$[0,1]$$, and
2. $$\mathbb{E}_n(\mathbb{P}_\infty(\phi))\eqsim_n\mathbb{P}_n(\phi)$$?

6. Universal Inductors
post by Scott Garrabrant 1035 days ago | Sam Eisenstat, Jack Gallagher, Benja Fallenstein, Jessica Taylor, Patrick LaVictoire and Tsvi Benson-Tilsen like this | discuss

Now that the Logical Induction paper is out, I am directing my attention towards decision theory. The approach I currently think will be most fruitful is attempting to make a logically updateless version of Wei Dai’s Updateless Decision Theory. Abram Demski has posted on here about this, but I think Logical Induction provides a new angle with which we can attack the problem. This post will present an alternate way of viewing Logical Induction which I think will be especially helpful for building a logical UDT. (The Logical Induction paper is a prerequisite for this post.)
PREPRINT

# The Interstellar Interlopers

David Jewitt and Darryl Z. Seligman

Submitted on 16 September 2022

## Abstract

Interstellar interlopers are bodies formed outside of the solar system but observed passing through it. The first two identified interlopers, 1I/Oumuamua and 2I/Borisov, exhibited unexpectedly different physical properties. 1I/Oumuamua appeared unresolved and asteroid-like whereas 2I/Borisov was a more comet-like source of both gas and dust. Both objects moved under the action of non-gravitational acceleration. These interlopers and their divergent properties provide our only window so far onto an enormous and previously unknown galactic population. The number density of such objects is $\sim$ 0.1 AU${}^{-3}$ which, if uniform across the galactic disk, would imply 10${}^{25}$ to 10${}^{26}$ similar objects in the Milky Way. The interlopers likely formed in, and were ejected from, the protoplanetary disks of young stars. However, we currently possess too little data to firmly reject other explanations.

## Preprint

Comment: 40 pages, 19 figures, 7 tables, invited review in ARA&A Volume 61, submitted, comments welcome

Subjects: Astrophysics - Earth and Planetary Astrophysics; Astrophysics - Astrophysics of Galaxies
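As a rough plausibility check on the quoted population (the disk dimensions below are our own illustrative assumptions, not values from the abstract): multiplying the $\sim$ 0.1 AU${}^{-3}$ number density by the volume of a uniform disk of radius 15 kpc and thickness 300 pc gives a count at the upper end of the stated range.

```python
import math

AU_PER_PC = 206_265           # astronomical units per parsec
n = 0.1                       # interloper number density, AU^-3 (from the abstract)
R = 15_000 * AU_PER_PC        # assumed disk radius: 15 kpc expressed in AU
h = 300 * AU_PER_PC           # assumed disk thickness: 300 pc expressed in AU

N = n * math.pi * R**2 * h    # density times the volume of a uniform disk
print(f"{N:.1e}")             # ~1.9e26, the same order as the quoted 10^26 upper value
```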
# Variation of Conductivity and Molar Conductivity with Concentration

Do you think the conductivity of a solution varies when you alter its concentration? It does, and understanding how is an important part of electrochemistry. This article digs deeper into the variation of conductivity and molar conductivity with concentration.

## What is Specific Conductivity?

The specific conductivity (or simply conductivity) of an electrolytic solution at any given concentration is the conductance of a unit volume of the solution kept between two platinum electrodes of unit cross-sectional area placed a unit distance apart. Conductivity decreases with a decrease in concentration because the number of ions per unit volume that carry the current decreases on dilution.

The molar conductivity of a solution at a given concentration is the conductance of the volume V of solution containing one mole of electrolyte, kept between two electrodes of cross-sectional area A placed a unit distance apart:

$$\Lambda_m = \frac{\kappa}{c}$$

Here, c is the concentration in moles per unit volume, κ is the specific conductivity, and Λm is the molar conductivity. Since the solution contains one mole of electrolyte in volume V (so that c = 1/V), the equation can also be written as:

$$\Lambda_m = \kappa V$$

## Change in Molar Conductivity

Molar conductivity increases with a decrease in concentration. This happens because the total volume V of solution containing one mole of electrolyte increases on dilution. When the concentration approaches zero, the molar conductivity approaches a limiting value known as the limiting molar conductivity, Λ°m. The variation of molar conductivity with concentration is different for strong and weak electrolytes.

### Variation of Molar Conductivity with Concentration for Strong Electrolytes

For strong electrolytes, the molar conductivity increases slowly on dilution. A plot of the molar conductivity against $$\sqrt{c}$$ is a straight line with y-intercept equal to Λ°m, so the limiting molar conductivity can be determined from the graph or with the help of Kohlrausch law. The general equation for the plot is:

$$\Lambda_m = \Lambda^{\circ}_m - A\sqrt{c}$$

where A is a constant equal to the magnitude of the slope of the line. For a given solvent and temperature, the value of A depends on the type of electrolyte, so it differs from solution to solution.

### Variation of Molar Conductivity with Concentration for Weak Electrolytes

For weak electrolytes, the graph of molar conductivity against $$\sqrt{c}$$ is not a straight line. Weak electrolytes have lower molar conductivities and lower degrees of dissociation at higher concentrations, and the molar conductivity rises steeply at low concentrations as the dissociation becomes more complete. Hence, we use Kohlrausch law of independent migration of ions to determine the limiting molar conductivity Λ°m of weak electrolytes.

## Solved Example for You

Question: How does the concentration of a solution affect its specific conductivity?

Answer: Specific conductivity decreases with a decrease in concentration, since the number of ions per unit volume that carry the current in the solution decreases on dilution.
Hence, specific conductivity and concentration change in the same direction: diluting the solution lowers its conductivity.
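To make the unit handling concrete, here is a small worked calculation (a sketch with illustrative numbers roughly corresponding to 0.1 M KCl at 25 °C; check the values against your own data before relying on them):

```python
# Molar conductivity from specific conductivity: Lambda_m = kappa / c.
kappa = 1.29e-2           # specific conductivity, S cm^-1 (illustrative value)
c_mol_per_L = 0.10        # concentration, mol L^-1

c = c_mol_per_L / 1000.0            # convert to mol cm^-3 so the units match kappa
molar_conductivity = kappa / c      # S cm^2 mol^-1
print(round(molar_conductivity, 1)) # ~129 S cm^2 mol^-1

# For a strong electrolyte, plotting such values against sqrt(c) and extending
# the straight line Lambda_m = Lambda_m0 - A*sqrt(c) back to c = 0 gives the
# limiting molar conductivity Lambda_m0 (the y-intercept).
```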
# Three-dimensional reconstruction and characterization of bladder deformations

2023-01-18 09:28:59

Augustin C. Ogier, Stanislas Rapacchi, Marc-Emmanuel Bellemare

##### Abstract

Background and Objective: Pelvic floor disorders are prevalent diseases, and patient care remains difficult as the dynamics of the pelvic floor remain poorly known. So far, only 2D dynamic observations of straining exercises at excretion are available in the clinics, and the understanding of three-dimensional pelvic organ mechanical defects is not yet achievable. In this context, we proposed a complete methodology for the 3D representation of the non-reversible bladder deformations during exercises, directly combined with synthesized 3D representation of the location of the highest strain areas on the organ surface. Methods: Novel image segmentation and registration approaches have been combined with three geometrical configurations of up-to-date rapid dynamic multi-slices MRI acquisition for the reconstruction of real-time dynamic bladder volumes. Results: For the first time, we proposed real-time 3D deformation fields of the bladder under strain from in-bore forced breathing exercises. The potential of our method was assessed on eight control subjects undergoing forced breathing exercises. We obtained an average volume deviation of the reconstructed dynamic bladder volumes of around 2.5% and high registration accuracy with mean distance values of 0.4 $\pm$ 0.3 mm and Hausdorff distance values of 2.2 $\pm$ 1.1 mm. Conclusions: Immediately transferable to the clinics with rapid acquisitions, the proposed framework represents a real advance in the field of pelvic floor disorders as it provides, for the first time, a proper 3D+t spatial tracking of bladder non-reversible deformations. This work is intended to be extended to patients with cavities filling and excretion to better characterize the degree of severity of pelvic floor pathologies for diagnostic assistance or in preoperative surgical planning.

##### URL

https://arxiv.org/abs/2301.07385

##### PDF

https://arxiv.org/pdf/2301.07385
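For readers unfamiliar with the accuracy figures quoted in the abstract, the mean surface distance and Hausdorff distance are standard ways of comparing two segmented surfaces. The sketch below shows one minimal way to compute them for two point sets sampled from such surfaces; it is a generic illustration with an assumed isotropic voxel spacing, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(pts_a, pts_b, spacing_mm=1.0):
    """Symmetric mean and Hausdorff distances between two surface point sets.

    pts_a, pts_b: (N, 3) arrays of surface points in voxel coordinates;
    spacing_mm converts voxels to millimetres (isotropic spacing assumed).
    """
    d_ab = cKDTree(pts_b).query(pts_a)[0]  # each point of A to its nearest in B
    d_ba = cKDTree(pts_a).query(pts_b)[0]  # each point of B to its nearest in A
    mean_dist = (d_ab.mean() + d_ba.mean()) / 2.0 * spacing_mm
    hausdorff = max(d_ab.max(), d_ba.max()) * spacing_mm
    return mean_dist, hausdorff
```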
Article | Open Reconstructing cell cycle pseudo time-series via single-cell transcriptome data • Nature Communications 8, Article number: 22 (2017) • doi:10.1038/s41467-017-00039-z Accepted: Published online: Abstract Single-cell mRNA sequencing, which permits whole transcriptional profiling of individual cells, has been widely applied to study growth and development of tissues and tumors. Resolving cell cycle for such groups of cells is significant, but may not be adequately achieved by commonly used approaches. Here we develop a traveling salesman problem and hidden Markov model-based computational method named reCAT, to recover cell cycle along time for unsynchronized single-cell transcriptome data. We independently test reCAT for accuracy and reliability using several data sets. We find that cell cycle genes cluster into two major waves of expression, which correspond to the two well-known checkpoints, G1 and G2. Moreover, we leverage reCAT to exhibit methylation variation along the recovered cell cycle. Thus, reCAT shows the potential to elucidate diverse profiles of cell cycle, as well as other cyclic or circadian processes (e.g., in liver), on single-cell resolution. Introduction Cell cycle studies, a long-standing research area in biology, are supported by transcriptome profiling with traditional technologies, such as qPCR1, microarrays2, and RNA-seq3, which have been used to quantitate gene expression during cell cycle. However, these strategies require a large amount of synchronized cells, i.e., microarray and bulk RNA-seq, or they may lack observation of whole transcriptome, i.e., qPCR. Moreover, in the absence of elaborative and efficient cell cycle labeling methods, a high-resolution whole transcriptomic profile along an intact cell cycle remains unavailable. Recently, single-cell RNA-sequencing (scRNA-seq) has become an efficient and reliable experimental technology for fast and low-cost transcriptome profiling at the single-cell level4, 5. The technology is employed to efficiently extract mRNA molecules from single cells and amplify them to certain abundance for sequencing6. Single-cell transcriptomes facilitate research to examine temporal, spatial and micro-scale variations of cells. This includes (1) exploring temporal progress of single cells and their relationship with cellular processes, for example, transcriptome profiling at different time phases after activation of dendritic cells7, (2) characterizing spatial-functional associations at single-cell resolution which is essential to understand tumors and complex tissues, such as space orientation of different brain cells8, and (3) unraveling micro-scale differences among homogeneous cells, inferring, for example, axonal arborization and action potential amplitude of individual neurons9. One of the major challenges of scRNA-seq data analysis involves separating biological variations from high-level technical noise, and dissecting multiple intertwining factors contributing to biological variations. Among all these factors, determining cell cycle stages of single cells is critical and central to other analyses, such as determination of cell types and developmental stages, quantification of cell–cell difference, and stochasticity of gene expression10. 
Related computational methods have been developed to analyze scRNA-seq data sets, including identifying oscillating genes and using them to order single cells for cell cycle (Oscope)11, classifying single cells to specific cell cycle stages (Cyclone)12, and scoring single cells in order to reconstruct a cell cycle time-series manually13. Besides, several computational models have been proposed to reconstruct the time-series of differentiation process, including principal curved analysis (SCUBA)14, construction of minimum spanning trees (Monocle15 and TSCAN16), nearest-neighbor graphs (Wanderlust17 and Wishbone18) and diffusion maps (DPT)19. In fact, even before scRNA-seq came into popular use, the reconstruction of cell cycle time-series was accomplished using, for example, a fluorescent reporter and DNA content signals (ERA)20, and images of fixed cells (Cycler)21. However, despite these efforts, accurate and robust methods to elucidate time-series of cell cycle transcriptome at single cell resolution are still lacking. Here we propose a computational method termed reCAT (recover cycle along time) to reconstruct cell cycle time-series using single-cell transcriptome data. reCAT can be used to analyze almost any kind of unsynchronized scRNA-seq data set to obtain a high-resolution cell cycle time-series. In the following, we first show one marker gene is not sufficient to give reliable information about cell cycle stages in scRNA-seq data sets. Next, we give an overview of the design of reCAT, followed by an illustration of applying reCAT to a single cell RNA-seq data set called mESC-SMARTer, and the demonstration of robustness and accuracy of reCAT. At the end, we give detailed analyses of several applications of reCAT. All data sets used in this study are listed in Table 1. Results High variation of expression measures within cells We found that the expression level of one marker gene was insufficient to reveal the cell cycle stage of a single cell as a result of high stochasticity of gene expression and heterogeneity of cell samples. Therefore, we propose to use a group of cell cycle marker genes, combined with proper computational models, to reconstruct pseudo cell cycles from scRNA-seq data with high accuracy. Using a mouse embryonic stem cells (mESC) scRNA-seq data set developed by Buttener et al. (2015)22, we showed that the expression of cell cycle marker genes has high stochasticity. The data set, termed mESC-SMARTer, consists of 232 eligible samples labeled according to cell cycle stages by Hoechst staining. We examined several high-confidence cell cycle marker genes, as shown in Fig. 1a. The cell cycle stages in which these genes have maximum mean relative expression levels are consistent with their existing records29, but the distribution of expression levels between two cell cycle stages showed high overlap (Fig. 1a), indicating that a single marker gene is insufficient to determine the cell cycle stage for a single cell. In addition, we showed that mean gene expression levels, averaging over 20 cell samples, remain highly stochastic (Supplementary Fig. 1). We further examined the consistency of cell cycle stages of maximum mean expression levels of cell cycle marker genes between different cell populations. We selected six single-cell transcriptome sample groups from different tissues and experimental conditions (Table 1), and performed four pairwise comparisons, showing the results in Fig. 1b. 
Assuming consistency between maximum mean expression levels of marker genes and their corresponding cell cycle stages, all counts should be located along the diagonal. In fact, however, many counts spread into off-diagonal entries, showing relatively low consistency (Fig. 1b).

An overview of the reCAT approach

Given an scRNA-seq data set, reCAT reconstructs a cell cycle time-series and predicts cell cycle stages along the time-series. The reconstructed time-series generally consists of multiple cell cycle phases (e.g., ≥10), each of which may contain one or multiple cells. Two fundamental assumptions underlie the cell cycle model: (1) different cell cycle phases form a cycle and (2) the transcriptome at a certain cell cycle phase has a smaller difference relative to that of its most adjacent phase compared to a more distant phase. Hence, reCAT models the reconstruction of the time-series as a traveling salesman problem (TSP), which herein finds the shortest possible cycle by passing through each cell/cluster exactly once and returning to the start. As shown in Fig. 1c, reCAT can be described as a process consisting of four steps. The first step is data processing, including quality control, normalization, and clustering of single cells using the Gaussian mixture model (GMM) according to a user-defined phase number k. We defined the distance between two clusters as the Euclidean distance between their means. In the second step, the order of the clusters was recovered by finding a traveling salesman cycle. Since TSP is a well-known NP-hard problem, we developed a novel and robust heuristic algorithm, termed consensus-TSP, to find the solution. For the third step, we designed two scoring methods, Bayes-scores and mean-scores, to discriminate among cell cycle stages (G0, G1, S, or G2/M). Finally, in the fourth step, we designed a hidden Markov model (HMM) based on these two scoring methods to segment the time-series into G0, G1, S and G2/M, and a Kalman smoother to estimate the underlying gene expression levels of the single-cell time-series (Methods).

An illustration of reCAT working principles

The mESC-SMARTer data set (Buettner et al. 2015) was used to illustrate the principles underlying the reCAT approach. Only the 378 cell cycle genes listed in Cyclebase31 were used in reCAT to build the expression matrix, while other genes were excluded based on the risk of adding noise to the model. The samples were clustered into eight classes (k = 8), and the mean expression levels of these eight clusters were arranged into the optimal traveling salesman cycle. Fig. 2a displays all single cells and a cycle formed by eight cluster centers in a two-dimensional plot using principal component analysis (PCA), in which colors correspond to experimentally determined cell cycle stages. In Fig. 2b, we linearized the traveling salesman cycle into a pseudo time-series of eight phases and plotted the composition of single cells at each phase. The figure shows agreement between the predicted pseudo time-series and the experimentally determined cell cycle stage labels, thereby supporting the validity of the TSP model. In summary, both plots demonstrate a gradual and smooth transition of labeled single-cell components along the pseudo time-series. In the Supplementary Material, we showed that the expression trends of well-studied cell cycle marker genes (Supplementary Table 2) are coherent with the order of the clusters (Supplementary Fig. 2).
Moreover, we converted the covariance matrices of each cluster into a vector (Methods) and computed a traveling salesman cycle using these cluster vectors. The generated time-series (Supplementary Fig. 3a) is also consistent with the above one (Fig. 2a), demonstrating that the traveling salesman cycle is inherent within the data.

Components of reCAT and their validation

At the center of reCAT is a novel heuristic algorithm, termed consensus-TSP (Methods), to solve TSP robustly. It should be noted that no known polynomial time algorithm can solve the TSP problem for every case. On the other hand, scRNA-seq data are highly noisy; even the optimal traveling salesman cycle may not represent the correct cell cycle order. To overcome these problems, we designed a two-step strategy. In the first step, consensus-TSP groups a set of n single cells into k clusters for various values of k ≤ n, and for each set of k clusters, it generates one TSP route using the arbitrary insertion algorithm32. Then the second step of consensus-TSP integrates these routes to produce a consensus traveling salesman cycle (Supplementary Fig. 4, Supplementary Note 2). Consensus-TSP was shown to outperform Oscope11, the arbitrary insertion algorithm (Fig. 2c), and other well-known TSP algorithms (Supplementary Figs 5 and 6) according to the correlation-score, a Pearson correlation coefficient (PCC)-based scoring function that measures the agreement between a predicted pseudo time-series and experimentally determined cell cycle stage labels (Methods). In Fig. 2d, we demonstrated that consensus-TSP also outperformed current single-cell pseudo-time reconstruction methods, including SCUBA14, Monocle15, TSCAN16, Wanderlust17/Wishbone18 and DPT19 (also in Supplementary Fig. 7 and Supplementary Note 4). The comparisons were based on the correlation-scores and change-index values (Methods). The latter index measures how frequently experimentally determined single-cell labels change along the time-series. Consensus-TSP is not only robust (Fig. 2c, Supplementary Note 4) but also scales up well to thousands of single cells (Supplementary Fig. 4f). We observed similar results using the cell cycle stage-labeled mouse embryonic stem cell Quartz-seq (mESC-Quartz) data set23 (the left panel of Fig. 3a) and the cell cycle stage-labeled human embryonic stem cell SMART-seq (hESC) data set11 (Supplementary Fig. 8a). Of course, the scoring methods used for evaluation may have their own limitations. In addition, one point should be noted about the data generation: if cells with the same cell cycle labels were processed and sequenced in the same batch, these cells can be clustered together nicely because of the batch effects, which leads to high scoring values even though cells within each cell cycle stage may not be properly ordered.

We designed two scoring methods, called ‘Bayes-scores’ and ‘mean-scores’, to discriminate among the cell cycle stages (Methods). The Bayes-score is a supervised learning method, which computes Naive Bayesian likelihood values using expression level comparisons of pre-selected gene pairs as input features. The model uses a training data set to determine a fixed number of informative gene pairs33. This Naive Bayesian design is able to decrease the effect of stochasticity in scRNA-seq data (Supplementary Fig. 9, Supplementary Note 2). The mean-score is an unsupervised method, which computes the mean of log expression levels of a selected set of marker genes specific to each cell cycle stage.
The values of these scores reveal membership of a cluster (or a cell) to a certain cell cycle stage. We trained the Bayes-scores using the mESC-SMARTer data, and we tested both Bayes-scores and mean-scores on the mESC-Quartz, mESC-SMARTer (only mean-scores) and hESC data sets. The curves of these scores are shown in Fig. 3a, Supplementary Figs 7a and 8b, c, respectively. We observe clear cyclic variations of these curves along the cell cycle. In practice, the Bayes-scores performed especially well in distinguishing G0/G1/S from G2/M. The peak for the G1/S mean-score values is usually near the start site of the S stage (Supplementary Figs 7a, 8b and 10), while the peak for the G2/M mean-score values is often near the late G2 stage. For each kind of mean-score, the values at the G0 stage are significantly lower than those at the other stages (Supplementary Note 3), which can be combined into the HMM to discriminate G0 from the other cell cycle stages.

Identification of cell cycle-related genes

The noise of gene expression measurements of single cells is high. Therefore, to better observe gene expression variation along the cell cycle time-series computed by reCAT, we designed a Kalman smoother to estimate the sequential expression levels for a gene (Methods). We employed two statistics, distance correlation (dCor)34 and K nearest neighbors (KNN)-mutual information (KNN-MI)35, to test the significance of the associations between the sequential expression levels of a gene and the pseudo time-series, in order to identify cell cycle-related genes not listed in Cyclebase. We applied the Kalman smoother to the multi-potent progenitor cells from young mice (young-MPP) in the mouse hematopoietic stem cell SMART-seq (mHSC) data set (Table 1), which contains several groups of mouse hematopoietic stem cells, tested all genes and ranked them according to their significance scores (Supplementary Table 3, Supplementary Fig. 11). Afterwards, the sequential expression levels of the top five non-Cyclebase genes by dCor and KNN-MI were plotted in Fig. 3b. Eight out of the ten genes were confirmed to be strongly related to cell cycle by published literature, although the functions of the other two were not clearly recorded (Supplementary Table 4). For instance, Ncapd2 (non-SMC condensin I complex subunit D2), a protein-coding gene, has high expression levels at the S and G2 stages (Fig. 3b). It belongs to a large protein complex involved in chromosome condensation, and it is annotated as a cell cycle-related gene by Gene Ontology36. However, it was not included in Cyclebase.

Decomposing proportions of cell cycle stages for mHSCs

Leveraging Bayes-scores and mean-scores along the pseudo cell cycle time-series, reCAT applies an HMM to segment the time-series into the cell cycle stages G0, G1, S and G2/M (Methods, Supplementary Fig. 12). We applied reCAT to the mHSC data, and at the G1 stage, results showed that young individuals had a higher proportion of long-term HSCs (LT-HSC), 41 out of 167 cells, when compared to old individuals with 10 out of 183 cells (Fig. 4a). This is an independent and quantitative confirmation of the original findings obtained using the staining approach.

High-resolution transcription atlas of cell cycle in mESCs

We next applied reCAT to the mESC samples, termed mESC-Cmp, which were cultured in serum, 2i and a2i media, respectively, for comparison25 (Kolodziejczyk et al. 2015). Previously, Granovskaia et al.37 built a high-resolution transcription profile using synchronized budding yeast cells.
Similarly, we obtained a high-resolution transcription atlas of the mitotic cell cycle in mESCs (Fig. 4b, Supplementary Fig. 13) from scRNA-seq data without synchronization through an in silico approach. Two adjacent cells on the recovered pseudo time-series have a theoretical time gap of less than 5 min on average according to the doubling time of about 20 h, which represents a higher resolution than that produced by Granovskaia et al. for budding yeast. During the cell cycle, known cell cycle-related genes, arranged by their recorded peak time in Cyclebase (Supplementary Table 5), display two main types of expression waves (Fig. 4b, Supplementary Figs 2 and 13), which correspond to the two well-known checkpoints, G1 and G2. We can also observe decreased expression of cell cycle genes at the end of the cell cycle, which may be caused by degradation of mRNA molecules38. We leveraged the decreased expression to estimate the doubling time of the 2i and serum samples and found it consistent with the values reported in the original paper (Supplementary Fig. 14).

Changes of stage proportions during differentiation

We examined the scRNA-seq data of human myoblasts (hMyo)15, as developed by Trapnell et al. (2014), which consist of differentiating myoblasts sampled at the 0th, 24th, 48th, and 72nd hour time points, respectively. We applied reCAT to reconstruct a pseudo cell cycle time-series for each of the four sample groups. Fig. 4c shows the proportions of different cell cycle stages estimated at each sampling time point using the HMM model. A strong negative correlation is shown between differentiation progress and cell cycle activity, as a higher proportion of cells are found in cell cycle at the start of differentiation compared to later differentiation time points. The relatively low proportion of cells in cell cycle at the 72-h time point is also consistent with the reduced tendency of differentiated cells to divide, as previously documented (Fig. 4c, Supplementary Fig. 15). We obtained a similar result using the mouse distal lung epithelium SMART-seq data set (mDLM)26, which consists of four groups of cells sampled at four different developmental stages (Supplementary Fig. 16). In the absence of synchronization procedures during differentiation, each of the four cell groups contains slight inner heterogeneity, further suggesting that reCAT is robust to this factor. Even in a cancer cell data set of human metastatic melanoma27, termed hMel, with cancer cell heterogeneity in each sample group, reCAT clearly identified the cell cycle status of single cells (Supplementary Fig. 17).

Recovery of methylation profile along cell cycle

Using a parallel single-cell genome-wide methylome and transcriptome sequencing data set28, termed mESC-MT, we show that reCAT is able to recover a time-series epigenome along the cell cycle via scRNA-seq data. The 61 mESCs were concurrently processed by both SMART-seq for scRNA-seq data and bisulfite sequencing (BS) for single-cell methylation data. We processed the scRNA-seq data first using reCAT to obtain the pseudo time-series (Supplementary Fig. 18) and associated the methylation data with the time-series. We scanned whole-genome methylation levels along the cell cycle (Methods) and discovered that the methylation rate was higher at the G1/S phase compared to other cell cycle stages (Fig. 4d). This observation agrees with and extends the conclusion of Brown et al. (2007)39, but it contradicts the conclusion of Vandiver et al. (2015)40.
Furthermore, we calculated the mean methylation level for promoter regions of gene sets with peak gene expression levels in G1 and G2/M, respectively (Methods). The results imply that the methylation levels for promoter regions of the cell cycle genes vary along the cell cycle (Fig. 4d).

Discussion

Aiming to obtain the high-resolution transcriptomic changes that occur along the cell cycle, we developed an scRNA-seq data analysis approach called reCAT. In basic cell cycle studies, reCAT can (1) recover transcriptome changes without cell synchronization, which might otherwise alter the native processes, and (2) examine those cells in a developing population or tissue, e.g., during differentiation, that have entered G0 vs. those that continue to divide, thus linking transcriptional changes during development to the cell cycle. Therefore, as a novel computational approach to reconstruct the cycle along time for unsynchronized single-cell transcriptome data, reCAT is a promising tool with a number of merits. With higher quality and quantity41 of sequencing samples, more delicate time-series profiles can be modeled in general. Moreover, reCAT has the potential to observe various epigenomic profiles42, 43 along the cell cycle, leveraging parallel sequencing of RNA and DNA44, as has been demonstrated in this work. Even further, the reCAT method can be used in research on other cyclic or circadian expression (e.g., in liver)45.

reCAT could be refined in several ways. Instead of the preselected gene set (378 genes), we would prefer semi-supervised selection of cell cycle genes from the data, as this could lead to better performance in future analysis. The scoring metrics (i.e., Bayes-scores and mean-scores) used to indicate cell cycle stages also need improvements to be less noisy and more informative. Additionally, in a given cell cycle, variation of cell cycle-related gene expression predominates over that of the corresponding differentiation. Accordingly, reCAT separates cell cycle analysis from differentiation, which may introduce some bias, but this, too, can be further improved by a combined model. Conversely, although some reported studies treated cell cycle as noise to be filtered out, cell cycle has considerable influence on the investigated biological processes, e.g., myogenesis and embryogenesis. Thus, a model is needed that considers multiple processes simultaneously.

Methods

Data set selection

Ten data sets were used for analysis (Table 1). Among them, four data sets have experimentally derived cell cycle stage labels: the mouse embryonic stem cell RNA-seq data (mESC-SMARTer), mESC-Quartz, hESC, and three cell lines, H9, MB and PC3, sequenced by qPCR. The hESC samples were labeled by fluorescent ubiquitination-based cell-cycle indicators (FUCCI)30, while the others were labeled by Hoechst staining. The six unlabeled data sets include mHSC, mESC scRNA-seq samples from different culture conditions (mESC-Cmp), hMyo cells sampled at four different time points, mDLM cells sampled at four different time points, hMel scRNA-seq samples, and the mESCs processed by scRNA-seq and bisulfite sequencing in parallel (mESC-MT). The mHSC, mESC-Cmp and mESC-MT data sets consist of homogeneous cells within each group, while the hMyo, mDLM and hMel data sets were sampled from heterogeneous cells.

Quality control, normalization and preprocessing

We processed scRNA-seq data using the following procedure. For data with FPKM or TPM expression levels, we considered samples having more than 4000 genes with expression levels exceeding 2 as eligible.
For data with counts for expression levels, we followed existing procedures22 for quality control. Then we deleted genes whose mean expression was excessively low, e.g., lower than 2 for mean TPM, in order to focus on informative genes. We used the normalization step developed in DESeq46 to obtain relative expression levels. After quality control and normalization, the expression levels of the 378 cell cycle genes, as defined in Cyclebase, were extracted for downstream analysis. Finally, all gene expression levels were transformed by log2(Exp + 1) to prevent domination by highly expressed genes.

For methylation data, the methylation status of a CpG site was considered a binary value in a single cell, unlike a rate in bulk BS. The binary value for single-cell BS data was determined by comparing the methylated and unmethylated counts of a CpG site. We generated two results from the methylation data of the mESC-MT data set in our analysis. The first result is the overall methylation level of the whole genome, which is the ratio of the number of methylated sites over the number of all measured sites. The second result is the mean methylation level for promoter regions of two gene sets, which contain Cyclebase genes labeled with G1 and G2/M peak expression, respectively. A gene promoter region was defined as a +/−3 kbp window centered on the transcriptional start site. After methylation levels were obtained, the curves of methylation levels along the pseudo time-series were drawn using an average smoother of nine points.

Definition of gene sets

We mainly use four gene sets correlated with cell cycle. (A) The first gene set was obtained from Cyclebase 3.0, which collected 378 genes from dozens of cell cycle-related papers. For genes in Cyclebase, expression peak time, significance and source organisms, for example, are documented. (B) The second set (Supplementary Table 1) consists of the 60 highest-ranked Cyclebase genes, with 20 having their maximum expression levels at each of three cell cycle stages (G1, S, and G2/M). (C) The third set (Supplementary Table 2) contains 15 high-confidence cell cycle-related genes selected according to published literature. (D) The fourth gene set (Supplementary Table 5) includes the 120 highest-ranked Cyclebase genes, with 20 having their maximum expression levels at each of six cell cycle stages (G1, G1/S, S, G2, G2/M and M).

Clustering method

Assume that we are given n single cells, each with an observed expression vector $e_i = (e_{i1}, \ldots, e_{im})$ for m genes, $i = 1, 2, \ldots, n$. Considering that the negative binomial distribution is widely used to model gene expression levels, we approximate the logarithm of the negative binomial distribution by a Gaussian distribution (lognormal). Thus, we used the GMM to model clusters of gene expression profiles of single cells. A GMM with k clusters can be described as:

$$\mathrm{gmm}(e_i) := \sum_{r=1}^{k} \pi_r \, N(e_i \mid \mu_r, \Phi_r),$$ (1)

where $N(\cdot \mid \mu, \Phi)$ denotes the Gaussian pdf with mean gene expression vector $\mu$ and covariance matrix $\Phi$, and $\{\pi_1, \ldots, \pi_k\}$ are mixture weights satisfying $\sum_{r=1}^{k} \pi_r = 1$ with $0 \le \pi_r \le 1$, $r \in \{1, \ldots, k\}$. The mixture model can be solved by an expectation maximization algorithm.

Modeling as a TSP

We cluster n single cells into K clusters through the GMM, whose mean gene expression vectors are $\mu_1, \ldots, \mu_K$, each representing a cell cycle phase.
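As a concrete illustration of this clustering step, the sketch below fits the mixture model of Eq. (1) with an off-the-shelf GMM implementation and computes the pairwise Euclidean distances between cluster means. reCAT itself is implemented in R, so the library, function names and defaults used here are illustrative assumptions rather than the package's actual API.

```python
# A minimal sketch of the GMM clustering of Eq. (1), assuming a matrix of
# log2(Exp + 1) values for the 378 Cyclebase genes. Not the reCAT R code.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.spatial.distance import pdist, squareform

def cluster_cells(log_expr, k):
    """log_expr: (n_cells x n_genes) matrix; k: user-defined number of phases."""
    gmm = GaussianMixture(n_components=k, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(log_expr)   # EM fit + cluster assignment per cell
    means = gmm.means_                   # k mean expression vectors (mu_1..mu_k)
    # Pairwise Euclidean distances between cluster means; these become the
    # edge weights of the complete graph used in the TSP step.
    dist = squareform(pdist(means, metric="euclidean"))
    return labels, means, dist
```

The resulting distance matrix supplies the edge weights for the graph constructed next.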
Using these K mean vectors, we construct an undirected weighted complete graph G, where nodes correspond to the K mean vectors, and the edges that connect every pair of nodes are weighted by the Euclidean distance between the two vectors. Our goal is to find a Hamiltonian cycle $C_K$ in this graph such that every node appears in the cycle exactly once and the total edge weight of the cycle is minimized. This is the TSP, the classic NP-hard problem in computer algorithm theory. In our case, the TSP is actually a Euclidean TSP because it satisfies three criteria: non-negative distances, symmetry of distances, and the triangle inequality of distances. It should be noted that the Euclidean TSP is also an NP-hard problem, and no known polynomial time algorithm can solve this problem for every case. We therefore designed a heuristic algorithm, called consensus-TSP, which is based on the arbitrary insertion algorithm, to solve the TSP problem32. The arbitrary insertion algorithm is a randomized algorithm with O(n2) running time for a graph with n nodes, and in the worst case it gives a 2ln(n)-approximation. We chose this algorithm because it can produce a more robust solution than the greedy nearest neighbor algorithm. Given the generated K clusters, there are two steps in the heuristic TSP algorithm. The first step is to compute traveling salesman cycles for different k (e.g., k = 7, 8, …, K), and the second step is to merge the cycles into a consensus cycle. In the first step, for each k, the algorithm takes the k clusters computed from the GMM as input, runs the arbitrary insertion algorithm $n_{fold} \cdot k$ times, and selects the shortest TSP cycle among these $n_{fold} \cdot k$ cycles. In the second step, it merges the K−6 shortest cycles generated in the first step into a consensus-TSP cycle (Supplementary Methods, Supplementary Fig. 4).

Time-series scoring metrics

The goal is to develop a quantitative measure of the accuracy of a computed TSP cycle $C_k$ using known cell cycle stage labels. Our idea is to compute the PCC between $C_k$ and the experimentally determined cell cycle labels. Let an n-dimensional vector $\tilde{l} = (\tilde{l}_1, \ldots, \tilde{l}_n)$ denote the experimentally determined cell cycle labels for the given n single cells, where $\tilde{l}_i \in \{1, 2, 3\}$ with 1, 2, and 3 indicating the G0/G1, S, and G2/M cell cycle stages, respectively. If cells are labeled by other stages, e.g., G0 or M, the label numbers can be adjusted. Then we transform the generated traveling salesman cycle $C_k$ into an n-dimensional vector l as follows. Assume that $C_k$ consists of a circle of k clusters, $c_1 - c_2 - \cdots - c_k - c_1$. Without loss of generality, we cut the edge $c_k - c_1$ to open the cycle and form a linear path, $c_1 - c_2 - \cdots - c_k$, which represents a pseudo time-series with $c_1$ and $c_k$ as the start and the end of a cell cycle, respectively. We assign a sequential index j to every cell in the j-th cluster: $l_i = j$ if the i-th single cell belongs to the j-th cluster along the time-series. Thus we obtain a vector $l = (l_1, \ldots, l_n)$ where $l_i \in \{1, 2, \ldots, k\}$. We then calculate the PCC between $\tilde{l}$ and l to measure how well the linear path $c_1 - c_2 - \cdots - c_k$ fits the experimental data. Since $C_k$ has k edges, it can be cut into k different linear paths: $c_1 - c_2 - \cdots - c_k$, $c_2 - c_3 - \cdots - c_k - c_1$, …, and $c_k - c_1 - \cdots - c_{k-1}$, and their k reverse paths: $c_k - c_{k-1} - \cdots - c_1$, $c_1 - c_k - c_{k-1} - \cdots - c_2$, …, and $c_{k-1} - c_{k-2} - \cdots - c_1 - c_k$.
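Before these 2k candidate orderings are scored, the cycle $C_k$ itself has to be computed. The sketch below illustrates the arbitrary (random) insertion heuristic32 on a cluster-distance matrix, together with the multi-restart selection of the shortest cycle; the consensus merging across different values of k is described only in the Supplementary Methods, so treat this as a generic illustration rather than reCAT's implementation.

```python
# A minimal sketch of the arbitrary-insertion TSP heuristic on a symmetric
# distance matrix D (k x k), plus a multi-restart wrapper that mirrors the
# "run n_fold * k times and keep the shortest cycle" step described above.
import numpy as np

def arbitrary_insertion(D, rng):
    k = D.shape[0]
    remaining = list(rng.permutation(k))
    tour = [remaining.pop(), remaining.pop()]       # start from two random nodes
    while remaining:
        c = remaining.pop()                         # next node, in random order
        best_pos, best_cost = 1, np.inf
        for pos in range(len(tour)):                # try every edge of the tour
            a, b = tour[pos], tour[(pos + 1) % len(tour)]
            cost = D[a, c] + D[c, b] - D[a, b]      # increase in cycle length
            if cost < best_cost:
                best_pos, best_cost = pos + 1, cost
        tour.insert(best_pos, c)                    # cheapest insertion point
    return tour                                     # cluster order along the cycle

def shortest_cycle(D, n_restarts):
    def length(t):
        return sum(D[t[i], t[(i + 1) % len(t)]] for i in range(len(t)))
    rng = np.random.default_rng(0)
    tours = [arbitrary_insertion(D, rng) for _ in range(n_restarts)]
    return min(tours, key=length)
```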
For each of these 2k paths, we can compute a PCC score and select the maximum PCC score ρ to represent the correlation-score between the traveling salesman cycle $C_k$ and the experimentally determined cell cycle labels $\tilde{l}$.

The second metric is called the “change-index”, which measures how frequently the experimentally determined single-cell labels change along the time-series. Ideally, a perfect time-series would change labels twice, G1 to S and S to G2/M. Thus, we define the change-index as $1 - (s_c - 2)/(N - 3)$, where $s_c$ denotes the total number of label changes between adjacent cells. A perfect time-series would have a change-index value of 1, while the worst time-series, where $s_c = N - 1$, would have a value of 0.

Bayes-scores and mean-scores to assess cell cycle phases

Given a traveling salesman cycle $C_k$ computed from single-cell data, we want to determine where the cell cycle stages are located. We designed two methods for this purpose: a supervised Naive Bayes model to compute the probability that a cluster belongs to each of three cell cycle stages, namely ‘G1’, ‘S’, and ‘G2/M’ (Bayes-scores), and an unsupervised method to compute the mean expression of a selected subset of cell cycle genes for each of six cell cycle stages, namely ‘G1’, ‘G1/S’, ‘S’, ‘G2’, ‘G2/M’ and ‘M’ (mean-scores) (Supplementary Methods). Thus the Bayes-scores consist of three dimensions and the mean-scores consist of six dimensions. We used the cell cycle-labeled mESC-SMARTer data to train the Bayes-scores. Following the literature33, we selected a set of informative gene pairs specific to each of the three cell cycle stages; the gene pairs selected for each stage were then unified into a set of $N_p$ pairs (Supplementary Methods). Without loss of generality, we focus on the G1 stage and convert the expression of each cluster (or single cell) into a binary vector as follows. For the i-th of the $N_p$ pairs, i.e., gene a and gene b, we assign a value −1 if their expression levels satisfy $e_a < e_b$, and 1 otherwise. Let the probability $p_i$ be the fraction of G1-stage clusters with value 1 for the i-th gene pair, and let the probability $1 - p_i$ be that with value −1. The Naive Bayes model can be expressed as follows. Let $x = (x_1, \ldots, x_{N_p})$ be the binary vector computed from the gene pairs for an unlabeled cluster. The posterior probability that x belongs to G1 can be expressed as

$$P(G1 \mid x) \propto P(x \mid G1)P(G1) = P(G1)\prod_{i=1}^{N_p} P(x_i \mid G1) = P(G1)\prod_{i=1}^{N_p} p_i^{x_i}(1 - p_i)^{1 - x_i}$$ (2)

Thus the Bayes-scores are log10(P(x|G1)P(G1)), log10(P(x|S)P(S)), and log10(P(x|G2M)P(G2M)), respectively, with the priors P(G1) = P(S) = P(G2M). We also tested Lasso-logistic regression (Supplementary Note 2, Supplementary Methods), but the Naive Bayes model had better performance. To determine the mean-scores of a cluster, which are based on the mean of log2(TPM + 1) of cell cycle genes, we compute the mean expression of a selected subset of marker genes for each cell cycle stage. We selected six gene sets with recorded ‘Peaktime’ of ‘G1’, ‘G1/S’, ‘S’, ‘G2’, ‘G2/M’, and ‘M’ from the Cyclebase genes (378) and then computed the corresponding scores for each cluster (single cell).

HMM for segmentation

Given a traveling salesman cycle of K clusters, we applied an HMM (Supplementary Fig. 12) to determine cell cycle stages. Let H = {G0, G1, S, G2/M} denote the set of hidden states (cell cycle stages) and $A = (a_{ij})_{N \times N}$ be the matrix of transition probabilities between the stages, where N = 4 denotes the number of stages.
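Before continuing with the HMM, the Bayes-score of Eq. (2) can be sketched in a few lines. Recoding the −1/1 comparisons as 0/1 so that the Bernoulli product applies directly, and assuming a uniform prior over the three stages, are illustrative choices made here; the actual gene-pair selection and training details are in the Supplementary Methods.

```python
# A minimal sketch of the Bayes-score of Eq. (2) for one stage (here G1).
# The 0/1 recoding of the gene-pair comparisons and the uniform prior are
# assumptions of this sketch, not reCAT's exact implementation.
import numpy as np

def train_pair_probs(train_expr, g1_mask, pairs):
    """train_expr: cells x genes matrix; g1_mask: boolean vector marking G1
    cells; pairs: list of (a, b) gene-index pairs. Returns p_i per pair."""
    left = train_expr[g1_mask][:, [a for a, b in pairs]]
    right = train_expr[g1_mask][:, [b for a, b in pairs]]
    x = (left >= right).astype(float)                # 1 if e_a >= e_b, else 0
    return x.mean(axis=0).clip(1e-3, 1 - 1e-3)       # p_i, clipped to avoid log(0)

def bayes_score_g1(cluster_expr, pairs, p):
    """log10 of P(x | G1) P(G1) for one cluster's mean expression vector."""
    x = np.array([float(cluster_expr[a] >= cluster_expr[b]) for a, b in pairs])
    loglik = np.sum(x * np.log10(p) + (1 - x) * np.log10(1 - p))
    return loglik + np.log10(1 / 3)                  # uniform prior over G1/S/G2M
```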
If no obvious sign indicates the existence of G0 cells, we only consider G1, S and G2/M. Thus, a state transition exists only when it is from a cell cycle stage to itself or to a physiologically subsequent stage. Along the generated time-series, we characterize a cell $i \in \{1, 2, \ldots, n\}$ using a nine-dimensional scoring vector $o_i = (o_{i1}, o_{i2}, \ldots, o_{i9})$, which includes the three Bayes-scores and the six mean-scores describing the membership of a cell to a specific cell cycle stage. Therefore, when a cell is at a stage $h \in H$, it emits a nine-dimensional scoring vector described by a multivariate Gaussian distribution $N(\mu_h, \Sigma_h)$. Provided with this formulation, we first estimate the parameters $\Theta = (A, \mu_h, \Sigma_h)$ from the observed scores of cells $O = (o_1, o_2, \ldots, o_n)$ along the time-series using the Baum–Welch (BW) algorithm. To determine the cell cycle starting point, we tried each cell in the cycle as a starting point and selected the one that has the highest likelihood for the observations. In the implementation of the BW algorithm47, we adopted a logarithm transformation of small intermediate probabilities to avoid underflow. We then implement the Viterbi algorithm to obtain the most likely assignment of the cells, thereby partitioning the time-series into cell cycle stages (Supplementary Methods).

Kalman smoother and correlation detection

As scRNA-seq expression noise obeys a negative binomial distribution48, it can be regarded as normally distributed after logarithm transformation. Hence, the time-series expression of single cells can be modeled as a random walk plus noise (RWP) model, which is one of the simplest dynamic linear models. Each cell i has a time-series index $t_i \in \{1, 2, \ldots, n\}$; hence, the cells can be arranged as (1, 2, …, T) with n = T here. For a selected gene, cells have the observed expression $e_t$ (t = 1, 2, …, T) and the real expression $z_t$ (t = 1, 2, …, T) along the cell cycle time-series. Hence, the RWP model can be expressed as:

$$e_t = z_t + v, \quad v \sim N(0, \sigma_e)$$
$$z_t = z_{t-1} + w, \quad w \sim N(0, \sigma_z)$$ (3)

In other words, two adjacent cells have a first-order Markov correlation along the time-series, and the observed expression is generated by adding normally distributed noise of zero mean to the real expression. In practice, we use the Kalman smoother equations, or the Rauch–Tung–Striebel equations (Rauch et al. 1965), to estimate the real expression $\hat{z}_t$. With the noise filtered out, we are able to determine whether the expression of a gene exhibits a time-series pattern along the cell cycle by correlating the estimated expression values $\hat{z}_t$ with the time-series index t. Apparently, neither Pearson’s nor Spearman’s correlation coefficients can work here, owing to the non-monotonic property of expression along a time series. Therefore, we adopted three statistical methods (dCor34, KNN-MI35, MIC49) capable of detecting nonlinear relationships between two variables.

Code availability

The open source implementation of reCAT in R is available on GitHub: https://github.com/tinglab/reCAT.

Data availability

No new data were generated in this study. All the data sets used can be found through the accession numbers provided in the original publications cited in Table 1.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. 1. Zhao, Y. et al. Dysregulation of cardiogenesis, cardiac conduction, and cell cycle in mice lacking miRNA-1-2. Cell 129, 303–317 (2007). 2. 2. Spellman, P. T. et al.
Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell 9, 3273–3297 (1998). 3. 3. Ly, T. et al. A proteomic chronology of gene expression through the cell cycle in human myeloid leukemia cells. Elife 3, e01630 (2014). 4. 4. Wu, A. R. et al. Quantitative assessment of single-cell RNA-sequencing methods. Nat. Methods 11, 41–46 (2014). 5. 5. Tang, F. et al. mRNA-Seq whole-transcriptome analysis of a single cell. Nat. Methods 6, 377–382 (2009). 6. 6. Kolodziejczyk, A. A., Kim, J. K., Svensson, V., Marioni, J. C. & Teichmann, S. A. The technology and biology of single-cell RNA sequencing. Mol. Cell 58, 610–620 (2015). 7. 7. Shalek, A. K. et al. Single-cell RNA-seq reveals dynamic paracrine control of cellular variation. Nature 510, 363–369 (2014). 8. 8. Satija, R., Farrell, J. A., Gennert, D., Schier, A. F. & Regev, A. Spatial reconstruction of single-cell gene expression data. Nat. Biotechnol. 33, 495–502 (2015). 9. 9. Cadwell, C. R. et al. Electrophysiological, transcriptomic and morphologic profiling of single neurons using Patch-seq. Nat. Biotechnol. 34, 199–203 (2016). 10. 10. Stegle, O., Teichmann, S. A. & Marioni, J. C. Computational and analytical challenges in single-cell transcriptomics. Nat. Rev. Genet. 16, 133–145 (2015). 11. 11. Leng, N. et al. Oscope identifies oscillatory genes in unsynchronized single-cell RNA-seq experiments. Nat. Methods 12, 947–950 (2015). 12. 12. Scialdone, A. et al. Computational assignment of cell-cycle stage from single-cell transcriptome data. Methods 85, 54–61 (2015). 13. 13. Kowalczyk, M. S. et al. Single-cell RNA-seq reveals changes in cell cycle and differentiation programs upon aging of hematopoietic stem cells. Genome Res. 25, 1860–1872 (2015). 14. 14. Marco, E. et al. Bifurcation analysis of single-cell gene expression data reveals epigenetic landscape. Proc. Natl Acad. Sci. USA 111, E5643–E5650 (2014). 15. 15. Trapnell, C. et al. The dynamics and regulators of cell fate decisions are revealed by pseudotemporal ordering of single cells. Nat. Biotechnol. 32, 381–386 (2014). 16. 16. Ji, Z. & Ji, H. TSCAN: pseudo-time reconstruction and evaluation in single-cell RNA-seq analysis. Nucleic Acids Res. 44, e117 (2016). 17. 17. Bendall, S. C. et al. Single-cell trajectory detection uncovers progression and regulatory coordination in human B cell development. Cell 157, 714–725 (2014). 18. 18. Setty, M. et al. Wishbone identifies bifurcating developmental trajectories from single-cell data. Nat. Biotechnol. 34, 637–645 (2016). 19. 19. Haghverdi, L., Buttner, M., Wolf, F. A., Buettner, F. & Theis, F. J. Diffusion pseudotime robustly reconstructs lineage branching. Nat. Methods 13, 845–848 (2016). 20. 20. Kafri, R. et al. Dynamics extracted from fixed cells reveal feedback linking cell growth to cell cycle. Nature 494, 480–483 (2013). 21. 21. Gut, G., Tadmor, M. D., Pe’er, D., Pelkmans, L. & Liberali, P. Trajectories of cell-cycle progression from fixed cell populations. Nat. Methods 12, 951–954 (2015). 22. 22. Buettner, F. et al. Computational analysis of cell-to-cell heterogeneity in single-cell RNA-sequencing data reveals hidden subpopulations of cells. Nat. Biotechnol. 33, 155–160 (2015). 23. 23. Sasagawa, Y. et al. Quartz-Seq: a highly reproducible and sensitive single-cell RNA sequencing method, reveals non-genetic gene-expression heterogeneity. Genome Biol. 14, R31 (2013). 24. 24. McDavid, A. et al. 
Modeling Bi-modality improves characterization of cell cycle on gene expression in single cells. PLoS Comput. Biol. 10, e1003696 (2014). 25. 25. Kolodziejczyk, A. A. et al. Single Cell RNA-sequencing of Pluripotent States unlocks modular transcriptional variation. Cell Stem Cell 17, 471–485 (2015). 26. 26. Treutlein, B. et al. Reconstructing lineage hierarchies of the distal lung epithelium using single-cell RNA-seq. Nature 509, 371–378 (2014). 27. 27. Tirosh, I. et al. Dissecting the multicellular ecosystem of metastatic melanoma by single-cell RNA-seq. Science 352, 189–196 (2016). 28. 28. Angermueller, C. et al. Parallel single-cell sequencing links transcriptional and epigenetic heterogeneity. Nat. Methods 13, 229–232 (2016). 29. 29. Whitfield, M. L. et al. Identification of genes periodically expressed in the human cell cycle and their expression in tumors. Mol. Biol. Cell 13, 1977–2000 (2002). 30. 30. Sakaue-Sawano, A. et al. Visualizing spatiotemporal dynamics of multicellular cell-cycle progression. Cell 132, 487–498 (2008). 31. 31. Santos, A., Wernersson, R. & Jensen, L. J. Cyclebase 3.0: a multi-organism database on cell-cycle regulation and phenotypes. Nucleic Acids Res. 43, D1140–D1144 (2015). 32. 32. Rosenkrantz, D. J., Stearns, R. E., Philip, M. & Lewis, I. An analysis of several heuristics for the traveling salesman problem. SIAM J. Comput. 6, 563–581 (1977). 33. 33. Tan, A. C., Naiman, D. Q., Xu, L., Winslow, R. L. & Geman, D. Simple decision rules for classifying human cancers from gene expression profiles. Bioinformatics 21, 3896–3904 (2005). 34. 34. Kosorok, M. R. On Brownian distance covariance and high dimensional data. Ann. Appl. Stat. 3, 1266–1269 (2009). 35. 35. Kraskov, A., Stögbauer, H. & Grassberger, P. Estimating mutual information. Physical Review E 69, 066138 (2004). 36. 36. Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene ontology consortium. Nat. Genet. 25, 25–29 (2000). 37. 37. Granovskaia, M. V. et al. High-resolution transcription atlas of the mitotic cell cycle in budding yeast. Genome Biol. 11, R24 (2010). 38. 38. Sharova, L. V. et al. Database for mRNA half-life of 19,977 genes obtained by DNA microarray analysis of pluripotent and differentiating mouse embryonic stem cells. DNA Res. 16, 45–58 (2009). 39. 39. Brown, S. E., Fraga, M. F., Weaver, I. C., Berdasco, M. & Szyf, M. Variations in DNA methylation patterns during the cell cycle of HeLa cells. Epigenetics 2, 54–65 (2007). 40. 40. Vandiver, A. R., Idrizi, A., Rizzardi, L., Feinberg, A. P. & Hansen, K. D. DNA methylation is stable during replication and cell cycle arrest. Sci. Rep. 5, 17911 (2015). 41. 41. Macosko, E. Z. et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell 161, 1202–1214 (2015). 42. 42. Nagano, T. et al. Single-cell Hi-C reveals cell-to-cell variability in chromosome structure. Nature 502, 59–64 (2013). 43. 43. Buenrostro, J. D. et al. Single-cell chromatin accessibility reveals principles of regulatory variation. Nature 523, 486–490 (2015). 44. 44. Macaulay, I. C. et al. G&T-seq: parallel sequencing of single-cell genomes and transcriptomes. Nat. Methods 12, 519–522 (2015). 45. 45. Zhang, R., Lahens, N. F., Ballance, H. I., Hughes, M. E. & Hogenesch, J. B. A circadian gene expression atlas in mammals: implications for biology and medicine. Proc. Natl Acad. Sci. USA 111, 16219–16224 (2014). 46. 46. Anders, S. & Huber, W. Differential expression analysis for sequence count data. Genome Biol. 
11, R106 (2010). 47. 47. Mann, T. P. Numerically stable hidden Markov model implementation. http://bozeman.genome.washington.edu/compbio/mbt599_2006/hmm_scaling_revised.pdf. (2006). 48. 48. Grun, D., Kester, L. & van Oudenaarden, A. Validation of noise models for single-cell transcriptomics. Nat. Methods 11, 637–640 (2014). 49. 49. Reshef, D. N. et al. Detecting novel associations in large data sets. Science 334, 1518–1524 (2011). Acknowledgements We thank Xuegong Zhang, Peter Kharchenko, Grace Xiao, Lin Wan and Jianyang Zeng for constructive criticism. We are grateful to Xiangyu Li, Kui Hua, Jun Li, Weilong Guo and Zhiyi Qin for fruitful discussion. We also thank Siqi Qu, Qiongye Dong, Aleksandra A. Kolodziejczyk and Florian Buettner for their technical support. This work was supported by the National Science Foundation of China [61673241, 61561146396], National Basic Research Program of China [2012CB316504, 2012CB316503]; Hi-tech Research and Development Program of China [2012AA020401]; NSFC [61305066, 91010016, 91519326, 31361163004]; NIH/NHGRI [5U01HG006531-03; 4R01HG006465] and the Joint NSFC-ISF Research Program, jointly funded by the National Natural Science Foundation of China and the Israel Science Foundation. Author information Affiliations 1. MOE Key Laboratory of Bioinformatics, Bioinformatics Division and Center for Synthetic & Systems Biology, TNLIST, Department of Automation, Tsinghua University, Beijing, 100084, China • Zehua Liu • , Michael Q. Zhang •  & Rui Jiang 2. MOE Key Laboratory of Bioinformatics, Bioinformatics Division and Center for Synthetic & Systems Biology, TNLIST, Department of Computer Sciences, State Key Lab of Intelligent Technology and Systems, Tsinghua University, Beijing, 100084, China • Huazhe Lou • , Kaikun Xie • , Hao Wang • , Ning Chen •  & Ting Chen 3. Program in Computational Biology and Bioinformatics, University of Southern California, Los Angeles, CA, 90089, USA • Oscar M. Aparicio •  & Ting Chen 4. Department of Molecular and Cell Biology, Center for Systems Biology, University of Texas at Dallas, 800 West Campbell Road, RL11, Richardson, TX, 75080-3021, USA • Michael Q. Zhang Contributions Z.L. conceived the main strategies and developed the method. Z.L., T.C. and M.Q.Z. designed the study. Z.L., H.L., K.X. and H.W. performed the analysis. Z.L., T.C., R.J., K.X., N.C. and O.M.A. wrote the manuscript. Competing interests The authors declare no competing financial interests. Corresponding authors Correspondence to Rui Jiang or Ting Chen.
# Rice Blast Disease Recognition Using a Deep Convolutional Neural Network

## Abstract

Rice disease recognition is crucial in automated rice disease diagnosis systems. At present, the deep convolutional neural network (CNN) is generally considered the state-of-the-art solution in image recognition. In this paper, we propose a novel rice blast recognition method based on CNN. A dataset of 2906 positive samples and 2902 negative samples is established for training and testing the CNN model. In addition, we conduct comparative experiments for qualitative and quantitative analyses in our evaluation of the effectiveness of the proposed method. The evaluation results show that the high-level features extracted by CNN are more discriminative and effective than traditional hand-crafted features including local binary pattern histograms (LBPH) and Haar-WT (wavelet transform). Moreover, quantitative evaluation results indicate that CNN with Softmax and CNN with a support vector machine (SVM) have similar performance, with higher accuracy, larger area under the curve (AUC), and better receiver operating characteristic (ROC) curves than both LBPH plus an SVM as the classifier and Haar-WT plus an SVM as the classifier. Therefore, our CNN model is a top-performing method for rice blast disease recognition and can potentially be employed in practical applications.

## Introduction

Rice as a food source provides protein and energy to more than half of the world’s population1. Moreover, rice consumption and demand are increasing with the growth of the population. To meet the increased food demand, rice production must be increased by more than 40% by 20302. Unfortunately, rice diseases have caused a great deal of loss in yield, and rice blast disease is considered one of the main culprits3, reducing yield by between 60% and 100%4. Currently, the use of pesticides and deployment of blast-resistant cultivars are the main methods of combating the disease5. However, excessive use of pesticides not only increases the cost of rice production but also causes considerable environmental damage6. Moreover, in practice, diagnosis of rice blast is often conducted manually, which is subjective and time-consuming even for well-experienced experts. In modern agricultural practices, it is very important to manage pests and diseases using highly efficient methods with minimum damage to the environment7. In recent decades, combined with crop images, computer-aided diagnostic methods have become dominant for monitoring crop diseases and pests8,9,10. An automated rice disease diagnostic system could provide information for the prevention and control of rice disease, set aside time for disease control, minimize the economic loss, reduce pesticide residues, and improve the quality and quantity of agricultural products. In order to achieve such a system, research into effective algorithms for feature extraction and classification of rice disease is critical.

Currently, there exists no public dataset for rice blast disease classification. To fill this void, we establish in this work a rice blast disease dataset and use it for training and testing a disease classification model based on a convolutional neural network (CNN). The rice blast disease images are obtained from the Institute of Plant Protection, Jiangsu Academy of Agricultural Sciences, Nanjing, China. These images are captured in a naturally-lit environment while plant protection experts conduct field investigation.
As a result, the trained CNN model on the dataset can be expected to have direct applicability. At the same time, the dataset is useful for other people who are interested in rice or even crop disease classification research.

In recent years, due to its ability to extract good features, CNN has been employed extensively in machine learning and pattern recognition research11,12,13,14,15,16,17. Hinton et al.18 stated that a multi-layer neural network has excellent learning ability, and that the learned features can abstract and express raw data conveniently for classification. CNN provides an end-to-end learning solution that avoids image pre-processing and extracts relevant high-level features directly from raw images. The CNN architecture was inspired by the visual cortex of cats in Hubel’s and Wiesel’s early work19. In particular, Krizhevsky20 performed object classification and won first place in the ImageNet Large Scale Visual Recognition Challenge 2012 using a deep CNN. This was followed by the emergence of many improved CNN algorithms and applications21,22,23. Since this work20, similar CNN architectures have been successfully developed to solve a variety of image classification tasks. With full consideration of CNN’s excellent performance, we propose a method that uses CNN for rice blast image feature extraction and disease classification, and we are able to obtain remarkable performance by fine-tuning the structure and the parameters of a CNN model. We conduct comparative experiments for rice blast disease recognition with two traditional feature extraction methods, LBPH and Haar-WT. In addition, we combine an SVM classifier with the deep features extracted from the CNN to further investigate and verify the effectiveness of the deep features of CNN.

The major contributions of this paper are summarized as follows. First, we introduce a rice blast disease dataset with the assistance of plant protection experts. The dataset is used to train and verify our model. The dataset is useful for other researchers who are interested in rice or even crop disease recognition. The dataset is available at http://www.51agritech.com/zdataset.data.zip. Second, we propose an effective rice blast feature extraction and classification method using CNN. The evaluation results show that the high-level features extracted by the CNN are more discriminative than LBPH and Haar-WT, with classification accuracies above 95%.

The remainder of this paper is organized as follows. Section 2 describes the dataset and the feature extraction and rice blast disease classification methods. Section 3 describes the evaluation criteria of the feature extraction and recognition methods. The experiments and results are also provided and discussed in this section. Finally, the conclusions and future work are given in Section 4.

## Rice Blast Disease Dataset and Proposed Classification Method

### Dataset

Rice images with rice blast disease are obtained from the Institute of Plant Protection, Jiangsu Academy of Agricultural Sciences, Nanjing, China. The Institute mainly conducts research on the mechanisms and technologies for controlling the diseases and insect pests of such crops as rice, wheat, cotton, rape, fruit and vegetables in Jiangsu Province and across China. To avoid duplicates and ensure label quality, each image in our dataset is examined and confirmed by plant protection experts. There is no special requirement for rice blast disease images and their pixels, and no special preprocessing is done.
All the rice blast images are patches of 128 × 128 pixels in size, extracted from original larger images with a moving window of a stride of 96 pixels. Then, the patches containing rice blast lesions are identified by domain experts and used as positive samples, and patches without lesions are used as negative samples. The final dataset includes 5808 image patches of which 2906 are positive and 2902 negative. Some positive and negative samples are shown in Fig. 1. In addition to scale, rotation, illumination and partial viewpoint changes, the dataset also has the following characteristics. First, the background of rice canopy texture, water body, and soil can cause great difficulty to recognition, as do dead leaves and other plant lesion. Second, rice blast lesion shape and location are not predictable. Overall, the combination of above factors poses significant challenges for rice blast disease recognition. ### Feature extraction from rice blast images Feature extraction is a key step in object recognition. It requires the features to be sufficiently discriminating to be able to separate the different object classes while retaining invariant characteristics within the same class. Feature extraction is also a dimension reduction process for efficient pattern recognition and machine learning in image analysis. In this work, CNN, Harr-wavelet and LBPH feature extraction methods are employed and compared to process rice blast images. #### The CNN model CNN24 is a multi-layer neural network with a supervised learning architecture that is often made up of two parts: a feature extractor and a trainable classifier. The feature extractor contains feature map layers and retrieves discriminating features from the raw images via two operations: convolutional filtering and down sampling25. Convolutional filtering as the key operation of CNN has two vital properties: local receptive field and shared weights. Convolutional filtering can be seen as a local feature extractor used to identify the relationships between pixels of a raw image so that the effective and appropriate high-level features can be extracted to enhance the generalization ability of a CNN model26. Furthermore, down sampling and weight sharing can greatly reduce the number of trainable parameters and improve the efficiency of training. The classifier and the weights learned in the feature extractor are trained by a back-propagation algorithm. A convolutional layer computes feature maps by applying convolution kernels to input data followed by an activation function as follows27: $${y}_{j}^{l}=f({z}_{j}^{l})$$ (1) $${z}_{j}^{l}=\sum _{i\in {M}_{j}}\,{x}_{i}^{l-1}\ast {k}_{ij}^{l}+{b}_{j}^{l}$$ (2) where, $${y}_{j}^{{l}}$$ is the output feature maps at layer l; $$f(\,\cdot \,)$$ is the activation function (commonly used functions include sigmoid, tanh, and ReLU, etc., of which ReLU was chosen); $${z}_{j}^{i}$$ is the activation of the j channel at layer l; $${x}_{i}^{l-1}$$ is the feature maps of the l − 1 layer; Mj is the subset of input feature maps; $${k}_{ij}^{l}$$ is convolution kernel matrix at layer l; * is the convolution operation; and $${b}_{j}^{l}$$ is the offset. For a more detailed explanation of convolutional neural networks, we refer the reader to LeCun et al.24 and Krizhevsky et al.20. In this study, two network structures similar to Lenet5 (LeCun et al.)24 are established. As shown in Fig. 
2, the first network contains four convolutional layers, four max-pooling layers, and three fully connected layers, and a ReLU is added after each layer (Fig. 2(a)). The second network has the same convolutional layer and max-pooling layer structure as the first network, but has two fully connected layers (Fig. 2(b)). To avoid over-fitting, one spatial dropout layer is added after the C5 layer for both models, and another dropout layer is added after the F10 layer for the first model and after the F9 layer for the second model, respectively. The related parameters of the CNNs are shown in Fig. 2. The models are implemented using Torch7, which is a scientific computing framework. The main steps of the second model are shown in Fig. 3. Stochastic gradient descent (SGD) is employed for training, and the number of training epochs is 150. Other training parameters are as shown in Fig. 3.

Comparative experiments are conducted for the two CNN models, and classification accuracies are computed. To reduce possible biases in the selection of the validation set, 5-fold cross-validation is employed. In 5-fold cross-validation, the original sample is randomly partitioned into five equal-size subsamples. Of the five subsamples, a single subsample is retained as the validation set, and the other four subsamples are used for training. The cross-validation process is then repeated five times, and the results are averaged. As shown in Table 1, there is no obvious performance improvement of the first CNN model with more connected layers. In order to ensure that there is no over-fitting, learning curves are generated. Here, 10% of the original samples are reserved as a test set, and 500 samples are randomly selected from the remaining dataset as training samples at the starting point. Increasing the training set by 500 samples incrementally, we repeat the training process ten times at each step. The classification accuracies of the training set and validation set are averaged, and the learning curves are obtained (Fig. 4). It can be seen that the two models have low bias and variance, good convergence, and high accuracy, and that there is no over-fitting. However, the stability of the first model is poor with small samples. Therefore, the second CNN model is chosen in the remainder of this study.

#### Haar-WT

Haar-WT is chosen as a competing hand-crafted feature in our evaluation. Haar-WT is an extension of the wavelet transform that simplifies computation, and it is commonly used in image feature extraction. Haar-WT is a multi-resolution approach for image texture analysis28 that employs two important functions of the WT: the low pass filter and the high pass filter29. At each level, a 2D image is processed through low pass and high pass filters, separately. The result includes four sub-level images, which are one sub level of approximation of the original image (LL) and three sub levels of detail in the horizontal, vertical and diagonal directions, respectively (LH, HL and HH). This process is called one-level decomposition. With repeated decomposition of the approximation sub level, more sub-level decompositions of an image can be obtained. The low pass filtering and high pass filtering of Haar-WT are computed as follows30:

$${A}_{i}=\frac{{x}_{i}+{x}_{i+1}}{2},i\in [1,N]$$ (3)

$$w{c}_{i}=\frac{{x}_{i}-{x}_{i+1}}{2},i\in [1,N]$$ (4)

where xi and xi+1 are two adjacent elements, Ai is the low-pass filtering, wci is the high-pass filtering, and N is the number of elements along the rows and columns of the input 2D data.
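To make Eqs (3) and (4) concrete, the sketch below performs one level of Haar decomposition on a 2D image channel by pairwise averaging and differencing along each axis. The non-overlapping pairing of adjacent elements is an assumption of this sketch, since the paper does not spell out its exact implementation.

```python
# A minimal sketch of one level of Haar-WT decomposition following Eqs (3)-(4),
# applied along the horizontal direction and then the vertical direction.
import numpy as np

def haar_1d(x):
    """Average (low pass, Eq 3) and difference (high pass, Eq 4) of adjacent
    element pairs along the last axis."""
    a = (x[..., 0::2] + x[..., 1::2]) / 2.0
    wc = (x[..., 0::2] - x[..., 1::2]) / 2.0
    return a, wc

def haar_2d_level(img):
    """One decomposition level: returns the LL, LH, HL and HH sub-images."""
    swap = lambda m: np.swapaxes(m, -1, -2)
    lo, hi = haar_1d(img)                # pair adjacent columns within each row
    ll, lh = haar_1d(swap(lo))           # then pair adjacent rows of each band
    hl, hh = haar_1d(swap(hi))
    return swap(ll), swap(lh), swap(hl), swap(hh)

# Repeating haar_2d_level on the LL band yields the 3rd-5th level
# approximations that are flattened into feature vectors in this study.
channel = np.random.rand(128, 128)       # stand-in for one RGB channel of a patch
ll, lh, hl, hh = haar_2d_level(channel)
```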
In this study, we perform Haar-WT decomposition of the rice blast image in the RGB color space. The decomposition is done up to level 5, and approximation sub levels are integrated as a single feature vector on each level. The feature vectors of 3rd, 4th and 5th level of decomposition are obtained. Using SVM as the classifier, comparative experiments are conducted, and classification accuracy is computed via 5-fold cross validation (Table 2). It could be seen that the 4th level obtains higher classification accuracy than any of the other levels. Therefore, the fourth level is chosen in our study as the Haar-WT feature, and compared with CNN. #### LBPH Local Binary Pattern Histograms (LBPH) is chosen as the second competing hand-crafted feature in our study. The LBP is a simple and efficient operator, which has been used for texture discrimination and image feature extraction and has shown to be robust with respect to the variations in rotation and illumination31,32. The operator labels the pixels by thresholding the 3 × 3 neighbourhood of each pixel with the center value to produce a binary patch. LBPH uses the histogram of the labels as a texture descriptor of the patch. Later the operator is extended to a circular neighborhood of different sizes, named as circular LBP33. Another extension of the original operator is called uniform pattern34,35. In our study, we first obtain the circular LBP of all images from the dataset, and then compute the uniform LBP patterns. The LBP feature image is then divided into m × m local blocks36, and the histogram of each local block is extracted and integrated as a single feature vector. Using SVM as the classifier, comparative experiments are conducted, and classification accuracy is computed via 5-fold cross validation (Table 3). It can be seen that the 1 × 1 division obtained a higher classification accuracy than any of the others. Therefore, the undivided uniform LBPH patterns are chosen as the image feature, and compared with CNN. ### SVM The SVM is a powerful classifier that works well on a wide range of complex classification problems25. SVM with different kernel functions can transform a nonlinear separable problem into a linear separable problem by projecting data into a higher dimensional space to maximize the classification distance and achieve the desired classification. In this study, the radial basis function (RBF)37, a popular kernel function of SVM, is chosen as the kernel function. The LIBSVM38, as an efficient open source tool, is chosen to build SVMs in our experiments. Szarvas et al.39 have evaluated the automatically optimized features learned by CNN on pedestrian detection, and showed that the CNN + SVM combination can achieve a very high accuracy. Therefore, we employ SVM as classifier for two purposes: comparison of feature extraction methods and improvement of the performance of rice blast disease classification. ## Results and Discussion ### Evaluation metrics To evaluate the performance of the competing methods, several statistical parameters are used to be as the performance metrics. The selected quantitative measures are accuracy, ROC, and AUC, all of which are popular evaluation metrics for classification methods. The classification accuracy is the principal indicator; the higher the accuracy, the better the performance by a classifier. The accuracy can be computed by Eq (5). 
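Returning briefly to the LBPH + SVM baseline described in the preceding subsections before the remaining metrics are introduced, a minimal sketch of that pipeline is given below. The neighborhood parameters (P = 8, R = 1) and the parameter grid are illustrative assumptions; the study itself uses LIBSVM with a grid search over c (cost) and g (gamma).

```python
# A minimal sketch of the LBPH + SVM baseline: uniform circular LBP features
# with a single (1 x 1) histogram per patch, classified by an RBF-kernel SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def lbph_feature(gray_patch, P=8, R=1):
    lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
    # the uniform LBP operator yields P + 2 distinct codes; use their histogram
    hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
    return hist

def train_lbph_svm(gray_patches, labels):
    X = np.array([lbph_feature(p) for p in gray_patches])
    grid = {"C": [1, 8, 64], "gamma": [2**-7, 2**-5, 2**-3]}   # illustrative grid
    clf = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    clf.fit(X, labels)
    return clf
```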
ROC is another important objective evaluation metric in image classification tasks; it is defined by the true positive rate and the false positive rate, and the larger the area under the ROC curve, the better the classification performance. To analyze the reliability and the generalization ability of the feature extraction and classification methodology, the 5-fold cross-validation (CV) technique40 is applied. $${\rm{Accuracy}}=({\rm{TP}}+{\rm{TN}})/({\rm{TP}}+{\rm{TN}}+{\rm{FP}}+{\rm{FN}})$$ (5) where TP, FP, TN and FN are the numbers of true positives, false positives, true negatives, and false negatives in the detection results, respectively. To assess the performance of feature extraction, the t-distributed stochastic neighbor embedding (t-SNE)41 method is employed. The t-SNE method was proposed by van der Maaten and Hinton41 and has proven to be an effective qualitative indicator. In this study, we select a two-dimensional space as the mapping space for visualization; a more linearly separable two-dimensional map implies better feature extraction performance. ### Results and observations To investigate the performance of the three feature extraction methods, the t-SNE method is used to visualize the feature maps of CNN, LBPH, and Haar-WT on the same dataset. Figure 5(a–c) presents the maps of the S8-layer features of CNN, the LBPH features, and the Haar-WT features, respectively. The map of CNN in Fig. 5(a) clearly indicates that the samples are almost completely separated in the two-dimensional space. In contrast, it is difficult to separate the two classes using the LBPH and Haar-WT features, shown in Fig. 5(b,c). This result suggests that the features extracted using CNN are more discriminative than those extracted using LBPH and Haar-WT. To further explore the effect of the features extracted by CNN, we conduct comparative experiments and quantitative analysis in terms of accuracy, ROC, and AUC. For consistency, SVM is employed as the classifier, RBF is used as the kernel function, and the grid method is used to select the optimal c (cost) and g (gamma) parameters. To reduce possible biases in the selection of the validation set, all evaluation metrics are computed in 5-fold cross-validation experiments. First, the CNN is used to obtain high-level features from the raw images, and Softmax is used for classification and accuracy evaluation. In addition, we employ SVM for classification combined with the CNN features (generated from its S8 layer). After parameter optimization, the accuracy of the SVM-based classifier reaches 95.82% (c = 8.0, g = 0.0078125; Table 4). Using the SVM combined with LBPH and Haar-WT features, on the other hand, we obtain two sets of comparison results, shown in Fig. 6 and Table 4. To obtain accurate comparison results, we ensure that the same training and testing datasets are used for every method. As shown in Table 4, the CNN feature extraction combined with the SVM classifier achieves a remarkable performance in terms of recognition rate, far superior to LBPH and Haar-WT. The results of the quantitative analysis in terms of accuracy, ROC and AUC are in agreement with the qualitative analysis using t-SNE. Therefore, the results verify that the features extracted using CNN are effective in solving the rice blast classification problem. For the same CNN features, the SVM and Softmax obtain higher accuracy (SVM: 95.82%, Softmax: 95.83%) and AUC values (SVM: 0.99, Softmax: 0.99) than LBPH and Haar-WT (Fig. 6).
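For reference, the RBF-SVM training with grid-searched c and g and the metrics reported above can be sketched with scikit-learn, whose SVC class wraps LIBSVM; the file names, grid ranges, and decision threshold below are placeholders rather than the study's actual settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_predict
from sklearn.metrics import accuracy_score, roc_auc_score

# Placeholder feature matrix and labels standing in for the CNN S8-layer
# features and rice-blast labels (hypothetical file names).
X = np.load("s8_features.npy")
y = np.load("labels.npy")

# Grid search over cost (c) and gamma (g) for the RBF kernel with 5-fold CV,
# mirroring the parameter optimisation described in the text.
grid = GridSearchCV(SVC(kernel="rbf", probability=True),
                    param_grid={"C": 2.0 ** np.arange(-5, 16, 2),
                                "gamma": 2.0 ** np.arange(-15, 4, 2)},
                    cv=5, scoring="accuracy")
grid.fit(X, y)

# Cross-validated scores from the best model, then accuracy (Eq. 5) and AUC.
best = grid.best_estimator_
scores = cross_val_predict(best, X, y, cv=5, method="predict_proba")[:, 1]
preds = (scores >= 0.5).astype(int)
print("accuracy:", accuracy_score(y, preds), "AUC:", roc_auc_score(y, scores))
```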
Hence, the CNN and CNN + SVM show a remarkable performance and are better suited for rice blast identification than LBPH + SVM and Haar-WT + SVM. In comparison, the SVM classifier has accuracy and AUC values similar to those of Softmax. However, the CNN is a black-box model with random parameter initialization and, as a result, the output features of each trained model are different. The SVM is a data-driven classifier that needs to optimize its parameters for different feature data. Therefore, CNN + SVM is less convenient than CNN + Softmax in terms of efficiency and system implementation, although it can be considered a strong competitive method for rice blast recognition. To understand the reasons for the misclassifications, we analyze the misclassified samples, some of which are shown in Fig. 7. We can observe from Fig. 7 that the most notable mistakes in the images of Fig. 7(a) are caused by blur, water droplets, and small or incomplete lesions, while the main causes of misclassification in Fig. 7(b) are shadows, light spots, water droplets, and complex backgrounds. Finally, Fig. 8 shows an example classification result of the CNN model presented in this study on a complete original image. This example demonstrates that the CNN model can correctly and effectively recognize almost all of the rice blast lesions. ## Conclusions and Future Work In this study, we present a rice blast feature extraction and disease classification method based on deep convolutional neural networks (CNN). Because of the absence of an image dataset for this particular recognition task, as our first contribution, we established a rice blast disease dataset with the assistance of plant protection experts. The dataset can be combined with other rice disease images to build a content-rich dataset. Our hope is that this dataset will be useful for others who are interested in rice or even crop disease recognition research. In addition, we conduct comparative experiments based on the dataset and analyze the experimental results. Qualitative assessment by t-SNE indicates that the high-level features extracted by CNN are more discriminative and representative than LBPH and Haar-WT. Quantitative analysis indicates that CNN with Softmax and CNN + SVM have almost the same performance, which is better than that of LBPH + SVM and Haar-WT + SVM by a wide margin. The occurrence of rice disease is regular, and the type and the probability of rice disease vary with the stages of rice growth. Therefore, different rice disease identification systems can and should be established using the method presented in this study, and automated rice disease diagnosis can then be realized by combining identification models and domain knowledge of rice disease. Although our method for automatic identification of rice blast has achieved satisfactory results, substantial further work is needed to improve its accuracy and reliability in rice disease diagnosis systems. In particular, we plan to address the following two issues in future studies: (1) Expand the rice disease dataset and establish a comprehensive rice disease diagnosis tool. Data augmentation will be employed to build a good classifier when the number of samples is insufficient. (2) Study other deep neural network architectures and take full advantage of deep learning algorithms to improve classification accuracy and to enhance the reliability and robustness of rice disease diagnosis systems.
## Data Availability The rice blast disease dataset used for training and testing the CNN model is available from http://www.51agritech.com/zdataset.data.zip, and all data generated and/or analyzed during the current study are included in the manuscript. ## References 1. Khush, G. S. What it will take to Feed 5.0 Billion Rice consumers in 2030. Plant Molecular Biology. 59, 1–6 (2005). 2. Roy-Barman, S. & Chattoo, B. B. Rice blast fungus sequenced. Current Science 89, 930–931 (2005). 3. Abed-Ashtiani, F., Kadir, J. B., Selamat, A. B., Hanif, A. H. B. & Nasehi, A. Effect of Foliar and Root Application of Silicon Against Rice Blast Fungus in MR219 Rice Variety. Plant Pathology Journal. 28, 164–171 (2012). 4. Kihoro, J., Bosco, N. J., Murage, H., Ateka, E. & Makihara, D. Investigating the impact of rice blast disease on the livelihood of the local farmers in greater Mwea region of Kenya. Springerplus. 2 (2013). 5. Dadley-Moore, D. Fungal pathogenesis - Understanding rice blast disease. Nature Reviews Microbiology. 4, 323–323 (2006). 6. Wu, Y. et al. Characterization and evaluation of rice blast resistance of Chinese indica hybrid rice parental lines. The Crop Journal. 5, 509–517 (2017). 7. Abed-Ashtiani, F. et al. Plant tonic, a plant-derived bioactive natural product, exhibits antifungal activity against rice blast disease. Industrial Crops & Products. 112, 105–112 (2018). 8. Lu, Y., Yi, S., Zeng, N., Liu, Y. & Zhang, Y. Identification of rice diseases using deep convolutional neural networks. Neurocomputing. 267, 378–384 (2017). 9. Sengupta, S. & Das, A. K. Particle Swarm Optimization based incremental classifier design for rice disease prediction. Computers and Electronics in Agriculture. 140, 443–451 (2017). 10. Phadikar, S., Sil, J. & Das, A. K. Rice diseases classification using feature selection and rule generation techniques. Computers and Electronics in Agriculture. 90, 76–85 (2013). 11. Jiao, Z. C., Gao, X. B., Wang, Y. & Li, J. A deep feature based framework for breast masses classification. Neurocomputing 197, 221–231 (2016). 12. Ypsilantis, P. P. et al. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks. Plos One. 10 (2015). 13. Liu, Z. Y., Gao, J. F., Yang, G. G., Zhang, H. & He, Y. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network. Scientific Reports. 6 (2016). 14. Johnson, J., Karpathy, A. & Fei-Fei, L. DenseCap: Fully Convolutional Localization Networks for Dense Captioning. 2016 IEEE Conference on Computer Vision And Pattern Recognition (CVPR). 4565–4574 (2016). 15. Kang, M. J. & Kang, J. W. Intrusion Detection System Using Deep Neural Network for In-Vehicle Network Security. Plos One. 11 (2016). 16. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature. 521, 436–444 (2015). 17. Duan, M., Li, K., Yang, C. & Li, K. A hybrid deep learning CNN–ELM for age and gender classification. Neurocomputing. 275, 448–461 (2018). 18. Hinton, G. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science. 313, 504–507 (2006). 19. Hubel, D. H. & Wiesel, T. N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160, 106–154 (1962). 20. Krizhevsky, A., Sutskever, I. & Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Communications of the ACM. 60, 84–90 (2017). 21.
Brahimi, S., Ben Aoun, N. & Ben Amar, C. Very Deep Recurrent Convolutional Neural Network for Object Recognition. Ninth International Conference on Machine Vision (ICMV 2016). 10341 (2017). 22. Zeiler, M. D. & Fergus, R. Visualizing and Understanding Convolutional Networks. Computer Vision - ECCV 2014, Pt I. 8689, 818–833 (2014). 23. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Networks. 61, 85–117 (2015). 24. Lecun, Y., Bottou, L., Bengio, Y. & Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE. 86, 2278–2324 (1998). 25. Niu, X. X. & Suen, C. Y. A novel hybrid CNN–SVM classifier for recognizing handwritten digits. Pattern Recognition. 45, 1318–1325 (2012). 26. Liu, X. et al. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network. PLOS ONE. 12, e0168606 (2017). 27. Ding, W. & Taylor, G. Automatic moth detection from trap images for pest management. Computers and Electronics in Agriculture. 123, 17–28 (2016). 28. Jadid, M. A. & Rezaei, M. Facial Age Estimation Using Hybrid Haar Wavelet and Color Features with Support Vector Regression. 2017 Artificial Intelligence And Robotics (IRANOPEN). 6–12 (2017). 29. Lionnie, R. & Alaydrus, M. An Analysis of Haar Wavelet Transformation for Androgenic Hair Pattern Recognition. 2016 International Conference on Informatics And Computing (ICIC). 22–26 (2016). 30. Sarker, M. Content-based Image Retrieval Using Haar Wavelet Transform and Color Moment. The Smart Computing Review. 3 (2013). 31. Kamencay, P. et al. Accurate Wild Animal Recognition Using PCA, LDA and LBPH. 2016 ELEKTRO 11th International Conference. 62–67 (2016). 32. Ojala, T., Pietikäinen, M. & Harwood, D. A comparative study of texture measures with classification based on feature distributions. Pattern Recognition. 29(1), 51–59 (1996). 33. Benzaoui, A., Kheider, A. & Boukrouche, A. Ear Description and Recognition Using ELBP and Wavelets. 2015 International Conference on Applied Research In Computer Science And Engineering (ICAR) (2015). 34. Ojala, T., Pietikainen, M. & Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis And Machine Intelligence 24, 971–987 (2002). 35. Ahonen, T., Hadid, A. & Pietikainen, M. Face recognition with local binary patterns. Computer Vision - ECCV 2004, Pt 1. 3021, 469–481 (2004). 36. Liao, S. C., Zhu, X. X., Lei, Z., Zhang, L. & Li, S. Z. Learning multi-scale block local binary patterns for face recognition. Advances In Biometrics, Proceedings. 4642, 828–837 (2007). 37. Fan, R. E., Chen, P. H. & Lin, C. J. Working set selection using second order information for training support vector machines. Journal Of Machine Learning Research. 6, 1889–1918 (2005). 38. Chang, C. C. & Lin, C. J. LIBSVM: A Library for Support Vector Machines. ACM Transactions on Intelligent Systems And Technology. 2 (2011). 39. Szarvas, M., Yoshizawa, A., Yamamoto, M. & Ogata, J. Pedestrian detection with convolutional neural networks. 2005 IEEE Intelligent Vehicles Symposium Proceedings. 224–229 (2005). 40. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. International Joint Conference on Artificial Intelligence 1995. 1137–1145 (1995). 41. van der Maaten, L. & Hinton, G. Visualizing Data using t-SNE.
Journal Of Machine Learning Research. 9, 2579–2605 (2008). ## Acknowledgements This research was supported and funded by the Fund of Jiangsu Academy of Agricultural Sciences (No. 6111646), by the Program of Foshan Innovation Team (Grant No. 2015IT100072), by NSFC (Grant No. 61673125), and by the Natural Sciences and Engineering Research Council of Canada. ## Author information Wan-jie Liang performed the experiments, analyzed the results and wrote the manuscript. Hong Zhang supervised the work and revised the manuscript. Gu-feng Zhang and Hong-xin Cao performed data collection and labeling. All authors reviewed the manuscript and agree with its contents. Correspondence to Wan-jie Liang. ## Ethics declarations ### Competing Interests The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Title: A theoretical study of the higher fullerenes carbon(76) and carbon(78)
Author: Colt, John R.
Advisor: Scuseria, Gustavo E.
Degree: Master of Arts, Chemistry, Natural Sciences, Rice University (1993)
Handle: http://hdl.handle.net/1911/13708
Description: 54 p., application/pdf, English
Subjects: Physical chemistry; Engineering; Materials science; Molecular physics
Abstract: Ab initio self-consistent field Hartree-Fock calculations employing minimal and double-zeta basis sets have been carried out on the possible isolated-pentagon fullerene isomers of C$_{76}$ and C$_{78}$. Two possible isolated-pentagon fullerene isomers exist for C$_{76}$: a chiral $D_2$ structure with a closed-shell electronic configuration and a $T_d$ structure with an open-shell electronic configuration that symmetry-lowers to a closed-shell $^1A_1$ state in $D_{2d}$ symmetry. The $D_2$ isomer is found to be 43 kcal/mol more stable than the $T_d \to D_{2d}$ isomer. Five isolated-pentagon isomers exist for C$_{78}$ ($C_{2v}$(I), $C_{2v}$(II), $D_3$, $D_{3h}$(I) and $D_{3h}$(II)). The predicted order of stability for the five structures is $C_{2v}$(I) $>$ $C_{2v}$(II) $>$ $D_3$ $>$ $D_{3h}$(I) $>$ $D_{3h}$(II). The thermodynamic predictions for C$_{76}$ and C$_{78}$ seem to correlate straightforwardly with experimental results.
Citation: Colt, John R. "A theoretical study of the higher fullerenes carbon(76) and carbon(78)." (1993) Master's Thesis, Rice University. http://hdl.handle.net/1911/13708.
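To put the reported 43 kcal/mol gap in perspective, a standard Boltzmann estimate (not part of the thesis) gives the relative population of the higher-energy isomer as exp(-ΔE/RT); the sketch below uses illustrative temperatures.

```python
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)

def population_ratio(delta_e_kcal, temperature_k):
    """Relative Boltzmann population of the higher-energy isomer."""
    return math.exp(-delta_e_kcal / (R * temperature_k))

print(population_ratio(43.0, 298.15))   # essentially zero at room temperature
print(population_ratio(43.0, 2000.0))   # still only ~1e-5 at a much higher temperature
```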
Output floors on modelled capital requirements will become the binding regulatory constraint for almost one in two global systemically important banks (G-Sibs) once the Basel III rules are fully implemented, up from around just one in four today, a study by the Basel Committee shows. The committee estimates that the Basel III output floor – which forces internal model banks to hold minimum capital equivalent to 72.5% of the amount generated by the revised Basel III standardised approach by
### Application of JEMS-FDTD to Electromagnetic Characteristics Simulation of a Transport Plane

#### 3 Institute of Applied Physics and Computational Mathematics, Beijing 100094

Abstract Research on the electromagnetic environment effects of aircraft has received growing attention in the study of complex electromagnetic environments. With the rapid development of computer technology, numerical simulation has become an important means of obtaining the electromagnetic characteristics of a vehicle and studying aircraft electromagnetic environment effects. This paper demonstrates JEMS-FDTD, a massively parallel 3D full-wave electromagnetic field simulation software, and its application to the electromagnetic characteristics simulation of a transport plane. The simulation yields the electromagnetic response under pulse irradiation, including time-domain and frequency-domain information, near-field and far-field quantities, and the electromagnetic field distribution. To ensure the accuracy of the calculation, non-uniform meshing and a high-order FDTD scheme are used.
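For readers unfamiliar with the method, the sketch below is a textbook 1D Yee/FDTD leapfrog update in free space with normalised units; it only illustrates the kind of time stepping that JEMS-FDTD parallelises in 3D and is not the JEMS-FDTD code, and all parameters are illustrative.

```python
import numpy as np

nz, nt = 400, 1000
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz)          # magnetic field samples
courant = 0.5              # Courant number (dt normalised by dz/c), < 1 for stability

for n in range(nt):
    hy[:-1] += courant * (ez[1:] - ez[:-1])          # update H from the curl of E
    ez[1:]  += courant * (hy[1:] - hy[:-1])          # update E from the curl of H
    ez[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian pulse source
```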
# Finding Nano-Ötzi: Semi-Supervised Volume Visualization for Cryo-Electron Tomography Ngan Nguyen*, Ciril Bohak*, Dominik Engel, Peter Mindek, Ondřej Strnad, Peter Wonka, Sai Li, Timo Ropinski, Ivan Viola * denotes shared first authorship Published in IEEE Transactions on Visualization and Computer Graphics, 2022 ### Abstract Cryo-Electron Tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural detail. Existing volume visualization methods, however, cannot cope with its very low signal-to-noise ratio. In order to design more powerful transfer functions, we propose to leverage soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. A first, weak segmentation algorithm provides good results for propagating sparse user-provided labels to other voxels in the same volume. This weak segmentation algorithm is used to generate dense pseudo labels. A second, powerful deep-learning-based segmentation algorithm can learn from these pseudo labels to generalize the segmentation to other unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses the deep-learning-based segmentation as a component for segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through histogram analysis. Finally, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise and to give structural detail the desired prominence. The cryo-ET data studied throughout our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique demonstrates high impact for the target sciences, enabling visual data analysis of very noisy volumes that cannot be visualized with existing techniques. ### Citation @article{nguyen2021nano-oetzi, title={Finding Nano-\"Otzi: Semi-Supervised Volume Visualization for Cryo-Electron Tomography}, author={Ngan Nguyen and Ciril Bohak and Dominik Engel and Peter Mindek and Ondřej Strnad and Peter Wonka and Sai Li and Timo Ropinski and Ivan Viola}, year={2022}, doi={10.1109/TVCG.2022.3186146} }
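As a rough illustration of the idea of a segmentation-aware transfer function (my own interpretation of the abstract, not the authors' implementation), opacity can be an intensity ramp modulated by the soft per-voxel foreground probability; ramp bounds would come from histogram analysis, and the values below are placeholders.

```python
import numpy as np

def opacity(intensity, soft_label, lo=0.35, hi=0.65):
    """Intensity ramp scaled by a soft segmentation probability in [0, 1]."""
    ramp = np.clip((intensity - lo) / (hi - lo), 0.0, 1.0)  # ramp over intensity
    return ramp * soft_label                                 # suppress noisy background

vol = np.random.rand(64, 64, 64)   # toy noisy volume
seg = np.random.rand(64, 64, 64)   # toy soft segmentation output
alpha = opacity(vol, seg)          # per-voxel opacity for rendering
```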
PREPRINT # Luminosity Distribution of Dwarf Elliptical-like Galaxies Mira Seo, Hong Bae Ann arXiv:2207.02216 Submitted on 4 July 2022 ## Abstract We present the structural parameters of $\sim 910$ dwarf elliptical-like galaxies in the local universe ($z\lesssim 0.01$) derived from the $r-$band images of the Sloan Digital Sky Survey (SDSS). We examine the dependence of structural parameters on the morphological types (dS0, dE, dE${}_{bc}$, dSph, and dE${}_{blue}$). There is a significant difference in the structural parameters among the five sub-types if we properly treat the light excess due to nucleation in dSph and dE galaxies. The mean surface brightness within the effective radius ($<{\mu }_{e}>$) of dSph galaxies is also clearly different from that of other sub-types. The frequency of disk features such as spiral arms depends on the morphology of dwarf galaxies. The most pronounced difference between dSph galaxies and other sub-types of early-type dwarf galaxies is the absence of disk features, which is thought to be closely related to their origin. ## Preprint Comment: 14 pages, 12 figures Subject: Astrophysics - Astrophysics of Galaxies
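For context, the mean surface brightness within the effective radius can be computed from a standard relation (not taken from this paper): half the total light falls inside $R_e$, so $\langle\mu\rangle_e = m_{tot} + 2.5\log_{10}2 + 2.5\log_{10}(\pi R_e^2)$ with $R_e$ in arcseconds; the values below are purely illustrative.

```python
import numpy as np

def mean_mu_e(m_tot, r_e_arcsec):
    """Mean surface brightness within the effective radius (mag/arcsec^2)."""
    return m_tot + 2.5 * np.log10(2.0) + 2.5 * np.log10(np.pi * r_e_arcsec ** 2)

print(mean_mu_e(m_tot=15.0, r_e_arcsec=10.0))   # illustrative input values
```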
# Intermediate ## Overview DecentralChain is an open blockchain protocol and development toolset for Web $$3.0$$ applications and decentralized solutions. Blockchain is a distributed ledger that ensures data immutability and transparency. ### Accounts DecentralChain uses an account-based model. Each transaction is created on behalf of an account, and all assets and data are associated with an account. An account has a pair of cryptographically bound keys: a private key that the account uses to sign transactions, and a public key that allows anyone to verify the signature. More about accounts. To create an account, store keys, and sign transactions, you can use Decentral.Exchange. ### Transactions and Blocks Blockchain data is presented as transactions. A transaction is a record of an action, such as a token issue, a cryptocurrency transfer, a smart contract creation or invocation, and more. More about transactions. Transactions are stacked into blocks. Besides transactions, every block also contains the hash of the previous block and the digital signature of the node that generated the block. The previous block contains the data hash of its preceding block, and so on. As a result, the signature of each block depends on the data of all the preceding blocks. More about blocks. In other words, the blockchain is a sequence of blocks linked by cryptographic hashes. Each transaction stays intact indefinitely. An attempt to change any data in a block would invalidate the block and all the later blocks. ### Nodes A node is a computer that serves the blockchain network. The DecentralChain nodes store a full copy of the blockchain data, validate transactions and blocks, verify signatures and hashes, and synchronize the data with other nodes. The DecentralChain network consists of nodes hosted around the world. This ensures that the blockchain's data is protected against counterfeiting or deletion, either malicious or accidental. Everyone can launch a node and join the network. The nodes that hold at least $$10,000$$ DecentralCoins (by ownership or lease) can participate in block generation to receive block generation rewards and transaction fees. The more tokens the node holds, the greater its chance of adding the next block. More about nodes. ### dApps A decentralized application (dApp) is an application empowered by the blockchain. A dApp can store data on the blockchain and invoke a script assigned to an account. There is, therefore, no centralized database that might be hacked or compromised. Any user can view the script code and the result of its invocation. More about dApps. ## Account DecentralChain uses an account-based model: • Each transaction is created on behalf of a certain account. • All the tokens belong to certain accounts. • All the data is associated with accounts. For details, see the account data storage article. ### Account Keys Unlike centralized applications, users do not have usernames and passwords on the blockchain. User identification and validation of their actions are performed using a cryptographically bound key pair: • The private key is used to sign transactions or orders. • The public key allows the verification of the digital signature. Each transaction contains the public key of the sender's account. The sender generates a digital signature of the transaction using the account's private key. The signature and the sender's public key are used to verify the authenticity of the transaction's data and to check that the signature of the transaction matches the public key.
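To make the sign-and-verify mechanics concrete, here is a small sketch using the PyNaCl library's Ed25519 signatures; it only illustrates how a cryptographically bound key pair is used and is not DecentralChain's exact scheme (which is based on Curve25519, as described next), and the transaction bytes are a stand-in.

```python
from nacl.signing import SigningKey

# Illustration only: Ed25519 via PyNaCl, not DecentralChain's exact Curve25519 scheme.
private_key = SigningKey.generate()          # kept secret by the account owner
public_key = private_key.verify_key          # shared with everyone

tx_bytes = b"example transaction bytes"      # stand-in for the real binary format
signed = private_key.sign(tx_bytes)          # signature generated with the private key

# Anyone holding the public key can check that the signature matches the data;
# verify() raises nacl.exceptions.BadSignatureError if the data was tampered with.
public_key.verify(signed.message, signed.signature)
```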
DecentralChain uses an asymmetric cryptographic system based on the elliptic curve Curve25519-ED25519 with X25519 keys. The guideline for generating keys and signatures is given in the cryptographic practical details article. The private and public keys are $$32$$ byte arrays. In UIs, the keys are displayed as base58 encoded strings. Base58-encoded keys can be of different lengths, the maximum length is $$44$$ characters. Example private key in base58: 6yCStrsBs4VgTmYcSgF37pmQhCo6t9LZk5bQqUyUNSAs Example public key in base58: 5cqzmxsmFPBHm4tb7D8DMA7s5eutLXTDnnNMQKy2AYxh ### Secret (Seed) Phrase The private key can be generated from some random seed phrase using hashing functions. The public key is obtained from the private key using an elliptic curve multiplication. The account address is obtained from the public key. All these transformations are unidirectional. The opposite direction is almost impossible in terms of the required computations. The secret phrase (a.k.a. seed phrase, backup phrase) can be any combination of symbols, words, or bytes. DecentralChain wallet apps typically use a random set of $$15$$ English words out of $$2048$$ words available. Using such a phrase is secure since the probability of generating two identical seed phrases is $$\frac{1}{2048^{15}}$$, so brute-force will take millions of years on an average CPU. The point of using a secret phrase (rather than a private key) is to simplify user experience: the secret phrase is much easier to write down or remember. Example of a secret phrase: body key praise enter toss road cup result shrimp bus blame typical sphere pottery claim Security Information: • The secret phrase or the private key derived from it provide complete control over the account, including the ability to dispose of funds. Do not give your secret phrase or private key to anyone, and do not publish or send them. • The secret phrase cannot be changed: another secret phrase (even one that differs by a single character) will generate a different key pair, and therefore a different account. • If you lose your secret phrase or private key, you will no longer be able to access your account ever again. We strongly encourage you to backup of your secret phrase. • If the secret phrase is compromised (you have accidentally sent it to someone or suspect that it was taken by fraudsters), immediately create a new account and transfer all the assets to it. For ways to generate account keys, see the creating an account article. ### Creating an Account To create an account means to generate an account key pair and address based on a secret (seed) phrase. You can use Decentral.Exchange online to create an account. • On the main screen click Create Account then in the Create Password box type in the password, type it again in the Confirm Password box, accept the Terms and Conditions as well as the Privacy Policy and click Continue. • On the next screen select Create Account and then choose the avatar you like the most for your account and click Continue. • After that, select the name you want the account to have on that particular device and click Continue. • At this point you will be forwarded to your wallet page. You must do a backup of your seed phrase. ### Backup Seed Phrase • Open Decentral.Exchange main screen and make sure you are logged into your account. Click on the account avatar and navigate to Settings > Security. • Click Show in the Backup Phrase box. • Write down the phrase and store it in a secure location. 
Do not store the backup phrase unencrypted on any electronic device. We strongly recommend backing up the seed phrase, since this is the only way to restore access to your account in case of loss or theft of the device. • Open Decentral.Exchange main screen and click Create Account then in the Create Password box type in the password, type it again in the Confirm Password box, accept the Terms and Conditions as well as the Privacy Policy and click Continue. • On the next screen select Import Accounts, then choose the Seed or Key option. • After that type in the seed you backed up in the past and click Continue, then select the name you want the account to have on that particular device and click Continue. • At this point you will be forwarded to your wallet page. • Open Decentral.Exchange main screen and click Forgot Password then select the Reset All option. • On the next screen, in the Create Password box type in the password, type it again in the Confirm Password box, accept the Terms and Conditions as well as the Privacy Policy and click Continue. • When this is done, select Import Accounts, then choose the Seed or Key option. • After that type in the seed you backed up in the past and click Continue, then select the name you want the account to have on that particular device and click Continue. • At this point you will be forwarded to your wallet page. Address is an account attribute derived from the public key. The address also contains the chain ID that identifies the blockchain network, therefore the address on the Mainnet cannot be used on the Testnet and vice versa. The address is a $$26$$ byte array (see the address binary format). In UIs the address is displayed as a base58 encoded string. 3PDfnPknnYrg2k2HMvkNLDb3Y1tDTtEnp9X Normally, the address starting with 3P refers to the Mainnet, and the address starting with 3M or 3N refers to Testnet or Stagenet. • Open Decentral.Exchange main screen and make sure you are logged into your account. Click on the account avatar and navigate to Address. • Copy the address and use it, or you can also use the generated QR code. ### Alias Alias is a short, easy to remember, name of the address. The alias is unique on the blockchain. One address can have several aliases. The alias can be used instead of the address: The alias cannot be deleted. #### Alias Requirements The length of an alias can be from $$4$$ to $$30$$ bytes ($$1$$ character can take up to $$4$$ bytes). The following characters are allowed: • lowercase Latin letters • numbers • dot • underscore • hyphen • @ #### Create Alias You can use Decentral.Exchange online to create an alias. • Make sure you are logged into your account. On the main screen click on the account avatar and navigate to Aliases. • On the next screen select Create New and then type in the name of the alias and click Create New again to complete the process. #### View Aliases The list of account aliases, as well as other blockchain data, is public and can be read by anyone. For example, you can see aliases in DecentralChain Explorer. To do this, find an account by its address and switch to the Aliases tab. Using Node REST API, you can obtain a list of aliases by address using the GET/alias/by-address/{address} method and an address by alias using the GET /alias/by-alias/{alias} method. #### Binary Format See the alias binary format article. ### Account Balance Account balance is the amount of a token (asset) that belongs to the account. One account can store different tokens in different amounts. 
For example, an account can have $$50$$ DecentralCoins and some USD-N at the same time. The amount of the Y token on the account is called the account balance in Y token. If there is no Y token on the account, it is said that the account balance in Y token is equal to zero. #### Account Balance in DecentralCoin There are four types of balances in DecentralChain: • regular • available • effective • generating The regular balance is the amount of DecentralCoins that belongs directly to the account. The other types of balances are determined by taking leased DecentralCoins into account. Let us introduce the following notation: R is the regular balance, Lo is the amount of DecentralCoins which the account leased to other accounts, Li is the amount of DecentralCoins which are leased to the account by other accounts. Then: • Available balance = R – Lo • Effective balance = R – Lo + Li Generating balance is the minimum value of the effective balance during the last 1000 blocks. The generating balance of a node account affects the ability to participate in block generation. To generate blocks, you need a generating balance of at least $$10,000$$ DecentralCoins. The larger the generating balance, the greater the chance of adding the next block. #### View Account Balance The balances of any account, as well as other blockchain data, are public and can be read by anyone. For example, you can see the list of tokens and their amounts on the account in DecentralChain Explorer. To do this, find an account by its address or alias. Balances in DecentralCoin are displayed right under the address, balances in other assets are at the Assets tab, and non-fungible tokens (NFT) are at the Non-fungible tokens tab. #### Top up Balance You can buy DecentralCoin tokens at Decentral.Exchange. ### Account Data Storage Account data storage is a key-value storage associated with an account. The key of each entry is a unique string. The value is the data being stored; it is stored using one of the following types: • String • Boolean • Integral • Array of bytes The size of an account data storage is unlimited. For key and value size limitations, see the data transaction article. #### View Account Data Data storage of any account, as well as other blockchain data, is public and can be read by anyone. For example, you can see data entries in DecentralChain Explorer. To do this, find an account by its address or alias and switch to the Data tab. The account owner can add, modify or delete entries of the account data storage via a data transaction. A dApp script can add, modify or delete entries in the dApp's data storage as a result of an invoke script transaction via script actions.
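A small sketch, not part of the documentation, of the balance types defined in the Account Balance in DecentralCoin subsection above; it assumes the regular balance, the lease totals, and the per-block effective balances are already known.

```python
def available_balance(regular, leased_out):
    return regular - leased_out

def effective_balance(regular, leased_out, leased_in):
    return regular - leased_out + leased_in

def generating_balance(effective_per_block_last_1000):
    # the minimum effective balance observed during the last 1000 blocks
    return min(effective_per_block_last_1000)

print(effective_balance(regular=15_000, leased_out=2_000, leased_in=1_000))  # 14000
```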
The verifier function replaces the default verification that is used to verify the sender’s signature and allows you to set more complex rules, such as multisignature. Using dApps, you can implement various blockchain-empowered applications: gaming and gambling, DeFi, digital identity, supply chains, and many others. A smart account is an account with the account script assigned. The account script is similar to a verifier function of a dApp script. Please note: • To assign a script to an account, you have to send a set script transaction on behalf of the account. • You can also change or delete the script via the set script transaction, unless the script itself prohibits it. • The minimum fee for any transaction sent from a dApp or smart account is increased by $$0.004$$ DecentralCoins if the complexity of sender’s account script or dApp script verifier function exceeds the sender complexity threshold. ## Token (Asset) Token is a digital asset on the blockchain. A token can be used: • As a cryptocurrency to pay for goods and services within a project, as well as for crowdfunding; • As an object or resource in games etc. A token can represent a physical or an intangible object. The words “token” and “asset” are used interchangeably in the DecentralChain ecosystem. DecentralCoin is the native token on the DecentralChain blockchain. More about DecentralCoin. All other tokens are custom tokens issued on behalf of some account. Any account that has enough DecentralCoins to pay the fee can issue its own token. The new token is immediately available: ### Token Issue You can use Decentral.Exchange online to create an asset. • On the main screen make sure you are logged into your account, then click on Create Token. • On the next screen specify the token parameters: • Name: The name of the created asset can not be shorter than $$4$$ characters. • Description: A short description where you can include website links that can be particularly useful. • Quantity: Define the total supply of your asset. The total supply can either be fixed at the issuance or increased later by making the asset re-issuable. • Reissuable: Defines if the asset total supply can be increased later. If set to reissuable, the issuer can increase the supply at any time (If reissuable is selected when the asset is created, it can be changed to not reissuable at a later stage). • Decimals: Specify how many decimals your asset will have. For example, if you specify $$8$$ decimals, as in Bitcoin, your asset can be divided down to $$0.00000001$$. • Smart asset: A smart asset is an asset with an attached script that places conditions on every transaction made for the asset in question. • Script (for issuing a smart asset). • Before creating a new asset, carefully read the creation conditions. If necessary, change the name of the asset according to the conditions, then select the I understand… checkbox and click Generate. • On the next screen double-check the entered data and if everything is correct click Send to finish the creation or click Go Back to make corrections.. The transaction fee is $$1$$ DecentralCoin for a regular token or $$0.001$$ DecentralCoins for a non-fungible token (NFT). Moreover, the token can be issued by the dApp script as a result of the invoke script transaction when the callable function result contains the issue action. The minimum fee for invoke script transaction is increased by $$1$$ DecentralCoin for each non-NFT token issued. 
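As a rough illustration of the fee rule just mentioned (the authoritative rules are in the Transaction Fees section below), the sketch computes the minimum fee for an invoke script transaction that issues k non-NFT tokens, working in integer Decentralites to avoid floating-point error; the extra 0.004 surcharge applies when the sender's script complexity exceeds the sender complexity threshold.

```python
DECENTRALITE = 10 ** 8   # 1 DecentralCoin = 100,000,000 Decentralites

def min_invoke_fee(issued_non_nft=0, sender_script_above_threshold=False):
    fee = 500_000 + issued_non_nft * 100_000_000   # 0.005 base + 1 per issued non-NFT token
    if sender_script_above_threshold:
        fee += 400_000                              # extra 0.004 for a complex sender script
    return fee / DECENTRALITE                       # back to DecentralCoins

print(min_invoke_fee())                                                        # 0.005
print(min_invoke_fee(issued_non_nft=10, sender_script_above_threshold=True))   # 10.009
```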
### Token ID Token ID is a byte array calculated as follows: • If the token is issued by issue transaction, the token ID is the same as the transaction ID. • If the token is issued by invoke script transaction when the callable function of dApp script performed the issue action, the token ID is calculated as the BLAKE2b-256 hash of the byte array containing transaction ID and the fields of the Issue structure. In the Node REST API, the token identifier is encoded in base58. For example: "assetId": "8LQW8f7P5d5PZM7GtZEBgaqRPGSzS3DfPuiXrURJ4AJS" The DecentralCoin token has no identifier. The Node REST API uses null for DecentralCoin. ### Token Operations • Transfer to another account Can be done via a transfer transaction or a mass transfer transaction. A dApp script can transfer the token via a script transfer script action as a result of an invoke script transaction. Three accounts can participate in the exchange: one user creates an order to buy a token, the other creates an order to sell a token. The matcher combines buy and sell orders with suitable parameters and creates an exchange transaction. • Burning Decreases the amount of token on the account and thereby the total amount of the token on the blockchain. Any token owner can burn it, not only the issuer. It is impossible to burn DecentralCoin. Can be done via a burn transaction. A dApp script can burn the token via a burn script action as a result of the Invoke script transaction. An invoke script transaction can contain up to two payments to the dApp. Payment amount and token are available to the callable function. #### Operations Available Only to Issuer The following token operations can only be performed by the account that issued the token: The token issuer can enable sponsorship which allows all users to pay fees in this token (instead of DecentralCoins) for invoke script transactions and transfer transactions. More about sponsorship. Enabling or disabling sponsorship can be done via a sponsor fee transaction. A dApp script can set up sponsorship using a SponsorFee as a result of the invoke script transaction. • Reissue Increases the amount of token on the blockchain. The reissuable field of token determines whether the token can be reissued. Can be done via a reissue transaction. A dApp script can reissue the token via a reissue script action as a result of the invoke script transaction. • Replacing the asset script Can be done via a set asset script transaction. If the token is not a smart asset, that is, the script was not attached when the token was issued, then it is impossible to attach the script later. • Modifying the token name and / or description Can be done via an update asset info transaction. ### Token Types #### Non-Fungible Token Non-fungible token or NFT is a special type of a token that is issued with the following parameters: • “quantity”: $$1$$ • “decimals”: $$0$$ • “reissuable”: false NFT is a singular entity that has a unique ID. This contrasts with a regular token, two coins of which (for example, two WBTC) cannot be distinguished from each other. NFTs can be used as in-game items, collectibles, certificates, or unique coupons. ##### Issue of NFT NFT can be issued in the same ways as a regular token, see token issue. The minimum fee for an NFT issue is $$0.001$$ DecentralCoins, $$1000$$ times less than for a regular token. #### Smart Asset Smart asset is a token that has an asset script assigned to it. 
By default, tokens on the DecentralChain blockchain are not smart contracts, and any transactions with them are allowed. The script endows a token with functionality that sets the rules for its circulation. Each transaction involving a smart asset is automatically checked against the conditions specified in the script. If the asset’s script allows the transaction, it will be executed; if the script denies the transaction, it is either not put onto the blockchain at all or saved as failed (for details, see the transaction validation article). Using smart assets, you can implement various financial instruments on the blockchain (options, interval trading, taxation), game mechanics (allowing transactions only between characters with certain properties). Please note: • If a token is issued without a script, then the script cannot be added later. • The script cannot be removed, so it is impossible to turn a smart asset into a regular one. • The asset script can be changed using the set asset script transaction, unless prohibited by the asset script itself (as well as by the dApp or account script assigned to the issuer account). • The minimum fee for transaction is increased by $$0.004$$ DecentralCoins for each smart asset involved, except for: #### Tokens of Other Blockchains A token issued on another blockchain cannot be used directly on the DecentralChain blockchain. A new token representing the original one can be issued on the DecentralChain blockchain, and a gateway that pegs the two tokens $$1:1$$ can be deployed. ### DecentralCoin DecentralCoin is the native token of the DecentralChain blockchain. Block generators receive transaction fees and block rewards in DecentralCoins, which encourages generators to maintain and develop the blockchain network infrastructure. The more DecentralCoins the generator holds (by ownership or lease), the greater its chance to add the next block is. #### DecentralCoin Parameters DecentralCoins are present on the blockchain since inception, there is no issue transaction for it, therefore the DecentralCoin token does not have an ID. The REST API uses null for DecentralCoins. The number of decimal places (decimals) for DecentralCoins is $$8$$. The atomic unit called Decentralite is $$\frac{1}{100,000,000}$$ DecentralCoins. #### Leasing The owner of DecentralCoins can lease them via a lease transaction. DecentralCoins received on lease are included in the generating balance. Block generators send back different percentages as rewards to lessors. A lessor can cancel the lease at any time via a lease cancel transaction. More about leasing. #### How to Get DecentralCoin You can buy DecentralCoins tokens at Decentral.Exchange, or at one of the centralized exchanges. In addition, cryptocurrency gateways can be used to transfer external cryptocurrencies such as Bitcoin, Ethereum etc. from the external blockchain to the DecentralChain blockchain and vice versa. The gateway provides the user with the address on the external blockchain. After receiving a confirmation of transfer to this external address, the gateway transfers the corresponding asset (minus the fee) to the user’s DecentralChain address. 
### Token Custom Parameters Below is an example of JSON representation returned by the GET /assets/details/{assetId} method of Node REST API: { "issueHeight": 1806810, "issueTimestamp": 1574429393962, "issuer": "3PC9BfRwJWWiw9AREE2B3eWzCks3CYtg4yo", "issuerPublicKey": "BRnVwSVctnV8pge5vRpsJdWnkjWEJspFb6QvrmZvu3Ht", "name": "USD-N", "description": "Neutrino USD", "decimals": 6, "reissuable": false, "quantity": 999999999471258900, "scripted": false, } Token Custom Parameters Field Description assetId Token ID: base58 encoded byte array. The token ID is calculated as a hash of the token parameters upon issue. See also the token ID article. issueHeight Blockchain height (the sequence number of the block) at which the token is issued. issueTimestamp Token issue timestamp: Unix time in milliseconds. issuer Address of issuer account: base58 encoded byte array. issuerPublicKey Public key of issuer account: base58 encoded byte array. name Token name. From $$4$$ to $$16$$ bytes ($$1$$ character can take up to $$4$$ bytes). description Token description. From $$0$$ to $$1000$$ bytes. decimals Number of decimal places, from $$0$$ to $$8$$. reissuable Reissue availability flag. quantity Total supply of token on the blockchain specified in atomic units. From $$1$$ to $$9,223,372,036,854,775,807$$. Total supply can change as a result of reissue or burning, see token operations below. scripted There being a script: true for smart asset, false for regular token. More about smart assets. For sponsored asset only: an amount of asset that is equivalent to $$0.001$$ DecentralCoins. More about sponsorship. originTransactionId ID of the transaction that issued the token: base58 encoded byte array. scriptDetails For smart asset only: asset script and its attributes. #### Atomic Unit The amount of token is displayed differently in UIs and in the JSON representation used by the Node REST API. In API requests and responses, amount values are integers indicated in atomic units to avoid precision issues in floating-point calculations. An atomic unit is the minimum fraction (“cent”) of a token, it is equal to $$10^{-decimals}$$. The amount of token in JSON is the real quantity multiplied by $$10^{decimals}$$. For USD-N in the example above: • decimals = $$6$$, • atomic unit is $$\frac{1}{1,000,000}$$ USD-N. • “quantity”: $$999999999471258900$$ corresponds to $$999,999,999,471.258900$$ USD-N in UIs, “minSponsoredAssetFee”: $$7420$$ corresponds to $$0.007420$$ USD-N. ## Transaction ### Transaction Issue #### How to Sign and Send Transactions • In Decentral.Exchange you can create some types of transactions such as transfer, issue/reissue/burn, sponsor fee transaction, set asset script, create alias. • Via Node REST API: • The POST /transactions/broadcast method sends a signed transaction to a node; • The POST /transactions/sign method generates transaction signature (but this method is only available to the node owner). #### Transaction Sender and Signature Each transaction contains the public key of the sender’s account, on behalf of which the action is performed on the blockchain. Smart accounts and dApps can set their own rules for outgoing transactions verification. Transactions that are sent from an ordinary account (without script) must contain the sender’s digital signature. The sender generates a signature using the account’s private key. 
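A small conversion helper, not part of the documentation, illustrating the atomic-unit arithmetic above for USD-N (decimals = 6).

```python
from decimal import Decimal

# atomic = real_amount * 10**decimals; the REST API always works in integers.
def to_atomic(amount, decimals):
    return int(Decimal(str(amount)) * 10 ** decimals)

def from_atomic(atomic, decimals):
    return Decimal(atomic) / 10 ** decimals

print(to_atomic("0.007420", 6))              # 7420, the minSponsoredAssetFee above
print(from_atomic(999999999471258900, 6))    # Decimal('999999999471.2589'), the total supply above
```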
Along with the signature, the transaction contains the sender’s public key, so the node (and anyone) can verify the integrity of the transaction data and the authenticity of the signature, that is, make sure that the signature of the transaction matches the public key. #### After Transaction is Sent Upon receiving a transaction, the node validates its signature, checks the sender’s balance, and so on, see the transaction validation article for details. If the transaction is valid, the node puts the transaction to the UTX pool, which is a list of transactions awaiting to be added, and also broadcasts the transaction to other nodes of the blockchain network. Due to block size limitation ($$1$$ MB) the transaction may not get to the block immediately. First of all, nodes add the most “profitable” transactions with the highest fee per byte. After being added to a block, the transaction changes the blockchain state: account balances, records in the account data storage, and so on. The transaction may never be added to a block if it becomes invalid while waiting in the UTX pool. For example, the transaction has expired (the timestamp is more than $$2$$ hours behind current time) or another transaction has changed the blockchain state and now the sender’s balance is insufficient to execute the transaction or the account or asset script denies the transaction. ### Transaction Proofs #### Verification by Script If the transaction sender is a dApp or smart account, then the transaction is verified by the script assigned to the account instead of signature verification. The script allows or denies the transaction depending on whether it meets the specified conditions. In particular, the script can run various verifications of the proofs. A common example is a smart account with a multisignature where three co-owner users store shared funds. ### Transaction Fees Transaction fee is a fee that an account owner pays to send a transaction. A transaction sender can specify any amount of fee but not less than the minimum amount. The larger the fee is, the quicker the transaction will be added to the new block. For invoke script transactions and transfer transaction, a sender can specify a transaction fee nominated in a sponsored asset instead of DecentralCoins, see the section fee in sponsored asset below. #### Regular Fees ##### Minimum Fee The minimum fees in DecentralCoins for each type of transaction are listed below. • If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. If the order sender in exchange transaction is a dApp or smart account, this does nor affect the minimum fee. • The minimum fee is increased by $$0.004$$ DecentralCoins for each smart asset involved, except for: • Smart assets used as matcher fees in exchange transactions. Example 1 • The minimum fee for a transfer transaction: • No smart account or smart assets: $$0.001$$ DecentralCoins. • Transfer from smart account*: $$0.001 + 0.004 = 0.005$$ DecentralCoins. • Transfer of smart asset: $$0.001 + 0.004 = 0.005$$ DecentralCoins. • Transfer of smart asset sent from smart account*: $$0.001 + 0.004 + 0.004 = 0.009$$ DecentralCoins. If the account script complexity is higher than the sender complexity threshold. Example 2 The minimum fee for an Invoke Script transaction: • No smart account, no assets issued: $$0.005$$ DecentralCoins. 
• dApp script invocation is sent from a smart account*: $$0.005 + 0.004 = 0.009$$ DecentralCoins. • dApp script invocation issues an asset that is not non-fungible tokens: $$0.005 + 1 = 1.005$$ DecentralCoins. • dApp script invocation is sent from smart account*, and $$10$$ assets that are not non-fungible tokens are issued: $$0.005 + 0.004 + 10 × 1 = 10.009$$ DecentralCoins. If the account script complexity is higher than the sender complexity threshold. Minimum Fees Transaction type Transaction type ID Minimum transaction fee in DecentralCoins Burn transaction $$6$$ $$0.001$$ Create alias transaction $$10$$ $$0.001$$ Data transaction $$12$$ $$0.001$$ per kilobyte. The size is rounded up to an integer number of kilobytes. Exchange transaction $$7$$ $$0.003$$ Invoke script transaction $$16$$ $$0.005 + K$$. $$K$$ is the number of assets issued as a result of dApp script invocation that are not non-fungible tokens. Issue transaction $$3$$ $$1$$ for reqular token. $$0.001$$ for non-fungible token. Lease cancel transaction $$9$$ $$0.001$$ Lease transaction $$8$$ $$0.001$$ Mass transfer transaction $$11$$ $$0.001 + 0.0005 × N$$. $$N$$ is the number of transfers inside of the transaction. The value is rounded up to the three decimals. Reissue transaction $$5$$ $$0.001$$ Set asset script transaction $$15$$ $$1$$ Set script transaction $$13$$ $$0.01$$ $$14$$ $$0.001$$ Transfer transaction $$4$$ $$0.001$$ Update asset info transaction $$17$$ $$0.001$$ ##### Fee for Failed Transactions Invoke script transactions and exchange transactions can be saved on the blockchain even if the result of a dApp script or asset script execution failed. In this case, the sender is charged a fee. For an exchange transaction, the matcher is charged the transaction fee but the order senders are not charged the matcher fee. More about transaction validation. An issuer of an asset can set up sponsorship — so that any user can specify a transaction fee in this asset for invoke script transactions and transfer transactions. To activate sponsorship, the issuer puts a sponsor fee transaction that specifies an amount of asset that is equivalent to the minimum fee of $$0.001$$ DecentralCoins. For example, if minSponsoredAssetFee: $$5$$, then the fee in this asset for an invoke script transaction equals $$5 * \frac{0.005}{0.001} = 25$$. ### Transaction Representations #### JSON Representation The Node REST API of DecentralChain nodes uses the JSON representation of transactions. You can send transactions to a node and read transactions stored on the blockchain via REST API in JSON. Here is an example of JSON representation: { "senderPublicKey": "BVv1ZuE3gKFa6krwWJQwEmrLYUESuUabNCXgYTmCoBt6", "sender": "3N8S4UtauvDAzpLiaRyDdHn9muexWHhBP4D", "feeAssetId": null, "proofs": [ "22QJfRKX7kUQt4qjdnUqZAnhqukqhnofE27uvP8Q5xnBf8M6PCNtWVGq2ngm6m7Voe7duys59D1yU9jhKrmdXDCe" ], "fee": 100000, "alias": "91f452553298770f", "type": 10, "version": 2, "timestamp": 1548443069053, "height": 466104 } JSON Representation Field Description senderPublicKey Public key of the transaction sender: base58 encoded byte array. sender Address of the transaction sender: base58 encoded byte array. feeAssetId ID of the fee token. null means that the fee is in DecentralCoins. The sender can specify the fee for invoke script transactions and transfer transactions in a sponsored asset, see the sponsored fee article for details. proofs Array of transaction proofs. Up to $$8$$ proofs, each proof up to $$64$$ bytes base58 encoded. 
fee Transaction fee: an integer value indicated in the minimum fraction (“cent”) of the fee asset. For example, if the fee is $$0.001$$ DecentralCoins, $$100000$$ is indicated in the JSON representation, so far as $$1$$ DecentralCoin = $$10^{8}$$ Decentralites. id Transaction ID. For the transaction ID calculation method, see the cryptographic practical details article. type Transaction type. Type IDs are listed in the transaction type article. version Transaction version. Versions for each type of transaction are listed in transaction binary format descriptions. applicationStatus Status of transaction execution: 1) succeeded: transaction is successful. 2) script_execution_failed: the dApp script or the asset script failed. See the transaction validation article for details. timestamp Transaction timestamp specified by the sender: Unix time in milliseconds. The transaction cannot be added to the blockchain if the timestamp value is more than $$2$$ hours behind or $$1.5$$ hours ahead of current block timestamp.” height The sequence number of the block that contains the transaction. The sender, id, applicationStatus, and height fields do not need to be filled when sending a transaction, and they are not stored on the blockchain. The node calculates these fields when providing transaction data via the Node REST API. The fields that depend on the type of transaction are listed in the description of each type of transaction. #### Binary Format Transactions are stored on the blockchain in the binary format (byte representation). Node extensions such as gRPC server can work directly with data in binary format. The transaction signature and ID are also formed on the basis of the binary format. The guideline for generating a signature and ID is given in the cryptographic practical details article. Transaction binary format is described in the transaction binary format article. You can get the transaction by ID, or the list of transactions by certain account address, or the list of all transactions in the block: • Via Node REST API using the following methods: • GET /transactions/info/{id} returns transaction data by transaction ID. • GET /blocks/at/{height} returns block data at the specified height including all transactions in the block. ### Transaction Types #### Tokenization Tokenization Transaction type ID Name Description $$3$$ Issue transaction Issues a token. $$5$$ Reissue transaction Reissues a token. $$6$$ Burn transaction Decreases the amount of token. $$15$$ Set asset script transaction Modifies the asset script. $$17$$ Update asset info transaction Changes the token name and description. ##### Issue Transaction Issue transaction creates a new token. Fee The minimum fee for an issue transaction is $$1$$ DecentralCoins, in case of issue of a non-fungible tokens (NFT) $$0.001$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. 
JSON Representation { "quantity": 50000, "fee": 100000000, "description": "Script true.", "type": 3, "version": 2, "reissuable": true, "script": "base64:AQa3b8tH", "sender": "3Mz9N7YPfZPWGd4yYaX6H53Gcgrq6ifYiH7", "feeAssetId": null, "chainId": 84, "proofs": [ "4yjVxzrLuXUq5y2QCa2LDn1Fp9P63hPBmqDLGQCqn41EB1uZ1pys79NP81h7FxRBnZSbpNGbz1xjwckHcPAQHmFX" ], "assetId": "7Xpp9PPeZbG4wboJrcbRQdq3SxCJqbeFRUjjKccM1DsD", "decimals": 2, "name": "Smart", "id": "7Xpp9PPeZbG4wboJrcbRQdq3SxCJqbeFRUjjKccM1DsD", "timestamp": 1548653407494, "height": 469677 } Issue Transaction JSON Representation Field Description name Token name. From $$4$$ to $$16$$ bytes ($$1$$ character can take up to $$4$$ bytes). description Token description. From $$0$$ to $$1000$$ bytes. quantity Token quantity: an integer value specified in the minimum fraction (“cents”), that is, the real quantity multiplied by $$10^{decimals}$$. From $$1$$ to $$9,223,372,036,854,775,807$$. $$1$$ for NFT. decimals Number of decimal places, from $$0$$ to $$8$$. $$0$$ for NFTs. reissuable Reissue availability flag, see the reissue transaction article. False for NFTs. script For the smart asset: the compiled Asset script, up to $$8192$$ bytes, base64 encoded. For the token without a script: null. The token issued without a script cannot be converted to a smart asset. chainId Chain ID assetId Token ID base58 encoded. The token ID is the same as the Issue transaction ID. The assetId field does not need to be filled when sending a transaction, and it is not stored on the blockchain. The node calculates these fields when providing transaction data via the REST API. The fields that are common to all types of transactions are described in the transaction article. Binary Format See the issue transaction binary format. Ride Structure The IssueTransaction structure is used for transaction handling in smart contracts. ##### Reissue Transaction Reissue transaction increases the amount of the token on the blockchain and/or prohibits its reissue. Only the token issuer can send a reissue transaction. The additional amount of token increases the balance of the transaction sender. The reissuable field of the token determines whether the token can be reissued. Fee The minimum fee for a reissue transaction is $$0.001$$ DecentralCoins. If the token is a smart asset, the minimum fee is increased by $$0.004$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "DjYEAb3NsQiB6QdmVAzkwJh7iLgUs3yDLf7oFEeuZjfM", "quantity": 200000, "fee": 100000000, "type": 5, "version": 2, "reissuable": true, "sender": "3PLJciboJqgKsZWLj7k1VariHgre6uu4S2T", "feeAssetId": null, "chainId": 87, "proofs": [ "5mEveeUwBdBqe8naNoV5eAe5vj6fk8U743eHGkhxhs3v9PMsb3agHqpe4EtzpUFdpASJegXyjrGSbynZg557cnSq" ], "assetId": "GA4gB3Lf3AQdF1vBCbqGMTeDrkUxY7L83xskRx6Z7kEH", "id": "27ETigYaHym2Zbdp4x1gnXnZPF1VJCqQpXmhszC35Qac", "timestamp": 1548521785933, "height": 1368623 } Reissue Transaction JSON Representation Field Description assetId Token ID base58 encoded. quantity Amount of token to reissue: an integer value specified in the minimum fraction (“cents”) of token. The total quantity of token as a result of the reissue should not exceed $$9,223,372,036,854,775,807$$. chainId Chain ID reissuable Reissue availability flag. 
The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The ReissueTransaction structure is used for transaction handling in smart contracts. ##### Burn Transaction Burn transaction decreases the amount of token on sender’s account and thereby the total amount of the token on the blockchain. Any account that owns a token (not necessarily the token issuer) can send the burn transaction. Burned tokens cannot be restored back to the account. Fee The minimum fee for a burn transaction is $$0.001$$ DecentralCoins, in case of burning a smart asset $$0.005$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "9GaQj7gktEiiS1TTTjGbVjU9bva3AbCiawZ11qFZenBX", "amount": 9999, "fee": 100000, "type": 6, "version": 2, "sender": "3P9QZNrHbyxXj8P9VrJZmVu2euodNtA11UW", "feeAssetId": null, "chainId": 87, "proofs": [ "61jCivdv3KTuTY6QHgxt4jaGrXcszWg3vb9TmUR26xv7mjWWwjyqs7X5VDUs9c2ksndaPogmdunHDdjWCuG1GGhh" ], "assetId": "FVxhjrxZYTFCa9Bd4JYhRqXTjwKuhYbSAbD2DWhsGidQ", "id": "csr25XQHT1c965Fg7cY2vJ7XHYVsudPYrUbdaFqgaqL", "timestamp": 1548660675277, "height": 1370971 } Burn Transaction JSON Representation Field Description amount Amount of token to burn: an integer value specified in the minimum fraction (“cents”) of token. assetId Token ID base58 encoded. chainId Chain ID The fields that are common to all types of transactions are described in the transaction article. Binary Format See the burn transaction binary format. Ride Structure The BurnTransaction structure is used for transaction handling in smart contracts. ##### Set Asset Script Transaction Set asset script transaction replaces the asset script. Only the token issuer can send an asset script transaction. If a token is issued without a script, then no script can be assigned to it. It is also impossible to remove the script and turn the smart asset into a regular one. Fee The minimum fee for a set asset script transaction is $$1$$ DecentralCoin. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "AwQYJRHZNd9bvF7C13uwnPiLQfTzvDFJe7DTUXxzrGQS", "fee": 100000000, "type": 15, "version": 1, "script": "base64:AQa3b8tH", "feeAssetId": null, "chainId": 87, "proofs": [ "nzYhVKmRmd7BiFDDfrFVnY6Yo98xDGsKrBLWentF7ibe4P9cGWg4RtomHum2NEMBhuyZb5yjThcW7vsCLg7F8NQ" ], "assetId": "7qJUQFxniMQx45wk12UdZwknEW9cDgvfoHuAvwDNVjYv", "id": "FwYSpmVDbWQ2BA5NCBZ9z5GSjY39PSyfNZzBayDiMA88", "timestamp": 1547201038106, "height": 1346345 } Set Asset Script Transaction JSON Representation Field Description assetId Token ID base58 encoded. chainId Chain ID script Compiled asset script, up to $$8192$$ bytes, base64 encoded. The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The SetAssetScriptTransaction structure is used for transaction handling in smart contracts. ##### Update Asset Info Transaction Update asset info transaction modifies the name and description of the token. 
Fee The minimum fee for an update asset info transaction is $$0.001$$ DecentralCoins, in case of a smart asset $$0.005$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "6a6r9d3r2ccyE9SvuxmdZbfSHXmKPUoExnigvippJLfu", "fee": 100000, "description": "xxxXXXxxx", "type": 17, "version": 1, "applicationStatus": "succeeded", "sender": "3MQdH4MAmM5RNz5TAT43UXXCvMtCa9YgHq9", "feeAssetId": null, "chainId": 83, "proofs": [ "4DfvJL4cVisQaMuMB7ar15EtYZTvTZzAUQQMkq4RA3uTMzziVYLrbNHSL2a1eCqBV3YQb7dddXdjywETXHuu65ij" ], "assetId": "syXBywr2HVY7wxqkaci1jKY73KMpoLh46cp1peJAZNJ", "name": "zzzz", "id": "4DL8K4bRvYb9Qrys9Auq7hSGuLGq8XsUYZqDDBBfVGMf", "timestamp": 1591886337668, "height": 411389 } Update Asset Info Transaction JSON Representation Field Description name Token name. From $$4$$ to $$16$$ bytes. description Token description. From $$0$$ to $$1000$$ bytes. chainId Chain ID assetId Token ID base58 encoded. The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The UpdateAssetInfoTransaction structure is used for transaction handling in smart contracts. #### Usage Usage Transaction type ID Name Description $$4$$ Transfer transaction Transfers a token to another account. $$7$$ Exchange transaction Exchanges two different tokens between two accounts. Contains two counter orders: a buy order and a sell order. $$10$$ Create alias transaction Creates alias for the sender’s address. $$11$$ Mass transfer transaction Transfers a token, up to $$100$$ recipients. $$12$$ Data transaction Adds, modifies and deletes data entries in the sender’s account data storage. $$13$$ Set Script transaction Assigns the dApp script or account script to the sender’s account. $$16$$ Invoke Script transaction Invokes a callable function of a dApp. ##### Transfer Transaction Transfer transaction transfers a certain amount of token to another account. Fee The minimum fee for a transfer transaction is $$0.001$$ DecentralCoins, in case of transferring a smart asset $$0.005$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "Cs4DShy4nTx6WyxjKRoDtoYsGhvT663pYLysPCLeVZHE", "amount": 15540, "signature": "5EaYqFx2xFJmdvwZ1gT3yLecKr88z3jByCj5GE1MjE1ossvehExZKoT7uhGatiYCGM9Co8iUR8Q5ce52XDmno3rn", "fee": 100000, "type": 4, "version": 1, "attachment": "3vrgtyozxuY88J9RqMBBAci2UzAq9DBMFTpMWLPzMygGeSWnD7k", "sender": "3PN2bVFxJjgudPKqEGZ41TVsD5ZJmxqnPSu", "feeAssetId": null, "proofs": [ "5EaYqFx2xFJmdvwZ1gT3yLecKr88z3jByCj5GE1MjE1ossvehExZKoT7uhGatiYCGM9Co8iUR8Q5ce52XDmno3rn" ], "assetId": "7uncmN7dZfV3fYVvNdYTngrrbamPYMgwpDnYG1bGy6nA", "recipient": "3PFmoN5YLoPNsL4cmNGkRxbUKrUVntwyAhf", "feeAsset": null, "id": "D79kL1Jr5xyL2Rmw2FnafQHugJGvuBhNEbLnhMuwMkDC", "timestamp": 1548660895034, "height": 1370973 } Transfer Transaction JSON Representation Field Description assetId Token ID base58 encoded. null means DecentralCoins. amount Amount of token to transfer: an integer value specified in the minimum fraction (“cents”) of token. 
attachment Arbitrary binary data (typically a comment to transfer) base58 encoded, up to $$4$$ bytes. recipient Recipient address base58 encoded or recipient alias with alias:<chain_id>: prefix, for example alias:T:merry (See chain ID). The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The TransferTransaction structure is used for transaction handling in smart contracts. ##### Exchange Transaction Exchange transaction exchanges two different tokens between two accounts. Commonly the exchange transaction is created by the matcher service that executes orders to buy and sell tokens. The exchange transaction contains two counter orders: a buy order and a sell order. The blockchain guarantees that the terms of the exchange are not worse than those indicated in each order. An order can be filled partially. An order can participate in several exchange transactions, with different counter orders. One of the two exchanged tokens is the amount asset (base currency): it represents the amount of token in orders and in the Exchange transaction. Another token is a price asset (quote currency): it represents the price. Transaction Fee The minimum fee for an exchange transaction is $$0.003$$ DecentralCoins. In case of exchange of a smart asset for an ordinary asset the minimum fee is $$0.007$$ DecentralCoins, in case of exchange of two smart assets the minimum fee is $$0.011$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. Matcher Fee The matcher receives a fee for order execution from each order sender. The minimum matcher fee is set by the matcher. The order sender specifies the fee not less than the minimum amount. If the order is fully filled with one exchange transaction, the matcher receives the entire fee specified in the order. If the order is partially filled, the matcher receives a part of the fee. The blockchain guarantees that the total matcher fee received from the order sender in all exchange transactions does not exceed the fee specified in the order. 
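As an illustration of the partial-fill rule above, the sketch below prorates the matcher fee by the filled amount using integer arithmetic. The prorating formula is an assumption made for illustration: the text only guarantees that the total charged across all fills never exceeds the matcher fee specified in the order. With the numbers shown, the first fill happens to reproduce the sellMatcherFee of $$750$$ seen in the JSON example below.

```python
# Illustrative prorating of the matcher fee over partial fills (integer arithmetic).
# Assumption: the fee accrues in proportion to the filled amount; the text above only
# guarantees that the total charged never exceeds the matcher fee set in the order.
def matcher_fee_for_fill(order_amount: int, order_matcher_fee: int,
                         filled_before: int, fill_amount: int) -> int:
    """Matcher fee charged to the order sender for one partial fill."""
    charged_before = order_matcher_fee * filled_before // order_amount
    charged_after = order_matcher_fee * (filled_before + fill_amount) // order_amount
    return charged_after - charged_before

# A sell order for 40000000000 units with a 300000 matcher fee, filled in two parts:
first = matcher_fee_for_fill(40_000_000_000, 300_000, 0, 100_000_000)               # 750
second = matcher_fee_for_fill(40_000_000_000, 300_000, 100_000_000, 39_900_000_000)
assert first + second <= 300_000
```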
JSON Representation { "senderPublicKey": "9cpfKN9suPNvfeUNphzxXMjcnn974eme8ZhWUjaktzU5", "amount": 100000000, "fee": 300000, "type": 7, "version": 2, "sellMatcherFee": 750, "sender": "3PEjHv3JGjcWNpYEEkif2w8NXV4kbhnoGgu", "feeAssetId": null, "proofs": [ "LQD8VoFhHEW2b6o2e2ujzDHdZatwMMwigC2tmoSHcFNRGXrowA1yyVxD6nZBNeABLWjs59dnuLhgNP7UMfFKDuR" ], "price": 1134500, "id": "EHLccXcemZPEvUpM9UkASG1GciwMt9R5B3QuYFxywj9g", "order2": { "version": 3, "id": "JCiF3gmprLc8u7xdWR7KUkJ3YfM6yfgxB6CvhJYGJFAa", "sender": "3PRBeeFD64wvTMfS3HEoDDFPXfJs3gFdAxk", "senderPublicKey": "ytgWVbKG9e6TSsQ5buMryr2QyxNoL3RezXP3f9RJ2As", "matcherPublicKey": "9cpfKN9suPNvfeUNphzxXMjcnn974eme8ZhWUjaktzU5", "assetPair": { "amountAsset": null, }, "orderType": "sell", "amount": 40000000000, "price": 1134500, "timestamp": 1591356602063, "expiration": 1593862202062, "matcherFee": 300000, "matcherFeeAssetId": null, "signature": "3D2Ngr7H6MQRs1izMQSix3dMHmDfg4bcRjxamFXFsb4Ku28neNWHdtwE6LtR3eq69Jqr1CvEsAKCWkQEeEEomcoK", "proofs": [ "3D2Ngr7H6MQRs1izMQSix3dMHmDfg4bcRjxamFXFsb4Ku28neNWHdtwE6LtR3eq69Jqr1CvEsAKCWkQEeEEomcoK" ] }, "order1": { "version": 3, "id": "FNvEGPgUqEWnrnpxevZQnaZS3DUTBGE2wa6L75xCw7mo", "sender": "3PDxxx7eSeYLgzTAtuAV7gUCtHeeXeU85fP", "senderPublicKey": "3WEkbavP3Sw4y5tsgxbZvKkWh87BdB3CPVVxhcRUDBsJ", "matcherPublicKey": "9cpfKN9suPNvfeUNphzxXMjcnn974eme8ZhWUjaktzU5", "assetPair": { "amountAsset": null, }, "amount": 100000000, "price": 1134500, "timestamp": 1591356752271, "expiration": 1593862352271, "matcherFee": 300000, "matcherFeeAssetId": null, "signature": "2gvqaYy2BFbK4BJZS8taRJnhgfQ1z2CytF2RqjcyEfzFiu9tkTjN5q4UyFXpPqS3E6eD2WQBUaYCTYDKv98iW1sy", "proofs": [ "2gvqaYy2BFbK4BJZS8taRJnhgfQ1z2CytF2RqjcyEfzFiu9tkTjN5q4UyFXpPqS3E6eD2WQBUaYCTYDKv98iW1sy" ] }, "timestamp": 1591356752456, "height": 2093333 } Exchange Transaction JSON Representation Field Description amount Amount of the amount asset: an integer value specified in the minimum fraction (“cent”) of asset. price Price for the amount asset nominated in the price asset, multiplied by the factor: 1) $$10^{8}$$ for the exchange transaction version 3. 2) $$10^{(8 + priceAssetDecimals – amountAssetDecimals)}$$. Where amountAssetDecimals, priceAssetDecimals are decimals of the assets, for the exchange transaction version 2 or 1. Matcher fee for the buy order execution. The fee token ID is indicated in buy order. sellMatcherFee Matcher fee for the sell order execution. The fee token ID is indicated in sell order. order1, order2 Buy and sell orders. See the order article for details. The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The ExchangeTransaction structure is used for transaction handling in smart contracts. ##### Create Alias Transaction Create Alias transaction creates an alias for the sender’s address.A created alias cannot be deleted. Fee The minimum fee for a Create Alias transaction is $$0.001$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. 
JSON Representation

{ "senderPublicKey":"BVv1ZuE3gKFa6krwWJQwEmrLYUESuUabNCXgYTmCoBt6", "sender":"3N8S4UtauvDAzpLiaRyDdHn9muexWHhBP4D", "feeAssetId":null, "proofs": [ "22QJfRKX7kUQt4qjdnUqZAnhqukqhnofE27uvP8Q5xnBf8M6PCNtWVGq2ngm6m7Voe7duys59D1yU9jhKrmdXDCe" ], "fee":100000, "alias":"91f452553298770f", "type":10, "version":2, "timestamp":1548443069053, "height":466104 }

Create Alias Transaction JSON Representation Field Description alias Alias. From $$4$$ to $$30$$ bytes ($$1$$ character can take up to $$4$$ bytes). The fields that are common to all types of transactions are described in the transaction article.

Binary Format Ride Structure The CreateAliasTransaction structure is used for transaction handling in smart contracts.

##### Mass Transfer Transaction

Mass transfer transaction transfers a token to several accounts, from $$1$$ to $$100$$ recipients.

Fee The minimum fee for a Mass Transfer transaction is $$0.001 + 0.0005 × N$$ DecentralCoins, in case of transferring a smart asset $$0.001 + 0.0005 × N$$ DecentralCoins, where $$N$$ is the number of recipients. The fee value is rounded up to three decimals. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins.

JSON Representation

{ "senderPublicKey": "5DphrhGy6MM4N3yxfB2uR2oFUkp2MNMpSzhZ4uJEm3U1", "fee": 5100000, "type": 11, "transferCount": 100, "version": 1, "totalAmount": 500000000000, "attachment": "xZBWqm9Ddt5BJVFvHUaQwB7Dsj78UQ5HatQjD8VQKj4CHG48WswJxUUeHEDZJkHgt9LycUpHBFc8ENu8TF8vvnDJCgfy1NeKaUNydqy9vkACLZjSqaVmvfaM3NQB", "sender": "3P2rvn2Hpz6pJcH8oPNrwLsetvYP852QQ2m", "feeAssetId": null, "proofs": [ "FmGBaWABAy5bif7Qia2LWQ5B4KNmBnbXETL1mE6XEy4AAMjftt3FrxAa8x2pZ9ux391oY5c2c6ZSDEM4nzrvJDo" ], "assetId": "Fx2rhWK36H1nfXsiD4orNpBm2QG1JrMhx3eUcPVcoZm2", "transfers": [ { "recipient": "3PHnjQrdK389SbzwPEJHYKzhCqWvaoy3GQB", "amount": 5000000000 }, { "recipient": "3PGNLwUG2GPpw74teTAxXFLxgFt3T2uQJsF", "amount": 5000000000 }, { "recipient": "3P5kQneM9EdpVUbFLgefD385LLYTXY5J32c", "amount": 5000000000 }, ... ], "timestamp": 1528973951321, "height": 1041197 }

Mass Transfer Transaction JSON Representation Field Description assetId Token ID base58 encoded. null means DecentralCoins. attachment Arbitrary binary data (typically a comment to transfer) base58 encoded, up to $$140$$ bytes. transfers.recipient Recipient address base58 encoded or recipient alias with alias:<chain_id>: prefix, for example alias:T:merry (See Chain ID). transfers.amount Amount of token to transfer: an integer value specified in the minimum fraction (“cents”) of token. transferCount Number of recipients. totalAmount Total amount of transfers in transaction. The transferCount and totalAmount fields do not need to be filled when sending a transaction, and they are not stored on the blockchain. The node calculates these fields when providing transaction data via the REST API. The fields that are common to all types of transactions are described in the transaction article.

Binary Format Ride Structure The MassTransferTransaction structure is used for transaction handling in smart contracts.

##### Data Transaction

Data transaction adds, modifies and deletes data entries in the sender’s account data storage. Limitations are as follows:

• The maximum number of entries is $$100$$.
• For a transaction version 2 the maximum data size (keys + values) is $$165,890$$ bytes.
• For a transaction version 1 the maximum transaction size (except proofs) is $$153,600$$ bytes. Fee The minimum fee for a Data transaction is $$0.001$$ DecentralCoins per kilobyte, the size is rounded up to an integer number of kilobytes. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "38bYRUxFCaoa9h822nMnsoTX1qfczqtHJLgouNcNnd8h", "data": [ { "type": "boolean", "value": true, "key": "bool" }, { "type": "binary", "value": "base64:SGVsbG8gV2F2ZXM=", "key": "bin" }, { "type": "integer", "value": 1234567, "key": "int" }, { "type": "string", "value": "some text", "key": "str" } ], "sender": "3N4iKL6ikwxiL7yNvWQmw7rg3wGna8uL6LU", "feeAssetId": null, "proofs": [ "kE1hjN1yW68j8DsYGNB7Gg1ydC4hqRmt3wBaFQUPkftnbiM7QfJCn1gTHgveJ7pCLXvvqffhKBmiF8qS1Uqk6SR" ], "fee": 100000, "id": "3EPJuvQiJYiu9Y5g6mYDQgHVu8GFUfnZurHrVwwF1ViH", "type": 12, "version": 2, "timestamp": 1591351545000, "height": 1029815 } Data Transaction JSON Representation Field Description data.key Entry key. String, up to $$400$$ bytes for version 2, up to $$100$$ characters for version 1. data.type Entry type: 1) binary. 2) boolean. 3) integer. 4) string. 5) null – delete entry. data.value Entry value. Up to $$32,767$$ bytes. Binary value is base64 encoded. null – delete entry. The fields that are common to all types of transactions are described in the transaction article. Binary Format See the data transaction binary format. Ride Structure The DataTransaction structure is used for transaction handling in smart contracts. ##### Set Script Transaction Set script transaction assigns the dApp script dApp script or account script to the sender’s account. Fee The minimum fee for a Set Script transaction is $$0.001$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "sender": "3N9yCRmNsLK2aPStjLBne3EUiPSKvVHYgKk", "feeAssetId": null, "chainId": 84, "proofs": [ "2ihGFLUbvJHEpuGRqx5MXEXsEzwMuCmB8FgUTZgSPdANA4iab4M3nsNJ7a7hyiuqjrvwNCHoWn69hvUeziJiSAie" ], "fee": 1400000, "id": "28hbeFhYBq6uir1bbjt2dxbpqxCM2B6GKq4c7zf7AbkX", "type": 13, "version": 1, "timestamp": 1592408917668, "height": 1047736 } Set Script Transaction JSON Representation Field Description chainId Chain ID script Compiled script, base64 encoded. Account script up to $$8192$$ bytes, dApp script up to $$32,767$$ bytes. null – delete script. The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The SetScriptTransaction structure is used for transaction handling in smart contracts. ##### Invoke Script Transaction Invoke script transaction invokes the callable function of the dApp. In addition to the dApp address, callable function name, and arguments, the Invoke Script transaction can contain payments to dApp. The maximum number of payments is 10. Fee The sender can specify a transaction fee nominated in a sponsored asset instead of DecentralCoins, see the sponsored fee article. The minimum fee in DecentralCoins for an invoke script transaction is Fee $$= 0.005 + S + 1 × I$$. 
• If the transaction sender is a dApp or smart account, and that the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, then $$S = 0.004$$, otherwise $$S = 0$$. • $$I$$ is the number of issued assets that are not NFT. Total Complexity A dApp callable function can invoke a callable function of another dApp, or another callable function of the same dApp, or even itself. All invoked functions are executed within a single Invoke Script transaction. More about dApp-to-dApp invocation. The total complexity is limited by $$26,000$$ for all callable functions and asset scripts of involved smart assets in a single invoke script transaction. The sender’s account script complexity is not included in that limit. JSON Representation { "type": 16, "id": "DN9Ny8mph4tLjn58e9CqhckPymH9zwPqBSZtcv2bBi3u", "sender": "3Mw48B85LvkBUhhDDmUvLhF9koAzfsPekDb", "senderPublicKey": "BvJEWY79uQEFetuyiZAF5U4yjPioMj9J6ZrF9uTNfe3E", "fee": 500000, "feeAssetId": null, "timestamp": 1601652119485, "proofs": [ "2536V2349X3cuVEK1rSxQf3HneJwLimjCmCfoG1QyMLLq1CNp6dpPKUG3Lb4pu76XqLe3nWyo3HAEwGoALgBhxkF" ], "version": 2, "chainId": 84, "dApp": "3N28o4ZDhPK77QFFKoKBnN3uNeoaNSNXzXm", "payment": [], "call": { "function": "foo", "args": [ { "type": "list", "value": [ { "type": "string", "value": "alpha" }, { "type": "string", "value": "beta" }, { "type": "string", "value": "gamma" } ] } ] }, "height": 1203100, "applicationStatus": "succeeded", "stateChanges": { "data": [ { "key": "3Mw48B85LvkBUhhDDmUvLhF9koAzfsPekDb", "type": "string", "value": "alphabetagamma" } ], "transfers": [], "issues": [], "reissues": [], "burns": [], "leases": [], "leaseCancels": [], "invokes": [] } } Invoke Script Transaction JSON Representation Field Description call.function Callable function name. Up to $$255$$ bytes ($$1$$ character can take up to $$4$$ bytes). call.args.type Argument type: 1) binary. 2) boolean. 3) integer. 4) string. 5) list. call.args.value Argument value. 1) integer: from $$-9,223,372,036,854,775,808$$ to $$9,223,372,036,854,755,807$$ inclusive. 2) string or binary: up to $$32,767$$ bytes. Binary value should be base64 encoded. 3) list: up to $$1000$$ elements. dApp dApp address base58 encoded or dApp alias with alias:<chain_id>: prefix, for example alias:T:merry (See Chain ID). payment.amount Amount of token in payment: an integer value specified in atomic units. payment.assetId ID of token in payment, base58 encoded. null means that the payment is in DecentralCoin. stateChanges Script actions performed by the callable function and dApp-to-dApp invocation results. The stateChanges structure does not need to be filled when sending a transaction, and it is not stored on the blockchain. The node returns this structure when providing transaction data via the REST API. The fields that are common to all types of transactions are described in the transaction article. Binary Format Ride Structure The InvokeScriptTransaction structure is used for transaction handling in smart contracts. #### Network Network Transaction type ID Name Description $$8$$ Lease transaction Leases DecentralCoins. $$9$$ Lease cancel transaction Cancels the leasing. $$14$$ ##### Lease Transaction Lease transaction leases DecentralCoins to another account. After $$1000$$ block the leased tokens are accounted for by the recipient’s generating balance. The larger the generating balance of the node is, the higher the chances for that node to be selected to generate the next block. 
Commonly node owners share the reward for generated blocks with lessors. More about leasing. Leased tokens remain locked on the sender’s account with the full control of their owner. The sender can cancel the lease at any time by the lease cancel transaction. Fee The minimum fee for a lease transaction is $$0.001$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "senderPublicKey": "b8AB1PQWE7kH55cS48uDTV5fezrAyDTCf7iePyXNzNm", "amount": 500000000, "signature": "3n34MYd3Acx1JpTtvYffdVYCVySuRgZvSbHMA3AxqQwr4xvfZedv9UbqSB9k84PGY5C8RSwGRjDnMGcYwQu2x7B5", "fee": 100000, "type": 8, "version": 1, "feeAssetId": null, "proofs": [ "3n34MYd3Acx1JpTtvYffdVYCVySuRgZvSbHMA3AxqQwr4xvfZedv9UbqSB9k84PGY5C8RSwGRjDnMGcYwQu2x7B5" ], "recipient": "3P2HNUd5VUPLMQkJmctTPEeeHumiPN2GkTb", "id": "7k4EPgA3VxoE56TMJLjvF9FMpywyfeS5qRJSEEN9XGuU", "timestamp": 1528813353617, "status": "canceled", "height": 1038624 } Lease Transaction JSON Representation Field Description amount Amount of DecentralCoins to lease. recipient Recipient address base58 encoded or recipient alias. status Lease status: 1) active: lease is active. 2) canceled: lease is cancelled, see lease cancel transaction. The status field does not need to be filled when sending a transaction, and it is not stored on the blockchain. The node calculates these fields when providing transaction data via the REST API. The fields that are common to all types of transactions are described in the transaction article. Binary Format See the lease transaction binary format. Ride Structure The LeaseTransaction structure is used for transaction handling in smart contracts. ##### Lease Cancel Transaction Lease cancel transaction cancels the leasing. See the lease transaction article. Fee The minimum fee for a lease cancel transaction is $$0.001$$ DecentralCoins. If the transaction sender is a dApp or smart account, and the complexity of the account script or dApp script verifier function exceeds the sender complexity threshold, the minimum fee is increased by $$0.004$$ DecentralCoins. JSON Representation { "type": 9, "id": "6rzxZ3rEsCxgmkcn6DDPB9f9Phi28D4JWZsCtwcViD8C", "sender": "3Mx7kNAFcGrAeCebnt3yXceiRSwru6N3XZd", "senderPublicKey": "81fxJw7HM2VX1ucq1vNKiedM1XBGX7H2TDUtxN6ib68Z", "fee": 100000, "feeAssetId": null, "timestamp": 1622579112096, "proofs": [ "3eFnprsRSeczc371bQ7AUsbh6qjiUFze6y5BZGKbxyHG27K1cU6jVUgRdthYz9uWVw1FgVpLjMciGCb64rJnMp3k" ], "version": 2, "leaseId": "BhHPPHBZpfp8FBy8DE7heTpWGJySYg2uU2r4YM6qaisw", "chainId": 84, "height": 1551763, "applicationStatus": "succeeded", "lease": { "id": "BhHPPHBZpfp8FBy8DE7heTpWGJySYg2uU2r4YM6qaisw", "originTransactionId": "BhHPPHBZpfp8FBy8DE7heTpWGJySYg2uU2r4YM6qaisw", "sender": "3Mx7kNAFcGrAeCebnt3yXceiRSwru6N3XZd", "recipient": "3Mz9N7YPfZPWGd4yYaX6H53Gcgrq6ifYiH7", "amount": 124935000, "height": 1551763, "status": "canceled" } } Lease Cancel Transaction JSON Representation Field Description leaseId Lease transaction ID. chainId Chain ID lease Parameters of canceled lease. The lease structure does not need to be filled when sending a transaction, and it is not stored on the blockchain. The node returns this structure when providing transaction data via the REST API. The fields that are common to all types of transactions are described in the transaction article. 
Binary Format Ride Structure The LeaseCancelTransaction structure is used for transaction handling in smart contracts.

#### Genesis

Genesis

| Transaction type ID | Name | Description |
| --- | --- | --- |
| $$1$$ | Genesis transaction | Accrues DecentralCoins to an account upon the initial distribution during the creation of the blockchain. |

##### Genesis Transaction

Genesis transaction accrues DecentralCoins to an account upon the initial distribution of DecentralCoins during the creation of the blockchain. The first block of the blockchain, the genesis block, consists of genesis transactions.

Binary Format

### Transaction Validation

A DecentralChain node validates each transaction in the following cases:

• The node receives the transaction via the broadcast endpoint of Node extensions or gRPC server.
• The node receives the transaction from another node of the blockchain network using the binary protocol.
• The block generator adds the transaction to a block.
• The node receives a block (or microblock) from another node in the network.

Full transaction validation includes the following checks:

1. Transaction fields check, including:
   1. Timestamp check: the transaction timestamp should be not more than $$2$$ hours ago or $$1.5$$ hours ahead from the current block timestamp.
   2. Transaction version check: all the features required to support this version should be activated.
   3. Transaction type check: all the features required to support this type should be activated.
   4. Check of token amounts: the values must be non-negative.
   5. Check of other fields depending on the transaction type.
2. Sender’s balance check:
   1. The sender should have enough funds to pay the fee. If a sponsored asset is used for the fee, the sponsor’s balance is also checked.
   2. Depending on the type of transaction, the sender should have enough assets for transfer or for payments attached to the invoke script transactions. Order senders in the exchange transaction should have enough funds to exchange.
3. The sender’s signature verification:
   1. For ordinary accounts (without script).
   2. For account script execution if the sender is a smart account.
   3. For verifier function execution if the sender is a dApp.
   4. A similar check is performed for orders in an exchange transaction.
4. For the invoke script transaction:
   1. Calculation of the result of the dApp callable function.
   2. dApp balance check: the dApp account should have enough funds for dApp script actions.
   3. Check that the transaction fee is not less than the minimum fee based on script actions.
5. Execution of asset scripts if the transaction uses smart assets, including scripts of assets used in dApp script actions.

When receiving the transaction via the broadcast endpoint, or adding a transaction to a block, or receiving a block over the network, the node performs full validation of the transaction. When receiving an invoke script transaction over the network, the node performs calculations of the callable function (check 4.1) up to the threshold for saving unsuccessful transactions.

#### Validation Result

• If one of the checks fails, the transaction is discarded.
• If all the checks passed, the transaction is added to the UTX pool, which is the list of transactions waiting to be added to the block.

When adding the transaction to the block, the result of validation depends on the transaction type. For the invoke script transaction:

• If one of the checks 1–3 failed, the transaction is discarded.
• If checks 1–3 passed, and the calculation of the result of the dApp callable function (check 4.1) failed with an error or throwing an exception before the complexity of performed calculations exceeded the threshold for saving failed transactions, the transaction is also discarded. • If checks 1–3 passed but checks 4–5 failed and besides the result of the callable function is calculated successfully or the complexity exceeded the threshold, the transaction is saved on the blockchain but marked as failed: “applicationStatus”: “script_execution_failed”. The sender is charged the transaction fee. The transaction doesn’t entail any other changes to the state of the blockchain. • If all checks passed, the transaction is saved on the blockchain as successful: “applicationStatus”: “succeeded” and the sender is charged the fee. For the exchange transaction: • If one of the checks 1–3 failed, the transaction is discarded. • If checks 1–3 passed but check 5 failed, the transaction is saved on the blockchain but marked as failed: “applicationStatus”: “script_execution_failed”. The sender of the transaction (matcher) is charged the transaction fee. The transaction doesn’t entail any other changes in balances, in particular, the order senders don’t pay the matcher fee. • If all checks passed, the transaction is saved on the blockchain as successful: “applicationStatus”: “succeeded”. The matcher is charged the transaction fee as well as the order senders are charged the matcher fee. For the other transactions: • If one of the checks fails, the transaction is discarded. • If all checks passed, the transaction is saved on the blockchain as successful and the sender is charged the fee. ## Block A block is a link in the chain of the blockchain. Block contains transactions. A block has its height. The maximum block size is $$1$$ MB. The maximum total complexity of scripts in transactions of the block is $$2,500,000$$. The complexity of all executed scripts is taken into account: dApp scripts, account scripts, and asset scripts. ### Block Generation A block generation is a creation of a new block on the blockchain. Blocks are generated by generating nodes according to FPoS algorithm and the DecentralChain-M5 protocol. The block generator signs the block headers only. The block headers contain the merkle root hash of the block transactions. This makes it possible to verify the block headers apart from transactions and to provide evidence of the presence of transactions in the block without the presence of all transactions. See details in the transactions root hash article. #### Base Target The base target is the variable in the average block generation time formula that adjusts block generation time to $$60$$ seconds. #### Generation Signature Generation signature is the variable in the average block generation time formula. It is used to check whether the current generating node is eligible to generate the next block. The generation signature is calculated using VRF (verifiable random function with short proofs and keys) — a pseudo-random function that uses a message and the private key of an account to provide a non-interactively verifiable proof for the correctness of its output. This improvement allows resisting stake grinding attacks aimed at influencing block generation randomness to skip miner’s opportunity to create a block. The use of VRF makes signature generation unpredictable because of the need to know the private key for calculation. 
Only the holder of the private key can compute the hash, but verifying the correctness of the hash using the public key from the block header is available to anyone. The VRF contains the calculateVRF function, which calculates a proof for some message, and the verifyVRF function, which verifies a proof from the calculateVRF function with a message and the public key of the signer. Considering that a block’s generation signature is equal to the calculateVRF output for the previous generation signature with the account private key sk (of the generator of the $$i+1$$ th block):

$$generationSignature_{i+1} = VRF_{proof} = calculateVRF_{sk}(VRF_i)$$

The output of the calculateVRF function is a VRF proof, which means that the validity of the signature can be checked. The output of the function $$verifyVRF(pk_i, generationSignature_i)$$ is used to define the time delay between the $$i+99$$ and $$i+100$$ blocks for a concrete block generator.

### Block Height

The block height is a sequence number of a block in the blockchain.

### Block Signature

A block signature is a hash that a generating node acquires when it signs the generated block with the private key of the account from the node’s wallet.

### Block Timestamp

A block timestamp is a time of block generation. The time is specified in milliseconds that have passed since the beginning of the unix epoch. When the node receives a new block from the blockchain network, it verifies that the timestamp value of the block does not outpace the UTC time by more than $$100$$ milliseconds. The timestamp value of the block is validated by nodes using the formula from FPoS.

### Genesis Block

A genesis block is the first block of the blockchain. A genesis block contains one or more genesis transactions. There is one genesis block in the blockchain.

### Transactions Root Hash

The transactionsRoot field in the block header contains the root hash of the Merkle tree of transactions of the block. The root hash is the proof that the block contains all the transactions in the proper order. The transactions root hash in the block header has the following purposes:

• To prove the integrity of transactions in the block without presenting all transactions.
• To sign the block header only, separately from its transactions.

#### transactionsRoot Calculation

1. The hash of each transaction in the block is calculated. For example:
   • $$H_A$$ = hash($$T_A$$)
   • $$H_B$$ = hash($$T_B$$)
2. Each pair of adjacent hashes is concatenated, and the hash is calculated for each resulting concatenation:
   • $$H_{AB}$$ = hash($$H_A$$ + $$H_B$$)
   • If the last hash does not have a pair, it is concatenated with the zero byte hash: $$H_{GH}$$ = hash($$H_G$$ + hash(0))
3. Step 2 is repeated until the root hash is obtained:
   • $$H_{ABCDEFGH}$$
   • The root hash is written in the transactionsRoot field.

If the block is empty, then transactionsRoot = hash(0). The DecentralChain blockchain uses the BLAKE2b-256 hashing function.

#### Proof of Transaction in Block

Let’s suppose that side $$1$$ stores the full blockchain data and side $$2$$ stores the block headers only. To prove that the block contains a given transaction, side $$1$$ provides the following data:

• T: Transaction to check.
• merkleProofs: Array of sibling hashes of the Merkle tree, bottom-to-top.
• index: Index of the transaction in the block.

[Figure: Proof of Transaction in Block]
For example, for the $$T_D$$ transaction:

• merkleProofs = [ $$H_C$$, $$H_{AB}$$, $$H_{EFGH}$$ ]
• index = $$3$$

Side 2 checks the proof:

1. It calculates the hash of the transaction being checked (all the transaction data is hashed, including the signature): $$H_D$$ = hash($$T_D$$)
2. It concatenates the current hash with the corresponding hash of the merkleProofs array and calculates the hash of the concatenation. index determines in which order to concatenate the hashes:
   • If the nth bit of index from the end is $$0$$, then the order is: the current hash + the nth hash of the merkleProofs array (proof hash is on the right).
   • If the nth bit is $$1$$, the order is: the nth hash of the merkleProofs array + the current hash (proof hash is on the left).
   For example, index = $$3_{10}$$ = $$11_2$$, thus:
   • merkleProofs[0] = $$H_{C}$$ is on the left,
   • merkleProofs[1] = $$H_{AB}$$ is on the left,
   • merkleProofs[2] = $$H_{EFGH}$$ is on the right.
3. It repeats step 2 until the root hash is obtained: $$H_{ABCDEFGH}$$
4. It compares the root hash obtained with the already known transactionsRoot from the block header. If the hashes match, then the transaction exists in the block.

#### Tools

The following Node API methods accept transaction IDs and provide the proof that the transaction is in a block for each transaction:

• GET /transactions/merkleProof
• POST /transactions/merkleProof

The methods are described in the transaction article. You can check a transaction on the same blockchain without using a root hash, since the DecentralChain nodes store the entire blockchain data, including all transactions. Use the following built-in Ride function:

transactionHeightById(id: ByteVector): Int|Unit

The function returns the block height if the transaction with the specified ID exists. Otherwise, it returns a unit. See the function description in the blockchain functions article. To check a transaction in a block on an external blockchain you can use the following built-in Ride function:

createMerkleRoot(merkleProofs: List[ByteVector], valueBytes: ByteVector, index: Int): ByteVector

This function is applicable if the external blockchain uses the same algorithm for calculating the root hash of transactions. The createMerkleRoot function calculates the root hash from the transaction hash and sibling hashes of the merkle tree (see Steps 1–3). To check a transaction in a block, compare the calculated root hash with the transactionsRoot value in the block header.
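The following is a minimal sketch of the proof check just described, assuming the transaction is already serialized to bytes and using BLAKE2b-256 as stated above. It follows the steps as written in this section; the real node implementation may differ in serialization details.

```python
import hashlib

def blake2b256(data: bytes) -> bytes:
    # The document states the transactions root uses BLAKE2b-256.
    return hashlib.blake2b(data, digest_size=32).digest()

def verify_merkle_proof(tx_bytes: bytes, merkle_proofs: list[bytes],
                        index: int, transactions_root: bytes) -> bool:
    """Follow steps 1-4 above to check that a transaction is in the block."""
    current = blake2b256(tx_bytes)                   # step 1: hash the transaction
    for n, sibling in enumerate(merkle_proofs):      # steps 2-3: fold in sibling hashes
        if (index >> n) & 1 == 0:
            current = blake2b256(current + sibling)  # n-th bit is 0: proof hash on the right
        else:
            current = blake2b256(sibling + current)  # n-th bit is 1: proof hash on the left
    return current == transactions_root              # step 4: compare with the block header

# Self-check on a four-leaf tree built as in the calculation section:
leaves = [blake2b256(bytes([i])) for i in range(4)]
h01, h23 = blake2b256(leaves[0] + leaves[1]), blake2b256(leaves[2] + leaves[3])
root = blake2b256(h01 + h23)
assert verify_merkle_proof(bytes([2]), [leaves[3], h01], index=2, transactions_root=root)
```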
## Node

A node is a host connected to the blockchain network. Its main functions, block generation and transaction validation, are described below.

### Generating Node

A generating node is a node that generates blocks. Each generating node is a validating node. The generating account is an account that a node uses for signing generated blocks. A node can generate blocks if the following conditions are met:

• The node’s generating balance is at least $$10000$$ DecentralCoins. This means that the account balance in DecentralCoins, taking into account leasing, was not less than $$10000$$ DecentralCoins in each of the last $$1000$$ blocks (more details in the account balance article). The greater the generating balance, the higher is your chance of being eligible to generate the next block.
• The node’s account is not a smart account or dApp.
• Block generation is not disabled in node settings. By default, block generation is enabled.
• The node is connected to at least the number of peers specified in the required parameters ($$1$$ by default).

### Validating Node

A validating node is a node that validates transactions.

### Generator’s Income

A node’s income from adding a new block to the blockchain consists of the following amounts:

1. Block reward: the current reward size is $$6$$ DecentralCoins but it can be changed by voting, see the block reward article.
2. $$40\%$$ of the total transaction fees in the current block. The exact value is calculated as follows:
   • $$\sum_{i} 2 * (\frac{f_i}{5})$$
   • Here $$f_i$$ is the fee for the $$i$$-th transaction. For each transaction fee, an integer division by $$5$$ is performed, then a multiplication by $$2$$, and finally they are summed up.
3. $$60\%$$ of the total transaction fees in the previous block.
   • $$\sum_{i} (f_i - 2 * (\frac{f_i}{5}))$$
   • The block generator receives exactly the part of the fee that the previous block generator did not receive.

If the transaction fees are specified in a sponsored asset, then the block generators receive the fee equivalent in DecentralCoins instead of the fee (as a general rule, in a $$\frac{40}{60}$$ ratio):

feeInDecentralCoins = feeInSponsoredAsset × 0.001 / minSponsoredAssetFee

minSponsoredAssetFee is the amount of the sponsored asset equivalent to $$0.001$$ DecentralCoins. The sponsor sets this value when enabling sponsorship. For details, see the sponsored fees article.
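A small sketch of the 40/60 fee split described above, using the stated integer division by $$5$$; values and the function name are illustrative only.

```python
# Sketch of the 40/60 fee split described above (integer division by 5, as stated).
def fee_split(fees_in_block: list[int]) -> tuple[int, int]:
    """Return (part paid to this block's generator, part deferred to the next one)."""
    current = sum(2 * (f // 5) for f in fees_in_block)       # ~40%, current block
    deferred = sum(f - 2 * (f // 5) for f in fees_in_block)  # ~60%, goes to the next generator
    return current, deferred

# A block containing a 0.001 and a 0.005 DecentralCoin fee (values in Decentralites):
assert fee_split([100_000, 500_000]) == (240_000, 360_000)
```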
### Block Reward

Block reward is a blockchain feature under which generating nodes receive a fixed fee in DecentralCoins for each generated block. Block rewards are paid due to the additional issue of the DecentralCoin token. The community of generating nodes can change the size of the reward through voting.

#### Current Reward Size

You can view the current reward size by making a request to the Node REST API. In response to the request, a JSON file is returned, the value of the currentReward field of which is the current block reward size in Decentralites. Example of response:

{ "height": 1742254, "totalDecentralCoinsAmount": 10001353000000000, "currentReward": 600000000, "minIncrement": 50000000, "term": 100000, "nextCheck": 1839999, "votingIntervalStart": 1830000, "votingInterval": 10000, "votingThreshold": 5001, "increase": 0, "decrease": 0 }

In the example above, the value of the JSON’s currentReward field is 600,000,000 Decentralites, i.e. 6 DecentralCoins.

#### The Change of Block Reward Size Over Time

Every $$100,000$$ blocks, i.e. approximately every $$70$$ days, a new voting for the current reward size change begins among the generators. The voting duration is $$10,000$$ blocks. During this time, generating nodes vote to increase, decrease or leave the current reward size unchanged. The elected reward size remains unchanged for $$100,000$$ blocks following the end of voting.

#### Voting

A generating node specifies the new desired reward size via settings in the node configuration file; the setting value is specified in Decentralites. If the value is greater than the current reward size, then the generator votes for the current reward size increase; if the value is smaller, for the decrease. If the setting value is not specified in the configuration file, then the generator votes for keeping the current reward size. When a node generates a block, it writes into that block the value of the desired reward size specified in the setting from its own node configuration file. If the setting value is not specified in the configuration file, then $$-1$$ is written to the block.

During the voting time of $$10,000$$ blocks, a single node can generate several blocks, therefore one node can vote several times. How often a node generates blocks is determined by the LPoS consensus. To count the votes, all $$10,000$$ blocks generated during the voting period are inspected. If either $$-1$$ or a value equal to the current reward size is recorded in the block, then the generator votes for keeping the current reward size. If the value recorded in the block is greater than the current reward size, then the generator votes for the current reward size increase; if the value is smaller, for the decrease. The block reward is increased/decreased only if more than half of the $$10,000$$ votes, i.e. $$5,001$$ votes or more, were given for increase/decrease. The amount of the current reward is increased/decreased by $$0.5$$ DecentralCoins.

Example 1

At the blockchain height of $$2,000,000$$, the block reward equals $$5$$ DecentralCoins. At the height of $$2,090,000$$, another voting starts. During the $$10,000$$ blocks of voting, $$6,000$$ votes were given for reward increase, $$1,000$$ for decrease, and $$3,000$$ for keeping the current reward size. From the height of $$2,100,000$$ to the height of $$2,199,999$$, the new reward size will be $$5.5$$ DecentralCoins, because the reward change step is $$0.5$$ DecentralCoins. The next voting will take place from the height of $$2,190,000$$ to $$2,199,999$$.

Example 2

At the blockchain height of $$2,100,000$$, the block reward equals $$5.5$$ DecentralCoins. At the height of $$2,190,000$$, another voting starts. During the $$10,000$$ blocks of voting, $$4,500$$ votes were given for reward increase, $$4,000$$ for decrease, and $$1,500$$ for keeping the current reward size. From the height of $$2,200,000$$ to the height of $$2,299,999$$, the “new” reward size will be the same, $$5.5$$ DecentralCoins. Although the highest number of votes were given for the reward increase, it was not enough to change the current reward size. In order for the current reward size to be increased, at least $$5,001$$ votes must be given for the increase. The next voting will take place from the height of $$2,290,000$$ to $$2,299,999$$.

### Leased Proof of Stake

Leased Proof of Stake (LPoS) is an enhanced type of proof of stake consensus algorithm by which the DecentralChain blockchain network aims to achieve the distributed consensus to secure the network.

#### Leasing Benefits for the Node Owner

Nodes can use the leased tokens to generate blocks and get the mining reward. For that purpose, the generating balance of a node must be at least $$10000$$ DecentralCoins.

#### Leasing Benefits for the Token Holder

LPoS allows the token holders to lease their tokens to the DecentralChain nodes and earn a percentage of the payout as a reward. By using LPoS, lessors will be able to participate in the process of generating new blocks because the larger the amount that is leased to a DecentralChain node, the higher the chances for that node to be selected to generate the next block. If that node is selected, then the lessor will receive a reward. When the user starts leasing the tokens, those leased tokens are locked and remain in the same address with the full control of their owner (they are not transferred to the node, they just remain unspendable until the lease is canceled by the lessor).
The only thing to consider when leasing is to choose the right node operator, as the operator’s node may work with different efficiency and send back different percentages as rewards.

##### Rewards

• The node owner may send the lessor a part of the rewards according to his conditions.
• The more transactions that are made on the network, the more rewards the lessors get.
• These rewards mostly are in DecentralCoins, but they can also be in the form of different tokens, thanks to the unique DecentralCoins feature where different tokens can be accepted as a fee.

#### LPoS Transactions

To start leasing, the token holder needs to create a lease transaction and specify the recipient address (node address) along with the amount of DecentralCoins to lease. There are two types of transactions which are used in the LPoS: the lease transaction and the lease cancel transaction.

#### Create a Lease

You can use Decentral.Exchange online to create a lease.

• Make sure you are logged into your account. On the main screen navigate to Wallet > Leasing.
• On the next screen click Start Lease and then select the recipient from the list of nodes and indicate the amount you want to lease.
• Verify all the information and click Start Lease again to confirm.

## Order

Order is the instruction from the account to the matcher to buy or sell a token on the exchange.

### Asset Pair

Each order contains an amount asset / price asset pair, also called an asset pair.

Example

"assetPair": { "amountAsset": "3QvxP6YFBKpWJSMAfYtL8Niv8KmmKsnpb9uQwQpg8QN2", "priceAsset": "null" }

Asset Pair Fields Field name Description amountAsset ID of the pair’s first asset, that the order’s sender wants to buy or sell. priceAsset ID of the pair’s second asset, in which the price of the order is expressed. null value means that asset is DecentralCoins.

### Order’s Amount and Price

In the user interface, the amount and price are usually presented as values with a fractional part (for example, $$0.74585728$$ DecentralCoins), i.e. in the denormalized form. The denormalized form is convenient for humans, but not for calculations. To solve the problem of calculation accuracy, normalization is performed, i.e. amount and price are represented as an integer. So, $$0.74585728$$ DecentralCoins is $$0.74585728 × 10^{8}$$ or $$74585728$$ Decentralites. In this case, the exponent is $$8$$, because DecentralCoins has $$8$$ decimals after the decimal point. Other assets may have a different number of decimals. For example, TDX has 2 decimals.

#### Amount

Consider buying $$2.13$$ TDX at the price of $$0.35016774$$ DecentralCoins for one TDX. Here the asset pair is TDX / DecentralCoins. The amount in the order is the number of units sold or bought in conventional “pennies”. This value in the current case is $$213$$, since $$2.13$$ TDX $$= 2.13 × 10^{2}$$ = 213 “pennies” of TDX. So, to bring the amount to the normalized form, it is multiplied by $$10^{amountAssetDecimals}$$.

#### Price

Price is the value of 1 unit of the amount asset, expressed in the price asset. In the TDX / DecentralCoins example above, this is the price in DecentralCoins for 1 TDX.
To normalize price, it is multiplied by:

• In orders of versions 1, 2, 3: $$10^{(8 + priceAssetDecimals - amountAssetDecimals)}$$.
• In orders of version 4: $$10^{8}$$.

The exponent of $$8$$ is selected because no asset on the DecentralChain blockchain can have more than $$8$$ decimals. The matcher algorithm has a limitation in relation to price: the last N digits of the normalized price must be zeros (N is price_decimals minus amount_decimals). If this is not so, then the matcher rejects the order on placement.

#### Price Asset Quantity Calculation

The quantity of the price asset in normalized form, which:

• will be given by the sender if the order is BUY,
• will be acquired by the sender if the order is SELL,

is calculated by the following formula:

• In orders of versions 1, 2, 3: amount × price × $$10^{-8}$$.
• In orders of version 4: amount × price × $$10^{(priceAssetDecimals - amountAssetDecimals - 8)}$$.

If the result of the calculation is a value with a fractional part, then the fractional part is discarded. Designations in the above formulas:

• amount: the amount in normalized form.
• price: the price in normalized form.
• priceAssetDecimals: the number of decimal places of the price asset.
• amountAssetDecimals: the number of decimal places of the amount asset.
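A short worked example of the normalization and quantity rules above, using the TDX / DecentralCoins figures from the Amount section (TDX has $$2$$ decimals, DecentralCoins has $$8$$); the variable names are illustrative only.

```python
# Worked example of the normalization rules above for the TDX / DecentralCoins pair
# from the Amount section (TDX has 2 decimals, DecentralCoins has 8).
amount_asset_decimals = 2          # TDX
price_asset_decimals = 8           # DecentralCoins

amount = 213                       # 2.13 TDX, normalized: 2.13 * 10^2

# Normalized price of 0.35016774 DecentralCoins per 1 TDX:
price_v1_3 = 35_016_774_000_000    # 0.35016774 * 10^(8 + 8 - 2), order versions 1-3
price_v4 = 35_016_774              # 0.35016774 * 10^8, order version 4

# Price asset quantity (the fractional part is discarded):
quantity_v1_3 = amount * price_v1_3 // 10 ** 8
quantity_v4 = amount * price_v4 // 10 ** (8 + amount_asset_decimals - price_asset_decimals)
assert quantity_v1_3 == quantity_v4 == 74_585_728   # 0.74585728 DecentralCoins, as above
```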
The point is that it is important for decentralized applications to access only data that is stored on the blockchain, so that every execution of a script leads to the same result at a given point in time. Therefore, decentralized applications are not able to access data from outside the blockchain, e.g. data provided by web services or other external sources. Nevertheless, many interesting applications need access to the outside world, e.g. decentralized applications for insurance, decentralized betting systems, financial services and so forth.

Here, the solution is quite straightforward: if external data is necessary for the execution of a decentralized application, this data needs to be stored on the blockchain. To achieve this, small programs are usually implemented that access the necessary data and write it to the blockchain. These small programs are called oracles.

### Consensus of Oracles

A single source may be unsafe if it does not have authority or a high rating. However, several oracles can be used to stay away from a monopoly and be safer. For example, information can be requested from ten oracles and accepted only if the data of $$6$$ out of $$10$$ oracles coincide. This is the consensus of the oracles.

## Mainnet, Testnet, Stagenet

### Connecting Node to Blockchain Network

You can launch your node in any blockchain network. Select the network in the node configuration file.

• For installing a node, see the install DecentralChain node article.
• For starting your own blockchain network, see the custom blockchain article.

### Chain ID

The chain ID is a symbol that is passed over a network during a handshake and prevents nodes from connecting to the nodes of other networks. The chain ID is used while building account addresses; therefore, an address on one blockchain network cannot be used on another network. The chain ID is also indicated in transactions, so it is impossible to move transactions between different blockchain networks.

Chain ID Blockchain Network Chain ID Mainnet W or $$87$$ (ASCII code of W). Testnet T or $$84$$ (ASCII code of T). Stagenet S or $$83$$ (ASCII code of S).

### Tools

#### API of Pool of Public Nodes

Mainnet https://mainnet-node.decentralchain.io Testnet https://testnet-node.decentralchain.io Stagenet TBA

#### Data Service API

Mainnet https://data-service.decentralchain.io Testnet TBA Stagenet TBA

#### Decentral.Exchange

Decentral.Exchange is a decentralized exchange.

Mainnet https://decentral.exchange/ Testnet TBA Stagenet TBA

#### API of Decentral.Exchange Matcher

The addresses for sending orders and obtaining market data are as follows:

Mainnet https://mainnet-matcher.decentralchain.io/api-docs/index.html Testnet https://matcher.decentralchain.io/api-docs/index.html Stagenet TBA

#### DecentralChain Explorer

DecentralChain Explorer is a service for browsing blockchain data.

Mainnet Go to http://decentralscan.com/ and click the three lines, then switch to Mainnet. Testnet Go to http://decentralscan.com/ and click the three lines, then switch to Testnet.
Stagenet Go to http://decentralscan.com/ and click the three lines, then switch to Stagenet.

#### Faucet: Obtaining Tokens

Mainnet Testnet TBA Stagenet TBA

## Protocols & Data Formats

### Cryptographic Practical Details

#### Description

This section describes all the details of the cryptographic algorithms which are used to:

• Create private and public keys from a seed.
• Create addresses from public keys.
• Sign blocks and transactions.

We use the Blake2b256 and Keccak256 algorithms (in the form of a hash chain) to create cryptographic hashes, Curve25519 (ED25519 with X25519 keys) to create and verify signatures, and Base58 to create the string form of bytes.

#### Bytes Encoding Base58

All arrays of bytes in the project are encoded by the Base58 algorithm with the Bitcoin alphabet to make them easily human-readable.

Example

The string teststring is coded into the bytes $$[5, 83, 9, -20, 82, -65, 120, -11]$$. The bytes $$[1, 2, 3, 4, 5]$$ are coded into the string 7bWpTW.

#### Creating a Private Key From a Seed

A seed string is a representation of entropy, from which you can deterministically re-create all the private keys for one wallet. It should be long enough that the probability of guessing it is negligibly small.

In fact, the seed should be an array of bytes, but for ease of memorization the lite wallet uses Brainwallet, so that the seed is made up of words and is easy to write down or remember. The application takes the UTF-8 bytes of the string and uses them to create keys and addresses.

For example, the seed string

manage manual recall harvest series desert melt police rose hollow moral pledge kitten position add

After reading this string as UTF-8 bytes and encoding them to Base58, the string will be coded as:

xrv7ffrv2A9g5pKSxt7gHGrPYJgRnsEMDyc4G7srbia6PhXYLDKVsDxnqsEqhAVbbko7N1tDyaSrWCZBoMyvdwaFNjWNPjKdcoZTKbKr2Vw9vu53Uf4dYpyWCyvfPbRskHfgt9q

A seed string is involved in the creation of private keys. To create a private key using the official web wallet or the node, a $$4$$-byte integer 'nonce' field (big-endian representation), which initially has a value of $$0$$ and increases every time you create a new address, is prepended to the seed bytes. Then this array of bytes is used to calculate the hash keccak256(blake2b256(bytes)). The resulting array of bytes is called the account seed; from it you can deterministically generate one private and public key pair. This hash is then passed to the key pair creation method of the Curve25519 algorithm.

DecentralChain uses the Curve25519-ED25519 signature with X25519 keys (Montgomery form), but most embedded cryptography devices and libraries don't support X25519 keys. There are libraries with conversion functions from ED25519 keys to X25519 (Curve25519):

• crypto_sign_ed25519_pk_to_curve25519(curve25519_pk, ed25519_pk) for the public key.
• crypto_sign_ed25519_sk_to_curve25519(curve25519_sk, ed25519_skpk) for the private key.

NOTE: Not all random $$32$$ bytes can be used as private keys (but any bytes of any size can be a seed). The ED25519 signature scheme introduces restrictions on the keys, so create keys only through the methods of the Curve25519 libraries, and be sure to test the ability to sign data with a private key and then verify it with the public key, however obvious this test might seem.
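The hash chain described above can be sketched in Python as follows. This is a minimal illustration, not an official client: it assumes the third-party pycryptodome and base58 packages, and the final Curve25519 key pair generation is left to a dedicated library.

```python
# Minimal sketch of the account seed derivation described above.
# Assumes `pip install pycryptodome base58`; illustrative only.
import hashlib
import struct

import base58
from Crypto.Hash import keccak

def blake2b256(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_size=32, data=data).digest()

def account_seed(seed: str, nonce: int = 0) -> bytes:
    # A 4-byte big-endian nonce is prepended to the UTF-8 seed bytes,
    # then the keccak256(blake2b256(...)) hash chain is applied.
    data = struct.pack(">I", nonce) + seed.encode("utf-8")
    return keccak256(blake2b256(data))

seed = ("manage manual recall harvest series desert melt police rose "
        "hollow moral pledge kitten position add")
print(base58.b58encode(account_seed(seed)).decode())
# The resulting account seed bytes are then fed into a Curve25519 library
# (optionally after SHA-256 hashing, depending on the library) to obtain
# the private/public key pair.
```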
Valid Curve25519 implementations exist for different languages. Some Curve25519 libraries (such as the one used in our project) have SHA-256 hashing integrated, and some do not (such as most C/C++/Python libraries), so you may need to apply it manually. Note that the private key is clamped, so not every random $$32$$ bytes can be a valid private key.

Example

Brainwallet seed string

manage manual recall harvest series desert melt police rose hollow moral pledge kitten position add

As UTF-8 bytes encoded

xrv7ffrv2A9g5pKSxt7gHGrPYJgRnsEMDyc4G7srbia6PhXYLDKVsDxnqsEqhAVbbko7N1tDyaSrWCZBoMyvdwaFNjWNPjKdcoZTKbKr2Vw9vu53Uf4dYpyWCyvfPbRskHfgt9q

Account seed bytes with nonce $$0$$ before applying the hash function, in Base58

1111xrv7ffrv2A9g5pKSxt7gHGrPYJgRnsEMDyc4G7srbia6PhXYLDKVsDxnqsEqhAVbbko7N1tDyaSrWCZBoMyvdwaFNjWNPjKdcoZTKbKr2Vw9vu53Uf4dYpyWCyvfPbRskHfgt9q

blake2b256(account seed bytes)

6sKMMHVLyCQN7Juih2e9tbSmeE5Hu7L8XtBRgowJQvU7

Account seed (keccak256(blake2b256(account seed bytes)))

H4do9ZcPUASvtFJHvESapnxfmQ8tjBXMU7NtUARk9Jrf

Account seed after SHA-256 hashing (optional, if your library does not do it itself)

49mgaSSVQw6tDoZrHSr9rFySgHHXwgQbCRwFssboVLWX

Created private key

3kMEhU5z3v8bmer1ERFUUhW58Dtuhyo9hE5vrhjqAWYT

Created public key

HBqhfdFASRQ5eBBpu2y6c6KKi1az6bMx8v1JxX4iW1Q8

#### Creating Address from a Public Key

The network address obtained from the public key depends on the chainID byte ('T' for Testnet, 'W' for Mainnet, 'S' for Stagenet), so different networks produce different addresses for a single seed (and hence for the same public key).

Example

For the public key:

HBqhfdFASRQ5eBBpu2y6c6KKi1az6bMx8v1JxX4iW1Q8

Created address:

3PPbMwqLtwBGcJrTA5whqJfY95GqnNnFMDX
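The address layout itself (entity type $$0x01$$, chain ID byte, a $$20$$-byte public key hash and a $$4$$-byte checksum, as detailed in the address binary format section later in this document) can be assembled with the same hash helpers. This is an illustrative sketch only, assuming pycryptodome and base58 as before.

```python
# Minimal sketch of address derivation from a public key, following the
# address binary format described later in this document. Illustrative only.
import hashlib

import base58
from Crypto.Hash import keccak

def blake2b256(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_size=32, data=data).digest()

def address_from_public_key(public_key: bytes, chain_id: bytes = b"W") -> str:
    pk_hash = keccak256(blake2b256(public_key))[:20]   # 20-byte public key hash
    payload = b"\x01" + chain_id + pk_hash             # entity type + chain ID + hash
    checksum = keccak256(blake2b256(payload))[:4]      # 4-byte checksum
    return base58.b58encode(payload + checksum).decode()

public_key = base58.b58decode("HBqhfdFASRQ5eBBpu2y6c6KKi1az6bMx8v1JxX4iW1Q8")
print(address_from_public_key(public_key))  # Mainnet ('W') address
```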
#### Signing

Curve25519 is used for all the signatures in the project. The process is as follows: create the special bytes for signing for a transaction or block, then create a signature using these bytes and the private key bytes. For the validation of a signature, the signature bytes, the signed object bytes and the public key are enough.

Do not forget that there are many valid (not unique!) signatures for one array of bytes (block or transaction). Also, you should not assume that the ID of a block or transaction is unique. Collisions can occur one day! They have already taken place for some weak keys.

Example

Transaction Data:

Transaction Data Field Value Sender address (not used, just for information) 3N9Q2sdkkhAnbR4XCveuRaSMLiVtvebZ3wp Private key (used for signing, not in tx data) 7VLYNhmuvAo5Us4mNGxWpzhMSdSSdEbEPFUDKSnA6eBv Public key EENPV1mRhUD9gSKbcWt84cqnfSGQP5LkCu5gMBfAanYH Recipient address 3NBVqYXrapgJP9atQccdBPAgJPwHDKkh6A8 Asset id BG39cCNUFWPQYeyLnu7tjKHaiUGRxYwJjvntt9gdDPxG Amount $$1$$ Fee $$1$$ Fee asset id BG39cCNUFWPQYeyLnu7tjKHaiUGRxYwJjvntt9gdDPxG Timestamp $$1479287120875$$ Attachment (as byte array) $$[1, 2, 3, 4]$$

Bytes:

Bytes # Field name Type Position Length Value Base58 bytes value $$1$$ Transaction type (0x04) Byte $$0$$ $$1$$ $$4$$ $$5$$ $$2$$ Sender's public key Bytes $$1$$ $$32$$ EENPV1mRhUD9gSKbcWt84cqnfSGQP5LkCu5gMBfAanYH $$3$$ Amount's asset flag (0-DecentralCoins, 1-Asset) Byte $$33$$ $$1$$ $$1$$ $$2$$ $$4$$ Amount's asset ID (*if used) Bytes $$34$$ $$0 (32*)$$ BG39cCNUFWPQYeyLnu7tjKHaiUGRxYwJjvntt9gdDPxG $$5$$ Fee's asset flag (0-DecentralCoins, 1-Asset) Byte $$34 (66*)$$ $$1$$ $$1$$ $$2$$ $$6$$ Fee's asset ID (**if used) Bytes $$35 (67*)$$ $$0 (32**)$$ BG39cCNUFWPQYeyLnu7tjKHaiUGRxYwJjvntt9gdDPxG $$7$$ Timestamp Long $$35 (67*) (99**)$$ $$8$$ $$1479287120875$$ 11frnYASv $$8$$ Amount Long $$43 (75*) (107**)$$ $$8$$ $$1$$ $$11111112$$ $$9$$ Fee Long $$51 (83*) (115**)$$ $$8$$ $$1$$ $$11111112$$ $$10$$ Recipient address Bytes $$59 (91*) (123**)$$ $$26$$ 3NBVqYXrapgJP9atQccdBPAgJPwHDKkh6A8 $$11$$ Attachment's length (N) Short $$85 (117*) (149**)$$ $$2$$ $$4$$ $$15$$ $$12$$ Attachment's bytes Bytes $$87 (119*) (151**)$$ N $$[1,2,3,4]$$ 2VfUX

Total data bytes for sign

Ht7FtLJBrnukwWtywum4o1PbQSNyDWMgb4nXR5ZkV78krj9qVt17jz74XYSrKSTQe6wXuPdt3aCvmnF5hfjhnd1gyij36hN1zSDaiDg3TFi7c7RbXTHDDUbRgGajXci8PJB3iJM1tZvh8AL5wD4o4DCo1VJoKk2PUWX3cUydB7brxWGUxC6mPxKMdXefXwHeB4khwugbvcsPgk8F6YB

Signature of transaction data bytes (one of an infinite number of valid signatures)

2mQvQFLQYJBe9ezj7YnAQFq7k9MxZstkrbcSKpLzv7vTxUfnbvWMUyyhJAc1u3vhkLqzQphKDecHcutUrhrHt22D

Total transaction bytes with signature:

6zY3LYmrh981Qbzj7SRLQ2FP9EmXFpUTX9cA7bD5b7VSGmtoWxfpCrP4y5NPGou7XDYHx5oASPsUzB92aj3623SUpvc1xaaPjfLn6dCPVEa6SPjTbwvmDwMT8UVoAfdMwb7t4okLcURcZCFugf2Wc9tBGbVu7mgznLGLxooYiJmRQSeAACN8jYZVnUuXv4V7jrDJVXTFNCz1mYevnpA5RXAoehPRXKiBPJLnvVmV2Wae2TCNvweHGgknioZU6ZaixSCxM1YzY24Prv9qThszohojaWq4cRuRHwMAA5VUBvUs

#### Calculating Transaction ID

The transaction ID is not stored in the transaction bytes, and for most transactions (except Payment) it can be easily calculated from the special bytes for signing using blake2b256(bytes_for_signing). For payments, the transaction ID is just the signature of this transaction.
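As a rough illustration of this rule, the ID of a non-payment transaction can be sketched as the Base58-encoded Blake2b-256 hash of its body bytes. The snippet below is illustrative only and is not tied to any particular client library; the dummy bytes stand in for real transaction body bytes built according to the binary formats described later.

```python
# Minimal sketch of transaction ID calculation for non-payment transactions:
# ID = blake2b256(transaction body bytes), shown in Base58. Illustrative only.
import hashlib

import base58

def blake2b256(data: bytes) -> bytes:
    return hashlib.blake2b(data, digest_size=32).digest()

def transaction_id(body_bytes: bytes) -> str:
    return base58.b58encode(blake2b256(body_bytes)).decode()

# `body_bytes` would normally be built according to the binary format of the
# specific transaction type and version; dummy bytes are used here for shape.
print(transaction_id(b"\x04" + b"\x00" * 32))
```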
### DecentralChain-M5 Solution

#### Reasoning

The maximum rate of transactions in blockchain systems is limited by the choice of two parameters: block size and block interval. The block interval defines the average amount of time that passes between the creation of two blocks. If we reduce this time, forks will appear more frequently, which will lead either to non-resolved forks or to decreased throughput, since a considerable amount of time would be spent on resolving these forks. Larger blocks lead to huge network usage spikes during block propagation, which in turn will lead to throughput problems and huge forks.

##### DecentralChain-M5 Solution With Technical Details

DecentralChain addresses this issue by allowing the miner to continuously farm a block during the time of mining. This continuously increasing block is called a liquid block, which becomes immutable when the next block referencing it is built and appended. A liquid block consists of a key block and a chain of microblocks. The process of creating a liquid block goes as follows:

• When a miner node observes that it has the right to create a block, it creates and sends a key block, which is normally just an empty block.
• After that, it creates and sends microblocks every $$3$$ seconds. A microblock is very similar to a regular block: it is a non-empty pack of transactions which references its parent: the previous microblock or the key block.
• Microblocks are continuously mined and propagated to the network until a new key block, referencing the current liquid block, appears.

##### Microblock Structure

generator: PublicKeyAccount transactionData: Seq[Transaction] prevResBlockSig: BlockId totalResBlockSig: BlockId signature: ByteStr

totalResBlockSig is the new total signature of a block with all transactions from blockId = prevResBlockSig and its own transactionData. This means that, having a liquid block consisting of 1 key block and 3 microblocks:

KEYBLOCK() <- MICRO1(tx1,tx2) <- MICRO2(tx3,tx4) <- MICRO3(tx5,tx6)

we have 4 versions of the last block:

Microblock Structure ID Transactions KEYBLOCK.uniqueId MICRO1.totalResBlockSig tx1,tx2 MICRO2.totalResBlockSig tx1,tx2,tx3,tx4 MICRO3.totalResBlockSig tx1,tx2,tx3,tx4,tx5,tx6

The next miner can reference any of these IDs in its key block.

#### Economy

For a miner, it might seem a good idea to reference the KEYBLOCK from the previous example and pack all transactions from the microblocks into its own (micro)block(s). In order to make 'stealing' transactions less profitable than referencing the best-known version of the liquid block (= the last known microblock), we change the mechanics of fees: after activating M5, a miner will receive $$40\%$$ of the fees from the block it creates and $$60\%$$ of the fees from the block it references (a small numerical sketch is given at the end of this section).

#### Configuration

The following miner parameters can be tuned (though it's best not to change them, in order to maximize the final version of your liquid block in the resulting blockchain):

• Key block size (maxTransactionsInKeyBlock, default = $$0$$). If changed, it won't be rebroadcast and the usual extension requesting mechanics will be used.
• Microblock mining interval (microBlockInterval, default = $$3$$ s).
• Max amount of transactions per microblock (maxTransactionsInMicroBlock, default = $$200$$).
• The miner will try to reference the best-known microblock with at least minMicroBlockAge age (default = $$3$$ s). This is required in order for a miner to reference an already-propagated block so that its key block doesn't get orphaned.
• The microblock synchronization mechanism can be tuned with waitResponseTimeout (default = $$2$$ s), processedMicroBlocksCacheTimeout (default = $$10$$ s) and invCacheTimeout (default = $$10$$ s), which are basically the time to await a microblock and the times to cache processed microblock IDs and the list of nodes which have a given microblock (by ID).

#### API changes

• Upon applying every microblock, the last block gets changed, which means /blocks/last and /blocks/at/… will reflect that.
• /peers/blacklisted now exposes the ban reason; one can clear a node's blacklist via /peers/clearblacklist.
• The /debug/ and /consensus/ sections are expanded; stateHash doesn't take the liquid block into consideration.
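The fee mechanics from the Economy subsection above can be illustrated with a small numerical sketch; the figures are made up for illustration and are not taken from the chain.

```python
# Minimal sketch of the M5 fee split: a miner keeps 40% of the fees from the
# block it creates and receives 60% of the fees from the block it references.
def miner_reward(own_block_fees: int, referenced_block_fees: int) -> float:
    return 0.4 * own_block_fees + 0.6 * referenced_block_fees

# Referencing the best-known liquid block (with all its microblock fees)
# pays better than 'stealing' those transactions into one's own block:
print(miner_reward(own_block_fees=100, referenced_block_fees=250))  # 190.0
print(miner_reward(own_block_fees=350, referenced_block_fees=0))    # 140.0
```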
### DecentralChain-M5 Protocol

#### Scalability Limits and Challenges in Current Blockchain Systems

##### Problem Statement and Motivation

Blockchain protocols have scalability limits and challenges that trade off throughput against latency. The current blockchain technology is not fast enough and does not scale to include more transactions into the system, so we have a performance challenge to consider. There is broad agreement between miners, consumers, and developers, from several perspectives, that scalability measures need to be deployed, and there has been an ongoing argument on how to improve Bitcoin's scalability. Current proposals have focused on how big to make the blocks and how to handle block size increases in the future. All proposals suffer from a major scalability bottleneck: no matter what block size is chosen, the blockchain system can at best reach a modest transaction throughput, increasing from ~ $$3$$ transactions per second to ~ $$7$$ transactions per second. This is far from the $$30,000$$ transactions per second necessary to compete with existing systems such as VISA. The same major limitations apply to Litecoin, Ethereum, and all other currencies that share Bitcoin's blockchain protocol.

DecentralChain-M5 will address the scalability bottleneck by making the network reach the highest throughput the network conditions allow. It will not only enhance the transaction throughput, it will also reduce transaction latencies. So it will be possible to get an initial transaction confirmation in seconds rather than in minutes.

##### Weaknesses of Current Proposals to Improve Scalability

Blockchain systems can process transactions, and the maximum rate of these transactions is limited by the choice of two parameters: block size and block interval.

• The block interval defines the average amount of time that passes between the creation of two blocks. By deciding to reduce the block interval to solve the latency limit, the system will have less security (increased fork probability) because new miners appear every second, which will lead to instability, where the blockchain is subject to reorganization and the system is in disagreement (Figure 1). If we reduce the time per block, then we will have a situation where a significant number of blocks are solved in less time than it takes to relay a solved block throughout the network. So there will be no way to know which block is the "real" one and which one is a "fork", because transactions that appeared to have multiple confirmations suddenly have fewer confirmations (or possibly go back to being unconfirmed).

.. image:: _static/02_intermediate/images/12_Weaknesses-of-Current-Proposals-to-Improve-Scalability-1.png

Figure 1: Increasing block frequency with static block size will result in less security.

• The throughput of a system is bounded by the maximum block size (given a fixed block interval), as the maximum number of included transactions is directly dependent on the block size.
• Larger blocks do however cause slower propagation speeds, which causes more discarded blocks (orphaning risk). An unlimited block size could, for example, result in a DoS attack on the system by creating a block that takes a long time to validate.
If the choice is to increase the block size in order to improve throughput, there will be network spikes with a longer time to propagate in the network (Figure 2).

.. image:: _static/02_intermediate/images/13_Weaknesses-of-Current-Proposals-to-Improve-Scalability-2.png

Figure 2: Increasing block size with static block frequency will lead to more discarded blocks and network spikes.

##### Brief Summary of Bitcoin-M5

It is a next-generation blockchain protocol which is an alternative Bitcoin scaling solution that does not involve increasing the size of blocks or decreasing the block time interval. This reduces the risk of forks, amongst other advantages. Bitcoin-M5 describes how the basic tradeoffs in Bitcoin can be reduced with an alternative blockchain protocol, offering a consensus delay and bandwidth limited only by the network plane. The protocol splits time into time periods (epochs). In each time period, a particular leader is responsible for serializing transactions (Figure 3). The leaders take the role of generating blocks:

• Key blocks for the election of a leader.
• Micro blocks for ledger records.

.. image:: _static/02_intermediate/images/14_Brief-Summary-of-Bitcoin-M5.png

Figure 3: Bitcoin-M5 time periods structure with serializing transactions.

#### DecentralChain-M5 Overlay

DecentralChain-M5 is based on the Bitcoin next-generation protocol that serializes transactions and offers important improvements in transaction latency (lower latency) and bandwidth (higher throughput) in comparison to Bitcoin, without sacrificing other properties. DecentralChain approaches this scalability matter by providing the miner with the ability to farm a block continuously during the time of mining. This continuously incrementing block is called a liquid block. The liquid block becomes unchangeable over time once the next block referencing it is created and appended. This approach increases effective bandwidth and the speed of block creation, which is described as being "especially significant for businesses" using the DecentralChain-M5 protocol, since it allows for conducting micro-transactions without the delays that are typical of traditional blockchain systems. Furthermore, it allows the blockchain to withstand high loads, such as distribution of tokens following crowdsales and airdrops of bonus tokens. The speed of processing trading transactions on the exchange is increased as well.

##### DecentralChain-M5 Operations

The main and core idea of DecentralChain-M5 is to split the liquid block into two types, key blocks and micro blocks. The process of creating a liquid block works as follows:

• The miner node gets the permission to create a block.
• The miner node creates and sends the key block (which does not contain transactions).
• The miner node creates and sends the micro blocks (which contain transactions just as in normal blocks, with a reference to previous micro blocks or key blocks) with a mining time interval of three seconds.
• Miners will mine those micro blocks and propagate them directly to the network until the next new key block appears with a reference to the liquid block. All of the transactions are part of the same block and are contributed all together.
In between blocks, the traditional Bitcoin system appears idle to an onlooker, as miners are working to discover the next block, but without apparent progress on the consensus front. In contrast, in DecentralChain-M5 the key blocks can be small, because they need to contain only the coinbase transaction, which defines the public key that the miner will be using to sign microblocks. Because a key block requires proof of stake, miners cannot just produce one and expropriate the leadership at will. Following the key block, the lead miner can quickly issue microblocks, simply by signing them with the private key corresponding to the public key named in the key block's coinbase (Figure 4).

.. image:: _static/02_intermediate/images/15_DecentralChain-M5-Operations.png

Figure 4: Key-blocks and Micro-blocks signing process.

Key blocks are generated with proof of stake but do not contain transactions. They serve as a leader election mechanism and contain a public key that identifies the chosen leader. Each block has a header that contains, among other fields, the unique reference of its predecessor, which is a cryptographic hash of the predecessor header (either a key block or a microblock).

Micro blocks: once a node generates a key block it becomes the leader. As a leader, the node is allowed to generate microblocks at a set rate smaller than a predefined maximum. These micro blocks contain the ledger entries with no requirement for any proof of stake, and they are generated by the elected leader in every block-generation cycle. This block-generation cycle is initiated by a leader block. The only requirement is to sign the micro blocks with the elected leader's private key. The micro blocks can be generated at a very high speed by the elected leader (miner), thus resulting in increased performance and transaction speed. For a microblock to be valid, all its entries must be valid according to the specification of the state machine, and the signature has to be valid. Figure 5 illustrates the structure. Note that microblocks do not affect the weight of the chain, as they do not contain proof of stake. When all micro blocks have been validated, they will be merged with their key block into one block.

##### DecentralChain-M5 Reward Mechanisms

Remuneration consists of two parts. First, each key block entitles its generator to a set amount. Second, each ledger entry carries a fee. This fee is split between the leader that places this entry in a microblock and the subsequent leader that generates the next key block. In order to motivate participants to follow the protocol, DecentralChain-M5 uses the following mechanisms: each transaction pays a fee to the system, but unlike Bitcoin, this fee is distributed with $$40\%$$ to the leader and $$60\%$$ to the subsequent leader. Finally, if a leader forks the chain by generating two microblocks with the same parent, it is punished by revoking the subsidy revenue; whoever detects the fraud wins a nominal fee (Figure 5).

.. image:: _static/02_intermediate/images/16_DecentralChain-M5-Reward-Mechanisms.png

Figure 5: chain structure of the DecentralChain-M5 Protocol. Microblocks (circles) are signed with the private key matching the public key in the last key block (squares).
The fee is distributed $$40\%$$ to the leader and $$60\%$$ to the next one. In practice, the remuneration is implemented by having each key block contain a single coinbase transaction that mints new coins and deposits the funds to the current and previous leaders. As in Bitcoin, this transaction can only be spent after a maturity period of $$100$$ key blocks, to avoid non-mergeable transactions following a fork.

### Fair Proof of Stake

In this model, the choice of the account that has the right to generate the next block and receive the corresponding transaction fees is based on the number of tokens in the account. The more tokens that are held in the account, the greater the chance that account will earn the right to generate a block.

In DecentralChain, we are convinced that each participant in the blockchain should participate in the block generation process proportionally to his stake, so we have decided to correct the PoS formula. At the moment we do not have the goal of completely changing the algorithm, since there is no need; we simply want to make some adjustments. We present an improved PoS algorithm that makes the choice of block creator fair and reduces vulnerability to multi-branching attacks, addressing the shortcomings of the current algorithm. We analyzed the model of the new algorithm for the correspondence between the stake share and the share of generated blocks, and the results were positive. The algorithm was also analyzed for vulnerability to attacks, and the results obtained with the new model were better than with the old one: the attacks were less successful for the attacker in terms of the profits gained, and the number of forks and their length decreased.

### Blockchain Data Types

The blockchain data types are the data types that are used to describe attributes of blockchain entities. Here's a list of blockchain data types:

Blockchain Data Types # Keyword Possible values Variable size in bytes $$1$$ Boolean $$0$$ and $$1$$. $$1$$ $$2$$ Byte Integer from $$-128$$ to $$127$$ inclusive. $$1$$ $$3$$ Int Integer from $$-2,147,483,648$$ to $$2,147,483,647$$ inclusive. $$4$$ $$4$$ Long Integer from $$-9,223,372,036,854,775,808$$ to $$9,223,372,036,854,775,807$$ inclusive. $$8$$ $$5$$ Short Integer from $$-32,768$$ to $$32,767$$ inclusive. $$2$$ $$6$$ String From $$0$$ to $$2,147,483,647$$ characters inclusive. From $$1$$ to $$4$$ bytes per character.

### Binary Format

#### Address Binary Format

Address Binary Format Field order number Field Field type Field size in bytes $$1$$ Entity type Byte $$1$$ Value must be $$1$$. $$2$$ Chain ID Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$3$$ Account public key hash Array of bytes $$20$$ First $$20$$ bytes of the result of the Keccak256 (Blake2b256 (publicKey)) hashing function. Here publicKey is the array of bytes of the account public key. $$4$$ Checksum Array of bytes $$4$$ First $$4$$ bytes of the result of the Keccak256 (Blake2b256 (data)) hashing function. Here data is the array of bytes of three fields put together: 1) Entity type. 2) Chain ID. 3) Account public key hash.

#### Alias Binary Format

Alias Binary Format Field order number Field Field type Field size in bytes $$1$$ Entity type Byte $$1$$ Value must be $$2$$. $$2$$ Chain ID Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$3$$ Number of characters in the alias Short $$2$$ $$4$$ Alias Array of bytes From $$4$$ to $$30$$.

#### Block Binary Format

Blocks are stored on the blockchain in a binary format (byte representation).
Node extensions such as the gRPC server can work directly with data in binary format.

Version 5

message Block { message Header { int32 chain_id = 1; bytes reference = 2; int64 base_target = 3; bytes generation_signature = 4; repeated uint32 feature_votes = 5; int64 timestamp = 6; int32 version = 7; bytes generator = 8; int64 reward_vote = 9; bytes transactions_root = 10; } Header header = 1; bytes signature = 2; repeated SignedTransaction transactions = 3; }

Block Binary Format Version 5 Field Description chain_id Chain ID reference BLAKE2b-256 hash of the previous block header. base_target Base target: a variable that is used in the block generation algorithm. generation_signature Generation signature: a variable that is used in the block generation algorithm ($$32$$ bytes). feature_votes List of features for which the block generator votes. See the features. timestamp Block timestamp: Unix time in milliseconds. version Block version: $$5$$. generator Block generator's account public key ($$32$$ bytes). reward_vote Block generation reward for which the block generator votes. $$-1$$ means that the block generator votes for the current reward size. transactions_root Transactions Root Hash ($$32$$ bytes). signature Block header signature ($$64$$ bytes). transactions For each transaction: 1) Body bytes: up to $$165,487$$ bytes. 2) Proofs: up to $$531$$ bytes. See the transaction binary format article for details.

Version 4

Block Binary Format Version 4 # Field Field type Field size in bytes $$1$$ Block version Byte $$1$$ The value must be $$4$$. $$2$$ Block timestamp Long $$8$$ Unix time in milliseconds. $$3$$ Signature of the previous block Array[Byte] $$64$$ $$4$$ Base target Long $$8$$ $$5$$ Generation signature Array[Byte] $$32$$ $$6$$ Number of transactions in the block Integer $$4$$ $$7.1$$ Transaction 1 Array[Byte] Body bytes: up to $$165,996$$ bytes. Proofs: up to $$531$$ bytes. Bytes of the 1st transaction in binary format. $$7.2$$ Transaction 2 Array[Byte] Body bytes: up to $$165,996$$ bytes. Proofs: up to $$531$$ bytes. Bytes of the 2nd transaction in binary format. $$7.[N]$$ Transaction N Array[Byte] Body bytes: up to $$165,996$$ bytes. Proofs: up to $$531$$ bytes. Bytes of the Nth transaction in binary format. $$8$$ Number of features for which the block generator votes Integer $$4$$ $$9.1$$ Feature 1 Short $$2$$ $$9.[M]$$ Feature M Short $$2$$ $$10$$ Block generation reward for which the block generator votes Long $$8$$ $$-1$$ means that the block generator votes for the current reward size. $$11$$ Block generator's account public key Array[Byte] $$32$$ $$12$$ Block signature Array[Byte] $$64$$

Version 3

Block Binary Format Version 3 # Field Field type Field size in bytes $$1$$ Block version Byte $$1$$ The value must be $$3$$. $$2$$ Block timestamp Long $$8$$ Unix time in milliseconds. $$3$$ Signature of the previous block Array[Byte] $$64$$ $$4$$ Base target Long $$8$$ $$5$$ Generation signature Array[Byte] $$32$$ $$6$$ Number of transactions in the block Integer $$4$$ $$7.1$$ Transaction 1 Array[Byte] Body bytes: up to $$165,996$$ bytes. Proofs: up to $$531$$ bytes. Bytes of the 1st transaction in binary format. $$7.2$$ Transaction 2 Array[Byte] Body bytes: up to $$165,996$$ bytes. Proofs: up to $$531$$ bytes. Bytes of the 2nd transaction in binary format. $$7.[N]$$ Transaction N Array[Byte] Body bytes: up to $$165,996$$ bytes. Proofs: up to $$531$$ bytes. Bytes of the Nth transaction in binary format.
$$8$$ Block generator’s account public key Array[Byte] $$32$$ $$9$$ Block signature Array[Byte] $$64$$ #### Network Message Binary Format ##### Block Message Binary Format Block message is a reply to GetBlock message. Block Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x17) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Block bytes (N) Bytes N ##### Checkpoint Message Binary Format Checkpoint Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x64) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Checkpoint items count (N) Int $$4$$ $$7$$ Checkpoint #1 height Long $$8$$ $$8$$ Checkpoint #1 signature Bytes $$64$$ $$6 + 2 * N - 1$$ Checkpoint #N height Long $$8$$ $$6 + 2 * N$$ Checkpoint #N signature Bytes $$64$$ ##### Get Block Message Binary Format Get Block Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x16) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Block ID Bytes $$64$$ ##### Get Peers Message Binary Format Get peers message is sent when one sending node wants to know about other nodes on the network. Get Peers Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x01) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ ##### Get Signatures Message Binary Format Get Signatures Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x14) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Block IDs count (N) Int $$4$$ $$7$$ Block #1 ID Long $$64$$ $$6 + N$$ Block #N ID Bytes $$64$$ ##### Handshake Message Binary Format Handshake is used to start communication between two nodes. Handshake Message Binary Format # Field name Type Length in Bytes $$1$$ Application name length (N) Byte $$1$$ $$2$$ Application name (UTF-8 encoded bytes) Bytes N $$3$$ Application version major Int $$4$$ $$4$$ Application version minor Int $$4$$ $$1$$ Application version patch Int $$4$$ $$6$$ Node name length (M) Byte $$1$$ $$7$$ Node name (UTF-8 encoded bytes) Bytes M $$8$$ Node nonce Long $$8$$ $$9$$ Declared address length (K) or $$0$$ if no declared address was set Int $$4$$ $$10$$ Declared address bytes (if length is not $$0$$) Bytes K $$11$$ Timestamp Long $$8$$ ##### Peers Message Binary Format Peers message is a response to get peers message. 
Peers Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x02) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Peers count (N) Int $$4$$ $$7$$ Bytes $$4$$ $$8$$ Peer #1 port Int $$4$$ $$6 + 2 * N - 1$$ Bytes $$4$$ $$6 + 2 * N$$ Peer #N port Int $$4$$ ##### Score Message Binary Format Score Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x18) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Score (N bytes) Int $$N$$ ##### Signatures Message Binary Format Signatures Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x15) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Block signatures count (N) Int $$4$$ $$7$$ Block #1 signature Bytes $$64$$ $$6 + N$$ Block #N signature Bytes $$64$$ ##### Transaction Message Binary Format Transaction Message Binary Format # Field name Type Length in Bytes $$1$$ Packet length (BigEndian) Int $$4$$ $$2$$ Magic Bytes Bytes $$4$$ $$3$$ Content ID (0x19) Byte $$1$$ $$4$$ Int $$4$$ $$5$$ Bytes $$4$$ $$6$$ Transaction (N bytes) Bytes N #### Order Binary Format • An exchange transaction of version 3 can accept orders of versions 1–4. • An exchange transaction of version 2 can accept orders of versions 1–3. • An exchange transaction of version 1 can accept orders of version 1 only. Version 4 message AssetPair { bytes amount_asset_id = 1; bytes price_asset_id = 2; }; message Order { enum Side { SELL = 1; }; int32 chain_id = 1; bytes sender_public_key = 2; bytes matcher_public_key = 3; AssetPair asset_pair = 4; Side order_side = 5; int64 amount = 6; int64 price = 7; int64 timestamp = 8; int64 expiration = 9; Amount matcher_fee = 10; int32 version = 11; repeated bytes proofs = 12; }; message Amount { bytes asset_id = 1; int64 amount = 2; }; Order Binary Format Version 4 Field Size Description chain_id $$1$$ byte Chain ID sender_public_key $$32$$ bytes Public key of the order sender. matcher_public_key $$32$$ bytes Public key of matcher. asset_pair.amount_asset_id $$32$$ bytes for asset. $$0$$ for DecentralCoins. ID of the amount asset. asset_pair.price_asset_id $$32$$ bytes for asset. $$0$$ for DecentralCoins. ID of the price asset. order_side $$1$$ byte amount $$8$$ bytes Amount of the amount asset, specified in the minimum fraction (“cent”) of asset. price $$8$$ bytes Price for the amount asset nominated in the price asset, multiplied by $$108$$. timestamp $$8$$ bytes Order timestamp: Unix time in milliseconds. expiration $$8$$ bytes Unix time in milliseconds when the order will be expired. matcher_fee.asset_id $$32$$ bytes for asset. $$0$$ for DecentralCoins. Matcher fee token ID. matcher_fee.amount $$8$$ bytes Matcher fee version $$1$$ byte Order version: 4. proofs Each proof up to $$64$$ bytes, up to $$8$$ proofs. Order proofs that are used to check the validity of the order. Version 3 Order Binary Format Version 3 # Field name JSON field name Field type Length in bytes Value $$1$$ Order binary format version number version Byte $$1$$ Must be $$3$$. $$2$$ Order sender public key senderPublicKey Array[Byte] $$32$$ $$3$$ Matcher public key matcherPublicKey Array[Byte] $$32$$ $$4.1$$ Asset B (amount asset) flag Byte $$1$$ If token is DecentralCoins, then value is $$0$$, else $$1$$. 
$$4.2$$ Asset B (amount Asset) ID amountAsset Array[Byte] S If token is not DecentralCoins, then $$S = 32$$, else the field should be absent. $$5.1$$ Asset A (price asset) flag Byte $$1$$ If token is DecentralCoins, then value is $$0$$, else $$1$$. $$5.2$$ Asset A (price asset) ID priceAsset Array[Byte] S If token is not DecentralCoins, then $$S = 32$$, else the field should be absent. $$6$$ Order type orderType Byte $$1$$ If order is for buying, then value is $$0$$, if order is for selling, then value is $$1$$. $$7$$ Amount of asset B (amount asset), which the order sender offers for one price asset(asset A) price Long $$8$$ Bytes in big-endian notation. $$8$$ Amount of asset B (price asset), which the order sender wants to buy or send depending on order type amount Long $$8$$ Bytes in big-endian notation. $$9$$ Amount of milliseconds from the beginning of Unix epoch till the moment of validation of order by matcher timestamp Long $$8$$ Bytes in big-endian notation. $$10$$ Amount of milliseconds from the beginning of Unix epoch till the unfulfilled order cancellation expiration Long $$8$$ Bytes in big-endian notation. $$11$$ Matcher fee matcherFee Long $$8$$ Bytes in big-endian notation. $$12$$ Matcher fee token flag Byte $$1$$ If token is DecentralCoins, then value is $$0$$, else $$1$$ $$13$$ Matcher fee token matcherFeeAssetId Array[Byte] F If token is not DecentralCoins, then $$F = 32$$, else the field should be absent. $$14$$ Proofs proofs Array[Proof] S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + (P_{1} + P_{2} + ... + P_{n})$$, where $$N$$ is amount of proofs in the array, $$P_{n}$$ — size N-th proof in bytes. Maximum amount of proofs in the array is $$8$$. Maximum length of each proof is $$64$$ bytes. JSON Representation of Order Version 3 { "version": 3, "senderPublicKey": "FMc1iASTGwTC1tDwiKtrVHtdMkrVJ1S3rEBQifEdHnT2", "matcherPublicKey": "7kPFrHDiGw1rCm7LPszuECwWYL3dMf6iMifLRDJQZMzy", "assetPair": { "amountAsset": "BrjUWjndUanm5VsJkbUip8VRYy6LWJePtxya3FNv4TQa", "priceAsset": null }, "amount": 150000000, "timestamp": 1548660872383, "expiration": 1551252872383, "matcherFee": 300000, "proofs": [ "YNPdPqEUGRW42bFyGqJ8VLHHBYnpukna3NSin26ERZargGEboAhjygenY67gKNgvP5nm5ZV8VGZW3bNtejSKGEa" ], "id": "Ho6Y16AKDrySs5VTa983kjg3yCx32iDzDHpDJ5iabXka", "sender": "3PEFvFmyyZC1n4sfNWq6iwAVhzUT87RTFcA", "price": 1799925005, } Version 2 Order Binary Format Version 2 # Field name Type Length in Bytes $$1$$ Version Byte (constant, value = $$2$$) $$1$$ $$2$$ Sender’s public key PublicKey (Array[Byte]) $$32$$ $$3$$ Matcher’s public key PublicKey (Array[Byte]) $$32$$ $$4.1$$ Amount asset flag ($$1$$ - asset, $$0$$ - DecentralCoins) Byte $$1$$ $$4.2$$ Amount asset AssetId (ByteStr = Array[Byte]) $$32$$ or $$0$$ (depends on the byte in 4.1). $$5.1$$ Price asset flag ($$1$$ - asset, $$1$$ - DecentralCoins) Byte $$1$$ $$5.2$$ Price asset AssetId (ByteStr = Array[Byte]) $$32$$ or $$0$$ (depends on the byte in 5.1). 
$$6$$ Order type ($$0$$ - Buy, $$1$$ - Sell) Byte $$1$$ $$7$$ Price Long $$8$$ $$8$$ Amount Long $$8$$ $$9$$ Timestamp Long $$8$$ $$10$$ Expiration Long $$8$$ $$11$$ Matcher’s fee Long $$8$$ $$12$$ Proofs Proofs Version 1 Order Binary Format Version 1 # Field name Type Length in Bytes $$1$$ Sender’s public key PublicKey (Array[Byte]) $$32$$ $$2$$ Matcher’s public key PublicKey (Array[Byte]) $$32$$ $$3.1$$ Amount asset flag ($$1$$ - asset, $$0$$ - DecentralCoins) $$1$$ $$3.2$$ Amount asset AssetId (ByteStr = Array[Byte]) $$32$$ or $$0$$ (depends on the byte in 3.1). $$4.1$$ Price asset flag ($$1$$ - asset, $$0$$ - DecentralCoins) $$1$$ $$4.2$$ Price asset AssetId (ByteStr = Array[Byte]) $$32$$ or $$0$$ (depends on the byte in 4.1). $$5$$ Order type ($$0$$ - Buy, $$1$$ - Sell) Byte $$1$$ $$6$$ Price Long $$8$$ $$7$$ Amount Long $$8$$ $$8$$ Timestamp Long $$8$$ $$9$$ Expiration Long $$8$$ $$10$$ Matcher fee Long $$8$$ $$11$$ Signature Bytes $$64$$ The price listed for amount asset in price asset $$* 10^8$$. Expiration is order time to live, timestamp in future, max $$= 30$$ days in future. The signature is calculated from the following bytes: Order Binary Format Version 1 Bytes # Field name Type Length in Bytes $$1$$ Sender’s public key PublicKey (Array[Byte]) $$32$$ $$2$$ Matcher’s public key PublicKey (Array[Byte]) $$32$$ $$3.1$$ Amount asset flag ($$1$$ - asset, $$0$$ - DecentralCoins) $$1$$ $$3.2$$ Amount asset AssetId (ByteStr = Array[Byte]) $$32$$ or $$0$$ (depends on the byte in 3.1). $$4.1$$ Price asset flag ($$1$$ - asset, $$0$$ - DecentralCoins) $$1$$ $$4.2$$ Price asset AssetId (ByteStr = Array[Byte]) $$32$$ or $$0$$ (depends on the byte in 4.1). $$5$$ Order type ($$0$$ - Buy, $$1$$ - Sell) Bytes $$1$$ $$6$$ Price Long $$8$$ $$7$$ Amount Long $$8$$ $$8$$ Timestamp Long $$8$$ $$9$$ Expiration Long $$8$$ $$10$$ Matcher fee Long $$8$$ #### Transaction Binary Format Transactions are stored on the blockchain in a binary format (byte representation). Node extensions such as gRPC server can work directly with data in binary format. The transaction signature and ID are also formed on the basis of the binary format, namely the transaction body bytes. The contents of transaction body bytes is given in the description of the binary format of each type and version of the transaction. Normally the transaction body bytes include all transaction fields, with the exception of the following fields: • Transaction ID (it is not stored on the blockchain), • Version flag, • Proofs or signature, depending on the version of the transaction. The guideline for generating a signature and ID is given in the cryptographic practical details article. All strings are UTF-8 encoded. ##### Protobuf Protobuf facilitates the development of client libraries for the DecentralChain blockchain, as it avoids serialization errors and streamlines the creation of a correctly signed transaction. How to generate a transaction signature using protobuf: • Download the protocol buffers package for your programming language. Generate the Transaction class on the basis of transaction.proto. • Fill in the transaction fields. • Asset IDs should be specified in the binary format. • Addresses should be specified in the shortened binary format (without the first two and the last four bytes). See the address binary format) article. • Serialize the transaction object to get transaction body bytes. Detailed instructions for various programming languages are provided in protocol buffers tutorials. 
• Generate the signature for the transaction body bytes with the Curve25519 function using sender private key bytes. The byte representation of a transaction based on the protobuf schema must not contain default values. Make sure that your protocol buffers compiler does not write the field value when serializing if it is equal to the default value for this data type, otherwise the transaction signature will be invalid. Send the signed transaction to a node: • If you use your own node and gRPC server, send the SignedTransaction object. • If you use Node REST API, compose the JSON representation of the transaction and add the base58-encoded signature to the proof array. Send the transaction to a node using POST /transactions/broadcast method. message SignedTransaction { Transaction transaction = 1; repeated bytes proofs = 2; } message Transaction { int32 chain_id = 1; bytes sender_public_key = 2; Amount fee = 3; int64 timestamp = 4; int32 version = 5; oneof data { GenesisTransactionData genesis = 101; PaymentTransactionData payment = 102; IssueTransactionData issue = 103; TransferTransactionData transfer = 104; ReissueTransactionData reissue = 105; BurnTransactionData burn = 106; ExchangeTransactionData exchange = 107; LeaseTransactionData lease = 108; LeaseCancelTransactionData lease_cancel = 109; CreateAliasTransactionData create_alias = 110; MassTransferTransactionData mass_transfer = 111; DataTransactionData data_transaction = 112; SetScriptTransactionData set_script = 113; SetAssetScriptTransactionData set_asset_script = 115; InvokeScriptTransactionData invoke_script = 116; UpdateAssetInfoTransactionData update_asset_info = 117; }; }; message Amount { bytes asset_id = 1; int64 amount = 2; }; Transaction Binary Format Field Size Description chain_id $$1$$ byte Chain ID sender_public_key $$32$$ bytes Public key of the transaction sender. fee.amount $$8$$ bytes Transaction fee in the minimum fraction (“cent”) of the fee asset. fee.asset_id $$32$$ bytes for the fee in a sponsored asset. $$0$$ for the fee in DecentralCoins ID of the token of the fee. The fee in a sponsored asset is only available for invoke script transactions and transfer transactions. See the sponsored fee article. timestamp $$8$$ bytes Transaction timestamp: Unix time in milliseconds. The transaction won’t be added to the blockchain if the timestamp value is more than $$2$$ hours back or $$1.5$$ hours forward of the current block timestamp. version $$1$$ byte Transaction version. proofs Each proof up to $$64$$ bytes,up to $$8$$ proofs. Transaction proofs that are used to check the validity of the transaction. The array can contain several transaction signatures (but not limited to signatures only). The fields that depend on the type of transaction are described in the following articles: ##### Burn Transaction Binary Format Version 3 message BurnTransactionData { Amount asset_amount = 1; }; message Amount { bytes asset_id = 1; int64 amount = 2; }; Burn Transaction Binary Format Version 3 Field Size Description asset_amount.amount $$8$$ bytes Amount of token to burn, specified in the minimum fraction (“cents”). asset_amount.asset_id $$32$$ bytes ID of token to burn. Version 2 Burn Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$6$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. 
$$4$$ Chain ID chainId Byte 1 $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6$$ ID of the token to burn assetId Array[Byte] $$32$$ $$7$$ Amount of tokens to burn amount Long $$8$$ $$8$$ Transaction fee fee Long $$8$$ $$9$$ Transaction timestamp timestamp Long $$8$$ $$10$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, $$8$$ and $$9$$ are the transaction body bytes. JSON Representation of Transaction { "type":6, "id":"csr25XQHT1c965Fg7cY2vJ7XHYVsudPYrUbdaFqgaqL", "sender":"3P9QZNrHbyxXj8P9VrJZmVu2euodNtA11UW", "senderPublicKey":"9GaQj7gktEiiS1TTTjGbVjU9bva3AbCiawZ11qFZenBX", "fee":100000, "feeAssetId":null, "timestamp":1548660675277, "proofs": [ "61jCivdv3KTuTY6QHgxt4jaGrXcszWg3vb9TmUR26xv7mjWWwjyqs7X5VDUs9c2ksndaPogmdunHDdjWCuG1GGhh" ], "version":2, "assetId":"FVxhjrxZYTFCa9Bd4JYhRqXTjwKuhYbSAbD2DWhsGidQ", "amount":9999, "chainId":87, "height":1370971 } Version 1 Burn Transaction Binary Format Version 1 # Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$6$$. $$2$$ Public key of the transaction sender Array[Byte] $$32$$ $$3$$ ID of the token to burn Array[Byte] $$32$$ $$4$$ Amount of tokens to burn Long $$8$$ $$5$$ Transaction fee Long $$8$$ $$6$$ Transaction timestamp Long $$8$$ $$7$$ Transaction signature Array[Byte] $$64$$ The fields $$1$$, $$2$$, $$3$$, $$4$$, $$5$$ and $$6$$ are the transaction body bytes. ##### Create Alias Transaction Binary Format Version 3 message CreateAliasTransactionData { string alias = 1; }; Create Alias Transaction Binary Format Version 3 Field Size Description alias From $$4$$ to $$30$$ bytes Alias Version 2 Create Alias Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$10$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. $$4$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$5$$ Alias length Short $$2$$ Number of characters in the alias name. $$6$$ Alias alias String from $$4$$ to $$30$$ $$7$$ Transaction fee fee Long $$8$$ $$8$$ Transaction timestamp timestamp Long $$8$$ $$9$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$ and $$8$$ are the transaction body bytes. 
JSON Representation of Transaction { "type":10, "id":"5CZV9RouJs7uaRkZY741WDy9zV69npX1FTZqxo5fsryL", "sender":"3PNaua1fMrQm4TArqeTuakmY1u985CgMRk6", "senderPublicKey":"B3f8VFh6T2NGT26U7rHk2grAxn5zi9iLkg4V9uxG6C8q", "fee":100000, "feeAssetId":null, "timestamp":1548666019772, "proofs": [ "3cUM8Eq5KfmbS6q1qHDfzhX98YzER1ocnVjVAHG9HSkQdw86zjqxUfmsUPVwnVgwu5zatt3ETLnNFteobRMyR8bY" ], "version":2, "alias":"2.1.0a", "height":1371063 } Version 1 Create Alias Transaction Binary Format Version 1 # Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$10$$. $$2$$ Public key of the transaction sender Array[Byte] $$32$$ $$3$$ Alias length Short $$2$$ Number of characters in the alias name. $$4$$ Alias Array[Byte] From $$4$$ to $$30$$ $$5$$ Transaction fee Long $$8$$ $$6$$ Transaction timestamp Long $$8$$ $$7$$ Transaction signature Array[Byte] $$64$$ The fields $$1$$, $$2$$, $$3$$, $$4$$, $$5$$ and $$6$$ are the transaction body bytes. ##### Data Transaction Binary Format Version 2 Data Transaction Binary Format Version 2 Field Size Description key Up to $$400$$ bytes Entry key. value Up to $$32,767$$ bytes Entry value. If omitted, the transaction deletes the entry. The maximum number of entries is $$100$$. The maximum data size (keys + values) is $$165,890$$ bytes. JSON Representation of Transaction { "type":12, "id":"EByjQAWDRGrmc8uy7xRGy2zsQXZQq59bav7h8oTTJyHC", "sender":"3PLZcCJyYQnfWfzhKXRA4rteCQC9J1ewf5K", "senderPublicKey":"BQMVwAHwf2WEEwRsCxtMVcSLrXUhJ3XtCLmSptLx2e6L", "fee":600000, "feeAssetId":null, "timestamp":1532116120299, "proofs": [ "PZiAGq2ssi1ojh2Cc9dWrzmbuw9nJif2omsQ4dvonU31oiwsJQGbZiio3LG28otatFfFbHPfcX1JVCHwP5i4mKy" ], "version":1, "data": [ {"key":"4900","type":"integer","value":24010000},{"key":"4901","type":"integer","value":24019801}, {"key":"4902","type":"integer","value":24029604},{"key":"4903","type":"integer","value":24039409}, {"key":"4904","type":"integer","value":24049216},{"key":"4905","type":"integer","value":24059025}, {"key":"4906","type":"integer","value":24068836},{"key":"4907","type":"integer","value":24078649}, {"key":"4908","type":"integer","value":24088464},{"key":"4909","type":"integer","value":24098281}, {"key":"4910","type":"integer","value":24108100},{"key":"4911","type":"integer","value":24117921}, {"key":"4912","type":"integer","value":24127744},{"key":"4913","type":"integer","value":24137569}, {"key":"4914","type":"integer","value":24147396},{"key":"4915","type":"integer","value":24157225}, {"key":"4916","type":"integer","value":24167056},{"key":"4917","type":"integer","value":24176889}, {"key":"4918","type":"integer","value":24186724},{"key":"4919","type":"integer","value":24196561}, {"key":"4920","type":"integer","value":24206400},{"key":"4921","type":"integer","value":24216241}, {"key":"4922","type":"integer","value":24226084},{"key":"4923","type":"integer","value":24235929}, {"key":"4924","type":"integer","value":24245776},{"key":"4925","type":"integer","value":24255625}, {"key":"4926","type":"integer","value":24265476},{"key":"4927","type":"integer","value":24275329}, {"key":"4928","type":"integer","value":24285184},{"key":"4929","type":"integer","value":24295041}, {"key":"4930","type":"integer","value":24304900},{"key":"4931","type":"integer","value":24314761}, {"key":"4932","type":"integer","value":24324624},{"key":"4933","type":"integer","value":24334489}, {"key":"4934","type":"integer","value":24344356},{"key":"4935","type":"integer","value":24354225}, 
{"key":"4936","type":"integer","value":24364096},{"key":"4937","type":"integer","value":24373969}, {"key":"4938","type":"integer","value":24383844},{"key":"4939","type":"integer","value":24393721}, {"key":"4940","type":"integer","value":24403600},{"key":"4941","type":"integer","value":24413481}, {"key":"4942","type":"integer","value":24423364},{"key":"4943","type":"integer","value":24433249}, {"key":"4944","type":"integer","value":24443136},{"key":"4945","type":"integer","value":24453025}, {"key":"4946","type":"integer","value":24462916},{"key":"4947","type":"integer","value":24472809}, {"key":"4948","type":"integer","value":24482704},{"key":"4949","type":"integer","value":24492601}, {"key":"4950","type":"integer","value":24502500},{"key":"4951","type":"integer","value":24512401}, {"key":"4952","type":"integer","value":24522304},{"key":"4953","type":"integer","value":24532209}, {"key":"4954","type":"integer","value":24542116},{"key":"4955","type":"integer","value":24552025}, {"key":"4956","type":"integer","value":24561936},{"key":"4957","type":"integer","value":24571849}, {"key":"4958","type":"integer","value":24581764},{"key":"4959","type":"integer","value":24591681}, {"key":"4960","type":"integer","value":24601600},{"key":"4961","type":"integer","value":24611521}, {"key":"4962","type":"integer","value":24621444},{"key":"4963","type":"integer","value":24631369}, {"key":"4964","type":"integer","value":24641296},{"key":"4965","type":"integer","value":24651225}, {"key":"4966","type":"integer","value":24661156},{"key":"4967","type":"integer","value":24671089}, {"key":"4968","type":"integer","value":24681024},{"key":"4969","type":"integer","value":24690961}, {"key":"4970","type":"integer","value":24700900},{"key":"4971","type":"integer","value":24710841}, {"key":"4972","type":"integer","value":24720784},{"key":"4973","type":"integer","value":24730729}, {"key":"4974","type":"integer","value":24740676},{"key":"4975","type":"integer","value":24750625}, {"key":"4976","type":"integer","value":24760576},{"key":"4977","type":"integer","value":24770529}, {"key":"4978","type":"integer","value":24780484},{"key":"4979","type":"integer","value":24790441}, {"key":"4980","type":"integer","value":24800400},{"key":"4981","type":"integer","value":24810361}, {"key":"4982","type":"integer","value":24820324},{"key":"4983","type":"integer","value":24830289}, {"key":"4984","type":"integer","value":24840256},{"key":"4985","type":"integer","value":24850225}, {"key":"4986","type":"integer","value":24860196},{"key":"4987","type":"integer","value":24870169}, {"key":"4988","type":"integer","value":24880144},{"key":"4989","type":"integer","value":24890121}, {"key":"4990","type":"integer","value":24900100},{"key":"4991","type":"integer","value":24910081}, {"key":"4992","type":"integer","value":24920064},{"key":"4993","type":"integer","value":24930049}, {"key":"4994","type":"integer","value":24940036},{"key":"4995","type":"integer","value":24950025}, {"key":"4996","type":"integer","value":24960016},{"key":"4997","type":"integer","value":24970009}, {"key":"4998","type":"integer","value":24980004},{"key":"4999","type":"integer","value":24990001} ], "height":1091300 } Version 1 Data Transaction Binary Format Version 1 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$12$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$1$$. 
$$4$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$5$$ Length of the data array Short $$2$$ $$6.1$$ Key 1 length Short $$2$$ $$6.2$$ Key 1 key String Up to $$400$$ Maximum of $$100$$ characters. $$6.3$$ Value 1 type type Byte $$1$$ Options are: 0 - Long. 1 - Boolean. 2 - Array[Byte]. 3 - String. $$6.4$$ Value 1 length Short $$2$$ This field is present only if the value is of type of array of bytes or a string. If the value is of type of integer or a boolean, this field should not be included in the data structure. $$6.5$$ Value 1 value T S T is one of the following: 1) Long, $$S = 8$$. 2) Boolean, $$S = 1$$. 3) Array[Byte], $$S ⩽ 32,767$$. 4) String, $$S ⩽ 32,767$$. $$6.6$$ Key 2 length Short $$2$$ $$6.7$$ Key 2 key String Up to $$400$$ Maximum of $$100$$ characters. $$6.8$$ Value 2 type type Byte $$1$$ Options are: 0 - Long. 1 - Boolean. 2 - Array[Byte]. 3 - String. $$6.9$$ Value 2 length Short $$2$$ This field is present only if the value is of type of array of bytes or a string. If the value is of type of integer or a boolean, this field should not be included in the data structure. $$6.10$$ Value 2 value T S T is one of the following: 1) Long, $$S = 8$$. 2) Boolean, $$S = 1$$. 3) Array[Byte], $$S ⩽ 32,767$$. 4) String, $$S ⩽ 32,767$$. $$6.[5 × N - 4]$$ N-th key length Short $$2$$ $$6.[5 × N - 3]$$ N-th key key String Up to $$400$$ Maximum of $$100$$ characters. $$6.[5 × N - 2]$$ N-th value type type Byte $$1$$ Options are: 0 - Long. 1 - Boolean. 2 - Array[Byte]. 3 - String. $$6.[5 × N - 1]$$ N-th value length Short $$2$$ This field is present only if the value is of type of array of bytes or a string. If the value is of type of integer or a boolean, this field should not be included in the data structure. $$6.[5 × N]$$ N-th value value T S T is one of the following: 1) Long, $$S = 8$$. 2) Boolean, $$S = 1$$. 3) Array[Byte], $$S ⩽ 32,767$$. 4) String, $$S ⩽ 32,767$$. $$7$$ Transaction timestamp timestamp Long $$8$$ $$8$$ Transaction fee fee Long $$8$$ $$9$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is 64 bytes. The fields $$1$$, $$2$$, $$3$$, $$4$$, $$5$$, $$6.1$$, $$6.2$$, $$6.3$$, $$6.4$$, $$6.5$$, $$6.6$$, $$6.7$$, $$6.8$$, $$6.9$$, $$6.10$$, $$6.[5 × N - 4]$$, $$6.[5 × N - 3]$$, $$6.[5 × N - 2]$$, $$6.[5 × N - 1]$$, $$6.[5 × N]$$, $$7$$ and $$8$$ are the transaction body bytes. The maximum number of records is $$100$$. The maximum size of transaction body bytes is $$153,600$$ bytes. ##### Exchange Transaction Binary Format Version 3 Exchange transaction of version 3 can accept orders of versions 1 –4. message ExchangeTransactionData { int64 amount = 1; int64 price = 2; int64 sell_matcher_fee = 4; repeated Order orders = 5; }; Exchange Transaction Binary Format Version 3 Field Size Description amount $$8$$ bytes Amount of the amount asset (base currency) that the buyer received from the seller, specified in the minimum fraction (“cent”) of asset. price $$8$$ bytes Price for the amount asset (base currency) nominated in the price asset (quote currency), multiplied by $$10^{8}$$. For more details see the order article. $$8$$ bytes Buy matcher fee. The fee token ID is indicated in buy order. sell_matcher_fee $$8$$ bytes Sell matcher fee The fee token ID is indicated in sell order. orders Buy order and sell order. See the order binary format. 
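To make the amount and price conventions above concrete, here is a minimal Python sketch; the helper names `to_atomic` and `to_stored_price` are illustrative, not part of any node API, and the sketch only assumes what the table states: amounts are stored in the minimum fraction of the asset and the price is stored multiplied by $$10^{8}$$.

```python
# Illustrative only: converts human-readable exchange values into the integer
# representation described above; helper names are not part of any node API.

PRICE_FACTOR = 10 ** 8  # price is stored multiplied by 10^8

def to_atomic(amount: float, decimals: int) -> int:
    """Convert a human-readable token amount to its minimum fraction ('cents')."""
    return round(amount * 10 ** decimals)

def to_stored_price(price: float) -> int:
    """Convert a human-readable price to the stored integer form."""
    return round(price * PRICE_FACTOR)

print(to_atomic(12.5, decimals=8))  # 1250000000
print(to_stored_price(0.0042))      # 420000
```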
Version 2 Transaction version 2 can accept orders of version 1, 2 and 3. Exchange Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$7$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. $$4.1$$ Int $$4$$ Size including flag 4.2. $$4.2$$ order1.version Byte S $$S = 1$$ if the order version is $$1$$. $$S = 0$$ if the order version is 2 or 3. $$4.3$$ order1 Array[Byte] $$5.1$$ Sell order size Int $$4$$ Size including flag 5.2. $$5.2$$ Sell order version flag order2.version Byte S $$S = 1$$ if the order version is $$1$$. $$S = 0$$ if the order version is 2 or 3. $$5.3$$ Sell order order2 Array[Byte] See order binary format $$6$$ Deal price price Long $$8$$ Price for the amount asset (base currency) nominated in the price asset (quote currency). $$7$$ Amount amount Long $$8$$ Amount of the amount asset (base currency) that the buyer received from the seller. $$8$$ Long $$8$$ $$9$$ Sell matcher fee sellMatcherFee Long $$8$$ $$10$$ Transaction fee fee Long $$8$$ $$11$$ Transaction timestamp timestamp Long $$8$$ $$12$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$1$$, $$2$$, $$3$$, $$4.1$$, $$4.2$$, $$4.3$$, $$5.1$$, $$5.2$$, $$5.3$$, $$6$$, $$6.6$$, $$7$$, $$8$$, $$9$$, $$10$$ and $$11$$ are the transaction body bytes. JSON Representation of Transaction { "type":6, "id":"csr25XQHT1c965Fg7cY2vJ7XHYVsudPYrUbdaFqgaqL", "sender":"3P9QZNrHbyxXj8P9VrJZmVu2euodNtA11UW", "senderPublicKey":"9GaQj7gktEiiS1TTTjGbVjU9bva3AbCiawZ11qFZenBX", "fee":100000, "feeAssetId":null, "timestamp":1548660675277, "proofs": [ "61jCivdv3KTuTY6QHgxt4jaGrXcszWg3vb9TmUR26xv7mjWWwjyqs7X5VDUs9c2ksndaPogmdunHDdjWCuG1GGhh" ], "version":2, "assetId":"FVxhjrxZYTFCa9Bd4JYhRqXTjwKuhYbSAbD2DWhsGidQ", "amount":9999, "chainId":87, "height":1370971 } Version 1 Transaction version 1 can accept orders of version 1 only. Exchange Transaction Binary Format Version 1 # Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$6$$. $$2$$ Int $$4$$ $$3$$ Sell order size Int $$4$$ $$4$$ Array[Byte] $$5$$ Sell order Array[Byte] See order binary format $$6$$ Deal price Long $$8$$ Price for the amount asset (base currency) nominated in the price asset (quote currency). $$7$$ Amount Long $$8$$ Amount of the amount asset (base currency) that the buyer received from the seller. $$8$$ Long $$8$$ $$9$$ Sell matcher fee Long $$8$$ $$10$$ Transaction fee Long $$8$$ $$11$$ Transaction timestamp Long $$8$$ $$12$$ Transaction signature Array[Byte] $$64$$ The fields $$1$$, $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, $$8$$, $$9$$, $$10$$ and $$11$$ are the transaction body bytes. ##### Genesis Transaction Binary Format Genesis Transaction Binary Format # Field JSON field name Field type Field size in bytes Comment $$1$$ Transaction type ID type Byte $$1$$ Value must be $$1$$. 
$$2$$ Transaction timestamp timestamp Long $$8$$ $$3$$ recipient Array[Byte] $$26$$ $$4$$ Amount of DecentralCoins that will be transferred to the account amount Long $$8$$ JSON Representation of Transaction { "type":1, "id":"2DVtfgXjpMeFf2PQCqvwxAiaGbiDsxDjSdNQkc5JQ74eWxjWFYgwvqzC4dn7iB1AhuM32WxEiVi1SGijsBtYQwn8", "fee":0, "timestamp":1465742577614, "signature":"2DVtfgXjpMeFf2PQCqvwxAiaGbiDsxDjSdNQkc5JQ74eWxjWFYgwvqzC4dn7iB1AhuM32WxEiVi1SGijsBtYQwn8", "recipient":"3PAWwWa6GbwcJaFzwqXQN5KQm7H96Y7SHTQ", "amount":9999999500000000, "height":1 } ##### Invoke Script Transaction Binary Format Version 2 message InvokeScriptTransactionData { Recipient d_app = 1; bytes function_call = 2; repeated Amount payments = 3; }; message Recipient { oneof recipient { bytes public_key_hash = 1; string alias = 2; }; }; message Amount { bytes asset_id = 1; int64 amount = 2; }; Invoke Script Transaction Binary Format Version 2 Field Size Description d_app.public_key_hash $$20$$ bytes dApp account public key hash (a component of an address, see the Address binary format article). d_app.alias From $$4$$ to $$30$$ bytes dApp alias. function_call Function name and arguments. Binary format of function call is the same as in version 1. payments.asset_id $$32$$ bytes for asset. $$0$$ for DecentralCoins. ID of token in payment. payments.amount $$8$$ bytes Amount of token in payment, specified in the atomic units. The maximum size of d_app + function_call + payments is $$5120$$ bytes. JSON Representation of Transaction { "type":16, "id":"7CVjf5KGRRYj6UyTC2Etuu4cUxx9qQnCJox8vw9Gy9yq", "sender":"3P5rWeMzoaGBrXJDMifQDDjCMKWJGKTiVJU", "senderPublicKey":"4kKN9G7cZXGQujLQm9ss5gqB7TKX4A9jtFGt7DnHUoQ6", "fee":500000, "feeAssetId":null, "timestamp":1565537422938, "proofs": [ "28s21sisoa7yHWWmmX8U78fbNHW4KXAS9GHD8XmaN77gJxbnP2Q3DssNWpmSQ6hBq6xS985W4YiTmgvENhfWPNt5" ], "version":1, "dApp":"3PJbknfXMsJzZmksmsKSMz56tVdDqF5GdNM", "payment":[], "call": { "function":"returnSellVST", "args": [ { "type":"string", "value":"GiEBRfGhEeGqhPmLCjwJcYuakyvaz2GHGCfCzuinSKD" } ] }, "height":1656369, "stateChanges": { "data": [ { "key":"sell_GiEBRfGhEeGqhPmLCjwJcYuakyvaz2GHGCfCzuinSKD_spent", "type":"integer", "value":10000000000 } ], "transfers": [ { "asset":"4LHHvYGNKJUg5hj65aGD5vgScvCBmLpdRFtjokvCjSL8", "amount":10000000000 } ], "issues":[], "reissues":[], "burns":[], "leases":[], "leaseCancels":[], "invokes":[] } } Version 1 Invoke Script Transaction Binary Format Version 1 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$16$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$1$$. $$4$$ Chain ID Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6$$ dApp See Address Binary Format, Alias Binary Format S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$. $$7.1$$ Function presence flag Byte $$1$$ $$0$$ — the default function of the dApp is invoked. $$1$$ — function from the current transaction should be invoked in the dApp. $$7.2$$ Function call ID Byte $$1$$ Constant. The value must be $$9$$. $$7.3$$ Function type ID Byte $$1$$ Constant. The value must be $$1$$. 
$$7.4$$ Function name length Int $$4$$ $$7.5$$ Function name function String Up to $$255$$ $$7.6.1$$ Amount of arguments of the function Int $$4$$ $$7.6.2$$ ID of argument 1 type type Byte $$1$$ $$0$$ — argument type is long. $$1$$ — argument type is an array of bytes. $$2$$ — argument type is a string. $$6$$ — argument type is logical True. $$7$$ — argument type is logical False. $$11$$ – argument type is list. $$7.6.3$$ Argument 1 value Options are: 1) Long. 2) Array[Byte]. 3) String. 4) Logical True. 5) Logical False. 6) List. S $$S = 8$$, if argument type is long. If the argument type is an array of bytes, string, or list, the field size is limited only by the total transaction size. If the type is list, then 1) its length must not exceed $$1000$$ elements. 2) amount of its elements represents first $$4$$ bytes of the current field. 3) each list element is serialized similarly to the function argument: the element type ID takes first place followed by the element’s value. $$S = 0$$, if argument type is logical True or False. $$7.6.4$$ ID of argument 2 type type Byte $$1$$ $$0$$ — argument type is long. $$1$$ — argument type is an array of bytes. $$2$$ — argument type is a string. $$6$$ — argument type is logical True. $$7$$ — argument type is logical False. $$11$$ – argument type is list. $$7.6.5$$ Argument 2 value Options are: 1) Long. 2) Array[Byte]. 3) String. 4) Logical True. 5) Logical False. 6) List. S $$S = 8$$, if argument type is long. If the argument type is an array of bytes, string, or list, the field size is limited only by the total transaction size. If the type is list, then 1) its length must not exceed $$1000$$ elements. 2) amount of its elements represents first $$4$$ bytes of the current field. 3) each list element is serialized similarly to the function argument: the element type ID takes first place followed by the element’s value. $$S = 0$$, if argument type is logical True or False. $$7.6.[2 × N]$$ ID of argument N type type Byte $$1$$ 0 — argument type is long. 1 — argument type is an array of bytes. 2 — argument type is a string. 6 — argument type is logical True. 7 — argument type is logical False. 11 – argument type is list. $$7.6.[2 × N + 1]$$ Argument N value Options are: 1) Long. 2) Array[Byte]. 3) String. 4) Logical True. 5) Logical False. 6) List. S $$S = 8$$, if argument type is long. If the argument type is an array of bytes, string, or list, the field size is limited only by the total transaction size. If the type is list, then 1) its length must not exceed $$1000$$ elements. 2) amount of its elements represents first $$4$$ bytes of the current field. 3) each list element is serialized similarly to the function argument: the element type ID takes first place followed by the element’s value. $$S = 0$$, if argument type is logical True or False. $$8.1$$ Amount of payments Short $$2$$ $$8.2$$ Payment 1 length Short $$2$$ $$8.3$$ Amount of token in payment 1 amount Long $$8$$ $$8.4$$ Flag of payment 1 token Byte $$1$$ $$0$$DecentralCoins. $$1$$ — other token. $$8.5$$ ID of payment 1 token Array[Byte] $$32$$ Field is applicable if the token is not DecentralCoins. $$8.[4 × N – 2]$$ Payment N length Short $$2$$ $$8.[4 × N – 1]$$ Amount of token in payment N amount Long $$8$$ $$8.[4 × N]$$ Flag of payment N token Byte $$1$$ $$0$$DecentralCoins. $$1$$ — other token. $$8.[4 × N + 1]$$ ID of payment N token Array[Byte] $$32$$ Field is applicable if the token is not DecentralCoins. 
$$9$$ Transaction fee fee Long $$8$$ $$10.1$$ Flag of fee token Byte $$1$$ $$0$$DecentralCoins. $$1$$ — other token. $$10.2$$ Fee token ID feeAssetId Array[Byte] S $$S = 0$$, if token is DecentralCoins. $$S = 32$$, if it is other token. $$11$$ Transaction timestamp timestamp Long $$8$$ $$12$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The maximum number of payments is $$10$$. The maximum size of transaction including proofs is $$5120$$ bytes. ##### Issue Transaction Binary Format Version 3 message IssueTransactionData { string name = 1; string description = 2; int64 amount = 3; int32 decimals = 4; bool reissuable = 5; bytes script = 6; }; Issue Transaction Binary Format Version 3 Field Size Description name From $$4$$ to $$16$$ bytes Token name. description From $$0$$ to $$1000$$ bytes Token description. amount $$8$$ bytes Amount of token to issue, specified in the minimum fraction (“cents”). decimals $$1$$ byte Number of decimal places. reissuable $$1$$ byte Reissue availability flag. script Up to $$8192$$ bytes Version 2 Issue Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$0$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$0$$ Value must be $$3$$. $$3$$ Transaction version version Byte $$0$$ Value must be $$2$$. $$4$$ Chain ID chainId Byte $$0$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] 32 $$6.1$$ Token name length Short $$2$$ $$6.2$$ Token name name Array[Byte] From $$4$$ to $$16$$ $$7.1$$ Token description length Short $$2$$ $$7.2$$ Token description description Array[Byte] From $$0$$ to $$1000$$ $$8$$ Amount of the token that will be issued quantity Short $$8$$ $$9$$ Number of decimal places of the token decimals Byte $$0$$ $$10$$ Reissue flag reissuable Boolean $$0$$ If the value is $$0$$, then token reissue is not possible. If the value is $$1$$, then token reissue is possible. $$11$$ Transaction fee fee Short $$8$$ $$12$$ Transaction timestamp timestamp Short $$8$$ $$13.1$$ Script existence flag Boolean $$0$$ If the value is $$0$$, then the token does not have a script. If the value is $$1$$, then the token has a script. $$13.2$$ Script length in bytes Short S $$S = 0$$ if the value of the script existence flag field is $$0$$. $$S = 2$$ if the value of the script existence flag field is 1. $$13.3$$ Asset script script String S $$S = 0$$ if the value of the script existence flag field is $$0$$. $$0 < S ≤ 8192$$, if the value of the script existence flag field is 1. $$14$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$82$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6.1$$, $$6.2$$, $$7.1$$, $$7.2$$, $$8$$, $$9$$, $$10$$, $$11$$, $$12$$, $$13.1$$, $$13.2$$ and $$13.3$$ are the transaction body bytes. 
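The proofs field used throughout these formats follows the same size rule: $$S = 3$$ for an empty array, otherwise $$S = 3 + 2 × N + 64 × N$$, with at most $$8$$ proofs of $$64$$ bytes each. A minimal sketch of that bookkeeping (the helper is illustrative only):

```python
# Illustrative sketch of the proofs-size rule quoted in the tables above.

MAX_PROOFS = 8
PROOF_SIZE = 64  # bytes per proof

def proofs_block_size(num_proofs: int) -> int:
    """Size in bytes of the serialized proofs block."""
    if not 0 <= num_proofs <= MAX_PROOFS:
        raise ValueError("a transaction may carry at most 8 proofs")
    if num_proofs == 0:
        return 3
    return 3 + 2 * num_proofs + PROOF_SIZE * num_proofs

print(proofs_block_size(0))  # 3
print(proofs_block_size(1))  # 69
print(proofs_block_size(8))  # 531
```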
JSON Representation of Transaction { "type":3, "id":"FTQvw9zdYirRksUFCKDvor3hiu2NiUjXEPTDEcircqti", "sender":"3PPP59J1pToCk7fPs4d5EK5PoHJMeQRJCTb", "senderPublicKey":"E8Y8ywedRS9usVvvcuczn9hsSg1SNkQVBMcNeQEnjDTP", "fee":100000000, "feeAssetId":null, "timestamp":1548666518362, "proofs": [ "3X7GpKW1ztto1aJN5tQNByaGZ9jGkaxZNo4BT268obZckbXuNQHGKjAUxtqcSEes5aZNMaQi2JYBGeKpcaPTxpSC" ], "version":2, "assetId":"FTQvw9zdYirRksUFCKDvor3hiu2NiUjXEPTDEcircqti", "name":"DCVN", "quantity":990000000000000000, "reissuable":false, "decimals":8, "description":"Tài chính cho nền dân chủ", "script":null, "chainId":87, "height":1371069 } Version 1 Issue Transaction Binary Format Version 1 # Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$3$$. $$2$$ Transaction signature Array[Byte] $$64$$ $$3$$ Transaction type ID Byte $$1$$ This field duplicates field 1. $$4$$ Public key of the transaction sender Array[Byte] $$32$$ $$5.1$$ Token name length Short $$2$$ $$5.2$$ Token name Array[Byte] From $$4$$ to $$16$$ $$6.1$$ Token description length Short $$2$$ $$6.2$$ Token description Array[Byte] From $$0$$ to $$1000$$ $$7$$ Amount of the token that will be issued Long $$8$$ $$8$$ Number of decimal places of the token Byte $$1$$ $$9$$ Reissue flag Boolean $$1$$ $$10$$ Transaction fee Long $$8$$ $$11$$ Transaction timestamp Long $$8$$ The fields $$3$$, $$4$$, $$5.1$$, $$5.2$$, $$6.1$$, $$6.2$$, $$7$$, $$8$$, $$9$$, $$10$$ and $$11$$ are the transaction body bytes. ##### Lease Cancel Transaction Binary Format Version 3 message LeaseCancelTransactionData { bytes lease_id = 1; }; Lease Cancel Transaction Binary Format Version 3 Field Size Description lease_id $$32$$ bytes Lease ID. Version 2 Lease Cancel Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$9$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. $$4$$ Chain ID chainId Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6$$ Transaction fee fee Long $$8$$ $$7$$ Transaction timestamp timestamp Long $$8$$ $$8$$ Lease ID Array[Byte] $$32$$ $$9$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, and $$8$$ are the transaction body bytes. 
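As an illustration of the Lease Cancel Version 2 layout above, the following sketch assembles the transaction body bytes (fields $$2$$ through $$8$$). Big-endian encoding of the long fields and the helper name are assumptions of this sketch, not a reference implementation:

```python
import struct

# Illustrative assembly of the Lease Cancel (version 2) body bytes (fields 2 through 8).
# Big-endian longs and the helper name are assumptions of this sketch.

def lease_cancel_v2_body(chain_id: int, sender_pk: bytes,
                         fee: int, timestamp: int, lease_id: bytes) -> bytes:
    assert len(sender_pk) == 32 and len(lease_id) == 32
    body = bytes([9, 2, chain_id])        # type (9), version (2), chain ID
    body += sender_pk                      # 32-byte sender public key
    body += struct.pack(">Q", fee)         # 8-byte fee
    body += struct.pack(">Q", timestamp)   # 8-byte timestamp
    body += lease_id                       # 32-byte lease ID
    return body
```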
JSON Representation of Transaction { "type":9, "id":"7siEtrJAvmVzM1WDX6v9RN4qkiCtk7qQEeD5ZhE6955E", "sender":"3PMBXG13f89pq3WyJHHKX2m5zN6kt2CEkHQ", "senderPublicKey":"BEPNBjo9Pi9hJ3hVtxpwyEfXCW3qWUNk5dMD7aFdiHsa", "fee":100000, "feeAssetId":null, "timestamp":1548660629957, "proofs": [ "3cqVVsaEDzBz367KTBFGgMXEYJ2r3yLWd4Ha8r3GzmAFsm2CZ3GeNW22wqxfK4LNRFgsM5kCWRVhf6gu2Nv6zVqW" ], "version":2, "leaseId":"BggRaeNCVmzuFGohzF4dQeYXSWr8i5zNSnGtdKc5eGrY", "chainId":87, "height":1370970, "lease": { "id":"BggRaeNCVmzuFGohzF4dQeYXSWr8i5zNSnGtdKc5eGrY", "originTransactionId":"BggRaeNCVmzuFGohzF4dQeYXSWr8i5zNSnGtdKc5eGrY", "sender":"3PMBXG13f89pq3WyJHHKX2m5zN6kt2CEkHQ", "recipient":"3PMWRsRDy882VR2viKPrXhtjAQx7ygQcnea", "amount":406813214, "height":1363095, "status":"canceled", "cancelHeight":1370970, "cancelTransactionId":"7siEtrJAvmVzM1WDX6v9RN4qkiCtk7qQEeD5ZhE6955E" } } Version 1 Lease Cancel Transaction Binary Format Version 1 Field order number Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$9$$. $$2$$ Public key of the transaction sender Array[Byte] $$32$$ $$3$$ Transaction fee Long $$8$$ $$4$$ Transaction timestamp Long $$8$$ $$5$$ Lease ID Array[Byte] $$32$$ $$6$$ Transaction signature Array[Byte] $$64$$ The fields $$1$$, $$2$$, $$3$$, $$4$$, and $$5$$ are the transaction body bytes. ##### Lease Transaction Binary Format Version 3 message LeaseTransactionData { Recipient recipient = 1; int64 amount = 2; }; message Recipient { oneof recipient { bytes public_key_hash = 1; string alias = 2; }; }; Lease Transaction Binary Format Version 3 Field Size Description recipient.public_key_hash $$20$$ bytes Recipient’s account public key hash (a component of an address, see the address binary format article). recipient.alias From $$4$$ to $$30$$ bytes Recipient’s alias. amount $$8$$ bytes Amount of DecentralCoins to lease (that is, amount of Decentralites multiplied by $$10^{8}$$). Version 2 Lease Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$8$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. $$4$$ Reserved field Byte $$1$$ Value must be equal to $$0$$. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6$$ Address or alias of the recipient recipient S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$. $$7$$ Amount of DecentralCoins that will be leased to the account amount Long $$8$$ $$8$$ Transaction fee fee Long $$8$$ $$9$$ Transaction timestamp timestamp Long $$8$$ $$10$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, $$8$$ and $$9$$ are the transaction body bytes. 
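The recipient field used above (and in several other formats) starts with a flag byte: $$1$$ means the field is a $$26$$-byte address, $$2$$ means it is an alias with a total size between $$8$$ and $$34$$ bytes. A hedged sketch of that encoding follows; the internal alias layout (chain ID byte plus a two-byte length prefix) is an assumption of this sketch:

```python
# Illustrative recipient-field encoder following the rule quoted above:
# flag byte 1 -> 26-byte address (total size 26); flag byte 2 -> alias (size 8..34).
# The internal alias layout (chain ID byte + 2-byte length + name) is assumed here.

def encode_recipient(address: bytes = None, alias: str = None,
                     chain_id: int = 87) -> bytes:
    if address is not None:
        assert len(address) == 26 and address[0] == 1   # address version byte is 1
        return address                                   # the address itself is the field
    name = alias.encode("ascii")
    assert 4 <= len(name) <= 30                          # documented alias length range
    return bytes([2, chain_id]) + len(name).to_bytes(2, "big") + name
```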
JSON Representation of Transaction { "type":8, "id":"J6jZCzLpWJX8EDVhopKFx1mcbFizLGHVb44dvqPzH4QS", "sender":"3PMYNm8hshzCNjZ8GpPta5SyN7qBTEzS7Kw", "senderPublicKey":"GNswAY61mER5ZyUFeDBo1UyKGkPSSmmnd6yj7axN2n8f", "fee":100000, "feeAssetId":null, "timestamp":1548660916755, "proofs": [ "2opTj7mGKXLRajkJ78wN4ctSWqTeWtvisHaR8BnL2amqJ2KB313BbcpDYJKcqr7o7EpYjL5tppMz2pGjUMWbJe9b" ], "version":2, "amount":14000000000, "recipient":"3PMWRsRDy882VR2viKPrXhtjAQx7ygQcnea", "height":1370973, "status":"canceled" } Version 1 Lease Transaction Binary Format Version 1 # Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$8$$. $$2$$ Public key of the transaction sender Array[Byte] $$32$$ $$3$$ Address or alias of the recipient S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$. $$4$$ Amount of DecentralCoins that will be leased to the account Long $$8$$ $$5$$ Transaction fee Long $$8$$ $$6$$ Transaction timestamp Long $$8$$ $$7$$ Transaction signature Array[Byte] $$64$$ The fields $$1$$, $$2$$, $$3$$, $$4$$, $$5$$ and $$6$$ are the transaction body bytes. ##### Mass Transfer Transaction Binary Format Version 2 message MassTransferTransactionData { message Transfer { Recipient recipient = 1; int64 amount = 2; }; bytes asset_id = 1; repeated Transfer transfers = 2; bytes attachment = 3; }; message Recipient { oneof recipient { bytes public_key_hash = 1; string alias = 2; }; } Mass Transaction Binary Format Version 2 Field Size Description asset_id $$32$$ bytes ID of token to transfer. transfers.recipient.public_key_hash $$20$$ bytes Recipient’s account public key hash (a component of an address, see the address binary format article). transfers.recipient.alias From $$4$$ to $$30$$ bytes Recipient’s alias. transfers.amount $$8$$ bytes Amount of token to transfer, specified in the minimum fraction (“cents”). attachment Up to $$140$$ bytes Arbitrary data (typically a comment to transfer). The maximim number of transfers is $$100$$. JSON Representation of Transaction { "type":11, "sender":"3P2rvn2Hpz6pJcH8oPNrwLsetvYP852QQ2m", "senderPublicKey":"5DphrhGy6MM4N3yxfB2uR2oFUkp2MNMpSzhZ4uJEm3U1", "fee":5100000, "feeAssetId":null, "timestamp":1528973951321, "proofs": [ "FmGBaWABAy5bif7Qia2LWQ5B4KNmBnbXETL1mE6XEy4AAMjftt3FrxAa8x2pZ9ux391oY5c2c6ZSDEM4nzrvJDo" ], "version":1, "assetId":"Fx2rhWK36H1nfXsiD4orNpBm2QG1JrMhx3eUcPVcoZm2", "attachment":"xZBWqm9Ddt5BJVFvHUaQwB7Dsj78UQ5HatQjD8VQKj4CHG48WswJxUUeHEDZJkHgt9LycUpHBFc8ENu8TF8vvnDJCgfy1NeKaUNydqy9vkACLZjSqaVmvfaM3NQB", "transferCount":6, "totalAmount":500000000000, "transfers": [ {"recipient":"3PHnjQrdK389SbzwPEJHYKzhCqWvaoy3GQB","amount":5000000000}, {"recipient":"3PGNLwUG2GPpw74teTAxXFLxgFt3T2uQJsF","amount":5000000000}, {"recipient":"3P5kQneM9EdpVUbFLgefD385LLYTXY5J32c","amount":5000000000}, {"recipient":"3P2j9FZyygnVDCQvmSc41VCAKwwCQm8QUhA","amount":5000000000}, {"recipient":"3PNBZutLvMpjzxGAiQGqQuDyanhWyLi2Fhi","amount":5000000000}, {"recipient":"3P84vdYxzDPFbS5zj9J6yCkmKKA2QMo1DKA","amount":5000000000}, ], "height":1041197 } Version 1 Mass Transaction Binary Format Version 1 # Field JSON field name Field type Field size in bytes Comment $$1$$ Transaction type ID type Byte $$1$$ Value must be $$11$$. $$2$$ Transaction version version Byte $$1$$ Value must be $$1$$. 
$$3$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$4.1$$ Flag DecentralCoins/token Byte $$1$$ Value is $$0$$ for transferring DecentralCoins. Value is $$1$$ for transferring other tokens. $$4.2$$ Token ID assetId Array[Byte] S $$S = 0$$ if the value of the flag DecentralCoins/token field is $$0$$. $$S = 32$$ if the value of the flag DecentralCoins/token field is $$1$$. $$5.1$$ Number of transfers transferCount Short $$2$$ $$5.2$$ Address or alias of the recipient recipient S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$. $$5.3$$ Amount of tokens in the transfer 1 amount Long $$8$$ $$5.4$$ Address or alias of the recipient recipient S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$. $$5.5$$ Amount of tokens in the transfer 2 amount Long $$8$$ $$5.[2 × N]$$ Address or alias of the recipient recipient S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$. $$5.[2 × N + 1]$$ Amount of tokens in the transferN amount Long $$8$$ $$6$$ Transaction timestamp timestamp Long $$8$$ $$7$$ Transaction fee fee Long $$8$$ $$8.1$$ Attachment length Short $$2$$ $$8.2$$ Attachment Array[Byte] $$2$$ Arbitrary data attached to the transaction. $$9$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$1$$, $$2$$, $$3$$, $$4.1$$, $$4.2$$, $$5.1$$, $$5.2$$, $$5.3$$, $$5.4$$, $$5.5$$, $$5.[2 × N]$$, $$5.[2 × N + 1]$$, $$6$$, $$7$$, $$8.1$$ and $$8.2$$ are the transaction body bytes. ##### Reissue Transaction Binary Format Version 3 message ReissueTransactionData { Amount asset_amount = 1; bool reissuable = 2; }; message Amount { bytes asset_id = 1; int64 amount = 2; }; Reissue Transaction Binary Format Version 3 Field Size Description asset_id $$32$$ bytes ID of token to reissue. asset_amount.amount $$8$$ bytes Amount of token to reissue, specified in the minimum fraction (“cents”). reissuable $$1$$ byte Reissue availability flag. Version 2 Reissue Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is 2 or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$5$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. $$4$$ Chain ID chainId Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6$$ Token ID assetId Array[Byte] $$32$$ $$7$$ Amount of token that will be reissued quantity Long $$8$$ $$8$$ Reissue flag reissuable Boolean $$1$$ If the value is $$0$$, then token reissue is not possible. If the value is $$1$$, then token reissue is possible. $$9$$ Transaction fee fee Long $$8$$ $$10$$ Transaction timestamp timestamp Long $$8$$ $$11$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. 
If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, $$8$$, $$9$$ and $$10$$ are the transaction body bytes. JSON Representation of Transaction { "type":5, "id":"27ETigYaHym2Zbdp4x1gnXnZPF1VJCqQpXmhszC35Qac", "sender":"3PLJciboJqgKsZWLj7k1VariHgre6uu4S2T", "senderPublicKey":"DjYEAb3NsQiB6QdmVAzkwJh7iLgUs3yDLf7oFEeuZjfM", "fee":100000000, "feeAssetId":null, "timestamp":1548521785933, "proofs": [ "5mEveeUwBdBqe8naNoV5eAe5vj6fk8U743eHGkhxhs3v9PMsb3agHqpe4EtzpUFdpASJegXyjrGSbynZg557cnSq" ], "version":2, "assetId":"GA4gB3Lf3AQdF1vBCbqGMTeDrkUxY7L83xskRx6Z7kEH", "quantity":200000, "reissuable":true, "chainId":87, "height":1368623 } Version 1 Reissue Transaction Binary Format Version 1 Field order number Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$5$$. $$2$$ Transaction signature Array[Byte] $$64$$ $$3$$ Transaction type ID Byte $$1$$ This field duplicates field $$1$$. $$4$$ Public key of the transaction sender Array[Byte] $$32$$ $$5$$ Token ID Array[Byte] $$32$$ $$6$$ Amount of token that will be reissued Long $$8$$ $$7$$ Reissue flag Boolean $$1$$ If the value is $$0$$, then token reissue is not possible. If the value is $$1$$, then token reissue is possible. $$8$$ Transaction fee Long $$8$$ $$9$$ Transaction timestamp Long $$8$$ The fields $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, $$8$$ and $$9$$ are the transaction body bytes. ##### Set Asset Script Transaction Binary Format Version 2 message SetAssetScriptTransactionData { bytes asset_id = 1; bytes script = 2; }; Set Asset Script Transaction Binary Format Version 2 Field Size Description asset_id $$32$$ bytes ID of asset. script Up to $$8192$$ bytes The maximim number of transfers is $$100$$. JSON Representation of Transaction { "type":15, "id":"FwYSpmVDbWQ2BA5NCBZ9z5GSjY39PSyfNZzBayDiMA88", "senderPublicKey":"AwQYJRHZNd9bvF7C13uwnPiLQfTzvDFJe7DTUXxzrGQS", "fee":100000000, "feeAssetId":null, "timestamp":1547201038106, "proofs": [ "nzYhVKmRmd7BiFDDfrFVnY6Yo98xDGsKrBLWentF7ibe4P9cGWg4RtomHum2NEMBhuyZb5yjThcW7vsCLg7F8NQ" ], "version":1, "assetId":"7qJUQFxniMQx45wk12UdZwknEW9cDgvfoHuAvwDNVjYv", "script":"base64:AQa3b8tH", "chainId":87, "height":1346345 } Version 1 Set Asset Script Transaction Binary Format Version 1 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$15$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$1$$. $$4$$ Chain ID chainId Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6$$ Token ID to which the asset script is attached assetId Array[Byte] $$32$$ $$7$$ Transaction fee fee Long $$8$$ $$8$$ Transaction timestamp timestamp Long $$8$$ $$9.1$$ Script existence flag Boolean $$1$$ If the value is $$0$$, then the token does not have a script. If the value is $$1$$, then the token has a script. $$9.2$$ Script size in bytes Short S $$S = 0$$ if the value of the script existence flag field is $$0$$. $$S = 2$$ if the value of the script existence flag field is $$1$$. $$9.3$$ Asset script script String S $$S = 0$$ if the value of the script existence flag field is $$0$$. 
$$0 < S ≤ 8192$$, if the value of the script existence flag field is $$1$$. $$10$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. The fields $$2$$, $$3$$, $$4$$, $$5$$, $$6$$, $$7$$, $$8$$, $$9.1$$, $$9.2$$ and $$9.3$$ are the transaction body bytes. ##### Set Script Transaction Binary Format Version 2 message SetScriptTransactionData { bytes script = 1; }; Set Script Transaction Binary Format Version 2 Field Size Description script Up to $$32,768$$ bytes JSON Representation of Transaction { "type":13, "id":"8Nwjd2tcQWff3S9WAhBa7vLRNpNnigWqrTbahvyfMVrU", "sender":"3PBSduYkK7GQxVFWkKWMq8GQkVdAGX71hTx", "senderPublicKey":"3LZmDK7vuSBsDmFLxJ4qihZynUz8JF9e88dNu5fsus5p", "fee":2082496, "feeAssetId":null, "timestamp":1537973512182, "proofs": [ "V45jPG1nuEnwaYb9jTKQCJpRskJQvtkBcnZ45WjZUbVdNTi1KijVikJkDfMNcEdSBF8oGDYZiWpVTdLSn76mV57" ], "version":1, "script":"base64:AQQAAAAEaW5hbAIAAAAESW5hbAQAAAAFZWxlbmECAAAAB0xlbnVza2EEAAAABGxvdmUCAAAAC0luYWxMZW51c2thCQAAAAAAAAIJAAEsAAAAAgUAAAAEaW5hbAUAAAAFZWxlbmEFAAAABGxvdmV4ZFt5", "chainId":87, "height":1190001 } Version 1 Set Script Transaction Binary Format Version 1 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$13$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$1$$. $$4$$ Chain ID chainId Byte $$1$$ $$87$$ — for Mainnet. $$84$$ — for Testnet. $$83$$ — for Stagenet. $$5$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$6.1$$ Script existence flag Boolean $$1$$ If the value is $$0$$, then the token does not have a script. If the value is $$1,$$ then the token has a script. $$6.2$$ Script length Short S $$S = 0$$ if the value of the script existence flag field is $$0$$. $$S = 2$$ if the value of the script existence flag field is $$1$$. $$6.3$$ Script script String S $$S = 0$$ if the value of the script existence flag field is $$0$$. $$0 < S ≤ 32,768$$, if the value of the script existence flag field is $$1$$. $$7$$ Transaction fee fee Long $$8$$ $$8$$ Transaction timestamp timestamp Long $$8$$ $$9$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where $$N$$ is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. ##### Transfer Transaction Binary Format Version 3 message TransferTransactionData { Recipient recipient = 1; Amount amount = 2; bytes attachment = 3; }; message Recipient { oneof recipient { bytes public_key_hash = 1; string alias = 2; }; message Amount { bytes asset_id = 1; int64 amount = 2; }; Transfer Transaction Binary Format Version 3 Field Size Description recipient.public_key_hash $$20$$ bytes Recipient’s account public key hash (a component of an address, see the address binary format article). recipient.alias From $$4$$ to $$30$$ bytes Recipient’s alias. amount.asset_id $$32$$ bytes ID of token to transfer. amount.amount $$8$$ bytes Amount of token to transfer, specified in the minimum fraction (“cents”). attachment Up to $$140$$ bytes Arbitrary data (typically a comment to transfer). 
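In the Set Script and Set Asset Script JSON examples above the compiled script travels as a string with a `base64:` prefix. A small sketch of recovering the raw script bytes and checking the documented size limit (the helper name is illustrative):

```python
import base64

# Illustrative decoder for the 'base64:' script strings shown in the JSON above.

MAX_ACCOUNT_SCRIPT_SIZE = 32_768  # bytes, per the Set Script format above

def decode_script(field: str) -> bytes:
    """Turn a 'base64:...' script field from a transaction JSON into raw bytes."""
    if not field.startswith("base64:"):
        raise ValueError("expected a 'base64:' prefixed script field")
    raw = base64.b64decode(field[len("base64:"):])
    if len(raw) > MAX_ACCOUNT_SCRIPT_SIZE:
        raise ValueError("script exceeds the documented size limit")
    return raw

print(len(decode_script("base64:AQa3b8tH")))  # 6
```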
Version 2 Transfer Transaction Binary Format Version 2 # Field JSON field name Field type Field size in bytes Comment $$1$$ Version flag Byte $$1$$ Indicates the transaction version is $$2$$ or higher. Value must be $$0$$. $$2$$ Transaction type ID type Byte $$1$$ Value must be $$4$$. $$3$$ Transaction version version Byte $$1$$ Value must be $$2$$. $$4$$ Public key of the transaction sender senderPublicKey Array[Byte] $$32$$ $$5.1$$ Transferring token type flag Byte $$1$$ Value is $$0$$ for transferring DecentralCoins. Value is $$1$$ for transferring other token. $$5.2$$ Transferring token ID assetId Array[Byte] S math:S = 0 if the value of the flag 5.1 is $$0$$. $$S = 32$$ if the value of the flag 5.1 is $$1$$. $$6.1$$ Fee token type flag Byte $$1$$ Value is 0 for fee in DecentralCoins. Value is $$1$$ for fee in other token. $$6.2$$ Fee token ID feeAssetId Array[Byte] S Token to pay the fee. $$S = 0$$ if the value of the flag 6.1 is $$0$$. $$S = 32$$ if the value of the flag 6.1 field is $$1$$. $$7$$ Transaction timestamp timestamp Long $$8$$ $$8$$ Amount of token in the transfer amount Long $$8$$ $$9$$ Transaction fee fee Long $$8$$ $$10$$ recipient S If the first byte of the field is $$1$$, then it is followed by address. S in this case equals $$26$$. If the first byte of the field is $$2$$, then it is followed by alias. In this case $$8 <= S <= 34$$ $$11.1$$ Attachment length Short $$2$$ $$11.2$$ Attachment attachment Array[Byte] Up to $$140$$ bytes Arbitrary data attached to the transaction. $$12$$ Transaction proofs proofs S If the array is empty, then $$S = 3$$. If the array is not empty, then $$S = 3 + 2 × N + 64 × N$$, where N is the number of proofs in the array. The maximum number of proofs in the array is $$8$$. The size of each proof is $$64$$ bytes. JSON Representation of Transaction { "type":4, "id":"2UMEGNXwiRzyGykG8voDgxnwHA7w5aX5gmxdcf9DZZjL", "sender":"3PCeQD3nAyHmzDSYBUnSPDWf9qxqzVU2sjh", "senderPublicKey":"6kn1XPDh2XUjVAgznxNousHq3EnKKLx7BRWyJzVFU76J", "fee":100000, "feeAssetId":null, "timestamp":1583160322998, "proofs": [ "2z5fnoigbsCBqRPWqTDeDmGJF6qJwnm2WLspen6c6qziTc73sBh9Kh81kPhUT9DGg7ANwqsXMxQauEvyw3RxNH7z" ], "version":2, "recipient":"3P45uRnyVygTnbEJNxc2CHLUiC4izQxbuuS", "assetId":"51LxAtwBXapvvTFSbbh4nLyWFxH6x8ocfNvrXxbTChze", "feeAsset":null, "amount":30077000000, "attachment":"2d6RhvQATwGbyv7dKT3L77758iJx", "height":1954598 } Version 1 Transfer Transaction Binary Format Version 1 # Field Field type Field size in bytes Comment $$1$$ Transaction type ID Byte $$1$$ Value must be $$4$$. $$2$$ Transaction signature Array[Byte] $$64$$ $$3$$ Transaction type ID Byte $$1$$ This field duplicates field $$1$$. $$4$$ Public key of the transaction sender Array[Byte] $$32$$ $$5.1$$ Transferring token type flag Byte $$1$$ Value is $$0$$ for transferring DecentralCoins. Value is 1 for transferring other token. $$5.2$$ Transferring token ID Array[Byte] S $$S = 0$$ if the value of the flag 5.1 is $$0$$. $$S = 32$$ if the value of the flag 5.1 is $$1$$. $$6.1$$ Fee token type flag Byte $$1$$ Value is $$0$$ for fee in DecentralCoins. Value is $$1$$ for fee in other token. $$6.2$$ Fee token ID Array[Byte] S Token to pay the fee. $$S = 0$$ if the value of the flag 6.1 is 0. $$S = 32$$ if the value of the flag 6.1 field is $$1$$. $$7$$ Transaction timestamp Long $$8$$ $$8$$ Amount of token in the transfer amount Long $$8$$ $$9$$ Transaction fee fee Long $$8$$ $$10$$ recipient S $$11.1$$ Attachment length Short $$2$$ $$11.2$$ Attachment attachment Array[Byte] Up to $$140$$ bytes. 
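Putting the Transfer Version 2 layout above together, here is a minimal sketch that lays out the fields in the listed order. The recipient argument is expected to be already encoded as described earlier, and big-endian longs plus the helper name are assumptions of this sketch:

```python
import struct

# Illustrative sketch of the Transfer (version 2) field layout listed above.
# Big-endian longs and the helper name are assumptions of this sketch.

def transfer_v2_fields(sender_pk: bytes, amount: int, fee: int, timestamp: int,
                       recipient: bytes, attachment: bytes = b"",
                       asset_id: bytes = None, fee_asset_id: bytes = None) -> bytes:
    assert len(sender_pk) == 32 and len(attachment) <= 140

    def optional_token(token_id):
        # Flag byte 0 for DecentralCoins, flag byte 1 followed by a 32-byte ID otherwise.
        return b"\x00" if token_id is None else b"\x01" + token_id

    out = b"\x00" + bytes([4, 2])            # version flag, type (4), version (2)
    out += sender_pk                          # 32-byte sender public key
    out += optional_token(asset_id)           # transferring token flag (+ ID)
    out += optional_token(fee_asset_id)       # fee token flag (+ ID)
    out += struct.pack(">QQQ", timestamp, amount, fee)
    out += recipient                          # pre-encoded address or alias field
    out += len(attachment).to_bytes(2, "big") + attachment
    return out
```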
##### Update Asset Info Transaction Binary Format Version 1 message UpdateAssetInfoTransactionData { bytes asset_id = 1; string name = 2; string description = 3; } Update Asset Transaction Binary Format Version 1 Field Size Description asset_id $$32$$ bytes Token ID. name From $$4$$ to $$16$$ bytes Token name. description From $$0$$ to $$1000$$ bytes Token description. #### Transaction Proofs Binary Format Transaction Proofs Binary Format # Field Type Size in bytes Comment $$1$$ Proofs version Byte $$1$$ Value is 1. $$2$$ Proofs count Short $$2$$ $$3$$ Proof 1 length Short $$2$$ Value is 64. $$4$$ Proofs 1 Array[Byte] $$64$$ $$5$$ Proof 2 length Short $$2$$ $$6$$ Proof 2 Array[Byte] $$64$$ The maximum number of proofs is $$8$$. ### Validation Rules #### Account Validation Account is valid then it is a valid Base58 string and the length of the corresponding array is $$26$$ bytes. Version of address (1st byte) is equal to $$1$$. The network byte (2nd byte) is equal to network ID. The checksum of address (last $$4$$ bytes) is correct. #### Transactions Validation ##### Transfer Transaction Validation Transfer transaction is valid then: • Recipient address is valid. If not, InvalidAddress validation result will be returned. • Size of attachment is less than or equals MaxAttachementSize($$140$$ bytes). In other case TooBigArray validation result will be returned. • Transaction’s amount is more than $$0$$, otherwise NegativeAmount validation result is returned. • Transaction’s fee is positive, otherwise InsufficientFee validation result is returned. • Adding fee to amount does not lead to Long overflow. In case of Long overflow OverflowError validation result will be returned. • Transaction’s signature is valid, otherwise InvalidSignature validation result is returned. ##### Issue Transaction Validation Issue transaction is valid then: • Sender’s address is valid. If not, InvalidAddress validation result will be returned. • Quantity of asset is positive, otherwise NegativeAmount validation result is returned. • Transaction’s fee is more than or equals MinFee($$100000000$$ Decentralites = $$1$$ DecentralCoin), in other case InsufficientFee validation result is returned. • Size of description is less than or equals MaxDescriptionLength($$1000$$ bytes), otherwise TooBigArray is returned. • Size of name is more than or equals MinAssetNameLength and less or equals MaxAssetNameLength, in other case InvalidName validation result will be returned. • Decimals is positive and less than or equals MaxDecimals, in other case TooBigArray is returned. • Transaction’s signature is valid, otherwise InvalidSignature validation result is returned. ##### Reissue Transaction Validation Reissue transaction is valid then: • Sender’s account is valid. Otherwise InvalidAddress validation result is returned. • Quantity is positive, in other case NegativeAmount validation result will be returned. • Transaction’s fee is positive, in other case InsufficientFee result will be returned. • Transaction’s signature is valid, otherwise InvalidSignature validation result is returned. #### Block Validations Block is valid then: • Block chain contains referenced blocks. • Block’s signature is valid. • Block’s consensus data is valid. • Block’s transactions are valid. ##### Consensus Data Validation Block’s consensus data is valid then: • Block creation time is no more than MaxTimeDrift($$15$$ seconds) in future. • Block’s transactions are sorted. 
This rule works only after $$1477958400000$$ on Testnet and $$1479168000000$$ on Mainnet. • Block chain contains parent block or block chain height is equal $$1$$. • Block’s base target is valid. • Block’s generator signature is valid. • Generator’s balance is more than or equals MinimalEffectiveBalanceForGeneration($$1000000000000$$ Decentralites). This rule always works on Testnet and works only after $$1479168000000$$ on Mainnet. • Block’s hit is less than calculated block’s target. • Voted features are sorted in ascending order and are not repeated. ##### Transactions Data Validation Block’s transactions are valid then: • Creation time of every transaction in block is less than block’s creation time no more than on MaxTxAndBlockDiff($$2$$ hours). • All transactions are valid against state. Transaction validation against state. Transactions are valid then: • Transaction is valid by transaction validation rules. • Transaction creation time more than block’s creation time no more than on MaxTimeForUnconfirmed($$90$$ minutes). This limitation works always on Testnet and only after $$1479168000000$$ on Mainnet. • Application of transaction to accounts should not lead to temporary negative balance. This rule works after $$1479168000000$$ on Mainnet and after $$1477958400000$$ on Testnet. • Changes made by transaction should be sorted by their amount. This rule works on both Mainnet and Testnet after $$1479416400000$$. • Application of transaction’s amount to current balance should not lead to Long overflow. • After application of all block’s transactions affected balances should not be negative. #### Unconfirmed Transactions Pool Validation Transaction could be inserted in unconfirmed transactions pool then: • Transaction is valid by transaction validation rules. • If transaction’s fee is more than or equals minimum fee that was set by the owner of a node. • There is a space for a new transaction if unconfirmed transactions pool. By default the pool is limited by $$1000$$ transactions. • unconfirmed transactions pool does not contain transaction with the same ID. • Transaction created not later than MaxTimeForUncofimed($$90$$ minutes) after the last block was created. • Transaction creation time is no more than MaxTimeDrift($$15$$ seconds) in future. • Transaction is valid against state. ## Glossary ### A Account An account is a cryptographically connected pair of public and on the private key. Accounts uniquely correlate transactions and orders with their senders. Account Data Storage An account data storage is the store of data records in the key-value format associated with the account. Each account has single data storage. The size of the account data storage is unlimited. Account Script An account script is a Ride script that has the following directives: {-# CONTENT_TYPE EXPRESSION #-} {-# SCRIPT_TYPE ACCOUNT #-} The account script is attached to the account using the set script transaction. Only one script can be attached to an account. An account with an account script attached is called a smart account. An address is a unique account identifier. The address can be represented as an alphanumeric string. Airdrop An airdrop is a simultaneous sending of tokens to multiple addresses. As a rule, the airdrop is used as an incentive for holders of a certain token as part of a marketing campaign to promote a project, increase its recognition, and attract investors. Alias An alias is a short, easy-to-remember address name. There cannot be two aliases with the same name. 
A single address can have multiple aliases. Asset An asset is a synonym for the token. Asset Script An asset script is a Ride script that has the following directives: {-# CONTENT_TYPE EXPRESSION #-} {-# SCRIPT_TYPE ASSET #-} The asset script is attached to the asset using the set asset script transaction. You can attach a script to an asset only at the time of the asset creation. However, you can change the script later, if needed. An asset with a script attached to it is called a smart asset. ### B Block A block is a unit of the blockchain chain. The block contains transactions: from $$0$$ to $$6000$$ inclusive. The maximum block size is $$1$$ MB. Blockchain A blockchain is a continuous sequential chain of blocks that are linked using cryptography. The blockchain has its own blockchain height. Block Height A block height is the block’s sequence number in the blockchain. Blockchain Height A blockchain height is a sequence number of the last block in the blockchain. Blockchain Network A blockchain network is a computer network that consists of node. Block Signature A block signature is a hash that the mining node receives when it signs the generated block with the private key of the mining account. ### C Consensus The consensus is a set of rules in accordance with which blockchain operates. DecentralChain uses the LPoS consensus. Cryptocurrency A cryptocurrency is a type of digital currency, the creation and control of which is based on cryptographic methods. ### D dApp A dApp is an account with the dApp script attached. dApp Script A dApp script is a Ride script used to create dApp. The dApp script has the following directive: {-# CONTENT_TYPE DAPP #-} ` dApp Script can be attached to the account using the set script transaction, and, as a result, the dApp will be created. Decentralized Application A decentralized application is an application that is stored and executed on the blockchain network. ### E Explorer Explorer (or DecentralChain Explorer) is an online service that displays DecentralChain blockchain data in a human-readable form. ### F Faucet A test network faucet (or faucet) is a DecentralChain Explorer tool that refills the test network accounts with the DecentralCoins test tokens. For one recharge, the user receives $$10$$ testnet DecentralCoins. ### G Gateway Gateway is a centralized payment solution that allows transferring cryptocurrencies from one blockchain to another and vice versa; as well as transferring fiat money to and out of the blockchain. Genesis Block The genesis block (or genesis) is the very first block of the blockchain. The genesis block contains one or several genesis transactions. Genesis Transaction Genesis transaction is a genesis block transaction that charges DecentralCoins to an account. The genesis transactions define the initial distribution of DecentralCoins between accounts during the creation of the blockchain. ### H Hash A hash is a result of applying a hash function. Hash Function A hash function (or fold function) is a function that converts an array of input data of arbitrary length into a bit string of a fixed length, performed by a certain algorithm. ### L Leasing Leasing is a temporary reversible transfer of DecentralCoins from one account to another to increase the stability and security of the network, as well as potentially get mining reward. 
Note that the DecentralCoin tokens are not actually being transferred to another account, they remain on the sender’s balance, however, they are ‘frozen’ and cannot participate in the buying and selling operations, as well as they cannot be sent to another account. The leased tokens provide the leasing recipient with a greater chance of mining a block. The recipient of the lease can share the income from mining with the one who leased DecentralCoins to him. However, the DecentralChain protocol does not regulate the payment process for LPoS mining, this remains at the discretion of the miner. At any time, the sender can ‘unfreeze’ tokens by invoking the Lease Cancel transaction. LPoS LPoS (or Leased Proof-of-Stake) is a consensus algorithm in which the probability of generating the next block by the participant is proportional to the share of cryptocurrencies belonging to this participant or leased to this participant from their total supply. In other words, the more tokens on the account of the miner (own and leased to them), the higher the probability of generating the next block. ### M Mainnet The mainnet (or main network) is the main DecentralChain blockchain network. Matcher Matcher is a service that executes orders on the exchange. Matcher Fee A matcher fee is a fee that matcher takes from both accounts that participate in the exchange of the pair of tokens. Miner A miner is the owner of the mining node. Mining Mining is the process of generating a block by a mining node, as a result of which a new block is added to the blockchain and DecentralCoin tokens are issued. For block generation, miners receive a reward for mining, as well as transaction fees, according to the rules of the DecentralChain-M5 protocol. Mining Account A mining account is an account that the mining node uses to block the generated blocks. Mining Node A mining node is a node that can perform mining. Each mining node is a validating node node. Multisignature Multisignature is an implementation of an electronic signature that requires the use of several private key as a condition for transactions execution. ### N NFT NFT (Non-Fungible Token) is a tokens with unique ID. Two ‘regular’ tokens can not be distinguished from each other — they are the same, i.e. fungible. Each NFT is unique; there cannot be two identical NFTs. Most often NFTs are used in games. Node A node is a host that is connected to the blockchain network using the DecentralChain node application. The node stores blocks, sends and validates transactions. ### O Oracle Oracle is a provider of data from the outside world to the blockchain. Oracle Card An oracle card is a public description of the oracle in the blockchain according to a standardized protocol in the form of a data transaction. Order Order (or exchange order) is an instruction to buy or sell a tokens on the exchange. ### P PoS PoS (Proof-of-Stake) is a consensus algorithm in which the probability of generating the next block is proportional to the share of cryptocurrencies belonging to this participant from their total supply. In other words, the more tokens on the account of a miner, the higher the probability of generating the next block. PoW PoW (Proof-of-Work) is a consensus algorithm in which it is required to perform a complex calculation in order to generate a new block. That is, the higher the performance of the miner’s equipment, the higher the probability of generating the next block. Private Key The private key is one of a pair of account keys. 
The account owner signs the transaction with the private key before sending it, and, as a result, gets the digital signature of the transaction. Public Key The public key is one of a pair of account keys. It is shared openly and is used to verify the digital signature of transactions created with the corresponding private key. ### R Ride Ride is a functional expression-based programming language. Ride is used to write scripts. The language has strong static typing, it is case sensitive, has no loops and goto-like expressions, and therefore it is Turing-incomplete. ### S Script A script is the source code in the Ride language. There are three types of scripts: dApp script, account script, asset script. Secret Phrase Secret phrase (or Seed) is a set of characters (usually, it is 15 English words with spaces between them) that allows you to access your DecentralChain address and, accordingly, the funds on your account. When registering an account, you are asked to keep your secret phrase safe. Smart Account A smart account is an account with an account script attached. Only one script can be attached to an account. The account script is attached to the account using the set script transaction. Smart Asset A smart asset is a token with an asset script attached. Stagenet Stagenet (or staging network) is the DecentralChain blockchain network, which is used for experiments, intermediate testing of new functionality, as well as providing access for the DecentralChain community to intermediate releases. It is important to consider that this network is unstable; a frequent rollback of blockchain data to the N-th height in the past is possible. ### T Test Network Test network (or testnet) is a DecentralChain blockchain test network, which is used by developers to test their products, and by users to get acquainted with the blockchain. Token A token is a blockchain object that represents another object from the physical or virtual world or an abstract concept. Transaction Transaction is an action on the blockchain on behalf of the account. Transactions can be sent only from the account — thus, any transaction can be correlated with a certain account. Transaction Body Bytes Transaction body bytes are the bytes of the transaction fields that are signed by the sender to produce the transaction signature or proofs; the exact set of fields is listed in the binary format of each transaction type. ### U UTX pool UTX pool (or Unconfirmed Transactions pool) is a pool of a node's unconfirmed transactions that are waiting for validation. ### V Validating Node A validating node is a node that validates transactions. ### W Decentralites One Decentralite is 1/100 000 000 DecentralCoin. 1 Decentralite is the minimum number of DecentralCoins that you can work with within the DecentralChain blockchain. DecentralCoins DecentralCoin is the main token of the DecentralChain blockchain. 1 DecentralCoin equals 100,000,000 Decentralites. WCT
auto_math_text
web
# Would a Deeply Bound $b\bar{b}b\bar{b}$ Tetraquark Meson be Observed at the LHC?

Estia Eichten, Zhen Liu

###### Abstract

There has been much theoretical speculation about the existence of a deeply bound tetra-bottom state. Such a state would not be expected to be more than a GeV below threshold. If such a state exists below the threshold, it would be narrow, as Zweig-allowed strong decays are kinematically forbidden. Given the observation of $\Upsilon$ pair production at CMS, such a state with a large branching fraction into $\Upsilon\Upsilon^*$ is likely discoverable at the LHC. The discovery mode is similar to the SM Higgs decaying into four leptons through the $ZZ^*$ channel. The testable features of both the production and the four-lepton decays of such a tetra-bottom ground state are presented. The assumptions required for each feature are identified, allowing the application of our results more generally to a resonance decaying into four charged leptons (through the $\Upsilon\Upsilon^*$ channel) in the same mass region.

Theoretical Physics Department, Fermi National Accelerator Laboratory, Batavia, IL, 60510. Preprint: FERMILAB-PUB-17-395-T.

## 1 Introduction

Since the discovery of the X(3872) Choi:2003ue (), the possibility of meson states with four valence quarks has received considerable attention. Many other quarkonium-like states, the so-called XYZ states Agashe:2014kda (), have since been observed. Theoretical models have been proposed to explain these states as systems involving a heavy quark-antiquark pair (c or b) and a light quark-antiquark pair (u, d, s) Brambilla:2010cs (); Bodwin:2013nua (). In particular, the discovery of isospin-one resonances with hidden heavy flavor quarks, the $Z_b$ and $Z_c$ states Patrignani:2016xqp (), makes an interpretation of all these states without additional light valence quarks impossible. In all the presently observed XYZ states, the tetraquark state is very near or above the threshold for strong decays (Zweig allowed) into a pair of heavy-light mesons. It is therefore natural to ask what happens as the mass of the lighter quarks is raised so that all four quarks become heavy. Could the binding become stronger as the mass increases, as is observed for heavy quark-antiquark (quarkonium) systems? This could lead to narrow, deeply bound tetraquark systems (without Zweig-allowed strong decays to quarkonium states). There are some theoretical reasons to suggest this may be the case. In the QED analog, the lowest state of two positronium atoms (a positronium molecule) is bound Wheeler:1946 (); Hyllerass:1947 (); Varga:1998 () and was unambiguously observed in 2007 Cassidy:2007 (). Similarly, in the perturbative NRQCD limit of four heavy quarks, the Van der Waals force between two color singlet mesons separated by a large distance is attractive Brambilla:2017ffe (). The heaviest tetraquark system involves four bottom flavored quarks. If the mass of the lowest such tetraquark state were below threshold, decays would occur only through the annihilation of one quark-antiquark pair and the state would be very narrow. In the following sections this possibility and its consequences for observation at the LHC are explored in detail.

## 2 States with four heavy quarks

If all four quarks are heavy, we may use NRQCD to study these systems.
There are three approaches to studying these systems: (1) direct measurement of the spectrum using Lattice QCD, (2) the QCD sum rule approach, and (3) non-relativistic potential models motivated by QCD expectations. Direct lattice calculations should be very informative but have not yet been done. If the ground state of the four-quark system were significantly below the threshold for pair production of two quarkonium states in the same channel, the required lattice calculation would be greatly simplified. Some calculations using the QCD sum rule approach have been presented recently Chen:2016jxd (); Wang:2017jtz (). They conclude that for the 4 b quark system the ground states are below the strong decay threshold, but for the 4 c quark systems all the states are above the thresholds for strong decay. The third approach, using QCD-inspired potential models, is discussed below.

The Hamiltonian, $H$, for four heavy quarks is the sum of the non-relativistic kinetic energy, $T_{\rm kin}$, and a potential energy, $V$, which expresses the interactions between the heavy quarks,

$$H = T_{\rm kin} + V. \qquad (1)$$

Consider two quarks at positions $\mathbf{x}_1,\mathbf{x}_2$ and two antiquarks at positions $\mathbf{x}_3,\mathbf{x}_4$, respectively. In the non-relativistic limit the quark spin can be treated as a relativistic correction. The overall center-of-mass position and angular momentum $L$ separate as in the usual two-body Schrödinger equation. However, we are left with six variables and a very complicated Hamiltonian to solve for the energies and wavefunctions of the various states. This Hamiltonian can only be solved numerically, so we are limited here to presenting some general remarks about its form. Denote the relative distances between the four quarks by the six values $r_{ij}$. The short-distance behaviour of the potential is given by perturbative QCD. In lowest order

$$V = V_{\rm pQCD} + V_{\rm string}. \qquad (2)$$

$V_{\rm pQCD}$ is the perturbative one-gluon-exchange term, of the form

$$V_{\rm pQCD} = \sum_{i,j\ {\rm for}\ i<j} C_{ij}\, v(r_{ij}), \qquad (3)$$

where the $C_{ij}$ are the SU(3) Clebsch-Gordan coefficients for single gluon exchange and $v(r)=\alpha_s/r$. In the limit that all quark masses are extremely large, the ground state is determined by perturbative QCD alone. However, not even the b quark is sufficiently heavy to ignore the non-perturbative QCD interactions. The long-distance part is modeled by the string terms shown in Figure 1. It is determined by the shortest path that creates a local color singlet state: $V_1$ and $V_2$ are the string energies of the two meson-meson pairings, and $V_3 = \sigma L$, with $L$ the length of the shortest path that couples all the quarks (see Fig. 1). The string tension is denoted $\sigma$. This form has the interesting behaviour of flipping from one configuration to another as the relative distances change. The form of this potential is consistent with recent lattice studies of the tetraquark static potential Bicudo:2017usw ().

Unlike the usual mesons and baryons, the tetraquark system has two separate color singlet combinations: the color representation of the four quarks can be decomposed into a singlet in two distinct ways [pairing quark 1 with antiquark 3 and quark 2 with antiquark 4, or, in an alternate basis, quark 1 with antiquark 4 and quark 2 with antiquark 3], i.e. there are two unique ways to get a singlet. Thus the wavefunction of the tetraquark states has two components in color space. If the quarks (1,2,3,4) have color indices (i,j,k,l), we can choose the basis $(\delta_{ik}\delta_{jl})$ for $\psi_I$ and $(\delta_{il}\delta_{jk})$ for $\psi_{II}$ to represent these two components. So, properly, the potential is a matrix in color space. Thus $V_{\rm pQCD}$ is given by

$$V_{\rm pQCD} = \begin{pmatrix} -\tfrac{4}{3}\big(v(r_{13})+v(r_{24})\big) & \tfrac{4}{9}\big(v(r_{12})+v(r_{34})\big) \\ \tfrac{4}{9}\big(v(r_{12})+v(r_{34})\big) & -\tfrac{4}{3}\big(v(r_{14})+v(r_{23})\big) \end{pmatrix} \qquad (4)$$

with $v(r)=\alpha_s/r$ as above. In a similar way $V_{\rm string}$ can be written in the form

$$V_{\rm string}\begin{pmatrix}\psi_I\\ \psi_{II}\end{pmatrix} = \begin{pmatrix} V_1+\beta_{11}V_3 & -\beta_{12}V_3 \\ -\beta_{21}V_3 & V_2+\beta_{22}V_3 \end{pmatrix}\begin{pmatrix}\psi_I\\ \psi_{II}\end{pmatrix}, \qquad (5)$$

where $\beta$ is the matrix projection of the string potential onto the two color states. In the various limits the expected behaviour is recovered.
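As a quick numerical illustration of the two-component color structure in Eqs. (4)-(5), the sketch below builds the 2x2 potential matrix for a sample configuration and checks that, when one meson-meson pairing is compact and the two pairs are far apart, the lowest eigenvalue approaches the sum of two isolated quarkonium potentials, the limit elaborated in the next paragraph. This is an illustrative toy calculation only: the values of alpha_s and sigma are made up, and the connected-string mixing (the beta terms, which are not specified here) is dropped.

```python
import numpy as np

alpha_s, sigma = 0.35, 0.18  # illustrative values (r in GeV^-1, energies in GeV)

def v(r):
    # One-gluon-exchange radial factor used in Eq. (4).
    return alpha_s / r

def potential_matrix(x):
    """2x2 color-space potential for quarks x[0], x[1] and antiquarks x[2], x[3]
    (rows/columns correspond to the psi_I, psi_II color bases)."""
    r = {(i, j): np.linalg.norm(x[i] - x[j]) for i in range(4) for j in range(i + 1, 4)}
    off = (4.0 / 9.0) * (v(r[(0, 1)]) + v(r[(2, 3)]))
    V_pqcd = np.array([
        [-(4.0 / 3.0) * (v(r[(0, 2)]) + v(r[(1, 3)])), off],
        [off, -(4.0 / 3.0) * (v(r[(0, 3)]) + v(r[(1, 2)]))],
    ])
    # Minimal stand-in for the string term: keep only the diagonal meson-meson
    # strings V1, V2 and drop the connected-string (beta) mixing.
    V_string = np.diag([sigma * (r[(0, 2)] + r[(1, 3)]),
                        sigma * (r[(0, 3)] + r[(1, 2)])])
    return V_pqcd + V_string

# Configuration where the (1,3) and (2,4) pairs are compact and far apart:
x = [np.array([0.0, 0, 0]), np.array([10.0, 0, 0]),
     np.array([0.5, 0, 0]), np.array([10.5, 0, 0])]
V = potential_matrix(x)
print(np.linalg.eigvalsh(V))                       # lowest eigenvalue of the matrix
print(2 * (-(4.0 / 3.0) * v(0.5) + sigma * 0.5))   # two isolated quarkonia, for comparison
```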
For $r_{13}$ and $r_{24}$ fixed and all the other distances becoming large, the solutions decompose into the two mesons $A=(1\bar{3})$ and $B=(2\bar{4})$. Similarly, with $r_{14}$ and $r_{23}$ fixed and the other distances large, the solutions decompose into the two mesons $(1\bar{4})$ and $(2\bar{3})$. Notice that in both of these cases the resulting Hamiltonian is just the usual potential for quarkonium states A and B. In either case it is useful to decompose the kinetic energy of the reduced system as

$$T_{\rm kin} = \frac{p_A^2}{2\mu_A} + \frac{p_B^2}{2\mu_B} + \frac{p_{AB}^2}{2\mu_{AB}}, \qquad (6)$$

where $\mathbf{r}_A$ and $\mathbf{r}_B$ are the relative positions of the quark and antiquark in mesons A and B respectively, $\mathbf{r}_{AB}$ is the relative position of the centers of mass of mesons A and B, and $\mu_A$, $\mu_B$, $\mu_{AB}$ are the associated reduced masses of the subsystems. (For example, for $A=(1\bar{3})$ and $B=(2\bar{4})$ with quark masses $m_n$: $\mathbf{r}_A=\mathbf{x}_1-\mathbf{x}_3$, $\mathbf{r}_B=\mathbf{x}_2-\mathbf{x}_4$, $\mathbf{r}_{AB}$ is the separation of the two pair centers of mass, $\mu_A=m_1 m_3/(m_1+m_3)$, $\mu_B=m_2 m_4/(m_2+m_4)$, and $\mu_{AB}=(m_1+m_3)(m_2+m_4)/(m_1+m_2+m_3+m_4)$.)

In the limit $r_{12}$ and $r_{34}$ fixed and the other distances becoming large, the diquark-antidiquark system is approximated. Here the dynamics is separated into the binding of the diquark and antidiquark systems into color $\bar{3}$ and $3$ states; these systems are then bound in the overall singlet state just like a quarkonium system. The wavefunction for the diquark-antidiquark state is simply a fixed linear combination of the two color components $\psi_I$ and $\psi_{II}$. Note that the color $6$ and $\bar{6}$ diquark systems will not be relevant for the low-lying states, because the short-range piece of the potential is repulsive in those channels and the lowest-order long-range string potential requires the diquark and antidiquark to be in the $\bar{3}$ and $3$ representations, respectively.

In general the full spectrum of systems with four heavy quarks has not yet been calculated. Even the dominant spatial contributions to the wavefunction of the ground state system remain unresolved. Under various assumptions tetraquark systems have been studied. Detailed studies within the Bethe-Salpeter approach have been presented by Heupel, Eichmann and Fischer Heupel:2012ua (); Eichmann:2015nra (); Eichmann:2015zwa () for tetraquark systems with lighter quark masses (up to the charm quark mass). However, only the lowest-order QCD one-gluon exchanges are included in the kernel at present. In the limit of sufficiently heavy quarks the inclusion of only the lowest-order gluon exchanges would be rigorous. In this heavy quark limit, ground state masses have been investigated using a variational technique by Czarnecki, Leng and Voloshin Czarnecki:2017vco (). They conclude that such tetraquark systems with all equal masses are not bound. More phenomenological approaches, in which it is assumed that the dynamics of the tetraquark system is approximated by a diquark-antidiquark system, have also been studied Berezhnoy:2011xn (). Here narrow tetraquark states below the threshold for strong decays are found for both the four-bottom and four-charm systems. Using the two-body subsystems for 4 heavy quarks, the tetraquark spectrum has also been studied Popovici:2014usa (). Bai, Lu and Osborne have studied the ground state of the $b\bar{b}b\bar{b}$ system including the non-perturbative string potential (Fig. 1) using the Diffusion Monte Carlo method Bai:2016int (). They find the ground state tetra-b quark state is bound, while Richard, Valcarce and Vijande argue that such states will not be bound Richard:2017vry (), and a phenomenological analysis of Karliner, Nussinov and Rosner puts this state just below the di-$\Upsilon$ threshold Karliner:2016zzc (). One can only conclude at present that the issue of binding awaits a definitive Lattice QCD calculation. We will discuss what can be said reliably in the next section.

## 3 Phenomenology of the Low-lying $b\bar{b}b\bar{b}$ tetraquark states

At leading order the spin splittings can be ignored and states are described by their radial quantum numbers and orbital angular momentum $L$.
The ground state would be expected to be fully symmetric, so $L=0$ with the subsystem angular momenta all zero as well. After adding spin there are in general six degenerate states. If the two quarks are identical, as in the $b\bar{b}b\bar{b}$ system, the diquark will be antisymmetric in color; then for the ground state the total spin of the two quarks (two antiquarks) must be in a symmetric state, hence the tetraquark system can have $J^{PC}=0^{++}$, $1^{+-}$, or $2^{++}$. In terms of the diquark-antidiquark basis the states are shown in Table 1.

### 3.1 Decay properties

The decay properties of this tetraquark state could be fully determined by the effective Lagrangian,

$$\Delta\mathcal{L} = \frac{1}{2}(\partial_\mu\phi)^2 - \frac{1}{2}\Lambda\,\phi\,\Upsilon_\mu\Upsilon^\mu + \ldots, \qquad (7)$$

where the dimensionful quantity $\Lambda$ characterizes the interaction between the ground state $\phi$ and the $\Upsilon$ states. In principle, $\Upsilon$ here represents the 1S, 2S, 3S, etc., states, and the coefficients could differ for different state combinations. We omit possible higher-dimensional interaction terms (or a more general form factor) that feature different Lorentz structures, as we anticipate them to generate sub-leading contributions to the production and decays of the tetraquark state. Assuming the state is a deeply bound state (below the $\Upsilon\Upsilon$ threshold), it is not unreasonable to assume that the ground state $\Upsilon(1S)$ dominates. Throughout this paper, unless otherwise noted, we only consider the state overlapping with the $\Upsilon(1S)$ through this basic interaction term. There is no a priori knowledge about the size of the parameter $\Lambda$. (The amplitude for the strong decay of a tetraquark state into two quarkonium states would be expected to be set by a typical QCD scale, i.e. about 200 MeV.)

Similar to the case explained above, if the underlying state is a pseudoscalar, a vector, or a tensor, different forms of the allowed effective field theory (EFT) for the di-Upsilon system arise. If such a state exists, the differential observables would help determine the structure of the EFT and thus the associated $J^{PC}$ of the tetraquark state. We tabulate the possible states and operators in Table 3, keeping only the lowest-dimensional operators. We anticipate that the leading observable state would be the tetraquark ground state, and thus list a few other possibilities to contrast and check; we omit the possibility of a spin-1 or CP-odd spin-2 state for simplicity. Considering the observability in the hadron collider environment, we focus on the purely leptonic decays of the tetraquark state. These purely leptonic decays can be understood as mediated by the intermediate (on-shell or off-shell) vector meson states $\Upsilon$. Since $\Upsilon$ pair production has been measured directly at both experiments with low background, such a resonant four-lepton final state will also be observable if produced with a sufficient rate. The production properties of this possible deeply bound tetraquark state will be discussed in the next section.

For the tetraquark state decaying into four-lepton final states, there are several important physics observables that help us understand the properties of this state. We discuss them in order. Assuming that the only observable decays are from $\phi\to\Upsilon\Upsilon^*\to 4\ell$, we can derive many useful differential distributions that are informative in identifying the physics origin of this state, similar to the case of the SM Higgs decaying to four leptons through intermediate $Z$-bosons Choi:2002jk (). We use the differential formalism detailed in Eq. (27), and include the additional off-shell suppression and velocity suppression factors for the invariant mass distribution detailed in Eq. (23), of Ref. Choi:2002jk ().
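As a point of orientation for the angular distributions discussed below, the two polar-angle shapes that enter this kind of formalism for a vector meson decaying to a lepton pair are roughly $(1+\cos^2\theta)$ for transverse and $\sin^2\theta$ for longitudinal polarization. The short sketch below simply normalizes and evaluates these generic shapes; it is not code from this analysis and makes no reference to the specific coefficients of Table 3.

```python
import numpy as np

# Normalized polar-angle distributions (in cos theta) of a lepton from
# V -> l+ l- in the vector-meson rest frame, for the two helicity classes
# invoked in the angular-distribution discussion below.
c = np.linspace(-1.0, 1.0, 201)
dc = c[1] - c[0]
transverse = 3.0 / 8.0 * (1.0 + c**2)    # helicity +-1: peaks in the forward/backward directions
longitudinal = 3.0 / 4.0 * (1.0 - c**2)  # helicity 0: peaks at cos theta = 0

# Crude check that both shapes integrate to ~1 over cos theta in [-1, 1]:
print(np.sum(transverse) * dc, np.sum(longitudinal) * dc)
```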
We note here that due to vector meson dominance, the axial-vector terms in these formulae are all zero. Furthermore, the coefficients of the interaction terms under consideration in Table 3 can be identified with the corresponding coefficients for the $0^{++}$, $0^{-+}$, and $2^{++}$ states, respectively, in Table 1 of Ref. Choi:2002jk () when matching the helicity components for the calculation. To study the differential distributions of the tetraquark ground state, we choose its mass to be slightly below the $\Upsilon\Upsilon$ threshold, 18.5 GeV, throughout the text unless noted otherwise. In the upper panel of Fig. 2 we show the (polar) angular distributions of the di-lepton pairs in their corresponding rest frames. For a $0^{++}$ state, the longitudinal helicity component is free of any additional suppression beyond the common off-shell propagator suppression. In contrast, the decays into the transverse vector states are suppressed by additional factors. These behaviors result in the dependence of the angular distribution on the mass of the off-shell $\Upsilon^*$. In the lower panel of this figure, we show the angular distribution for various assumptions on the spin and parity of the underlying resonant particle. For the $0^{-+}$ state, the interaction is dominated by the transversely polarized vector mesons and thus the polar-angular distribution of the leptons favors the forward and backward directions. In contrast, for both the $0^{++}$ and the $2^{++}$ cases the transversely polarized intermediate vector mesons are suppressed, and the distributions exhibit comparatively less angular dependence. In the upper panel of Fig. 3, we show the distribution of the angle between the decay planes of the $\Upsilon\Upsilon^*$ system for several benchmark values of the off-shell mass for a $0^{++}$ state. In the extreme case of the off-shell pair mass approaching zero, the distribution is completely flat, as the transverse helicity components vanish. In other cases, we can see modulations superimposed on the flat distribution from the longitudinal helicity component. In the lower panel of this figure we show the distribution of the angle between the decay planes for various assumptions on the spin and parity of the underlying resonant particle. The CP-odd state features the opposite behavior to the CP-even $0^{++}$ and $2^{++}$ states, as expected, since this plane angle is a CP-sensitive observable. In Fig. 4 we show the normalized off-shell invariant mass distribution for the process $\phi\to\Upsilon\ell^+\ell^-$. As anticipated from the off-shell suppression behavior, the off-shell dilepton system is inclined to have a high invariant mass, to minimize the off-shell suppression. Still, different underlying states result in different detailed behaviours of the off-shell dilepton invariant mass distribution. We can see that the $0^{++}$ and $2^{++}$ hypotheses give sharply peaked invariant mass distributions because they are S-wave processes, while the $0^{-+}$ hypothesis is P-wave suppressed.

### 3.2 Production properties

The production of the tetraquark state at the LHC will come mainly from the gluon-gluon initial state, with subsequent splitting into heavy quark pairs. These heavy quark pairs then form color singlet and color octet bottomonium states. When they have low relative momentum, these pairs of bottomonium states can form the tetraquark state. For simplicity, we focus on the contribution from the color singlet $\Upsilon\Upsilon$ state for the production, which provides a conservative estimate of the production rate for the tetraquark state.
The total inclusive production rate can then be expressed as

$$\sigma(pp\to\phi) = \int_{\tau_{\min}}^{\tau_{\max}} d\tau\, \frac{dL}{d\tau}\, \hat\sigma(gg\to\Upsilon\Upsilon)\, \frac{dPS_1}{dPS_2}\, |\langle\Upsilon\Upsilon|\phi\rangle|^2 \qquad (8)$$

$$\phantom{\sigma(pp\to\phi)} = \int_{\tau_{\min}}^{\tau_{\max}} d\tau\, \frac{dL}{d\tau}\, \hat\sigma(gg\to\Upsilon\Upsilon)\, \frac{8\pi\Lambda^2}{\tau s}, \qquad (9)$$

with

$$\frac{dL}{d\tau} = f_g\otimes f_g(\tau) = \int_\tau^1 \frac{dx}{x}\, f_g(x;Q^2)\, f_g\!\left(\frac{\tau}{x};Q^2\right), \qquad (10)$$

where $dPS_1$ and $dPS_2$ denote the one- and two-body phase space, respectively. The integration range is roughly determined by the sum of the masses of the constituent quarks up to the $\Upsilon\Upsilon$ threshold, assuming the partial sum rule to be valid. In this calculation, we adjust the pole mass of the $\Upsilon$ according to the bottom mass. A more rigorous treatment could be developed using the techniques of QCD sum rules applied to the four-current correlator Wang:2017jtz (), in an analogous way that finite energy sum rules have been employed to study the threshold region for heavy quark pair production in $e^+e^-$ collisions. The partonic cross section for di-Upsilon production in the s-wave production approximation can be found in Ref. Li:2009ug () for color singlet pair production and in Ref. Ko:2010xy () for color octet pair production. We adopt these formulas for the partonic cross sections and convolute them with NNPDF Ball:2012cx (). We reproduced their results and further verified our implementation of double vector quarkonium production against current LHC measurements Khachatryan:2014iia (); CMS-PAS-BPH-14-008 (). We obtain 39 fb for the total inclusive double $\Upsilon$ production and 11 fb with a rapidity cut of 2 on the final-state $\Upsilon$s at the 8 TeV LHC. The current experimental results are at the pb level CMS-PAS-BPH-14-008 (). For the $\Upsilon$ pair production, there will be contributions from double-parton scattering and from decays of higher excitation states, which will not contribute to the tetraquark ground state production. Hence, we use the matrix element squared obtained from Ref. Li:2009ug () for our estimation of the cross section. We further require the partonic center of mass energy to lie in a window between 17.7 GeV and 18.8 GeV, where the lower bound is four times the bottom mass and the upper bound is the $\Upsilon\Upsilon$ threshold. The decay branching fraction to four leptons near threshold can be approximated by the square of the branching fraction of the $\Upsilon$ to dileptons, BR($\Upsilon\to\ell\ell$) = 4.9% Patrignani:2016xqp (). With the above formalism for the production in Eq. (9), and following our parameterization of the wavefunction of the tetraquark state in Eq. (7), we can express the LHC production rate for such a ground state as

8 TeV: $\sigma(pp\to\phi\to 4\ell) \sim 3\left(\frac{\Lambda}{0.2\ \mathrm{GeV}}\right)^2$ fb

13 TeV: $\sigma(pp\to\phi\to 4\ell) \sim 5\left(\frac{\Lambda}{0.2\ \mathrm{GeV}}\right)^2$ fb.

The production rate for the whole process at the 13 TeV LHC is thus at the few-fb level for $\Lambda\sim 0.2$ GeV, with the precise value depending on which lepton final states are included. However, the ground state is also anticipated to have a sizable wavefunction overlap with the $\eta_b\eta_b$ channel, the production through which will further increase the cross section for the tetraquark state. (This wavefunction overlap will change the decay partial width to four leptons as well; hence, the expressions provided above should be viewed as an estimate of the typical production rate.) In addition to the production rate information, the rapidity distribution of the signal events can also be useful as a consistency check. In Fig. 5 we show the signal rapidity distribution for both LHC 8 TeV and 13 TeV, in red and blue lines, respectively. The tetraquark state turns out to be produced with a peak rapidity of 3.8 (4.4) for LHC 8 (13) TeV, which would further impact the kinematic distributions of the decay products. A general feature of the decay products should be having at least one dilepton pair in the forward region.
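As a rough numerical illustration of the gluon-luminosity factor in Eq. (10) and of the quadratic $\Lambda$ scaling quoted above, one can write the sketch below. It is illustrative only: it uses a crude toy gluon distribution (not the NNPDF set used here), the overall normalization is arbitrary, and the 5 fb reference value is simply the 13 TeV estimate quoted in the text.

```python
import numpy as np

def g_toy(x):
    # Toy gluon density: x*g(x) ~ (1 - x)^5 / x^0.2 (shape only; NOT the NNPDF set).
    return (1.0 - x) ** 5 * x ** (-0.2) / x

def dL_dtau(tau, n=2000):
    # Gluon-gluon luminosity, Eq. (10), evaluated on a log-spaced grid in x.
    x = np.logspace(np.log10(tau), 0.0, n)[:-1]   # avoid the x = 1 endpoint
    integrand = g_toy(x) * g_toy(tau / x) / x
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

sqrt_s = 13000.0                       # GeV
for m in (17.7, 18.5, 18.8):           # GeV window used in the text
    tau = (m / sqrt_s) ** 2
    print(m, dL_dtau(tau))             # luminosity varies only mildly across the window

# Quadratic scaling of the signal rate with the coupling Lambda (Eq. 9):
def sigma_4l(Lambda_GeV, sigma_ref_fb=5.0):      # ~5 fb at Lambda = 0.2 GeV, 13 TeV
    return sigma_ref_fb * (Lambda_GeV / 0.2) ** 2

print(sigma_4l(0.2), sigma_4l(0.4))              # 5.0 fb, 20.0 fb
```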
The bands in the figure indicate the variation of the rapidity distribution as the partonic center of mass energy varies between 17.7 GeV and 18.8 GeV. We also draw bands of typical rapidity acceptance for ATLAS/CMS and LHCb. The LHCb forward coverage features a generically larger acceptance for a low-lying state produced through gluon-gluon fusion. However, given the relatively low cross section for this process and the lower luminosity accumulated by LHCb, it would still be hard for such a state to be found in current data. On the other hand, the tetraquark state is anticipated to decay into many other final states, at least following the behavior of pair-produced $\Upsilon$ states, and LHCb may provide a unique probe or discovery of such a state with its lower thresholds designed to capture forward bottom hadrons. Although the acceptance should be applied to the final state leptons rather than to the resonant tetraquark state itself, the rapidity distribution does provide a testable property if this state is observed. Furthermore, since the rapidity behavior is driven by the gluon PDF, the distribution shown in the right panel of Fig. 5 would still hold for a generic resonant particle produced through the gluon-gluon-fusion process in the vicinity of the mass window under consideration.

## 4 Summary and outlook

In this paper we have investigated the observational details of detecting a bound tetra-bottom state with a mass below the di-$\Upsilon$ threshold. The ground state, $\phi$, would be very narrow and would likely have $J^{PC}=0^{++}$. The most promising discovery mode at the LHC would be through the decay of $\phi$ into four charged leptons, approximately described by an effective interaction of the form $\Lambda\,\phi\,\Upsilon_\mu\Upsilon^\mu$. Although decays involving higher $\Upsilon$ excitations might also be observed, the ground state $\Upsilon(1S)$ should dominate. With this simple model, many properties of the possible low-lying tetraquark state can be tested. We computed the expected angular distributions for the leptons arising from decays of the on-shell and off-shell $\Upsilon$ states as a function of the off-shell dilepton system invariant mass in Fig. 2. Furthermore, the off-shell dilepton mass dependent angular correlations between the decay planes of the $\Upsilon\Upsilon^*$ system can be found in Fig. 3. The off-shell dilepton invariant mass distribution should be peaked toward high invariant mass, as preferred by the off-shell propagator. We show the angular distribution of the dilepton system, the angular distributions between the decay planes, and the invariant mass distributions with different underlying assumptions about the spin and CP property of the tetraquark state in (the lower panels of) Fig. 2, Fig. 3, and Fig. 4, respectively. Furthermore, we estimated the possible cross section of the low-lying tetraquark state using partial sum rules and found its dependence on the model parameters, with a typical cross section of a few fb for $\Lambda\sim 0.2$ GeV. The rapidity distribution of such a tetra-bottom state should be dominated by the gluon-gluon-fusion process, which features a very forward behaviour due to the gluon PDF. This behaviour provides testable predictions and interesting implications for the complementarity between the ATLAS/CMS and LHCb experiments at the LHC. Finally, we note that the decay angular distributions are generic for a massive state that couples to an $\Upsilon$ plus a massive vector state in the mass range we consider, depending only on the $J^{PC}$ of the decaying state, and that the rapidity distribution of the production only depends on the PDF behavior of the initial state gluons.
Hence, most of our results would generically useful for testing low-lying states at the LHC. Acknowledgments: We thank Yang Bai, Kiel Howe, Ciaran Hughes for helpful discussion. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. ## References • (1) Belle Collaboration, S. K. Choi et al., Observation of a narrow charmonium - like state in exclusive B+- K+- pi+ pi- J / psi decays, Phys. Rev. Lett. 91 (2003) 262001, [hep-ex/0309032]. • (2) Particle Data Group Collaboration, K. A. Olive et al., Review of Particle Physics, Chin. Phys. C38 (2014) 090001. • (3) N. Brambilla et al., Heavy quarkonium: progress, puzzles, and opportunities, Eur. Phys. J. C71 (2011) 1534, [arXiv:1010.5827]. • (4) G. T. Bodwin, E. Braaten, E. Eichten, S. L. Olsen, T. K. Pedlar, and J. Russ, Quarkonium at the Frontiers of High Energy Physics: A Snowmass White Paper, in Proceedings, Community Summer Study 2013: Snowmass on the Mississippi (CSS2013): Minneapolis, MN, USA, July 29-August 6, 2013, 2013. • (5) Particle Data Group Collaboration, C. Patrignani et al., Review of Particle Physics, Chin. Phys. C40 (2016), no. 10 100001. • (6) J. A. Wheeler, Polyelectrons, Ann. NY Acad. Sci. 48 (1946) 219–238. • (7) E. A. Hylleraas and A. Ore, Binding energy of the positronium molecule, Phys. Rev. 71 (1947) 493–496. • (8) K. Varga, J. Usukura, and Y. Suzuki, Second bound state of the positronium molecule and biexcitons, Phys. Rev. Lett. 80 (1998) 1876–1879. • (9) D. B. Cassidy and A. P. Mills, The production of molecular positronium, Nature 449 (09, 2007) 195–197. • (10) N. Brambilla, V. Shtabovenko, J. Tarrús Castellà, and A. Vairo, Effective field theories for van der Waals interactions, Phys. Rev. D95 (2017), no. 11 116004, [arXiv:1704.03476]. • (11) W. Chen, H.-X. Chen, X. Liu, T. G. Steele, and S.-L. Zhu, Hunting for exotic doubly hidden-charm/bottom tetraquark states, arXiv:1605.01647. • (12) Z.-G. Wang, Analysis of the tetraquark states with QCD sum rules, Eur. Phys. J. C77 (2017), no. 7 432, [arXiv:1701.04285]. • (13) P. Bicudo, M. Cardoso, O. Oliveira, and P. J. Silva, Lattice QCD static potentials of the meson-meson and tetraquark systems computed with both quenched and full QCD, arXiv:1702.07789. • (14) W. Heupel, G. Eichmann, and C. S. Fischer, Tetraquark Bound States in a Bethe-Salpeter Approach, Phys. Lett. B718 (2012) 545–549, [arXiv:1206.5129]. • (15) G. Eichmann, C. S. Fischer, and W. Heupel, Four-point functions and the permutation group S4, Phys. Rev. D92 (2015), no. 5 056006, [arXiv:1505.06336]. • (16) G. Eichmann, C. S. Fischer, and W. Heupel, Tetraquarks from the Bethe-Salpeter equation, Acta Phys. Polon. Supp. 8 (2015) 425, [arXiv:1507.05022]. • (17) A. Czarnecki, B. Leng, and M. B. Voloshin, Stability of tetrons, arXiv:1708.04594. • (18) A. V. Berezhnoy, A. V. Luchinsky, and A. A. Novoselov, Tetraquarks Composed of 4 Heavy Quarks, Phys. Rev. D86 (2012) 034004, [arXiv:1111.1867]. • (19) C. Popovici and C. S. Fischer, Heavy tetraquark confining potential in Coulomb gauge QCD, Phys. Rev. D89 (2014), no. 11 116012, [arXiv:1403.5900]. • (20) Y. 
Bai, S. Lu, and J. Osborne, Beauty-full Tetraquarks, arXiv:1612.00012. • (21) J.-M. Richard, A. Valcarce, and J. Vijande, String dynamics and metastability of all-heavy tetraquarks, Phys. Rev. D95 (2017), no. 5 054019, [arXiv:1703.00783]. • (22) M. Karliner, S. Nussinov, and J. L. Rosner, states: masses, production, and decays, Phys. Rev. D95 (2017), no. 3 034011, [arXiv:1611.00348]. • (23) S. Y. Choi, D. J. Miller, M. M. Muhlleitner, and P. M. Zerwas, Identifying the Higgs spin and parity in decays to Z pairs, Phys. Lett. B553 (2003) 61–71, [hep-ph/0210077]. • (24) R. Li, Y.-J. Zhang, and K.-T. Chao, Pair Production of Heavy Quarkonium and B(c)(*) Mesons at Hadron Colliders, Phys. Rev. D80 (2009) 014020, [arXiv:0903.2250]. • (25) P. Ko, C. Yu, and J. Lee, Inclusive double-quarkonium production at the Large Hadron Collider, JHEP 01 (2011) 070, [arXiv:1007.3095]. • (26) R. D. Ball et al., Parton distributions with LHC data, Nucl. Phys. B867 (2013) 244–289, [arXiv:1207.1303]. • (27) CMS Collaboration, V. Khachatryan et al., Measurement of prompt pair production in pp collisions at √s = 7 TeV, JHEP 09 (2014) 094, [arXiv:1406.0484]. • (28) CMS Collaboration, Observation of pair production at CMS, Tech. Rep. CMS-PAS-BPH-14-008, CERN, Geneva, 2016.
auto_math_text
web
# [time 326] Re: [time 325] Re: Fisher information and relativity

Tue, 18 May 1999 23:33:55 +0900

Dear Matti,

Your question is meaningful. Indeed it cuts the seemingly continuous argument of Frieden, as I will explain below.

----- Original Message -----
From: Matti Pitkanen <matpitka@pcu.helsinki.fi>
Sent: Tuesday, May 18, 1999 5:48 PM
Subject: [time 325] Re: Fisher information and relativity

[snip]

> > > Is Fisher information still in question when one uses imaginary
> > > coordinate x0 = it?
>
> Coordinates correspond to a kind of parameters in Fisher information:
> unfortunately I have no clear picture about what kind of parameters
> are in question.

Parameters are coordinates in the book, at least as far as I have read. Other examples may be in the book.

> What troubled and still troubles me is whether the imaginary value of
> the parameter is indeed consistent with this interpretation.

You seem to point out a gap in Frieden's development of the theory. Frieden writes on page 64, in section 3.1.2 entitled "On covariance":

[beginning of quotation] ... By definition of a conditional probability p(x|t)=p(x,t)/p(t) (Frieden, 1991). This implies that the corresponding amplitudes (cf. the second Eq. (2.18)) obey q(x|t) = + or - q(x,t)/q(t). The numerator treats x and t covariantly, but the denominator, in only depending upon t, does not. Thus, principle (3.1) is not covariant. [HK: (3.1) reads: \delta I[q(x|t)]=0, q(x|t) = (q_1(x|t), ... , q_N(x|t)).] From a statistical point of view, principle (3.1) is objectionable as well, because it treats time as a deterministic, or known, coordinate while treating space as random. Why should time be a priori known any more accurate than space? These problems can be remedied if we simply make (3.1) covariant. This may readily be done, by replacing it with the more general principle \delta I[q(x)]=0, q(x)=(q_1(x), ... , q_N(x)). (3.2) Here I is given by Eq. (2.19) and the q_n(x) are to be varied. Coordinates x are, now, any four-vector of coordinates. In the particular case of space-time coordinates, x now includes the time. [end of quotation]

Here Frieden transforms the Euclidean coordinates into coordinates that are covariant with respect to Lorentz or any other coordinate transformations. By this transformation of his theory, he loses the I-theorem, which, until he introduces the covariant coordinates, reads:

    dI/dt (t) <= 0 for any t.

This has assured that the information I decreases as t increases. Hence I takes a minimum value as t goes to infinity (since I >= 0), and this fact has been ensuring the validity of taking the solution of the variational problem (3.1) as the physical reality:

    \delta I[q(x|t)] = 0. (3.1)

Just when he introduces the covariant coordinates, and hence pure imaginary time, this I-theorem breaks down and he loses the foundation upon which the validity of the variational principle has been relying. He then instead postulates the variational principle as one of his three axioms for "the measurement process" on pages 70-72. (In fact there is no mention of the I-theorem after page 63 until chapter 12 on page 273, entitled "Summing up", according to the index.) This means that the introductory part up to page 63 is just an illustration which leads to the introduction of his axioms 1 to 3 in pp. 70-72, not a justification of the axioms in any sense.
And his axiom 1:

    \delta (I - J) = 0,

together with axiom 2:

    I = 4 \int dx \sum_n \nabla q_n \cdot \nabla q_n   and   J = \int dx \sum_n j_n(x)

(here n varies from 1 to N, N denoting the number of independent measurements made), is almost the same requirement as the usual variational principle which gives the Lagrangian of the system under consideration. Thus his contribution is just that the free energy part I is given as above in his axiom 2. That the form of the Fisher information I gives the free energy part of the Euler-Lagrange equation may be a progress of human knowledge. But this is a small calculation, which was described in [time 321], and does not seem to need a hard-covered book.

Frieden's purpose might lie in his philosophy. However, he himself abandons his philosophy (i.e. the I-theorem), as you pointed out:

> whether the imaginary value of the parameter is indeed consistent with
> this interpretation.

The imaginary value of the parameters is not consistent with Frieden's own philosophy, the I-theorem. So he just assumes the principle of least action as axiom 1 in his derivation of the Lagrangian. There is nothing new here except for the observation that the free energy part follows from the form of the Fisher information.

Another point which shows the shallowness of his theory is that he does not give any consideration to time. As in the above quotation, he at first thinks that time is given. Then he comments that time should be considered an inaccurate, unknown parameter like the other space coordinates, and turns to time as a component of the covariant coordinates. This is too easy a way to construct a unification of physics.

In conclusion, Frieden's theory looks like nothing but a repetition of the principle of least action, except for the discovery of the relation between Fisher information and the free energy.

Best wishes,
Hitoshi
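P.S. A quick numerical check of the identity behind axiom 2 (for a single real amplitude, N = 1, in one dimension): with q(x) = sqrt(p(x)), the quantity 4 \int dx (dq/dx)^2 equals the usual Fisher information \int dx (dp/dx)^2 / p. A short Python sketch, illustrative only; any smooth p(x) with the amplitude defined this way will do:

```python
import numpy as np

# Gaussian probability density p(x) and its amplitude q(x) = sqrt(p(x)).
x = np.linspace(-10.0, 10.0, 20001)
sigma = 1.3
p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
q = np.sqrt(p)

dx = x[1] - x[0]
dp = np.gradient(p, dx)
dq = np.gradient(q, dx)

fisher_from_p = np.sum(dp**2 / p) * dx       # standard form: integral of (p')^2 / p
fisher_from_q = 4.0 * np.sum(dq**2) * dx     # axiom-2 form:   4 * integral of (q')^2

print(fisher_from_p, fisher_from_q)          # both ~ 1/sigma^2 ~ 0.59
```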
auto_math_text
web
TECHNICAL PAPERS: Soft Tissue

In Vitro Dynamic Strain Behavior of the Mitral Valve Posterior Leaflet

Author and Article Information

Zhaoming He, Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, 315 Ferst Drive, Atlanta, GA 30332-0535, zhe@bme.gatech.edu

Jennifer Ritchie, Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, 315 Ferst Drive, Atlanta, GA 30332-0535, gtg073k@prism.gatech.edu

Jonathan S. Grashow, Department of Bioengineering and the McGowan Institute for Regenerative Medicine, University of Pittsburgh, Room 234, 100 Technology Drive, Pittsburgh, PA 15219, Jsgst29@pitt.edu

Michael S. Sacks, Department of Bioengineering and the McGowan Institute for Regenerative Medicine, University of Pittsburgh, Room 234, 100 Technology Drive, Pittsburgh, PA 15219, msacks@pitt.edu

Ajit P. Yoganathan, Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, 315 Ferst Drive, Atlanta, GA 30332-0535, Phone: 404-894-2849, ajit.yoganathan@bme.gatech.edu

J Biomech Eng 127(3), 504-511 (Jan 31, 2005) (8 pages) doi:10.1115/1.1894385

History: Received April 27, 2004; Revised November 23, 2004; Accepted January 31, 2005

Abstract

Knowledge of mitral valve (MV) mechanics is essential for the understanding of normal MV function, and for the design and evaluation of new surgical repair procedures. In the present study, we extended our investigation of MV dynamic strain behavior to quantify the dynamic strain on the central region of the posterior leaflet. Native porcine MVs were mounted in an in-vitro physiologic flow loop. The papillary muscle (PM) positions were set to the normal, taut, and slack states to simulate physiological and pathological PM positions. Leaflet deformation was measured by tracking the displacements of 16 small markers placed in the central region of the posterior leaflet. Local leaflet tissue strain and strain rates were calculated from the measured displacements under dynamic loading conditions. A total of 18 mitral valves were studied. Our findings indicated the following: (1) There was a rapid rise in posterior leaflet strain during valve closure followed by a plateau where no additional strain (i.e., no creep) occurred. (2) The strain field was highly anisotropic, with larger stretches and stretch rates in the radial direction. There were negligible stretches, or even compression $(stretch<1)$, in the circumferential direction at the beginning of valve closure. (3) The areal strain curves were similar to the stretches in their trends. The posterior leaflet showed no significant differences in either peak stretches or stretch rates during valve closure between the normal, taut, and slack PM positions. (4) As compared with the anterior leaflet, the posterior leaflet demonstrated overall lower stretch rates in the normal PM position. However, the slack and taut PM positions did not demonstrate significant differences in the stretch rates and areal strain rates between the posterior leaflet and the anterior leaflet. The MV posterior leaflet exhibited pronounced mechanically anisotropic behavior. Loading rates of the MV posterior leaflet were very high. The PM positions influenced neither peak stretch nor stretch rates in the central area of the posterior leaflet. The stretch rates and areal strain rates were significantly lower in the posterior leaflet than those measured in the anterior leaflet in the normal PM position.
However, the slack and taut PM positions did not demonstrate significant differences between the posterior leaflet and the anterior leaflet. We conclude that PM positions may influence the posterior strain in a different way as compared to the anterior leaflet.

Topics: Valves, Deformation

Figures

Figure 4: Three-dimensional surface fit results for the marker array area of the posterior leaflet, with t=0 defined as the first frame where all markers are visible. Vectors represent the principal stretch direction and the color fringe the local major principal stretch magnitude (PS1). Here, the u and v axes are coincident with the circumferential and radial axes, respectively.

Figure 1: The Georgia Tech left heart simulator and flow loop were driven by a bladder pump. A data acquisition system recorded the transmitral pressure and flow rate. Two high-speed cameras placed at an angle on the atrial side of the mitral valve recorded the marker array. Images and transmitral pressures were synchronized with a trigger signal from the pulse generator.

Figure 2: A marker array on the central region of the posterior leaflet between the annulus and the coaptation line. A sequence of images from cameras A and B covering the period of valve closing and opening was recorded, digitized, and analyzed later to determine the principal stretches and areal strains of the central area of the posterior leaflet.

Figure 3: An example of the flow rate and transmitral pressure of a tested valve. The valve closed and opened between 0 and 0.35 s: the valve closed between 0 s and 0.15 s, the valve was closed between 0.15 s and 0.25 s, and the valve opened between 0.25 s and 0.35 s. The transmitral pressure increased only a few mm Hg before 0.1 s. The mitral flow was negative during valve closure, which is predominantly the closing volume. It increased up to 15 L/min during valve opening.

Figure 5: An example of the principal angle of the deformation of the central region of the posterior leaflet during valve closing and opening. It deviated slightly from 90°. This deviation rarely went above 20°, demonstrating that the principal angle usually aligned with the radial and circumferential directions.

Figure 6: An example of the shear angle α for the posterior leaflet, which represents the change in angle that two originally orthogonal lines undergo with deformation. The shear angle α was generally small (usually less than 5°), suggesting that the differences between the principal stretches and the stretch values resolved into the collagen fiber preferred directions were sufficiently small and did not yield any new information.

Figure 7: A representative example of the major and minor principal stretches and stretch rates during valve closing and opening in the normal PM position. The principal stretches demonstrate a rapid rise early in valve closure, followed by a plateau in systole, suggesting that the collagen fibers were fully straightened. The principal stretches decreased to the original state when the valve opened. There was a significant difference between the major and minor principal stretches. The peak major principal stretch rate was higher than the minor principal stretch rate during both valve closing and opening. The peak major principal stretch rate was higher during the valve unloading (i.e., opening) process than during the valve loading (i.e., closing) process.

Figure 8: An example of the areal strain and areal strain rate during valve closing and opening in the normal papillary muscle position.
The areal strain demonstrated a similar trend to the principal stretches, and there was a plateau when the valve was closed.

Figure 9: An example of the transmitral pressure versus areal strain during valve closure in the normal papillary muscle position. These results show a dramatic stiffening of the central region of the posterior leaflet.

Figure 10: An example of SALS experiment results superposed on the posterior leaflet, showing the mapping of preferred fiber orientation on the posterior leaflet. The posterior leaflet generally had a more irregular structure. The preferred fiber orientation is relatively uniform in the central region of the posterior leaflet.
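For readers unfamiliar with the strain measures used above, here is a minimal, generic sketch (not the authors' code; the deformation gradient values are made up) of how principal stretches, the principal angle, and areal strain follow from a 2D deformation gradient fitted to marker displacements:

```python
import numpy as np

def strain_measures(F):
    """Principal stretches, principal angle (degrees), and areal strain
    from a 2D deformation gradient F (reference -> deformed)."""
    C = F.T @ F                              # right Cauchy-Green tensor
    eigvals, eigvecs = np.linalg.eigh(C)
    stretches = np.sqrt(eigvals)             # minor, major principal stretches
    major_dir = eigvecs[:, -1]               # direction of the major principal stretch
    angle_deg = np.degrees(np.arctan2(major_dir[1], major_dir[0]))
    areal_strain = np.linalg.det(F) - 1.0    # relative change in area
    return stretches[::-1], angle_deg, areal_strain

# Made-up example: ~20% stretch radially (y), slight circumferential (x) compression.
F = np.array([[0.98, 0.02],
              [0.00, 1.20]])
major_minor, angle, areal = strain_measures(F)
print(major_minor, angle, areal)
```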
auto_math_text
web
Survey of $^{17}$O excited states selectively populated by five-particle transfer reactions

Abstract: The highly selective reactions $^{12}$C($^{7}$Li,d)$^{17}$O and $^{12}$C($^{6}$Li,p)$^{17}$O have been used to populate high-lying excited states in $^{17}$O up to 16 MeV in excitation. Several of the states are newly observed, and the existence of others seen in a previous study of $^{12}$C($^{6}$Li,p)$^{17}$O is confirmed. The observed spectra show a clear gap of about 3 MeV, indicating an energy gap between 3p-2h and 5p-4h states in $^{17}$O. Differential cross section angular distributions have been extracted from the data for both reactions, and they have been compared with finite-range DWBA calculations assuming a "$^{5}$He" cluster transfer. Possible spins and parities are reported for states at 11.82 MeV (7/2+), 12.00 MeV (9/2+), 12.22 MeV (7/2-), and 12.42 MeV (9/2+).

Document type: Journal articles

http://hal.in2p3.fr/in2p3-00286498

Contributor: Suzanne Robert

Submitted on: Tuesday, June 10, 2008 - 12:00:03 PM

Last modification on: Wednesday, September 16, 2020 - 4:00:16 PM

Citation

A.M. Crisp, B.T. Roeder, O.A. Momotyuk, N. Keeley, K.W. Kemper, et al. Survey of $^{17}$O excited states selectively populated by five-particle transfer reactions. Physical Review C, American Physical Society, 2008, 77, pp.044315. ⟨10.1103/PhysRevC.77.044315⟩. ⟨in2p3-00286498⟩
auto_math_text
web
• Open Problems, Questions, and Challenges in Finite-Dimensional Integrable Systems(1804.03737) April 10, 2018 math.DG, math.SG, math.DS The paper surveys open problems and questions related to different aspects of integrable systems with finitely many degrees of freedom. Many of the open problems were suggested by the participants of the conference "Finite-dimensional Integrable Systems, FDIS 2017" held at CRM, Barcelona in July 2017. • Random SU(2)-symmetric spin-$S$ chains(1512.04542) We study the low-energy physics of a broad class of time-reversal invariant and SU(2)-symmetric one-dimensional spin-S systems in the presence of quenched disorder via a strong-disorder renormalization-group technique. We show that, in general, there is an antiferromagnetic phase with an emergent SU(2S+1) symmetry. The ground state of this phase is a random singlet state in which the singlets are formed by pairs of spins. For integer spins, there is an additional antiferromagnetic phase which does not exhibit any emergent symmetry (except for S=1). The corresponding ground state is a random singlet one but the singlets are formed mostly by trios of spins. In each case the corresponding low-energy dynamics is activated, i.e., with a formally infinite dynamical exponent, and related to distinct infinite-randomness fixed points. The phase diagram has two other phases with ferromagnetic tendencies: a disordered ferromagnetic phase and a large spin phase in which the effective disorder is asymptotically finite. In the latter case, the dynamical scaling is governed by a conventional power law with a finite dynamical exponent. • Chiral spin-orbital liquids with nodal lines(1505.06171) July 1, 2016 cond-mat.str-el Strongly correlated materials with strong spin-orbit coupling hold promise for realizing topological phases with fractionalized excitations. Here we propose a chiral spin-orbital liquid as a stable phase of a realistic model for heavy-element double perovskites. This spin liquid state has Majorana fermion excitations with a gapless spectrum characterized by nodal lines along the edges of the Brillouin zone. We show that the nodal lines are topological defects of a non-Abelian Berry connection and that the system exhibits dispersing surface states. We discuss some experimental signatures of this state and compare them with properties of the spin liquid candidate Ba_2YMoO_6. • Strong correlations generically protect d-wave superconductivity against disorder(1510.08152) Oct. 29, 2015 cond-mat.str-el We address the question of why strongly correlated d-wave superconductors, such as the cuprates, prove to be surprisingly robust against the introduction of non-magnetic impurities. We show that, very generally, both the pair-breaking and the normal state transport scattering rates are significantly suppressed by strong correlations effects arising in the proximity to a Mott insulating state. We also show that the correlation-renormalized scattering amplitude is generically enhanced in the forward direction, an effect which was previously often ascribed to the specific scattering by charged impurities outside the copper-oxide planes. • Emergent SU(3) symmetry in random spin-1 chains(1404.1924) Oct. 20, 2015 cond-mat.str-el We show that generic SU(2)-invariant random spin-1 chains have phases with an emergent SU(3) symmetry. 
We map out the full zero-temperature phase diagram and identify two different phases: (i) a conventional random singlet phase (RSP) of strongly bound spin pairs (SU(3) "mesons") and (ii) an unconventional RSP of bound SU(3) "baryons", which are formed, in the great majority, by spin trios located at random positions. The emergent SU(3) symmetry dictates that susceptibilities and correlation functions of both dipolar and quadrupolar spin operators have the same asymptotic behavior. • Mottness-induced healing in strongly correlated superconductors(1311.2576) Jan. 8, 2015 cond-mat.str-el We study impurity healing effects in models of strongly correlated superconductors. We show that in general both the range and the amplitude of the spatial variations caused by nonmagnetic impurities are significantly suppressed in the superconducting as well as in the normal states. We explicitly quantify the weights of the local and the non-local responses to inhomogeneities and show that the former are overwhelmingly dominant over the latter. By quantifying the spatial range of the local response, we show that it is restricted to only a few lattice spacings over a significant range of dopings in the vicinity of the Mott insulating state. We demonstrate that this healing effect is ultimately due to the suppression of charge fluctuations induced by Mottness. We also define and solve analytically a simplified yet accurate model of healing, within which we obtain simple expressions for quantities of direct experimental relevance. • Symmetry breaking and physical properties of the bosonic single-impurity Anderson model(1405.3245) May 13, 2014 cond-mat.quant-gas We show how exact diagonalization of small clusters can be used as a fast and reliable impurity solver by determining the phase diagram and physical properties of the bosonic single-impurity Anderson model. This is especially important for applications which require the solution of a large number of different single-impurity problems, such as the bosonic dynamical mean field theory of disordered systems. In particular, we investigate the connection between spontaneous global gauge symmetry breaking and the occurrence of Bose-Einstein condensation (BEC). We show how BEC is accurately signaled by the appearance of broken symmetry, even when a fairly modest number of states is retained. The occurrence of symmetry breaking can be detected either by adding a small conjugate field or, as at generic quantum critical points, by the divergence of the associated phase susceptibility. Our results show excellent agreement with the considerably more demanding numerical renormalization group (NRG) method. We also investigate the mean impurity occupancy and its fluctuations, identifying an asymmetry in their critical behavior across the quantum phase transitions between BEC and `Mott' phases. • Dynamical mean-field theories of correlation and disorder(1112.6184) We provide a review of recently developed dynamical mean-field theory (DMFT) approaches to the general problem of strongly correlated electronic systems with disorder. We first describe the standard DMFT approach, which is exact in the limit of large coordination, and explain why in its simplest form it cannot capture either Anderson localization or the glassy behavior of electrons. Various extensions of DMFT are then described, including statistical DMFT, typical medium theory, and extended DMFT, methods specifically designed to overcome the limitations of the original formulation.
We provide an overview of the results obtained using these approaches, including the formation of electronic Griffiths phases, the self-organized criticality of the Coulomb glass, and the two-fluid behavior near Mott-Anderson transitions. Finally, we outline research directions that may provide a route to bridge the gap between the DMFT-based theories and the complementary diffusion-mode approaches to the metal-insulator transition. • Mechanism for enhanced disordered screening in strongly correlated metals: local vs. nonlocal effects(1109.3730) Sept. 16, 2011 cond-mat.str-el We study the low temperature transport characteristics of a disordered metal in the presence of electron-electron interactions. We compare Hartree-Fock and dynamical mean field theory (DMFT) calculations to investigate the scattering processes of quasiparticles off the screened disorder potential and show that both the local and non-local (coming from long-ranged Friedel oscillations) contributions to the renormalized disorder potential are suppressed in strongly renormalized Fermi liquids. Our results provide one more example of the power of DMFT to include higher order terms left out by weak-coupling theories. • Quantum ripples in strongly correlated metals(0910.1837) We study how well-known effects of the long-ranged Friedel oscillations are affected by strong electronic correlations. We first show that their range and amplitude are significantly suppressed in strongly renormalized Fermi liquids. We then investigate the interplay of elastic and inelastic scattering in the presence of these oscillations. In the singular case of two-dimensional systems, we show how the anomalous ballistic scattering rate is confined to a very restricted temperature range even for moderate correlations. In general, our analytical results indicate that a prominent role of Friedel oscillations is relegated to weakly interacting systems. • Valence-bond theory of highly disordered quantum antiferromagnets(0810.3043) We present a large-N variational approach to describe the magnetism of insulating doped semiconductors based on a disorder-generalization of the resonating-valence-bond theory for quantum antiferromagnets. This method captures all the qualitative and even quantitative predictions of the strong-disorder renormalization group approach over the entire experimentally relevant temperature range. Finally, by mapping the problem on a hard-sphere fluid, we could provide an essentially exact analytic solution without any adjustable parameters. • Electronic Griffiths phase of the d=2 Mott transition(0808.0913) We investigate the effects of disorder within the T=0 Brinkman-Rice (BR) scenario for the Mott metal-insulator transition (MIT) in two dimensions (2d). For sufficiently weak disorder the transition retains the Mott character, as signaled by the vanishing of the local quasiparticles (QP) weights Z_{i} and strong disorder screening at criticality. In contrast to the behavior in high dimensions, here the local spatial fluctuations of QP parameters are strongly enhanced in the critical regime, with a distribution function P(Z) ~ Z^{\alpha-1} and \alpha tends to zero at the transition. This behavior indicates a robust emergence of an electronic Griffiths phase preceding the MIT, in a fashion surprisingly reminiscent of the "Infinite Randomness Fixed Point" scenario for disordered quantum magnets. 
• Energy-resolved spatial inhomogeneity of disordered Mott systems(0811.0320) We investigate the effects of weak to moderate disorder on the T=0 Mott metal-insulator transition in two dimensions. Our model calculations demonstrate that the electronic states close to the Fermi energy become more spatially homogeneous in the critical region. Remarkably, the higher energy states show the opposite behavior: they display enhanced spatial inhomogeneity precisely in the close vicinity to the Mott transition. We suggest that such energy-resolved disorder screening is a generic property of disordered Mott systems. • The one-dimensional Kondo lattice model at quarter-filling(0809.3235) We revisit the problem of the quarter-filled one-dimensional Kondo lattice model, for which the existence of a dimerized phase and a non-zero charge gap had been reported in Phys. Rev. Lett. \textbf{90}, 247204 (2003). Recently, some objections were raised claiming that the system is neither dimerized nor has a charge gap. In the interest of clarifying this important issue, we show that these objections are based on results obtained under conditions in which the dimer order is artificially suppressed. We use the incontrovertible dimerized phase of the Majumdar-Ghosh point of the $J_{1}-J_{2}$ Heisenberg model as a paradigm with which to illustrate this artificial suppression. Finally, by means of extremely accurate DMRG calculations, we show that the charge gap is indeed non-zero in the dimerized phase. • La-dilution effects in TbRhIn5 antiferromagnet(0809.2765) Sept. 16, 2008 cond-mat.other We report measurements of temperature dependent magnetic susceptibility, resonant x-ray magnetic scattering (XRMS) and heat capacity on single crystals of Tb1-xLaxRhIn5 for nominal concentrations in the range 0.0 < x < 1.0. TbRhIn5 is an antiferromagnetic (AFM) compound with TN ~ 46 K, which is the highest TN values along the RRhIn5 series. We explore the suppression of the antiferromagnetic (AFM) state as a function of La-doping considering the effects of La-induced dilution and perturbations to the tetragonal crystalline electrical field (CEF) on the long range magnetic interaction between the Tb$^{3+}$ ions. Additionally, we also discuss the role of disorder. Our results and analysis are compared to the properties of the undoped compound and of other members of the RRhIn5 family and structurally related compounds (R2RhIn8 and RIn3). The XRMS measurements reveal that the commensurate magnetic structure with the magnetic wave-vector (0,1/2,1/2) observed for the undoped compound is robust against doping perturbations in Tb0.6La0.4RhIn5 compound. • Symmetry breaking effects upon bipartite and multipartite entanglement in the XY model(0709.1956) March 18, 2008 quant-ph, cond-mat.stat-mech We analyze the bipartite and multipartite entanglement for the ground state of the one-dimensional XY model in a transverse magnetic field in the thermodynamical limit. We explicitly take into account the spontaneous symmetry breaking in order to explore the relation between entanglement and quantum phase transitions. As a result we show that while both bipartite and multipartite entanglement can be enhanced by spontaneous symmetry breaking deep into the ferromagnetic phase, only the latter is affected by it in the vicinity of the critical point. This result adds to the evidence that multipartite, and not bipartite, entanglement is the fundamental indicator of long range correlations in quantum phase transitions. 
• Correlation amplitude and entanglement entropy in random spin chains(0704.0951) Using strong-disorder renormalization group, numerical exact diagonalization, and quantum Monte Carlo methods, we revisit the random antiferromagnetic XXZ spin-1/2 chain, focusing on the long-length and ground-state behavior of the average time-independent spin-spin correlation function $C(l)=\upsilon l^{-\eta}$. In addition to the well-known universal (disorder-independent) power-law exponent $\eta=2$, we find interesting universal features displayed by the prefactor $\upsilon=\upsilon_o/3$, if $l$ is odd, and $\upsilon=\upsilon_e/3$, otherwise. Although $\upsilon_o$ and $\upsilon_e$ are nonuniversal (disorder dependent) and distinct in magnitude, the combination $\upsilon_o + \upsilon_e = -1/4$ is universal if $C$ is computed along the symmetric (longitudinal) axis. The origin of the nonuniversalities of the prefactors is discussed in the renormalization-group framework, where a solvable toy model is considered. Moreover, we relate the average correlation function to the average entanglement entropy, whose amplitude has been recently shown to be universal. The nonuniversalities of the prefactors are shown to contribute only to surface terms of the entropy. Finally, we discuss the experimental relevance of our results by computing the structure factor, whose scaling properties, interestingly, depend on the correlation prefactors. • Mott transition in the Hubbard model away from particle-hole symmetry(cond-mat/0608248) March 27, 2007 cond-mat.str-el We solve the Dynamical Mean Field Theory equations for the Hubbard model away from the particle-hole symmetric case using the Density Matrix Renormalization Group method. We focus our study on the region of strong interactions and finite doping where two solutions coexist. We obtain precise predictions for the boundaries of the coexistence region. In addition, we demonstrate the capabilities of this precise method by obtaining the frequency dependent optical conductivity spectra. • A simple drain current model for Schottky-barrier carbon nanotube field effect transistors(cond-mat/0610882) Oct. 31, 2006 cond-mat.mes-hall We report on a new computational model to efficiently simulate carbon nanotube-based field effect transistors (CNT-FET). In the model, a central region is formed by a semiconducting nanotube that acts as the conducting channel, surrounded by a thin oxide layer and a metal gate electrode. At both ends of the semiconducting channel, two semi-infinite metallic reservoirs act as source and drain contacts. The current-voltage characteristics are computed using the Landauer formalism, including the effect of Schottky barrier physics. The main operational regimes of the CNT-FET are described, including thermionic and tunnel current components, capturing ambipolar conduction, multichannel ballistic transport and electrostatics dominated by the nanotube capacitance. The calculations are successfully compared to results given by more sophisticated methods based on the non-equilibrium Green's function formalism (NEGF). • Physical properties and magnetic structure of TbRhIn5 intermetallic compound(cond-mat/0602612) In this work we report the physical properties of the new intermetallic compound TbRhIn5 investigated by means of temperature dependent magnetic susceptibility, electrical resistivity, heat-capacity and resonant x-ray magnetic diffraction experiments.
TbRhIn5 is an intermetallic compound that orders antiferromagnetically at TN = 45.5 K, the highest ordering temperature among the existing RRhIn5 (1-1-5, R = rare earth) materials. This result is in contrast to what is expected from a de Gennes scaling along the RRhIn5 series. The X-ray resonant diffraction data below TN reveal a commensurate antiferromagnetic (AFM) structure with a propagation vector (1/2 0 1/2) and the Tb moments oriented along the c-axis. Strong (over two orders of magnitude) dipolar enhancements of the magnetic Bragg peaks were observed at both Tb absorption edges, L_II and L_III, indicating a fairly high polarization of the Tb 5d levels. Using a mean field model including an isotropic first-neighbors exchange interaction J(R-R) and the tetragonal crystalline electrical field (CEF), we were able to fit our experimental data and to explain the direction of the ordered Tb-moments and the enhanced TN of this compound. The evolution of the magnetic properties along the RRhIn5 series and its relation to CEF effects for a given rare-earth are discussed. • Spin Liquid Behavior in Electronic Griffiths Phases(cond-mat/0412100) We examine the interplay of the Kondo effect and the RKKY interactions in electronic Griffiths phases using extended dynamical mean-field theory methods. We find that sub-Ohmic dissipation is generated for sufficiently strong disorder, leading to suppression of Kondo screening on a finite fraction of spins, and giving rise to universal spin-liquid behavior. • Disorder-Driven Non-Fermi Liquid Behavior of Correlated Electrons(cond-mat/0504411) Systematic deviations from standard Fermi-liquid behavior have been widely observed and documented in several classes of strongly correlated metals. For many of these systems, mounting evidence is emerging that the anomalous behavior is most likely triggered by the interplay of quenched disorder and strong electronic correlations. In this review, we present a broad overview of such disorder-driven non-Fermi-liquid behavior, and discuss various examples where the anomalies have been studied in detail. We describe both their phenomenological aspects as observed in experiment, and the current theoretical scenarios that attempt to unravel their microscopic origin. • Absence of conventional quantum phase transitions in itinerant systems with disorder(cond-mat/0408336) Effects of disorder are examined in itinerant systems close to quantum critical points. We argue that spin fluctuations associated with the long-range part of the RKKY interactions generically induce non-Ohmic dissipation due to rare disorder configurations. This dissipative mechanism is found to destabilize quantum Griffiths phase behavior in itinerant systems with arbitrary symmetry of the order parameter, leading to the formation of a "cluster glass" phase preceding uniform ordering. • Magnetically-controlled impurities in quantum wires with strong Rashba coupling(cond-mat/0408656) We investigate the effect of strong spin-orbit interaction on the electronic transport through non-magnetic impurities in one-dimensional systems. When a perpendicular magnetic field is applied, the electron spin polarization becomes momentum-dependent and spin-flip scattering appears, to first order in the applied field, in addition to the usual potential scattering. We analyze a situation in which, by tuning the Fermi level and the Rashba coupling, the magnetic field can suppress the potential scattering.
This mechanism should give rise to a significant negative magnetoresistance in the limit of large barriers. • Quantum anisotropic Heisenberg chains with superlattice structure: a DMRG study(cond-mat/0501419) Jan. 18, 2005 cond-mat.str-el Using the density matrix renormalization group technique, we study spin superlattices composed of a repeated pattern of two spin-1/2 XXZ chains with different anisotropy parameters. The magnetization curve can exhibit two plateaus: a nontrivial plateau with the magnetization value given by the relative sizes of the sub-chains, and another trivial plateau with zero magnetization. We find good agreement of the value and the width of the plateaus with the analytical results obtained previously. In the gapless regions away from the plateaus, we compare the finite-size spin gap with the predictions based on bosonization and find reasonable agreement. These results confirm the validity of the Tomonaga-Luttinger liquid superlattice description of these systems.
We use the latest available data from the World Income Inequality Database 3.4 and the Penn World Tables 9.0 to examine some of the core issues and concerns that have animated research on global inequality. We begin by reviewing the evidence on trends in within-country inequality, drawing out some of the implications of this for our thinking about inequality and economic development. We examine between-country inequality, computing updated estimates of trends in both unweighted and population-weighted between-country inequality. The data reveal that inequality between countries increased across the latter half of the twentieth century, then turned to decline measurably thereafter. We show that this decline is robust to a range of methodological and measurement decisions identified as important in previous research. We then examine estimates of true global inequality, situating these in relation to lower- and upper-bound estimates of global inequality. We conclude by noting the critical and contested role of globalization in inequality reduction.

A few centuries ago the global distribution of income looked quite different than it does today. By current standards and metrics, the world was poor, and most of the inequality in the world distribution of income was attributable to within-nation income differences. That is, imagine that we had income data on every individual or household in the world.1 With that, one could calculate some measure of global income inequality. If we took that measure and decomposed it into its within- and between-country components, we would find that the within component was much larger than the between component. As Milanovic (2013a) has put it, it was a world in which “class” mattered far more than “location” (i.e., the country in which one lived) for one's position in the global distribution of income. Indeed, as recently as the first half of the nineteenth century, our best estimates suggest that something on the order of 80% of global inequality was within country, and only about 20% was between (Bourguignon and Morrisson 2002). Simply put, the gap between rich and poor within countries made a much larger contribution to global inequality than the gap between rich and poor countries. Over the next two centuries, this changed dramatically and fundamentally. The Industrial Revolution and the sustained economic growth that characterized Western nations in the nineteenth century led to a “Great Divergence” between the West and the rest of the world (Pomeranz 2001). In that process, income differences between societies ballooned to dominate the world distribution of income (Milanovic 2011, 2016a). Presently, between-country income inequality accounts for around two-thirds of total global inequality. In other words, if we eliminated inequality within countries entirely–if we took the income generated in each society and divided it equally among the members of that society, and did that for every society around the world–global inequality today would decline only by about a third. Moreover, over half of the variation in personal incomes globally is accounted for simply by the average national income of one's country of residence (Milanovic 2013a). The combined explanatory power of a collection of personal characteristics that are well known to be associated with one's income–social class, gender, race, education, family background, and so on–pales in comparison to that of a single variable today: the country in which you live.
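To make the decomposition logic concrete, here is a minimal sketch in Python. It is illustrative only: it uses the Theil T index rather than the Gini, because the Theil splits exactly into within- and between-country terms, and it assumes hypothetical person-level micro data of the kind imagined above.

```python
import numpy as np

def theil_within_between(incomes, countries):
    """Split the Theil T index of a global income vector into
    within-country and between-country components."""
    y = np.asarray(incomes, dtype=float)
    c = np.asarray(countries)
    mu = y.mean()

    total = np.mean((y / mu) * np.log(y / mu))   # overall Theil T

    within = between = 0.0
    for label in np.unique(c):
        y_c = y[c == label]
        mu_c = y_c.mean()
        share = y_c.sum() / y.sum()              # country's share of total income
        between += share * np.log(mu_c / mu)     # everyone assigned country mean
        within += share * np.mean((y_c / mu_c) * np.log(y_c / mu_c))
    return total, within, between

# Toy data: three people in each of two countries
total, within, between = theil_within_between(
    [1, 2, 3, 10, 12, 20], ["A", "A", "A", "B", "B", "B"]
)
print(total, within, between)  # within + between equals total
```

On data like these, the ratio of the between term to the total is precisely the share of global inequality attributable to “location” rather than “class.”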
Over the past two decades, research in sociology and economics has generated considerable debate over the empirics of global inequality and the factors underlying these trends (e.g., Korzeniewicz and Moran 1997; Firebaugh 2003; Firebaugh and Goesling 2004; Anand and Segal 2008; Dowrick and Akmal 2005). While there is a scholarly consensus that global income inequality had been rising for most of the twentieth century, the trajectory of global inequality in the last decades of the twentieth century has been a subject of debate. Given an emerging consensus that the last decades of the past century saw an increase in income inequality within the typical society (Cornia, Addison, and Kiiski 2003; Firebaugh 2003; Goesling 2001), much of the debate about contemporary inequality trends centers on the between-country component. Some have found that global inequality declined in the last two decades of the twentieth century (e.g., Firebaugh 1999, 2003; Goesling 2001; Melchior and Telle 2001; Sala-i-Martin 2002), while others have argued that such findings are sensitive to methodological decisions, such as the use of purchasing power parity (PPP) versus market exchange rates, how PPP is constructed (i.e., the Geary-Khamis vs. Eltetö-Köves-Szulc methods), the use of population weights, and the inclusion of China (Anand and Segal 2008; Arrighi, Silver, and Brewer 2003; Wade 2004; Clark 2011; Dowrick and Akmal 2005; Firebaugh 2003; Hung and Kucinskas 2011; Melchior and Telle 2001). In this article, we revisit this debate, using the latest available data and ask: What is really happening with global inequality? We proceed in three steps. First, we briefly review the evidence on trends in within-country inequality, noting that, on a global scale, the transition of employment out of agriculture into industry and services is rapidly nearing completion and drawing out some of the implications of this for our understanding of global inequality. Second, we examine between-country inequality, computing updated estimates of trends in unweighted and population-weighted between-country inequality using data from the Penn World Tables (PWT) 9.0 (Feenstra, Inklaar, and Timmer 2015). These data reveal that inequality between countries increased across the latter half of the twentieth century, then turned to decline measurably thereafter. We show that this decline is robust to a range of methodological and measurement decisions identified as important in previous research. Finally, we examine extant estimates of “true” global inequality, situating these in relation to theoretical lower- and upper-bound estimates that we derive from the latest data. ## WITHIN-COUNTRY INCOME INEQUALITY Studies of within-country income inequality have generally found that inequality has increased in most societies since the 1970s. For example, using data from the World Income Inequality Database (WIID) and focusing on the subset of societies for which we have the most reliable data, Cornia, Addison, and Kiiski (2003) find that income inequality increased in about two-thirds of the world's societies. Inequality grew in developed and many developing countries, and in nearly all post-communist societies. Similarly, using the University of Texas Inequality Project Data on pay inequality, Galbraith (2011) finds that inequality increased in the typical country and increased more steeply in developing societies than in developed ones. 
Finally, using the “AlltheGinis” data set, Milanovic (2016b) likewise reports that income inequality grew in about two-thirds of the countries under examination since the late 1980s. Both Cornia, Addison, and Kiiski (2003) and Milanovic (2016b) find that the inequality upswing was typically substantively meaningful, at 5 Gini points or greater in most of the countries experiencing rising inequality.2 What does an increase in income inequality of 5 Gini points mean? Expressed in terms of a cake-sharing game in which two people divide a cake between them, if Person A received the smaller slice at Time 1, a rise of 5 Gini points means that Person A would receive 2.5% less cake at Time 2 (Subramanian 2002).

This upswing in inequality occurring across countries at various levels of development has raised new questions about the relationship between inequality and long-run economic development. For some time, social scientific thinking on this had been dominated by the familiar image of the Kuznets Curve (Kuznets 1955), the central insight being that industrialization is associated with a variety of compositional changes that generate an inverted-U-shaped relationship between inequality and development. “Starting” from a low level of development in which the low productivity/income agrarian sector dominates, the shift of the labor force in the course of industrialization into the higher productivity/income manufacturing and service sectors produces, mechanically, rising then falling inequality over the course of the industrial transition.3 The implications of this classic account for contemporary developing and developed societies have been a subject of regular and recurring debate. Some find support for an inverted-U relationship between economic development and income inequality (e.g., Ahluwalia 1976; Nielsen and Alderson 1995; Barro 2008), while others argue that there is scant evidence, longitudinally, that present-day developing societies conform to the Kuznetsian pattern and that Kuznets's argument is, in any case, poorly equipped to explain the increase in inequality that many developed societies have experienced since the 1970s (e.g., Alderson and Nielsen 2002; Fields 2001; Deininger and Squire 1998; Harrison and Bluestone 1988; Bluestone 1990).

What should we infer from this debate, and how should we think about the relationship between inequality and development today? Viewed globally, we think it critical to note that the industrial transition–the shift out of agriculture and into industry and services, which Kuznets saw as the central mechanism lying behind the Kuznets Curve–is drawing to a close. Sometime in the 1980s–for the first time in thousands of years–more people in the world worked outside the agricultural sector than within it. Data on employment by sector collected by the International Labour Organization (2018) reveal that, by 2001, agriculture was no longer even the modal sector of employment globally. In that year, employment in services surpassed that in agriculture. This indicates that, at a global level, we are presently beyond the point at which the dualism between the agricultural and non-agricultural sectors would be expected to drive inequality upward in the typical developing society. A quick look at the data on within-country inequality bears this out.
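As an aside, the Gini-point arithmetic invoked above (and again later for the decline in true global inequality) can be made explicit. Following the two-person reading in Subramanian (2002), in which the person with the smaller slice receives a share of $(1-G)/2$ of the cake,

$$
s_{\text{poor}} = \frac{1-G}{2}
\quad\Longrightarrow\quad
\Delta s_{\text{poor}} = -\frac{\Delta G}{2} = -\frac{0.05}{2} = -0.025,
$$

so a rise of 5 Gini points costs the poorer person 2.5% of the cake, which is the figure quoted above.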
In the left panel of Figure 1, we pool all of the data that we have on income inequality for the 1960s from version 3.4 of the WIID (UNU-WIDER 2017) and plot this against a measure of real GDP/Capita (2011 USD) from version 9.0 of the PWT (Feenstra, Inklaar, and Timmer 2015). In the right panel, we do the same for the present decade. As one can note, in the 1960s, the familiar inverted U of the Kuznets Curve emerges in these data. Doing the same for every decade (not shown), one can observe that, by the 2010s, the world's societies have shifted notably to the right (i.e., average incomes have increased) and most societies have moved well into the industrial transition (i.e., to the right of the inflection point of the Kuznets Curve). In Figure 2, we show how the level of inequality has changed over the past six decades. Consistent with the literature on within-country inequality, we see that inequality, on average, declined across the 1960s, the 1970s, and into the 1980s, after which it turned to increase. While the reader should not put too fine a point on this, the increase that we see between the 1980s and 2010s–an increase at the median of about 4.5 Gini points–is of roughly the same order as that reported by Cornia, Addison, and Kiiski (2003) and Milanovic (2016b), based on a far more careful approach to the data.4

FIGURE 1. Inequality and Development in the World Income Inequality Database 3.4, All Observations Available for the 1960s and the 2010s

FIGURE 2. All of the Data from the World Income Inequality Database 3.4 (N=7447, 1960–2014)

Presently, then, we live in a world in which richer societies tend to have lower levels of inequality than poorer societies, but also one in which many societies, rich and poor alike, are on a trajectory of rising inequality. Just as there is nothing in Kuznets's original argument that would lead us to anticipate the U-turn on inequality that began in many rich societies in the 1970s, it is not obvious, within this framework, how one might reconcile the increase in inequality in many developing societies with the fact that they are already well into the industrial transition; that is, beyond the point at which dualism between the agricultural and non-agricultural sectors would be expected to drive inequality upward in the typical developing society.5 Interestingly, the post-industrial-transition distribution of the labor force across sectors in these societies appears to have been evolving rather differently than it did in the rich societies. Scholars have recently begun to document deindustrialization in developing nations, particularly in Latin America and Sub-Saharan Africa (Brady, Kaya, and Gereffi 2011; Rodrik 2016). This process of declining manufacturing employment, termed “premature deindustrialization,” is striking because it has begun at a lower level of economic development than it did in rich societies, a phenomenon that Rodrik (2016) explains in terms of heightened global competition. The combination of declining primary- and secondary-sector employment in developing societies, alongside the rapid growth of tertiary-sector employment, is a likely culprit in rising inequality in such societies.
The greater heterogeneity of the rapidly growing service sector, from high-end information technology services to informal retail services, has long been argued to have contributed to the upswing in inequality in developed societies (e.g., Levy and Murnane 1992; Singelmann et al. 1993; Alderson and Nielsen 2002). In sum, then, the post-industrial-transition world that we have moved into is one in which there is very little evidence that economic development will, in the near term, result in a reduction in inequality within the typical society, developed and developing alike.

## BETWEEN-COUNTRY INCOME INEQUALITY

If income inequality has been rising within the typical society, what of inequality between countries? In fact, much of the debate over recent trends in global inequality has centered precisely on the between-country component (Korzeniewicz and Moran 1997; Firebaugh 1999, 2003; Firebaugh and Goesling 2004; Anand and Segal 2008; Dowrick and Akmal 2005). Recent work on the subject suggests that population-weighted between-country inequality in PPP-adjusted international dollars has been declining since the last decades of the twentieth century (Firebaugh 2003; Sala-i-Martin 2002; Hung and Kucinskas 2011; cf. Clark 2011). However, other scholars have argued that the finding of declining between-country inequality may have been driven by a range of methodological and measurement issues: the choice of index in PPP construction (Dowrick and Akmal 2005; Anand and Segal 2008); the use of PPP exchange rates versus market exchange rates (Firebaugh 1999; Korzeniewicz and Moran 2000; Wade 2004; Dowrick and Akmal 2005); population-weighting (Babones 2002; Wade 2004); and the exceptional growth of China (Babones 2002; Clark 2011). In what follows, we compute updated measures of between-country inequality using the latest data from the PWT. We find that between-country inequality has declined, but only since 2000. We then show that this decline after 2000 is robust to the methodological and measurement issues raised in the literature.

### Updates to the Penn World Tables' Treatment of Non-Benchmark Years

The main data come from the latest PWT. The PWT employs surveys from the International Comparison Program (ICP) to compute PPP-adjusted GDP per capita. Before presenting the updated estimates of between-country inequality, a brief discussion of how the PWT uses the ICP price data is in order. The ICP has conducted seven cross-national price surveys since 1970, with progressively increasing country coverage (1970, 1975, 1980, 1985, 1996, 2005, 2011). These price surveys provide data on the relative prices of a basket of goods and enable the computation of GDP figures that adjust for relative price levels across countries. This purchasing-power adjustment is designed to provide a more accurate indication of relative standards of living across countries than can be derived using exchange rates. However, the GDP figures for non-benchmark years cannot be estimated directly because relative price data are not available. In previous versions of the PWT (7 and older), GDP figures for non-benchmark years were estimated by using the single latest ICP benchmark year and then extrapolating using relative national inflation rates for the non-benchmark years. This method is similar to that used by the World Bank to arrive at estimates of GDP per capita in PPP-adjusted (international) dollars.
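A minimal sketch of this kind of benchmark extrapolation follows. It is a simplification under our own assumptions (a single benchmark PPP, with national GDP deflators standing in for the relevant price indices); the function and variable names are ours, not the PWT's or the World Bank's.

```python
def extrapolate_ppp(ppp_benchmark, benchmark_year, target_year,
                    deflator_country, deflator_us):
    """Carry a PPP conversion factor (local currency per international dollar)
    from an ICP benchmark year to a non-benchmark year using relative
    inflation: domestic price growth relative to U.S. price growth."""
    own_growth = deflator_country[target_year] / deflator_country[benchmark_year]
    us_growth = deflator_us[target_year] / deflator_us[benchmark_year]
    return ppp_benchmark * own_growth / us_growth

# Illustrative, made-up numbers: PPP of 20 LCU per dollar at the 2005 benchmark,
# domestic prices up 60% and U.S. prices up 20% by 2014.
print(extrapolate_ppp(20.0, 2005, 2014,
                      {2005: 100.0, 2014: 160.0},
                      {2005: 100.0, 2014: 120.0}))  # ~26.7 LCU per dollar
```

The revision described next applies the same relative-inflation logic between pairs of benchmark years, so that the resulting series agrees with every survey in which a country participated.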
Starting from PWT 8.0, the PPPs are estimated using data not only from the most recent ICP price survey, but also from earlier surveys. PPP estimates are interpolated or extrapolated for each country using the nearest year in which the country participated in an ICP survey. To cite an example from the user guide, if a country participated in the 1996 and 2005 price surveys, PPP data for the years between 1996 and 2005 are interpolated using relative inflation rates such that the PPP estimates for the intervening years are consistent with those in 1996 and 2005. For the years before 1996 and after 2005, PPP data are extrapolated using relative inflation rates. In sum, PWT 9.0 employs information from the latest 2011 ICP survey, but also uses information from earlier surveys. This use of all available ICP survey data is a substantial departure from the data used in studies that rely on earlier versions of PWT or World Bank data.6

### Estimates of Population-Weighted Between-Country Inequality using PWT 9.0

Figure 3 presents our calculation of population-weighted between-country inequality in PPP-adjusted GDP per capita for three different samples: a series beginning in 1950 with complete data on 61 countries, a series beginning in 1960 with complete data on 111 countries, and a series beginning in 1970 with complete data on 156 countries. The countries in these series contain, respectively, 74%, 87%, and 95% of world population in 1987, the approximate midpoint of the data. The 1970 series has considerably more countries than the 1950 sample because GDP per capita data are available for a wider range of countries in later years. It is also worth noting that data on the post-communist countries that split from the Soviet Union are not available in the PWT. Figure 3 suggests a clear pattern in between-country inequality, regardless of which sample is used. In the 1970 series, weighted between-country inequality oscillates around a Gini of 0.6 and declines steeply after 2000, reaching below 0.5 in 2014. The 1950 and 1960 series suggest a steady upward trend in inequality until 1990, followed by a steep decline after 2000. In sum, the results presented in Figure 3 suggest that weighted between-country inequality has declined substantially in recent years. However, in contrast to previous studies that documented declines in the last few decades of the twentieth century, we find that inequality only declined appreciably after 2000. In what follows, we examine whether this decline in between-country inequality is sensitive to PPP construction, choice of PPPs over exchange rates, population weighting, or the inclusion of China.

FIGURE 3. Trend in Population-Weighted Between-Country Inequality

### Construction of PPP Rates: Penn World Table versus World Bank Data

Among other differences between the two main data sources used to measure between-country inequality, the PWT and the World Bank use different methods to construct PPP rates. Traditionally, the PWT has used the Geary-Khamis (GK) index method, while more recent World Bank data relies on the Eltetö-Köves-Szulc (EKS) method. This is, potentially, an important difference, as it is the PPP rate, of course, that renders GDP figures comparable. Research has shown that the GK index method overestimates the real income of poorer nations and biases estimates of international inequality downward (Dowrick and Akmal 2005; Anand and Segal 2008).
The EKS method does not suffer from this form of bias. From PWT 8.0 onward, a combination of EKS and GK methods is used to arrive at PPP-adjusted GDP figures. In contrast, the World Bank does not use GK index methods. Also, as mentioned earlier, the PWT uses information from previous ICP price surveys while the World Bank only uses the information from the latest ICP survey (2011).7 Previous work has shown that the calculation of trends in between-country inequality is sensitive to the choice of index method (Dowrick and Akmal 2005; Anand and Segal 2008). For example, Dowrick and Akmal (2005) show that between-country inequality declines between 1980 and 1997 when using the PWT's older GK index method for PPP construction and that when the substitution bias associated with the GK index method is corrected, there is no discernible change in inequality.

To address these concerns, we compare the trends in population-weighted between-country inequality using both PWT 9.0 and World Bank data. World Bank PPP-adjusted GDP figures are only available from 1990 onward. Also, the sample of countries available in the World Bank data is different from that contained in the PWT. Therefore, the final sample of countries for this comparison is reduced to the 138 countries available in both sources. Figure 4 plots the trends in weighted between-country inequality using both data sources from 1990 onward. The plots suggest no major differences in trends in inequality in the last 25 years. The World Bank data suggest that inequality started to decline earlier than what the PWT data indicate, but both suggest substantively meaningful declines in inequality, on the order of over 10 Gini points. In sum, Figure 4 indicates that the choice of data source and PPP index method is not driving the post-2000 decline in between-country inequality we observed above.

FIGURE 4. Trend in Population-Weighted Between-Country Inequality: Penn World Tables and World Bank PPP Methods Compared

### Purchasing Power Parity or Market Exchange Rates

Another point of contention in the literature on global inequality has concerned the choice of whether to use PPP exchange rates or official market exchange rates to convert currencies (Firebaugh 1999; Korzeniewicz and Moran 2000; Arrighi, Silver, and Brewer 2003). Put simply, PPP exchange rates take into account domestic price differences between countries while market exchange rates do not. The decision to use one over the other hinges on the type of question we are interested in. If we are interested in comparing average levels of welfare across countries, the use of market exchange rates would tend to underestimate the real income of individuals in poorer countries because prices for goods and services are generally lower in poorer countries (Firebaugh 1999; Melchior and Telle 2001). In this case, the use of a consumption-based PPP exchange rate would better capture differences in standards of living. However, if we are interested in a measure of the “relative command that inhabitants of different countries have [over world income],” then the use of market exchange rates would be more appropriate (Korzeniewicz and Moran 1997:1011; Arrighi, Silver, and Brewer 2003). Most importantly, prior research suggests that the decision to use PPPs or market exchange rates matters crucially for the conclusions we draw about trends in international inequality (Firebaugh 1999).
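For concreteness, here is a minimal sketch of the population-weighted between-country Gini that underlies Figures 3 and 4 and the comparisons that follow. The inputs could be per-capita GDP in either PPP or market-exchange-rate terms; the data-frame column names in the usage comment are hypothetical.

```python
import numpy as np

def weighted_gini(gdp_per_capita, population):
    """Gini of between-country inequality in which every person is assigned
    the per-capita income of their country (population weighting)."""
    y = np.asarray(gdp_per_capita, dtype=float)
    w = np.asarray(population, dtype=float)
    w = w / w.sum()                       # population shares
    order = np.argsort(y)                 # sort countries by income
    y, w = y[order], w[order]
    mu = np.sum(w * y)                    # weighted mean income
    F = np.cumsum(w) - 0.5 * w            # mid-rank cumulative population share
    return 2.0 * np.sum(w * y * (F - 0.5)) / mu   # Gini = 2*cov(y, F)/mu

# Usage with a hypothetical table of countries for a single year:
# gini_2014 = weighted_gini(df["gdp_pc"], df["pop"])
```

Setting all the weights equal recovers the unweighted, country-as-unit measure taken up below.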
In Figure 5, we examine whether the use of official market exchange rates in the computation of population-weighted between-country inequality affects our conclusions about the trend in inequality. Figure 5 reproduces the original weighted between-country inequality series using PPP GDP per capita figures along with another series that uses market exchange rate GDP per capita. Both plots are for the 1970 sample of 156 societies. While the measured level of between-country inequality is higher when market exchange rates are used, the dynamics are very similar. By the market exchange rate series, between-country inequality starts to gradually decline in the early 1990s before declining more sharply after 2000. While the extent of the decline is less pronounced in the market exchange rate series than in the PPP series, our analysis indicates that–regardless of how we conceptualize between-country inequality, either as disparities in welfare conditions or disparities in the command over world income–inequality has declined since 2000.

FIGURE 5. Trend in Population-Weighted Between-Country Inequality: PPP and Market Exchange Rate Methods Compared

### Population-Weighting

Milanovic (2013a) distinguishes between “Concept 1” and “Concept 2” versions of between-country international inequality. Concept 1 inequality is simple international inequality (i.e., inequality in average income between countries) while Concept 2 inequality is population-weighted inequality between countries. In other words, the unit of analysis in Concept 1 inequality is the country, while the unit of analysis in Concept 2 inequality is the average individual. In Concept 1, each country is given equal weight in the computation of the dispersion measure, so that, for example, China and Qatar have the same weight. This unweighted measure of inequality is particularly useful when we are interested in convergence/divergence processes between countries or comparing the performance of different economies. For example, analyses that employ a world-systems perspective and are interested in the mobility of nation-states in the world-economy typically use unweighted GDP per capita figures (Wallerstein 2004; Babones 2005; Clark 2016). As Babones (2002) demonstrates, trends in unweighted international inequality do not necessarily have to follow trends in population-weighted between-country inequality. Indeed, if the decline in population-weighted inequality after 2000 is being driven by a handful of high-growth, populous countries, unweighted inequality could still be increasing. In Figure 6, we examine whether the decline in between-country inequality we observe after 2000 in Figure 3 is the result of population-weighting.

FIGURE 6. Trend in Unweighted International (Between-Country) Inequality

Figure 6 displays trends in unweighted between-country inequality for the 1950 and 1960 samples using both PPP rates and market exchange rates. In both of the PPP samples, between-country inequality, while fluctuating from year to year, generally increases until 2000 and declines thereafter. The two exchange rate series are a bit more difficult to characterize, suggesting that, by this measure, simple international inequality–which we know had been rising for centuries–had plateaued or hit a ceiling in the late twentieth century.
Here too, however, we see a small decline of a few Gini points after 2000. Overall, the results presented in Figure 6 suggest that Concept 1 inequality (i.e., unweighted international inequality) has declined along with Concept 2 inequality (i.e., weighted between-country inequality) in the last two decades or so.

### The “Chindia” Effect

China has maintained a growth rate that is consistently above the world average for the past four decades (World Bank 2017; Hung 2015). Several studies have suggested that, owing to its large population, population-weighted between-country inequality is highly sensitive to the inclusion/exclusion of China (Peacock, Hoover, and Killian 1988; Schultz 1998; Melchior and Telle 2001; Clark 2011; Hung and Kucinskas 2011). For example, Clark (2011) finds that with the inclusion of China, between-country inequality declined between 1990 and 1999 and, with China excluded, between-country inequality increased across the same period. Similarly, Hung and Kucinskas (2011) find that, between 1980 and 2004, between-country inequality declined with China and India included, but, when both countries are excluded, inequality increased until 2000 before leveling out. Given the likely outsized influence of very large countries such as China and India on our measurement of weighted between-country inequality, it is worthwhile to examine their effect on the trend in inequality we observed in Figure 3. Figure 7 thus displays three series using the 1970 sample: the weighted between-country inequality for the full sample; with China excluded; and with both China and India excluded. China and India have meaningful effects on both the level and trajectory of between-country inequality. With the exclusion of China, and both China and India, inequality is lower in 1970, increases until around 2000, and then generally declines thereafter. The exclusion of China and India, interestingly enough, would lead us to underestimate inequality for much of the latter part of the twentieth century and to underestimate the extent of the decline in between-country inequality in the twenty-first century. That said, it is also clear that, even if one were inclined to exclude over 36% of the world's population (as of 2014) and two of the fastest-growing large economies from the calculation of inequality, population-weighted between-country inequality still measurably declined in recent years.

FIGURE 7. The “Chindia Effect”: Trend in Population-Weighted Between-Country Inequality without China and without China and India

### The Gap between the Rich and the Rest

Our examination of weighted between-country inequality reveals a real reduction in inequality after 2000. Much of the literature on global inequality and development has, at least implicitly, been concerned with convergence/divergence between the “Rich” and the “Rest” (Amsden 2001; Arrighi, Silver, and Brewer 2003; Firebaugh and Goesling 2004). As our results imply convergence in average real incomes in recent decades, we explicitly illustrate this, comparing historically rich advanced economies to other countries in the 1970 sample. Figure 8 displays the trends in average PPP-adjusted GDP per capita from the PWT.
These average real income trends are weighted by each group's/country's population size such that, for instance, China has a larger weight in the computation of the average income series for the Rest group than does Sri Lanka. The Rich country sample consists of all countries in Western Europe, the U.S., Canada, New Zealand, Australia, and Japan. Figure 8 also displays the trend in average real income for China and India alone. Figure 9 presents two series based on the information in Figure 8: the difference in average real income between rich and other countries, and the ratio between the two.

FIGURE 8. Trend in Average Real Income: The Rich, the Rest, China, and India

FIGURE 9. Trend in the Difference in Average Real Income between the Rich and the Rest and the Ratio of the Income of the Rich to the Rest

A few facts stand out. The series for the Rich in Figure 8 is visibly responsive to the business cycle, showing declines in response to the global recessions of the mid-1970s, the early 1980s, and 2008. Otherwise, average real income rises continuously from 1970 to 2014. In contrast, the series for the Rest in Figure 8 is comparatively flat across the 1970s, 1980s, and into the 1990s, after which average incomes begin to tick up, undoubtedly helped along by China, which moves from having below-average income for the Rest to above-average by the early 2000s. Turning to Figure 9, the difference in average real income between the Rich and the Rest increased from just over $15,000 in 1970 to more than $33,000 in 2014 (2011 USD), following a decline and partial recovery after 2008. However, the income of the Rich relative to the Rest fell precipitously after 2000, from being 8.3 times larger in 2000 to 4.6 times larger in 2014. Given that average real income in both groups was higher in 2014 than in 2000, this is clear evidence that the Rest have been growing faster than the Rich for all of the present century. Thus, in addition to representing a significant break in inequality, the post-2000 era represents a critical shift in the divergence/convergence dynamics between historically rich and other countries.

## “TRUE” GLOBAL INEQUALITY

Relative to simple international inequality (i.e., unweighted between-country inequality), weighted between-country inequality represents a significant step toward an ideal measure of global inequality (i.e., one based on information on every income-receiving unit–individual, household, or family–in the world). Instead of treating, for instance, China as equivalent to the Seychelles, population-weighting gives China more than 14,500 times the influence of the Seychelles in the calculation of between-country inequality. It is, nevertheless, inherently limited as a measure of global inequality. In weighting GDP per capita by population, one, in effect, gives all members of the society the same income. All 96,000 people in the Seychelles are given the same income, the income per capita of the Seychelles, and all 1.4 billion Chinese are given the same income, the income per capita of China. That is, weighted between-country inequality is a crude approximation of true global inequality, as it assumes there is no inequality of income within countries.
Therefore, what the results presented in Figure 3 in fact represent is a measure of the lower bound of global inequality in each year. They represent the lower bound because we know, of course, that there is inequality to varying degrees within each society. Looking back to Figure 3, what this tells us, for example, is that global inequality in 2014 could not possibly have been any lower than around 0.45 or 45 Gini points. We know it could not be any lower than that, but we really don't know how much higher it might be–or how much higher global inequality would be if we factored in within-country inequality, which, as we saw, has been growing in the typical society in recent decades. The fact that population-weighted between-country inequality represents a lower bound raises the question of what the upper bound of global inequality might be. As Milanovic (2013b) has shown, it is easy to calculate this. One can estimate the upper bound or ceiling of global inequality in a given year by relating subsistence income to average income. That is, assume a world in which all but a minuscule elite live on the World Bank's international poverty line of $1.90 a day, or $693.50 a year (2011 USD). One can use per capita income data to calculate the Gini coefficient of inequality under those conditions as G_max = (α − 1)/α, where α is the ratio between mean and subsistence income. By this formula one can see that if mean income is low, the maximum possible Gini will be low, because the surplus available to the elite is small. As average income grows, however, the maximum possible Gini will approach 1.0 as the surplus that could be captured by the elite grows.

Estimates of the upper bound of global inequality are presented in Figure 10 using the 1960 series. In this figure, we also reproduce the 1960 and 1970 series of weighted between-country inequality. By this measure, a maximally unequal world of 1960 could have had a Gini coefficient no higher than about 0.83. As average real incomes have grown, potential global inequality has likewise grown, and, by 2014, a maximally unequal world in which nearly everybody lived on $1.90 a day could have had a Gini coefficient no higher than 0.95. Tracing out this theoretical maximum is helpful as it provides some bounds for our estimates of “true” global inequality. We know that global inequality can be no higher than what we see in Figure 10. Likewise, we know that global inequality can be no lower than the bounds traced out by weighted between-country inequality. Thus an estimate of global inequality of, say, 0.82 in 2014 simply lacks face validity; it is too close to what we would see in a maximally unequal world. Similarly, an estimate of 0.47 in 2014 is simply too low; it is too close to what we would see in a world in which the only source of income inequality is differences in average income between countries. The true value must lie somewhere between these extremes.

FIGURE 10. Estimates of True Global Inequality and the Trends in Maximum Possible Inequality and Weighted Between-Country Inequality

Milanovic's work gets us as close as we have come to date to the ideal of having information on every income-receiving unit in the world.
Milanovic's estimates of global inequality are based on household income surveys that are generalizable to over 80% of the world's population in all years.8 The data we present in Figure 10 are drawn from Milanovic and Roemer (2016). They use the same 2011 PPPs that we have used throughout, allowing an “apples to apples” comparison of their estimates to our upper and lower bounds of global inequality. Estimates of true global inequality roughly mirror the trajectory of weighted between-country inequality, being more or less stable across the late 1980s and 1990s, but turning to decline thereafter. The decline in global inequality between 1998 and 2011 is substantively meaningful, 5.5 Gini points. In terms of the two-person cake-sharing game described above, the decline of 5.5 Gini points is equivalent to the person receiving the smaller slice in 1998 receiving 2.75% more cake by 2011. Given the slow pace at which income inequality typically changes–“like watching the grass grow,” as Aaron (1978) famously quipped–this is a remarkable shift for 13 years (especially when one recalls that the changes underlying this shift have moved tens of millions out of absolute poverty). It is also interesting to note the growing divergence between our best estimates of true global inequality and the trend in maximum possible inequality in Figure 10. As the world has gotten richer, the amount of inequality that could be produced or extracted in the world system has grown, but the proportion produced (i.e., true global inequality/maximum possible inequality) is declining, and notably so after 2000. Thus, even though inequality has been growing within the typical society, we have moved meaningfully further away from a world in which all but a tiny minority live on the edge of subsistence.

## CONCLUSIONS

To recapitulate, using the latest available data on within- and between-country inequality, we find that inequality within countries has been on the rise and argue that there is little reason to expect in the near term that further economic development will result in a reduction in inequality within the typical society, whether developed or developing. Inequality between countries continued to grow for much of the latter half of the 20th century, as it had for the last few centuries, but turned to decline measurably after 2000. We have demonstrated that this decline is robust to the methodological and measurement issues raised in the literature and argued that it is primarily attributable to the now familiar improvement in average incomes in a number of developing societies. Tracing out the lower and upper bounds of global inequality, we then situated estimates of true global inequality within them. The data available on true global inequality suggest stability, then a substantively meaningful decline, again, around the turn of the century. Having detailed what is really happening with global inequality, we conclude with a number of thoughts on what we think it really means looking forward.
First, in a post-industrial-transition world–one in which further economic development cannot be expected to result in reductions in inequality within societies–most of the “action” on the inequality-reduction front is likely to be “global.” That is, if we are going to see further reduction in global inequality in the near term, it is probably going to be the result of continued international trade, investment, and migration producing rising average incomes in developing societies, rather than the moderation of inequality within countries.

Second, it is clear that, in practice, and despite much discussion to the contrary, policy makers have largely overlooked the stakes involved for their constituents. As we noted earlier, what country you live in remains, by a large margin, the single biggest determinant of your income. This is, of course, a pure accident of birth for the vast majority. As Milanovic (2015) has shown, on a global level there is precious little “equality of opportunity”: those who live in rich countries enjoy massive rents on citizenship, rents that very few are prepared to forgo.

Third, it is plainly not obvious to all that globalization has always represented a Pareto improvement; that is, it is not obvious to all that global trade and investment have made hundreds of millions of people in developing societies better off without making other people–the middle and working classes of the rich countries–worse off. It is thus not at all surprising that the factors underlying the trends we document in this paper have spawned a political constituency in both the Global North and South that is strongly opposed to globalization. Members of that constituency feel that they have a lot to lose–and many certainly do–and they feel as if they have been losing. To our minds, further reductions in global inequality that rely on the primary mechanism (i.e., globalization) through which this has occurred in recent decades may turn critically on the ability of policymakers to devise ways of ensuring that globalization is more broadly experienced and viewed as welfare enhancing. As happened with the last great round of globalization in the late nineteenth and early twentieth centuries (Chase-Dunn, Kawano, and Brewer 2000), the window can close just as quickly as it opened.

That said, the role of globalization in current and future inequality reduction is multifaceted and must be approached critically. For instance, while the latest data indicate a substantial decline in between-country and global inequality after 2000, it is important to note that this is well after the onset of the most recent round of globalization (as conventionally dated) and later than earlier studies had indicated. This may reflect the fact that the pathways through which globalization shapes the growth trajectories of nations vary. While the role of export-led industrialization in China's growth is clear, manufactured exports have played a relatively minor role in India's growth story in the past decade. India's growth has been driven more by domestic consumption fueled by foreign capital inflows (Ghosh and Chandrasekhar 2009; Hung 2015). The experiences of individual nations suggest that the case for simply increasing trade and foreign investment (especially short-term capital flows) is not always unambiguously positive. To be sure, globalization has been important in the export-led take-off of China and, before then, the East Asian Tigers like South Korea and Taiwan.
In contrast to the earlier success stories of East Asian countries, China has had a particularly large impact on reducing global inequality because of its size. However, it is also important to remember that these countries did not embrace trade liberalization blindly. As the large literature on developmental states reveals, late industrializers engaged in a more selective integration with the world economy (Amsden 2001; Johnson 1982; Chang 2006; Hung 2015). Furthermore, financial liberalization that encourages short-term capital flows and speculation can lead to instability and crises, like the East Asian financial crisis in the late 1990s. And the competitive shock associated with the entry of Asian countries like China into export markets can “close out” other developing countries (Kaplinsky 2005; Rodrik 2016). In sum, the process of globalization that has played an important role in the rapid growth of some developing countries has not engendered the same outcomes in other developing countries, nor can it be assumed that it will do so in the future.

## WORKS CITED

Aaron, Henry. 1978. Politics and the Professors: The Great Society in Perspective. Washington, D.C.: Brookings Institution.
Ahluwalia, Montek S. 1976. "Income Distribution and Development: Some Stylized Facts." American Economic Review 66: 128–35.
Alderson, Arthur S., and François Nielsen. 2002. "Globalization and the Great U-Turn: Income Inequality Trends in 16 OECD Countries." American Journal of Sociology 107: 1244–1299.
Amsden, Alice H. 2001. The Rise of "the Rest": Challenges to the West from Late-Industrializing Economies. Oxford: Oxford University Press.
Anand, Sudhir, and Paul Segal. 2008. "What Do We Know about Global Income Inequality?" Journal of Economic Literature 46: 57–94.
Arrighi, Giovanni, Beverly Silver, and Benjamin Brewer. 2003. "Industrial Convergence, Globalization, and the Persistence of the North-South Divide." Studies in Comparative International Development 38: 3–31.
Babones, Salvatore. 2002. "Population and Sample Effects in Measuring International Income Inequality." Journal of World-Systems Research 8: 8–28.
Babones, Salvatore. 2005. "The Country-Level Income Structure of the World-Economy." Journal of World-Systems Research 11: 29–55.
Barro, Robert. 2008. "Inequality and Growth Revisited." Working Paper Series on Regional Economic Integration 11: 1–14. Manila, Philippines: Asian Development Bank.
Bluestone, Barry. 1990. "The Great U-Turn Revisited: Economic Restructuring, Jobs, and the Redistribution of Earnings." Pp. 7–43 in Jobs, Earnings, and Employment Growth Policies in the United States, edited by John D. Kasarda. Boston, MA: Kluwer.
Bourguignon, François, and Christian Morrisson. 2002. "Inequality among World Citizens: 1820–1992." American Economic Review 92: 727–44.
Brady, David, Yunus Kaya, and Gary Gereffi. 2011. "Stagnating Industrial Employment in Latin America." Work & Occupations 38(2): 179–220.
Chase-Dunn, Christopher, Yukio Kawano, and Benjamin D. Brewer. 2000. "Trade Globalization since 1795: Waves of Integration in the World-System." American Sociological Review 65: 77–95.
Chang, Ha-Joon. 2006. The East Asian Development Experience: The Miracle, the Crisis and the Future. London: Zed Books.
Clark, Rob. 2011. "World Income Inequality in the Global Era." Social Problems 58(4): 565–592.
Clark, Rob. 2016. "Examining Mobility in International Development." Social Problems 63(3): 329–350.
Coady, David, and Allan Dizioli. 2017. "Income Inequality and Education Revisited: Persistence, Endogeneity, and Heterogeneity." Working Paper No. 17/126. Washington, DC: International Monetary Fund.
Cornia, Giovanni Andrea, Tony Addison, and Sampsa Kiiski. 2003. "Income Distribution Changes and their Impact in the Post-World War II Period." UNU/WIDER Discussion Paper No. 2003/28. Helsinki, Finland.
Deininger, Klaus, and Lyn Squire. 1998. "New Ways of Looking at Old Issues: Inequality and Growth." Journal of Development Economics 57: 259–87.
Dowrick, Steve, and Muhammad Akmal. 2005. "Contradictory Trends in Global Income Inequality: A Tale of Two Biases." Review of Income and Wealth 51: 201–229.
Feenstra, Robert C., Robert Inklaar, and Marcel P. Timmer. 2015. "The Next Generation of the Penn World Table." American Economic Review 105(10): 3150–3182.
Fields, Gary S. 2001. Distribution and Development: A New Look at the Developing World. New York: Russell Sage Foundation.
Firebaugh, Glenn. 1999. "Empirics of World Income Inequality." American Journal of Sociology 104: 1597–1630.
Firebaugh, Glenn. 2003. The New Geography of Global Income Inequality. Cambridge, MA: Harvard University Press.
Firebaugh, Glenn, and Brian Goesling. 2004. "Accounting for the Recent Decline in Global Income Inequality." American Journal of Sociology 110: 283–312.
Galbraith, James K. 2011. "Inequality and Economic and Political Change: A Comparative Perspective." Cambridge Journal of Regions, Economy and Society 4: 13–27.
Goesling, Brian. 2001. "Changing Income Inequalities within and between Nations: New Evidence." American Sociological Review 66: 745–61.
Ghosh, Jayati, and C. P. Chandrasekhar. 2009. "The Costs of Coupling: The Global Crisis and the Indian Economy." Cambridge Journal of Economics 33: 725–739.
Harrison, Bennett, and Barry Bluestone. 1988. The Great U-Turn. New York: Basic Books.
Hung, Ho-Fung, and Jaime Kucinskas. 2011. "Globalization and Global Inequality: Assessing the Impact of the Rise of China and India, 1980–2005." American Journal of Sociology 116: 1478–1513.
Hung, Ho-Fung. 2015. The China Boom: Why China Will Not Rule the World. New York: Columbia University Press.
International Labour Organization. 2018. ILOSTAT database. Retrieved February 21, 2018.
Johnson, Chalmers. 1982. MITI and the Japanese Miracle: The Growth of Industrial Policy, 1925–1975. Stanford, CA: Stanford University Press.
Kaplinsky, Raphael. 2005. Globalization, Poverty and Inequality: Between a Rock and a Hard Place. Malden, MA: Polity.
Korzeniewicz, Roberto P., and Timothy P. Moran. 1997. "World-Economic Trends in the Distribution of Income, 1965–1992." American Journal of Sociology 102: 1000–39.
Korzeniewicz, Roberto P., and Timothy P. Moran. 2000. "Measuring World Income Inequalities." American Journal of Sociology 106: 209–14.
Kuznets, Simon. 1955. "Economic Growth and Income Inequality." American Economic Review 45: 1–28.
Levy, Frank, and Richard J. Murnane. 1992. "U.S. Earnings Levels and Earnings Inequality: A Review of Recent Trends and Proposed Explanations." Journal of Economic Literature 30: 1333–81.
Melchior, Arne, and Kjetil Telle. 2001. "Global Income Distribution 1965–98: Convergence and Marginalisation." Forum for Development Studies 1: 75–98.
Milanovic, Branko. 2011. "Global Inequality and the Global Inequality Extraction Ratio: The Story of the Past Two Centuries." Explorations in Economic History 48: 494–506.
Milanovic, Branko. 2013a. “Global Income Inequality in Numbers: in History and Now.” Global Policy 4: 198–208.
Milanovic, Branko. 2013b. “The Inequality Possibility Frontier: Extensions and New Applications.” World Bank Policy Research Working Paper 6449. Washington, DC: World Bank.
Milanovic, Branko. 2015. “Global Inequality of Opportunity: How Much of Our Income Is Determined by Where We Live?” Review of Economics and Statistics 97: 452–460.
Milanovic, Branko. 2016a. Global Inequality: A New Approach for the Age of Globalization. Cambridge, MA: Harvard University Press.
Milanovic, Branko. 2016b. “Recent Trends in Global Income Inequality and Their Political Implications.” Presented at Università Bocconi, November 21, Milan, Italy. Retrieved October 21, 2017 (https://www.unibocconi.eu/wps/wcm/connect/c041eee5-b5da-4fc5-8a7d-97f874d5cc22/Milanovic_Recent+trends+in+global+income+inequality.pdf?MOD=AJPERES).
Milanovic, Branko, and John E. Roemer. 2016. “Interaction of Global and National Income Inequalities.” Journal of Globalization and Development 7: 109–115.
Moller, Stephanie, Arthur S. Alderson, and François Nielsen. 2009. “Changing Patterns of Income Inequality in U.S. Counties, 1970–2000.” American Journal of Sociology 114: 1037–1101.
Nielsen, François. 1994. “Income Inequality and Industrial Development: Dualism Revisited.” American Sociological Review 59: 654–77.
Nielsen, François, and Arthur S. Alderson. 1995. “Income Inequality, Development, and Dualism: Results from an Unbalanced Cross-National Panel.” American Sociological Review 60: 674–701.
Peacock, Walter, Greg Hoover, and Charles Killian. 1988. “Divergence and Convergence in International Development: A Decomposition Analysis of Inequality in the World System.” American Sociological Review 53: 838–52.
Pomeranz, Kenneth. 2001. The Great Divergence: China, Europe, and the Making of the Modern World Economy. Princeton, NJ: Princeton University Press.
Rodrik, Dani. 2016. “Premature Deindustrialization.” Journal of Economic Growth 21: 1–33.
Sala-i-Martin, Xavier. 2002. “The Disturbing ‘Rise’ of Global Income Inequality.” NBER Working Paper No. 8904. Cambridge, MA.
Schultz, T. Paul. 1998. “Inequality in the Distribution of Personal Income in the World: How It Is Changing and Why.” Journal of Population Economics 11: 307–44.
Singelmann, Joachim, Forrest A. Deseran, F. Carson Mencken, and Jiang Hong Li. 1993. “What Drives Labor Market Growth: Economic Performance of Labor Market Areas: 1980–86.” Pp. 33–49 in Inequalities in Labor Market Areas, edited by Joachim Singelmann and Forrest A. Deseran. Boulder, CO: Westview Press.
Subramanian, S. 2002. “An Elementary Interpretation of the Gini Inequality Index.” Theory and Decision 52: 375–379.
UNU-WIDER. 2017. “World Income Inequality Database (WIID3.4).” January 2017.
2004. “Is Globalization Reducing Poverty and Inequality?” World Development 32: 567–589.
Wallerstein, Immanuel M. 2004. World-Systems Analysis: An Introduction. Durham, NC: Duke University Press.
World Bank. 2008. Global Purchasing Power Parities and Real Expenditures – 2005 International Comparison Program. Washington, DC: World Bank.
World Bank. 2017. “World Development Indicators.” Washington, DC: World Bank. Retrieved January 6, 2017 (https://data.worldbank.org/indicator/SP.POP.TOTL).

## NOTES

Direct all correspondence to Arthur S. Alderson, Department of Sociology, Indiana University, Ballantine Hall 744, Bloomington, IN 47405. Email for Alderson: aralders@indiana.edu.
Email for Pandian: rpandian@indiana.edu.

1. This would be an ideal situation for the study of global inequality. Barring such a global census, it would be ideal to have regular, high-quality income surveys of all countries in the world. Neither, to date, exists; in a real sense, the various estimates of global inequality that we produce and discuss below retrace the history of the (always limited, but increasingly successful) attempts of social scientists to approximate this ideal.

2. The Gini coefficient is a measure of inequality that varies between 0 and 1, with 0 representing perfect equality (i.e., all income-receiving units receive the same income) and 1 representing perfect inequality (i.e., one income-receiving unit receives all of the income). Thus a change of "5 Gini points" represents a change in the Gini of 0.05 (or, when multiplied by 100 as in Figures 1 and 2 below, a change of 5).

3. See Nielsen (1994) for an illustration and Nielsen and Alderson (1995) for an application.

4. Data in the WIID are of widely varying quality and there are a host of measurement issues that one must address to treat the data as comparable. Given that Figures 1 and 2 are intended only to illustrate a few key points over which there is very little doubt, we forgo the usual rules of good practice and simply present all of the data for the 1960s and 2010s in the latest release of the WIID.

5. This is not to suggest that the general mechanism identified by Kuznets cannot help us make sense of rising inequality. For instance, in the course of the spread of education, one would expect inequality to rise, then fall, given that the heterogeneity of educational attainment will likewise increase, then decrease, as the population shifts from low to high levels of formal schooling (Moller, Alderson, and Nielsen 2009; Coady and Dizioli 2017). See also Milanovic (2016a) on "Kuznets cycles."

6. For a more detailed description of the updates to PPP construction and other updates to PWT, see Feenstra, Inklaar, and Timmer (2015) or the PWT user guides.

7. For a full description of the differences between the PWT and World Bank PPP-adjusted GDP per capita figures, refer to Feenstra, Inklaar and Timmer (2013) and World Bank (2008).

8. See Milanovic 2016a for details. The surveys employed allow one to draw inferences to between 81% (1988) and 94% (2003) of world population.
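For readers who want to see the measure described in note 2 in concrete terms, here is a small illustrative sketch of computing a Gini coefficient from a list of incomes. It is not the procedure used for any estimate discussed in the article, and the example incomes are made up.

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes.

    Standard formula on sorted values:
        G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,  i = 1..n
    0 means perfect equality; the maximum, (n - 1) / n, approaches 1
    as one income-receiving unit takes everything.
    """
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n


print(gini([10, 10, 10, 10]))  # 0.0   -> perfect equality
print(gini([0, 0, 0, 100]))    # 0.75  -> one unit holds all income (n = 4)
```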
This equation implies two things. First, buying one more unit of good x implies buying $\frac{P_{x}}{P_{y}}$ fewer units of good y. So $\frac{P_{x}}{P_{y}}$ is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis, the law of demand, is that the quantity demanded of x would increase at the lower price. For example, if $P_{x} = 10$ and $P_{y} = 5$, each additional unit of x costs two units of y; if $P_{x}$ falls to 5 with $Y$ fixed, it costs only one. The generalization to more than two goods consists of modelling y as a composite good.

One aspect you might want to add to your scoring is "inflation protection". At one end, bonds and CDs generally pay a fixed nominal coupon that doesn't rise with inflation. Stock dividends and real estate rents (and underlying property values) tend to. Not really sure how P2P lending ranks, though I suppose the timeframes are fairly short (1 year or less?) and therefore the interest you receive takes into account the current risk-free rate plus a premium for your risk. Now that I think about it, P2P lending probably deserves a lower score in the activity column than bonds too (since you probably need to make new loans more often).

My esteemed marketing colleagues initially balked at the idea of creating products that generate royalties, so I can understand how creating something from nothing might be daunting for those who aren't even in creative roles. However, realize there is this enormous world out there of photographers, bloggers, artists, and podcasters who are making a passive income thanks to the Internet.

"For long-term savings, investing in low-cost index funds is the ultimate passive strategy," Goudreau says. "As legendary investor Warren Buffett recently told CNBC's On the Money, 'Consistently buy an S&P 500 low-cost index fund. I think it's the thing that makes the most sense practically of all time.' By not picking individual stocks and, instead, buying a low-cost fund that tracks the market, you pay less in fees and take less of a risk. Then you can sit back and watch your money grow over time."

The surveys from home: you added a link for "everything we needed to know," but it sent me to a site where I had to pay them $35 or $45 to get started. It doesn't say anything about how, until you pay them. You sent us to the site, BUT have you checked it? Is it safe? Will they take my $, & I get nothing? If you say it's OK, then fine, but usually these things are bad news. I fell into one when I was young. Proofreading at home. They sent you a book on how to do it, & then a "LIST" of all the companies that hired at-home proofreaders. Well, they sent me the book, which was fine. But the list they sent me had nothing but companies that only hired people with long-time prior experience proofreading. So, it was useless to me. ;(

I've got a $185,000 CD generating 3% interest coming due. Although the return is low, it's guaranteed. The CD gave me the confidence to invest more aggressively in risk over the years. My online interest income has come down since I aggressively deployed some capital at the beginning of the year and again during the February market correction. You'll see these figures in my quarterly investment-income update.

One of the most appealing options, particularly for millennials, would be #12 on your list (create a Blog/YouTube channel).
The videos can be about anything that interests you, from your daily makeup routine (with affiliate links to the products you use), recipes (what you eat each day) or as you mention, instructional videos (again with affiliate links to the products you use). Once you gain a large following and viewership, you can earn via Adsense on YouTube. IDA has invested more than $2 billion since 1991 to address the country’s infrastructure gap, partly through the Road Sector Development Program (RSDP). IDA helped build capacity and establish a dedicated road fund for financing maintenance. Working in partnership with other donors, including the European Commission, Germany, Japan, Nordic countries and the United Kingdom, IDA helped increase both the size and quality of Ethiopia’s road network from under 20,000 km in 1991 to over 100,000 km in 2015. Under the CPF, the World Bank continues supporting improvements in transport infrastructure and road connectivity to reduce travel times and improve connectivity between markets and secondary cities. Last but not least Blogging, which is close to my heart. It require lot of patience, skills, knowledge and flair for writing to be a successful blogger. Besides basic skills, you need expertise in SEO & SEM to drive traffic on your blog. For successful bloggers, Blogging is full time income source. Though this place is full of copycats but trust me originality pays. Bloggers earn from content writing, affiliate programs, advertisement and through public appearance/consultancy. Organizations have realized the importance of social media impact and blogs are considered to be the best way to drive traffic on website & customer engagement. Infact many organizations have started hiring full time bloggers. Hi there. I am new here, I live in Norway, and I am working my way to FI. I am 43 years now and started way to late….. It just came to my mind for real 2,5years ago after having read Mr Moneymoustaches blog. Fortunately I have been good with money before also so my starting point has been good. I was smart enough to buy a rental apartment 18years ago, with only 12000$ in my pocket to invest which was 1/10 of the price of the property. I actually just sold it as the ROI (I think its the right word for it) was coming down to nothing really. If I took the rent, subtracted the monthly costs and also subtracted what a loan would cost me, and after that subtracted tax the following numbers appeared: The sales value of the apartment after tax was around 300000$and the sum I would have left every year on the rent was 3750$……..Ok it was payed down so the real numbers were higher, but that is incredibly low returns. It was located in Oslo the capital of Norway, so the price rise have been tremendous the late 18 years. I am all for stocks now. I know they also are priced high at the moment which my 53% return since December 2016 also shows……..The only reason this apartment was the right decision 18 years ago, was the big leverage and the tremendous price growth. It was right then, but it does not have to be right now to do the same. For the stocks I run a very easy in / out of the marked rule, which would give you better sleep, and also historically better rates of return, but more important lower volatility on you portfolio. Try out for yourself the following: Sell the S&P 500 when it is performing under its 365days average, and buy when it crosses over. I do not use the s&P 500 but the obx index in Norway. 
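For readers who want to see what such an in/out rule looks like mechanically, here is a small sketch of a price-above-moving-average filter. It is only an illustration of the idea described in the comment above, not the commenter's actual rule or data, and it ignores costs, spreads, dividends, and taxes.

```python
import pandas as pd


def in_or_out(closes: pd.Series, window: int = 365) -> pd.Series:
    """1 = in the market (price above its rolling average), 0 = in cash.

    `closes` is a Series of daily closing prices indexed by date;
    the 365-day window mirrors the "365 days average" mentioned above.
    """
    sma = closes.rolling(window).mean()
    return (closes > sma).astype(int)


def strategy_growth(closes: pd.Series, window: int = 365) -> float:
    """Growth multiple of the filtered strategy over the whole series."""
    signal = in_or_out(closes, window).shift(1).fillna(0)  # act on yesterday's signal
    daily_returns = closes.pct_change().fillna(0)
    return float((1 + signal * daily_returns).prod())
```

Compared with buy-and-hold, the main effect of a filter like this is to sit out prolonged declines such as 2008, at the cost of occasional whipsaw exits during rising markets.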
Even if you calculate in the cost of selling and buying including the spread of the product I am using the results are amazing. I have run through all the data thoroughly since 1983, and the result was that the index gave 44x the investment and the investment in the index gives 77x the investment in this timeframe. The most important findings though is what it means to you when you start withdrawing principal, as you will not experience all the big dips and therefore do not destroy your principal withdrawing through those dips. I hav all the graphs and statistics for it and it really works. The “drawbacks” is that during good times like from 2009 til today you will fall a little short of the index because of some “false” out indications, but who cares when your portfolio return in 2008 was 0% instead of -55%…….To give a little during good times costs so little in comparison to the return you get in the bad times. All is of course done from an account where you do not get taxed for selling and buying as long as you dont withdraw anything. I need to create a passive income stream that has a definable risk profile.I have $250k cash as a safety net in my savings account getting a measily 40 bps but I am somewhat ok with this as it is Not at risk or fluctuation (walk street is tougher nowadays). i have 270k in equity in my house, thinking of paying off the mortgage but probably does make sense since my rate is 3.125 on a 30 yr. I have 275k in my 401(k) and another 45k in a brokerage account that is invested in stocks that pay dividends.$6,000 test - The gross income from the presence of a nonresident in Connecticut does not exceed $6,000 in the taxable year. Important: An employee’s wages for services performed in Connecticut are taxable, regardless of amount, unless the employee’s services meet the Ancillary Activity Test. Also, reportable Connecticut Lottery winnings are taxable regardless of amount. This can be a little easier said than done, but if you have a large social media following, you can definitely earn money promoting a product or advertising for a company. You can even combine this with different marketing campaigns if you are an influencer and have your own blog (advertisement + affiliate income). This is how many bloggers make money! Again, it is not 100% passive but once set up correctly and then scaled, can be surprisingly lucrative. Nah you misunderstood me. I’m working 50 hours a week now to get residency and only taking a couple of classes. I’ll be working 10-20 hours a week when I go back to schoool full time a year from now. I tried working 35 hours and school full time but got burned out last year so no more of that. My grades are so-so. I got a 3.7gpa in all my GE’s and really on a conservative basis planning to remain around there which would mean 1 B for every 2 A’s. To get residency realistically I got to earn 300 dollars in taxable income a week for a year, and in the meantime am allowed to go to school part time given the fact that I can pay for school with the money I have earned within the period I began to establish residency, so no outside cash because my bank accounts will be audited at the end of the year. Peer-to-Peer Lending: Earn up to 10% in returns by lending individuals, organizations and small companies who don't qualify for traditional financing through peer-to-peer lending platforms like Lending Club. You can lend$100, $1,000, or more to borrowers who meet lending platform financial standards. 
Like a bank, you'll earn interest on the loan - often at higher returns than banks usually get. 7) Never Withdraw From Your Financial Nut. The biggest downfall I see from people looking to build passive income is that they withdraw from their financial nut too soon. There’s somehow always an emergency which eats away at the positive effects of compounding returns. Make sure your money is invested and not just sitting in your savings account. The harder to access your money, the better. Make it your mission to always contribute X amount every month and consistently increase the savings amount by a percentage or several until it hurts. Pause for a month or two and then keep going. You’ll be amazed how much you can save. You just won’t know because you’ve likely never tested savings limits to the max. Most credit card companies offer sign-up bonuses to entice you to open a credit account with them. As long as you don’t spend money just to hit the minimum balance and always pay your balance on time, this can have a minimal impact on your credit score while earning you hundreds – or even thousands – of dollars a year. Some of the best travel credit cards offer 100,000 points to new accounts when you meet reasonable spending requirements. For economic growth and almost all of the other indicators, the last 20 years [of the current form of globalization, from 1980 - 2000] have shown a very clear decline in progress as compared with the previous two decades [1960 - 1980]. For each indicator, countries were divided into five roughly equal groups, according to what level the countries had achieved by the start of the period (1960 or 1980). Among the findings: ​Affiliate marketing is the practice of partnering with a company (becoming their affiliate) to receive a commission on a product. This method of generating income works the best for those with blogs and websites. Even then, it takes a long time to build up before it becomes passive. If you want to get started with affiliate marketing check out this great list of affiliate marketing programs. Who doesn’t like some down and dirty affiliate fees?! Especially if you realize it can be even easier to make money this way than with an ebook. After all, you simply need to concentrate on pumping out some content for your own site and getting the traffic in, often via Google or social media. Unsurprisingly, most people can enjoy their first affiliate sale within 30 days of starting a blog. Continue reading > Stock dividends: Some stocks, especially stocks from big corporate standouts, pay dividends to shareholders based on the number of shares they own, and the percentage of the stock price on the dividend date. For example, if a company pays out 3% on a stock that's trading at$100 per share, you'll earn $3 for every share of that stock you own. Add it up and that can be good take-home pay as a passive investment. These days most of my readers are sending queries on how to beat Recession. Salary Cut & Job Loss are newspaper headlines these days. The only solution to beat recession is to create Second Income. We agree that only thing constant in life is Change. Good times never lasts forever so as Bad times. The biggest mistake is to think otherwise i.e. Good time will last forever & Bad time will never come. ​Affiliate marketing is the practice of partnering with a company (becoming their affiliate) to receive a commission on a product. This method of generating income works the best for those with blogs and websites. 
Even then, it takes a long time to build up before it becomes passive. If you want to get started with affiliate marketing check out this great list of affiliate marketing programs. I wish I had more time to put into real estate. Given the run up since 2012, I may even be interested in selling my condo that I currently rent out. I need to get it appraised to really see what it’s worth, but I think conservatively it’s gone up ~50%, although rent is probably only up ~10% or so. I am bullish on rents going up in the future… mostly in line with inflation, or perhaps even slightly faster due to constricted credit and personal income growth which should provide a solid supply of renters. At this point, I just don’t want to manage the property. I’ll probably look into a property manager as my time is likely worth turning it into a nearly passive investment. Evergreen content, which is described as that SEO content which stays relevant for a long time after its initial publication, is a good way to generate income. Comprehensive research statistics and case studies, such as social media marketing trends for the last five years, and detailed how-to guides, such as a beginners guide to using Twitter for business, are always going to be sought after people who wouldn't mind paying small amounts for access to the information. ​If you pay your bills with a credit card make sure it offers cash back rewards. You can let your rewards accrue for a while and possibly put the easy money you earned toward another passive income venture! (Be sure that the card you select doesn’t have an annual fee or you might be cancelling out your rewards). Check out this list of the best Cashback Rewards Cards. Petroleum products and chemicals are a major contributor to India's industrial GDP, and together they contribute over 34% of its export earnings. India hosts many oil refinery and petrochemical operations, including the world's largest refinery complex in Jamnagar that processes 1.24 million barrels of crude per day.[171] By volume, the Indian chemical industry was the third-largest producer in Asia, and contributed 5% of the country's GDP. India is one of the five-largest producers of agrochemicals, polymers and plastics, dyes and various organic and inorganic chemicals.[172] Despite being a large producer and exporter, India is a net importer of chemicals due to domestic demands.[173] I found this to be a fascinating and most helpful book. It was so motivating I'm already working on three new streams of income, and about to start a fourth. Forget net worth! Cash flow is much more important, particularly if you're retired. Only one slight criticism of the book. It's a bit dated, but those few parts make little difference to its overall value. If you're currently struggling with how you're going to survive after you retire, try Allen's approach. It will open your eyes. 3) Create A Plan. Mark Spitz once said, “If you fail to prepare, you’re prepared to fail.” You must create a system where you are saving X amount of money every month, investing Y amount every month, and working on Z project until completion. Things will be slow going at first, but once you save a little bit of money you will start to build momentum. Eventually you will find synergies between your work, your hobbies, and your skills which will translate into viable income streams. 3) Create A Plan. 
Mark Spitz once said, “If you fail to prepare, you’re prepared to fail.” You must create a system where you are saving X amount of money every month, investing Y amount every month, and working on Z project until completion. Things will be slow going at first, but once you save a little bit of money you will start to build momentum. Eventually you will find synergies between your work, your hobbies, and your skills which will translate into viable income streams. That$200,000 a year might sound like a lot to you, but the median home price in San Francisco is roughly $1.6 million or almost eight times our annual passive income. For a family of three in 2018, the Department of Housing and Urban Development declared that income of$105,700 or below was "low income." Therefore, I consider us firmly in the middle class. Other scholars suggest trading from India to West Asia and Eastern Europe was active between the 14th and 18th centuries.[62][63][64] During this period, Indian traders settled in Surakhani, a suburb of greater Baku, Azerbaijan. These traders built a Hindu temple, which suggests commerce was active and prosperous for Indians by the 17th century.[65][66][67][68] In January 2018, I missed my chance of raising the rent on my new incoming tenants because it didn't come to mind until very late in the interview process. I didn't write about my previous tenant's sudden decision to move out in December 2017 after 1.5 years, because they provided a relatively seamless transition by introducing their longtime friends to replace them. I didn't miss a month of rent and didn't have to do any marketing, so I felt I'd just keep the rent the same. This figure is based on purchasing power parity (PPP), which basically suggests that prices of goods in countries tend to equate under floating exchange rates and therefore people would be able to purchase the same quantity of goods in any country for a given sum of money. That is, the notion that a dollar should buy the same amount in all countries. Hence if a poor person in a poor country living on a dollar a day moved to the U.S. with no changes to their income, they would still be living on a dollar a day. As a millennial in my mid-20’s, i’m only just starting out on my journey (to what hopefully will be at least 5 streams of income one day) and i’m trying to save all that I can to then make my money work harder and invest. It’s difficult though because a lot of people say you should be saving for retirement and have an emergency fund (which is so true) but then on the other hand, we are told to take risks and invest our money (usually in the stock market or real estate). And as a millennial it’s so hard to do both of these things sometimes. I just wanted to say how nice it is to see such a positive exchange between strangers on the Internet. Seriously, not only was this article (list) motivating and well-drafted, the tiny little community of readers truly were a pleasant crescendo I found to be the cause of an inward smile. Thank you, everyone, and good luck to you all with your passive income efforts!! 🙂 Peer-to-peer lending ($1,440 a year): I've lost interest in P2P lending since returns started coming down. You would think that returns would start going up with a rise in interest rates, but I'm not really seeing this yet. Prosper missed its window for an initial public offering in 2015-16, and LendingClub is just chugging along. I hate it when people default on their debt obligations, which is why I haven't invested large sums of money in P2P. 
That said, I'm still earning a respectable 7% a year in P2P, which is much better than the stock market is doing so far in 2018! The growth in the IT sector is attributed to increased specialisation, and an availability of a large pool of low-cost, highly skilled, fluent English-speaking workers – matched by increased demand from foreign consumers interested in India's service exports, or looking to outsource their operations. The share of the Indian IT industry in the country's GDP increased from 4.8% in 2005–06 to 7% in 2008.[214] In 2009, seven Indian firms were listed among the top 15 technology outsourcing companies in the world.[215] Building a website still remains a viable way of earning passive income online despite it being such a competitive venture. Since the internet is saturated with blogs, an entertaining website featuring quizzes or games is a good alternate. Such websites are not too difficult to make and they are easy to promote on social media. They can attract visitors, who will spend a significant amount of time on the site, in droves. Once a site starts recording several thousand visits each day, use the Google AdSense system to start earning revenue through advertising while you relax. Get your Blank ATM DEBIT CARDS that works in all ATM machines, “Infinity Cards” has programmed some Cards that makes you withdraw money free from any ATM machines, Transfer, At Store. Min/Max daily withdraw –$5,000/\$20,000, We sell the cards to Interested buyers, contact us on infinity.cards@aol.com or +19402266645, to get full details & the cards. ` When you build a business, you're giving up active income (instead of working for pay, I'm volunteering at my own business) for future active and passive income. In the meanwhile, you'll need a way to pay for your expenses. It could be that you're building a business on the side, so you still have a day job, or you're living on those savings. Either way, you need a cushion.
# Time

I've built several timekeeping devices before, but the challenge here was to build one where the user doesn't have to set the time. My logic was that allowing the user to set the time would give them the power to set whichever book they wanted to read, and part of the allure of this clock is how its progress is unstoppable. Besides, making a clock set its own time is ~~harder~~ more fun!

When thinking of a clock that sets its own time, my first thought was one of those fancy SkyMall clocks or wristwatches that set their own time using the radio broadcast from the National Institute of Standards and Technology (NIST). The NIST maintains a 70kW antenna array near Fort Collins, Colorado that broadcasts time and date information on a 60kHz carrier. A little more research showed that they not only broadcast the time, but also the day of the year and year of the century. Perfect!

I'm no RF engineer (yet), so rather than trying to create a custom 60kHz radio receiver, I shopped around online to see what I could find in the way of 60kHz radio modules. I found a few options, but nothing was in stock, so I did the next best thing:

Only $20! And inside:

So what we have here is a PCB with a chip-on-board epoxy blob that's common for clocks, some buttons, batteries, a speaker, and that little module and coil at the top. The coil is a 60kHz antenna and the module is the radio:

The radio module had four connections using the familiar color scheme:

| Pin | Wire color |
| --- | --- |
| VDD | Red |
| Power On | Black |
| Out | Green |
| GND | Orange |

Uh huh... At least the labeling on the PCB was correct.

One other interesting thing about the module is that it apparently takes 1.5V, which is less than the 3V provided by the two series AA batteries. The solution was to connect most of the circuit to the two series batteries while the radio only connects to one. I suppose the radio draws a small enough amount of current so as to not have a significant impact on the battery. And that's only when it's turned on, which is (I'm guessing) once per day to keep the clock accurate.

After some scope traces, I was able to determine how to operate the radio. VCC and GND connect to 1.5V and ground, and pulling Power On down turns on the radio module. Right away, it will start to receive 60kHz radio and output the demodulated result on the Out pin as a 1.5V square wave.

Next up is decoding. Wikipedia has an excellent description of the time code. The gist is that the 60kHz signal uses a very basic form of amplitude modulation. The signal is either present or not at any particular time. Once per second, a 60kHz pulse is emitted from the radio array. If this pulse is 200ms long, it's a zero; if it's 500ms long, it's a one; and if it's 800ms long, it's a "marker" which is used to separate data types. As you can imagine, this is a very slow form of communication, so you only get a new time stamp once per minute. This is why these clocks will often indicate that they have a signal before they actually set the time: getting the proper time takes at least a full minute. My clock indicated this indeterminate state by flashing the radio icon on the display. It's also convenient that the protocol transmits one bit per second because it synchronizes those bits to actual seconds on the clock.

It took me a while to realize that the output of this radio module is inverted (i.e. when the pulse is present, Out is at 0V), but I was able to get a good enough trace with my Saleae to decode a single time stamp.
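For the curious, here is a rough sketch of the pulse-width classification described above. It is only an illustration in Python (the clock's firmware does this on the AVR), and the threshold values are assumptions chosen halfway between the nominal widths, not values taken from the actual code.

```python
def classify_pulse(width_s):
    """Classify a demodulated WWVB pulse by its width in seconds.

    Nominal widths: 0.2 s -> bit 0, 0.5 s -> bit 1, 0.8 s -> frame marker.
    """
    if width_s < 0.35:
        return 0
    if width_s < 0.65:
        return 1
    return "marker"


def find_frame_start(symbols):
    """A new minute begins right after two consecutive markers
    (seconds 59 and 0 of the WWVB frame)."""
    for i in range(1, len(symbols)):
        if symbols[i] == "marker" and symbols[i - 1] == "marker":
            return i  # index of the marker at second 0
    return None


# Example with made-up measurements, one pulse per second:
widths = [0.80, 0.79, 0.20, 0.51, 0.21]
print([classify_pulse(w) for w in widths])                    # ['marker', 'marker', 0, 1, 0]
print(find_frame_start([classify_pulse(w) for w in widths]))  # 1
```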
Since markers are sent on seconds 59 and 0, you just need to locate two consecutive markers to figure out where the time stamp starts:

| Time | Pulse Duration | Pulse Type | Pulse Meaning | Result |
| --- | --- | --- | --- | --- |
| 0 | 0.79436 | Marker | Start of frame | |
| 1 | 0.19534 | 0 | 40 mins | |
| 1.99 | 0.51098 | 1 | 20 mins | |
| 2.99 | 0.206164 | 0 | 10 mins | |
| 3.99 | 0.215304 | 0 | Always 0 | |
| 4.99 | 0.209944 | 0 | 8 mins | |
| 6 | 0.197688 | 0 | 4 mins | |
| 6.99 | 0.504792 | 1 | 2 mins | |
| 8.01 | 0.49088 | 1 | 1 min | |
| 8.99 | 0.798956 | Marker | | 20+2+1 = 23 minutes after the hour |
| 9.99 | 0.205956 | 0 | Always 0 | |
| 11 | 0.208472 | 0 | Always 0 | |
| 12 | 0.212956 | 0 | 20 hours | |
| 13 | 0.211552 | 0 | 10 hours | |
| 14 | 0.20858 | 0 | Always 0 | |
| 15 | 0.20668 | 0 | 8 hours | |
| 16 | 0.493796 | 1 | 4 hours | |
| 17 | 0.198572 | 0 | 2 hours | |
| 18 | 0.207872 | 0 | 1 hour | |
| 19 | 0.8092 | Marker | | 4 hours into the day (4am) |
| 20 | 0.190116 | 0 | Always 0 | |
| 21 | 0.210032 | 0 | Always 0 | |
| 22 | 0.219788 | 0 | 200 days | |
| 23 | 0.200324 | 0 | 100 days | |
| 24 | 0.211836 | 0 | Always 0 | |
| 25 | 0.488684 | 1 | 80 days | |
| 26 | 0.213412 | 0 | 40 days | |
| 27 | 0.212032 | 0 | 20 days | |
| 28 | 0.504496 | 1 | 10 days | |
| 29 | 0.804792 | Marker | | |
| 30 | 0.189864 | 0 | 8 days | |
| 31 | 0.503328 | 1 | 4 days | |
| 32 | 0.505604 | 1 | 2 days | |
| 33 | 0.195972 | 0 | 1 day | |
| 34 | 0.219472 | 0 | Always 0 | 80+10+4+2 = 96th day of year |
| 35 | 0.210808 | 0 | Always 0 | |
| 36 | 0.204724 | 0 | DUT1 + | |
| 37 | 0.50706 | 1 | DUT1 - | |
| 38 | 0.212216 | 0 | DUT1 + | |
| 39 | 0.80608 | Marker | | DUT1 is negative |
| 40 | 0.192576 | 0 | DUT1 0.8 | |
| 41 | 0.211388 | 0 | DUT1 0.4 | |
| 42 | 0.508932 | 1 | DUT1 0.2 | |
| 43 | 0.197684 | 0 | DUT1 0.1 | |
| 44 | 0.21776 | 0 | Always 0 | DUT1 is -0.2 seconds |
| 45 | 0.215488 | 0 | 80 years | |
| 46 | 0.213244 | 0 | 40 years | |
| 47 | 0.208428 | 0 | 20 years | |
| 48 | 0.50354 | 1 | 10 years | |
| 49 | 0.784892 | Marker | | |
| 50 | 0.189896 | 0 | 8 years | |
| 51 | 0.497532 | 1 | 4 years | |
| 52 | 0.198768 | 0 | 2 years | |
| 53 | 0.201924 | 0 | 1 year | |
| 54 | 0.225144 | 0 | Always 0 | 10+4 = year 14 (2014) |
| 55 | 0.195084 | 0 | Leap year | |
| 56 | 0.225728 | 0 | Leap second at end of month | |
| 57 | 0.498064 | 1 | DST Value 2 | |
| 58 | 0.499896 | 1 | DST Value 1 | |
| 59 | 0.798084 | Marker | | No leap year/second. DST in effect |

So, this trace was taken on April 6th, 2014 at 4:23 UTC (9:23pm on April 5th, Seattle time). This year is not a leap year, and Universal Coordinated Time (set by atomic clocks) is about 0.2 seconds ahead of UT1 time (determined by the rotation of the Earth).

This looked like a perfect system, and it would be super easy to coordinate with my books. Connecting to it was a little odd since I needed to interface a 1.5V circuit with a ~~5V~~ 3.3V one. I used an LM317 programmable voltage regulator. This regulator has a feedback circuit that attempts to keep a reference pin at 1.25V. An appropriate resistor divider on the output can keep this pin at 1.25V when the output is at 1.5V. I started with a 100k and 20k resistor, which should reduce the desired 1.5V output by 17% to 1.25V, which is perfect for the reference pin. I chose large values to reduce the amount of current and therefore power wasted in the divider, but I found that the reference pin actually draws a bit of current, so my output voltage rose to 4.4V unloaded and 2.7V with a 200 Ω load. Dropping the resistors down to 1k and 200 Ω solved the problem.

On the input side, I used a BJT transistor to buffer the radio's output and raise it to 3.3V. I needed a BJT because I was afraid that 1.5V wouldn't be enough to hit the gate threshold voltage of a FET. A similar circuit brought the Power On signal from the microcontroller down to safe levels. The whole thing looked like this:

With all of this put together, I fired it up and started writing firmware to support the radio when I hit a snag. My radio output signal looked like this:

60kHz is a very low frequency for radio waves, and the signal coming from Fort Collins is extremely weak.
As a result, the high frequency and high current electrical signals from my circuit radiated enough stray RF energy to overpower it and confuse the radio module. Placing the module several feet away from the clock fixed this problem, but no matter how I tried to arrange it, there was no way to reasonably attach the radio module that wouldn't leave the signal corrupted. I suppose this is why these modules are typically used in extremely low power clock circuits, and even then placed as far away as possible from the rest of the circuit, as in the clock I bought.

So with a terrestrial radio option no longer available, the next best thing was to look to space.

GPS works by estimating the distance between satellites and a target by using the time of flight of radio. This involves each satellite transmitting what are essentially time stamps and the receiver measuring the difference in time between the stamps it receives. These time stamps actually include calendar and date information. Running at around 1.5GHz, these signals are much easier to isolate than 60kHz. Furthermore, as a more widely used technology, a lot more work has been put into making reliable GPS receivers that are used in a variety of high power products like cars and cellular phones.

Googling around, I stumbled across a GPS module on Adafruit. While they offer a breakout board for this module, I saved a bit by spinning my own. 0.0748 Bitcoin and a few days later, it was in my mailbox.

This module contains a totally integrated GPS radio receiver complete with on-board antenna. Communication is over standard serial. With the module in hand, I started working out how to make a breakout board. In addition to simply supplying power and serial communication, there are a few extra features I wanted to include. The device includes an indication LED that is useful for reporting status during debugging. There's also a 1 pulse per second output that could be useful for timekeeping (I ended up not using this). A 3V coin cell battery can be connected to keep the device alive during power off. This helps it more rapidly find satellites when it's powered on again (worst case without this battery is about 15 minutes). Most importantly, the device has a connection for an external antenna. I needed this because I was planning on placing the radio inside a metal enclosure, and I was worried that the GPS signal wouldn't make it inside. External GPS antennas are fairly cheap consumer-grade devices often used for automotive GPS units to add an antenna on the roof. They connect over standard SMA. I picked one up for $30.

Of course, connecting this antenna required me to build my first ever radio frequency circuit. SCARY!

Not so scary really. I just needed to provide a 50 Ω trace from the module to the antenna connection. This involves matching the parasitic capacitance and inductance of the signal trace to prevent any reflections along the transmission line from causing problems. Considering how weak satellite GPS signals are coming in, I'm guessing this is pretty critical, but it worked on the first try, so I'll never know if it was my excellent engineering or the design of the radio's input circuit that was responsible.

I used an online trace impedance calculator to help design my circuit.
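As a rough illustration of what such calculators compute, the widely quoted microstrip approximation (often attributed to IPC-2141) is sketched below. This is not the calculator or the values used for this board; the example dimensions are made up, and the formula is only reasonable for narrow traces (roughly 0.1 < w/h < 2) over a solid ground plane.

```python
import math


def microstrip_z0(w, t, h, er=4.5):
    """Approximate characteristic impedance (ohms) of a surface microstrip.

    Z0 = 87 / sqrt(er + 1.41) * ln(5.98 * h / (0.8 * w + t))
    w = trace width, t = copper thickness, h = height above the ground
    plane (any consistent unit). er = 4.5 is a typical guess for FR4.
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))


# Example: 0.8 mm trace, 35 um (1 oz) copper, 0.4 mm above the ground plane
print(round(microstrip_z0(0.8, 0.035, 0.4), 1))  # ~45 ohms
```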
I forgot to record the exact values and which calculator I used, but taking measurements from my PCB design file, I used a different calculator to reproduce my results:

The sizing values are pretty self-explanatory, but the dielectric constant εr is a value I estimated from some search results for "dielectric of FR4". It wasn't exactly rocket science, but it worked. When I do more complicated RF circuits in the future, I'll have to be more precise, I'm sure.

I took into account other things too, such as making the entire bottom of the board a ground plane and peppering it with vias. I also made sure to keep the RF line as short as possible. The end result looked like this:

In order to keep the wire connections from shorting to the bottom ground plane, I drilled them out like this:

As it turns out, drilling into fiberglass with a general purpose drill bit is an excellent way to dull that drill bit.

The GPS module's serial command set allows you to configure the frequency with which it reports and what information is in those reports. I set mine to output GPRMC (Recommended Minimum Specific GNSS Sentence) and GPGGA. As the manual specifies, GPRMC provides Day, Month, Year, Hour, Minute, and Second information. GPGGA provides information regarding the quality of the satellite lock. My code waits for the GPGGA data to report "GPS fix" before reading the time/date information from GPRMC. The returning data looks something like this:

I used the Saleae to capture a single GPRMC packet which looked like this:

$GPRMC,034244.000,A,4737.4663,N,12221.3078,W,0.03,244016,060514,,,D*72

This indicates that the date is currently June 5th, 2014 (060514), and the time is 8:42:44PM in Seattle (034244 UTC). As an added bonus, the GPS location data (47° 37.4663' N, 122° 21.3078' W) also checks out:

I don't really need position data, but it's pretty cool how well it works. I might have to use one of these modules again for a future project.

# Schematic (again)

Since there were no more surprises after this point, I'll go ahead and reveal the schematic:

The green thing is the controller built into the display, and the cardboard is a high tech form of insulation.

There are still a bunch of vestigial elements on this schematic. The Output signal from the 60kHz radio module was repurposed to be the 1 Pulse Per Second output of the GPS module, which I also didn't end up using.

The display and GPS module are now both connected over the same serial bus. This is okay because both of these devices require their commands to be prefaced with addresses that are unique to each of them. Furthermore, because only the GPS module sends any kind of response, there's no need to worry about a race condition where the GPS module and display might try to send messages at the same time. It's just convenient that both of them accept 9600 baud serial.

That's not to say I didn't have any problems.

# Serial problems

With the hardware finished, I set about writing some firmware. A few days into this, I noticed that sometimes data packets were getting corrupted when sent from the GPS radio to my AVR.

As a temporary fix, I used the internal framing error bit of the AVR to detect when data was coming in corrupt and wrote some code to just ignore it. While this more or less fixed the problem, I wasn't content to just let it slide without finding a root cause.
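Before digging into that root cause, a quick aside for anyone who wants to decode sentences like the one above themselves: here is a minimal Python sketch of pulling the UTC time and date out of a GPRMC sentence. It is only an illustration (the clock's firmware does this on the AVR), and it skips checksum verification for brevity.

```python
def parse_gprmc(sentence):
    """Return (year, month, day, hour, minute, second) in UTC, or None.

    GPRMC fields, comma separated: sentence ID, time hhmmss.sss, status
    ('A' = valid fix), lat, N/S, lon, E/W, speed, course, date ddmmyy, ...
    The '*checksum' suffix is stripped but not verified here.
    """
    body = sentence.split("*")[0].lstrip("$")
    fields = body.split(",")
    if not fields[0].endswith("RMC") or fields[2] != "A":
        return None
    hhmmss, ddmmyy = fields[1], fields[9]
    hour, minute = int(hhmmss[0:2]), int(hhmmss[2:4])
    second = int(float(hhmmss[4:]))
    day, month, year = int(ddmmyy[0:2]), int(ddmmyy[2:4]), 2000 + int(ddmmyy[4:6])
    return (year, month, day, hour, minute, second)


print(parse_gprmc("$GPRMC,034244.000,A,4737.4663,N,12221.3078,W,0.03,244016,060514,,,D*72"))
# (2014, 5, 6, 3, 42, 44), reading the date field as ddmmyy per the NMEA convention
```

With that aside out of the way, back to the corrupted packets.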
Here you can see a simple loopback script with a debugging bit indicating where the corrupt byte comes in ('0' turns into '.'):

I had a number of theories as to what was causing this problem:

• I measured the GPS module transmitting at 9607.69Hz while my AVR transmits at 9600.00Hz. Standard serial starts counting from when the first bit is set, so it's possible that for long messages, this accumulated error causes bits to be missed. Seemed unlikely though, because that's less than 0.1% error (over a 10-bit frame it accumulates to under 1% of a bit period, and the receiver resynchronizes on every start bit anyway).
• I thought my code might be taking too long to handle incoming data so that it can't keep up. This was ruled out by some debugging where I established that my code spends a majority of its time in a tight loop waiting for data to come in.
• Noise? I don't know.

I probably had a thousand half-baked theories about what could be going on here, but nothing seemed particularly out of place, and I was going crazy.

One interesting thing I found is that it sometimes randomly did take slightly longer to read an incoming byte, even if it was still a much smaller amount of time than the total serial transmit time. You can see this below, where the orange trace indicates how long it took to read the incoming byte from a buffer and how that correlates to where the corruption starts.

The actual problem ended up being pretty interesting.

Because I'm lazy, I very rarely write code from scratch. A smart engineer tries to avoid duplicating work, so many good software engineers try to find libraries to do what they want or write their own libraries that they can pull into any future project that has similar requirements.

Unfortunately, I'm not a good software engineer. I usually start a project by pulling in code from other similar projects. In the case of this project, I was going to use the code from my party lights, which used the ATMega328 to communicate over serial, but that project only received serial. It had no code for transmitting it. Instead, I used the code from my beat tracking windshield wipers, which maintained two-way communication with the PC controlling them.

What I didn't seem to notice is that the wiper driver used the ATTiny2313 while my new circuit used the ATMega328. You can see below how my Gutenberg Clock code sort of resembles the Wiper code that it drew from.

Wipers:

```c
//config serial
UCSRC = 0b00000110; //8 bit, asynch, no parity, 1 stop bit
UCSRA = 0b00000000; //Don't double trans speed
UBRRH = 0;
UBRRL = 64; //9600 baud
```

Gutenberg Clock:

```c
//config serial
UCSR0A = 0x00; //no double trans speed
UCSR0B = 0x08; //TX enable
UCSR0C = 0x46; //8 bit, asynch, no parity, 1 stop bit
UBRR0H = 0;
UBRR0L = 71; //9600 baud
```

For some reason, I switched bit 6 of UCSRnC from a 0 to a 1. I'm not entirely certain why I did this, but if I had to guess, I'd say it has something to do with misinterpreting the changes added to this register in the ATMega328 to support more USART features. I also could have temporarily mixed up "synchronous" and "asynchronous". Asynchronous means that no clock pin is required, but counter-intuitively, it requires receiving and transmitting parties to be more "in synch". That always confuses me.

Regardless of why I did this, the implications were pretty interesting.

I accidentally placed my UART into synchronous mode. This means that my AVR was providing a data clock that nobody was reading. What's confusing is how well this actually worked. Transmission from the AVR was completely unaffected.
Asynchronous serial requires the receiving party to start a timer when the first bit gets set and then check back on the line at a regular interval (9600Hz) to see the following bits. This mode of communication can be severely impacted by unexpected delays between bits. Synchronous communication adds the clock pin, which makes it more robust, as the receiving party can be told by the sending party when the bit is ready and should be read.

In my case, I was lucky in that there were no unexpected delays between bits sent from the AVR. Although it isn't required for synchronous communication, the AVR maintained a solid 9600 baud transmit rate with no delays. Ignoring the clock line, this looked exactly like a standard 9600 baud serial transmission. This worked 100% of the time.

This wasn't the case on the receiving end though. As I said before, asynchronous communication requires a timer to be started when the first bit arrives. Because the AVR was under the impression that its clock pin was providing the timing, it read the incoming bits on its own schedule.

For the most part, this actually worked! Since both devices had such well-tuned transmit/receive clocks, as long as the AVR started reading somewhere between the clock edges of the incoming signal, it would continue to do so without problems. Furthermore, since the baud rate is so slow compared to the rise and fall time of the signal, the bits are valid for a majority of the time. Every once in a while though, there will be some small delay or change in clock speed that can make a data stream that was near a clock edge pass it and corrupt a bit. That's where my problem was.

So, at 3AM (yes, Seattle time this time), I fixed the UCSRnC byte and had no more problems. It wasn't entirely a waste of time though. I learned a lot about how the AVR's USARTs work during my investigation and used the Frame Error detector for the first time.

## 10 thoughts on “The Gutenberg Clock”

1. If you are interested in porting the Gutenberg clock to my Wise Clock 4, I could send you a kit (you would need to buy the displays from Sure Electronics). In the end, you could have a device with ATmega1284, SD card, GPS, bluetooth and 128x16 tri-color display,
# Frozen inside an ice cube

You are a mad, demented evil scientist getting revenge on John Smith, a secret agent who has thwarted your plans one too many times. Intent on making Mr Smith suffer, you decide to capture his family and imprison them in your ice-prison. This ice prison is simply a roughly 2m by 2m by 1m cuboid with a human body in the middle (or thin enough such that the outlines of the body are visible). Mr Smith will, upon your invitation, go to the abandoned warehouse in the middle of nowhere and see his family as frozen, human trophies of yours.

The question: How do you freeze humans inside said ice cube with them staying alive?

Conditions:

• The ice surrounding the body must be entirely frozen
• A few holes and maybe an air bubble surrounding the face are fine

Also, if staying alive in such conditions is possible, how much air does a frozen person consume per hour, if any?

• How advanced is the technology you have at your disposal? – Danijel Mar 4 '17 at 11:57
• I know that we have a few questions somewhere on air supply in cave systems; those may be of interest to you. – a CVn Mar 4 '17 at 12:02
• Water expands when freezing, even inside the body--which is why frostbite turns the affected part blue: the capillaries burst. Now expand this concept throughout the body. When you defrost them, you'll basically have a pile of soup. – nzaman Mar 4 '17 at 12:41

# You do not

> How do you freeze humans inside said ice cube with them staying alive?

Your cube contains at the very most $20\ \mathrm{dm} \times 20\ \mathrm{dm} \times 10\ \mathrm{dm} = 4{,}000\ \mathrm{dm}^3 = 4{,}000$ liters of air, meaning that it holds about 4,000 breaths. Those will be expended in a few hours. And this is assuming you did not encase him in ice but made it an ice cell. So a breathing tube is a requirement or he will die before you have even gotten the water to solidify.
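To put a rough number on the "how much air" part (a back-of-the-envelope estimate, not part of the original answer): a resting adult consumes on the order of $0.25$ liters of oxygen per minute, i.e. roughly $0.25 \times 60 \approx 15$ liters of $\mathrm{O_2}$ per hour, while exhaling a comparable volume of $\mathrm{CO_2}$. Air is only about 21% oxygen, so even the completely empty cuboid holds at most about $0.21 \times 4000 \approx 840$ liters of $\mathrm{O_2}$, only part of which is usable before hypoxia sets in, and the rising $\mathrm{CO_2}$ concentration becomes dangerous well before the oxygen is exhausted. Either way, the survival window without an outside air supply is measured in hours, which is why the breathing tube above is non-negotiable.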
# yt.data_objects.unstructured_mesh module¶ Unstructured mesh base container. class yt.data_objects.unstructured_mesh.SemiStructuredMesh(mesh_id, filename, connectivity_indices, connectivity_coords, index)[source] apply_units(arr, units) argmax(field, axis=None) Return the values at which the field is maximized. This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field Parameters: field (string or tuple field name) – The field to maximize. axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2. A list of YTQuantities as specified by the axis argument. Examples >>> temp_at_max_rho = reg.argmax("density", axis="temperature") >>> max_rho_xyz = reg.argmax("density") >>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature", ... "velocity_magnitude"]) >>> x, y, z = reg.argmax("density") argmin(field, axis=None) Return the values at which the field is minimized. This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field Parameters: field (string or tuple field name) – The field to minimize. axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2. A list of YTQuantities as specified by the axis argument. Examples >>> temp_at_min_rho = reg.argmin("density", axis="temperature") >>> min_rho_xyz = reg.argmin("density") >>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature", ... "velocity_magnitude"]) >>> x, y, z = reg.argmin("density") blocks chunks(fields, chunking_style, **kwargs) clear_data() Clears out all data from the YTDataContainer instance, freeing memory. clone() Clone a data object. This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is. Notes One use case for this is to have multiple identical data objects that are being chunked over in different orders. Examples >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sp = ds.sphere("c", 0.1) >>> sp_clone = sp.clone() >>> sp["density"] >>> print sp.field_data.keys() [("gas", "density")] >>> print sp_clone.field_data.keys() [] comm = None convert(datatype) This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError. count(selector) count_particles(selector, x, y, z) deposit(positions, fields=None, method=None, kernel_name='cubic') fcoords fcoords_vertex fwidth get_data(fields=None) get_dependencies(fields) get_field_parameter(name, default=None) This is typically only used by derived field functions, but it returns parameters used to generate fields. get_global_startindex() Return the integer starting index for each dimension at the current level. 
has_field_parameter(name) Checks if a field parameter is set. has_key(key) Checks if a data field already exists. icoords index integrate(field, weight=None, axis=None) Compute the integral (projection) of a field along an axis. This projects a field along an axis. Parameters: field (string or tuple field name) – The field to project. weight (string or tuple field name) – The field to weight the projection by axis (string) – The axis to project along. YTProjection Examples >>> column_density = reg.integrate("density", axis="z") ires keys() max(field, axis=None) Compute the maximum of a field, optionally along an axis. This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value. Parameters: field (string or tuple field name) – The field to maximize. axis (string, optional) – If supplied, the axis to project the maximum along. Either a scalar or a YTProjection. Examples >>> max_temp = reg.max("temperature") >>> max_temp_proj = reg.max("temperature", axis="x") max_level mean(field, axis=None, weight=None) Compute the mean of a field, optionally along an axis, with a weight. This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average. Parameters: field (string or tuple field name) – The field to average. axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along) weight (string, optional) – The field to use as a weight. Scalar or YTProjection. Examples >>> avg_rho = reg.mean("density", weight="cell_volume") >>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density") min(field, axis=None) Compute the minimum of a field. This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the max has already been requested, it will use the cached extrema value. Parameters: field (string or tuple field name) – The field to minimize. axis (string, optional) – If supplied, the axis to compute the minimum along. Scalar. Examples >>> min_temp = reg.min("temperature") min_level partition_index_2d(axis) partition_index_3d(ds, padding=0.0, rank_ratio=1) partition_index_3d_bisection_list() Returns an array that is used to drive _partition_index_3d_bisection, below. partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1) Given a region, it subdivides it into smaller regions for parallel analysis. pf profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp') Create a 1, 2, or 3D profile object from this data_source. The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile(). Parameters: bin_fields (list of strings) – List of the binning fields for profiling. fields (list of strings) – The fields to be profiled. n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins for each bin are used for each bin field. Default: 64. 
extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary. logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field. units (dict of strings) – The units of the fields in the profiles, including the bin_fields. weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin. accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False. fractional (If True the profile values are divided by the sum of all) – the profile data such that the profile represents a probability distribution function. deposition (Controls the type of deposition used for ParticlePhasePlots.) – Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored the if the input fields are not of particle type. Examples Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>]. >>> ds = load("DD0046/DD0046") >>> ad = ds.all_data() ... [("gas", "temperature"), ... ("gas", "velocity_x")]) >>> print (profile.x) >>> print (profile["gas", "temperature"]) >>> plot = profile.plot() ptp(field) Compute the range of values (maximum - minimum) of a field. This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field. Parameters: field (string or tuple field name) – The field to average. Scalar Examples >>> rho_range = reg.ptp("density") save_as_dataset(filename=None, fields=None) Export a data object to a reloadable yt dataset. This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset. Parameters: filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container. fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved. filename – The name of the file that has been created. str Examples >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> sp = ds.sphere(ds.domain_center, (10, "Mpc")) >>> fn = sp.save_as_dataset(fields=["density", "temperature"]) >>> sphere_ds = yt.load(fn) >>> # the original data container is available as the data attribute >>> print (sds.data["density"]) [ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30 3.57339907e-30 2.83150720e-30] g/cm**3 >>> ad = sphere_ds.all_data() [ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04 4.54380547e+04 4.72560117e+04] K save_object(name, filename=None) Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object(). 
select(selector, source, dest, offset)[source] select_blocks(selector) select_fcoords(dobj=None) select_fcoords_vertex(dobj=None) select_fwidth(dobj)[source] select_icoords(dobj) select_ires(dobj)[source] select_particles(selector, x, y, z) select_tcoords(dobj)[source] selector set_field_parameter(name, val) Here we set up dictionaries that get passed up and down and ultimately to derived fields. shape std(field, weight=None) Compute the variance of a field. This will, in a parallel-ware fashion, compute the variance of the given field. Parameters: field (string or tuple field name) – The field to calculate the variance of weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset. Scalar sum(field, axis=None) Compute the sum of a field, optionally along an axis. This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis. Parameters: field (string or tuple field name) – The field to sum. axis (string, optional) – If supplied, the axis to sum along. Either a scalar or a YTProjection. Examples >>> total_vol = reg.sum("cell_volume") >>> cell_count = reg.sum("ones", axis="x") tiles to_dataframe(fields=None) Export a data object to a pandas DataFrame. This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError. Parameters: fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. df – The data contained in the object. DataFrame Examples >>> dd = ds.all_data() >>> df1 = dd.to_dataframe(["density", "temperature"]) >>> dd["velocity_magnitude"] >>> df2 = dd.to_dataframe() to_glue(fields, label='yt', data_collection=None) Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started. write_out(filename, fields=None, format='%0.16e') Write out the YTDataContainer object in a text file. This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the fields list. Parameters: filename (String) – The name of the file to write to. fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used. format (String, Default = "%0.16e") – Format of numbers to be written in the file. ValueError – Raised when there is no existing field. YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields. Examples >>> ds = fake_particle_ds() >>> sp = ds.sphere(ds.domain_center, 0.25) >>> sp.write_out("sphere_1.txt") >>> sp.write_out("sphere_2.txt", fields=["cell_volume"]) class yt.data_objects.unstructured_mesh.UnstructuredMesh(mesh_id, filename, connectivity_indices, connectivity_coords, index)[source] apply_units(arr, units) argmax(field, axis=None) Return the values at which the field is maximized. 
This will, in a parallel-aware fashion, find the maximum value and then return to you the values at that maximum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field Parameters: field (string or tuple field name) – The field to maximize. axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2. A list of YTQuantities as specified by the axis argument. Examples >>> temp_at_max_rho = reg.argmax("density", axis="temperature") >>> max_rho_xyz = reg.argmax("density") >>> t_mrho, v_mrho = reg.argmax("density", axis=["temperature", ... "velocity_magnitude"]) >>> x, y, z = reg.argmax("density") argmin(field, axis=None) Return the values at which the field is minimized. This will, in a parallel-aware fashion, find the minimum value and then return to you the values at that minimum location that are requested for “axis”. By default it will return the spatial positions (in the natural coordinate system), but it can be any field Parameters: field (string or tuple field name) – The field to minimize. axis (string or list of strings, optional) – If supplied, the fields to sample along; if not supplied, defaults to the coordinate fields. This can be the name of the coordinate fields (i.e., ‘x’, ‘y’, ‘z’) or a list of fields, but cannot be 0, 1, 2. A list of YTQuantities as specified by the axis argument. Examples >>> temp_at_min_rho = reg.argmin("density", axis="temperature") >>> min_rho_xyz = reg.argmin("density") >>> t_mrho, v_mrho = reg.argmin("density", axis=["temperature", ... "velocity_magnitude"]) >>> x, y, z = reg.argmin("density") blocks chunks(fields, chunking_style, **kwargs) clear_data() Clears out all data from the YTDataContainer instance, freeing memory. clone() Clone a data object. This will make a duplicate of a data object; note that the field_parameters may not necessarily be deeply-copied. If you modify the field parameters in-place, it may or may not be shared between the objects, depending on the type of object that that particular field parameter is. Notes One use case for this is to have multiple identical data objects that are being chunked over in different orders. Examples >>> ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030") >>> sp = ds.sphere("c", 0.1) >>> sp_clone = sp.clone() >>> sp["density"] >>> print sp.field_data.keys() [("gas", "density")] >>> print sp_clone.field_data.keys() [] comm = None convert(datatype)[source] This will attempt to convert a given unit to cgs from code units. It either returns the multiplicative factor or throws a KeyError. count(selector)[source] count_particles(selector, x, y, z)[source] deposit(positions, fields=None, method=None, kernel_name='cubic')[source] fcoords fcoords_vertex fwidth get_data(fields=None) get_dependencies(fields) get_field_parameter(name, default=None) This is typically only used by derived field functions, but it returns parameters used to generate fields. get_global_startindex()[source] Return the integer starting index for each dimension at the current level. has_field_parameter(name) Checks if a field parameter is set. has_key(key) Checks if a data field already exists. icoords index integrate(field, weight=None, axis=None) Compute the integral (projection) of a field along an axis. This projects a field along an axis. 
Parameters: field (string or tuple field name) – The field to project. weight (string or tuple field name) – The field to weight the projection by axis (string) – The axis to project along. YTProjection Examples >>> column_density = reg.integrate("density", axis="z") ires keys() max(field, axis=None) Compute the maximum of a field, optionally along an axis. This will, in a parallel-aware fashion, compute the maximum of the given field. Supplying an axis will result in a return value of a YTProjection, with method ‘mip’ for maximum intensity. If the max has already been requested, it will use the cached extrema value. Parameters: field (string or tuple field name) – The field to maximize. axis (string, optional) – If supplied, the axis to project the maximum along. Either a scalar or a YTProjection. Examples >>> max_temp = reg.max("temperature") >>> max_temp_proj = reg.max("temperature", axis="x") max_level mean(field, axis=None, weight=None) Compute the mean of a field, optionally along an axis, with a weight. This will, in a parallel-aware fashion, compute the mean of the given field. If an axis is supplied, it will return a projection, where the weight is also supplied. By default the weight field will be “ones” or “particle_ones”, depending on the field being averaged, resulting in an unweighted average. Parameters: field (string or tuple field name) – The field to average. axis (string, optional) – If supplied, the axis to compute the mean along (i.e., to project along) weight (string, optional) – The field to use as a weight. Scalar or YTProjection. Examples >>> avg_rho = reg.mean("density", weight="cell_volume") >>> rho_weighted_T = reg.mean("temperature", axis="y", weight="density") min(field, axis=None) Compute the minimum of a field. This will, in a parallel-aware fashion, compute the minimum of the given field. Supplying an axis is not currently supported. If the max has already been requested, it will use the cached extrema value. Parameters: field (string or tuple field name) – The field to minimize. axis (string, optional) – If supplied, the axis to compute the minimum along. Scalar. Examples >>> min_temp = reg.min("temperature") min_level partition_index_2d(axis) partition_index_3d(ds, padding=0.0, rank_ratio=1) partition_index_3d_bisection_list() Returns an array that is used to drive _partition_index_3d_bisection, below. partition_region_3d(left_edge, right_edge, padding=0.0, rank_ratio=1) Given a region, it subdivides it into smaller regions for parallel analysis. pf profile(bin_fields, fields, n_bins=64, extrema=None, logs=None, units=None, weight_field='cell_mass', accumulation=False, fractional=False, deposition='ngp') Create a 1, 2, or 3D profile object from this data_source. The dimensionality of the profile object is chosen by the number of fields given in the bin_fields argument. This simply calls yt.data_objects.profiles.create_profile(). Parameters: bin_fields (list of strings) – List of the binning fields for profiling. fields (list of strings) – The fields to be profiled. n_bins (int or list of ints) – The number of bins in each dimension. If None, 64 bins for each bin are used for each bin field. Default: 64. extrema (dict of min, max tuples) – Minimum and maximum values of the bin_fields for the profiles. The keys correspond to the field names. Defaults to the extrema of the bin_fields of the dataset. If a units dict is provided, extrema are understood to be in the units specified in the dictionary. 
logs (dict of boolean values) – Whether or not to log the bin_fields for the profiles. The keys correspond to the field names. Defaults to the take_log attribute of the field. units (dict of strings) – The units of the fields in the profiles, including the bin_fields. weight_field (str or tuple field identifier) – The weight field for computing weighted average for the profile values. If None, the profile values are sums of the data in each bin. accumulation (bool or list of bools) – If True, the profile values for a bin n are the cumulative sum of all the values from bin 0 to n. If -True, the sum is reversed so that the value for bin n is the cumulative sum from bin N (total bins) to n. If the profile is 2D or 3D, a list of values can be given to control the summation in each dimension independently. Default: False. fractional (If True the profile values are divided by the sum of all) – the profile data such that the profile represents a probability distribution function. deposition (Controls the type of deposition used for ParticlePhasePlots.) – Valid choices are ‘ngp’ and ‘cic’. Default is ‘ngp’. This parameter is ignored the if the input fields are not of particle type. Examples Create a 1d profile. Access bin field from profile.x and field data from profile[<field_name>]. >>> ds = load("DD0046/DD0046") >>> ad = ds.all_data() ... [("gas", "temperature"), ... ("gas", "velocity_x")]) >>> print (profile.x) >>> print (profile["gas", "temperature"]) >>> plot = profile.plot() ptp(field) Compute the range of values (maximum - minimum) of a field. This will, in a parallel-aware fashion, compute the “peak-to-peak” of the given field. Parameters: field (string or tuple field name) – The field to average. Scalar Examples >>> rho_range = reg.ptp("density") save_as_dataset(filename=None, fields=None) Export a data object to a reloadable yt dataset. This function will take a data object and output a dataset containing either the fields presently existing or fields given in the fields list. The resulting dataset can be reloaded as a yt dataset. Parameters: filename (str, optional) – The name of the file to be written. If None, the name will be a combination of the original dataset and the type of data container. fields (list of string or tuple field names, optional) – If this is supplied, it is the list of fields to be saved to disk. If not supplied, all the fields that have been queried will be saved. filename – The name of the file that has been created. str Examples >>> import yt >>> ds = yt.load("enzo_tiny_cosmology/DD0046/DD0046") >>> sp = ds.sphere(ds.domain_center, (10, "Mpc")) >>> fn = sp.save_as_dataset(fields=["density", "temperature"]) >>> sphere_ds = yt.load(fn) >>> # the original data container is available as the data attribute >>> print (sds.data["density"]) [ 4.46237613e-32 4.86830178e-32 4.46335118e-32 ..., 6.43956165e-30 3.57339907e-30 2.83150720e-30] g/cm**3 >>> ad = sphere_ds.all_data() [ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ..., 4.40108359e+04 4.54380547e+04 4.72560117e+04] K save_object(name, filename=None) Save an object. If filename is supplied, it will be stored in a shelve file of that name. Otherwise, it will be stored via yt.data_objects.api.GridIndex.save_object(). 
select(selector, source, dest, offset)[source] select_blocks(selector)[source] select_fcoords(dobj=None)[source] select_fcoords_vertex(dobj=None)[source] select_fwidth(dobj)[source] select_icoords(dobj)[source] select_ires(dobj)[source] select_particles(selector, x, y, z)[source] select_tcoords(dobj)[source] selector set_field_parameter(name, val) Here we set up dictionaries that get passed up and down and ultimately to derived fields. shape std(field, weight=None) Compute the variance of a field. This will, in a parallel-ware fashion, compute the variance of the given field. Parameters: field (string or tuple field name) – The field to calculate the variance of weight (string or tuple field name) – The field to weight the variance calculation by. Defaults to unweighted if unset. Scalar sum(field, axis=None) Compute the sum of a field, optionally along an axis. This will, in a parallel-aware fashion, compute the sum of the given field. If an axis is specified, it will return a projection (using method type “sum”, which does not take into account path length) along that axis. Parameters: field (string or tuple field name) – The field to sum. axis (string, optional) – If supplied, the axis to sum along. Either a scalar or a YTProjection. Examples >>> total_vol = reg.sum("cell_volume") >>> cell_count = reg.sum("ones", axis="x") tiles to_dataframe(fields=None) Export a data object to a pandas DataFrame. This function will take a data object and construct from it and optionally a list of fields a pandas DataFrame object. If pandas is not importable, this will raise ImportError. Parameters: fields (list of strings or tuple field names, default None) – If this is supplied, it is the list of fields to be exported into the data frame. If not supplied, whatever fields presently exist will be used. df – The data contained in the object. DataFrame Examples >>> dd = ds.all_data() >>> df1 = dd.to_dataframe(["density", "temperature"]) >>> dd["velocity_magnitude"] >>> df2 = dd.to_dataframe() to_glue(fields, label='yt', data_collection=None) Takes specific fields in the container and exports them to Glue (http://www.glueviz.org) for interactive analysis. Optionally add a label. If you are already within the Glue environment, you can pass a data_collection object, otherwise Glue will be started. write_out(filename, fields=None, format='%0.16e') Write out the YTDataContainer object in a text file. This function will take a data object and produce a tab delimited text file containing the fields presently existing and the fields given in the fields list. Parameters: filename (String) – The name of the file to write to. fields (List of string, Default = None) – If this is supplied, these fields will be added to the list of fields to be saved to disk. If not supplied, whatever fields presently exist will be used. format (String, Default = "%0.16e") – Format of numbers to be written in the file. ValueError – Raised when there is no existing field. YTException – Raised when field_type of supplied fields is inconsistent with the field_type of existing fields. Examples >>> ds = fake_particle_ds() >>> sp = ds.sphere(ds.domain_center, 0.25) >>> sp.write_out("sphere_1.txt") >>> sp.write_out("sphere_2.txt", fields=["cell_volume"])
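For orientation, these mesh containers are not usually constructed by hand; yt builds them when an unstructured-mesh dataset is indexed. Below is a minimal sketch using the stream frontend, assuming yt 3.x's `yt.load_unstructured_mesh` loader and its `("connect1", ...)` field-naming convention; check the exact signature in your installed version.

```python
import numpy as np
import yt

# Two hexahedral elements sharing a face: 12 vertices, 8 connectivity indices each.
coords = np.array(
    [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
     [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],
     [0, 0, 2], [1, 0, 2], [1, 1, 2], [0, 1, 2]], dtype="float64")
connectivity = np.array(
    [[0, 1, 2, 3, 4, 5, 6, 7],
     [4, 5, 6, 7, 8, 9, 10, 11]], dtype="int64")

# One value per element; the ("connect1", ...) key names the first (and only) mesh.
elem_data = {("connect1", "test_field"): np.array([1.0, 2.0])}

ds = yt.load_unstructured_mesh(connectivity, coords, elem_data=elem_data)

# The index now holds UnstructuredMesh containers like the ones documented above,
# and ordinary data objects can query fields defined on the mesh.
print(ds.index.meshes)
ad = ds.all_data()
print(ad["connect1", "test_field"])
```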
auto_math_text
web
2012-02-07, 15:43, post #28, by EdH ("Ed Hall")

Quote: Originally Posted by Mini-Geek
Yes. I don't know much about lattice sieving, but I'd guess that ~1.6M of the ~6.6M hash collisions eliminated as hash collisions in the first filtering were from this range being duplicated (it might help to compare this to a similar test and see if it had ~5.0M hash collisions instead of ~6.6M). FYI: in the 9M-10M range, 1007666 rels were in rels1, 666901 were in rels2, and 1662263 were in rels3 (none were in the remaining files).

I wonder if it had anything to do with the power loss/restart... It shouldn't have, since I deleted the unfinished files and restarted the ranges from scratch. Perhaps my tracking of all the different "pieces" was less than accurate... I'll work on that in the future. Thanks for post-processing.
auto_math_text
web
SMS scnews item created by Georg Gottwald at Thu 7 Jun 2007 1216 Type: Seminar Distribution: World Expiry: 18 Jun 2007 Calendar1: 18 Jun 2007 1515-1600 CalLoc1: Slade Lecture Theatre, Ground Floor, School of Physics Auth: gottwald@p6363.pc.maths.usyd.edu.au

# COSNet DISTINGUISHED VISITOR SEMINAR : Tony Bell -- Towards a Theory of Learning and Levels for Biology

SCHOOL OF PHYSICS COLLOQUIUM and COSNet DISTINGUISHED VISITOR SEMINAR www.physics.usyd.edu.au/local/coll

Monday 18th June, 2007 at 3:15 pm; refreshments from 3 pm

Venue: Slade Lecture Theatre, Ground Floor, School of Physics

Title: Towards a Theory of Learning and Levels for Biology

Presenter: Prof. Tony Bell, University of California at Berkeley

Abstract: Learning, plasticity, adaptivity: these occur at the ecological, behavioural, neural and molecular levels amongst others. Yet each level is just a different description of the same processes. Defined structural relations exist between the levels (networks within networks), and these define causal relations in time. I will describe how these causal relations are essentially inter-level in nature, consisting of downward 'boundary conditions' and upward 'emergence'. Using them, information can travel from the top to the bottom of the reductionist hierarchy and vice versa. This opens the possibility of defining new kinds of learning algorithm which exploit inter-level mappings for representational purposes. As examples, I will consider Spike Timing-Dependent synaptic Plasticity (STDP), and equations describing channel and enzyme kinetics. I will also present much review material on the nature of the biological material hierarchy and compare it to the scale-free systems successfully studied in physics by renormalisation group theory.
auto_math_text
web
# 10 BOILING

## 10.1 Introduction

When a solid surface is immersed in liquid and the solid surface temperature, $T_w$, exceeds the saturation temperature, $T_{sat}(p_\ell)$, for the liquid at that pressure, vapor bubbles can form, grow, and detach from the solid surface. This phase change process is called boiling, and the energy transport involved is classified as convective heat transfer with phase change. Two types of boiling are commonly distinguished: pool boiling and flow or forced convective boiling. In the former case, the bulk liquid is quiescent while the liquid near the heating surface moves due to free convection and the mixing induced by bubble growth and detachment. In the latter case, bulk liquid motion driven by some external means is superimposed on the motion that also occurs in pool boiling.

Vapor bubbles within the liquid phase are the primary visual characteristics that distinguish boiling from evaporation. The influence of bubbles is also the primary cause of differences in the thermodynamic and hydrodynamic analyses of these two phase change modes. Fig. 10.1 compares, in schematic fashion, the temperature field of a quiescent volume of fluid experiencing evaporation with that of one experiencing boiling. As the figure indicates, boiling requires a much larger temperature difference between the bulk liquid and the heating surface than evaporation does. A second distinction that arises from the figure is that the thermal boundary layer is less sharply defined in the case of boiling; this may be attributed to the mixing action caused by the release of bubbles from the heating surface.

In addition to the pool and forced convection classifications, boiling can also be categorized according to the initial temperature of the liquid. Fig. 10.2 shows pool boiling for different initial liquid temperatures. Subcooled boiling occurs if the liquid temperature starts below the saturation temperature. In this case, bubbles formed at the heater surface experience condensation as they rise through the cooler bulk liquid and may collapse before reaching the free surface. Saturated boiling, on the other hand, occurs if the temperature of the liquid equals the saturation temperature. In this case, bubbles form at the heating surface, travel intact through the liquid, and escape at the free surface.

Figure 10.1 Schematic comparison of evaporation and nucleate boiling temperature profiles.

Figure 10.2 Effects of liquid temperature: (a) subcooled boiling ($T_\ell < T_{sat}$), (b) saturated boiling ($T_\ell = T_{sat}$) (vapor, liquid).

Section 10.2 introduces the pool boiling curve and the various regimes (free convection, nucleate boiling, transition boiling, film boiling) of which it is composed. After this introduction, the four regimes and the characteristic points of the curve are then discussed in detail. Section 10.3 discusses nucleate boiling, including nucleation, bubble dynamics and detachment, nucleation site density, numerical simulation of bubble growth and merger, and heat transfer analysis. The critical heat flux is presented in Section 10.4, followed by discussions on transition boiling and minimum heat flux in Section 10.5. Section 10.6 discusses film boiling, including film boiling laminar boundary layer analysis and correlations, direct numerical simulations, and Leidenfrost phenomena.
This chapter is closed by a discussion of boiling in porous media (Section 10.7), including boiling on a wicked surface, boiling in porous media heated from below, and an analysis of film boiling in porous media. Forced convection boiling in both macro and micro tubes, which involves liquid-vapor two-phase flow with transitions between several characteristic flow patterns, is addressed in Chapter 11 under the heading of Two-Phase Flow.
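For reference, the onset condition just described is commonly written in terms of the excess temperature, or wall superheat; the symbol $\Delta T_e$ and the Newton's-law-of-cooling form below follow standard heat transfer notation rather than this chapter's own equation numbering:

$$\Delta T_e \equiv T_w - T_{sat}(p_\ell), \qquad q'' = h\,\Delta T_e,$$

with boiling possible only when $\Delta T_e > 0$. The pool boiling curve introduced in Section 10.2 is conventionally plotted as $q''$ versus $\Delta T_e$.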
auto_math_text
web
# The Computer Science Department

On a tour of a nearby university, we went into the computer science department. I was fascinated by the courses they offered, of course, but there were weird grids displayed on screens along the hallways. Halfway along the tour, a student challenged me to find out what they represent. Can you work it out? Oh, and if it helps, there were a few ads displayed as well: PDF with the 13 pictures in text form (The story is entirely fictional)

Hints: 1. The ads aren't necessary to solve the puzzle, but they will clarify the steps you need to make

• That's a lot of pictures hehe – dcfyj Nov 30 '16 at 20:59
• @dcfyj Maybe I should add the visual tag ;) – boboquack Nov 30 '16 at 21:02
• I was about to ask whether this is a true story - it sounded entirely plausible - and then I scrolled to the bottom ... – Rand al'Thor Nov 30 '16 at 21:19
• Any chance of getting a text transcription of the grids? – GentlePurpleRain Nov 30 '16 at 22:12
• @GentlePurpleRain Done! At the bottom is a link to a PDF. – boboquack Dec 1 '16 at 0:00

Using Jonathan Allan's results, I suspect the final answer to be INFORMATICS

How I reached this: Every keyword Jonathan found should be used as a mask on the corresponding tile, and tiles are ordered according to the numbers which were also decoded by Jonathan. The original color of cells is not needed anymore; only the characters themselves are relevant from now on. So here's how to decode each tile, but let me skip tile number 0 for a short moment, as it seems to have some anomaly - maybe my understanding is not yet perfect. EDIT: see below for updates.

Tile number 1 has the keyword 'oddness'. This suggests characters with an odd number as their ASCII code should be masked: Looks like a letter N.

Tile number 2 - 'fourset' - a character should be masked if its ASCII code has exactly 4 digits of 1 in its binary representation, or if the character itself is the number 4: Letter F.

Tile number 3 - 'ispower' - a character should be masked if its ASCII code is a power (with exponent greater than one) of a prime number - letter O.

Tile number 4 - 'rbshift' - a character should be masked if its ASCII code and one of its neighbouring characters' ASCII codes are in a right-bit-shift relation (that's an integer division by 2) - letter M.

Tile number 5 - 'alphnum' - a character should be masked if it is alphanumeric - letter A.

Tile number 6 - let me skip this one for a while.

Tile number 7 - 'sixfact' - a character should be masked if its ASCII code has exactly 6 different integer divisors - letter I.

Tile number 8 - 'divfour' - a character should be masked if its ASCII code is divisible by 4 - letter C.

Tile number 9 - 'sqrfree' - a character should be masked if its ASCII code is not divisible by any square number (except 1) - letter S.

So far we have letters: _NFOMA_ICS. Let's get back to tiles 0 and 6.

Tile number 0 - 'collatz' - I think it means a character should be masked if its ASCII code and one of its neighbouring characters' ASCII codes are consecutive elements of a Collatz sequence; however, the results are a little bit dirty here: It's almost a letter I, but the upper right part is missing. There are also some additional black cells on the left side, which might be the result of an unintentional match (68/2 = 34), or my interpretation is imperfect.
Tile number 6 - 'isprime' - I think it means a character should be masked if its ASCII code is a prime number, but this one is dirty too: It could be a letter T if there weren't those two extra black cells on the left side. A fault in my interpretation or in the puzzle generating process, maybe.

Anyway, I'll go with I and T as best guesses for these two, resulting in the final solution INFOMATICS - which, again, seems to be missing an R. But maybe that's a proper English word - I've never heard it, but I'm not a native speaker. Or is this a reference to the programming language R, and that it should be skipped because Python is superior according to the posters?

EDIT: Tiles number 0 and 6 have been corrected, and now clearly say I and T, respectively. Also, tile number 3 has been updated, and instead of a single letter O, there are two letters now on it: O and R. I wasn't completely right about its masking though: as it seems, it's not only powers of prime numbers that should be masked; any integer raised to a power of at least two should be masked in the ASCII codes. With all these modifications the solution is clearly: INFORMATICS.

• SQRFREE should mean it has no square factors > 1? – greenturtle3141 Dec 9 '16 at 3:13
• @greenturtle3141 yeah, that's another way to phrase the same criteria. – elias Dec 9 '16 at 7:38
• My fault with your problems. See my edit: I've fixed up some of the mistakes. I've accepted your answer because it's correct, but still try to decipher the 2 ads you haven't used. You should be able to finish the last 2, now, and be aware that I have changed the 3rd picture (in the post). – boboquack Dec 10 '16 at 6:18
• Thanks, @boboquack, I've seen and solved your updates, and will update my answer. No idea about the ads yet though. – elias Dec 10 '16 at 13:03
• To decipher the ads retrospectively, you'll need to think about some steps that were done that were perhaps not completely obvious. The graphing ad was to help with the snake, but what other things did you or Jonathan have to partially guess? – boboquack Dec 10 '16 at 22:13

I'm not convinced this is the final solution (since it does not seem to use all the poster information):

All of the ASCII range (except the space character) has been used across the grids, hinting to us to look at ASCII. The black and white grids are each $7$ by $7$, and $7$ is the number of bits in an ASCII code point, so they could each be read as $7$ characters. Letters of the English alphabet (upper or lower) all have code points greater than $63$ and hence have a $1$ as their most significant bit (the leftmost in standard notation). If we scan the black and white "pixels" as if we were following an overlaid plot of the function from the Introduction to Graphing course poster from its left to its right, we would read the first column of pixels from top to bottom, the second column from bottom to top, and so on, and each resulting $7$ bit long string would start with a black pixel. Treating black as $1$ and white as $0$ and reading each as ASCII we then find ten "function names":

iSPrIMe fOUrsEt iSPowER cOLlatz aLPhNuM sQRFreE oDDnesS dIVFour sIXfACT rBShIft

If we then treat capitals as $1$ and lowercase as $0$ and translate the results to ASCII:

iSPrIMe 0110110 '6'
fOUrsEt 0110010 '2'
iSPowER 0110011 '3'
cOLlatz 0110000 '0'
aLPhNuM 0110101 '5'
sQRFreE 0111001 '9'
oDDnesS 0110001 '1'
dIVFour 0111000 '8'
sIXfACT 0110111 '7'
rBShIft 0110100 '4'

...then we have found all ten decimal digits $[0,9]$ as characters.

• Nice!
I guess this suggests in which order to use the tiles when reading their remaining information. – elias Dec 8 '16 at 17:23 • Correct so far! Now you need to use that information to do something else... – boboquack Dec 8 '16 at 19:26 My observations and guesses, as well as (hopefully) time saving information. The 3-letter characters are the ascii names for certain commands. Here is a list of all the ones I can find and their control shortcuts. All Ascii Special characters Black DC1- XON, with XOFF to pause listings; ":okay to send" - ^Q RS - Record separator, block-mode terminator - ^^ ETX - End of text - ^C ESC - Escape, next character is not echoed - ^[ ETB - End transmission block, not the same as EOT - ^W US - Unit separator - ^_ BEL - Bell, rings the bell… - ^G % - Modulo ENQ - Enquiry, goes with ACK; old HP flow control - ^E NUL - Null Character - ^@ NAK - Negative acknowledge - ^U VT - Vertical tab - ^K EOT - End of transmission, not the same as ETB - ^D ACK - Acknowledge, clears ENQ logon hand - ^F DC3 - XOFF, with XON is TERM=18 flow control - ^S SYN - Synchronous idle - ^V CR - Carriage Return - ^M EM - End of medium, Control-Y interrupt - ^Y DC2 - Device control 2, block-mode flow control - ^R FF - Form Feed, page eject - ^L SOH -Start of heading, = console interrupt - ^A DC4 - Device control 4 -^T DLE - Data link escape - ^P BS - Backspace, works on HP terminals/computers - ^H CAN - Cancel line, MPE echoes !!! - ^X FS - File separator - ^\ STX - Start of text, maintenance mode on HP console - ^B SO - Shift Out, alternate character set - ^N And: White Exclusive Characters LF - Line Feed - ^J SUB - Substitute - ^Z SI - Shift In, resume default character set - ^O Guesses: Assuming that SO and SI do mean to change character set, what is this alternate set? Does the use of SO in black, and SI in white indicate that we should follow ones instructions, blacks, first, then white's? Do we use the letters in the control keys as a guide for substitution? Also, note that there are codes for 4 different input devices. Do they each have their own code or set of instructions? Python Image Code: m+=n means to take m, and increment it by n x**2, means x squared a**3, means a cubed t**-1, means 1/t l[i], means the i-1th item or letter in l, as positions start from zero in python. Might they be instructions? Intro Graphing: Assuming that x is a set of 0 to 7 means for x from 0 to 7, the graph looks like this: with peaks of 7 high. • In the Python code, it is not relevant to solving the puzzle, but it is the i+1th letter or item, not the i-1th. – boboquack Dec 1 '16 at 5:34 For family trees puzzle It may represent the linked list concept, i.e they are announcing that linked list courses are to be starting soon. • No, but good try – boboquack Dec 2 '16 at 19:54 • Oh then it may be trees concept... Is it so?? – Badri Narayanan Dec 10 '16 at 3:48 • It specifically relates to the solving of the puzzle, not just a concept. – boboquack Dec 10 '16 at 5:37 Some observations: Each grid is 7x7 and contains a representation of one ASCII character in each cell, the colouring suggests a binary code, and 7 bits are exactly sufficient to encode ASCII. Perhaps the code is some sort of Viginere variant of XOR cypher keyed by the binary code? • The encoding is straight, if you can work out how… check the other answers and you might find something helpful. – boboquack Dec 4 '16 at 1:44
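Putting the two answers above together, here is a minimal Python sketch of the decoding pipeline they describe. The grid representation (a 7x7 array of booleans, True for a black cell) and all helper names are assumptions made for illustration; the column scan order (down the first column, up the second, and so on) is the one described in the second answer.

```python
def read_keyword(cells):
    """Read a 7x7 tile's colouring as seven 7-bit ASCII characters.

    cells[row][col] is True for a black cell; columns are scanned
    boustrophedon-style (down, then up, then down, ...).
    """
    chars = []
    for col in range(7):
        rows = range(7) if col % 2 == 0 else range(6, -1, -1)
        bits = "".join("1" if cells[r][col] else "0" for r in rows)
        chars.append(chr(int(bits, 2)))
    return "".join(chars)

def keyword_digit(keyword):
    """Treat capitals as 1 and lowercase as 0, then read the 7 bits as ASCII."""
    bits = "".join("1" if c.isupper() else "0" for c in keyword)
    return chr(int(bits, 2))

# A few of the masking predicates, applied to a cell character's ASCII code n:
def oddness(n):
    return n % 2 == 1                                              # odd ASCII code

def divfour(n):
    return n % 4 == 0                                              # code divisible by 4

def sqrfree(n):
    return all(n % (k * k) for k in range(2, int(n ** 0.5) + 1))   # squarefree code

def sixfact(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0) == 6      # exactly 6 divisors

print(keyword_digit("oDDnesS"), keyword_digit("iSPrIMe"))  # -> 1 6
```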
auto_math_text
web
LIFE, short for Laser Inertial Fusion Energy, was a fusion energy effort run at Lawrence Livermore National Laboratory between 2008 and 2013. LIFE aimed to develop the technologies necessary to convert the laser-driven inertial confinement fusion concept being developed in the National Ignition Facility (NIF) into a practical commercial power plant, a concept known generally as inertial fusion energy (IFE). LIFE used the same basic concepts as NIF, but aimed to lower costs using mass-produced fuel elements, simplified maintenance, and diode lasers with higher electrical efficiency. Two designs were considered, operated as either a pure fusion or hybrid fusion-fission system. In the former, the energy generated by the fusion reactions is used directly. In the latter, the neutrons given off by the fusion reactions are used to cause fission reactions in a surrounding blanket of uranium or other nuclear fuel, and those fission events are responsible for most of the energy release. In both cases, conventional steam turbine systems are used to extract the heat and produce electricity.

Construction of NIF was completed in 2009 and it began a lengthy series of run-up tests to bring it to full power. Through 2011 and into 2012, NIF ran the "national ignition campaign" to reach the point at which the fusion reaction becomes self-sustaining, a key goal that is a basic requirement of any practical IFE system. NIF failed in this goal, with fusion performance that was well below ignition levels and differing considerably from predictions. With the problem of ignition unsolved, the LIFE project was canceled in 2013. The LIFE program was criticized throughout its development for being based on physics that had not yet been demonstrated. In one pointed assessment, Robert McCrory, director of the Laboratory for Laser Energetics, stated: "In my opinion, the overpromising and overselling of LIFE did a disservice to Lawrence Livermore Laboratory."[1]

Background

Lawrence Livermore National Laboratory (LLNL) has been a leader in laser-driven inertial confinement fusion (ICF) since the initial concept was developed by LLNL employee John Nuckols in the late 1950s.[2][3] The basic idea was to use a driver to compress a small pellet known as the target that contains the fusion fuel, a mix of deuterium (D) and tritium (T). If the compression reaches high enough values, fusion reactions begin to take place, releasing alpha particles and neutrons. The alphas may impact atoms in the surrounding fuel, heating them to the point where they undergo fusion as well. If the rate of alpha heating is higher than heat losses to the environment, the result is a self-sustaining chain reaction known as ignition.[4][5]

Comparing the driver energy input to the fusion energy output produces a number known as the fusion energy gain factor, labelled Q. A Q value of at least 1 is required for the system to produce net energy. Since some energy is needed to run the reactor, in order for there to be net electrical output, Q has to be at least 3.[6] For commercial operation, Q values much higher than this are needed.[7] For ICF, Qs on the order of 25 to 50 are needed to recoup both the electrical generation losses and the large amount of power used to power the driver.
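To see where the 25 to 50 figure comes from, a rough back-of-the-envelope sketch in Python is given below. The efficiency values are taken from figures quoted later in this article (roughly 18% wall-plug efficiency for diode-pumped lasers and about 45% thermal-to-electric conversion); everything else is illustrative rather than a published LIFE number.

```python
# Rough sketch: how much of a plant's gross electrical output the laser driver
# itself would consume, as a function of the fusion gain Q.
eta_laser = 0.18   # electricity -> laser light on target (diode-pumped design)
eta_th = 0.45      # fusion heat -> electricity (steam cycle)

def recirculating_fraction(Q):
    """Driver electricity divided by gross electricity, per unit of laser energy."""
    gross_electric = eta_th * Q        # fusion output Q units, converted to electricity
    driver_electric = 1.0 / eta_laser  # electricity needed to deliver 1 unit of laser energy
    return driver_electric / gross_electric

for Q in (5, 12, 25, 50):
    f = recirculating_fraction(Q)
    print(f"Q = {Q:2d}: driver needs {f:.0%} of the gross electrical output")
```

With these efficiencies, a Q of about 12 is only engineering break-even, while Q in the 25 to 50 range leaves roughly half to three quarters of the generated electricity available for the grid, which is the sense in which the 25 to 50 requirement should be read.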
In the fall of 1960, theoretical work carried out at LLNL suggested that gains of the required order would be possible with drivers on the order of 1 MJ.[8] At the time, a number of different drivers were considered, but the introduction of the laser later that year provided the first obvious solution with the right combination of features. The desired energies were well beyond the state of the art in laser design, so LLNL began a development program in the mid-1960s to reach these levels.[9] Each increase in energy led to new and unexpected optical phenomena that had to be overcome, but these were largely solved by the mid-1970s. Working in parallel with the laser teams, physicists studying the expected reaction using computer simulations adapted from thermonuclear bomb work developed a program known as LASNEX that suggested Q of 1 could be produced at much lower energy levels, in the kilojoule range, levels that the laser team were now able to deliver.[10][11] From the late-1970s, LLNL developed a series of machines to reach the conditions being predicted by LASNEX and other simulations. With each iteration, the experimental results demonstrated that the simulations were incorrect. The first machine, the Shiva laser of the late 1970s, produced compression on the order of 50 to 100 times, but did not produce fusion reactions anywhere near the expected levels. The problem was traced to the issue of the infrared laser light heating electrons and mixing them in the fuel, and it was suggested that using ultraviolet light would solve the problem. This was addressed on the Nova laser of the 1980s, which was designed with the specific intent of producing ignition. Nova did produce large quantities of fusion, with shots producing as much as 107 neutrons, but failed to reach ignition. This was traced to the growth of Rayleigh–Taylor instabilities, which greatly increased the required driver power.[12] Ultimately all of these problems were considered to be well understood, and a much larger design emerged, NIF. NIF was designed to provide about twice the required driver energy, allowing some margin of error. NIF's design was finalized in 1994, with construction to be completed by 2002. Construction began in 1997 but took over a decade to complete, with major construction being declared complete in 2009.[13] LIFE Throughout the development of the ICF concept at LLNL and elsewhere, several small efforts had been made to consider the design of a commercial power plant based on the ICF concept. Examples include SOLASE-H[14] and HYLIFE-II.[15] As NIF was reaching completion in 2008, with the various concerns considered solved, LLNL began a more serious IFE development effort, LIFE.[16] Fusion–fission hybrid When the LIFE project was first proposed, it focused on the nuclear fusion–fission hybrid concept, which uses the fast neutrons from the fusion reactions to induce fission in fertile nuclear materials.[17] The hybrid concept was designed to generate power from both fertile and fissile nuclear fuel and to burn nuclear waste.[18][19][20] The fuel blanket was designed to use TRISO-based fuel cooled by a molten salt made from a mixture of lithium fluoride (LiF) and beryllium fluoride (BeF2).[21] Conventional fission power plants rely on the chain reaction caused when fission events release thermal neutrons that cause further fission events. Each fission event in U-235 releases two or three neutrons with about 2 MeV of kinetic energy. 
By careful arrangement and the use of various absorber materials, designers can balance the system so one of those neutrons causes another fission event while the other one or two are lost. This balance is known as criticality. Natural uranium is a mix of three isotopes; mainly U-238, with some U-235, and trace amounts of U-234. The neutrons released in the fission of either of the main isotopes will cause fission in U-235, but not in U-238, which requires higher energies around 5 MeV. There is not enough U-235 in natural uranium to reach criticality. Commercial light-water nuclear reactors, the most prevalent power reactors in the world, use nuclear fuel containing uranium enriched to 3 to 5% U-235 while the leftover is U-238.[22][23] Each fusion event in the D-T fusion reactor gives off an alpha particle and a fast neutron with around 14 MeV of kinetic energy. This is enough energy to cause fission in U-238, and many other transuranic elements as well. This reaction is used in H-bombs to increase the yield of the fusion section by wrapping it in a layer of depleted uranium, which undergoes rapid fission when hit by the neutrons from the fusion bomb inside. The same basic concept can also be used with a fusion reactor like LIFE, using its neutrons to cause fission in a blanket of fission fuel. Unlike a fission reactor, which burns out its fuel once the U-235 drops below a certain threshold value,[a] these fission–fusion hybrid reactors can continue producing power from the fission fuel as long as the fusion reactor continues to provide neutrons. As the neutrons have high energy, they can potentially cause multiple fission events, leading to the reactor as a whole producing more energy, a concept known as energy multiplication.[25] Even leftover nuclear fuel taken from conventional nuclear reactors will burn in this fashion. This is potentially attractive because this burns off many of the long lived radioisotopes in the process, producing waste that is only mildly radioactive and lacking most long-lived components.[17] In most fusion energy designs, fusion neutrons react with a blanket of lithium to breed new tritium for fuel. A major issue with the fission–fusion design is that the neutrons causing fission are no longer available for tritium breeding. While the fission reactions release additional neutrons, these do not have enough energy to complete the breeding reaction with Li-7, which makes up more than 92% of natural lithium. These lower energy neutrons will cause breeding in Li-6, which could be concentrated from the natural lithium ore. However, the Li-6 reaction only produces one tritium per neutron captured, and more than one T per neutron is needed to make up for natural decay and other losses.[26] Using Li-6, neutrons from the fission would make up for the losses, but only at the cost of removing them from causing other fission reactions, lowering the reactor power output. The designer has to choose which is more important; burning up the fuel through fusion neutrons, or providing power through self-induced fission events.[27] The economics of fission–fusion designs have always been questionable. The same basic effect can be created by replacing the central fusion reactor with a specially designed fission reactor, and using the surplus neutrons from the fission to breed fuel in the blanket. 
These fast breeder reactors have proven uneconomical in practice, and the greater expense of the fusion systems in the fission–fusion hybrid has always suggested they would be uneconomical unless built in very large units.[28] Pure IFE The National Ignition Facility's target chamber's multi-part construction would also be used in LIFE. Several chambers would be used in a production power plant, allowing them to be swapped out for maintenance. The LIFE concept stopped working along fusion-fission lines around 2009. Following consultations with their partners in the utility industry, the project was redirected toward a pure fusion design with a net electrical output around 1 gigawatt.[29] Inertial confinement fusion is one of two major lines of fusion power development, the other being magnetic confinement fusion (MCF), notably the tokamak concept which is being built in a major experimental system known as ITER. Magnetic confinement is widely considered to be the superior approach, and has seen significantly greater development activity over the decades. However, there are serious concerns that the MCF approach of ITER cannot ever become economically practical.[30] One of the cost concerns for MCF designs like ITER is that the reactor materials are subject to the intense neutron flux created by the fusion reactions. When high-energy neutrons impact materials they displace the atoms in the structure leading to a problem known as neutron embrittlement that degrades the structural integrity of the material. This is a problem for fission reactors as well, but the neutron flux and energy in a tokamak is greater than most fission designs. In most MFE designs, the reactor is constructed in layers, with a toroidal inner vacuum chamber, or "first wall", then the lithium blanket, and finally the superconducting magnets that produce the field that confines the plasma. Neutrons stopping in the blanket are desirable, but those that stop in the first wall or magnets degrade them. Disassembling a toroidal stack of elements would be a time-consuming process that would lead to poor capacity factor, which has a significant impact on the economics of the system. Reducing this effect requires the use of exotic materials which have not yet been developed.[31] As a natural side-effect of the size of the fuel elements and their resulting explosions, ICF designs use a very large reaction chamber many meters across. This lowers the neutron flux on any particular part of the chamber wall through the inverse-square law. Additionally, there are no magnets or other complex systems near or inside the reactor, and the laser is isolated on the far side of long optical paths. The far side of the chamber is empty, allowing the blanket to be placed there and easily maintained. Although the reaction chamber walls and final optics would eventually embrittle and require replacement, the chamber is essentially a large steel ball of relatively simple multi-piece construction that could be replaced without too much effort. The reaction chamber is, on the whole, dramatically simpler than those in magnetic fusion concepts, and the LIFE designs proposed building several and quickly moving them in and out of production.[32] IFE limitations NIF's huge flashlamps are both inefficient and impractical. LIFE explored solutions to replace these lamps with smaller and much more efficient LED lasers. NIF's laser uses a system of large flashtubes (like those in a photography flashlamp) to optically pump a large number of glass plates. 
Once the plates are flashed and have settled into a population inversion, a small signal from a separate laser is fed into the optical lines, stimulating the emission in the plates. The plates then dump their stored energy into the growing beam, amplifying it billions of times.[33] The process is extremely inefficient in energy terms; NIF feeds the flashtubes over 400 MJ of energy which produces 1.8 MJ of ultraviolet (UV) light. Due to limitations of the target chamber, NIF is only able to handle fusion outputs up to about 50 MJ, although shots would generally be about half of that. Accounting for losses in generation, perhaps 20 MJ of electrical energy might be extracted at the maximum, accounting for less than ​1⁄20 of the input energy.[33] Another problem with the NIF lasers is that the flashtubes create a significant amount of heat, which warms the laser glass enough to cause it to deform. This requires a lengthy cooling-off period between shots, on the order of 12 hours. In practice, NIF manages a shot rate of less than one shot per day.[34] To be useful as a power plant, about a dozen shots would have to take place every second, well beyond the capabilities of the NIF lasers. When originally conceived by Nuckols, laser-driven inertial fusion confinement was expected to require lasers of a few hundred kilojoules and use fuel droplets created by a perfume mister arrangement.[35] LLNLs research since that time has demonstrated that such an arrangement cannot work, and requires machined assemblies for each shot. To be economically useful, an IFE machine would need to use fuel assemblies that cost pennies. Although LLNL does not release prices for their own targets, the similar system at the Laboratory for Laser Energetics at the University of Rochester makes targets for about $1 million each.[36] It is suggested that NIF's targets cost more than$10,000.[37][38] Mercury LLNL had begun exploring different solutions to the laser problem while the system was first being described. In 1996 they built a small testbed system known as the Mercury laser that replaced the flashtubes with laser diodes.[39] One advantage of this design was that the diodes created light around the same frequency as the laser glass' output,[40] as compared to the white light flashtubes where most of the energy in the flash was wasted as it was not near the active frequency of the laser glass.[41] This change increased the energy efficiency to about 10%, a dramatic improvement.[39] For any given amount of light energy created, the diode lasers give off about ​1⁄3 as much heat as a flashtube. Less heat, combined with active cooling in the form of helium blown between the diodes and the laser glass layers, eliminated the warming of the glass and allows Mercury to run continually.[40] In 2008, Mercury was able to fire 10 times a second at 50 joules per shot for hours at a time.[39] Several other projects running in parallel with Mercury explored various cooling methods and concepts allowing many laser diodes to be packed into a very small space. These eventually produced a system with 100 kW of laser energy from a box about 50 centimetres (20 in) long, known as a diode array. In a LIFE design, these arrays would replace the less dense diode packaging of the Mercury design.[39] Beam-in-a-box LIFE was essentially a combination of the Mercury concepts and new physical arrangements to greatly reduce the volume of the NIF while making it much easier to build and maintain. 
Whereas an NIF beamline for one of its 192 lasers is over 100 metres (330 ft) long, LIFE was based on a design about 10.5 metres (34 ft) long that contained everything from the power supplies to frequency conversion optics. Each module was completely independent, unlike NIF which is fed from a central signal from the Master Oscillator, allowing the units to be individually removed and replaced while the system as a whole continued operation.[42] Each driver cell in the LIFE baseline design contained two of the high-density diode arrays arranged on either side of a large slab of laser glass. The arrays were provided cooling via hook-up pipes at either end of the module. The initial laser pulse was provided by a preamplifier module similar to the one from the NIF, the output of which was switched into the main beamline via a mirror and Pockel's cell optical switch. To maximize the energy deposited into the beam from the laser glass, optical switches were used to send the beam to mirrors to reflect the light through the glass four times, in a fashion similar to NIF.[40] Finally, focussing and optical cleanup was provided by optics on either side of the glass, before the beam exited the system through a frequency converter at one end.[42] The small size and independence of the laser modules allowed the huge NIF building to be dispensed with. Instead, the modules were arranged in groups surrounding the target chamber in a compact arrangement. In baseline designs, the modules were stacked in 2-wide by 8-high groups in two rings above and below the target chamber, shining their light through small holes drilled into the chamber to protect them from the neutron flux coming back out.[43] The ultimate goal was to produce a system that could be shipped in a conventional semi-trailer truck to the power plant, providing laser energy with 18% end-to-end efficiency, 15 times that of the NIF system. This reduces the required fusion gains into the 25 to 50 area, within the predicted values for NIF. The consensus was that this "beam-in-a-box" system could be built for 3 cents per Watt of laser output, and that would reduce to 0.7 cents/W in sustained production. This would mean that a complete LIFE plant would require about 600 million worth of diodes alone, significant, but within the realm of economic possibility.[42] Inexpensive targets NIF's targets (centered, in the holder) are expensive machined assemblies that cost thousands of dollars each. LIFE worked with industry partners to reduce this to under a dollar. Targets for NIF are extremely expensive. Each one consists of a small open-ended metal cylinder with transparent double-pane windows sealing each end. In order to efficiently convert the driver laser's light to the x-rays that drive the compression, the cylinder has to be coated in gold or other heavy metals. Inside, suspended on fine plastic wires, is a hollow plastic sphere containing the fuel. In order to provide symmetrical implosion, the metal cylinder and plastic sphere have extremely high machining tolerances. The fuel, normally a gas at room temperature, is deposited inside the sphere and then cryogenically frozen until it sticks to the inside of the sphere. It is then smoothed by slowly warming it with an infrared laser to form a 100 µm smooth layer on the inside of the pellet. 
Each target costs tens of thousands of dollars.[37] To address this concern, a considerable amount of LIFE's effort was put into the development of simplified target designs and automated construction that would lower their cost. Working with General Atomics, the LIFE team developed a concept using on-site fuel factories that would mass-produce pellets at a rate of about a million a day. It was expected that this would reduce their price to about 25 cents per target,[44] although other references suggest the target price was closer to 50 cents, and LLNL's own estimates range from 20 to 30 cents.[45]

One less obvious advantage of the LIFE concept is that the amount of tritium required to start the system up is greatly reduced compared to MFE concepts. In MFE, a relatively large amount of fuel is prepared and put into the reactor, requiring much of the world's entire civilian tritium supply just for startup. LIFE, by virtue of the tiny amount of fuel in any one pellet, can begin operations with much less tritium, on the order of 1/10 as much.[32]

Overall design

[Image caption: LIFE.1/MEP's fusion system. The lasers are the grey boxes arranged in groups at the top and bottom of the containment building (the lower ones are just visible). Their light, in blue, is bounced through the optical paths into the target chamber in the center. The machinery on the left circulates the liquid lithium or FLiBe, which removes heat from the chamber to cool it, provides heat to the generators, and extracts tritium for fuel.]

The early fusion-fission designs were not well developed and only schematic outlines of the concept were shown. These systems looked like a scaled-down version of NIF, with beamlines about 100 metres (330 ft) long on either side of a target chamber and power generation area. The laser produced 1.4 MJ of UV light 13 times a second. The fusion took place in a 2.5-metre (8 ft 2 in) target chamber that was surrounded by 40 short tons (36,000 kg) of unenriched fission fuel, or alternately about 7 short tons (6,400 kg) of Pu or highly enriched uranium from weapons. The fusion system was expected to produce a Q on the order of 25 to 30, resulting in 350 to 500 MW of fusion power. The fission processes triggered by the fusion would add an additional energy gain of 4 to 10 times, resulting in a total thermal output between 2000 and 5000 MWth. Using high-efficiency thermal-to-electric conversion systems like Rankine cycle designs in combination with demonstrated supercritical steam generators would allow about half of the thermal output to be turned into electricity.[46][47]

By 2012, the baseline design of the pure fusion concept, known as the Market Entry Plant (MEP),[b] had stabilized. This was a self-contained design with the entire fusion section packaged into a cylindrical concrete building not unlike a fission reactor confinement building, although larger, at 100 metres (330 ft) in diameter.[49] The central building was flanked by smaller rectangular buildings on either side, one containing the turbines and power handling systems, the other the tritium plant. A third building, either attached to the plant or behind it depending on the diagram, was used for maintenance.[50] Inside the central fusion building, the beam-in-a-box lasers were arranged in two rings, one above and one below the target chamber. A total of 384 lasers would provide 2.2 MJ of UV light at a 0.351-micrometer wavelength,[40] producing a Q of 21.
A light-gas gun was used to fire 15 targets a second into the target chamber.[51] With each shot, the temperature of the target chamber's inner wall is raised from 600 °C (1,112 °F) to 800 °C (1,470 °F).[52] The target chamber is a two-wall structure filled with liquid lithium or a lithium alloy between the walls.[53] The lithium captures neutrons from the reactions to breed tritium, and also acts as the primary coolant loop.[54] The chamber is filled with xenon gas that would slow the ions from the reaction as well as protect the inner wall, or first wall, from the massive x-ray flux.[50] Because the chamber, unlike a fission core, is not highly pressurized, it does not have to be built as a single sphere. Instead, the LIFE chamber is built from eight identical sections that include built-in connections to the cooling loop. They are shipped to the plant and bolted together on two supports, and then surrounded by a tube-based space frame.[55] To deal with embrittlement, the entire target chamber was designed to be easily rolled out of the center of the building on rails to the maintenance building, where it could be rebuilt. The chamber was expected to last four years, and be replaced in one month. The optical system is decoupled from the chamber, which isolates it from vibrations during operation and means that the beamlines themselves do not have to be realigned after chamber replacement.[50] The plant had a peak generation capability, or nameplate capacity, of about 400 MWe, with design features to allow expansion to as much as 1000 MWe.[56]

Economics

LIFE plant parameters (MEP: prototype; LIFE.2: first-generation commercial plant)[47]

| Parameter | MEP | LIFE.2 |
| --- | --- | --- |
| Laser energy on target, MJ | 2.2 | 2.2 |
| Target yield, MJ | 132 | 132 |
| Pulse repetition rate, Hz | 8.3 | 16.7 |
| Fusion power, MW | 1100 | 2200 |
| Thermal power, MWt | 1320 | 2640 |
| Chamber material | RAFMS[c] | ODS |
| First wall radius, m | 6.0 | 6.0 |
| Neutron wall load, MW/m2 | 1.8 | 3.6 |
| Surface heat load, MW/m2 | 0.63 | 1.26 |
| Tritium breeding ratio | 1.05 | 1.05 |
| Primary coolant | Li | Li |
| Intermediate coolant | Molten salt | Molten salt |
| Chamber outlet temperature, °C | 530 | 575 |
| Conversion efficiency, % | 45 | 47 |
| Gross power, MWe | 595 | 1217 |
| Laser electrical power input, MWe | 124 | 248 |
| In-plant power load, MWe | 34 | 64 |
| Net electric power, MWe | 437 | 905 |

The levelized cost of electricity (LCoE) can be calculated by dividing the total cost to build and operate a power-generating system over its lifetime by the total amount of electricity shipped to the grid during that period. The amount of money is essentially a combination of the capital expense (CAPEX) of the plant and the interest payments on that CAPEX, plus the discounted cost of the fuel, the maintenance needed to keep it running and its dismantling, i.e. the discounted operational expenses, or OPEX. The amount of electricity is normally calculated by considering the peak power the plant could produce and then adjusting that by the capacity factor (CF) to account for downtime due to maintenance or deliberate throttling. As a quick calculation, one can ignore inflation, opportunity costs and minor operational expenses to develop a figure of merit for the cost of electricity.[57] MEP was not intended to be a production design, and would be able to export only small amounts of electricity. It would, however, serve as the basis for the first production model, LIFE.2.
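The next paragraph runs through this figure-of-merit arithmetic in prose for LIFE.2. As a hedged illustration only, the same calculation can be scripted; the variable names are mine, and the amortised-loan treatment of interest is an assumption chosen because it reproduces the roughly $5 billion financing cost quoted below. Small differences from the quoted 9.6 cents/kWh come from rounding.

```python
# Back-of-envelope LIFE.2 cost of electricity, using only figures quoted
# in the text: 1 GWe nameplate, 90% capacity factor, 15 targets/s at
# 50 cents each, $6.4B CAPEX financed for 20 years at a 6.5% rate.
# It ignores inflation, O&M and decommissioning, exactly as the text does.

nameplate_kw = 1_000_000
capacity     = 0.90
years        = 20
rate         = 0.065
capex        = 6.4e9          # dollars
shots_per_s  = 15
target_cost  = 0.50           # dollars per pellet

kwh_per_year   = nameplate_kw * 8760 * capacity          # ~8 billion kWh
pellets_per_yr = shots_per_s * 86400 * 365 * capacity    # ~425 million
fuel_per_year  = pellets_per_yr * target_cost            # ~$210 million

# Total interest on a 20-year amortised loan -- roughly the "$5 billion"
# of financing the article adds on top of the $6.4B CAPEX.
annual_payment = capex * rate / (1 - (1 + rate) ** -years)
interest       = annual_payment * years - capex

total_cost = capex + interest + fuel_per_year * years
lcoe_cents = 100 * total_cost / (kwh_per_year * years)

print(f"fuel bill : ${fuel_per_year/1e6:.0f} million per year")
print(f"financing : ${interest/1e9:.1f} billion over {years} years")
print(f"rough LCOE: {lcoe_cents:.1f} cents/kWh")    # about 10 cents/kWh
```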
LIFE.2 would produce 2.2 GW of fusion power and convert that to 1 GW of electricity at 48% efficiency.[51] Over a year, LIFE would produce 365 days × 24 hours × 0.9 capacity factor × 1,000,000 kW nameplate rating = 8 billion kWh. In order to generate that power, the system will have to burn 365 × 24 × 60 minutes × 60 seconds × 15 pellets per second × 0.9 capacity = 425 million fuel pellets. If the pellets cost the suggested price of 50 cents each, that is over $200 million a year to fuel the plant. The average rate for wholesale electricity in the US as of 2015 is around 5 cents/kWh,[58] so this power has a commercial value of about $212 million, suggesting that LIFE.2 would just barely cover, on average, its own fuel costs.[d] CAPEX for the plant is estimated at $6.4 billion, so financing the plant over a 20-year period adds another $5 billion assuming the 6.5% unsecured rate. Considering CAPEX and fuel alone, the total cost of the plant is 6.4 + 5 + 4 (20 years of fuel) = $15.4 billion. Dividing the total cost by the energy produced over the same period gives a rough estimate of the cost of electricity for a 20-year lifetime operation: $15.4 billion / 160 billion kWh = 9.6 cents/kWh. A 40-year operating lifetime would lead to a cost of electricity of 4.8 cents/kWh. LLNL calculated the LCOE of LIFE.2 at 9.1 cents using the discounted cash flow methodology described in the 2009 MIT report "The Future of Nuclear Energy".[51][60] Using either value, LIFE.2 would be unable to compete with modern renewable energy sources, which are well below 5 cents/kWh as of 2018.[61]

LLNL projected that further development after widespread commercial deployment might lead to further technology improvements and cost reductions, and proposed a LIFE.3 design of about $6.3 billion CAPEX and 1.6 GW nameplate, for a price per watt of about $4.2/W. This leads to a projected LCOE of 5.5 cents/kWh,[51] which is competitive with offshore wind as of 2018,[62] but unlikely to be so in 2040 when LIFE.3 designs would start construction.[e] LIFE plants would be wholesale sellers, competing against a baseload rate of about 5.3 cents/kWh as of 2015.[58] The steam turbine section of a power plant, the turbine hall, generally costs about $1/W, and the electrical equipment to feed that power to the grid is about another $1/W.[64] To reach the projected total CAPEX quoted in LIFE documents, this implies that the entire nuclear island has to cost around $4/W for LIFE.2, and just over $2/W for LIFE.3. Modern nuclear plants, benefiting from decades of commercial experience and continuous design work, cost just under $8/W, with approximately half of that in the nuclear island. LLNL's estimates require LIFE.3 to be built in 2040 for about half the cost of a fission plant today.[65]

End of LIFE

NIF construction was completed in 2009 and the lab began a long calibration and setup period to bring the laser to its full capacity. The plant reached its design capacity of 1.8 MJ of UV light in 2012.[66] During this period, NIF began running a staged program known as the National Ignition Campaign, with the goal of reaching ignition by 30 September 2012. Ultimately, the campaign failed as unexpected performance problems arose that had not been predicted in the simulations.
By the end of 2012 the system was producing best-case shots that were still ​1⁄10 of the pressures needed to achieve ignition.[67] In the years since, NIF has run a small number of experiments with the explicit aim of improving that number, but as of 2015 the best result is still ​1⁄3 away from the required densities, and the method used to achieve those numbers may not be suitable for closing that gap and reaching ignition. It is expected that a number of years of additional work are required before ignition can be achieved, if ever.[68] During a progress review after the end of the Campaign, a National Academy of Sciences review board stated that "The appropriate time for the establishment of a national, coordinated, broad-based inertial fusion energy program within DOE is when ignition is achieved."[69] They noted that "the panel assesses that ignition using laser indirect drive is not likely in the next several years."[70] The LIFE effort was quietly cancelled in early 2013.[71] LLNL's acting director, Bret Knapp, commented on the issue stating that "The focus of our inertial confinement fusion efforts is on understanding ignition on NIF rather than on the LIFE concept. Until more progress is made on ignition, we will direct our efforts on resolving the remaining fundamental scientific challenges to achieving fusion ignition."[1] Notes Or, more typically, when the products of previous fission events "poison" the ongoing reaction by capturing neutrons.[24] Referred to as LIFE.1 in other documents.[48] RAFMS stands for Reduced Activation Ferritic/Martensitic Steel. Wholesale prices have dropped since 2015, as of 2018, the average cost is closer to 3 cents/kWh, which means LIFE.2 would lose money even at the cheapest possible target prices.[59] LCoE on wind turbines declined (improved) by 58% between 2009 and 2014, to just over 5.5 cents/kWh.[63] References Citations Kramer, David (April 2014). "Livermore Ends Life". Physics Today. 67 (4): 26–27. Bibcode:2014PhT....67R..26K. doi:10.1063/PT.3.2344. Nuckolls 1998, pp. 1–2. Nuckolls, John; Wood, Lowell; Thiessen, Albert; Zimmerman, George (1972). "Laser Compression of Matter to Super-High Densities: Thermonuclear (CTR) Applications". Nature. 239 (5368): 139–142. Bibcode:1972Natur.239..139N. doi:10.1038/239139a0. S2CID 45684425. "How NIF works". Lawrence Livermore National Laboratory. Peterson, Per F. (23 September 1998). "Inertial Fusion Energy: A Tutorial on the Technology and Economics". Archived from the original on 2008-12-21. Retrieved 2013-10-08. Bethe 1979, p. 45. Feresin, Emiliano (30 April 2010). "Fusion reactor aims to rival ITER". Nature. doi:10.1038/news.2010.214. Nuckolls 1998, p. 4. Nuckolls 1998, Figure 4. Zimmerman, G (6 October 1977). The LASNEX Code for Inertial Confinement Fusion (Technical report). Lawrence Livermore Laboratory. Lindl 1993, Figure 5. Lindl 1993, Figure 8. Parker, Ann (September 2002). "Enpowering Light: Historic Accomplishments in Laser Research". Science & Technology Review. SOLASE-H, A Laser Fusion Hybrid Study (PDF) (Technical report). Fusion Technology Institute, University of Wisconsin. May 1979. Moir, Ralph (1992). "HYLIFE-II Inertial Confinement Fusion Power Plant Design" (PDF). Particle Accelerators: 467–480. LIFE. Bethe 1979, p. 44. Kramer, Kevin J.; Latkowski, Jeffery F.; Abbott, Ryan P.; Boyd, John K.; Powers, Jeffrey J.; Seifried, Jeffrey E. (2009). "Neutron Transport and Nuclear Burnup Analysis for the Laser Inertial Confinement Fusion-Fission Energy (LIFE) Engine" (PDF). 
Fusion Science and Technology. 56 (2): 625–631. doi:10.13182/FST18-8132. ISSN 1536-1055. S2CID 101009479. Moses, Edward I.; Diaz de la Rubia, Tomas; Storm, Erik; Latkowski, Jeffery F.; Farmer, Joseph C.; Abbott, Ryan P.; Kramer, Kevin J.; Peterson, Per F.; Shaw, Henry F. (2009). "A Sustainable Nuclear Fuel Cycle Based on Laser Inertial Fusion Energy" (PDF). Fusion Science and Technology. 56 (2): 547–565. doi:10.13182/FST09-34. ISSN 1536-1055. S2CID 19428343. Kramer, Kevin James (2010). Laser inertial fusion-based energy: Neutronic design aspects of a hybrid fusion-fission nuclear energy system (PDF). Ph.D. Thesis (Report). Kramer, Kevin J.; Fratoni, Massimiliano; Latkowski, Jeffery F.; Abbott, Ryan P.; Anklam, Thomas M.; Beckett, Elizabeth M.; Bayramian, Andy J.; DeMuth, James A.; Deri, Robert J. (2011). "Fusion-Fission Blanket Options for the LIFE Engine" (PDF). Fusion Science and Technology. 60 (1): 72–77. doi:10.13182/FST10-295. ISSN 1536-1055. S2CID 55581271. Brennen 2005, p. 16. Brennen 2005, p. 19. "Fission Product Poisoning" (PDF), Nuclear Theory, Course 227, July 1979 Principles of Fusion Energy. Allied Publishers. 2002. p. 257. Morrow, D. (November 2011). Tritium (PDF) (Technical report). JASON Panel. Bethe 1979, p. 46. Tenney, F.; et al. (November 1978). A Systems Study of Tokamak Fusion–Fission Reactors (PDF) (Technical report). Princeton Plasma Physics Laboratory. pp. 336–337. Dunne 2010, p. 2. Revkin, Andrew (18 October 2012). "With Tight Research Budgets, Is There Room for the Eternal Promise of Fusion?". The New York Times. Retrieved 1 May 2017. Bloom, Everett (1998). "The challenge of developing structural materials for fusion power systems" (PDF). Journal of Nuclear Materials. 258-263: 7–17. Bibcode:1998JNuM..258....7B. doi:10.1016/s0022-3115(98)00352-3. "Why LIFE: Advantages of the LIFE Approach". Lawrence Livermore National Laboratory. Archived from the original on 6 May 2016. "How NIF works". National Ignition Facility & Photon Science. "Plans to Increase NIF's Shot Rate Capability Described". Photons & Fusion Newsletter. March 2014. Nuckolls 1998, p. 5. Moyer, Michael (March 2010). "Fusion's False Dawn" . Scientific American. p. 57. Courtland 2013. Sutton 2011. Mercury. Ebbers 2009. Laser. Bayramian 2012. Economic. "What is LIFE?". Lawrence Livermore National Laboratory. Archived from the original on 2015-04-04. Dunne 2010, p. 8. Moses 2009, Figure 1. Meier, W. R.; Dunne, A. M.; Kramer, K. J.; Reyes, S.; Anklam, T. M. (2014). "Fusion technology aspects of laser inertial fusion energy (LIFE)". Fusion Engineering and Design. Proceedings of the 11th International Symposium on Fusion Nuclear Technology-11 (ISFNT-11) Barcelona, Spain, 15–20 September 2013. 89 (9–10): 2489–2492. doi:10.1016/j.fusengdes.2013.12.021. NSF 2013, p. 58. Dunne 2010, p. 3. Dunne 2010, p. 5. Anklam 2010, p. 5. Dunne 2010, p. 4. Latkowski, Jeffery F. (2011-07-01). "Chamber Design for the Laser Inertial Fusion Energy (LIFE) Engine". Fusion Science and Technology. 60 (1): 54–60. doi:10.13182/fst10-318. S2CID 55069880. Reyes, S.; Anklam, T.; Babineau, D.; Becnel, J.; Davis, R.; Dunne, M.; Farmer, J.; Flowers, D.; Kramer, K. (2013). "LIFE Tritium Processing: A Sustainable Solution for Closing the Fusion Fuel Cycle" (PDF). Fusion Science and Technology. 64 (2): 187–193. doi:10.13182/FST12-529. ISSN 1536-1055. S2CID 121195479. "LIFE Design: Fusion System". Lawrence Livermore National Laboratory. Archived from the original on 22 May 2016. Dunne 2010, p. 6. "Simple Levelized Cost of Energy Calculation". 
NREL. "Wholesale Electricity and Natural Gas Market Data". Energy Information Administration. 19 March 2015. "Electricity Monthly Update". EIA. November 2018. The Future of nuclear power. Massachusetts Institute of Technology. 2003. ISBN 978-0-615-12420-9. OCLC 803925974. Lazard’s Levelized Cost of Energy Analysis—Version 12.0 (PDF) (Technical report). Lazard. October 2018. Lazard 2014, p. 2. Lazard 2014, p. 9. The World Nuclear Supply Chain: Outlook 2035 (PDF) (Technical report). World Nuclear Association. 2016. p. 36. Lazard 2014, p. 13. Crandall 2012, p. 1. Crandall 2012, p. 3. Crandall 2012, p. 2. NSF 2013, p. 168. NSF 2013, p. 212. Levedahl, Kirk (June 2013). "National Ignition Campaign Closure and the Path Forward for Ignition" (PDF). Stockpile Stewardship Quarterly: 4–5. Archived from the original (PDF) on 6 January 2017. Bibliography Anklam, T.; Simon, A. J.; Powers, S.; Meier, W. R. (2 December 2010). "LIFE: The Case for Early Commercialization of Fusion Energy" (PDF). Proceedings of the Nineteenth Topical Meeting on the Technology of Fusion Energy. Archived from the original (PDF) on 4 September 2015. Retrieved 2 April 2015. Bayramian, Andy (12 September 2012). "Progress towards a Compact Laser Driver for Laser Inertial Fusion Energy" (PDF). High Energy Class Diode Pumped Solid State Lasers Workshop. Bethe, Hans (May 1979). "The Fusion Hybrid" (PDF). Physics Today. 32 (5): 44–51. Bibcode:1979PhT....32e..44B. doi:10.1063/1.2995553. Brennen, Christopher (2005). An Introduction to Nuclear Power Generation (PDF). Dankat Publishing.[permanent dead link] Crandall, David (27 December 2012). Final Review of the National Ignition Campaign (PDF) (Technical report). Department of Energy. Courtland, Rachel (27 March 2013). "Laser Fusion's Brightest Hope". IEEE Spectrum. Dunne, Mike (9 November 2010). "Timely Delivery of Laser Inertial Fusion Energy" (PDF). ANS 19th Topical Meeting on the Technology of Fusion Energy. Ebbers, Chris; Caird, John; Moses, Edward (1 March 2009). "The Mercury laser moves toward practical laser fusion". Laser Focus World. Lindl, John (December 1993). "The Edward Teller Medal Lecture: The Evolution Toward Indirect Drive and Two Decades of Progress Toward ICF Ignition and Burn" (PDF). 11th International Workshop on Laser Interaction and Related Plasma Phenomena. Moses, Edward (August 2009). "A Sustainable Nuclear Fuel Cycle Based on Laser Inertial Fusion Energy" (PDF). Eighteenth Topical Meeting on the Technology of Fusion Energy. pp. 547–565. Nuckolls, John (12 June 1998). Early Steps Toward Inertial Fusion Energy (IFE) (Technical report). Lawrence Livermore National Laboratory. Sutton, Don (28 November 2011). "Fusion energy plant already on the drawing board for 2030". Canadian Business. An Assessment of the Prospects for Inertial Fusion Energy. National Academies Press. July 2013. ISBN 978-0-309-27224-7. "LIFE". Lawrence Livermore National Laboratory. Archived from the original on 20 May 2012. "LIFE Design: Laser System". Lawrence Livermore National Laboratory. Archived from the original on 24 July 2012. "LIFE Design: Economic Benefits". Lawrence Livermore National Laboratory. Archived from the original on 24 July 2012. "Mercury: A Diode-Pumped Solid-State Laser". National Ignition Facility & Photon Science. Lazard's Levelized Cost of Energy Analysis — Version 8.0 (PDF). Lazard (Technical report). September 2014. 
auto_math_text
web
## I. Abstract

Injecting faults into designs is a way to qualify a verification environment. To improve the performance of the qualifying process, we need to remove identical faults. The problem provides some faulty design cases; the contestants must identify all sets of identical faults. Judging is based on correctness, execution time and memory usage. The topic mainly belongs to gate-level design.

## II. Problem Description

For a given design, called the fault-free design, we can inject one or more faults into it to obtain a faulty design. In this contest, we inject only one permanent fault into a fault-free design at a time. Normally, some output port values will be affected by an injected fault. If the verification environment can detect this output difference, it is a good verification environment. However, some injected faults cause the same output difference. That is, no matter what the input port values are, the output differences caused by these faults are the same. Because the output values of the faulty designs with different faults are identical whenever their inputs are the same, we call these faults identical faults, and only one of them is instrumental in qualifying a verification environment.

### Example:

The design case is the CRC logic, crc.v, adapted from the eth_crc module of the Ethernet MAC 10/100 Mbps design on the OpenCores website. We take the following statement of this design as an example.

$$next\_crc[1] = enable \land (data[1] \oplus data[0] \oplus curr\_crc[28] \oplus curr\_crc[29])$$

Assume we inject 2 faults. One replaces the driver operator of signal 502 by NXOR, called Fault 1. The other replaces the driver operator of signal 503 by NXOR, called Fault 2. The faulty designs of Fault 1 and Fault 2 are the same. Therefore, Fault 1 and Fault 2 are identical to each other, so we call them identical faults and place them in one identical fault group.

### Program Requirements:

The contestant's program needs to load and parse the given gate-level design to get the design knowledge and then generate the faulty design for the selected fault in the fault description file. After the contestant's algorithm finds an identical fault group, the program also needs to verify that each pair of faults found in the group is indeed identical. Finally, output all found identical fault groups in the required output format.

### Input:

The gate-level design file begins with some comments, each on a single line starting with '#'. These comments give a brief description of the design. After these comments, the design is described by the following 4 parts: input ports, output ports, D-type flip-flops (called "DFF"), and then gates. To simplify the format, only "BUFF" and "NOT" have one operand; all other gates always have 2 operands. For each input port, we use "INPUT($signal_id)" to indicate it, where "$signal_id" is the signal identifier of the input port. Every signal identifier is an integer. For each output port, we use "OUTPUT($signal_id)" to indicate it, where "$signal_id" is the signal identifier of the output port. For each D-type flip-flop, we use "$signal_id1 = DFF($clock, $reset, $enable, $signal_id2)" to indicate it, where "$signal_id1" is the output signal identifier of this D-type flip-flop and the fourth parameter, "$signal_id2", is the input signal identifier of this D-type flip-flop. The first one, "$clock", is the clock signal of the DFF.
The second one, "$reset", is the reset signal of the DFF. The third one, "$enable", is the enable signal of the DFF. For each gate, we use the following format, where "$signal_id1" is the output signal of the gate and "$signal_id2" and "$signal_id3" are the inputs of the gate.

• $signal_id1 = BUFF($signal_id2)
• $signal_id1 = NOT($signal_id2)
• $signal_id1 = AND($signal_id2, $signal_id3)
• $signal_id1 = NAND($signal_id2, $signal_id3)
• $signal_id1 = OR($signal_id2, $signal_id3)
• $signal_id1 = NOR($signal_id2, $signal_id3)
• $signal_id1 = XOR($signal_id2, $signal_id3)
• $signal_id1 = NXOR($signal_id2, $signal_id3)

Take the above-mentioned statement as an example. Its related gate-level code is as below:

…
INPUT(100)
INPUT(101)
…
INPUT(105)
…
INPUT(228)
INPUT(229)
…
OUTPUT(301)
…
301 = AND(105, 504)
…
502 = XOR(100, 101)
503 = XOR(228, 229)
504 = XOR(502, 503)

The fault description file has 3 columns, as below.

1    100    SA0
2    100    SA1
3    100    NEG
4    101    SA0
...

The first column is the fault identifier of the fault. The second column is the signal into which we inject the fault. The third column is the fault type of the fault. In summary, we have 11 kinds of faults, as in the table below. We may not use all 11 kinds of faults on every signal.

| Name | Description |
| --- | --- |
| SA0 | Stuck at 0 |
| SA1 | Stuck at 1 |
| NEG | The negative value of the signal |
| RDOB_AND | Replace Driver Operator By AND |
| RDOB_NAND | Replace Driver Operator By NAND |
| RDOB_OR | Replace Driver Operator By OR |
| RDOB_NOR | Replace Driver Operator By NOR |
| RDOB_XOR | Replace Driver Operator By XOR |
| RDOB_NXOR | Replace Driver Operator By NXOR |
| RDOB_NOT | Replace Driver Operator By NOT, only when the driver operator is BUFF |
| RDOB_BUFF | Replace Driver Operator By BUFF, only when the driver operator is NOT |

We use lines 16, 17, 18 and 19 of the fault description file in the hyperlink as examples to show how we inject one fault at a time into the design. As in line 16 of the fault description file, we can inject one Stuck At 0 ("SA0") fault on signal 502. As in line 17, we can inject one Stuck At 1 ("SA1") fault on signal 502. As in line 18, we can inject one Negative ("NEG") fault on signal 502. As in line 19, we can inject one Replace Driver Operator By AND ("RDOB_AND") fault on signal 502. A "Replace Driver Operator By" fault replaces the original driver operator of its signal by the given operator. In this case, the driver operator of 502, XOR, is replaced by the given operator, AND.

### Outputs:

• All id pairs of the identical faults. The identical faults are put into fault groups, and each id pair should be printed out in lexicographical order, with the smaller id on the left. For example, if {1,4,9} and {6,10} are two groups of identical faults, then the id pairs of the identical faults should be printed out as follows:

1 4
1 9
6 10

where the two ids of a pair are separated by a space character. Please make sure your program follows this format. The output text file should be named "identical_fault_pairs.txt". Because the contestant's process may crash or cost too much time (so that we have to kill it), we also strongly suggest that the contestant's program always flush its results into the output text file as soon as possible. That way, we can still get some results from the contestant's process even if it was killed by us.
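As a rough, unofficial illustration of how the formats above can be handled, the sketch below reads a .isc netlist and a fault list and applies a single fault. The function and type names (parse_isc, Gate, and so on) are my own, it assumes the fault list only contains legal faults for each signal, and it leaves SA0/SA1/NEG to be applied at simulation time rather than by editing the netlist.

```python
# Minimal reader for the .isc netlist and fault description formats
# described above (an illustrative sketch, not an official reference).
import re
from collections import namedtuple

Gate = namedtuple("Gate", "out op ins")   # op is BUFF/NOT/AND/.../NXOR/DFF

def parse_isc(path):
    inputs, outputs, gates = [], [], {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                                  # skip comments
            m = re.match(r"INPUT\((\d+)\)$", line)
            if m:
                inputs.append(int(m.group(1)));  continue
            m = re.match(r"OUTPUT\((\d+)\)$", line)
            if m:
                outputs.append(int(m.group(1))); continue
            m = re.match(r"(\d+)\s*=\s*(\w+)\((.*)\)$", line)
            if m:                                         # gate or DFF
                out, op, args = int(m.group(1)), m.group(2), m.group(3)
                gates[out] = Gate(out, op, [int(a) for a in args.split(",")])
    return inputs, outputs, gates

def parse_faults(path):
    """Each non-empty line: fault_id  signal_id  fault_type."""
    faults = []
    with open(path) as f:
        for line in f:
            if line.strip():
                fid, sig, ftype = line.split()
                faults.append((int(fid), int(sig), ftype))
    return faults

def inject(gates, sig, ftype):
    """Return a copy of the netlist with one fault applied to signal `sig`.

    RDOB_* faults swap the driver operator structurally; SA0/SA1/NEG are
    value faults and are easier to handle inside the simulator itself.
    """
    faulty = dict(gates)
    if ftype.startswith("RDOB_"):
        g = faulty[sig]
        faulty[sig] = Gate(sig, ftype[len("RDOB_"):], g.ins)
    return faulty
```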
### Scoring:

The correctness of your result constitutes 60%, runtime constitutes 30% and memory usage constitutes 10%. Each correctly reported identical fault gets 10 points. Any reported identical fault that is not a real identical fault subtracts 5 points. Each extra identical fault group subtracts 5 points.

For example, assume {1, 4, 9} and {6, 10} are two groups of identical faults.

• If the contestant's result is (1, 4), (4, 9), (6, 10), it is also correct although it isn't sorted. It gets 50 points.
• If the contestant's result is (1, 4), (6, 10), (1, 9), it is also correct although it isn't sorted. It gets 50 points.
• If the contestant's result is (1, 4), (6, 10), it reports 4 correct identical faults. It will get 40 points.
• If the contestant's result is (1, 4), (1, 9), it reports 3 correct identical faults. It will get 30 points.
• If the contestant's result is (1, 4), (4, 9), (6, 10), (10, 13), then fault 13 is reported as an identical fault but it isn't. It will get 50 points and then lose 5 points for fault 13. Its final score is 45.
• If the contestant's result is (1, 4), (4, 9), (10, 13), then fault 13 is reported as an identical fault but it isn't. In this case, even though fault 10 belongs to an identical fault group, the contestant didn't connect it to the correct fault, so fault 10 cannot be counted as correct. The result gets 30 points from {1, 4, 9}, then loses 5 points for fault 13, with no points for fault 10. Its final score is 25.

Now assume we have a long identical fault group {1, 2, 3, 4, 5, 6}.

• If the contestant's result is (1, 2), (1, 3), (1, 4), (1, 5) and (1, 6), it gets 60 points.
• If the contestant's result is (1, 2), (1, 3), (4, 5), (4, 6), the one group is split into 2 groups, {1, 2, 3} and {4, 5, 6}. It will get 60 points and then lose 5 points for the 1 extra group. Its final score is 55.
• If the contestant's result is (1, 2), (3, 4) and (5, 6), the one group is split into 3 groups, {1, 2}, {3, 4} and {5, 6}. It will get 60 points and then lose 10 points for the 2 extra groups. Its final score is 50.

### Contest Objectives:

If we inject N faults into the fault-free design, there will be N*(N-1)/2 fault pairs. Therefore, there are 2 main contest objectives. The first: how to divide the given design into several suitable partitions so that we can divide and conquer this problem. The second: a high-performance algorithm to verify whether some given faults are identical or not.

### Problem Guidance:

At the start, we suggest contestants write a gate-level design parser to get the fault-free design knowledge. Contestants also need to write a fault reader to derive the faulty design knowledge from the currently selected fault and the fault-free design knowledge. Contestants are encouraged to find out the design knowledge necessary for a high-performance algorithm to get all possible identical fault groups and another high-performance algorithm to verify the just-found possible identical fault groups. Finally, output the result in the required output format.

## III. Test Case

1. To simplify the verification process, please use the same executable file name, "IF_Searcher", whose 1st argument is the gate-level design file name and whose 2nd argument is the fault description file name.
2. Sometimes your program may crash or cost too much time, so that we have to kill your process. To prevent an empty output file, please always flush everything that your algorithm has found. At the end, please sort your result as expected.

## IV. Reference

Survey of method:
Use SAT:
Use BDD:
Use SAT and BDD:

## VII. FAQ
1. What do you expect from the participants' solutions?
The purpose of this problem is to figure out all identical fault groups in limited time. We would like to know whether contestants can find a good solution to quickly group identical faults, especially when the faults are injected into a large design with flip-flops.

2. Is it safe to assume that the different fault types (SA0, SA1, NEG) on the same net will never be equivalent? More clearly, is a circuit possible that makes two different types of fault on a particular net have an identical output response?
Because the given gate-level design is NOT optimized, I cannot say that no circuit can make two different types of fault on a particular net have an identical output response. Some pairs are definitely NOT identical to each other, but for others this cannot be guaranteed. For example:
701 = NOT(700)
702 = XOR(700, 701)    # 702 will always be 1'b1
⇒ "SA0 fault on 702" == "NEG fault on 702" == "RDOB_NXOR fault on 702" ≠ fault-free design.

3. I would like to know if there is a time limit to generate the output for problem A?
The time limit for each test case is 6 hours.

4. According to the INPUT part of problem A: "After these comments, the design is described by the following 4 parts: input ports, output ports, D-type flip-flops, called "DFF", and then gates. To simplify the format, only "BUFF" and "NOT" have one operand and other gates "always" have 2 operands." I wonder what this "always" means. Does it mean "only" or "at least"?
It means "only". Other gates always have exactly 2 operands. They never have only one operand and never have more than 2 operands.

5. I am going through the fault description file, which has the following format: (FAULT_ID    NET_ID    FAULT_TYPE). For a net with multiple sinks, how should we consider the fault? Do we need to consider the fault at the "source pin" and each "sink pin", or either of them? For example, for the above circuit with an SA0 fault at net (1), how should we consider the SA0 fault? At all the pins (source and sinks) or either one of them?
For the SA0 fault on signal 1, it will be as in the following picture.

6. I see the gate-level design is in a .isc file (crc.isc). But when I open it with Notepad++, I see it is in .bench format. I am not clear; please explain this to me.
The content of the crc.isc file should be clear and easy to understand. If you have any questions about a specific description in the file, please feel free to let us know.

7. In the crc.isc file, I see that net 527 appears as the output of two statements. That means the 527 net is driven by 2 drivers, and I can't determine its logic value: ".... 527 = XOR(230, 526) 306 = XOR(403, 202) 527 = XOR(103, 102) ...."
It is a bug. The attached file is the modified version. (We have replaced the file on 2016/03/25.)

8. What will be the initial state of a DFF?
The initial state of a DFF is 0.

9. Is parallel programming allowed?
The contestant teams only need to submit their binary codes for evaluation. Please make sure that your binary code can be executed on the CIC machine. The detailed specification of the CIC machine will be announced soon. Moreover, we will create a CIC account for each contestant team this May.

10. What is the maximal size of a circuit in the testbenches?
(1) We have 4 lines of comments in the .isc file, and these 4 lines describe the numbers of signals used in the design. For example:
# 35 inputs -- clock(1), reset(1), input_data(33)
# 32 outputs -- output_signal(32)
# 0 D-type flip-flops
# 209 gates ( 21 BUFFs + 12 NOTs + 17 ANDs + 19 NANDs + 24 ORs + 26 NORs + 37 XORs + 53 NXORs )
(2) There is no explicit maximum size in Problem A.

11. I am wondering if we need to consider the fault in a "sequential way"? The released sample test case design01.isc mentions Flip-Flop (FF) components. However, no FF component appears in the circuit. Do we need to consider FF behavior in the future?
We will use the Flip-Flop (FF) component in later test cases.

12. Is it mandatory to include a verification step in the program? If it is, is there any method to distinguish between the problem-solving step and the verification step with only the .exe file? (If not, in my opinion, the verification step will not be well implemented in the program, for time efficiency.)
No. We will check all your reported fault pairs for correctness.

13. The output format you provided on the website seems ambiguous. For example, if {1,4,9} and {6,10} are two groups of identical faults, then is the following output still legal?
1,4
4,9
6,10
We think this output might have the same meaning as the output on the website. Are we right?
Yes. Because we may kill the contestant's process if it costs too much time, we will not require that the contestant's result be sorted, be in a particular pair order, or contain particular pairs. For now, the only format rule is that the left one is the smaller one of every fault pair. You may re-check the final updated version of the "Outputs:" and "Scoring:" parts.

14. The clock signal seems useless in this contest, right? In other words, the FF's output value will not be affected by the value of the clock signal.
The clock signal is useless in this contest. However, please assume the "clock" signal is a square wave with a 50% duty cycle and a fixed, constant frequency.

15. If the reset signal value of the FF is 1, the output value of the FF is 0?
Yes.

16. If the reset signal value of the FF is 0, the output value of the FF depends on the value of the enable signal?
Yes.

17. If the enable signal value of the FF is 1, the output value of the FF is the same as the input value of the FF?
Yes.

18. If the enable signal value of the FF is 0, the output value of the FF keeps its original output value?
Yes.

19. In the situation of question 18, how can we determine which value is the original output value of the FF?
Please assume that there are some reset cycles before searching for the identical faults. After these reset cycles, all output signals of DFFs are 0.

20. Could you explain whether it is possible to submit a Python script instead of a binary executable for the alpha submission? It was easier for us to make a Python prototype calling some binary files from within, instead of making everything in C++ from the very beginning. A README text file will be provided. Running the script will be easy, somewhat like:
Please confirm your Python script can be executed on the CIC machine. Note that the Python script is allowed for the Alpha Test only. The Python script cannot be used for the final evaluation.

21. Are the provided test cases comparable to the biggest cases in the actual final test?
The biggest final case is about 10X or more the size of the currently attached cases.
22. The identical faults are defined such that no matter what the input patterns are, the output differences are the same. How do we judge identical faults in a sequential circuit? Can different values captured by the flip-flops (at PPOs) mean the faults are different?
The values of the primary outputs of 2 identical faults must be the same at every clock cycle. The test cases may have some gates between the flip-flops and the primary outputs.

23. Can we submit source code that uses some other text-file library to help build the circuit?
No. Please make sure your submitted file is executable, as described in the "Program Requirements".

24. I found that the test case example and the other test cases have some format differences, e.g., "# 37 inputs -- data(4), enable(1), curr_crc(32)" versus "# 35 inputs -- clock(1), reset(1), input_data(33)". Would the hidden cases use the format of the case example?
Yes.

25. Are the "reset" and "enable" inputs of a DFF always primary inputs of the circuit, or could they be any gate output?
The Verilog code of the DFF is below. For "signal_id1 = DFF(clock, reset, enable, signal_id2)", its Verilog code is:
always @(posedge clock or negedge reset)
  if (!reset) signal_id1 <= 0;
  else if (enable) signal_id1 <= signal_id2;

26. Will the gate-level file always follow the order of INPUT, OUTPUT, then DFF?
Yes. However, DFFs do NOT always exist. Some designs don't have any DFF, such as case01 and case02 of "III. Test Case".

27. I wonder if there will be RDOB faults for a DFF, for example:
In the .isc file: 4207 = DFF(0, 601, 1202, 5409)
In the fault file: 4207 RDOB_BUFF
There is no such kind of fault. RDOB_BUFF only occurs for BUFF. For example:
In the .isc file: 40003 = BUFF(50020)
In the fault file: 50020 RDOB_BUFF

28. I was wondering how good the resulting scores are. Is there any reference score, such as the highest one, for each test case?
The full mark of Case01 is 5810. The full mark of Case02 is 6730. The full mark of Case03 is 7720.

29. According to the scoring: "Each correctly reported identical fault gets 10 points. Any reported identical fault that is not a real identical fault subtracts 5 points. Each extra identical fault group subtracts 5 points." Based on the full marks for each test case, shall we have 581, 673 and 772 id pairs of identical faults, respectively? Would you give us the results for each test case?
We do not announce the detailed results.

30. We know that the runtime of our result constitutes 30% and memory usage constitutes 10%. How do you calculate the score from the runtime and the memory usage?
We give the score according to the average and standard deviation of the execution time. For example, suppose the average is 10 seconds and the standard deviation is 2. If the execution time is smaller than (the average minus 2 * standard deviation), which is 10 - 2*2 = 6, it gets 100% of the score. Between 2 and 1 standard deviations below the average, which is between 6 and 8, it gets 80%. Between 1 and 0 standard deviations below, which is between 8 and 10, it gets 60%. Between the average and (the average plus 1 standard deviation), which is between 10 and 12, it gets 40%. Between plus 1 and plus 2 standard deviations, which is between 12 and 14, it gets 20%. Over (the average plus 2 * standard deviation), which is bigger than 14 seconds, it gets 0%. The main point is that the scores for execution time and memory usage depend on their average and standard deviation. The above numbers, 6, 8, 10, 12, 14, etc., are only used as an example.
31. Can you provide the evaluation results of the Beta test with the score calculated by all grading standards: correctness, runtime and memory usage? We mean: will the result of the Beta test be reported the same way as the final test?
Yes, I will.

32. Will all of the reset and enable wires of DFFs given in the test cases (hidden cases included) be primary inputs, or can they be the outputs of some other gates?
All reset and enable wires of DFFs are always primary inputs.

33. When will the output of a DFF always be 0: when the reset is 1 or when the reset is 0? Your clarifications in FAQ 15 and FAQ 25 contradict each other.

34. Unfortunately we don't understand the answer to question 25 ("Are the 'reset', 'enable' of DFF's inputs always the primary input of the circuit or could they be any output of a gate?"). Can you please give us a more formal answer and more detail?
In the design cases of Problem A, the clock and reset signals of the DFFs come from the primary inputs. There is no gated clock or reset signal in our test cases.

35. I wonder whether these scores also include runtime and memory usage performance, or do they just show the correctness without considering the other components? Another question: do the full marks of the test cases indicate the number of grouped faults? If not, how can we know the correct number of grouped faults in each test case?
1. These scores also include runtime and memory usage performance.
2. How can we know the correct number of grouped faults in each test case?
A. You can use some formal tools to check all pairs of given faults of the 4 published cases on the website and then get a golden result.
B. After A, you can compare your program's result with the result of A to know the correctness.

36. Is 100 the full score of each test case? Does it mean the answers are all correct? How good does the runtime and memory usage performance have to be to get a score of 100?
1. Yes.
2. Please reference the "Scoring" part of Problem A on the website: 60% is the score for the correctness of your result, 30% for the runtime, and 10% for the memory usage.
3. Please reference the 30th item in the "FAQ" of Problem A on the website.

37. You provided full marks of three cases on the website (Q28). Are they the scores of the golden results for each case? The full mark of case01 has a big gap with our golden result. However, our golden results of case02 and case03 are the same as yours. Can you check the full mark of case01 again?
The cases in Q28 are not the cases in "III. Test Case". Cases 1 and 2 of Q28 are similar to case01 and case02 of "III. Test Case". Case 3 of Q28 is similar to case03 and case04 of "III. Test Case".

38. The scoring of correctness is different from the scoring of runtime and memory usage (different from Q30), right? I guess that it is scored by the percentage of the full mark you got. For example, if we got 1234 points for correctness and the full mark is 2000, then our correctness score is (1234/2000)*60?
Yes. The scoring of the correctness is also on the detail page.

39. Will you release additional cases for us to test?
No. All cases of the final test will be similar to the cases of "III. Test Case" but bigger.

40. Can a primary input directly be a primary output? I.e.
INPUT(500)
OUTPUT(500)
fault: 1 500 SA0
It may occur, with a very small probability.

41. Since we know the answers of test cases 1~4, can we just print out the answers to save runtime?
No.
You might have tricks to recognize a given case and then output a previously computed answer if the case is known. However, that is not the purpose of this contest. To avoid this issue, we won't use any cases published on the website. In the previous Alpha and Beta tests, only one case was the same as one of the published cases; the others were similar to, but not equal to, the published cases. Therefore, this kind of trick could be used on at most one case, and no team performed noticeably better on that case than on the others. No public cases will be used in the final test, so there is no chance to use such a trick there.

42. You say that no public cases will be used in the final test. But can you please upload those cases to the website after the final?
Okay.

43. Can you please tell me about the circuit structure? I am interested in feedback loops in sequential circuits. Are there feedbacks?
No, there are no feedback circuits.

44. Please tell me about the 40th question of the FAQ (Problem A). You say about the small circuit (one net) that "It may occur with a very small probability." OK, but there is a logical question: which kinds of faults can influence the circuit, and how?
For the 40th FAQ item, I mean that the chance of the mentioned case is very small. However, whenever it occurs, the SA0 fault on signal 500 (called fault 1 in the 40th FAQ case) will affect the design output as an error.

45. In the gate-level design file, there is no information in the comments about the number of enable signals. Will there always be only one enable signal, or do we have to find that out by reading the whole file?
You have to find that out by reading the whole file.
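Putting the FAQ answers together (DFF outputs start at 0, the Verilog in item 25 uses an active-low reset, enable gates the update, the clock can be ignored, and identical faults must agree on every primary output at every cycle), a random-simulation filter can be sketched as below. It reuses the illustrative parse/inject helpers from the earlier sketch; agreement over random patterns is only a necessary condition, so a formal SAT- or BDD-based check, as in the references, would still be needed to prove two faults identical.

```python
# Random-simulation filter for candidate identical faults (illustrative).
# Faults whose primary-output traces ever differ are certainly NOT
# identical; matching traces only make identity plausible.
import random

OPS = {
    "BUFF": lambda a: a,           "NOT":  lambda a: 1 - a,
    "AND":  lambda a, b: a & b,    "NAND": lambda a, b: 1 - (a & b),
    "OR":   lambda a, b: a | b,    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,    "NXOR": lambda a, b: 1 - (a ^ b),
}

def simulate(outputs, gates, patterns, fault=None):
    """Primary-output trace of a (possibly faulty) netlist, one pattern per cycle."""
    fsig, ftype = fault if fault else (None, None)
    state = {g.out: 0 for g in gates.values() if g.op == "DFF"}   # DFFs start at 0
    trace = []
    for pat in patterns:
        val = dict(pat)            # primary-input values for this cycle (0/1)
        val.update(state)
        if fsig in val and ftype in ("SA0", "SA1", "NEG"):        # fault on a PI or DFF output
            val[fsig] = {"SA0": 0, "SA1": 1, "NEG": 1 - val[fsig]}[ftype]

        def get(sig):              # evaluate the combinational fan-in on demand
            if sig in val:
                return val[sig]
            g = gates[sig]
            v = OPS[g.op](*[get(s) for s in g.ins])
            if sig == fsig:        # SA0/SA1/NEG applied once at the faulty net
                v = {"SA0": 0, "SA1": 1, "NEG": 1 - v}.get(ftype, v)
            val[sig] = v
            return v

        trace.append(tuple(get(o) for o in outputs))
        for g in gates.values():   # clock every DFF; ins = clock, reset, enable, d
            if g.op == "DFF":
                _clk, rst, en, d = g.ins
                if get(rst) == 0:                 # active-low reset per FAQ 25
                    state[g.out] = 0
                elif get(en) == 1:
                    state[g.out] = get(d)
    return trace

def maybe_identical(inputs, outputs, gates, fault1, fault2, cycles=256):
    """fault1/fault2 are (signal_id, fault_type) pairs; True means still a candidate."""
    pats = [{i: random.randint(0, 1) for i in inputs} for _ in range(cycles)]
    t1 = simulate(outputs, inject(gates, *fault1), pats, fault1)
    t2 = simulate(outputs, inject(gates, *fault2), pats, fault2)
    return t1 == t2
```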
auto_math_text
web
# Astrospheres

## Conferences in 2015

### Joint splinter meeting "Astrospheres around massive stars"

##### from 14 to 18 September 2015

Organisers: D. Bomans, K. Scherer

Observations and simulations of astrospheres around hot stars have recently come into the focus of scientific research. We intend to discuss the latest observations and modelling efforts together. Runaway O and B stars are common and form a sizable population in the galaxy, and a significant number of them show a bow-shock structure. Observations will be presented, and applicable conservative fluid models as well as the general shock structure will be discussed. Some crucial features relevant for observations will be pointed out. Because a lot of modelling effort (as well as in-situ observation) is devoted to the heliosphere, the splinter meeting is held in cooperation with the AEF/EP.

Abstracts

Please send abstracts via email to Dominik Bomans (bomans at astro.rub.de) or Klaus Scherer (kls at tp4.rub.de). Please use the AG TeX template (coming soon).
auto_math_text
web
# Search for leptonic decays of W′ bosons in pp collisions at √s = 7 TeV

CMS Collaboration; Chatrchyan, S; Khachatryan, V; Aguiló, E; Amsler, C; Chiochia, V; De Visscher, S; Favaro, C; Ivova Rikova, M; Millan Mejias, B; Otiougova, P; Robmann, P; Snoek, H; Tupputi, S; Verzetti, M; et al (2012). Search for leptonic decays of W′ bosons in pp collisions at √s = 7 TeV. Journal of High Energy Physics, 2012(8):023.

## Abstract

A search for a new heavy gauge boson W′ decaying to an electron or muon, plus a low mass neutrino, is presented. This study uses data corresponding to an integrated luminosity of 5.0 fb−1, collected using the CMS detector in pp collisions at a centre-of-mass energy of 7 TeV at the LHC. Events containing a single electron or muon and missing transverse momentum are analyzed. No significant excess of events above the standard model expectation is found in the transverse mass distribution of the lepton-neutrino system, and upper limits for cross sections above different transverse mass thresholds are presented. Mass exclusion limits at 95% CL for a range of W′ models are determined, including a limit of 2.5 TeV for right-handed W′ bosons with standard-model-like couplings and limits of 2.43–2.63 TeV for left-handed W′ bosons, taking into account their interference with the standard model W boson. Exclusion limits have also been set on Kaluza-Klein WKK states in the framework of split universal extra dimensions.

Item Type: Journal Article, refereed, original work
Language: English
Publisher: Springer
ISSN: 1029-8479
The original publication is available at www.springerlink.com
https://doi.org/10.1007/JHEP08(2012)023
Permanent URL: https://doi.org/10.5167/uzh-75939
auto_math_text
web
# Subsequent Debit Posting to MIRO - SAP Accounts Payable for Beginners

Sometimes a subsequent debit needs to be posted and associated with a MIRO invoice. This is ONLY for a legitimate pricing adjustment, such as a debit memo received from the vendor. It is a value-only adjustment and will not affect the quantity. In order to tie this to the PO, use the MIRO function. Normally, the MIRO screen appears ready for an invoice to be entered. Use the drop-down function to change this to "Subsequent Debit". Enter the appropriate information like you would for the original invoice. Be sure to use the vendor's document number for the debit in the Reference field, since SAP uses this field to check for duplicates. Enter the amount of the debit for the different line items where you want it to be applied. The +/- 3% (max $25) tolerance still applies to this function. Therefore, if the debit is more than a three percent difference or more than $25, then the PO will need to be modified. If you click on the Simulate icon, you will see what SAP will post. Once you are satisfied with the journal, you can either Save/Post the document directly from this screen by clicking on the Post icon, or you can click the Back icon. This will return you to the Overview screen. Click the Save/Post icon at the top to post the document.
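As an illustration only of the tolerance rule stated above (this is not SAP logic, the helper name is hypothetical, and the percentage is interpreted as relative to the PO line value), the check amounts to comparing the debit against the smaller of 3% of the line value and $25:

```python
# Hypothetical illustration of the stated +/-3% (max $25) tolerance rule;
# not an SAP API.  A debit outside the limit means the PO must be modified.
def within_tolerance(po_line_value, debit_amount, pct=0.03, cap=25.00):
    limit = min(po_line_value * pct, cap)   # 3% of the line value, capped at $25
    return abs(debit_amount) <= limit

print(within_tolerance(1500.00, 30.00))     # False: $30 exceeds the $25 cap, modify the PO
print(within_tolerance(1500.00, 20.00))     # True: within both 3% and $25
```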
auto_math_text
web
Q: What is meant by the term "mixed melting point"?

A: "Mixed melting point" is a technique used to identify chemical compounds. It is particularly used for organic compounds, where a sample with a known identity and melting point is mixed with an unknown purified sample to determine the melting point. Glass capillaries are used to determine the mixed melting point. Equal amounts of the unknown and the known substance are mixed and placed in one capillary. The two substances are also placed in separate capillaries. All three capillaries are heated simultaneously in the melting point apparatus. The melting point of the capillary with the mixed substances should not differ by more than 4 degrees Celsius from the melting points of the substances placed in separate capillaries.
# R.0204 ## Train formation When you drive along the highway, you will want to keep a safe distance from the vehicle in front and you’ll want a clear gap on either side. All this takes up a surprising amount of space. Any economist will tell you that empty space is wasted space, and one can get more value out of the highway system by joining vehicles together. For example, each truck can tow a trailer. A trailer confers two advantages: first, the operator delivers two loads with one driver, which halves the staff costs, and second, two vehicles joined together take up little more space than one because the trailer doesn’t need a safe gap in front. Hence, coupling vehicles in pairs will almost double the capacity of the highway. The same applies to other forms of transport such as canals and railways. Railways have the unique advantage that users can join together vehicles in much larger numbers to get a much larger saving. It is almost as easy to drive a train of ten coaches as a single railcar, so per hectare of land, the capacity of a railway line to handle traffic is potentially huge. A twin-track railway can serve around 10 trains per hour in each direction, and if each train carries 1000 passengers, the overall throughput comes to 20 000 passengers per hour, more than twice the capacity of a six-lane motorway. Of course, it’s not quite that simple. Trains must be assembled from vehicles of the right kind in the right order, and when vehicles are joined together in large numbers, they don’t always behave as you might expect. In this Section we’ll ask why, and explore some of the practical implications. ### Marshalling the train A train consists of one or more vehicles connected together and moving under the control of a single driver. The vehicles can be passenger coaches or freight wagons, or occasionally a combination of both. It will be propelled by a locomotive, or alternatively motors built into the vehicles themselves. In earlier times, a freight train would include a brake van at the rear (figure 1). Today, most wagons are fitted with air brakes and the brake van is no longer needed, but the order in which the vehicles are ‘marshalled’ is still important. The driver and the controls need to be at the front so the driver can see where he or she is going, and while the power source that keeps the train moving is often located close to the driver, this is no longer necessary and nor is it ideal for a fast-moving train. And the wagons need to be assembled in the right order, because once they are coupled, it’s difficult to add vehicles to or remove them from the middle of the formation. #### What trains are made of Passenger trains and freight trains differ in many ways. Let’s start with size. On the whole, passenger trains are shorter, usually between 2 and 12 coaches long compared with 20 or more wagons in a freight train. Ultimately, the length of a passenger train is limited by the length of the platform at the stations it serves, and since passenger coaches are generally about 20 m long, a high-speed service with 8 trailer coaches will measure a total of 200 m in length including power cars. The Brussels-Paris-London Eurostar with 18 trailer coaches is exceptional, having a total length of 400 m. Most high-speed trains need a large rectifier bank together with other equipment that cannot easily be fitted underneath the passenger coaches; they are usually provided with separate power cars, one each at the front and rear (figure 2). The set of coaches in between is called a rake. 
Each coach will seat between 40 and 50 passengers, so an 8-coach rake will carry up to 400 in total. Many of the coaches are designed to provide specific functions (such as the restaurant car) and will normally be coupled in a specific order. If the coaches are articulated (figure 3) they cannot easily be separated except at the maintenance depot. Articulation has the advantage [22] of eliminating overhang at the end of each passenger cabin and damping down relative motions between neighbouring cars, which improves the quality of the ride. And since for each coach there are only four wheels in contact with the rail (only two in the case of the Talgo trains), the train makes less noise. On the other hand, articulated bogies are more complex to build and maintain, the wheel loadings are higher, and since a fault effectively disables the whole rake, scheduling becomes more difficult. Articulated bogies are normally confined to high-speed trains. By comparison, regional trains must be more flexible. And they are lightly built and more sparsely furnished because they carry more passengers per unit of floor area. A regional express will seat about 70 passengers per coach. A suburban service or metro will carry even more passengers - over 100 per coach during peak periods - but many will have to stand. In either case, if the track is electrified the motors will be slung under one or more of the passenger cabins so there is no power car as such. So much for size: what about the weight? Typically, a high-speed passenger train with two power cars and 8 coaches seating a total of 400 passengers weighs around 400 tonnes. It doesn’t make much difference whether the coaches are loaded or empty. If you reckon each passenger at 75 kg including luggage [26], the total payload comes to 30 tonnes, less then 10% of the gross weight of the train. To put it the other way round, the dead weight is many times greater than the payload, roughly 1 tonne for every passenger [11]. This might seem excessive, but railway coaches are built with a robust and almost impenetrable shell so they protect passengers in the event of a derailment or a collision with a line-side object. And the weight per passenger is actually about the same as the equivalent figure for a jet airliner such as the Boeing 747, or an automobile, which weighs over a tonne and spends much of its life carrying only a single occupant. Freight trains are a different matter. They fall into two distinct categories: those in the first category are made up of individual freight wagons each with its own specific origin and destination. Trains of this type are engaged in the traditional form of distribution, known in the UK as tripping. Those in the second category are block trains, trains made up from identical wagons hauled together from a single origin to a single destination. Tripping services require a flexible approach because the wagons must be capable of being coupled and uncoupled possibly several times during the journey. Trains in either category contain at least 20 vehicles and often stretch further than the eye can see. The maximum permissible length for freight trains in many European countries is normally 750 m, but this is modest compared with some of the trains that carry bulk loads in less densely populated regions. In Australia in 1996, a train 6 km long and hauled by 10 locomotives weighed nearly 60 000 tonnes [4], and there have been heavier ones since. 
On a reasonably level track the total weight can be increased almost indefinitely by linking up smaller units into larger ones: the locomotives are positioned at intermediate points along the train so the traction is distributed evenly and the coupling forces minimised. The intermediate locomotives are controlled remotely by radio. In practice, things are not quite so simple because as we shall see later, quite dramatic longitudinal forces can be set up in a long train that may cause damage unless carefully managed. #### Assembling a passenger train When railway lines were first built in Britain, no-one was sure how they would work. Passenger coaches were equipped with primitive wooden seats but no roof, and supported on two axles like the horse-drawn trucks that were used in the coal mines of the north-east. And like trucks, they were joined together in whatever order seemed convenient at the time. The main challenge was not to form the train but to build a steam locomotive powerful enough to haul it from one end of the line to the other. What would the locomotive look like, and how should the boiler, cylinders and control levers be arranged? From the earliest days, the driver was positioned at the rear end of the chassis with the smoke-box and funnel at the front, so the driver had a restricted view of the track ahead. If this seems puzzling today, the reason was that the crew had to stand close to the coal and water supplies that were carried in a separate tender towed behind the engine. It was towed behind because if pushed ahead it was likely to derail, a point to which we’ll return later. There was also the stability of the loco itself to be considered. As speeds increased towards the end of the nineteenth century it was found that a locomotive could become dangerously unstable, zig-zagging from side to side in a motion called ‘hunting’ (see Section R0418). Partly in order to damp down the oscillations and partly to guide locos round curves, designers added pony trucks to the front of the chassis. They helped to cure the stability and curving problems, but only when the vehicle was moving forward. The steam loco was a one-way beast. This requirement made the business of train formation more complicated than it is today. If you wanted to run an express service starting from a mainline terminal, the locomotive that was scheduled to haul the train couldn’t be used to bring the coaches into the departure platform because it would be facing the wrong way as well as being at the wrong end of the train. It was therefore necessary to provide a second locomotive, usually a smaller shunting engine, to set up the formation. The express loco would arrive afterwards, reversing into the station to be coupled to the leading coach. The procedure for dealing with an arriving service followed a similar pattern but in reverse order. However this wasn’t always possible on a minor branch line with limited facilities, and the same locomotive was employed for the outward and return leg of each service. Here, the tank engine proved useful. The tank engine did not need a separate tender, and although not very fast, it could haul a few coaches quite happily in either direction without having to turn around in between. The only requirement at the remote terminal was a headshunt with two turnouts so the loco could run round to the opposite end ready for the return journey (figure 4). 
By contrast, faster locos with tenders were rotated through $$180^\circ$$ so they could operate in either direction on a main line. There were several ways of doing this: for example, major depots were provided with a turntable (figure 5). Another method was to run the loco through a track section arranged in the form of a triangle with three turnouts as shown in figure 6, a facility that was usually available on a main line network within a reasonable distance of the terminus. In some cases, the terminus was provided with a balloon loop so the whole train could double back in an arc without uncoupling the locomotive at all (figure 7). A well-known example was the loop at Grand Central Station in New York. Today, almost all locomotives can operate equally well in either direction and such measures are not needed. But the Eurotunnel shuttle system between Calais in France and Folkestone in the UK is an exception. In order to maintain a high throughput, the trains are designed to run in a fixed direction on a continuous circuit. There is a balloon loop at either end, but because of the limited area of land available, the radius of curvature is relatively small, which causes appreciable wear of the wheel treads and flanges on the outside of the curve. To counter this, the track is actually laid out as a figure-of-eight as shown in figure 8, so the balloons are opposite-handed and the wear is more evenly distributed between the wheels on either side of the train (see the web site [29] listed at the end of this Section). However, conventional passenger trains today are usually made up of fixed-formation rakes with a driver’s cab at each end so the train as a whole can shuttle backwards and forwards between terminals without having to turn round. Regional and local services are often built up from two or more complete sets so that the formation can be divided en route. One set continues on the main route while the second diverges onto a branch line. The division takes place at the station immediately upstream of the junction. It’s not an ideal arrangement because passengers sometimes find themselves in the wrong part of the train and have to scramble along the platform to avoid being taken to the wrong destination. Even more exciting is the practice of trains dividing on the move. At one time it was not unusual for express services to shed coaches from the rear of the train at intermediate stops without the main body of the train losing speed. In 1914, there were 200 services in Britain that operated in this way, and the practice only ended in 1960 [17]. The process only worked in one direction. No-one has yet discovered a way of assembling a train at speed even under computer control. #### Freight operations By comparison with passenger trains, freight trains have always been noisy, dirty and slow. The traditional railway distribution process is known as tripping. Each consignment is loaded onto a general-purpose wagon, and the routing of individual wagons is based on the hub principle. Wagons are collected from their respective rail heads by a small diesel loco, either individually or in small groups, and taken to a nearby marshalling yard, where they are sorted into larger trains each bound for a particular destination. The sorting operation takes place within a group of sidings or ‘classification tracks’: parallel tracks that branch out from a single approach (figure 9). When a train arrives, the loco is removed and the wagons are parked on the approach. 
A shunting engine then pushes the rake back-and-forth, each time dropping off one or more wagons into the appropriate siding; the wagons in each siding are then made up into outbound trains. Sorting consignments in this way is a slow process. Hauling them to the next stage can be a slow process too, because, as we’ll see later, empty wagons have a propensity to jump the rails and loaded wagons to break apart, so they are rarely allowed to travel at more than half the speed of a mainline passenger train. The slow travel speed together with time lost in the shunting yard means that it often takes more than a day to transport a cargo over a relatively short distance - an average of roughly 100 km in the UK [13]. The sorting process can be speeded up by rolling the wagons one after the other down a slope and switching each wagon separately onto its classification track, the turnouts being controlled from a central control box. During the twentieth century, a proportion of freight yards were converted to operate in this way, and for a while they were successful although demand has since declined. From the 1960s onwards, tripping operations suffered through competition with road transport, not least because a lorry can deliver from door to door whereas a shipment by rail will usually begin on a lorry and finish on a lorry with at least two transhipment operations in between. In future, it is conceivable that wagons can be shunted automatically by vehicles under remote control [16] [28], and as we’ll see later, other developments are being considered. But in many countries, the decline has been offset by growth in block trains. Block trains are used for transporting in bulk, for example gravel, iron ore, grain, refrigerated food, and containers. The wagons that make up a block train are specialised to the extent that they are all built to the same specification, which is tailored to the cargo they are carrying. An example of extreme specialisation might be the supply of coal from a stockpile to a power station. For many years now, trains have been operating this kind of service like a conveyor belt on a continuous circuit, loading and unloading while creeping along under electronic control at a little under 1 km/h [9]. ### Longitudinal dynamics A railway wagon is a simple contraption, and like any other vehicle, it obeys Newton’s laws of motion. For example, if you pull steadily on one of its coupling it will pick up speed, accelerating at a rate proportional to the applied force. Yet when joined together in a train, wagons sometimes appear to take on a life of their own, jostling one another like unruly children. Their group behaviour emerges from the way they are connected. A certain amount of slack is built into the couplings, so that when an impulse is applied to any one vehicle, the vehicle transmits the event to its neighbours only after a short delay. As a result, considerable forces are set up between adjacent wagons that unless carefully managed, can be large enough to compromise the stability of the train. #### Forces between wagons Let’s begin by imagining a conventional freight train hauled by a loco at constant speed along a straight, level track. Under these conditions, the whole train will be in tension because each wagon has a certain resistance to motion and exerts a retarding force on the train. The retarding force arises from air resistance and rolling resistance as described in Section G0119. At a speed of 50 km/h, the resistance is quite small, about 18 N per tonne [10]. 
So the total resistance for a wagon whose gross weight is 50 tonnes comes to about 0.9 kN. For the moment let's just call it $$S$$. Under these conditions, the tensile force acting on each coupling between adjacent vehicles is steady and predictable. Suppose there are 30 wagons in the train, numbered 1 to 30 from front to rear. The last wagon needs a steady force of $$S$$ to keep it moving at the stipulated speed, so the tension in the coupling between wagon number 29 and wagon number 30 is just $$S$$ (we'll adopt the convention that tension is positive and compression negative). Similarly, wagon number 29 requires a force of $$S$$ plus the force $$S$$ required to haul along wagon 30 behind, so the coupling force between wagon 29 and wagon 28 is $$2S$$ newtons. The coupling force rises in equal increments from the rearmost coupling to the leading coupling on wagon number 1, where it peaks at $$30S$$ newtons. The variations in coupling force along the train are shown diagrammatically in figure 10.

So what happens if the loco is coupled to the rear of the train and pushes the wagons instead? The profile of coupler forces is reversed, with the force between neighbouring vehicles increasing from front to rear as shown in figure 11. The peak force occurs between the loco at the rear and wagon number 30, and this time it's compressive so we write it as $$-30 S$$ for a train of 30 wagons. (Note incidentally that compressive forces between vehicles don't always act through the coupling in the way that tensile forces do. For example, with old-fashioned screw link couplings of the kind still used in the UK and Europe, the links are not rigid. They carry only tensile forces, and therefore any compression force must pass through the buffers.)

Now, since any wagon can be marshalled into any position during its working life, its couplings must be capable of handling the maximum tensile and compressive loads likely to be experienced anywhere in a train. For the train in question, consisting of 30 wagons each with a resistance of 0.9 kN, the steady-state force owing to rolling resistance on a level track is about 27 kN, not a huge amount. But it greatly increases on an uphill gradient: if the slope is 1.5%, the loco must overcome a component of the train weight that amounts to 0.015 x 1500 tonnes, or 225 kN approximately. Moreover, in practice the forces in the couplings fluctuate over a wide range. Fluctuations or shock loads can be several times higher than the static loads, which explains why standard screw couplings on European freight wagons are designed to carry loads up to 850 kN [2]. And since repeated shock loads tend to encourage fatigue cracks, it is better to keep the loads within more conservative limits.

#### Mexican wave

During a dull passage of play, spectators at a football match will sometimes take part in a 'Mexican wave'. Taking their cue from the people sitting next to them, they rise from their seats, raise their arms in the air, and sit down again. This creates a ripple that travels round the arena at a characteristic speed of about 12 m/s [12]. Waves can travel along a freight train too: the motion is less obvious but if you live near a railway line you will have heard the melancholy clanking of the coupler chains at night. When a freight locomotive accelerates from a standing start there is a brief delay while the slack in the first coupling tightens, followed by another delay as the slack is taken up between wagon no 1 and wagon no 2, and so on. The effect is called 'run out'.
A similar effect occurs in reverse when a loco slows down: if the wagons don't have air brakes, they will close up one at a time, ramming each other in turn to create a shock wave that passes along the train [5]. This is called 'run-in'. The amount of slack depends on the type of coupling and tends to be greatest for the old-fashioned screw couplings used in Europe. Even if the wagons have air brakes they don't all act at once. When the driver applies the brake lever, the signal is transmitted via a brake pipe at finite speed, so there will be a delay in the actuation of the brakes on successive wagons. As a result, for a train 700 m long there may be a lag of 5 seconds overall from front to rear until the brakes on the trailing wagon take hold [7].

In addition to steady-state loads and shock waves, there is a third category of motion that occurs only with freight trains hauled by locomotives inserted at intermediate points within the train. It takes the form of slow undulations in the coupling force (usually remaining in tension throughout) as the locos continually speed up and slow down a little, out of phase. There are no shock impacts, but the amplitude of the force variations is significant [5].

Longitudinal forces in a train can cause trouble. We have already touched on the possibility of fatigue failure in the couplings under tension; less obvious is the effect of compressive forces. A compressive force introduces a different kind of risk, and one that is especially relevant for freight trains. If you have played with toy trains you'll know that when you push a train from behind, derailments are frequent because the compressive loads between neighbouring vehicles tend to prise them sideways off the track. Figure 12 shows a vehicle within a rake travelling round a circular curve. The angular difference between the longitudinal axes of adjacent vehicles is $$\theta$$. We imagine a compressive force $$Q$$ transmitted from vehicle to vehicle through the couplings, and by symmetry, this force $$Q$$ acting on any particular coupling of any particular vehicle forms an angle $$\theta / 2$$ with its longitudinal axis. Hence each vehicle is subject to a lateral force away from the centre of the circle. This is opposed by the lateral reaction $$Y$$ between the outer rail and the outer wheel flange of each axle. Note that this flanging force occurs independently of, and in addition to, any centripetal force arising from the speed of the train. Resolving at right-angles to the vehicle axis we find that

$$Y = Q \sin \frac{\theta}{2} \qquad (1)$$

We know from Nadal's criterion (see Section R0314) that there is a limit to the flanging force $$Y$$ that the wheel can resist before climbing the rail, and since the smaller the curve radius the larger the angle $$\theta$$, the formula confirms what we might expect, namely that when you push a train from behind it is likely to 'spring' off the rails on a sharp curve. Another critical situation [21] arises on an S-bend of the kind that occurs on the cross-over between two parallel tracks, as shown in figure 13. Compressive forces acting through the couplers tend to swivel the wagon about a vertical axis, so that each axle is forced sideways against the rail, but this time in opposite directions.
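Equation (1) is easy to evaluate numerically once one geometric step is added that the text leaves implicit: for vehicles of coupler-to-coupler length L running on a curve of radius R, the angle between the axes of adjacent vehicles is roughly θ ≈ L/R. The numbers in the sketch below (a 250 kN compressive impact through 15 m wagons on a 200 m radius curve) are purely illustrative, and whether the resulting flanging force actually causes flange climb depends on the vertical wheel load via Nadal's criterion, which is not modelled here.

```python
import math

def flanging_force_per_axle(q_newton, vehicle_length_m, curve_radius_m):
    """Equation (1): Y = Q * sin(theta / 2), using the assumed geometric relation
    theta ~= L / R between the axes of adjacent vehicles on a curve of radius R."""
    theta = vehicle_length_m / curve_radius_m          # radians (small-angle)
    return q_newton * math.sin(theta / 2.0)

# A 250 kN compressive shock through 15 m wagons on a 200 m radius curve:
y = flanging_force_per_axle(250e3, 15.0, 200.0)
print(f"lateral flanging force ≈ {y/1e3:.1f} kN per axle")   # ≈ 9.4 kN
```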
Lateral reactions between the wheel flange and rail increase the likelihood that the wagon will be forced off the track, but Nadal’s criterion tells us that the effect is more serious if at the same time the vertical reaction between the wheel and rail is reduced, i.e., the wheel ‘unloads’. If the train is subject to a compressive load through the couplings, a shock impact can jolt the wagon and make it pitch with one axle lifting vertically so that it almost loses contact with the rails: theoretically, a force greater than 200 kN can cause it to derail altogether [3]. This is why freight trains should ideally be configured with empty cars at the rear, where the longitudinal impacts are less severe [1]. #### Distributed power We are so accustomed to seeing a locomotive hauling a train from the front that we don’t question whether the front is the best place to put it. It is true that the driver must be near the front to have a clear view of the track ahead, and if the loco in question is a steam engine it must be located at the front too, so the driver can reach the controls. But if you haul a train from the front you risk pulling it apart, a risk that can be lessened if the locomotive is placed further aft in the formation, and controlled through an electronic circuit or through digital radio. This raises some interesting possibilities. For example, if you are running a shuttle service on a rural branch line, it makes sense to keep the loco permanently coupled at the same end of the train throughout , so it pulls the train on the outward journey and pushes it on the way back. This makes the line more efficient since no time is wasted running the loco round the stationary rake at the end of each trip. It does however require a second driving cab with remote controls installed in the tail of the last coach. But is it safe? We saw earlier that compressive loads tend to de-stabilise a train. It is well known that empty freight wagons are liable to derail if pushed from behind, and experiments with passenger trains on the Weymouth line in the UK during the 1970s showed modest increases in the lateral forces at the first coach when the loco was coupled at the rear [20]. However, the derailment risk is relatively small for passenger coaches, which have a soft suspension that keeps each wheel pressing on the rail when the coach is jolted, and for which the compression loads are generally modest compared with a freight train. In fact, regional ‘pusher’ services have run for many years in the UK without any problems [24]. And where the track alignment is sufficiently well maintained, high-speed trains will run safely with the power car at the rear: the UK East Coast Main Line service operates at 225 km/h (140 mph) as a ‘pusher’ on its return journey from the north of Britain to London; as shown in figure 15 there is one power car permanently coupled at the north end with a control cab, and a trailer car known as the ‘DVT’ (Driving Van Trailer), also with a driver’s cab but no motors, at the south end [23]. On the German ICE system, speeds of 300 km/h are allowed with the loco at the rear [25]. Another possibility is to put the loco in the middle. Let’s return to our earlier example. If the locomotive is placed in the centre of the 30 wagons as shown in figure 16, the first fifteen wagons are being pushed so the compression profile peaks at $$-15 S$$ immediately ahead of the loco, while the tension profile peaks at $$+15 S$$ immediately behind the loco because it is pulling fifteen wagons. 
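The steady-state figures quoted in this section are easy to reproduce. The sketch below is a deliberately crude static balance: each 50-tonne wagon is given the 18 N-per-tonne resistance quoted earlier (about 0.9 kN, the quantity called S in the text), the locomotive traction exactly balances the total drag, and gradients, coupler slack and dynamic effects are ignored. It tabulates the coupler-force profile for the three layouts described so far: front-hauled, pusher, and centre loco.

```python
G = 9.81  # m/s^2

def wagon_drag(mass_t=50.0, resistance_n_per_t=18.0, gradient=0.0):
    """Retarding force on one wagon [N]: rolling/air resistance at ~50 km/h plus
    the along-track weight component on an uphill gradient (e.g. 0.015 for 1.5%)."""
    return mass_t * resistance_n_per_t + mass_t * 1000.0 * G * gradient

def coupler_profile(units):
    """Steady-state force in each coupling, front to rear; units is a list of
    (drag, traction) pairs per vehicle.  Positive = tension, negative = compression."""
    return [sum(d - t for d, t in units[i + 1:]) for i in range(len(units) - 1)]

S = wagon_drag()                      # ≈ 900 N per wagon on level track
wagon, loco = (S, 0.0), (0.0, 30 * S)

layouts = {
    "front-hauled": [loco] + [wagon] * 30,
    "pusher":       [wagon] * 30 + [loco],
    "centre loco":  [wagon] * 15 + [loco] + [wagon] * 15,
}
for name, train in layouts.items():
    p = coupler_profile(train)
    print(f"{name:12s}: {min(p)/1e3:+6.1f} kN to {max(p)/1e3:+6.1f} kN")
# front-hauled:   +0.9 kN to  +27.0 kN
# pusher:        -27.0 kN to   -0.9 kN
# centre loco:   -13.5 kN to  +13.5 kN
```

The 27 kN peak and the halving to ±13.5 kN (that is, 15S) with the loco in the centre match the 30S and ±15S profiles described above; calling wagon_drag(gradient=0.015) instead pushes the front-hauled peak to roughly 250 kN, in line with the gradient figures quoted earlier.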
Compared with a front-hauled train or a 'pusher', the maximum force under steady-state conditions is reduced numerically by a half. Power cars located in the centre of the train were once proposed for the Advanced Passenger Train that was run experimentally in the UK during the 1970s. However, most high-speed trains have since adopted a common arrangement with a power car at the front and another at the rear. Again, as shown in figure 17, the steady-state coupler forces are reduced by a half. Those in the leading half of the train are tensile while those in the rear half are compressive.

This arrangement doesn't work for a freight train though, because compressive forces tend to destabilise the wagons, especially when they are empty or partially loaded. It is better to put the first loco at the head and the second halfway down the train, as shown in figure 18. In Canada, a freight service was run experimentally in this configuration as early as 1967, with the second loco in mid-train under radio control from the leading cab [19]. Not only is the maximum coupler force halved compared with the front-hauled equivalent, but in addition, the force immediately ahead of the second ('remote') loco is zero: the pattern is the same as if two separate trains of half the original length were travelling in close formation along the track. This suggests that we could make a freight train as long and as heavy as we want by joining together a series of smaller trains: provided the motive power is evenly distributed among the wagons, the coupling forces can be maintained at the same level as if the trains were moving independently. In practice, however, this is not as simple as it looks because, as described earlier, a long train is liable to oscillate from front to rear as it moves along, rather like a caterpillar whose body stretches and squeezes in waves that pulse at regular intervals from head to tail. Although the oscillations are not spectacular, a sophisticated control strategy is needed to contain the coupler forces within their fatigue limits. A frequently used configuration for heavy trains consists of three locos at the head and two more placed about two-thirds of the way along the train, although the best positioning of the 'remote' group is still under debate [8].

Finally, let's turn to a different kind of train altogether. Vehicles employed on a light rail system such as a metro, a tramway, or a suburban railway are designed to make frequent stops and starts. Hence they must brake and accelerate quickly to maintain a viable average speed. This can't be achieved with a traditional formation hauled by a locomotive regardless of its position within the train, because the locomotive wheels can't develop enough friction. Only a small proportion of the total weight rests on the powered axles, and a powerful loco is not much use if the wheels are slipping. The answer is to distribute motors along the whole train, slung from the vehicles that carry the payload. The higher the proportion of the axles that generate traction, the better the performance.

### Couplings

If you liken a railway train to a pearl necklace, then the couplings between wagons are the string on which the pearls are threaded: not the most eye-catching part of the machinery but vital to its operation. Without them, the whole concept of a railway falls to pieces. Since 1850, engineers have produced and patented hundreds of different designs.
To be successful, a coupling must be simple, safe and easy to use, and utterly reliable in operation. #### Manual couplers At the dawn of the railway era, chain-making was an established industry. If you wanted to lift something heavy, a steel chain was the best way to do it, and being made in large quantities, it was relatively cheap and widely available. So not surprisingly, engineers adapted chains for railway use. Each vehicle was equipped at either end with a hook and a three-link chain permanently attached to the chassis frame . Vehicles were coupled manually by a railway employee using a wooden pole; in a British freight yard the employee was known as a ‘shunter’. The shunter lifted the end of the chain with his pole and dropped it over the hook of the next vehicle (this left one hook and chain unused). The pole enabled the shunter to make the connection without stepping onto the track between the vehicles. The chain couldn’t be drawn tight, so the system was noisy, and for many years, ‘loose-coupled’ trains would rattle and clank their way along railways in many parts of the world. But loose couplings were also a boon, because they made starting easier for a freight train carrying a heavy load. When the locomotive began to move, the coupling to wagon number 1 would tighten first, and the wagon would begin to move. A moment later, the coupling between wagon number 1 and wagon number 2 would tighten also, and the second wagon would begin to move. The starting wave then passed along the whole train until all the wagons were in motion. The loco did not need to accelerate all the wagons simultaneously, and by applying relatively modest traction to each wagon in turn it was possible to move a load that might otherwise lie beyond the engine’s capacity. But this called for considerable skill from the driver. If the loco accelerated too quickly, the intensity of the starting wave would increase as it moved rearwards from one wagon to the next. By the time it neared the tail end of the train, the wagons downstream could be moving quite quickly, and any wagon near the tail would receive a considerable jerk or ‘snatch’ that was liable to break the coupling. When braking, the same process could happen in reverse, with the wagons at the tail running into the main body of the train and making a considerable impact. The shock loads encountered on passenger trains were less severe, but they made travel uncomfortable, particularly for passengers in the last coach. Consequently, loose-coupled trains couldn’t travel very fast, and locomotive engineers strived for something better. The solution was the screw coupling, invented by Henry Booth, a railway promoter who worked with George Stephenson and became the secretary and treasurer of the Liverpool and Manchester Railway company. A screw coupling was just a three-link chain whose central link was replaced with a threaded spindle . Each vehicle was equipped at either end with a screw chain and a hook. To connect two vehicles, using his pole the shunter would lever one of the screw chains over the hook of the other vehicle as before, and after he had connected the vehicles in a train, he would visit each coupling in turn and wind the spindle to tighten the chain. This drew the vehicles together and produced a small pressure on the buffers so there was less jerking when the train moved off or when the brakes were applied [27]. But it was a risky operation, and remains so to this day. 
To complete the last step, the shunter is obliged to move into the gap between neighbouring vehicles and for a while becomes invisible to the train driver. The same applies when uncoupling two vehicles: the shunter must move into the gap and loosen the screw link before using his pole to lever the chain off its hook. Any inadvertent movement of the train can lead to horrendous consequences. And both the screw coupling and its predecessor the three-link coupling carry another risk. If a vehicle moves while the shunter is attaching or detaching the coupling, his pole can jam and throw him bodily into the air. While working for the national railway company some years ago, your writer was assigned to help carry out a survey at the scene of a fatal accident of this kind that had occurred in a freight yard in the north of England. #### Automatic couplers In a freight yard where hundreds of wagons are sorted every day, coupling and uncoupling is a labour-intensive process that adds significantly to the journey time of each consignment and to the cost of the operation as a whole. Quite early on, railroad companies in the USA grasped the need for coupling that would engage automatically, with minimum risk to employees. The solution was provided by Major Eli Janney, a Confederate veteran of the Civil War, who patented his device in 1873. Adopted as standard in the USA eighty years later [18], it is still in common use. It looks a little like a human hand, with the fingers bent round to grasp the coupling on the next vehicle in the train . The devices at opposite ends are both right-handed, so that the couplers on any wagon will mate with couplers on any other wagon regardless of orientation. Each coupler is welded to a draught gear unit that we’ll describe shortly. Variations of the Janney coupler have since spread to countries throughout the world under different names including the buckeye (the name by which it is commonly known in the UK, where it is used mainly for passenger stock) and proprietary names such as ‘Tightlock’. The secret of the coupler’s success is a locking pin that holds the outer ‘fingers’ firmly in place. Before two wagons can be connected, a shunter must raise the locking pins so the fingers can swivel out of each other’s way. He does this manually via the cut lever, which is operated from either side of the wagon and does not require the shunter to step between the wagons. The engine driver then closes the gap. When two wagons are pushed together the fingers close around each other and the locking pins drop automatically into place. Later, the shunter can hook up the brake pipe, and in the case of a passenger train, the electrical connections. There are several aspects of the design that we won’t go into here, some of them quite subtle such as such as the shape of the knuckle faces, which must allow for variations in the geometry that occur when the angle between neighbouring wagons changes during the course of a journey, for example on a horizontal curve or on the crest of a vertical curve. And several alternative devices have since appeared such as the SA3 coupling [15], which was once mooted as a standard coupler for railways in Europe but was never taken up. It is now widely used in countries of the old Soviet Union. There is however a fully automatic system in which the brake pipe and electrical connections are made in a single operation when the two vehicles are pushed together. 
The Scharfenberg coupling is a complex and sophisticated piece of equipment widely used on German high-speed passenger trains [14]. #### Draft gear One of the requirements that distinguishes the Janney coupler from chain link types is that it must be rigidly mounted on the vehicle frame so that the two knuckles of neighbouring vehicles line up when they are pushed together and will transmit compressive forces between vehicles. In order to reduce shock loads during shunting, the coupler allows longitudinal movement up to about 80 mm against the pressure of a spring and damper. The spring and damper are contained within a cylinder welded to the wagon chassis, and together the assembly is known as the draft gear. The spring can be made from polymer or steel, and the coupler shank is encircled by steel wedges to provide a simple form of damping [6]. ### Conclusion When freight wagons or passenger coaches are joined together to make a train, they can move about on the network in greater numbers, and in greater safety, than would otherwise be possible. The aim is to ensure an appreciable time gap between successive vehicles entering any particular section of track so they don’t run into one another. Longer trains mean longer gaps - and fewer drivers. The result, paradoxically, is that for most of the time a railway track seems idle. If you stand on a railway bridge overlooking a main line, you may have to wait for several minutes before anything happens. And when a train appears, it will occupy the track only for few seconds before the line becomes quiet again. The principle has worked well for over a hundred years and will probably continue to do so as far as passenger traffic is concerned. But freight traffic is a different matter. Freight trains are cumbersome to assemble and take apart, and the shunting process accounts for much of the total journey time from depot to depot. How much better if wagons could be routed individually or in small groups, travelling direct from origin to destination and thereby making use of spare network capacity outside busy periods [28] [30]. Using technology developed over the last twenty years or so, motor cars can effectively drive themselves, so it’s not difficult to imagine autonomous freight wagons, each powered by its own electric motor or diesel engine, travelling in close formation like ghost trains in the night. ### Loose ends Based on tests and computer simulation, there seems to be a convincing case that one can safely push a passenger train from behind - it’s unlikely to derail if suitable precautions are taken. But this leaves open the question of what might happen if things went wrong. When a coach in a front-hauled train derails, it is likely to be dragged along without necessarily colliding with line-side structures or an opposing train until the train is brought to a halt. A different scenario springs to mind for a pusher train. If the leading coach of a pushed train derails at the leading bogie, the train seems likely to push the coach to one side so that it ends up doubling back and possibly dragging other coaches with it. In either case, the subsequent career of the train could depend on how quickly the driver perceives what has happened and applies the brakes, and whether the brake pipe connections remain intact. Are there any crash simulation results that might throw light on the relative merits of the two types of train formation on the effects of derailment?
# source:docs/PACT2011/03-research.tex@1024

Last change on this file since 1024 was 1003, checked in by ksherdy, 9 years ago (Edits). File size: 9.3 KB

\section{Parabix}
\label{section:reserach}
%Describe key technology behind Parabix
%Introduce SIMD;
%Highlight which SSE instructions are important
%Talk about each pass in the parser; How SSE is used in every phase...
%Benefits of SSE in each phase.

% Extract section 2.2 and merge into 3. Add a new subsection
% in section 2 saying a bit about SIMD. Say a bit about pure SIMD vertical
% operations and then mention the pack operations that allow
% us to implement transposition efficiently in parallel.
% Also note that the SIMD registers support bitwise logic across
% their full width and that this is extensively used in our work.
%
% Also, it could be good to have a small excerpt of a byte-at-a-time
% scanning loop for XML, e.g., extracted from Xerces in section 2.1.
% Just a few lines showing the while loop - Linda can tell you the file.
%

% This section focuses on the

% With this method, byte-oriented character data is first transposed to eight parallel bit streams, one for each bit position within the character code units (bytes). These bit streams are then loaded into SIMD registers of width $W$ (e.g., 64-bit, 128-bit, 256-bit, etc). This allows $W$ consecutive code units to be represented and processed at once. Bitwise logic and shift operations, bit scans, population counts and other bit-based operations are then used to carry out the work in parallel \cite{CameronLin2009}.

% The results of \cite{CameronHerdyLin2008} showed that Parabix, the predecessor of Parabix2, was dramatically faster than both Expat 2.0.1 and Xerces-C++ 2.8.0.
% It is our expectation that Parabix2 will outperform both Expat 2.0.1 and Xerces-C++ 3.1.1 in terms of energy consumption per source XML byte.
% This expectation is based on the relatively-branchless code composition of Parabix2 and the more-efficient utilization of last-level cache resources.
% The authors of \cite{bellosa2001, bircher2007, bertran2010} indicate that such factors have a considerable effect on overall energy consumption.
% Hence, one of the foci in our study is the manner in which straight line SIMD code influences energy usage.

\subsection{Parabix1}

% Our first generation parallel bitstream XML parser---Parabix1---employs a less conventional approach of SIMD technology to represent text in parallel bitstreams. Bits of each stream are in one-to-one correspondence with the bytes of a character stream. A transposition step first transforms sequential byte stream data into eight basis bitstreams for the bits of each byte.

At a high level, Parabix1 processes source XML in a functionally equivalent manner to a traditional processor. That is, Parabix1 moves sequentially through the source document, maintaining a single cursor position throughout the parsing process. Where Parabix1 differs from the traditional parser is that it scans for key markup characters using a series of basis bitstreams. A bitstream is simply a sequence of $0$s and $1$s, where there is one such bit in the bitstream for each character in a source data stream. A basis bitstream is a bitstream that consists of only transposed textual XML data. In other words, a source character consisting of $M$ bits can be represented with $M$ bitstreams and, by utilizing $M$ SIMD registers of width $W$, it is possible to scan through $W$ characters in parallel. The register width $W$ varies between 64-bit for MMX, 128-bit for SSE, and 256-bit for AVX. Figure \ref{fig:inputstreams} presents an example of how we represent 8-bit ASCII characters using eight bitstreams. $B_0 \ldots B_7$ are the individual bitstreams. The $0$ bits in the bitstreams are represented by periods, so that the $1$ bits stand out.

\begin{figure}[h]
\begin{center}
\begin{tabular}{cr}\\
source data $\vartriangleright$ & \verb<t1>abc</t1><t2/>\\
$B_0$ & \verb..1.1.1.1.1....1.\\
$B_1$ & \verb...1.11.1..1..111\\
$B_2$ & \verb11.1...111.111.11\\
$B_3$ & \verb1..1...11..11..11\\
$B_4$ & \verb1111...1.111111.1\\
$B_5$ & \verb11111111111111111\\
$B_6$ & \verb.1..111..1...1...\\
$B_7$ & \verb.................\\
\end{tabular}
\end{center}
\caption{Parallel Bitstream Example}
\label{fig:inputstreams}
\end{figure}

In order to represent the byte-oriented character data as parallel bitstreams, the source data is first loaded in sequential order and converted into its transposed representation through a series of packs, shifts, and bitwise operations. Using the SIMD capabilities of current commodity processors, this transposition of source data to bitstreams incurs an amortized overhead of about 1 CPU cycle per byte \cite{CameronHerdyLin2008}. When parsing, we need to consider multiple properties of characters at different stages during the process. Using the basis bitstreams, it is possible to combine them using bitwise logic in order to compute character-class bitstreams; that is, streams that identify the positions at which characters belonging to a specific character class occur. For example, an ASCII character is an open angle bracket `<' if and only if $B_2 \land \ldots \land B_5 = 1$ and the other basis bitstreams are 0 at the same position within the basis bitstreams. Once these character-class bitstreams are created, bit-scan operations, common to commodity processors, can be used for sequential markup scanning and data validation operations. A common operation in all XML parsers is identifying the start tags (`<') and their accompanying end tags (either `/>' or `>', depending on whether the element tag is an empty element tag or not).

\begin{figure}[h]
\begin{center}
\begin{tabular}{lr}\\
source data $\vartriangleright$ & \verb<t1>abc<t1/><t2/>\\
% $N =$ name chars & \verb.11.111.11...11..\\
$M_0 = 1$ & \verb1................\\
$M_1 = advance(M_0)$ & \verb.1...............\\
$M_2 = bitscan('>')$ & \verb...1.............\\
$M_3 = advance(M_2)$ & \verb....1............\\
$M_4 = bitscan('<')$ & \verb.......1.........\\
$M_5 = bitscan('/')$ & \verb..........1......\\
$M_6 = advance(M_5)$ & \verb...........1.....\\
$M_7 = bitscan('<')$ & \verb.............1...\\
$M_8 = bitscan('/')$ & \verb...............1.\\
$M_9 = advance(M_8)$ & \verb................1\\
% $M_2 \lor M_6 \lor M_9$ & \verb...1.......1....1\\
\end{tabular}
\end{center}
\caption{Parabix1 Start and End Tag Identification}
\label{fig:Parabix1StarttagExample}
\end{figure}

Unlike traditional parsers, these sequential operations are accelerated significantly since bit scan operations can perform up to $W$ finite state transitions per clock cycle. This approach has recently been applied to Unicode transcoding and XML parsing to good effect, with research prototypes showing substantial speed-ups over even the best of byte-at-a-time alternatives \cite{CameronHerdyLin2008, CameronLin2009, Cameron2010}.

% In section 3, we should try to explain a bit more detail of the
% operation. Under Parabix 1, a little bit on transposition
% and calculation of the [<] bitstream would be good, perhaps
% using the examples from the 2010 Technical Report or EuroPar submission.

\subsection{Parabix2}

% Under Parabix 2 a little discussion of bitwise addition for
% scanning, perhaps again excerpted from the TR/EuroPar submission
% would be good.

%In Parabix2, we replace the sequential single-cursor parsing using bit scan instructions with a parallel parsing method using bitstream addition. Unlike the single cursor approach of Parabix1 and conceptually of traditional sequential approach, in Parabix2 multiple cursors positions are processed in parallel.

In Parabix2, we replace the sequential single-cursor parsing using bit scan instructions with a parallel parsing method using bitstream addition. Unlike the single-cursor approach of Parabix1 (and conceptually of all sequential XML parsers), Parabix2 processes multiple cursors in parallel. For example, using the source data from Figure \ref{fig:Parabix1StarttagExample}, Figure \ref{fig:Parabix2StarttagExample} shows how Parabix2 identifies and moves each of the start tag markers forwards to the corresponding end tag. Like Parabix1, we assume that $N$ (the name chars) has been computed using the basis bitstreams.

\begin{figure}[h]
\begin{center}
\begin{tabular}{lr}\\
source data $\vartriangleright$ & \verb<t1>abc<t1/><t2/>\\
$N =$ name chars & \verb.11.111.11...11..\\
$M_0 = [<]$ & \verb1......1....1....\\
$M_1 = \texttt{advance}(M_0)$ & \verb.1......1....1...\\
$M_2 = \texttt{scanto}('/','>')$ & \verb...1......1....1.\\
$M_3 = \texttt{scanto}(>)$ & \verb...1.......1....1\\
\end{tabular}
\end{center}
\caption{Parabix2 Start and End Tag Identification}
\label{fig:Parabix2StarttagExample}
\end{figure}

In general, the set of bit positions in a marker bitstream may be considered to be the current parsing positions of multiple parses taking place in parallel throughout the source data stream. A further aspect of the parallel method is that conditional branch statements used to identify syntax errors at each parsing position are eliminated. Although we do not show it in the prior examples, error bitstreams can be used to identify any well-formedness errors found during the parsing process. Error positions are gathered and processed as a final post-processing step. Hence, Parabix2 offers additional parallelism over Parabix1 in the form of multiple cursor parsing, as well as significantly reducing the branch misprediction penalty.
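The operations described in this source file are easy to model outside of SIMD code. The sketch below treats each bitstream as an arbitrary-precision Python integer (bit i corresponds to character i, with B_0 the least significant bit of each byte, matching the bitstream figure above), and implements transposition, a single-character class, advance, and a scan-to-delimiter step using the carry-propagation idiom ScanThru(M, C) = (M + C) ∧ ¬C used in the parallel bitstream literature. It is an illustrative model only, not the actual SIMD implementation, and the helper names are mine.

```python
def transpose(data: bytes):
    """Model of the transposition step: basis[k] has bit i set when bit k of
    byte i is set (B_0 = least-significant bit, as in the figure above)."""
    basis = [0] * 8
    for i, byte in enumerate(data):
        for k in range(8):
            if (byte >> k) & 1:
                basis[k] |= 1 << i
    return basis

def char_class(basis, ch, length):
    """Bitstream marking every position holding the single character ch,
    built purely from bitwise logic on the basis streams."""
    code, stream = ord(ch), (1 << length) - 1
    for k in range(8):
        stream &= basis[k] if (code >> k) & 1 else ~basis[k]
    return stream & ((1 << length) - 1)

def advance(m, length):
    """Move every cursor one position towards the end of the text."""
    return (m << 1) & ((1 << length) - 1)

def scan_to(m, delims, length):
    """Move each cursor forward to the next delimiter, via ScanThru(M, C) = (M + C) & ~C."""
    run = ~delims & ((1 << length) - 1)   # positions we are allowed to scan through
    return ((m + run) & ~run) & ((1 << length) - 1)

def show(m, length):
    return ''.join('1' if (m >> i) & 1 else '.' for i in range(length))

text = b"<t1>abc<t1/><t2/>"
n = len(text)
basis = transpose(text)
delims = char_class(basis, '/', n) | char_class(basis, '>', n)

m0 = char_class(basis, '<', n)        # the [<] stream
m1 = advance(m0, n)
m2 = scan_to(m1, delims, n)
print(show(m0, n))   # 1......1....1....
print(show(m1, n))   # .1......1....1...
print(show(m2, n))   # ...1......1....1.
```

The three printed streams match the M_0, M_1 and M_2 rows of the Parabix2 figure, which is the point of the exercise: cursor movement becomes plain integer addition and masking.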
January 08, 2016 # Single-Decree Paxos Paxos is an algorithm which maintains a distributed, consistent log shared by a set of networked computers. Single-Decree Paxos is a slightly simpler algorithm that solves consensus and is used to implement Paxos. Both algorithms were first described by Leslie Lamport in "The Part-Time Parliament" and later described more directly in "Paxos Made Simple". In this article, we describe what consensus is, why it's so hard, and how Single-Decree Paxos solves it. Note that we won't discuss full blown Paxos. # What is Consensus? Assume computers are fail-stop and connected by an asynchronous network. • By fail-stop, we mean that any computer can crash at any time, and that any crashed computer can restart after any amount of time. When a computer crashes, it loses all of the data it is currently operating on, but computers can write values to stable storage and can recover these values upon restart. • By asynchronous, we mean that the network can drop, duplicate, re-order, and arbitrarily delay messages. We only assume that the network doesn't corrupt messages and that messages are eventually delivered if they are repeatedly sent. Consider a set $\{a, b, c\}$ of computers that want to agree on a chosen value. Some computers propose values, and other computers accept values; some computers do both. For example, perhaps the computers want to choose a leader amongst themselves. $a$ could send a message to $b$, $c$, and itself proposing that $b$ should be leader, and all three computers could accept this proposal. The act of a set of computers choosing a single value is known as consensus. Single-Decree Paxos is one example of an algorithm that can be used to reach consensus. In general, in order for a consensus algorithm to be safe, it has to meet a set of rather obvious conditions whenever it terminates: • Only one value can be chosen. Duh! • Only values proposed can be chosen. If this weren't a requirement, you could construct a rather silly yet still correct consensus algorithm in which all computers instantly agree on some predefined value. Moreover, it's desirable that a consensus algorithm guarantee some form of progress. The Fischer, Lynch, Paterson impossibility result tells us that no consensus algorithm can guarantee that it always terminates given fail-stop computers in an asynchronous network, but we'd still like some promise that a given consensus algorithm usually terminates after sufficient time given enough computers haven't crashed. For example, a consensus algorithm that fails to terminate after a single computer failure doesn't guarantee much progress. On the other hand, a consensus algorithm that can still operate correctly even after a minority of computers have failed (e.g. Single-Decree Paxos) guarantees a stronger notion of progress. # Why is Consensus Hard? Initially, consensus doesn't seem like that hard of a problem. Checking to see if a boolean formula is satisfiable, finding the minimum number of colors needed to color a graph, or checking to see if two graphs are isomorphic: these problems seem tough! Having computers choose a single value; seems kinda easy, huh? Well, it turns out that consensus is tougher than it sounds! To convince ourselves of this fact, let's consider a couple simple consensus algorithms we might think of and show why they fail to solve consensus. Perhaps the simplest algorithm we could think of is to predetermine some leader which has the exclusive responsibility of choosing the value. 
Proposers send their proposals to the leader, and the leader accepts the first value it receives, deeming it chosen. While this algorithm is surely safe, it doesn't guarantee much progress. Whenever the leader fails, the algorithm is doomed to not terminate! Here's a slightly more complicated algorithm that tries to guarantee a bit more progress. The main idea is that we can tolerate more computer failures by sending proposals to more computers. Proposers send proposals to all computers, and all computers accept the first value they receive. Whenever a majority of computers accept a proposal, we'll say it's chosen. This algorithm can still operate even when a minority of computers have crashed; yay! But unfortunately, if multiple proposers concurrently propose values to acceptors, it is possible to reach a split vote where no single proposal has a majority of votes. For example, consider a five computer cluster: $\{a, b, c, d, e\}$. Assume $a$, $c$, and $e$ propose values 1, 2, and 3 respectively. If $a$ and $b$ receive proposal 1 first, $c$ and $d$ receive proposal 2 first, and $e$ receives proposal 3 first, then none of proposal 1, 2, or 3 has a majority of acceptances. Thus, this algorithm can fail to terminate even when no computer fails! # Single-Decree Paxos Now that we've familiarized ourselves with consensus and convinced ourselves that it's a challenging problem to solve, let's introduce Single-Decree Paxos: an algorithm which successfully solves consensus. We let every computer in our cluster act as a proposer and an acceptor. To tolerate a minority of computer failures, we'll say a value is chosen when it is accepted by a majority of acceptors. First, we discuss the invariants of the algorithm, then we discuss why the invariants imply the algorithm is safe, and finally we present the algorithm. ## Invariants Single-Decree Paxos maintains two invariants: 1. We let each proposal be of the form $(v, i)$ where $v$ is an arbitrary proposed value and $i$ is an identifier. Our first invariant is that all proposal identifiers are unique. An easy way to construct unique identifiers is to have each computer $c$ maintain a monotonically increasing integer $i$. $i$ is stored on disk and is incremented after $c$ sends a proposal. Whenever computer $c$ sends a proposal, $c$ tags the proposal with the id $ci$. For example, if $a$ proposes values $\text{foo}$, then $\text{bar}$, then $\text{baz}$, its proposals would be of the form: $(\text{foo}, a1)$, $(\text{bar}, a2)$, and $(\text{baz}, a3)$. Also note that we can impose a total ordering on identifiers by comparing them lexicographically (e.g. $a1 < a2 < b1$). 2. Consider a proposal $(v, i)$ that is sent to some majority $C$ of computers. Let $P$ be the set of proposals with an identifier smaller than $i$ accepted by any member of $C$. For example, if $(v, i) = (\text{apple}, d1)$, $C = \{a, b, c\}$, and $a$, $b$, and $c$ have accepted proposals $\{(\text{banana}, a1), (\text{grape}, b1), (\text{banana}, e1)\}$, $\{\}$, and $\{(\text{grape}, b1), (\text{peach}, c1)\}$ respectively, then $P = \{(\text{banana}, a1), (\text{grape}, b1), (\text{peach}, c1)\}$. Note that $(\text{banana}, e1) \notin P$ because $e1 > d1$. Let $v'$ be the value associated with the largest identifier in $P$. In our example, the largest identifier in $P$ is $c1$, so $v' = \text{peach}$. Our second invariant states that $v$ must equal $v'$. 
In other words for all proposals $(v, i)$, the value $v$ of the proposal sent to a majority of computers must equal the value of the proposal with the largest identifier less than $i$ accepted by any of the computers. Our simple example doesn't meet this invariant because the $\text{apple}$ in our proposal should be $\text{peach}$. ## Why Invariants Imply Safety Invariant 1 is rather simple and uninteresting. Invariant 2, on the other hand, is the true workhorse behind ensuring safety. Consider an execution of Single-Decree Paxos where $(v, i)$ is the first chosen proposal; that is $(v, i)$ is the first proposal accepted by some majority of acceptors, which we'll denote $C$. After $(v, i)$ is chosen, Invariant 2 tells us that all proposals $(v', i')$ with $i' > i$ will have $v = v'$! Here's why. Consider the first proposal $(v', i')$ issued to a majority $C'$ after $(v, i)$ is chosen. Invariant 2 tells us that $v'$ must be equal to the value of the proposal with the largest identifier less than $i'$ accepted by any computer in $C'$. Since $i'$ is the first proposal larger than $i$, $i$ is the largest identifier of any proposal accepted by any computer. Moreover, all computers in $C$ have accepted $(v, i)$ and since $C$ and $C'$ overlap, some computer in $C'$ must have accepted $(v, i)$ too. Putting these facts together, Invariant 2 says $v' = v$. We can apply this reasoning iteratively to see that every proposal with identifier $i' > i$ has $v' = v$. This means that after a value is chosen, the set of computers will never accept a value other than the chosen one because all larger proposals are proposals for the chosen value! ## The Algorithm Single-Decree Paxos is a two-phase protocol. Assume a proposer wants to propose some value $v_0$. In the first phase, proposers send a prepare message with identifier $i$ to a majority (or more) of acceptors, and the acceptors reply with the largest value they have accepted (if any) with identifier smaller than $i$. Once the proposer receives a majority of responses to its prepare request, it makes a decision. If none of the acceptors it contacted have accepted a value with identifier less than $i$, then it's free to propose $v_0$. Otherwise, if one or more acceptors have accepted some value with identifier less than $i$, it throws away $v_0$ and instead proposes the value with the largest identifier returned by the acceptors. This enforces Invariant 2. Moreover, when an acceptor receives a prepare message with id $i$, it promises to never accept a proposal with identifier less than $i$. This is to ensure that between a proposer deciding a value to propose in the first phase and proposing it in the second phase, another proposer doesn't get some other value with a smaller identifier chosen. This also enforces Invariant 2. In the second phase, the proposer sends an accept request with its proposed value determined in the first phase to a majority of acceptors. Acceptors accept a value if they haven't already promised in the first phase not to, and if a majority of acceptors accept the proposal, the value is chosen.
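To make the two phases concrete, here is a minimal, single-process Python sketch of the acceptor and proposer rules described above. It is only an illustration under simplifying assumptions: the names (`Acceptor`, `propose`) are ours rather than part of any standard library, messages are plain function calls instead of network traffic, and a real implementation would write the promised and accepted state to stable storage before replying and would retry with a larger identifier when a round fails.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Identifiers are (computer, counter) pairs compared lexicographically,
# mirroring the article's ordering a1 < a2 < b1 (Invariant 1: uniqueness).
Id = Tuple[str, int]
Proposal = Tuple[Id, str]  # (identifier, value)

@dataclass
class Acceptor:
    promised: Optional[Id] = None        # highest identifier promised in phase 1
    accepted: Optional[Proposal] = None  # highest-identifier proposal accepted so far

    def prepare(self, i: Id):
        """Phase 1: promise to ignore identifiers below i and report
        the highest-identifier proposal accepted so far (if any)."""
        if self.promised is None or i > self.promised:
            self.promised = i
            return ("promise", self.accepted)
        return ("reject", None)

    def accept(self, i: Id, value: str) -> bool:
        """Phase 2: accept unless a higher identifier was already promised."""
        if self.promised is None or i >= self.promised:
            self.promised = i
            self.accepted = (i, value)
            return True
        return False

def propose(acceptors, i: Id, v0: str) -> Optional[str]:
    """Run one proposal round; return the value chosen, or None if the round failed."""
    # Phase 1: gather promises from a majority.
    replies = [a.prepare(i) for a in acceptors]
    promises = [prev for tag, prev in replies if tag == "promise"]
    if len(promises) <= len(acceptors) // 2:
        return None
    # Invariant 2: if any contacted acceptor has already accepted a proposal,
    # adopt the value of the one with the largest identifier instead of v0.
    already = [p for p in promises if p is not None]
    value = max(already)[1] if already else v0
    # Phase 2: the value is chosen once a majority accepts it.
    votes = sum(a.accept(i, value) for a in acceptors)
    return value if votes > len(acceptors) // 2 else None

if __name__ == "__main__":
    cluster = [Acceptor() for _ in range(5)]
    print(propose(cluster, ("a", 1), "foo"))  # "foo" is chosen
    print(propose(cluster, ("b", 1), "bar"))  # still "foo": Invariant 2 preserves the choice
```

Running this sketch prints "foo" twice: the first round chooses foo, and the second proposer discards its own value bar in favor of the already-accepted one, which is exactly the behavior the safety argument above relies on.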
# survHE light

I've made a major refactoring of (the development version of) survHE. I guess one of the main issues with the package (both from the point of view of the user and the maintainer) was that survHE is a big package and installation is a very lengthy process. And this is no surprise: the trade-off here is between the massive savings in computational time that are obtained by pre-compiling the Bayesian models available (through rstan) and the time it takes to get everything installed on your machine… And, from the developer's point of view, submission to CRAN has often been a pain, because some of the files that get installed are very large and, again because of the nature of the package, there's quite an intricate structure of "dependencies", which makes the package very heavy.

What I've now done is, to put it pompously, in the spirit of some kind of survHE-verse, in the sense that I've split up the package into three parts (well, in fact three packages, really). The first one (which I'm still calling survHE) does contain all the backbone and prepares for the full functionalities (i.e. running the built-in survival models under both a frequentist approach, using flexsurv, and a Bayesian approach, using either INLA or rstan). But, crucially, the new survHE isn't enough to open up all these facilities — it only implements the simpler frequentist models and so dispenses with lots of the complicated, computationally intensive and time-consuming bits. So if you install survHE with:

```r
remotes::install_github("giabaio/survHE", ref="devel")
```

• You can only run the models using flexsurv;
• All the options for the Bayesian models are coded up… BUT: you need to install additional "modules" to enable the INLA and rstan facilities of survHE.

You do this with:

```r
# Install the INLA module
remotes::install_github("giabaio/survHE", ref="inla")
```

and/or

```r
# Install the HMC module
remotes::install_github("giabaio/survHE", ref="hmc")
```

The first of these two packages/steps isn't too time-consuming and installing survHEinla is fairly quick. The second is the actual bottleneck and installing survHEhmc is a longer process — because, like I said, it does install all the pre-compiled models and the heavy dependencies that come from rstan.

Basically, survHEinla and survHEhmc are not really stand-alone packages. The user shouldn't call them individually and, in effect, they don't have all the actual facilities (e.g. the functions to plot and produce summaries, as well as the PSA facilities, which are still coded up in the main installation of survHE). On the contrary, they both "depend" on survHE, so that when they are loaded, survHE and all its functions are also automatically loaded.

From the user's point of view, not much changes. You can still run a model using fit.models like this:

```r
# Loads the "basic" survHE
library(survHE)
# Loads the example dataset from 'flexsurv'
data(bc)
# Fits a survival model using 'flexsurv' in the background
mle = fit.models(formula=Surv(recyrs,censrec)~group, data=bc, distr="exp", method="mle")
```

To do this, you don't even need to install the Bayesian modules. But if you have installed either or both of them, you can simply specify the option method='hmc' or method='inla' and, in the background, survHE will check that you have the relevant module installed and load it, if so.
As far as the user is concerned, the call to fit a Bayesian model is the same as before:

```r
# Loads the "basic" survHE
library(survHE)
# Loads the example dataset from 'flexsurv'
data(bc)
# Fits a survival model using INLA in the background
inla = fit.models(formula=Surv(recyrs,censrec)~group, data=bc, distr="exp", method="inla")
```

or

```r
hmc = fit.models(formula=Surv(recyrs,censrec)~group, data=bc, distr="exp", method="hmc")
```

If you request a Bayesian model but have only installed survHE (and none of the Bayesian modules), the above calls with method set to either inla or hmc will return a message instructing you to install survHEinla and/or survHEhmc, which you do as above, using remotes::install_github.

I'll probably re-package it all and submit the three separate packages to CRAN — although I may leave survHEinla and survHEhmc on the GitHub repo only — this would make the submission process much easier, because in the current version, survHE is a very lightweight package. And installing from GitHub is increasingly easy — and very efficient for us to manage/update.
# Tag Info

[3] This mixed model should give you the same results as the repeated measures ANOVA:

```
MIXED accuracy BY training section format
  /CRITERIA = CIN(95) MXITER(100) MXSTEP(5) SCORING(1) SINGULAR(0.000000000001) HCONVERGE(0, ABSOLUTE) LCONVERGE(0, ABSOLUTE) PCONVERGE(0.000001, ABSOLUTE)
  /FIXED = training section format section*training format*training ...
```

[2] SPSS usually provides univariate tests of such a main effect on each variable all the way down in the output ("Tests of Between-Subjects Effects"), even for doubly multivariate designs. So, barring any particular problem in the way you specified the model, they should be there. There are also multivariate tests of between-subject factors (why are you ...

[2] First of all, given you are using a standard LMM, you definitely look to have normally distributed residuals, i.e. $\epsilon \sim N(0, \sigma^2 I)$. When you are using an LMM you are practically saying that your data have the distribution $y \sim N(X\beta, \sigma^2 I + ZDZ^T)$. So, in relation to what we wrote above: $y|u \sim N(X\beta + Zu, \sigma^2 I)$ ...

[1] @GaelLaurans makes a good point that you are thinking of this as a repeated-measures analysis, but you are actually fitting a regular ANOVA. Thus, this isn't the problem of how to correctly determine the denominator's degrees of freedom (which is what I had linked to in my comment above). I think the issue here is simpler. You have four data points, ...
# How to tell a battery's ideal charging voltage with a circuit?

(Schematic created using CircuitLab.)

I've been looking to build a universal charger for my laptop and phone, and I'm wondering: is there a viable way to adjust the input voltage to match the battery's ideal charging voltage by measuring A and B across an arbitrary load? The scenario is as follows:

0) I've got a universal charger that can output anywhere from 5V to 24V.
1) I want to charge my phone, so I plug it into the universal charger.
2) Without my interaction, the circuit and its microcontroller adjust themselves to the required voltage (5V).
3) I now unplug my phone and plug my laptop in for charging.
4) Without my interaction, the circuit and its microcontroller adjust themselves to the required voltage (24V).

How can I measure the required voltage without checking the battery label?

• You either have interaction or you don't ... you cannot have both ... you need to rethink what you are trying to do ... measuring A and B across an arbitrary load is interaction – jsotola Oct 9 '18 at 6:56
# SESAPS 2011 US/Eastern Conference Center (Hotel Roanoke, Roanoke VA) ### Conference Center #### Hotel Roanoke, Roanoke VA 110 Shenandoah Avenue, Roanoke VA 24016 , , Description Leo Piilonen • Wednesday, 19 October • 17:00 18:00 AA. Registration (5:00 to 8:00 pm) 1h Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 18:00 20:00 AB. Welcome Reception 2h Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • Thursday, 20 October • 08:00 08:30 Registration (from 8:00 to 10:00 am) 30m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 08:30 10:30 BA. Strings: Theory and Application Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Leo Piilonen (Virginia Tech) • 08:30 Recent developments in four-dimensional supergravity 30m I will summarize recent work on gauge theories in supergravity, specifically concerning the Fayet-Iliopoulos' parameter. In rigidly supersymmetric gauge theories, this parameter also appears and can vary continuously. In supergravity old lore held that it should always vanish. I will discuss recent developments showing that in fact it can be nonzero, but is quantized, and will explore various ramifications of that result. Speaker: Eric Sharpe (Virginia Tech) • 09:00 Mathematical Surprises From Off-Shell SUSY Representation Theory 30m This presentation reports on the effort to create a mathematical theory for off-shell supersymmetry that is analogous to the construction or roots and weights for Lie groups. The construction begins with the introduction of Adinkras, diagram analogous to weight space representation for quarks. Recent surprising results are discussed. Speaker: S. James Gates (University of Maryland) • 09:30 Real-time finite temperature AdS/CFT and jet quenching 30m I will introduce a simple prescription for computing real-time finite temperature n-point functions in AdS/CFT. When used to compute the stopping distance of a highly energetic jet moving through strongly coupled N=4 superYang-Mills plasma, the typical jet stopping distance scales with energy as (EL)^{1/4}, where L is the size of the region where the jet was created. Speaker: Diana Vaman (University of Virginia) • 10:00 Holographic superconductors at low temperatures 30m Holographic models of superconductivity offer a promising approach to the understanding of strongly coupled superconductors. Their properties are derived from non-linear field equations which are hard to solve, especially at low temperatures. I will discuss analytic tools that generate solutions down to zero temperature. This exploration is important for the understanding of the ground state of these systems. I will present results in the probe limit (vanishing chemical potential mu), as well as the extremal limit (small critical temperature to chemical potential (T_c/mu) ratio). Speaker: George Siopsis (University of Tennessee at Knoxville) • 08:30 10:30 BB. Nano Materials Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Michel Pleimling (Virginia Tech) • 08:30 Graphene: it's all about the surface 30m Every atom of graphene, a monolayer of graphite, belongs to the surface. Therefore, the environment of graphene -- the substrate onto which graphene is deposited and the coating on top of graphene -- intimately affects the properties of graphene. In this talk, we demonstrate that both mechanical and electrical properties of graphene can be greatly tuned by varying its environment. 
First, we discuss ultraclean graphene devices suspended in vacuum. We achieve a carrier mobility in excess of 200,000 cm^2/Vs in these devices and demonstrate previously inaccessible transport regimes, including ballistic transport and the fractional quantum Hall effect. Second, we explore the electrical properties of graphene surrounded by liquid dielectrics. We find that the ions in liquids can cause strong scattering in graphene and demonstrate very large values for room temperature mobility (>60,000 cm^2/Vs) in ion-free liquids with high dielectric permittivity. Finally, we demonstrate that the environment of graphene affects its mechanical properties. We develop a novel technique to study the mechanical properties of graphene films attached to substrates by measuring the temperature-dependent deflection of a "bimetallic" cantilever composed of graphene and silicon nitride or gold layers. We demonstrate that the built-in strain, the substrate adhesion force and even the thermal expansion coefficient of graphene depend on the substrate under it. Speaker: Kirill Bolotin (Vanderbilt University) • 09:00 Flat-band Nanostructures 30m The electronic band structure of many systems, e.g., carbon-based nanostructures, can exhibit essentially no dispersion. Models of electrons in such flat-band lattices define non-perturbative strongly correlated problems by default. Here strong interactions can give rise to novel quantum phases of matter with intriguing collective excitations. Flat bands therefore allow the possibility of discovering emergent physics determined solely by interactions. I will review work that theoretically explores strongly correlated lattice models with flat bands. Zero-field flat-band lattice systems offer arenas to study quantum crystals, quantum liquids, and magnetism. I will also discuss recent results from microscopic modeling of a specific flat-band system, electrons in graphene nanoribbons with zig zag edges. Here I will show that interactions can lead to quantum crystals with ferromagnetic order. Speaker: Vito Scarola (Virginia Tech) • 09:30 Spin-dependent quantum transport in nanoscaled geometries 30m We discuss experiments where the spin degree of freedom leads to quantum interference phenomena in the solid-state. Under spin-orbit interactions (SOI), spin rotation modifies weak-localization to weak anti-localization (WAL). WAL's sensitivity to spin- and phase coherence leads to its use in determining the spin coherence lengths Ls in materials, of importance moreover in spintronics. Using WAL we measure the dependence of Ls on the wire width w in narrow nanolithographic ballistic InSb wires, ballistic InAs wires, and diffusive Bi wires with surface states with Rashba-like SOI. In all three systems we find that Ls increases with decreasing w. While theory predicts the increase for diffusive wires with linear (Rashba) SOI, we experimentally conclude that the increase in Ls under dimensional confinement may be more universal, with consequences for various applications. Further, in mesoscopic ring geometries on an InAs/AlGaSb 2D electron system (2DES) we observe both Aharonov-Bohm oscillations due to spatial quantum interference, and Altshuler-Aronov-Spivak oscillations due to time-reversed paths. A transport formalism describing quantum coherent networks including ballistic transport and SOI allows a comparison of spin- and phase coherence lengths extracted for such spatial- and temporal-loop quantum interference phenomena. 
We further applied WAL to study the magnetic interactions between a 2DES at the surface of InAs and local magnetic moments on the surface from rare earth (RE) ions (Gd3+, Ho3+, and Sm3+). The magnetic spin-flip rate carries information about magnetic interactions. Results indicate that the heavy RE ions increase the SOI scattering rate and the spin-flip rate, the latter indicating magnetic interactions. Moreover Ho3+ on InAs yields a spin-flip rate with an unusual power 1/2 temperature dependence, possibly characteristic of a Kondo system. We acknowledge funding from DOE (DE-FG02-08ER46532). Speaker: Jean Heremans (Virginia Tech) • 10:00 Synthesis of nanostructures by combination of electrospinning and sputtering techniques 30m Electrospinning and sputtering are well known techniques for the formation of different materials in the shape of fibers and films, respectively. Both techniques offer the advantage of being able to prepare a broad range of materials, from metals to insulators, in a different range of compositions and structures. Their combined used offers then a unique opportunity to explore the fabrication of different materials with tailored compositions and nanostructures. An interesting application results when the electrospun fibers are used as templates for sputtering of palladium metal. Palladium (Pd) is one of the most prominent materials studied for the detection of hydrogen gas. Hydrogen rapidly dissociates on its surface and diffuses into subsurface layers forming palladium hydride with consequent changes in optical, mechanical and electrical properties that are easily detected. Materials with nanoscale morphologies are promising to improve sensor performance as they provide large surface areas for adsorption, and smaller crystallite size reducing the time needed for "bulk" diffusion. In this presentation it will be shown how Pd nanoribbons and nanoshells are prepared by magnetron sputtering deposition on top of the mat of polymer fibers. Sputtering is a line-of-sight deposition process and the fibers become a variable angle-substrate for the incoming Pd flux. A larger amount of palladium is deposited on top of the fiber where the incoming flux is perpendicular to the surface compared to the sides where the flux is incident at a glancing angle. The top and sides of the fibers shadow their bottom parts closer to the substrate preventing any substantial deposition there. The end result of the deposition is the formation of Pd nanostructures, thicker in the middle region than at the edges, with a large void network. The high sensitivity and response time shown to 1% or less of hydrogen in nitrogen is understood to result from the reduced dimensions combined with this unique nanostructure. A description will be given of the conductance changes with hydrogen concentration as result of the competing mechanisms of percolation and scattering. Speaker: Prof. Wilfredo Otano (University of Puerto Rico – Cayey) • 08:30 10:30 BC. Nuclear Physics I Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Mark Pitt (Virginia Tech) • 08:54 N -> Delta Asymmetry at Low Q^2 12m The Qweak collaboration at Jefferson Lab is determining the weak charge of the proton. This is done by measuring the parity-violating asymmetry of polarized electrons scattered elastically from the proton at a low Q^2 of 0.026 (GeV/c)^2. The measured asymmetry is partially diluted by polarized electrons inelastically scattered off the proton. 
Some Qweak experiment running time was used to measure the asymmetry in the inelastic region, which is dominated by the N -> Delta transition. In addition to constraining backgrounds for Qweak the N -> Delta asymmetry measurement is sensitive to a weakly constrained and theoretically uncertain low energy constant d_Delta. The term involving d_Delta (the "Siegert" term) is non-vanishing in the limit Q^2 -> 0, thus it can dominate the asymmetry at low Q^2. This hadronic electroweak radiative correction is driven by the same matrix element responsible for the large SU(3) breaking effects observed in hyperon decays. An update on the analysis will be presented. Speaker: John Leacock (Virginia Tech) • 09:06 Inclusive DIS: Target Normal Single-Spin Asymmetry 12m An experiment (E07--013) to measure the target normal single spin asymmetry A^n_N in inclusive deep-inelastic n^{\uparrow}(e,e') reaction with a vertically polarized 3He target has completed data collection during Jefferson Lab's Hall A neutron transversity experiment (E06--010). The expected accuracy of this measurement is delta A^n_N = 3 x 10^{-3}. There are no previous measurements of this asymmetry on the neutron. The target normal spin asymmetry in DIS probes helicity--flip amplitudes at the quark level that are related to effects beyond the leading-twist picture of DIS. In view of the predicted rapid variation of the asymmetry between 10^{-2} (exclusive) and 10^{-4} (DIS-inclusive), a non-zero measurement would be sensitive to the transition from hadronic to partonic degrees of freedom. The status and perspectives of the data analysis will be discussed. Speaker: Tim Holmstrom (Longwood University) • 09:18 Helicity-Correlated Systematics in the Qweak Experiment 12m The Qweak experiment at Jefferson Laboratory will provide a 4% measurement of the proton's weak charge Q_w^p, using parity-violating electron scattering from Hydrogen at low momentum transfer. The experiment will measure a tiny parity-violating asymmetry ~256 parts per billion, which means control and precise measurement of systematic errors is a must. While great care is being taken to suppress or eliminate helicity-correlated changes in electron beam properties at the source, broken symmetries in the experimental apparatus can produce false asymmetries in the detected signal. For Qweak we measure the detector sensitivities dA/dx_i (i = 1..5) for first order offline correction of beam-related false asymmetries, using both regression against natural beam motion and a driven modulation system. I will discuss the methodology and status of the helicity-correlated detector sensitivities and how they relate to a precision measurement Qweak. Speaker: Joshua Hoskins (College of William and Mary) • 09:30 Moller Polarimetry for the Qweak Experiment 12m The Standard Model of particle physics has been extremely successful in describing particle interactions in a wide-ranging regime of energy scales. Low-energy, parity-violating experiments enable high-precision experimental tests of Standard Model predictions. Currently, Jefferson Lab is performing one such investigation to determine the weak charge of the proton, Qweak, to 4% precision using ep scattering. By making a precise measurement of the weak charge, this experiment will provide tighter constraints on some classes of "new physics" at 2 TeV or higher. To calculate the parity-violating asymmetry and determine Qweak one needs precise knowledge of the incoming electron beam polarization. 
The Qweak experiment, which is underway in Jefferson Lab's Hall C, uses both Moller and Compton polarimetry to determine the 1 GeV beam polarization. The Hall C Moller polarimeter is particularly relevant as it uses a superconducting magnet to saturate thin, pure iron, foils out of plane. This provides precise measurements of beam polarization to within 1% uncertainty. Since the addition of the Compton device the Moller polarimeter has undergone a re-commissioning phase, followed by myriad studies to reduce the systematic errors to the 0.57% level required by Qweak. A brief overview of the Hall C Moller device, followed by preliminary results of these studies and of the Spring 2011 experiment run, will be provided. Speaker: Joshua Magee (College of William and Mary) • 09:42 A Diamond Micro-strip Electron Detector for Compton Polarimetry 12m The Qweak experiment in Hall C at Jefferson Lab aims to measure the weak charge of the proton with a precision of 4.1% by measuring the parity violating asymmetry in polarized electron-proton elastic scattering. Beam polarimetry is the largest experimental contribution to the error budget. A new Compton polarimeter was installed for a non-invasive and continuous monitoring of the electron beam polarization with a goal of 1% systematic and 1% per hour statistical precision. The Compton-scattered electrons are detected in four planes of diamond micro-strip detectors. These detectors are read out using custom built electronic modules that include a pre-amplifier, a pulse shaping amplifier and a discriminator for each detector micro-strip. We use Field Programmable Gate Array based general purpose logic modules for event selection and histogramming. The polarimeter was commissioned during the first run period of the Qweak experiment. We will show the preliminary results from the electron detector obtained during the first run period of Qweak experiment. Speaker: Amrendra Narayan (Mississippi State University) • 09:54 Sticky Dark Matter in the Effective Field Theory Approach 12m There is experimental evidence that Dark Matter (DM) makes up about 25% of the Universe's mass and is expected to be nonrelativistic in most models. We explore the possibility of the creation and existence of a bound state of Dark Matter and standard model (SM) particles. Such bound states can potentially be created and detected during direct DM search experiments (DAMA, CDMS, XENON etc.). We work in a model-independent approach to determine conditions under which such bound states can be created. Our results appear to be dependent upon the nuclei used in the DM direct detection experiments. In this scenario we determine the region of DM parameter space that provides a simultaneous fit to DAMA and CDMS data. Speaker: Andriy Badin (Duke University) • 10:06 Hadronic loop correction of charmonium decays 12m Recently, the effect of next leading order correction from intermediate hadronic loops to the charmonium decays has been widely studied. However, the coupling constants of the charmonium multiplets and heavy mesons cannot be directly measured from experiments. In this talk, we will present the investigation of hadronic loop correction to both hadronic decays and radiative decays of the lowest excited states of charmonia and try to extract the reasonable coupling constants. Speaker: Di-Lun Yang (Duke University) • 08:30 10:30 BD. 
Physics and Policy Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Beate Schmittmann (Virginia Tech) • 08:30 The Role of Physicists in Policy Making 30m Since World War II, physicists have been involved in various aspects of national life. The roles played have included: 1) Pure or applied researcher, 2) Advisor to policy makers, and 3) Congressman. Today there are many challenges and questions that the United States faces and scientists, physicists included, are often asked on how these challenges should be addressed. In addressing these concerns what is the "proper" role that scientists should play? Do scientists even know what the possible roles are? This talk will briefly address the possible roles that scientists play and what other avenues of input go into the making of policy. Speaker: Thomas Handler (University of Tennessee at Knoxville) • 09:30 Reflections on Science and Innovation Policy 30m Speaker: Jack Wells (Oak Ridge National Laboratory) • 10:30 10:45 Coffee / Refreshments 15m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 10:45 12:45 CA. Recent Progress in Nuclear Astrophysics Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Prof. Jonathan Link (Virginia Tech) • 10:45 Progress towards Low Energy Neutrino Spectroscopy (LENS) 30m The Low-Energy Neutrino Spectroscopy (LENS) experiment will precisely measure the energy spectrum of low-energy solar neutrinos via charged-current neutrino reactions on indium. LENS will test solar physics through the fundamental equality of the neutrino fluxes and the precisely known solar luminosity in photons, will probe the metallicity of the solar core through the CNO neutrino fluxes, and will test for the existence of mass-varying neutrinos. The LENS detector concept applies indium-loaded scintillator in an optically-segmented lattice geometry to achieve precise time and spatial resolution and unprecedented sensitivity for low-energy neutrino events. The LENS collaboration is currently developing a prototype, miniLENS, in the Kimballton Underground Research Facility (KURF). The miniLENS program aims to demonstrate the performance and selectivity of the technology and to benchmark Monte Carlo simulations that will guide scaling to the full LENS instrument. We will present the motivation and concept for LENS and will provide an overview of the R\&D efforts currently centered around miniLENS at KURF. Speaker: Jeff Blackmon (LSU) • 11:15 BOREXINO - A breakthrough in spectroscopy of low energy neutrinos from the sun 30m Low energy (<1 MeV) solar neutrinos account for 99+% of the emitted flux providing the essential window on energy production in the sun. For many decades of solar neutrino research, these could not be directly measured because of the formidable background barrier below 3 MeV. This constraint was broken by the Borexino experiment which has now measured the flux of 0.862 MeV neutrinos from the decay of 7Be in the sun. Indeed, this result is the most precise (<5%) solar neutrino flux known today. A strong push is being made for results on other solar neutrinos. These results arising from extraordinary technical achievements, far exceed initial goals set for this project some 20 years ago. 
I will trace the development and brief history of this project, describe the salient features of the detector, point out the principal technical achievements and present the most recent results and their impact on our understanding of energy production in the sun via the proton-proton chain as well as the CNO cycle. The results bear vitally on neutrino phenomenology as well. In addition to the sun, Borexino has also measured neutrinos from the interior of the earth. Future directions and plans being discussed presently for Borexino will be indicated. Speaker: Ramaswamy Raghavan (Virginia Tech) • 11:45 Exploring the Cosmos from the Ground: Nuclear Astrophysics at UNC/TUNL 30m Nuclear astrophysics is an inherently interdisciplinary field encompassing observational astronomy, astrophysical modeling, and measurements of thermonuclear reaction rates. In general, a group studies only one of these branches in depth; however, the unique nuclear astrophysics group at University of North Carolina--Chapel Hill and Triangle Universities Nuclear Laboratory (TUNL) incorporates both theoretical and experimental research. Currently focusing on nuclear reaction measurements involved in thermonuclear explosions and heavy-element synthesis, the Laboratory for Experimental Nuclear Astrophysics (LENA) utilizes two accelerators with an energy range of ~50-1000 keV and current up to ~1.5 mA to measure proton fusion with various targets. Recent and on-going measurements include 23Na (p,gamma) 24Mg, 14N (p,gamma) 15O, and {17,18}O (p,gamma) {18,19}F. Our group has also formulated a new Monte Carlo method for calculating thermonuclear reaction rates from experimental results (such as resonance strengths), in which a rigorous statistical definition of uncertainties arises naturally. These rates provide a backbone for a new type of stellar reaction rate library currently in preparation, STARLIB. This library attempts to bridge the gap between experimental nuclear physics data and stellar modelers by providing a convenient tabular format with reliable uncertainties for use in simulating astrophysical phenomena. We expect to submit STARLIB for publication by year's end, which will coincide with the unveiling of a webpage for ease of dissemination and updating. Finally, our group uses this library to run simplified models of astrophysical events, such as novae or AGB stars, via network calculations. The results from these models indicate which reactions significantly influence various isotopic abundances, thus providing motivation for new reactions to measure at LENA and other laboratories. Speaker: Anne Sallaska (University of North Carolina at Chapel Hill) • 12:15 DIANA - An Underground Accelerator Facility for Nuclear Astrophysics 30m Measuring nuclear reactions of astrophysical interest at {\em stellar} energies is usually a daunting task because the cross sections are very small and background rates can be comparatively large. Often, cosmic-ray interactions set the limit on experimental sensitivity, but can be reduced to an insignificant level by placing an accelerator underground -- as has been demonstrated by the LUNA accelerators in the Gran Sasso underground laboratory. The Dual Ion Accelerator facility for Nuclear Astrophysics (DIANA) is a proposed next-generation underground accelerator facility, which would be constructed at the 4850 ft level of the Homestake Mine in Lead, SD. This talk will describe DIANA and the questions in nuclear astrophysics that can be explored at such a laboratory. 
Speaker: Art Champagne (University of North Carolina at Chapel Hill) • 10:45 12:45 CB. Strongly Correlated Systems Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Vito Scarola (Virginia Tech) • 10:45 Strong Correlation Effects in Fullerene Molecules and Solids 30m Fullerenes (C20, C36, C60) are a family of Carbon cage molecules that have exactly twelve pentagons. The most famous Fullerene is C60 ("bucky ball"), which when being doped with three electrons per molecule will exhibit superconductivity. Here we describe electronic structures of these molecules with a tight-binding Hubbard model and solve the model with quantum Monte Carlo simulations and exact diagonalization method. We will show how the electronic correlation gets stronger as the molecule becomes more curved, how the strong electronic correlations change the Huckel molecular energy levels, and how we compare the single-particle excitation spectrum for the C60 molecular solid to the photoemission experiments. Speaker: Fei Lin (Virginia Tech) • 11:15 Interplay of Quantum Criticality and Geometric Frustration in Columbite 30m Co Nb_2 O_6 is a remarkable magnetic material. The interplay between two of the most exciting features of correlated quantum physics, quantum criticality and geometric frustration, results in a rich phase diagram which reflects the fundamental underlying quantum many-body physics in this complex oxide material. Many aspects of the theoretically calculated phase diagram and expectations for quantum criticality have already been observed in beautiful neutron scattering experiments on this material. Ref: Interplay of Quantum Criticality and Geometric Frustration in Columbite, SungBin Lee, Ribhu K. Kaul, Leon Balents, Nature Physics 6, 702-706 (2010) Speaker: Ribhu Kaul (University of Kentucky) • 11:45 Ultrafast Dynamics in Vanadium Dioxide: Separating Spatially Segregated Mixed Phase Dynamics in the Time-domain 30m In correlated electronic systems, observed electronic and structural behavior results from the complex interplay between multiple, sometimes competing degrees-of- freedom. One such material used to study insulator-to-metal transitions is vanadium dioxide, which undergoes a phase transition from a monoclinic-insulating phase to a rutile-metallic phase when the sample is heated to 340 K. The major open question with this material is the relative influence of this structural phase transition (Peirels transition) and the effects of electronic correlations (Mott transition) on the observed insulator-to-metal transition. Answers to these major questions are complicated by vanadium dioxide's sensitivity to perturbations in the chemical structure in VO_2. For example, related V_x O_y oxides with nearly a 2:1 ratio do not demonstrate the insulator-to- metal transition, while recent work has demonstrated that W:VO_2 has demonstrated a tunable transition temperature controllable with tungsten doping. All of these preexisting results suggest that the observed electronic properties are exquisitely sensitive to the sample disorder. Using ultrafast spectroscopic techniques, it is now possible to impulsively excite this transition and investigate the photoinduced counterpart to this thermal phase transition in a strongly nonequilibrium regime. I will discuss our recent results studying the terahertz-frequency conductivity dynamics of this photoinduced phase transition in the poorly understood near threshold temperature range. 
We find a dramatic softening of the transition near the critical temperature, which results primarily from the mixed phase coexistence near the transition temperature. To directly study this mixed phase behavior, we directly study the nucleation and growth rates of the metallic phase in the parent insulator using non-degenerate optical pump-probe spectroscopy. These experiments measure, in the time- domain, the coexistent phase separation in VO_2 (spatially separated insulator and metal islands) and, more importantly, their dynamic evolution in response to optical excitation. Speaker: Prof. David Hilton (University of Alabama at Birmingham) • 12:15 Superfluidity in Bilayer Systems of Cold Polar Molecules 30m An exciton is a quasiparticle state formed by an electron bound to a "hole." Many years ago it was proposed theoretically that a population of excitons can condense into a spontaneously broken symmetry ground state characterized by excitonic superfluidity. The quest for the experimental realization of the exciton condensate has lasted decades. Recently bilayer systems have emerged as some of the most promising systems in which this state can be realized. The physics of exciton condensation in bilayer systems is very general. In this talk I will present the theory of "excitonic condensation" and spontaneous interlayer superfluidity in cold polar molecules bilayers [1] that because of the great control characteristic of cold atom systems and their intrinsic lack of disorder are ideal systems to study exciton condensates. [1] R. M. Lutchyn, E. Rossi, S. Das Sarma, Spontaneous interlayer superfluidity in bilayer systems of cold polar molecules, Phys. Rev. A 82, 061604(R) (2010). Speaker: Enrico Rossi (College of William and Mary) • 10:45 12:45 CC. Biophysics and Medical Physics Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Kenneth Wong (Virginia Tech) • 10:45 Stochastic Modeling of Regulation of Gene Expression by Multiple Post-transcriptional Regulators 12m New research indicates that post-transcriptional regulators, such as small RNAs (sRNAs), are key components of global regulatory networks. In particular, it has been discovered that these networks often comprise multiple sRNAs which control expression of a critical master regulator protein. However, the regulation of a single protein by multiple sRNAs is not currently well understood and the impact of multiple sRNA on stochastic gene expression remains unclear. To address these issues, we analyze a stochastic model of regulation of gene expression by multiple sRNAs. We derive exact closed form solutions for the regulated protein distribution, including compact expressions for its mean and variance. The derived results provide novel insights into the roles of multiple sRNAs in fine-tuning the noise in gene expression. In particular, we show that, in contrast to regulation by a single sRNA, multiple sRNAs provide a mechanism for independently controlling the mean and variance of the regulated protein distribution. Speaker: Charles Baker (Virginia Tech) • 10:57 Stochastic models of gene expression and post-transcriptional regulation 12m The intrinsic stochasticity of gene expression can give rise to phenotypic heterogeneity in a population of genetically identical cells. Correspondingly, there is considerable interest in understanding how different molecular mechanisms impact the 'noise' in gene expression. 
Of particular interest are post-transcriptional regulatory mechanisms involving genes called small RNAs, which control important processes such as development and cancer. We propose and analyze general stochastic models of gene expression and derive exact analytical expressions quantifying the noise in protein distributions [1]. Focusing on specific regulatory mechanisms, we analyze a general model for post-transcriptional regulation of stochastic gene expression [2]. The results obtained provide new insights into the role of post-transcriptional regulation in controlling the noise in gene expression. [1] T. Jia and R. V. Kulkarni, Phys. Rev. Lett., 106, 058102 (2011). [2] T. Jia and R. V. Kulkarni, Phys. Rev. Lett., 105, 018101 (2010). Speaker: Hodjat Pendar (Virginia Tech) • 11:09 Regulation by small RNAs via coupled degradation: mean-field and variational approaches 12m Regulatory genes called small RNAs (sRNAs) are known to play critical roles in cellular responses to changing environments. For several bacterial sRNAs, regulation is effected by coupled stoichiometric degradation with messenger RNAs (mRNAs). The nonlinearity inherent in this regulatory scheme implies that exact analytical solutions for the corresponding stochastic models are intractable. Based on the mapping of the master equation to a quantum evolution equation, we use the variational method (introduced by Eyink) to analyze a well-studied stochastic model for regulation by sRNAs. Results from the variational ansatz are in excellent agreement with stochastic simulations for a wide range of parameters, including regions of parameter space where mean-field approaches break down. The results derived provide new insights into sRNA-based regulation and will serve as useful inputs for future studies focusing on the interplay of stochastic gene expression and regulation by sRNAs. Speaker: Thierry Platini (Virginia Tech) • 11:21 Utilizing protein networks to determine novel annotations 12m Proteins are a key element of life because they are involved in every metabolic process, yet a majority of proteins remain unannotated. Current chemical and physical annotation methods are inaccurate, inefficient, or expensive. Without proper annotation, understanding of organisms' metabolic pathways is limited. Based on the hypothesis that proteins with similar primary structures have similar characteristics, we theorize that a method for protein annotation can be developed using protein networking, which was previously thought to be useful in determining the evolutionary paths of proteins. A large, diverse database of proteins is used to connect protein fragments by using a preset identity threshold. With this method, unknown proteins are connected to known ones. By observing the number of links to proteins with annotated functions, a likely annotation candidate will be reached. This procedure can potentially facilitate the process of finding more accurate annotations. We have used and validated this approach to annotate putative uncharacterized proteins. Results will be presented at the conference. Speaker: Kenneth Shiao • 11:33 A Model Comparison for Characterizing Protein Motions from Structure 12m A comparative study is made using three computational models that characterize native state dynamics starting from known protein structures taken from four distinct SCOP classifications. A geometrical simulation is performed, and the results are compared to the elastic network model and molecular dynamics. 
The essential dynamics is quantified by a direct analysis of a mode subspace constructed from ANM and a principal component analysis on both the FRODA and MD trajectories using root mean square inner product and principal angles. Relative subspace sizes and overlaps are visualized using the projection of displacement vectors on the model modes. Additionally, a mode subspace is constructed from PCA on an exemplar set of X-ray crystal structures in order to determine similarly with respect to the generated ensembles. Quantitative analysis reveals there is significant overlap across the three model subspaces and the model independent subspace. These results indicate that structure is the key determinant for native state dynamics. Speaker: Charles David (University of North Carolina at Charlotte) • 11:45 Using blocking peptides to control and analyze the mechanical properties of single fibrin fibers 12m Fibrin is the main structural protein involved in blood clotting, and exhibits high strength and elasticity. Fibrin study traditionally focuses on fully formed clots, whereas we employ new AFM nanoManipulation techniques to study single fibrin fiber mechanics. We used 4 and 10 residue peptides to interfere with the knob-hole and alpha-C interactions involved in fibrin polymerization to evaluate the contribution of each interaction to the fiber's mechanical properties. We varied the concentration of each peptide present during polymerization to find the concentration that inhibited polymerization by half. The presence of either peptide during fibrin polymerization did not affect single fiber breaking strain (\Delta L / L_0). The breaking force of all treated fibers reduced from 10-50nN to 2-10nN, suggesting treated fibers are thinner or are the same diameter with some inhibition of interactions. Fibers polymerized with the knob-hole targeting peptide visibly lost elasticity after 100% strain, while fibers polymerized with the $\alpha$C targeting peptide lost elasticity after reaching 150% strain, suggesting that the knob-hole interactions control single fiber elasticity. Speaker: Pranav Maddi (North Carolina School of Science and Mathematics) • 11:57 A biomimetic model for internal fluid transport based on physiological systems in insects 12m Biomimetics is an increasingly important field in applied science that seeks to imitate systems and processes in nature to design improved engineering devices. In this study, we are inspired by insect respiratory systems, and model, analytically and numerically, the air transport within a single model insect tracheal tube. The tube wall undergoes localized, non- propagative rhythmic contractions. A theoretical analysis based on lubrication theory is used to model the problem at low Reynolds number. Results are then validated by performing meshfree computations based on the method of fundamental solutions (MFS). This meshfree numerical approach is then used to investigate the airflow in more complex geometries: a channel with multiple branching segments and various wall contraction regimes. This study presents a new biomimetic mechanism for valveless pumping that might guide efforts to fabricate novel microfluidic devices with improved efficiency that mimic features of physiological systems in insects. Speaker: Yasser Aboelkassem (Virginia Tech) • 12:09 Locomotion of Paramecium in patterned environments 12m Ciliary organisms like Paramecium Multimicronucleatum locomote by synchronized beating of cilia that produce metachronal waves over their body. 
In their natural environments they navigate through a variety of environments especially surfaces with different topology. We study the effects of wavy surfaces patterned on the PDMS channels on the locomotive abilities of Paramecium by characterizing different quantities like velocity amplitude and wavelength of the trajectories traced. We compare this result with the swimming characteristics in straight channels and draw conclusions about the effects of various patterned surfaces. Speaker: Mr Eun-Jik Park (Virginia Tech) • 10:45 12:45 CD. Advances in Computing Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Leo Piilonen (Virginia Tech) • 10:45 Open Science Grid: Linking Universities and Laboratories In National Cyberinfrastructure 30m Open Science Grid is a consortium of researchers from universities and national laboratories that operates a national computing infrastructure serving large-scale scientific and engineering research. While OSG's scale has been primarily driven by the demands of the LHC experiments, it currently serves particle and nuclear physics, gravitational wave searches, digital astronomy, genomic science, weather forecasting, molecular modeling, structural biology and nanoscience. The OSG distributed computing facility links campus and regional computing resources and is a major component of the Worldwide LHC Computing Grid (WLCG) that handles the massive computing and storage needs of experiments at the Large Hadron Collider. This collaborative work has provided a wealth of results, including powerful new software tools and services; a uniform packaging scheme (the Virtual Data Toolkit) that simplifies software deployment across many sites in the US and Europe; integration of complex tools and services in large science applications; multiple education and outreach projects; and new approaches to integrating advanced network infrastructure in scientific computing applications. More importantly, OSG has provided unique collaborative opportunities between researchers in a variety of research disciplines. Speaker: Paul Avery (University of Florida) • 11:15 Evolving from TeraGrid to XSEDE 30m Since 2001, the TeraGrid has developed into a world-class integrated, national-scale computational science infrastructure with funding from the NSF's Office of Cyberinfrastructure (OCI). Recently, the TeraGrid project came to an end and has been supplanted by the NSF's eXtreme Digital program, opening a new chapter in cyberinfrastructure by creating the most advanced, powerful, and robust collection of integrated advanced digital resources and services in the world. This talk will introduce the new project, XSEDE: the eXtreme Science and Engineering Discovery Environment, which began July 1, 2011. Speaker: John Towns (NCSA) • 11:45 Quantum transport and nanoplasmonics with carbon nanorings - using HPC in computational nanoscience 30m Central theme of this talk is the theoretical study of toroidal carbon nanostructures as a new form of metamaterial. The interference of ring-generated electromagnetic radiation in a regular array of nanorings driven by an incoming polarized wave front may lead to fascinating new optoelectronics applications. The tight-binding method is used to model charge transport in a carbon nanotorus: All transport observables can be derived from the Green's function of the device region in a non-equilibrium Green's function algorithm. 
We have calculated density-of-states D(E) and transmissivities T(E) between two metallic leads under a small voltage bias. Electron-phonon coupling is included for low-energy phonon modes of armchair and zigzag nanorings with atomic displacements determined by a collaborator's finite-element based code. A numerically fast and stable algorithm has been developed via parallel linear algebra matrix routines (PETSc) with MPI parallelism to reach significant speed-up. Production runs are planned on the NSF XSEDE network. This project was supported in parts by a 2010 NSF TeraGrid Fellowship and the Sunshine State Education and Research Computing Alliance (SSERCA). Two summer students were supported as 2010 and 2011 NCSI/Shodor Petascale Computing undergraduate interns. Speaker: Mark Jack (Florida Agricultural and Mechanical University) • 12:15 Discrete Molecular Dynamics Simulation of Biomolecules 30m Discrete molecular dynamics (DMD) simulation of hard spheres was the first implementation of molecular dynamics (MD) in history. DMD simulations are computationally more efficient than continuous MD simulations due to simplified interaction potentials. However, also due to these simplified potentials, DMD has often been associated with coarse-grained modeling, and hence continuous MD has become the dominant approach used to study the internal dynamics of biomolecules. With the recent advances in DMD methodology, including the development of high-resolution models for biomolecules and approaches to increase DMD efficiency, DMD simulations are emerging as an important tool in the field of molecular modeling, including the study of protein folding, protein misfolding and aggregation, and protein engineering. Recently, DMD methodology has been ~applied to modeling RNA folding and protein-ligand recognition. With these improvements to DMD methodology and the continuous increase in available computational power, we expect a growing role of DMD simulations in our understanding of biology. Speaker: Feng Ding (University of North Carolina at Chapel Hill) • 12:45 13:30 Lunch 45m • 13:30 15:30 DA. Complex Fluids Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Beate Schmittmann (Virginia Tech) • 13:30 Chaotic Advection in Multi-component Melts for the Manufacture of Composite Materials 30m Several forces arise when different liquids are placed into contact. The relative importance of these forces depends on the sizes and shapes of liquid domains and also on molecular characteristics of the liquids. When the liquids are agitated and in the absence of interdiffusion, a composite structure results that is defined by the spatial extent and size of each liquid domain in the presence of the other. Shaking a bottle with about equal parts of water and oil gives a structure that resembles a household sponge, for example. If the oil volume is much smaller than the water volume, oil droplets result instead. In polymer blends and composites, the structure can have feature sizes at the micron scale or smaller. Little has been known about the variety of structural types that can be formed because current information is based on mixing machinery that intrinsically restricts structural outcomes. This shortcoming has important consequences because physical properties of composite materials obtained by solidifying the structured liquids depend appreciably on structure characteristics. 
A recent approach to overcome this shortcoming makes use of \textit{chaotic advection} to establish conditions that organize liquid domains into numerous thin layers. A multi-layer construction undergoes morphological changes in situ. P\textit{rogressive structure development }arises, whereby a specific structure leads in sequence to a morphologically different structure. A new manufacturing technology has resulted which allows control of the internal structure in extruded plastic materials. Micro- and nanostructured materials have been obtained. On-line process control allows rapid optimization of physical properties. In this presentation, the underlying physics will be described, examples of novel materials and their applications will be shown, and research opportunities will be highlighted. Speaker: David Zumbrunnen (Clemson University) • 14:00 How animals drink and swim in fluids 30m Fluids are essential for most living organisms to maintain a healthy body and also serve as a medium in which they locomote. The fluid bulk or interfaces actively interact with biological structures, which produces highly nonlinear, interesting, and complicated dynamical problems. We studied the lapping of cats and the swimming of Paramecia in various fluidic environments. The problem of the cat drinking can be simplified as the competition between inertia and gravity whereas the problem of Paramecium swimming in viscous fluids results from the competition between viscous drag and thrust. The underlying mechanisms are discussed and understood through laboratory experiments utilizing high-speed photography. Speaker: Sunghwan Jung (Virginia Tech) • 14:30 Jamming and Fluidization in Granular Flows 30m Granular materials exist all around us, from avalanches in nature to the mixing of pharmaceuticals, yet the behavior of these "fluids" is poorly understood. While the interaction of individual particles is simply through friction and inelastic collisions, the non-linear forces and large number of particles leads to an unpredictable, complex system. Flow can be characterized by the continuous forming and breaking of a strong force network resisting flow, leading to jamming, avalanching and shear banding. I'll present recent work on quasi-static shear and free-surface granular flows under the influence of external vibrations as well as related experiments on particle-fluid suspensions. By using photoelastic grains, we are able to measure both particle trajectories and the local force network in 2D flows. We find through particle tracking that dense granular flow is composed of comparable contributions from the mean flow, elastic deformations, and permanent, plastic deformations. Vibration typically weakens granular materials and removes hysteresis, though small vibrations can lead to strengthening of a pile. Flows of particle-fluid suspensions allow another avenue to probe failure of granular piles and additional control parameters, such as the surface chemistry of the particles. Speaker: Dr Brian Utter (James Madison University) • 13:30 15:30 DB. Particle Physics I Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Brad Cox (University of Virginia) • 13:30 Analyzing Potential Tracking Algorithms for the Upgrade to the Silicon Tracker of the Compact Muon Solenoid 12m The research performed revolves around creating tracking algorithms for the proposed ten-year upgrade to the tracker for CMS, one of two main detectors for the LHC at CERN. 
The proposed upgrade to the tracker for CMS will use fast hardware to trace particle trajectories so that they can be used immediately in a trigger system. The additional information will be combined with other sub-detectors in CMS, enabling mostly non-background events to be read out by the detector. The algorithms would be implemented directly into the Level-1 trigger, the first trigger in a two-trigger system, to be used in real time. Specifically, by analyzing computer-generated stable particles over various ranges of transverse momentum and the tracks they produce, we created and tested various simulated trigger algorithms that might be used in hardware. As one algorithm has proved very effective, the next step is to test this algorithm against simulated events with an environment equivalent to SLHC luminosities. Speaker: John Hardin (University of North Carolina at Chapel Hill) • 13:42 Improving the Trigger Efficiency for the WH-lvbb analysis at the CDF experiment 12m At CDF, we search for the associated production of a Higgs boson and a W boson, where the Higgs boson decays into a b + anti-b quark pair and the W boson decays into a lepton and the corresponding neutrino. Events are selected with a signature of a lepton, large missing transverse energy, and two or three jets. At CDF, events are selected by a variety of triggers, and those triggers are divided into several streams based on the types of requirements of the trigger. Traditionally, in the WH analysis we only use some of the triggers, because the trigger efficiency can be calculated easily under those circumstances. In this presentation, we will describe two new triggers to select leptons and will demonstrate a new method to calculate the trigger efficiency. We will use a neural network to calculate the efficiency for the event to be triggered by an entire trigger stream, disregarding each individual trigger. In this way, we can maximize the acceptance of events selected. Speaker: Hao Liu (University of Virginia) • 13:54 Study of the Sensitivity of Plastic Scintillators to Fast Neutrons 12m The Mu2e experiment at Fermilab plans to use a two-out-of-three coincidence requirement in a plastic-scintillator-based detector to veto cosmic ray events. This veto system must operate efficiently in a high-radiation environment. In this investigation, three plastic scintillator bars containing wavelength-shifting fibers represent the veto system. These bars were placed together, in series, in front of a deuterium-deuterium neutron generator, which produced fast neutrons of approximately 2.8 MeV, in order to study the sensitivity of the plastic scintillators to fast neutrons. Multi-anode photomultiplier tubes read out the light from the fibers. The collected data were analyzed to determine the rate of interaction, the approximate amount of energy deposited, and numerous other aspects of the neutrons' interactions. The rate of coincidental and correlated hits in multiple scintillator bars was the primary reason for the investigation, in order to understand the sensitivity of the plastic scintillators to fast neutrons. Speaker: David Abbott (University of Virginia) • 14:06 Geometrical Standard Model Enhancements to the Standard Model of Particle Physics 12m The Standard Model (SM) is the triumph of our age. As experimentation at the LHC tracks particles for the Higgs phenomena, theoreticians and experimentalists struggle to close in on a cohesive theory.
Both suffer greatly as expectation wavers between those who seek to move beyond the SM and those who cannot do without it. When it seems there are no more good ideas, enter Rate Change Graph Technology (RCGT). From the science of the rate change graph, a Geometrical Standard Model (GSM) is available for comprehensive modeling, giving rich new sources of data and pathways to those ultimate answers we punish ourselves to achieve. As a new addition to science, GSM is a tool that provides a structured discovery and analysis environment. By eliminating value and size, RCGT operates with the rules of RCGT mechanics, creating solutions derived from geometry. The GSM rate change graph could be the ultimate validation of the Standard Model yet. In its own right, GSM is created from geometrical intersections and comes with RCGT mechanics, yet parallels the SM to offer critical enhancements. The Higgs Objects along with a host of new objects are introduced to the SM and their positions revealed in this proposed modification to the SM. Speaker: Ken Strickland • 14:18 Holographic Real-Time Finite-Temperature 3-Point Correlators and Their Applications to Second Order Hydrodynamics 12m We built up a complete real-time prescription for calculating n-point correlators of finite-temperature conformal field theory operators using holography. We found it amounts to integrating only the right quadrant of the black hole, and then adapting the finite-temperature analog of Veltman's circling rules to gravity tree-level diagrams to calculate correlators. We constructed a complete mapping between the real-time finite-temperature field theory and its real-time dual supergravity description. We subjected our prescription to several checks. We gave, for the first time, concrete formulas for all real-time 3-point correlators. We applied the above to study second order hydrodynamics in 4-d conformal field theories. We derived Kubo relations for second order transport coefficients in terms of 3-point stress tensor retarded correlators. For N=4 super Yang-Mills theory at strong coupling and finite temperature we computed these stress tensor 3-point correlators using AdS/CFT. The small momentum expansion of the 3-point correlators in terms of transport coefficients is matched with the AdS result and the coefficients are retrieved consistently. Our method allows for a unified treatment of hydrodynamic coefficients and can be systematically generalized to higher order hydrodynamics. Speaker: Chaolun Wu (University of Virginia) • 14:30 Constructing a two-scintillator paddle telescope for cosmic ray flux measurements 12m The evolution of the Earth's climate is of growing concern. There is evidence of a causal relationship between cosmic ray muon flux and cloud cover, and it is expected that long-term variations in cosmic ray flux may influence Earth's temperature changes [1]. It has been observed that a muon telescope with a variable angular acceptance at Earth's surface can be used to study correlations between flux distribution and barometric pressure. The muon flux from the cosmic ray particles positively correlates with seasonal temperature variations and anti-correlates with pressure variations [2]. In this talk, the construction of a new two-scintillator paddle telescope prototype will be presented along with preliminary results from this detector. [1] Henrik Svensmark, Influence of Cosmic Rays on Earth's Climate, Phys. Rev. Lett. 81, 22 (1998).
[2] Serap Tilav, Paolo Desiati, Takao Kuwabara, Dominick Rocco, Florian Rothmaier, Matt Simmons, and Henrik Wissing, Atmospheric Variations as Observed by IceCube, Proceedings of the 31st ICRC, 2009. Speaker: David Camp (Georgia State University) • 14:42 Correlation study of atmospheric weather and cosmic ray flux variation 12m There is at present a great debate about the causes of the changing climate of the Earth. In recent years, there has been a growing interest in understanding the effects of cosmic ray radiation on the increase in average global temperature. The studies by Svensmark show that there is a strong link between cosmic rays and low cloud coverage [1]. Very recently, Lu reported that there is a correlation between cosmic rays and ozone depletion over Antarctica [2]. At Georgia State University (GSU) we are working on a long-term measurement of secondary cosmic ray flux distribution and are focusing on studying the correlations between variations of cosmic ray flux and atmospheric/space weather. In this presentation, we will describe the cosmic ray flux detectors currently taking data at GSU and show the preliminary results from our measurements over the past two years. [1] Nigel D. Marsh and Henrik Svensmark, Low Cloud Properties Influenced by Cosmic Rays, Phys. Rev. Lett. 85, 23 (2000). [2] Q.-B. Lu, Correlation between Cosmic Rays and Ozone Depletion, Phys. Rev. Lett. 102, 118501 (2009). Speaker: Mathes Dayananda (Georgia State University) • 14:54 The Double Chooz Experiment 12m Double Chooz is a reactor antineutrino experiment probing the non-vanishing value of the neutrino mixing angle theta_13. The experiment is searching for antineutrino disappearance from nuclear reactors located in northeastern France. The Double Chooz concept is to deploy two identical detectors: one near the reactor cores to measure the flux of electron antineutrinos, and one at a distance from the reactors to measure the disappearance of electron antineutrinos due to oscillations. The far detector began data taking in the spring of 2011, and the near detector will be installed in 2012. With both detectors running, Double Chooz will have the sensitivity to probe sin^2(2 theta_13) down to 0.03 (90% CL). Speaker: Brandon White (University of Tennessee) • 13:30 15:30 DC. Atomic and Molecular Physics Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Leo Piilonen (Virginia Tech) • 13:30 Laser Photodetachment Spectroscopy of the S_2^- Ion 12m Numerous experiments have investigated the properties and dynamics of single-atom negative ions. Similar experiments can be conducted with molecular negative ions. Laser photodetachment spectroscopy of such ions is more complicated due to rotational and vibrational structure, and often yields spectroscopic benchmarks such as rotational constants. We have conducted low-resolution photodetachment spectroscopy of the S_2^- ion over a range of roughly 2000 cm^{-1}. The ions are created in a Penning ion trap by a two-step dissociative attachment process. The photodetachment is achieved with a tunable ring-cavity titanium:sapphire laser. Our results yield a lower-limit estimate of the minimum detachment threshold energy and exhibit structure that may be due to rotational energy levels. Future experiments will focus on high-resolution detachment spectroscopy of these and other ions with an eye toward measurement of their molecular constants.
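As a small aid to reading the photodetachment abstract above, the sketch below (illustrative only, not the speaker's analysis code; the exponent l and the example numbers are assumptions) converts the quoted wavenumber scale to photon energy and defines a Wigner-type threshold model of the kind commonly fitted to near-threshold detachment yields.

    import numpy as np
    from scipy.constants import h, c, e

    def wavenumber_to_ev(nu_cm):
        # Convert a photon wavenumber in cm^-1 to energy in eV: E = h*c*nu.
        return h * c * (nu_cm * 100.0) / e

    def wigner_yield(energy_ev, threshold_ev, amplitude, l=1):
        # Schematic Wigner threshold law: yield ~ (E - E_th)^(l + 1/2) above threshold.
        excess = np.clip(energy_ev - threshold_ev, 0.0, None)
        return amplitude * excess ** (l + 0.5)

    # The ~2000 cm^-1 scan range quoted in the abstract corresponds to about 0.25 eV:
    print(wavenumber_to_ev(2000.0))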
Speaker: John Yukich (Davidson College) • 13:42 Identification and Analysis of Atomic and Molecular Superposition Spectra Following Laser-Induced Optical Breakdown 12m Molecular recombination and excitation of atoms following laser-induced optical breakdown provide means for simultaneous detection of atomic and molecular species. Atomic emission spectra may be analyzed to infer electron number and temperature. Careful analysis of select atomic spectra may reveal superposed diatomic molecular spectra. Nonlinear fitting of synthetic molecular spectra, calculated via diatomic quantum theory, provides tools for identification, temperature measurement, and further analyses of the diatomic molecules present. This presentation investigates the presence of C_2 molecular Swan bands in Balmer Series atomic hydrogen spectra. Combustion plumes are also studied, including comparisons of temperatures obtained using a two-color pyrometer and from data reduction analysis of measured spectroscopic AlO data. Speaker: Alexander Woods (University of Tennessee Space Institute) • 13:54 Highly parallelized detection of single fluorescent molecules: simulation and experiment 12m We are developing an ultrasensitive, fluorescence-based detection system in highly parallel microchannels. Multichannel microfluidic devices have been fabricated by direct femtosecond laser machining of fused silica substrates. We approach single-molecule detection sensitivity by introducing dilute aqueous solutions (~ pM) of fluorescently labeled molecules into the microchannels. In a custom-built, wide-field microscope, a line-generating red diode laser provides narrow epi-illumination across a 500 um field of view. Fluorescence is detected with an electron-multiplying CCD camera allowing readout rates of several kHz. Rapid initial assessment is performed through digital filtering derived from simulations based on experimental parameters. Good agreement has been shown between simulation and experimental data. Fluorescence correlation spectroscopy then provides more detailed analysis of each separate channel. Following optimization, microfluidic devices could easily be mass-produced in low-cost polymers using imprint lithography. Speaker: Brian Canfield (University of Tennessee Space Institute) • 14:06 Terahertz Rotational Spectroscopy of the v5/2v9 Dyad of Nitric Acid 12m Our studies of the terahertz rotational spectrum of nitric acid now include the ground state and the four lowest excited states. We report good progress in the assignment and analysis of the next higher energy states, the v5/2v9 interacting states. This very complex spectrum includes torsional splitting of both states and Fermi and Coriolis type interactions between them. The current analysis includes both microwave and infrared transitions for improved stability. Microwave studies of the rotational spectrum of the nitric acid molecule in the ground and excited vibration states contribute both to a better understanding of this fundamental molecule and to the construction of accurate spectral maps for remote sensing in the atmosphere. Speaker: Paul Helminger (University of South Alabama) • 14:18 Microfluidic device for three-dimensional electrokinetic manipulation of single fluorescent molecules 12m The ability to manipulate and trap single molecules in solution through the application of actively controlled electric fields is a valuable tool for a number of bio-molecular studies of proteins and nucleic acids. 
Here we report the development of a microfluidic device consisting of four electrodes sputtered onto two glass coverslips and fixed in a tetrahedral arrangement. This geometrical configuration allows for a uniform electric field of any orientation through the application of appropriate voltages. Three-axis control has been demonstrated for micron-sized polystyrene beads and 40 nm fluorescent spheres in phosphate buffered solution. Previous work has characterized planar motion. Recent changes to the experimental setup include the addition of a cylindrical lens in the detection arm to quantify axial position and a National Instruments PCI-7833R to provide precise voltage control. Finally, a real-time tracking algorithm and its use for trapping will be discussed. Speaker: Jason King (University of Tennessee Space Institute) • 14:30 Impact of Recent Laboratory N_2 Data to our Understanding of Thermospheric Nitric Oxide (NO) 12m In spite of its status as a minor species, NO plays key roles in many upper atmospheric processes. As the only heteronuclear molecule, its fundamental, Delta v=1 emission cools the thermosphere (z>100 km). Its low ionization potential ensures that NO^+ is the end product of the ion-neutral chemistry in the ionospheric E-region. And in the presence of excess atomic oxygen, NO will catalytically destroy ozone. The production of NO is initiated when N_2 is ionized, dissociated, or excited by the solar EUV irradiance (lambda < 100 nm). In the mesosphere and lower thermosphere (MLT), much of the irradiance is contained in the highly variable soft x-ray region (1 < lambda < 20 nm). The resulting photoelectrons produce additional ionization as well as excitation of metastable, chemically-reactive species like the first electronically excited N_2 state, N_2(A^3 Sigma_u^+). This talk will incorporate recent laboratory data on the N_2 photoabsorption and electron-impact cross-sections into a 1D photochemical reaction-diffusion model of the thermosphere. It is shown that spin-forbidden (Delta S=1) excitation to the N_2 triplet manifold enables neutral N_2 to participate in the NO production. Additional physical and chemical uncertainties relevant to NO production and loss are also presented. Speaker: Justin Yonker (Virginia Tech) • 14:42 Quasibound States of Single-Particle Systems 12m We have developed a formalism that describes both quasibound and resonant states within the same theoretical framework, and that admits a clean and unambiguous distinction between these states and the states of the embedding continuum. The approach described here builds on our earlier work by clarifying several crucial points and extending the theory to encompass a variety of continuous spectra, including those with degenerate energy levels. The result is a comprehensive and compelling formalism for the study of quasibound states. The difference between 'quasibound' and 'resonant' states turns out to be largely semantic, inasmuch as both arise from imposing what is arguably the same mathematical rule (a point condition in a novel basis set). Enforcing that rule in a given application is straightforward in principle. The formalism is illustrated by examining several cases pertinent to applications widely discussed in the literature. 
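As generic background to the quasibound-state abstract above (standard textbook relations, not the speaker's formalism), a quasibound or resonant state is conventionally assigned a complex energy whose imaginary part sets both the lifetime and the line width:

    E = E_r - i\Gamma/2, \qquad |\psi(t)|^2 \propto e^{-\Gamma t/\hbar}, \qquad \tau = \hbar/\Gamma,

    \sigma(E) \propto \frac{(\Gamma/2)^2}{(E - E_r)^2 + (\Gamma/2)^2}.

The first line expresses exponential decay of the quasibound population; the second is the associated Breit-Wigner (Lorentzian) profile seen in scattering through the resonance.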
Speaker: Curt Moyer (University of North Carolina at Wilmington) • 14:54 Three-dimensional flow measurements with a four-focus microscope 12m The measurement of a one-dimensional flow using a confocal fluorescence microscope with two excitation volumes has been well documented. This technique can be extended to measure flow in all three dimensions simultaneously through a four-focus, two-photon microscope. To this end, an apparatus has been constructed in which the beam from a mode-locked Ti:sapphire laser is passed through a double interferometer configuration to create four displaced focal volumes. Fluorescence is gathered onto a single-photon avalanche diode and time-gated by a TimeHarp 200 timer card. Calibration of one-dimensional flow through a square-bore capillary has been performed. Flow of adjustable speed and direction in three dimensions is created using a cross-channel microfluidic device. To evaluate flow measurements, Monte Carlo simulations of fluorescence cross-correlation spectroscopy between the four foci were conducted and a LabVIEW program was created to discern the flow parameters from the 16 cross-correlation functions. For simplicity, the model for the correlation functions assumes each focal volume is a three-dimensional Gaussian, but a Gaussian-Lorentzian model may improve fitting. Speaker: James Germann (University of Tennessee Space Institute) • 15:06 Rapid fabrication of long nanochannels with a single femtosecond laser pulse focused to a line 12m We have recently reported the use of tight line-focusing of an amplified femtosecond laser beam to fabricate very long, sub-micron-wide features in glass with just a single laser pulse [Davis et al., IQEC/CLEO Pacific Rim, August 2011]. The optical configuration used in these experiments presents distinct advantages and can be expected to have numerous applications, including the rapid creation of micro/nano-fluidic devices and waveguides. Here we review that work and also discuss recent results on imaging features created at the surface or at various depths internal to a substrate using a number of methods, including SEM imaging of acetate replicas, atomic force microscopy, and optical imaging of sections that show the depths of internal features. We also discuss the physical mechanisms that can occur during femtosecond laser-induced plasma formation under different conditions, while emphasizing the non-linear mechanisms that can produce sub-diffraction features and the use of aberrations and spatio-temporal focusing to control the feature depth. Speaker: Lloyd Davis (University of Tennessee Space Institute) • 15:18 Application of X-ray Fluorescence Spectroscopy in Analysis of Oil Paint Pigments 12m X-ray Fluorescence (XRF) spectroscopy is a rapid, noninvasive technique for both detecting and identifying chemical elements within a given sample. At North Georgia College and State University, a sealed-tube x-ray source and slightly focusing polycapillary optic are used in nondestructive XRF analysis of oil paint pigments. Oil paints contain both organic and inorganic matter, and the inorganic ingredients such as titanium, vanadium, iron, zinc, and other elements are easily detected by XRF, which can be used to uniquely differentiate between various paint pigments. To calibrate the XRF system for paint color identification, six different colors of oil paint were fluoresced and identified based on their characteristic spectra.
By scanning the paint sample in two dimensions, the characteristic XRF spectra obtained were compiled to produce an XRF replica of the painting. Speaker: Cassandra Major (North Georgia College and State University) • 13:30 15:30 DD. Advances in Energy Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: R. Bruce Vogelaar (Virginia Tech) • 13:30 Photonic Structuring of Bulk Heterojunction Organic Solar Cells 30m The major challenge in solar cell technology lies in combining efficient absorption of photons with effective carrier extraction. In all cases, light absorption considerations call for thicker modules while carrier transport would benefit from thinner ones. This dichotomy is the fundamental problem limiting the efficiencies of photovoltaics, especially promising low-cost polymer solar cells. We present experimental and theoretical solutions to this problem applying photonic crystal nanostructuring in bulk heterojunction solar cells made of poly-3-hexylthiophene:[6,6]-phenyl-C61-butyric acid methyl ester (P3HT:PCBM). We discuss theoretical models of the optical absorption in the photonic design, which result in a 22% enhancement over a conventional planar cell. We also calculate the local exciton creation profile within the photonic crystal structure to show that nanopatterning also reduces the carrier transport length. Finally, experimental results are presented that follow the theoretical predictions, along with our nanofabrication method, to show this approach can be used to produce improved large-area nanostructured P3HT:PCBM solar cells. Speaker: Rene Lopez (University of North Carolina at Chapel Hill) • 14:00 Advances in Polymer-Fullerene Photovoltaic Devices 30m Polymer solar cells are of high interest due to their potential as efficient, lightweight, large-area, flexible renewable energy sources. The basic mechanism for the photovoltaic effect in polymers consists of transfer of a photoexcited electron from the polymer donor to a fullerene electron acceptor, followed by transport of the electron and hole through the acceptor and donor, respectively, to the opposite electrodes. Polymer photovoltaic efficiencies can be increased by utilizing improved materials as electron donors and acceptors as well as by controlling the nanoscale morphology of the thin-film devices. The highest efficiencies (~7%) obtained thus far utilize a nanoscale polymer-fullerene blend referred to as a bulk heterojunction, which undergoes phase separation on the 10 nm length scale in order to facilitate charge transfer from the photoexcited polymer to the fullerene electron acceptor. More organized geometries that maximize the majority carrier materials at the respective electrodes could lead to enhanced efficiencies. In one approach, thermal interdiffusion of an initial bilayer of the donor and acceptor materials can be employed to create a concentration gradient in order to optimize both the charge transfer and charge transport processes. This presentation will overview the state-of-the-art in polymeric solar cells and describe the development of thermally-interdiffused concentration gradient geometries as an alternative route towards increased efficiencies. Speaker: J. Randall Heflin (Virginia Tech) • 14:30 Nuclear Energy: Challenges and Directions 30m There are many myths regarding nuclear energy. Nuclear energy provides many advantages but, like all other power generation methods, it has some drawbacks.
There have been some serious accidents involving nuclear power generation, with the most recent occurring at Fukushima Daiichi. What role will nuclear energy play in the future? What are the challenges of the nuclear landscape as we move forward? Are there changes in policy or technology that should be considered? A vision of nuclear energy will be provided in an attempt to address these upcoming opportunities and challenges. Speaker: Mark Pierson (Virginia Tech) • 15:30 15:45 Coffee / Refreshments 15m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 15:45 17:45 - Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA • 15:45 17:00 - Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA • 15:45 17:45 EA. Physics at Jefferson Lab Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Romulus Godang (University of South Alabama) • 15:45 Qweak: A Precision Standard Model Test at Jefferson Lab 30m The Qweak collaboration is currently performing the first precision measurement of the proton's neutral weak charge at Jefferson Lab. The Standard Model gives a firm prediction for the weak charge; any deviation from that can be interpreted as new physics beyond the Standard Model. This precision, low-energy measurement is sensitive to new physics signatures at energy scales up to 2 TeV. The experiment measures the parity-violating asymmetry in the scattering of longitudinally polarized electrons on the proton at low momentum transfer. An overview of the motivation and experimental approach will be presented, along with an update on the current status. Speaker: Mark Pitt (Virginia Tech) • 16:15 Hadron Spectroscopy at Jefferson Lab: Search for new States of Hadronic Matter 30m Hadrons are complex systems of confined quarks and gluons and exhibit the characteristic spectra of excited states. Quantum Chromodynamics (QCD) is only poorly understood in this non-perturbative regime. It is one of the key issues in hadronic physics to identify the relevant degrees of freedom giving rise to the observed mass spectra and the effective forces between them. Current efforts of the CLAS Collaboration at Jefferson Lab focus on the search for new baryon resonances utilizing polarized beams and targets. A further particularly interesting question in hadron spectroscopy concerns the role of glue and how this is related to confinement in QCD. I will briefly discuss the efforts of the GlueX Collaboration to search for new forms of hadronic matter beyond simple quark-antiquark systems. Speaker: Volker CREDÉ (Florida State University) • 16:45 High Precision Measurement of the pi^0 Radiative Decay Width 30m As the lightest particle in the hadron spectrum, the pi^0 plays an important role in understanding the fundamental symmetries of QCD. The pi^0 --> gamma gamma decay provides a key process for a test of the chiral anomaly, and at the same time a test of the Nambu-Goldstone nature of the pi^0 meson due to spontaneous chiral symmetry breaking. Theoretical activities over the last decade have resulted in high precision (1% level) predictions for the decay amplitude of the pi^0 into two photons. The experimental measurement of this parameter with a comparable precision will be critical to test these important QCD predictions. The PrimEx collaboration at Jefferson Lab has developed and performed new experiments to measure the pi^0 radiative decay width via the Primakoff effect.
A new level of experimental precision has been achieved by implementing the high intensity and high resolution photon tagging facility and by developing a novel, high resolution, electromagnetic hybrid calorimeter (HYCAL). A recently published result from the first experimental data (PrimEx-I) with a 2.8% total uncertainty is a factor of 2.5 more precise than the current Particle Data Group average. The second experiment (PrimEx-II) was carried out in fall 2010 with the final goal of 1.4% precision. The result of PrimEx-I and the status of PrimEx-II will be presented. Speaker: Liping Gan (University of North Carolina Wilmington) • 15:45 17:15 ED. Mentoring Workshop Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Leo Piilonen (Virginia Tech) • 15:45 Improving Your Skills as a Research Mentor 1h 30m How do you effectively mentor individuals at different stages of their careers? Are you ready to address the NSF’s new requirement about mentoring post-docs in your next proposal? Can you learn to become a more effective mentor through training? Scientists often are not prepared for their crucial role of mentoring the next generation. Based on a research mentor seminar developed at the University of Wisconsin-Madison and modified for physics by the American Physical Society, this workshop is designed to help you start to become a more effective mentor. Speaker: Monica Plisch (American Physical Society) • 17:00 19:00 FB. SESAPS Executive Committee Meeting (closed) Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA • 19:00 20:00 FD. SESAPS Business Meeting (open to SESAPS members) Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Laurie McNeil (University of North Carolina at Chapel Hill) • Friday, 21 October • 08:00 08:30 Registration (from 8:00 to 10:00 am) 30m Roanoke Foyer () ### Roanoke Foyer • 08:30 10:30 GA. Biological Physics and Biomechanics Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Rahul Kulkarni (Virginia Tech) • 08:30 The physics of bat biosonar 30m Bats have evolved one of the most capable and at the same time parsimonious sensory systems found in nature. Using active and passive biosonar as a major - and often sufficient - far sense, different bat species are able to master a wide variety of sensory tasks under very dissimilar sets of constraints. Given the limited computational resources of the bat's brain, this performance is unlikely to be explained as the result of brute-force, black-box-style computations. Instead, the animals must rely heavily on in-built physics knowledge in order to ensure that all required information is encoded reliably into the acoustic signals received at the ear drum. To this end, bats can manipulate the emitted and received signals in the physical domain: By diffracting the outgoing and incoming ultrasonic waves with intricate baffle shapes (i.e., noseleaves and outer ears), the animals can generate selectivity filters that are joint functions of space and frequency. To achieve this, bats employ structural features such as resonance cavities and diffracting ridges. In addition, some bat species can dynamically adjust the shape of their selectivity filters through muscular actuation. Speaker: Rolf MÜLLER (Virginia Tech) • 09:00 How Do Songbirds Produce Precise Vocalizations? 
30m Many species of songbirds do not sing instinctively but learn their songs by a process of auditory-guided vocal learning that starts with a kind of babbling that converges over several months and through tens of thousands of iterations to a highly precise adult song. How the neural circuitry of the songbird brain learns, generates, and recognizes temporal sequences related to song are important questions for neurobiologists and also interest an increasing number of physicists with interests in biophysics, statistical mechanics, nonlinear dynamics, and networks. I will discuss some interesting questions posed by recent experiments on songbirds, especially in regard to extremely sparse neuronal firing associated with song production. I will then discuss a theoretical model known as a synfire chain that my group and others have invoked and analyzed to explain some features of the experimental data. Speaker: Henry Greenside (Duke University) • 09:30 Dissecting cellular biomechanics with a laser 30m The biological tissues of a developing organism are built and reshaped by the mechanical behavior of individual cells. We probe the relevant cellular mechanics in vivo using laser microsurgery -- both qualitatively, to assess whether removal of specific cells alters the dynamics of tissue reshaping, and quantitatively, to measure sub-cellular mechanical properties and stresses. I will detail two quantitative microsurgical measurements. The first uses a laser to drill a sub-cellular hole in a sheet of cells. The subsequent retraction of surrounding cells allows one to infer the local mechanical stress. The second uses a laser to isolate a single cell from the rest of a cell sheet. Isolation is accomplished on a microsecond time scale by holographically shaping a single laser pulse. The subsequent retraction (or expansion) of the isolated cell allows one to separate and quantify the effects of internal and external stresses in the determination of cell shape. I will discuss application of these techniques to the time-dependent biomechanics of epithelial tissues during early fruit fly embryogenesis -- specifically during the processes of germband retraction and dorsal closure. Speaker: M. Shane Hutson (Vanderbilt University) • 08:30 10:30 GB. New Developments in Physics Education Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Prof. Per Arne Rikvold (Florida State University) • 08:30 Learning Physics Through Computational Modeling 30m Computational modeling is a central enterprise in both theoretical and experimental physics, but it can also be an excellent means for students in introductory courses to develop a deeper conceptual understanding of fundamental physics principles. Many instructional benefits are associated with computational modeling, including visualizing 3D phenomena, modeling complex, real-world systems, and reasoning algorithmically. In this talk, I will discuss many of these benefits as well as some of the ongoing research on how students build conceptual understanding from computational models. Speaker: Mr Brandon Lunk (North Carolina State University) • 09:00 Transforming the undergraduate physics program at Florida International University 30m We describe the ongoing physics transformation underway at Florida International University (FIU), highlighting activities that target institutionalization of innovative physics practices. We report on several coherent efforts to improve the undergraduate physics instruction at FIU.
These programs include Modeling Instruction, a studio based, integrated lab-lecture course in which students learn by building, validating, and extending models; the Learning Assistant program, an experiential program that recruits top students into teaching careers and provides a vehicle for classroom reform; and reformed curricula in laboratory sections. These reforms have contributed to a 1500% increase in the number of graduates (comparing current three-year averages to the early 1990's), while FIU's undergraduate enrollment grew 180%. Our results are most compelling, as FIU is a minority-serving urban public research institution in Miami, Florida, serving over 44,000 students, of which 64% are Hispanic, 13% are Black, and 56% are women. Speaker: Renee Michelle Goertzen (Florida International University) • 09:30 Collaborative Group Learning using the SCALE-UP Pedagogy 30m The time-honored conventional lecture ("teaching by telling") has been shown to be an ineffective mode of instruction for science classes. In these cases, where the enhancement of critical thinking skills and the development of problem-solving abilities are emphasized, collaborative group learning environments have proven to be far more effective. In addition, students naturally improve their teamwork skills through the close interaction they have with their group members. Early work on the Studio Physics model at Rensselaer Polytechnic Institute in the mid-1990's was extended to large classes via the SCALE-UP model pioneered at North Carolina State University a few years later. In SCALE-UP, students sit at large round tables in three groups of three --- in this configuration, they carry out a variety of pencil/paper exercises (ponderables) using small whiteboards and perform hands-on activities like demos and labs (tangibles) throughout the class period. They also work on computer simulations using a shared laptop for each group of three. Formal lecture is reduced to a minimal level and the instructor serves more as a "coach" to facilitate the academic "drills" that the students are working on. Since its inception in 1997, the SCALE-UP pedagogical approach has been adopted by over 100 institutions across the country and about 20 more around the world. In this talk, I will present an overview of the SCALE-UP concept and I will outline the details of its deployment at George Washington University over the past 4 years. I will also discuss empirical data from assessments given to the SCALE-UP collaborative classes and the regular lecture classes at GWU in order to make a comparative study of the effectiveness of the two methodologies. Speaker: Gerald Feldman (George Washington University) • 10:00 Transforming Introductory Physics for Life Scientists: Researching the consequences for students 30m In response to policy documents calling for dramatic changes in pre-medical and biology education [1-3], the physics and biology education research groups at the University of Maryland are rethinking how to teach physics to life science majors. As an interdisciplinary team, we are drastically reconsidering the physics topics relevant for these courses. We are designing new in-class tasks to engage students in using physical principles to explain aspects of biological phenomena where the physical principles are of consequence to the biological systems. We will present examples of such tasks as well as preliminary data on how students engage in these tasks. 
Lastly, we will share some barriers encountered in pursuing meaningful interdisciplinary education. Co-authors: Edward F. Redish and Julia Svaboda. [1] National Research Council, Bio2010: Transforming Undergraduate Education for Future Research Biologists (NAP, 2003). [2] AAMC-HHMI committee, Scientific Foundations for Future Physicians (AAMC, 2009). [3] American Association for the Advancement of Science, Vision and Change in Undergraduate Biology Education: A Call to Action (AAAS, 2009). Speaker: Dr Chandra Turpen (University of Maryland) • 08:30 10:30 GC. Condensed Matter Physics / Nanophysics I Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Dr Hans Robinson (Virginia Tech) • 08:30 Capturing Ion-Solid Interactions with MOS structures 12m We have fabricated metal-oxide-semiconductor (MOS) devices for a study of implantation rates and damage resulting from low-energy ion-solid impacts. Specifically, we seek to capture ion irradiation effects on the oxides. Fabrication of the MOS devices follows a standard procedure where Ohmic contacts are first created on the wafer backside, followed by the thermal growth of various thicknesses of SiO_2 (from 50 nm to 200 nm) on the wafer frontside. As-grown SiO_2 layers are then exposed to various singly charged alkali ions with energies in the range of 100 eV to 10 keV in our beamline setup. Following this exposure, the MOS devices are completed in situ with the deposition of a top Al contact. Characterization of the ion-modified devices involves the standard device technique of biased capacitance-voltage (C-V) measurements, where a field is applied across the MOS structure at an elevated temperature to move implanted ions, resulting in changes in surface charge density that are reflected as shifts in the flatband voltage (V_FB). Similarly, a triangular voltage sweep (TVS) test can be utilized to measure the ionic displacement current as it is driven by a slow linear voltage ramp, and it should reveal the total ionic space charge in an MOS. Speaker: R. Shyam (Clemson University) • 08:42 Free flux flow in two single crystals of V_3 Si with differing pinning strengths 12m Results of measurements on two very clean, single-crystal samples of the A15 superconductor V_3 Si are presented. Magnetization and transport data have confirmed the "clean" quality of both samples, as manifested by: (i) a high residual electrical resistivity ratio, (ii) very low critical current densities J_c, and (iii) a "peak" effect in the field dependence of critical current. The (H,T) phase line for this peak effect is shifted down for the slightly "dirtier" sample, which consequently also has a higher critical current density J_c(H). Large Lorentz forces are applied on mixed-state vortices via large currents, in order to induce the highly ordered free flux flow (FFF) phase, using experimental methods developed previously. The traditional model by Bardeen and Stephen (BS) predicts a simple field dependence of the flux flow resistivity, rho_f(H) ~ H/H_c2, presuming a field-independent flux core size. A model by Kogan and Zelezhina (KZ) takes into account the effects of magnetic field on core size, and predicts a clear deviation from the linear BS dependence. In this study, rho_f(H) is confirmed to be consistent with predictions of KZ. Speaker: O.
Gafarov (University of South Alabama) • 08:54 Double-Paddle Oscillators for the Mechanical Spectroscopy of Ion-Surface Modifications 12m We discuss the use of silicon double-paddle oscillators (DPOs) as a technique for following atomistic changes in mechanical properties under energetic ion irradiation conditions in ultra-high vacuum (UHV). For these DPOs, it is well known that at low temperatures (~4 K) the internal friction or Q^{-1} of the anti-symmetric oscillator eigenmode is lower than 10^{-8} and that it increases to 10^{-5} as temperature is increased (up to 673 K). This small damping or high Q allows for sensitive measurements of the mechanical properties of thin deposited films or of the oscillator structure itself. Using an incident ion beam, we will investigate changes in the mechanical properties of the DPO due to mass loss during ion bombardment. In initiating these measurements, a basic frequency sweep setup has been utilized under ambient atmospheric conditions in order to finalize the required electronics and to demonstrate the various DPO eigenmodes that have been seen in earlier studies. A more advanced electronics and DPO mount design will follow as the system is transitioned to UHV operation. Speaker: Daniel Field (Clemson University) • 09:06 Measurement of DC resistivity of new quasi-one-dimensional conducting platinate 12m Cs_4[Pt(CN)_4](CF_3SO_3)_2 (TCP) is the newest of the platinates, quasi-one-dimensional conductors with parallel "chains" of Pt maintained by peripheral materials and with well-known properties, especially in the potassium-containing material, KCP. Unlike KCP, however, we are finding properties unique to TCP. First, we discuss technical difficulties in measuring the DC resistivity of this material: unlike with KCP, the samples of TCP were relatively small and very fragile, their contact surface had an insulating film, and the crystal had a very sensitive pressure dependence, coupled with significant thermal contraction/expansion. These issues were addressed with reasonable success, using proper handling methods, sputtered electrical contacts, and a floating sample mount, as will be discussed. The resulting temperature dependence of resistivity is radically different from KCP, showing an anomalous "peak" at around 150 K. Speaker: Albert Gapud (University of South Alabama) • 09:18 NMR study of 133Cs in new quasi-one-dimensional conducting platinate 12m Cs_4[Pt(CN)_4](CF_3SO_3)_2 (TCP) is a new Krogmann's salt, consisting of quasi-one-dimensional conducting chains of Pt with well-known properties, especially in the potassium-containing material, KCP. Unlike KCP, however, there are properties unique to TCP, e.g., longer Pt-Pt separation, insulating behavior at room temperature, and no magnetism. Previous NMR studies on KCP have mainly been on 195Pt, which does not produce a usable NMR signal in TCP; our study utilizes 133Cs instead, whose sites are peripheral to the Pt chains. Splitting of spin states due to the quadrupole interaction with the local electric field gradient has been measured as a function of orientation versus applied static field. Modeling of the frequency shifts reveals consistency with the known symmetry axes of 133Cs determined by single-crystal x-ray diffraction. Relaxation time T1 versus temperature reveals a weak relaxation mechanism and absence of magnetism. Relaxation data show a sharp anomaly around 119 K where T1 jumps 3 orders of magnitude, consistent with critical fluctuations but not yet well understood. Speaker: R. I.
Leatherbury (University of South Alabama) • 09:30 Controlled release from stimuli-sensitive microgel capsules 12m We introduce a mesoscale computational model for responsive gels, i.e., chemically cross-linked polymer networks immersed in Newtonian fluids, and use it to probe the release of nanoparticles from hollow microgel capsules that swell and deswell in response to external stimuli. Our model explicitly describes the transport of nanoparticles in swelling/deswelling polymer networks with complex geometries and associated fluid flows. Our simulations reveal that responsive microcapsules can be effectively utilized for steady and pulsatile release of encapsulated solutes. Steady, diffusive release of nanoparticles takes place from swollen gel capsules, whereas capsule deswelling causes burst-like discharge of solutes driven by a flow from the shrinking capsule interior. We demonstrate that this hydrodynamic release can be regulated by introducing rigid microscopic rods inside the capsule. Our calculations indicate that the rods stretch the deswelling membrane and promote the formation of large pores in the shell, which allow massive flow-driven release of nanoparticles. Thus, our findings unveil a new approach for regulating the release from stimulus-responsive micro-carriers that will be especially useful for designing new drug delivery systems. Speaker: Hassan Masoud (Georgia Institute of Technology) • 09:42 Study of the friction behaviour of a poly[2-(dimethylamino)ethyl methacrylate] brush with AFM probes in contact mechanics 12m We have studied the frictional behaviour of grafted poly[2-(dimethylamino)ethyl methacrylate] (PDMAEMA) films using friction force microscopy (FFM). The films were prepared on native oxide-terminated silicon substrates using the technique of atom transfer radical polymerization (ATRP). We show that either single-asperity contact mechanics (Johnson-Kendall-Roberts (JKR) and Derjaguin-Muller-Toporov (DMT)) or a linear (Amontons) relation between applied load and frictional load applies, depending on the pH of the FFM probe. Measurements were made using functionalized and unfunctionalized silicon nitride triangular probes. Functionalized probes included gold-coated probes, and ones coated with a self-assembled monolayer of dodecanethiol (DDT). The frictional behaviour between PDMAEMA and all tips immersed in solutions of pH 3 to 11 corresponds to the DMT or JKR model, and is linear at pH 1, 2, and 12. These results show that the contact mechanics of polyelectrolytes in water is complex and strongly dependent on the environmental pH. Speaker: Mrs Maryam Raftari (University of Sheffield) • 09:54 Dynamics of Polydisperse Foam-like Emulsion 12m Foam is a complex fluid whose relaxation properties are associated with the continuous diffusion of gas from small to large bubbles driven by differences in Laplace pressures. We study the dynamics of bubble rearrangements by tracking droplets of a clear, buoyantly neutral emulsion that coarsens like a foam. The droplets are imaged in three dimensions using confocal microscopy. Analysis of the images allows us to measure their positions and radii, and track their evolution in time. We find that the droplet size distribution fits a Weibull distribution characteristic of foam systems. Additionally, we observe that droplets undergo continuous evolution interspersed with occasional large rearrangements, on par with the local relaxation behavior typical of foams.
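As an illustration of the kind of droplet-size analysis mentioned in the emulsion abstract above, the following minimal sketch (not the speakers' code; the radii array is a stand-in for measured values) fits a Weibull distribution to droplet radii with SciPy and runs a quick goodness-of-fit check.

    import numpy as np
    from scipy import stats

    # Stand-in for droplet radii extracted from confocal image analysis (microns).
    radii_um = np.array([4.1, 5.3, 6.8, 7.2, 8.0, 9.5, 10.1, 11.4, 12.9, 15.2])

    # Fit a two-parameter Weibull distribution (location fixed at zero).
    shape, loc, scale = stats.weibull_min.fit(radii_um, floc=0.0)
    print(f"Weibull shape k = {shape:.2f}, scale = {scale:.2f} um")

    # Kolmogorov-Smirnov check of the fitted distribution against the data.
    ks_stat, p_value = stats.kstest(radii_um, "weibull_min", args=(shape, loc, scale))
    print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")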
Speaker: Harry Hicock (James Madison University) • 10:06 Patterning the adhesive properties of amine-rich polymer films 12m Full integration of top-down and bottom-up nanofabrication technologies will require the ability to accurately place nanostructures onto well-defined locations on a surface, where the nanostructures initially only exist suspended in a liquid. As the nanostructures may be quite fragile, perhaps the best way to do this is to pattern the adhesiveness of the surface in order to guide assemblies to the desired locations. We have demonstrated two routes for achieving this using amine-rich, nm-thick polymer films based on poly(allylamine hydrochloride). The adhesive properties of the films can be patterned with standard lithographic techniques, where adhesion to selected portions of the surface is suppressed either by treatment with acetic anhydride or by direct exposure to ultraviolet light. We applied these techniques to both flat and curved substrates and demonstrate a spatial resolution better than 100 nm. Speaker: Stefan Stoianov (Virginia Tech) • 08:30 10:30 GD. The 100th Anniversary of the Discovery of the Atomic Nucleus: A historical reflection of nuclear science in the Southeast Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Paul Cottle (Florida State University) • 08:30 Selected Highlights in Nuclear Research in the Southeast by Vanderbilt and ORNL 30m On the one hundredth anniversary of the discovery of the nucleus, selected highlights in nuclear research by Vanderbilt scientists and by Oak Ridge National Laboratory scientists, as well as their joint research, are described. These will include the earliest work involving the first confirmation of neutron-induced fission and classic papers on the fission process. This was followed by the development of the barrier for the gaseous diffusion separation of 235U from 238U. In the 1940's the first working nuclear reactor became operational at ORNL to make 239Pu, followed by the first radioisotopes for nuclear medicine, neutron scattering to probe materials (leading to a Nobel Prize), and the first observation of the beta decay of the free neutron. In 1953 Hill and Wheeler published their classic nuclear theory paper that has over 2000 citations. In the 1960's large E0 transitions were observed in decays of beta but not gamma vibrational bands to confirm the predictions of Bohr and Mottelson that beta vibrations change the nuclear deformation. Then the first failures of the B-M model were observed. In the 1970's the paradigm that each nucleus had one fixed shape was changed when the coexistence of overlapping bands built on different deformations was discovered. This was made possible, in part, by universities building the first isotope separator on-line to the Oak Ridge cyclotron. This was followed by the discovery of the reinforcement of proton and neutron shell gaps at the same deformation to give superdeformed double magic nuclei. Other highlights will be presented, including the recent discovery of the new element 117 and confirmation of new elements 113 and 115. Speaker: Joseph Hamilton (Vanderbilt University) • 09:00 A Personal Perspective on Triangle Universities Nuclear Laboratory Development 30m Nuclear physics research in NC began seriously in 1950 when Henry Newson and his colleagues at Duke attracted support for a 4 MeV Van de Graaff accelerator with which they grew their doctoral training program.
The lab's scientific achievements also grew, including the discovery in 1966 of fine structure of nuclear analog states. By then UNC and NC State had attracted Eugen Merzbacher and Worth Seagondollar who, with Newson, brought more faculty to work at an enlarged three-university, cooperative lab. Launched at Duke in 1967 with a 30 MeV Cyclograff accelerator, and subsequently equipped with a polarized H and D ion source and polarized H and 3He targets, an extensive program in light-ion and neutron physics ensued. Faculty interest in electromagnetic interactions led to development since 2001 of TUNL's HIgS (High Intensity gamma Source) facility to produce intense 1-100 MeV polarized photon beams with small energy spread. Photonuclear reaction studies there today are producing results of unmatched quality. These 60 years of nuclear physics research have produced ~250 doctoral graduates, many of whom have gone on to very distinguished careers. A personal perspective on these activities will be presented. Speaker: Thomas Clegg (University of North Carolina at Chapel Hill) • 09:30 A personal view of nuclear physics in the Southeast 30m Numerous physicists who have carried out part or all of their work in the Southeast have made major contributions to our present understanding of the nucleus, from Robert Van de Graaff, whose accelerator became the workhorse of experimental nuclear physics, to John Wheeler, whose early work at North Carolina began a tradition there that continues today. Many early major results from southern researchers will be presented as well as some outstanding current work. The shift from exploring nuclear structure to generating the chemical elements in stars to unraveling the structure of the nucleon is evidence of the impact made in the field of nuclear physics by the Southeast. Speaker: Kirby Kemper (Florida State University) • 10:00 Early History of Jefferson Laboratory 30m This talk will focus on the history of Jefferson Laboratory from its inception as the NEAL proposal by the Southeastern Universities Research Association (SURA) in 1980, to about 1986 -- two years after the arrival of Hermann Grunder and his Berkeley team. Major themes are (i) a national decision to build a high-energy, high-duty-factor electron accelerator for basic nuclear physics research, (ii) open competition established by the DOE, (iii) formation of SURA, and (iv) interest of SURA physicists (particularly at UVA and W&M) in this research. I will discuss the scientific, technical, and political issues that eventually led to the selection of the SURA proposal, the choice of Newport News as the site, and the decision to adopt a recirculating superconducting ring for the final design. Speaker: Franz Gross (Jefferson Lab) • 10:30 10:45 Coffee / Refreshments 15m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 10:45 12:45 HA. Gravitation Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Dr George Siopsis (University of Tennessee at Knoxville) • 10:45 LISA: the space-based gravitational wave observatory 12m The Laser Interferometer Space Antenna (LISA) is a space-based gravitational wave (GW) observatory with the primary scientific goal of detecting and observing GWs from astronomical sources in the millihertz range. Such observations will provide a new way to explore the Universe and will bring rich new information about its structure and evolution. However, GW signals are very weak, and thus very precise and low-noise measurements are required.
GWs are detected by measuring the relative change in distance between free-falling proof masses inside widely separated spacecraft. These changes are measured with picometer sensitivity by means of laser interferometry. I will give an overview of the LISA mission and a summary of the research done at the University of Florida. Speaker: Josep Sanjuan (University of Florida) • 10:57 Ring Heater for Advanced LIGO 12m The Laser Interferometer Gravitational-wave Observatory (LIGO) is currently being upgraded to Advanced LIGO. One of the main changes is the increase in input laser power from 30 W to 165 W. In Advanced LIGO up to 600 kW of laser power will circulate inside the interferometer. Some of the power will be absorbed by the LIGO test masses, creating a thermal gradient that will deform them, changing the spatial mode of the laser field inside the interferometer. Radiative ring-shaped heaters will be installed close to the test masses to provide additional heat to counteract this effect and minimize the deformation. In this talk we will present the proposed University of Florida ring heater design, and measurements of the thermal profile homogeneity to be compared with initial requirements. In addition, we present initial results of outgassing measurements to qualify our ring heater for use in the LIGO vacuum system. Speaker: Eric Deleeuw (University of Florida) • 11:09 Laser frequency stabilization 12m Laser ranging and interferometry are essential technologies enabling many astounding new space-based missions, such as the Laser Interferometer Space Antenna (LISA) to measure gravitational radiation emitted from distant supermassive black hole mergers, or distributed-aperture telescopes with unprecedented angular resolution in the NIR or visible regime. The requirements on laser frequency noise depend on the residual motion and the distances between the spacecraft forming the interferometer. The intrinsic frequency stability of commercial lasers is several orders of magnitude above these requirements. Therefore, it is necessary for lasers to be stabilized to an ultrastable frequency reference so that they can be used to sense and control distances between spacecraft. Various optical frequency references and frequency stabilization schemes are considered and investigated for their applicability and usefulness for space-based interferometry missions. Speaker: Darsa Donelan (University of Florida) • 11:21 High Speed Alignment Control of an Optical Resonator 12m Laser interferometric gravitational wave detectors are by far the most sensitive interferometers in the world. They require exquisite control over all degrees of freedom of the optical components comprising the main detector, but also over all degrees of freedom of the laser beam itself. One of the most critical degrees of freedom is the propagation direction and beam location of the input beam when it enters the interferometer. Any variations in these two parameters will couple to static misalignments inside the interferometer and will generate spurious signals, which can easily limit the sensitivity of gravitational wave detectors such as Advanced LIGO. This has long been recognized and has led to alignment sensing and control systems, which use piezo-mounted mirrors to control the alignment of the laser beam. The disadvantage of these systems is their low bandwidth and intrinsic noise. We are in the process of characterizing actuators which use the electro-optical effect to steer the laser beam.
These systems have a significantly higher bandwidth and don't require any moving parts, which usually means much higher reliability. We report on the performance of these devices. Speaker: Mr Daniel Amariutei (University of Florida) • 11:33 Orbits and Scaling for an Isotropic Metric 12m Scaling of physical quantities shows the symmetries of an isotropic metric. For example, invariance of Planck's constant under gravitational scaling provides consistency of general relativity with quantum mechanics. Invariance of charge and electric field strength provides consistency with electromagnetism. Transitivity of scaling eliminates the traditional need for a globally preferred reference frame. Rather, diagonalization of the metric yields local rest frames. Conventional application of the Einstein Equation has inconsistencies and contradictions, such as gravitational fields without energy, objects crossing event-horizons, objects exceeding the speed of light, and inconsistency in scaling the speed of light and its factors. An isotropic metric resolves such problems by attributing energy to the gravitational field, in the energy-momentum tensor of the Einstein Equation. Scattering, orbital period, and precession offer ways to distinguish an isotropic from a Schwarzschild metric. Speaker: Joseph Rudmin (James Madison University) • 11:45 Space-based interferometric gravitational wave observatories will measure changes in the distance between free-falling proof masses inside widely separated spacecraft with pm sensitivity. These observatories will use fast telescopes to exchange laser beams. These telescopes are part of the probed optical distances, and any length change in the gravitational wave band between the secondary and the primary can limit the sensitivity of the observatories. Furthermore, the large distance between the spacecraft and space constraints on the spacecraft require the use of very fast telescopes with f-numbers approaching unity. These telescopes are very sensitive to any absolute length change, which would reduce interferometer visibility and, ultimately, sensitivity. Our group has assembled a Silicon Carbide test structure and investigated its dimensional stability in the 10^{-4} Hz to 1 Hz frequency band at different operating temperatures. We also measured the overall length change and started investigating asymmetric length changes during cool-down, which would lead to misalignments in the telescope. Speaker: Danila Korytov (University of Florida) • 10:45 12:45 HB. Statistical and Nonlinear Physics I Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Michel Pleimling (Virginia Tech) • 10:45 Boundary conflicts and cluster coarsening: Waves of life and death in the cyclic competition of four species 12m In the cyclic competition among four species on a two-dimensional lattice, the partner particles, which swap positions on the lattice with some probability, produce clusters with a length that grows algebraically as t^{1/z}, where z is the dynamical exponent. Further investigation of the dynamics at the boundary of the clusters is realized by placing one partner particle pair in the upper half of the system and the other pair in the lower half. Using this technique, results about the fluctuations of the interface are obtained. We also observe wave fronts in the case of non-symmetric reaction rates, where extinction of a partner particle pair takes place.
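To make the swap-and-consume update concrete, here is a minimal illustrative Monte Carlo sketch (Python) of a cyclic four-species lattice model with partner swaps. It is not the authors' code; the lattice size, swap probability, and the unlike-bond measure of coarsening are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 64          # lattice edge length (placeholder)
s = 0.3         # swap probability for alliance partners (placeholder)
sweeps = 50     # number of Monte Carlo sweeps (placeholder)

# Species are labeled 0..3; species i consumes species (i+1) mod 4,
# so (0,2) and (1,3) are the non-interacting alliance (partner) pairs.
lattice = rng.integers(0, 4, size=(L, L))
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

for _ in range(sweeps * L * L):
    x, y = rng.integers(0, L, size=2)
    dx, dy = moves[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L
    a, b = lattice[x, y], lattice[nx, ny]
    if (b - a) % 4 == 1:                      # a consumes b (e.g. A+B -> A+A)
        lattice[nx, ny] = a
    elif (a - b) % 4 == 1:                    # b consumes a
        lattice[x, y] = b
    elif a != b and rng.random() < s:         # remaining unlike pairs are partners: swap
        lattice[x, y], lattice[nx, ny] = b, a

# Crude proxy for the inverse domain length: density of unlike nearest-neighbor bonds.
unlike = 0.5 * (np.mean(lattice != np.roll(lattice, 1, axis=0))
                + np.mean(lattice != np.roll(lattice, 1, axis=1)))
print("unlike-bond density after", sweeps, "sweeps:", round(float(unlike), 4))
```

Recording the unlike-bond density after successive sweeps gives a crude proxy for the inverse domain length whose algebraic growth t^{1/z} is discussed in the abstract above.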
Speaker: Ahmed Roman (Virginia Tech) • 10:57 Stochastic evolution of four species in cyclic competition: exact and simulation results 12m We study a stochastic system with N individuals, consisting of four species competing cyclically: A+B --> A+A, ..., D+A --> D+D. Randomly choosing a pair and letting them react, N is conserved but the fractions of each species evolve non-trivially. At late times, the system ends in a static, absorbing state -- typically, coexisting species AC or BD. The master equation is shown and solved exactly for N=4, providing a little insight into the problem. For large N, we rely on simulations by Monte Carlo techniques (with a faster dynamics where a reaction occurs at every step). Generally, the results are in good agreement with predictions from mean field theory, after appropriate rescaling of Monte Carlo time. The theory fails, however, to describe extinctions or predict their probabilities. Nevertheless, it can hint at many remarkable behaviors associated with extinction, which we discover when studying systems with extremely disparate rates. Speaker: Sara Case (Virginia Tech) • 11:09 The effects of mobility on the one-dimensional four-species cyclic predator-prey model 12m The dynamics of a one-dimensional lattice composed of four species cyclically dominating each other is very much dependent on the rates of mobility in the system. We realize mobility as the exchange of two particles located at two nearest-neighbor sites with some species-dependent rate s. Allowing for only one particle per site, the different species interact cyclically, with species-dependent consumption rate k, such that k + s <= 1. When varying the exchange rates, we see vastly different behavior when compared to the three-species model. The patterns of domain growth and decay still show an overall power-law behavior; however, the fundamental trend of domain growth does not follow the three-species case. We also look at the space-time diagrams to see precisely how the domains form, grow, and decay. Speaker: David Konrad (Virginia Tech) • 11:21 Quenched Spatial Disorder in Cyclic Three-Species Predator-Prey Models 12m We employ individual-based Monte Carlo simulations to study the effects of quenched spatial disorder in the reaction rates on the co-evolutionary dynamics of cyclic three-species predator-prey models with conserved total particle density. To this end, we numerically explore the oscillatory dynamics of two different variants: (1) the model with symmetric interaction rates near the center of the configuration space, and (2) a strongly asymmetric model version located in one of the three "corners" of configuration space. We find that spatial rate variability has only a minor effect on the dynamics of generic, not strongly asymmetric systems (variant 1). In stark contrast, spatial disorder can greatly enhance the fitness of both minor species in "corner" systems (2). Furthermore, through both mean-field analysis and numerical simulation, we conclude that the evolutionary dynamics of two-species Lotka-Volterra predator-prey models is well approximated by such strongly asymmetric cyclic three-species predator-prey systems. Refs.: Qian He, Mauro Mobilia, and Uwe C. Tauber, Phys. Rev. E 82, 051909 (2010); Qian He and Uwe C. Tauber, in preparation (2011).
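Several abstracts in this session compare stochastic simulations with mean-field theory; the sketch below (Python, with placeholder rates) integrates the corresponding mean-field rate equations for the cyclic reactions A+B -> 2A, ..., D+A -> 2D, purely as a qualitative illustration and not as the authors' analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mean-field rate equations for the cyclic reactions A+B->2A, B+C->2B,
# C+D->2C, D+A->2D.  The rates ka..kd below are placeholders.
ka, kb, kc, kd = 0.2, 0.4, 0.3, 0.5

def rhs(t, y):
    a, b, c, d = y
    return [a * (ka * b - kd * d),
            b * (kb * c - ka * a),
            c * (kc * d - kb * b),
            d * (kd * a - kc * c)]

y0 = np.array([0.26, 0.24, 0.25, 0.25])   # slightly away from the symmetric state
sol = solve_ivp(rhs, (0.0, 400.0), y0, rtol=1e-8, atol=1e-10, max_step=0.5)

print("late-time fractions A, B, C, D:", sol.y[:, -1].round(4))
print("minimum fraction reached      :", sol.y.min(axis=1).round(6))
print("alliance strength products: A*C =", ka * kc, "  B*D =", kb * kd)
# Per the abstracts above, mean-field theory predicts that the alliance with the
# larger product of consumption rates (here B,D) eventually dominates, while the
# stochastic dynamics is needed to describe actual extinction events.
```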
Speaker: Qian He (Virginia Tech) • 11:33 Epidemic spreading on preferred degree adaptive networks 12m We report our study of the SIS epidemic spreading model on networks where individuals have a fluctuating number of connections around some preferred degree. By making the preferred degree depend on the level of infection, we model the response of individuals to the prevailing epidemic. This helps us to explore the feedback mechanisms between the dynamics on the network and the dynamics of the network. We will discuss the effect of such feedback mechanisms on the SIS phase diagram. We have also explored the SIS model on two communities with a coupling between them. Speaker: Shivakumar Jolad (Virginia Tech) • 11:45 Aging behavior in disordered systems 12m Using Monte Carlo simulations we investigate aging behavior during phase ordering in two-dimensional Ising models with disorder and in three-dimensional Ising spin glasses. The time-dependent dynamical correlation length L(t) is determined numerically and the scaling behavior of various two-time quantities as a function of L(t)/L(s) is discussed. For disordered Ising models, deviations of L(t) from the algebraic growth law show up. The generalized scaling forms as a function of L(t)/L(s) reveal a simple aging scenario for Ising spin glasses as well as for disordered Ising ferromagnets. Speaker: Hyunhang Park (Virginia Tech) • 11:57 Time-dependent mechanical response of the cytoskeleton 12m Motivated by a series of experiments that study the response of the cytoskeleton in living cells to time-dependent mechanical forces, we investigate, through Monte Carlo simulations, a three-dimensional network subjected to perturbations. After having prepared the system in a relaxed state, shear is applied and the relaxation processes are monitored. We measure two-time quantities and discuss the possible implications of our results for relaxation processes taking place in the cytoskeleton. Speaker: Nasrin Afzal (Virginia Tech) • 12:09 Drop Formation from a Wettable Nozzle 12m Drop formation from a nozzle is a common occurrence in our daily lives. It is essential in ink-jet printers and spray cooling technology. However, most research has already been done on the pinch-off mechanism from a non-wettable nozzle. In this study, we focus on the formation of a drop from a wettable nozzle. Initially, a drop will climb the outer walls of the wettable nozzle because of surface tension. This initial upward motion is closely related to the capillary rise phenomenon. Then, when the weight of the drop becomes large enough, the force of gravity overcomes surface tension, causing the drop to fall. By changing the nozzle size and fluid flow rate, we have observed different behaviors of the droplets and developed a mathematical model that predicts the motion of the drop. Two asymptotic solutions in the initial and later stages of drop formation are then obtained and show good agreement with the experimental observations. Speaker: Brian Chang (Virginia Tech) • 10:45 12:45 HC. Medical Physics: Improving Health, Saving Lives Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Kenneth Wong (Virginia Tech) • 10:45 Image Guidance and Motion Adaptation in Radiation Therapy 30m Modern radiation therapy can achieve a very high level of conformality, meaning that the size and shape of nearly any disease site (such as a tumor) can be irradiated to a uniform dose while sparing surrounding normal tissue.
However, an inherent limitation in many treatment planning and delivery systems is that the body region under treatment is considered to be static and unchanging. This assumption is false, as there are many processes over varying time scales that change the shape, location, and size of the treatment target and surrounding tissue. Technological advances are now making it feasible to treat tumors adaptively, so that the radiation delivered is modulated in real time to match the changes in the body. These advances will enable more accurate and precise radiation treatments, which should improve cure rates and patient survival times. In this talk, I will present methods for observing the dynamic tumor, determining its changes in shape, size, and position, and delivering adaptive therapy. Speaker: Martin Murphy (Virginia Commonwealth University) • 11:15 Monitoring Electrical and Thermal Burns with Spatial Frequency Domain Imaging 30m Thermal and electrical injuries are devastating and hard-to-treat clinical lesions. The pathophysiology of these injuries is not fully understood to this day. Further elucidating the natural history of this form of tissue injury could be helpful in offering stage-appropriate therapy. Spatial Frequency Domain Imaging (SFDI) is a novel non-invasive technique that can be used to determine optical properties of biological media. We have developed an experimental apparatus based on SFDI aimed at monitoring parameters of clinical interest such as tissue oxygen saturation, methemoglobin volume fraction, and hemoglobin volume fraction. Co-registered Laser Doppler images of the lesions are also acquired to assess tissue perfusion. Results of experiments conducted on a rat model and discussions of the systemic changes in tissue optical properties before and after injury will be presented. Speaker: Jessica Ramella-Roman (Catholic University of America) • 11:45 Medical Imaging for Understanding Sleep Regulation 30m Sleep is essential for the health of the nervous system. Lack of sleep has a profound negative effect on cognitive ability and task performance. During sustained military operations, soldiers often suffer from decreased quality and quantity of sleep, increasing their susceptibility to neurological problems and limiting their ability to perform the challenging mental tasks that their missions require. In the civilian sector, inadequate sleep and overt sleep pathology are becoming more common, with many detrimental impacts. There is a strong need for new, in vivo studies of human brains during sleep, particularly the initial descent from wakefulness. Our research team is investigating sleep using a combination of magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalography (EEG). High-resolution MRI combined with PET enables localization of biochemical processes (e.g., metabolism) to anatomical structures. MRI methods can also be used to examine functional connectivity among brain regions. Neural networks are dynamically reordered during different sleep stages, reflecting the disconnect with the waking world and the essential yet unconscious brain activity that occurs during sleep. In collaboration with Linda Larson-Prior, Washington University; Alpay Ozcan, Virginia Tech; Seong Mun, Virginia Tech; and Zang-Hee Cho, Gachon University. Speaker: Kenneth Wong (Virginia Tech) • 12:15 Radiation Oncology Physics and Medical Physics Education 30m Medical physics, an applied field of physics, is the application of physics in medicine.
Medical physicists are essential professionals in contemporary healthcare, contributing primarily to the diagnosis and treatment of diseases through numerous inventions, advances, and improvements in medical imaging and cancer treatment. Clinical service, research, and teaching by medical physicists benefits thousands of patients and other individuals every day. This talk will cover three main topics. First, exciting current research and development areas in the medical physics sub-specialty of radiation oncology physics will be described, including advanced oncology imaging for treatment simulation, image-guided radiation therapy, and biologically-optimized radiation treatment. Challenges in patient safety in high-technology radiation treatments will be briefly reviewed. Second, the educational path to becoming a medical physicist will be reviewed, including undergraduate foundations, graduate training, residency, board certification, and career opportunities. Third, I will introduce the American Association of Physicists in Medicine (AAPM), which is the professional society that represents, advocates, and advances the field of medical physics (www.aapm.org). Speaker: Dr J. Daniel Bourland (Wake Forest School of Medicine) • 10:45 12:45 HD. Neutrinos Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Leo Piilonen (Virginia Tech) • 10:45 Theta_13 and Beyond 30m I will briefly review the current status of neutrino oscillation and highlight the open issues. The current generation of neutrino experiments Double Chooz, Daya Bay, T2K and NOvA have started to probe theta_13 and soon will deliver a first measurement. However, they can not test the mass hierarchy or study leptonic CP violation, therefore even larger facilities are needed. I will present the underlying physics and the various different proposals in detail. Speaker: Patrick Huber (Virginia Tech) • 11:15 The T2K Experiment 30m The T2K experiment is designed to study neutrino oscillation. In particular, it is designed to measure the final, previously unmeasured oscillation mixing angle, known as theta_13. This mixing angle is responsible for allowing muon neutrinos to oscillate to electron neutrinos. T2K features a nearly pure beam of muon neutrinos, produced at the J-PARC accelerator complex in Tokai, on the East coast of Japan. This beam travels 295 km through the earth, and emerges at the Super-Kamiokande detector, in the mountains in Western Japan, where the neutrinos are detected. At this far detector, the appearance of electron neutrinos from the nu_mu beam can indicate non-zero theta_13. Six electron neutrino candidate events were observed at Super-Kamiokande, with an expected background of 1.5. The probability of observing six or more events from just background is just 0.7%. Speaker: Joshua Albert (Duke University) • 11:45 The MINOS and NOvA Experiments 30m Massive neutrinos provide the first hints at physics beyond the standard model. Current and future neutrino experiments aim to further refine our understanding of neutrino mixing, one of the implications of neutrino mass. Two of these experiments, MINOS and NOvA, are long-baseline neutrino oscillation experiments in the Fermilab NuMI neutrino beam line. Both the currently running MINOS experiment, and the future NOvA experiment, employ two detectors, hundreds of km apart. Comparisons of the energy spectra and beam composition at the two sites yield precision measurements of neutrino oscillations for L/E ~ 500 km/GeV. 
In this talk, I will describe the two experiments, presenting updated measurements from MINOS on the probability of muon-neutrino and antineutrino disappearance as a function of energy. I will report on the MINOS measurement of neutral current interaction rates in each detector, which enables a search for light neutrino families that do not couple via the weak interaction, and I will also discuss the latest results from the search for electron-neutrino events in the MINOS Far Detector, which probes the value of the mixing angle theta_13. Finally, I will discuss the goals and status of the NOvA experiment. Speaker: Patricia Vahle (College of William and Mary) • 12:15 Neutrino Oscillations with the Daya Bay Reactor Neutrino Experiment 30m The last unknown neutrino mixing matrix element, theta_13, holds the key to lepton based CP violation and to determining the ordering of the neutrino mass states. The Daya Bay Reactor Neutrino Experiment, which has just started to take data will have the best reach in theta_13 sensitivity for the next decade. The experiment will be discussed, including current status and future prospects. Speaker: Jonathan Link (Virginia Tech) • 12:45 13:30 Lunch 45m • 13:30 15:30 JA. Astrophysics Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Prof. Michael Kavic (Long Island University) • 13:30 LENS -- A Novel Technology to Measure the Low Energy Solar Neutrino Spectrum (pp, 7Be, and CNO) 12m LENS is a low energy solar neutrino spectrometer that will measure the solar neutrino spectrum above 115 keV, >95% of the solar neutrino flux, in real time. The fundamental neutrino reaction in LENS is charged-current based capture on 115In detected in a liquid scintillator medium. The reaction yields the prompt emission of an electron and the delayed emission of 2 gamma rays that serve as a time & space coincidence tag. Sufficient spatial resolution is used to exploit this signature and suppress background, particularly due to 115In beta decay. A novel design of optical segmentation (The Scintillation Lattice or SL) channels the signal light along the three primary axes. The channeling is achieved via total internal reflection by suitable low index gaps in the segmentation. The spatial resolution of a nuclear event is obtained digitally, much more precisely than possible by common time of flight methods. Advanced Geant4 analysis methods have been developed to suppress adequately the severe background due to 115In beta decay, achieving at the same time high detection efficiency. Speaker: Derek Rountree (Virginia Tech) • 13:42 LENS Prototyping -- Construction and Deployment of MicroLENS 12m The LENS collaboration's goal is the construction of a low energy neutrino spectrometer (LENS) that will measure the entire solar neutrino spectrum above 115keV. In an effort to reach this goal we have developed a two phase prototype program. The first of these is microLENS, a small prototype to study the light transmission in the as built LENS scintillation lattice---a novel detector method of high segmentation in a large liquid scintillator detector. The microLENS prototype is currently being finished and deployed at the Kimballton Underground Research Facility (KURF) near Virginia Tech. This prototype will be the main topic of this presentation. We will present the detector construction and the methods and schemes of the program during the first phases of running with minimal channels instrumented (~41 compared to full coverage 216). 
After construction of the microLENS detector, we will finalize designs for the miniLENS prototype and have it running shortly thereafter. Speaker: Tristan Wright (Virginia Tech) • 13:54 Borexino Calibration, Precision Measurement and Seasonal Variations of the 7Be solar neutrino flux 12m Borexino, a real-time calorimetric detector for low energy neutrino spectroscopy, is located in the underground laboratories of Gran Sasso, Italy (LNGS). The experiment's main focus is the direct measurement of the 7Be solar neutrino flux of all flavors via neutrino-electron scattering in an ultra-pure scintillation liquid. After years of construction, the first data were collected in May 2007, and since then, over 740 live days have been acquired for the analysis. Years of operation and an extensive calibration campaign led by Virginia Tech have opened new fields that extend beyond Borexino's initial mission. Currently, the precision of the measurement of the 7Be line approaches an extraordinarily low level of 4%. This allows us to extract the seasonal variation of the neutrino flux, the topic I am mainly involved in at Virginia Tech; studies of such fluctuations will deliver definite evidence for the solar origin of the signal. Borexino also serves as a powerful observatory for anti-neutrinos from supernovae as well as for geo-neutrinos. The design and the detector calibration will also be covered in this discussion. Speaker: Szymon Manecki (Virginia Tech) • 14:06 Solar system tests versus cosmological constraints for f(G) models 12m Recently, some f(G) higher order gravity models have been shown to exhibit some interesting phenomenology, including a late-time cosmic acceleration following a matter-dominated deceleration period with no separatrix singularities in between the two phases. In this work, we compare the models to the solar system limits from the gravitational frequency redshift, the deflection of light, the Cassini experiment, the time delay, and the perihelion shift of planets, deriving various bounds on the model parameters. We contrast the bounds obtained with the cosmological constraints on these models, finding that the models simultaneously pass both types of constraints. Speaker: Jacob Moldenhauer (Francis Marion University) • 14:18 The Spectral Properties of Galaxies with H2O Maser Emission 12m Megamaser disk systems allow for accurate measurements of the masses of galactic supermassive black holes and precise distance determinations of extragalactic systems, but the detection rate of maser systems remains low. We investigate the optical spectral properties of a large, statistically significant sample of galaxies that host water masers in order to identify the host properties that correlate with maser emission, and thus provide efficient ways to search for new megamaser disks. We combined spectroscopic observations from the Sloan Digital Sky Survey with the sample of galaxies surveyed for water maser emission from the Megamaser Cosmology Project. We identified 46 maser detections and 1207 non-detections in the SDSS spectroscopic sample of galaxies, for which we compared black hole masses, optical spectral classifications via line ratio diagrams, extinction and reddening, electron density of the emitting gas, ages of the host stellar population and host stellar masses, emission line luminosities, and black hole accretion rates.
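As a toy illustration of the comparative statistics described in the preceding abstract, the sketch below contrasts one host property (black hole mass) between maser detections and non-detections using a two-sample Kolmogorov-Smirnov test; the input samples are synthetic placeholders, not the SDSS/Megamaser Cosmology Project measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder stand-ins for the two samples compared in the abstract:
# log10(M_BH) for maser detections and for non-detections.  In a real
# analysis these would come from the SDSS / Megamaser Cosmology Project
# cross-match; here they are synthetic numbers for illustration only.
logM_detections    = rng.normal(7.3, 0.4, size=46)
logM_nondetections = rng.normal(7.0, 0.5, size=1207)

# Two-sample Kolmogorov-Smirnov test: do the two distributions differ?
D, p = stats.ks_2samp(logM_detections, logM_nondetections)
print(f"KS statistic D = {D:.3f},  p-value = {p:.3g}")

# A p-value well below ~0.05 would indicate that the host property differs
# between maser hosts and non-hosts, i.e. it is a candidate criterion for
# refining maser surveys, as discussed above.
```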
Speaker: Nathan DiDomenico (James Madison University) • 14:30 A Proposed Theory of Everything (TOE) 12m The TOE unites all known physical phenomena from the Planck cube to the Super Universe. Each matter and force particle exists within a Planck cube and any universe object is representable by a volume of contiguous Planck cubes. The TOE unifies 16 SM, 16 Supersymmetric, 32 anti, 64 Higgs, and the super force for 129 particles. At t = 0, our universe's energy/mass consisted of super force. By t = 100 seconds, this transformed into eight permanent matter particles. Matter creation coincided with the inflationary period. Spontaneous symmetry breaking occurred for 17 matter particles including W/Z bosons and 17 associated Higgs force particles. The sum of eight permanent Higgs force energies was dark energy. Our universe and parallel universes were nested in the Super Universe. A black hole was redefined as a quark star (matter) and black hole (energy). Super supermassive (10^{24} solar masses) quark stars (matter)/black holes (energy) were to universes as supermassive (10^6 to 10^9 solar masses) quark stars (matter) were to galaxies. Information was lost in quark star/black hole formation and none was emitted as Hawking radiation. Entropy switched from maximum to minimum in the transformation "resurrecting" life. The cosmological constant problem existed because the Super Universe was a googol larger than our universe. Speaker: Antonio Colella (IBM) • 13:30 15:30 JB. Relativistic Heavy Ions at RHIC and LHC Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Soren Sorensen (University of Tennessee at Knoxville) • 13:30 Recent Results from PHENIX 30m Studying the property of quark-gluon plasma and its implication to the Big Bang model of cosmology has been the focal point of research in the field of relativistic heavy ion collisions over the past three decades. The Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory started taking data in 2000. The PHENIX Collaboration at RHIC has carried out a comprehensive study of particle production that includes baseline measurement in p+p collisions, and the measurement from d+Au, Cu+Cu and Au+Au collisions at multiple energies. This talk will focus on the most recent and exciting results from PHENIX. Speaker: Xiaochun He (Georgia State University) • 14:00 Recent Results from ALICE at LHC 30m The ALICE experiment at the Large Hadron Collider at CERN is optimized to study the properties of the hot, dense matter created in high energy nuclear collisions in order to improve our understanding of the properties of nuclear matter under extreme conditions. In 2009 the first proton beams were collided at the Large Hadron collider and since then data from proton-proton collisions at sqrt(s) = 0.7, 2.76, and 7 TeV have been taken. In 2010 the first lead nuclei were collided at 2.76 TeV. Recent results from ALICE will be presented. These results are consistent with expectations based on data available at lower energies at RHIC and the SPS, indicating that the matter created in collisions at the LHC is hotter and larger than that at lower energies and behaves like a strongly interacting, nearly perfect liquid. Speaker: Christine Nattrass (University of Tennessee at Knoxville) • 14:30 Results from PbPb Collisions Measured by the CMS Detector 30m We will survey the results obtained from the analyses of PbPb collisions taken by the CMS detector during the first heavy ion run at the LHC. 
The physics topics will include quarkonium suppression studies, the non-suppression of the electro-weak Z and photon gauge bosons, the new insights into jet suppression dynamics afforded by di-jet energy asymmetry measurements, and the extensive investigations into the multiple harmonics of hydrodynamic flow. The quarkonium results will include both the J/Psi prompt and non-prompt production yields, and the Upsilon excited state production modifications in heavy ion collisions. The discussion of the hydrodynamic flow will extend across a variety of complementary methods aimed at disentangling the flow and non-flow contributions to the observed signals. Speaker: Charles Maguire (Vanderbilt University) • 15:00 What do we know about the shear-viscosity of QCD matter? 30m The success of viscous Relativistic Fluid Dynamics (RFD) in describing hadron spectra and elliptic flow at RHIC has led to a strong interest in the transport coefficients of QCD, in particular the shear- and bulk-viscosity as well as the shear-viscosity over entropy-density ratio eta/s. In my talk I will review our current state of knowledge on the shear viscosity of QCD matter at RHIC. In particular, I will focus on the latest attempts to constrain eta/s via model-to-data comparisons, the question of whether low-viscosity matter needs to be strongly interacting in the deconfined phase, and on recent calculations of eta/s for a hadron gas in and out of chemical equilibrium. Speaker: Steffen Bass (Duke University) • 13:30 15:30 JC. Nuclear Physics II Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Paul Cottle (Florida State University) • 13:30 New Levels in 162Gd 12m We've measured prompt gamma rays from the fission fragments of the spontaneous fission of 252Cf in Gammasphere. The data from the experiment have high statistics, with 5.7 * 10^{11} triple and higher gamma coincidences. We examined levels in 162Gd in this data set, which shows very consistent I(I+1) level spacing in the yrast band. This demonstrates consistency with a rotational nucleus that has a large quadrupole deformation. This is common for nuclei in between closed spherical shells. To find new levels and gamma transitions, we looked at triple coincidence gates in the Radware software, in which we see population of yrast states up to 16+. We found new evidence for proposed collective bands in this isotope. Results will be discussed. Speaker: Brayton Doll (NBPHS, Vanderbilt University) • 13:42 Octupole correlations in Ba and Ce nuclei 12m Gamma rays from the spontaneous fission of 252Cf were measured with Gammasphere and have given great insight into the structure of neutron-rich nuclei. We have examined high-spin states and the gamma transitions associated with octupole correlations in 143-146Ba and 148Ce. Coexisting quadrupole/octupole deformation is characterized by two Delta I = 1 rotational bands with opposite parities. The states in these two rotational bands are described by a quantum number called simplex, with s^2 = (-1)^A. In 143Ba, the levels are extended to 43/2^+ with a total of six new levels along with two new transitions. In 144Ba, we have placed new levels including three E1 transitions and 8 linking transitions to the s = +1 band to give more definitive evidence for the s = -1 band. Six new levels are found in 145Ba. For 144Ba and 148Ce we have, for the first time in even-even isotopes, confirmed the spin/parity of some s = -1 levels using angular correlations. Speaker: N. T.
Brewer (Vanderbilt University) • 13:54 Neutron emission asymmetries from linearly polarized gamma rays on {nat}Cd, {nat}Sn, and 181Ta 12m Azimuthal asymmetries in neutron yields produced by bombarding targets with linearly polarized photons via (gamma,n), (gamma,2n), and (gamma,f) reactions are being investigated as a possible means of identifying various nuclear isotopes. The High Intensity gamma-ray Source (HIgS) at Duke University provides nearly monochromatic, circularly or linearly polarized gamma rays with high intensity by Compton backscattering free-electron-laser photons from stored electrons. Linearly polarized gamma rays produced by HIgS were incident on {nat}Cd, {nat}Sn, and 181Ta targets at six energies E_gamma between 11.0 and 15.5 MeV, and emitted neutrons were detected both parallel and perpendicular to the plane of polarization by an array of 18 liquid-scintillator detectors at angles in the range theta=55 deg--142 deg. Detected neutrons were distinguished from Compton-scattered photons by pulse-shape discrimination and timing cuts, and their energies (E_n) were determined using time-of-flight information over a 0.5 m flight path. The characteristic plots of R_n, the ratio of neutron counts parallel to neutron counts perpendicular to the plane of the incident gamma-ray polarization, against E_n were constructed for each value of E_gamma and theta and then compared to those for other targets studied at HIgS, including fissile nuclei 235U and 238U. Speaker: W. Clarke Smith (George Washington University) • 14:06 Sensitivity of the Reaction Cross Section Calculation in the Glauber Theory Framework to the Parameters of Random Number Generation 12m To extract the nuclear size information, the experimentally measured interaction cross-section is compared to cross-sections calculated in the framework of Glauber theory or in its various approximations. These calculations are usually performed using a Monte Carlo technique. In this paper, we discuss the sensitivity of the reaction and interaction cross-section calculations to the parameters of the Metropolis-Hastings algorithm, which is used to produce nucleon coordinates distributed according to the chosen nuclear density distribution. We evaluate the generated sequence of random nucleon coordinates using lag-1 autocorrelation, correlation of multiple data sets, and running first and second moments. We show that a non-optimal Metropolis-Hastings proposal distribution increases the uncertainty of the cross-section calculation. The obtained dependence of the accuracy of the determined nuclear density parameters on the various statistical diagnostics of the Metropolis-Hastings algorithm for the various types of nuclear density distributions is also discussed. Speaker: John Wilson (Western Kentucky University) • 14:18 From Finite Nuclei to Neutron Stars 12m We will discuss attempts to build a relativistic density functional using constraints from both finite nuclei and neutron stars. The calibration of the model will proceed through a standard minimization of a chi-square quality measure. Moreover, by studying the model-parameter landscape around the minimum, we will be able to provide meaningful theoretical error bars as well as to uncover correlations between physical observables.
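A schematic (Python) of the calibration strategy sketched in the preceding abstract: minimize a chi-square measure over model parameters, then use the curvature of the chi-square surface at the minimum to obtain error bars and parameter correlations. The model and data here are toy placeholders, not a relativistic density functional.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data and model (placeholders): an exponential fitted by chi-square.
x_data = np.linspace(0.0, 1.0, 20)
true_p = np.array([1.0, 2.5])
y_data = true_p[0] * np.exp(-true_p[1] * x_data) \
         + 0.02 * np.random.default_rng(2).normal(size=x_data.size)
sigma = 0.02 * np.ones_like(y_data)

def model(p, x):
    return p[0] * np.exp(-p[1] * x)

def chi2(p):
    return np.sum(((y_data - model(p, x_data)) / sigma) ** 2)

fit = minimize(chi2, x0=[0.5, 1.0])
p_min = fit.x

def hessian(f, p, h=1e-4):
    """Central-difference Hessian of a scalar function f at point p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = [p.copy() for _ in range(4)]
            pp[0][i] += h; pp[0][j] += h
            pp[1][i] += h; pp[1][j] -= h
            pp[2][i] -= h; pp[2][j] += h
            pp[3][i] -= h; pp[3][j] -= h
            H[i, j] = (f(pp[0]) - f(pp[1]) - f(pp[2]) + f(pp[3])) / (4 * h * h)
    return H

H = hessian(chi2, p_min)
cov = 2.0 * np.linalg.inv(H)       # covariance = 2 * (Hessian of chi2)^{-1}
err = np.sqrt(np.diag(cov))
corr = cov / np.outer(err, err)

print("best-fit parameters:", p_min.round(4))
print("1-sigma errors     :", err.round(4))
print("correlation matrix :\n", corr.round(3))
```

The factor of 2 converts the Hessian of the chi-square into a covariance matrix, since a one-sigma displacement of a single parameter corresponds to an increase of the chi-square by one.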
Speaker: Wei-Chia Chen (Florida State University) • 14:30 Neutrino oscillations: latest mixing parameters 12m Assuming three neutrinos, the neutrino oscillation mixing parameters are extracted from a global analysis of the Super-K atmospheric, MINOS disappearance and appearance neutrino, CHOOZ, T2K, KamLAND, and all solar data. MINOS anti-neutrino data is not included. The full oscillation probabilities are used so that we can address the question of the sign of theta_13. How to extract the allowed confidence-level regions without assuming Gaussian statistics is explained. The probability that theta_13 is negative will be given, as well as the probability that Double CHOOZ and Daya Bay will measure a non-zero value of theta_13. Correlations between theta_13 and theta_23 will be examined. Speaker: David Ernst (Vanderbilt University and Fisk University) • 14:42 Independent Benchmarking of a Hybrid Monte Carlo Cross Section Code 12m Understanding the effects of high-energy neutron interactions with certain materials is of considerable interest to the field of space radiation protection. Due to the expected radiation environment, neutron production and interactions with spacecraft materials will result in neutrons that can cause significant biological risk to crewmembers. For investigating incident particle interactions with target materials, an existing statistical model code (ALICE2008) was used for determining the particle spectra from a hybrid Monte Carlo simulation (HMS) of pre-compound nuclear decay. Presented is a comparison of neutron reaction cross-section results from ALICE2008 to reported values from widely accepted sources to benchmark the code for this specialized use with targets of interest. Speaker: Nathan DeLauder (University of Tennessee) • 14:54 Derivation of the Abrasion-Ablation Model Using Corrections to the Phase Function 12m The analytical abrasion-ablation model has been used for quantitative predictions of the neutron and light-ion spectra from nucleus-nucleus and nucleon-nucleus collisions. The abrasion stage of the current model is based on Glauber's multiple scattering theory and applies the small-angle approximation, which assumes the longitudinal momentum transfer for the scattering amplitude to be small, where the expansion of the scattering amplitude considers only first-order terms. However, the validity of the small-angle approximation for the current model is not clear for light ions and nucleons. In this work, we have re-derived the phase functions, chi, for the calculation of nuclear cross-sections using a perturbation approach and expanded the Fourier-Bessel arguments of the scattering amplitude in terms of Legendre polynomials, thus eliminating the small-angle approximation. We have computed the differential cross-section for various projectile-target data sets at different energies for different scattering angles and compared our results with the usual Glauber model. Speaker: Santosh Bhatt (University of Tennessee at Knoxville) • 13:30 15:30 JD. Statistical Physics Far from Equilibrium Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Henry Greenside (Duke University) • 13:30 The Emergence of Community Structure in Metacommunities 30m The role of space in determining species coexistence and community structure is well established. However, previous studies mainly focus on simple competition and predation systems, and the role of mutualistic interspecies interactions is not well understood.
Here we use a spatially explicit metacommunity model, in which new species enter by a mutation process, to study the effect of fitness-dependent dispersal on the structure of communities with interactions comprising mutualism, competition, and exploitation [1,2]. We find that the diversity and the structure of the interaction network undergo a nonequilibrium phase transition with increasing dispersal rate. A *low* dispersal rate favors spontaneous emergence of many dissimilar, strongly mutualistic and species-poor local communities. Due to the local dissimilarities, the global diversity is high. A *high* dispersal rate promotes local biodiversity and supports similar, species-rich local communities with a wide range of interactions. The strong similarity between neighboring local communities leads to reduced global diversity. [1] E. Filotas, M. Grant, L. Parrott, P.A. Rikvold, J. Theor. Biol. 266, 419 (2010). [2] E. Filotas, M. Grant, L. Parrott, P.A. Rikvold, Ecol. Modell. 221, 885 (2010). Speaker: Per Arne Rikvold (Florida State University) • 14:00 Cyclically competing species: deterministic trajectories and stochastic evolution 30m Generalizing the cyclically competing three-species model (often referred to as the rock-paper-scissors game), we consider a simple system of population dynamics that involves four species. We discuss both well-mixed systems, i.e., without spatial structure, and spatial systems on one- and two-dimensional regular lattices. Unlike the three-species model, the four species form alliance pairs which resemble partnerships in the game of bridge. In a finite system with discrete stochastic dynamics, all but four of the absorbing states consist of coexistence of a partner-pair. For the system without spatial structure, mean-field theory predicts complex time dependence of the system and that the surviving partner-pair is the one with the larger product of their strengths (rates of consumption). Beyond mean field, much richer behavior is revealed, including complicated extinction probabilities and non-trivial distributions of the population ratio in the surviving pair. For the lattice systems, we discuss the growth of domains and the related extinction events, thereby confronting our results with those obtained for the three-species case. Speaker: Michel Pleimling (Virginia Tech) • 14:30 Stochastic population oscillations in spatial predator-prey models 30m It is well established that including spatial structure and stochastic noise in models for predator-prey interactions invalidates the classical deterministic Lotka-Volterra picture of neutral population cycles. In contrast, stochastic models yield long-lived, but ultimately decaying, erratic population oscillations, which can be understood through a resonant amplification mechanism for density fluctuations. In Monte Carlo simulations of spatial stochastic predator-prey systems, one observes striking complex spatio-temporal structures. These spreading activity fronts induce persistent correlations between predators and prey. In the presence of local particle density restrictions (finite prey carrying capacity), there exists an extinction threshold for the predator population. The accompanying continuous non-equilibrium phase transition is governed by the directed-percolation universality class.
We employ field-theoretic methods based on the Doi-Peliti representation of the master equation for stochastic particle interaction models (i) to map the ensuing action in the vicinity of the absorbing state phase transition to Reggeon field theory, and (ii) to quantitatively address fluctuation-induced renormalizations of the population oscillation frequency, damping, and diffusion coefficients in the species coexistence phase. [See Preprint arXiv:1105.4242, and further refs. therein.] Speaker: Uwe TÄUBER (Virginia Tech) • 15:00 Accumulation of beneficial mutations in low dimensions 30m When beneficial mutations are relatively common, competition between multiple unfixed mutations can reduce the rate of fixation in well-mixed asexual populations. We introduce a one-dimensional model with a steady accumulation of beneficial mutations. We find a transition between periodic selection and multiple-mutation regimes. In the multiple-mutation regime, the increase of fitness along the lattice is similar to surface growth phenomena, with power-law growth, saturation of the interface width, and KPZ universality class exponents. We also find significant differences compared to the well-mixed model. In our lattice model, the transition between regimes happens at a much lower mutation rate due to slower fixation times in one dimension. Also, the rate of fixation is reduced with increasing mutation rate due to the more intense competition, and it saturates with large population size. Speaker: Jakub Otwinowski (Emory University) • 15:30 15:45 Coffee / Refreshments 15m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 15:45 17:45 - Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA • 15:45 17:45 - Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA • 15:45 17:45 KA. Superconductivity: 100th Anniversary Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Norman Mannella (University of Tennessee - Knoxville) • 15:45 A New Piece in the High T_c Superconductivity Puzzle: Fe based Superconductors 30m An overview of the historic and current developments in superconductivity will be presented. The phenomenon of superconductivity was discovered almost one hundred years ago and it is still one of the hottest research topics, providing fascinating puzzles and challenges to both theoreticians and experimentalists. There was a lag of almost 50 years between the experimental discovery of (low T_c) superconductivity and the development of the BCS theory, which explained the phenomenon in terms of pairs of electrons held together by the interaction with the phonons in the material. The quest to discover superconducting materials with higher T_c's continued quietly for many years until huge progress occurred in the 1980s, when T_c's higher than 77 K were observed in copper-oxide-based materials. The study of these new materials generated tremendous advances in both experimental and theoretical methods and much is now known about their properties; but the mechanism, i.e., the "glue," that binds the electrons together is still unknown; it appears that phonons are unable to do the job and there is controversy on whether the magnetism present in these materials helps or hurts. Very recently, in 2008, high T_c was discovered in a new family of iron-based materials. While they are similar to the cuprates in some ways, i.e., magnetism is present, there are many differences as well.
This discovery provides a new chance to unveil the high-T_c mystery and the condensed matter community is intensely working on the subject. Speaker: Adriana Moreo (University of Tennessee at Knoxville) • 16:15 High Temperature Superconductors: From Basic Research to High-Current Wires 30m In this talk, I will provide a perspective on the fundamental properties of the cuprate high-temperature superconductors (HTS), and how early and ongoing fundamental research has identified the strengths and weaknesses, and has ultimately led to the development of superconducting wires for power applications--the so-called "coated conductors." Early work on the properties of various classes of cuprate HTS materials revealed their emergent behavior as type-II superconductors, even though it was apparent that the underlying pairing mechanism is likely quite different than for conventional, electron-phonon coupled materials. From the perspective of this talk, important findings documenting the level of electronic anisotropy, basic length scales, etc., and the effects of thermal energies on vortex matter are described, especially as they relate to the ability to carry loss-free currents. It became apparent that good supercurrent conduction was achieved only along well-aligned basal planes of the structure, and enhancement of those currents could be obtained by introduction of controlled nanostructures for flux pinning. From this work, the (RE)Ba_2 Cu_3 O_7-delta emerged as the best material class for potential high-current wires, mainly because it was the least anisotropic from among those with transition temperature exceeding the boiling point of liquid nitrogen. Ultimately, much effort has been devoted to the control and optimization of nanostructural modifications to the materials, at a size range and spacing that should be tailored to match the magnetic vortex array. The description, success, and consequences of these efforts will be presented. Speaker: Dr David Christen (Oak Ridge National Laboratory) • 16:45 New challenges and opportunities for high-T_c superconducting materials 30m Since its discovery 100 years ago, superconductivity has captured the imagination of many as a fascinating physical phenomenon which would enable the drastic reduction of energy waste in the electric power grid, high field magnets and large accelerators. Understanding the physics of superconductivity has been advancing along with the discovery of many superconducting materials and tuning their properties. In this talk I will give a brief overview of how the physics of unconventional superconductivity turned out to be intertwined with materials properties, with the emphasis on high-T_c cuprates and the recently discovered Fe-based superconductors. One of the lessons of the last 20 years is that high critical temperatures and upper critical magnetic fields of unconventional superconductors are no longer the main parameters of merit for power applications, which can also be important for the ongoing quest for higher-T_c materials. Speaker: Alexander Gurevich (Old Dominion University) • 17:15 Evolution of spin excitations in high-temperature iron-based superconductors 30m In this Talk, I describe the most recent progress in the field of iron-based superconductors. Using neutron scattering as a probe, we study the spin wave excitations in BaFe2As2 and RbFe1.6Se2, and its electron/hole doping evolution of the spin excitations. 
We find that the effective next-nearest-neighbor (NNN) exchange interactions for different families of materials are rather similar, thus demonstrating that the common feature for superconductivity is associated with the NNN exchange interactions in these materials. These results suggest that spin excitations are the most promising candidate for electron pairing and superconductivity in iron-based superconductors, regardless of their original antiferromagnetic ordering status and electronic structure. Speaker: Pencheng Dai (University of Tennessee at Knoxville / ORNL) • 15:45 17:45 KD. Panel Discussion: The Under-Represented Majority Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Panelists: Theda Daniels-Race, Associate Professor of Physics, Louisiana State University; David Ernst, Professor of Physics, Vanderbilt University; Ronald E. Mickens, Distinguished Fuller E. Callaway Professor, Clark University; Christine Nattrass, Postdoctoral Researcher, University of Tennessee at Knoxville Convener: Roxanne Springer (Duke University) • 15:45 The Under-Represented Majority 2h Submit your question/comment in the box at the registration desk or at http://www.surveymonkey.com/s/9SGRXS9. Speaker: Roxanne Springer (Duke University) • 18:00 20:00 LA. Poster Session Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 18:00 A modified thermodynamic model for estimating the secondary particle source radius, and coalescence radius, in heavy ion collisions 2h In an abrasion-ablation model of high energy heavy ion collisions, as the extremely hot and dense participating region expands and cools off, light high-energy particles are emitted in the sphere regions where the relative momentum of the nucleons is less than the coalescence radius in momentum space. The probability of the light particle emission and the source radius of the region emitting these light particles may be related with a thermodynamic coalescence model. At high beam energies, the Coulomb repulsion does not affect our thermodynamic coalescence model estimates; however, at energies below 25 MeV/nucleon, the Coulomb repulsion must be considered. The objective of our study is to estimate the emitting source radius and the coalescence radius at beam energies less than 25 MeV/nucleon by using a modified thermodynamic coalescence model which includes Coulomb repulsion. The coalescence radius is inversely proportional to the emitting source radius. Emitting source radii and coalescence radii for light energetic particles from many sets of systems are estimated for both symmetric and asymmetric systems. Speaker: Mahmoud PourArsalan (University of Tennessee at Knoxville) • 18:00 A Muon Tomography Station with GEM Detectors for Nuclear Threat Detection 2h Muon tomography for homeland security aims at detecting well-shielded nuclear contraband in cargo and imaging it in 3D. The technique exploits multiple scattering of atmospheric cosmic-ray muons, which is stronger in dense, high-Z nuclear materials, e.g., enriched uranium, than in low-Z and medium-Z shielding materials. We have constructed and operated a compact Muon Tomography Station (MTS) that tracks muons with six to ten 30 cm x 30 cm Triple Gas Electron Multiplier (GEM) detectors placed on the sides of a 27-liter cubic imaging volume. The 2D strip readouts of the GEMs achieve a spatial resolution of ~ 130 um in both dimensions and the station is operated at a muon trigger rate of ~ 20 Hz.
The 1,536 strips per GEM detector are read out with the first medium-size implementation of the Scalable Readout System (SRS) developed specifically for Micro-Pattern Gas Detectors by the RD51 collaboration at CERN. We discuss the performance of this MTS prototype and present experimental results on tomographic imaging of high-Z objects with and without shielding. Speaker: Michael Staib (Florida Institute of Technology) • 18:00 A New Viewpoint (The expanding universe, Dark energy and Dark matter) 2h Just as the relativity paradox once threatened the validity of physics in Albert Einstein's days, the cosmos paradox, the galaxy rotation paradox and the experimental invalidity of the theory of dark matter and dark energy threaten the stability and validity of physics today. These theories and ideas and many others, including the Big Bang theory, all depend almost entirely on the notion of the expanding universe, Edwin Hubble's observations and reports and the observational inconsistencies of modern day theoretical Physics and Astrophysics on related subjects. However, much of the evidence collected in experimental Physics and Astronomy aimed at proving many of these ideas and theories is ambiguous, and can be used to prove other theories, given a different interpretation of its implications. The argument offered here is aimed at providing one such interpretation, attacking the present day theories of dark energy, dark matter and the Big Bang, and proposing a new Cosmological theory based on a modification of Isaac Newton's laws and an expansion on Albert Einstein's theories, without assuming any invalidity or questionability on present day cosmological data and astronomical observations. Speaker: Daniel Cwele • 18:00 A Search for Astrophysical Meter Wavelength Radio Transients 2h Astrophysical phenomena such as exploding primordial black holes (PBHs), gamma-ray bursts (GRBs), compact object mergers, and supernovae are expected to produce a single pulse of electromagnetic radiation detectable in the low-frequency end of the radio spectrum. Detection of any of these pulses would be significant for the study of the objects themselves, their host environments, and the interstellar/intergalactic medium. Furthermore, a positive detection of an exploding PBH could be a signature of an extra spatial dimension, which would drastically alter our perception of spacetime. However, even upper limits on the existence of PBHs, from searches, would be important to discussions of cosmology. We describe a method to carry out an agnostic single dispersed pulse search, and apply it to data collected with ETA. Applying the single pulse search procedure to 30 hours worth ETA data yielded no compelling detections with S/N >= 6. However, with ~ 8 hours of interference free data, we find an observational upper limit to the rate of exploding PBHs r ~ 8 x 10^{-8} pc^{-3} y^{-1} for a PBH with a fireball Lorentz factor gamma_f = 10^{4.3}. Speaker: Sean Cutchin (Virginia Tech) • 18:00 A statistical analysis of the environments of extragalactic water masers 2h Water megamasers provide crucial tools for accurate determinations of masses of black holes lurking in galaxy centers, and of extragalactic distances without the need for indirect cosmological assumptions. Current searches have detected masers in only 3 -- 4% of the galaxies surveyed and require refinement of their survey criteria. 
Motivated by current models linking galaxy environment and black hole accretion and the possibility that maser activity correlates with black hole accretion, we undertook a study of the properties of the small-scale environments of galaxies hosting masers. Using samples of maser detections and non-detections provided by the Megamaser Cosmology Project together with SDSS DR7 photometric and spectroscopic observations, we performed a comparative analysis of near-neighbor statistics that include distances to first and third neighbors, neighbor counts, and color distributions for both flux-limited and absolute-magnitude-limited volumes. We present results that provide potential constraints for maser surveys, which may increase their detection rate. Speaker: Thomas Redpath (James Madison University) • 18:00 A study of the chiro-optical properties of Carvone 2h The intrinsic optical rotatory dispersion (IORD) and circular dichroism (CD) of the conformationally flexible carvone molecule have been investigated in 17 solvents and compared with results from calculations for the "free" (gas phase) molecule. The G3 method was used to determine the relative energies of the six conformers. The ORD of (R)-(-)-carvone at 589 nm was calculated using coupled cluster and density-functional methods, including temperature-dependent vibrational corrections. Vibrational corrections are significant and are primarily associated with normal modes involving the stereogenic carbon atom and the carbonyl group, whose n -> pi^* excitation plays a significant role in the chiroptical response of carvone. However, without the vibrational correction, the calculated ORD is of opposite sign to that of the experiment for the CCSD and B3LYP methods. Calculations performed in solution using the PCM model were also opposite in sign to that of the experiment when using the B3LYP density functional. Speaker: Jason Lambert (University of Tennessee) • 18:00 Acoustic measurement of the granular density of states 2h Measurements of the vibrational density of states (DOS) in glasses reveal that an excess number of low-frequency modes, as compared to the Debye scaling seen in crystalline materials, is associated with a loss of mechanical rigidity. An excess number of modes has also been observed experimentally in colloids and in simulations of idealized granular materials near the jamming point. However, there have not been any experimental measurements in an athermal granular system. We experimentally probe the material by mimicking thermal motion with acoustic waves, thereby allowing us to measure a DOS-like quantity by analogy with conventional solid-state techniques. Our system is made up of two-dimensional photoelastic disks, which allow visualization of the internal force structure, and a voice-coil driver provides a white-noise signal to excite a broad spectrum of vibrations. The sound is then detected with piezoelectric sensors embedded inside a subset of the particles. These measurements give us the particle velocities, from which we are able to compute a DOS by taking the Fourier transform of the velocity autocorrelation function. We measure this DOS as a function of the confining pressure and degree of disorder, and find that the peak in the density of states shifts to higher frequency as the system pressure is increased. Speaker: Eli Owens (North Carolina State University) • 18:00 Acoustic Radiation from Smart Foam for Various Foam Geometries 2h Smart foam is an emerging active-passive noise control technology with many applications.
Smart foam consists of passive foam with an embedded curved piezoelectric (PZT) film. We experimented with three geometries of varying film curvatures and a constant cross-sectional area of 58 cm^2, constructed using melamine foam covered with 28 um thick polyvinylidene fluoride (piezoelectric) films with Cu-Ni surface electrodes. An AC voltage provided by a signal generator and amplifier drives the smart foam. An omnidirectional microphone mounted at a distance of 100 mm from the foam surface measured the sound level (dB) and harmonic distortion generated by the smart foam. Experiments were repeated for voltages of 40 V-140 V and frequencies of 300 Hz-2000 Hz. The results show that the sound level generated by the smart foams has a characteristic frequency response common to all geometries and a peak sound level between 900 and 1,100 Hz.
Speaker: Nishkala Shivakumar (North Carolina School of Science and Mathematics and North Carolina A&T State University)
• 18:00 Afterglow photometry and Modeling GRB 091018 2h
We focus on continuing the modeling of GRB (Gamma-ray Burst) 091018. Our data were mostly collected across 4 bands (BVRI) from PROMPT (Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes) approximately 4.1 hours after the trigger. We have added NIR, UVOT, X-ray, and more optical points to our datasets. After rejecting the original assertion of dust evolution by linking extinction parameters with Galapagos (software that employs genetic algorithms to output the best-fit model with our circum-burst GRB parameters), we have settled on a model with the circumburst density index k at -1.75 (which is close to the wind-blown medium of k = -2). In addition to k, the results of our baseline fit indicate that the cooling break is above the data, and may be crossing the synchrotron peak during the early UVOT data. This cross-over will yield interesting results about the circumburst medium of a GRB at early times. Photometry of GRBs in real time was also conducted, along with instrumentation techniques.
Speaker: Apurva Oza (University of North Carolina at Chapel Hill)
• 18:00 Analysis of Carbon Nanotubes and Graphene Nanoribbons with Folded Racket Shapes 2h
When carbon nanotubes and graphene nanoribbons become long, they may self-fold and form tennis racket-like shapes. This phenomenon is analyzed in two ways by treating a nanotube or nanoribbon as an elastica. First, an approach from adhesion science is used, in which the two sides of the racket handle are assumed to be straight and bonded together with constant or no separation. New analytical results are obtained involving the shape, bending energy, and adhesion energy of the self-folded structures. These relations show that the dimensions of the racket loop are proportional to the square root of the flexural rigidity. The second analysis uses the Lennard-Jones potential to model the van der Waals forces between the two sides of the racket. A nanoribbon is considered, and the interatomic forces are integrated along the length and across the width of the nanoribbon. The resulting integro-differential equations are solved using the finite difference method. The racket handle is found to be in compression and the separation between the two sides of the racket handle decreases in the direction of the racket loop. The results for the Lennard-Jones model approximately satisfy the relationship between the dimensions and the flexural rigidity found using the adhesion model.
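The square-root scaling reported above can be illustrated with a brief numerical sketch. The snippet below is hypothetical and is not the speaker's analysis; the prefactor and the adhesion-energy value are placeholders, and the only point is that the loop dimension scales as sqrt(EI/gamma) for a bending stiffness EI and an adhesion energy per unit length gamma.

```python
# Hypothetical sketch (not the speaker's analysis): illustrate the reported scaling that
# the self-folded "racket" loop size grows as the square root of the flexural rigidity.
# The dimensionless prefactor and the adhesion energy value are placeholders.
import math

def racket_loop_size(flexural_rigidity_J_m, adhesion_energy_J_per_m, prefactor=1.0):
    """Characteristic loop dimension ~ prefactor * sqrt(EI / gamma)."""
    return prefactor * math.sqrt(flexural_rigidity_J_m / adhesion_energy_J_per_m)

gamma = 2e-10  # assumed adhesion energy per unit length (J/m), placeholder
for EI in (1e-25, 4e-25, 1.6e-24):   # flexural rigidity values (J m), placeholders
    print(f"EI = {EI:.1e} J*m  ->  loop size ~ {racket_loop_size(EI, gamma):.2e} m")
# Quadrupling EI doubles the loop size, consistent with the square-root scaling above.
```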
Speaker: Andy Borum (Virginia Tech) • 18:00 Calculation of Stationary, Free Molecular Flux Distributions in General 3D Environments 2h This article presents an application of the angular coefficient method for diffuse reflection to calculate stationary molecular flux distributions in general three dimensional environments. The method of angular coefficients is reviewed and the integration of the method into Blender, a free, open-source, 3D modeling software package, is described. Some example calculations are compared to analytical and Direct Simulation Monte Carlo (DSMC) results with excellent agreement. Speaker: Jesse Labello (University of Tennessee Space Institute) • 18:00 Characterization of large-scale velocity fluctuations in the Princeton MRI experiment 2h The Princeton MRI Experiment is a modified Taylor-Couette device that uses GaInSn as its working fluid. An Ultrasonic Doppler Velocimetry (UDV) system allows the measurement of internal fluid velocities. Starting from both hydrodynamically stable and unstable background flow states, prior work has demonstrated the existence of large-scale, large-amplitude, coherent, nonaxisymmetric velocity fluctuations when a sufficiently strong magnetic field is applied. Characterizations of these oscillations are made by looking at the dominant fluctuations in the azimuthal and radial velocity field components and matching these features to different model velocity profiles. These profiles are calculated by starting with a model azimuthal and radial flow and calculating the vertical term in the continuity equation. The relative magnitudes of the calculated azimuthal and radial flows are compared to experimental UDV data to determine the validity of the model. Additional calculated properties such as final velocity current density profiles will be presented. Speaker: William Love (Virginia Tech) • 18:00 Collective excitations in a spinor condensate 2h Bose Einstein Condensates (BECs) confined in a trap allow us to study the excitation between eigenfunctions of a given trap potential, which can be directly calculated from quantum mechanics. Here we study the spinor collective excitations, in other words, the collective excitations of different spin components. Specifically, the spinor collective modes in a 3D harmonic trap will be presented. Moreover, different types of collective excitations in this trap, collective mode mixing as well as their applications will be discussed. Speaker: Jianing Han (Hollins University) • 18:00 Comparison of Top-Antitop Cross Section Measurement Analyses by SHyFT and Simple Counting Method 2h Analysis of top events at the CMS (Compact Muon Solenoid) experiment is tested by subjecting a single dataset to both the simple counting method and the newer Simultaneous Heavy Flavor and Top (SHyFT) cross section measurement. Respective statistical and systematic errors associated with the data are then compared. The results of the SHyFT analysis have much smaller overall uncertainties. Speaker: Erin Chambers • 18:00 Detector Performance in the SLHC era at CMS 2h The future upgrade in instantaneous luminosity at the Large Hadron Collider, the Super LHC, introduces challenging demands on existing and future instrumentation at the Compact Muon Solenoid experiment. The increased particle and radiative flux, especially in the forward regions, requires extensive study to understand aging effects of the detector and any future materials to be considered. 
Additionally, with increased luminosity, the incidence of multiple events in a single beam crossing poses difficulties in detector performance and energy resolution in the calorimeter sub-system. This poster presents the University of Virginia's efforts in understanding these effects in the SLHC era, with a focus on the electromagnetic calorimeter subsystem.
Speaker: Brian Francis (University of Virginia)
• 18:00 Dianion formation from anion-alkali metal charge exchange reactions: TCNQ- + Na --> TCNQ-- + Na+ 2h
The interaction of an electron with an anion is characterized by a long-range Coulomb repulsion and a short-range polarizability attraction giving rise to a Coulomb barrier. The permanent addition of an extra electron to a negatively charged anion requires tunneling through the barrier or attachment of the electron over the top of this Coulomb barrier followed by disposal of the excess energy. Charge-exchange collisions of an anion with an alkali atom utilize the latter channel to produce permanent dianions with cross sections of ~ 1 Angstrom^2. We have previously examined the reaction TCNQ-F_4^- + Xe -> TCNQ-F_4^{--} + Xe^+ and reported a delayed threshold and quantum phase interference effects in the charge exchange cross section. [1] Employing sodium as the collision partner, the cross section is seen to increase with decreasing energy with a threshold below 180 eV (com). A new apparatus has been constructed to allow measurements down to energies below the expected threshold (~ 41 eV, laboratory energy based upon a 1 eV second electron affinity). This method has been used to study the reaction TCNQ^- + Na -> TCNQ^{--} + Na^+ and will provide one of the first measurements of second electron affinities for molecular anions. [1] S. Yu. Ovchinnikov et al., Phys. Rev. A 73, 64704 (2006).
Speaker: Byron Smith (University of Tennessee)
• 18:00 Discovery of Isotopes 2h
Although a few thousand isotopes have been discovered, the limits of existence are known only for the lightest elements. Unfortunately, there has not been a comprehensive compilation of all the discoveries. A project has been undertaken to find all of the first discovery papers. Claims of discoveries were investigated and verified, and first publications are listed at http://www.nscl.msu.edu/~thoennes/2009/discovery.htm. In this project, I investigated isotopes with 66 <= Z <= 70 and 81 <= Z <= 98.
Speaker: Cathleen Fry
• 18:00 Dissociative Electron-Ion Recombination of the Protonated Interstellar Species Glycolaldehyde, Acetic Acid, and Methyl Formate 2h
Recently, the prebiotic molecule and primitive sugar glycolaldehyde and its structural isomers acetic acid and the abundant methyl formate have been detected in the interstellar medium (ISM). Understanding the processes involving these molecules is vital to understanding the ISM, where stars are formed. The rate constants, alpha_e, for dissociative electron-ion recombination of protonated glycolaldehyde, (HOCH_2CHO)H^+, and protonated methyl formate, (HCOOCH_3)H^+, have been determined at 300 K in a variable-temperature flowing afterglow using a Langmuir probe to determine the electron density. The alpha_e at 300 K are 3.2 x 10^{-7} cm^{3} s^{-1} for protonated methyl formate and 7.5 x 10^{-7} cm^{3} s^{-1} for protonated glycolaldehyde. The alpha_e of protonated acetic acid could not be directly measured due to difficulty in producing the ion, but it appears to have a recombination rate constant, alpha_e, on the ~ 10^{-7} cm^{3} s^{-1} scale.
Additional temperature dependence information was obtained. The astrochemical implications of the alpha_e measurements and protonation routes are also discussed.
Speaker: Patrick Lawson (University of Georgia)
• 18:00 e/m Experiment Analysis Refinement 2h
Thomson's e/m experiment is widely popular in undergraduate courses to help gain an understanding of the properties of the electron. Our results using a standard apparatus, however, reveal significant systematic errors. We examine possible reasons for the discrepancy with the aim of modeling effects that were not included in the original analysis. We conclude that the energy loss of the electron beam as it travels through the helium and the distortion of the beam radius measurement by the curved glass of the tube are the two factors which dominate the discrepancy.
Speaker: Michael Harmon (Erskine College)
• 18:00 Electrical Characterization of Zn and ZnO Nanowires Grown on PEDOT:PSS Conductive Polymer Thin Films by Physical Vapor Deposition 2h
Physical vapor deposition (PVD) techniques offer tremendous possibilities for easy fabrication of nanostructure arrays for use in thin film electronics. In this study we examine inorganic/organic heterojunctions produced by growing conductive Zn and semiconductive ZnO nanowire arrays on organic conductive PEDOT:PSS polymer thin films using simple and cost-effective PVD methods. Understanding the electrical properties of these hybrid films is of particular interest for applications in organic electronics. However, traditional systems for measuring conductivity and resistivity of thin films by the van der Pauw method prove problematic when dealing with soft polymeric surfaces. We present here electrical studies of ZnO- and Zn-nanowire/PEDOT:PSS heterojunctions using a modified 2-point probe method constructed from inexpensive and easily available materials.
Speaker: Matthew Chamberlin (James Madison University)
• 18:00 Electronic transport in semiconductors 2h
The ultimate goal of this work is the Monte-Carlo simulation of electronic transport in semiconductors. As a special case, the effect of the adsorbed surface charge on conductivity in the ambient air was investigated. The classical equation of electronic transport for semiconductors must be solved numerically since the analytical solution can be derived only for a limited number of relatively simple cases. There are several numerical methods to describe the electronic transport in semiconductors. The one-particle Monte Carlo simulation is a widely used technique for obtaining the exact solution of the Boltzmann Transport Equation (BTE). During the simulation several assumptions were made: the electron is a particle and its motion can be described by classical mechanics equations, the only interactions the electrons have are those with ions, the collisions/scattering of electrons with ions are elastic, and the outside electric field is uniform inside of the semiconductor device. The quantity of interest in the simulation is the current density. The current density was calculated as an integrated result from contributions of individual paths of electrons as they travel from one ohmic contact to another. The simulation can also be used to predict the electronic transport under the influence of nonuniform electric and magnetic fields. The special case of oxygen adsorption was investigated in this work. It was found that an increase in the oxygen concentration in the ambient air can decrease the conductivity of some semiconductor materials.
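As an illustration of the one-particle Monte Carlo approach described above, a minimal sketch might look like the following. This is not the speaker's code; it assumes a uniform field, a constant elastic scattering rate, and placeholder material parameters, and it estimates a drift velocity and the corresponding current density.

```python
# Illustrative sketch (not the speaker's code): a minimal one-particle Monte Carlo
# estimate of the drift velocity and current density for electrons in a uniform field,
# assuming a constant elastic scattering rate that randomizes the velocity direction.
# A real simulation would add inelastic processes; all parameters are placeholders.
import numpy as np

Q = 1.602e-19              # elementary charge (C)
M_EFF = 0.26 * 9.109e-31   # assumed effective mass (kg)
TAU = 1e-13                # assumed mean free time between elastic collisions (s)
E_FIELD = 1e5              # applied field along x (V/m)
N_CARRIERS = 1e22          # assumed carrier density (m^-3)

def drift_velocity(n_collisions=100_000, seed=0):
    """Time-averaged velocity along the field over many free flights."""
    rng = np.random.default_rng(seed)
    v = np.zeros(3)                      # start at rest
    vx_time_integral = 0.0
    total_time = 0.0
    a = -Q * E_FIELD / M_EFF             # acceleration of the (negative) electron
    for _ in range(n_collisions):
        dt = rng.exponential(TAU)        # free-flight duration
        vx_time_integral += v[0] * dt + 0.5 * a * dt**2
        total_time += dt
        v[0] += a * dt
        # elastic, isotropic scattering: keep the speed, randomize the direction
        speed = np.linalg.norm(v)
        u = rng.normal(size=3)
        v = speed * u / np.linalg.norm(u)
    return vx_time_integral / total_time

vd = drift_velocity()
current_density = -Q * N_CARRIERS * vd   # J = (-e) n v_drift  (A/m^2)
print(f"drift velocity ~ {vd:.3e} m/s, current density ~ {current_density:.3e} A/m^2")
```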
Speaker: Alexander Larin • 18:00 Emission Spectroscopy of RF Helicon Heated Plasmas 2h In order to study plasma-material interfaces under high power and particle flux, large linear machines are being constructed that can effectively simulate conditions that will be found in fusion-grade toroidal devices such as ITER and DEMO. A 15 cm diameter, 1.5 m long linear machine has been built at ORNL using a new helicon antenna designed for input powers up to 100 kW, producing a plasma that will be used to bombard material targets. Visible spectroscopy has been used to measure emission line spectra of the helicon heated plasma from 200 nm to 1100 nm at low resolution. The spectrometer is thoroughly calibrated for wavelength and intensity in order to determine electron density and temperature using the ratios of spectral line intensities. A variety of gas species have been heated, including hydrogen, deuterium and helium. Residual amounts of foreign materials can be monitored near the plasma-wall interface. Results on how magnetic field scans, probe scans, and power scans affect the plasma will be analyzed and presented. Speaker: Tim Younkin (University of Tennessee at Knoxville) • 18:00 First Neutrino Results from the NOvA Near Detector 2h The NOvA collaboration is building a long-baseline neutrino spectrometer optimized to study the appearance of electron neutrinos in a muon neutrino beam. A full-sized prototype of the Near Detector has been fabricated on the surface and is presently taking data with the Fermilab NUMI neutrino beam. A description of the Near Detector will be given and its performance will be shown. Speaker: Zukai Wang (University of Virginia) • 18:00 Identifying Electromagnetic Events in the Forward Hadron Calorimeter 2h The Forward Hadron Calorimeter (HF) of the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) lies in a region not covered by an inner tracking system, and we can rely only on the shapes of showers that hit the HF to determine whether or not they are due to electromagnetic particles. We review the current method of distinguishing shower types in the HF, and we bring attention to a drawback that will become present as the luminosity of the LHC increases and creates a need for tighter shower-shape cuts. We provide a method to correct this drawback, and we analyze the effectiveness of various tight cuts at isolating signal from background. Speaker: Christopher Frye (University of Central Florida) • 18:00 It may be possible to use Capillary Action as a Cooling method 2h It is well known that it takes no work for water to rise in a Capillary tube. It only takes work for the water to be removed from the top of the tube. It may be possible for this water to be removed using individual photons of the size needed to break the water to water hydrogen bond. This bond is often broken in evaporation of water from surfaces. As this bond is broken at the top of the Capillary tube the water makes a phase transition and makes room for another water molecule to move up the column. The phase transition cools the column and another molecule moves up the column with no work being done. There is a net energy loss in this system, and the entire system is cooled. This may be one of the mechanisms that plants use to cool themselves and the soil around the plant. This mechanism may be used to explain the slight temperature regulating effect of plants and the areas around large plant populations. 
Photons of other sizes may also be used in this mechanism if there are the proper molecules (Chlorophyll for instance) in a chain reaction linked to this mechanism. This chimney-like effect could also be used as a precise balancing method to transport materials based on mass and chemical composition, like a chromatograph. The "Einstein Refrigerator" can be viewed as a similar idea.
Speaker: Richard Kriske (University of Minnesota)
• 18:00 LabView ALPHA Immersion at Reed College 2h
During the summer of 2011, ALPHA (Advanced Physics Laboratory Association) hosted a series of laboratory immersion experiences in which faculty could spend several days working closely with a mentor on an advanced undergraduate experiment. The goal of this program is to foster wider implementation of these experiments at the undergraduate level. One of these immersions took place at Reed College and focused on the use of LabVIEW software in undergraduate physics laboratories. This was an extremely valuable laboratory experience. The immersion experience and the LabVIEW projects will be described.
Speaker: R. Seth Smith (Francis Marion University)
• 18:00 Lifetime Performance Studies on Vacuum Photo-Triodes in the ECAL at CMS 2h
The electromagnetic calorimeter (ECAL) is a crucial sub-detector of the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC). It uses scintillation light from approximately 83,000 Lead Tungstate (PbWO_4) crystals to make precision measurements of high-energy photons and electrons. In the endcaps of the ECAL this scintillation light is collected at the rear of the crystal and converted to an analog electric current with radiation-hard, single-stage photomultipliers known as Vacuum Photo-Triodes (VPTs). The response of the VPTs is dependent on several effects including orientation within the magnetic field, calibration and scintillation light exposure rates, and time between successive exposures. The High Energy Physics group at the University of Virginia (UVa) uses a 3.8 T large-bore superconducting solenoid magnet to simulate conditions at the LHC and to study the long-term behavior of these VPTs under various light and magnetic field conditions. Using the ECAL laser and LED calibration system, UVa is also able to study the response of the VPTs in situ at the CMS detector in order to understand and quantitatively assess the performance of the VPTs over time. Herein we will report on these remote and in-situ studies of VPT characteristics and performance.
Speaker: John Wood (University of Virginia)
• 18:00 Magnetization Dynamics in Magnetic Nanoparticle Chains 2h
Magnetic nanoparticles (MNPs) exhibit superparamagnetism when the energy changes due to thermal fluctuations (~ k_B T) are comparable to or larger than the anisotropy potential barrier KV. Thermal fluctuations produce frequent magnetization reversals in such a situation, causing the net MNP magnetization to approach zero. If thermal oscillations are relatively small, the odds of magnetization reversal diminish significantly, implying that an MNP is permanently magnetized. In this study we explore the influence of the magnetostatic coupling of moments in neighboring MNPs in an idealized two-particle system. The anisotropic nature of such coupling adds to the magnetocrystalline anisotropy to augment the potential barrier for magnetization reversal. A two-particle system of MNPs therefore has a more stable magnetization than an isolated particle. This is analyzed by a scaling analysis of the interaction energies concerned.
Numerical simulations of magnetization dynamics of MNPs using a stochastic form of the Landau-Lifshitz-Gilbert equation confirm the hypothesis. The phenomenon is explored to determine a range of radii within which an MNP exhibits superparamagnetism in isolation while forming permanently magnetized chains upon self-assembly.
Speaker: Suvojit Ghosh (Virginia Tech)
• 18:00 Mid-infrared Molecular Emission Studies from Energetic Materials using Laser-Induced Breakdown Spectroscopy 2h
Laser-induced breakdown spectroscopy (LIBS) is a powerful diagnostic tool for detection of trace elements by monitoring the atomic and ionic emission from laser-induced plasmas. The laser-induced plasma was produced by focusing a 30 mJ pulsed Nd:YAG laser (1064 nm) to dissociate, atomize, and ionize target molecules. In this work, LIBS emissions in the mid-infrared (MIR) region were studied for potential applications in chemical, biological, and explosives (CBE) sensing. We report on the observation of MIR emissions from energetic materials (e.g., ammonium compounds) due to laser-induced breakdown processes. All samples showed LIBS-triggered oxygenated breakdown products as well as partially dissociated and recombination molecular species. More detailed results of the performed MIR LIBS studies on the energetic materials will be discussed at the conference.
Speaker: Ei Brown (Hampton University)
• 18:00 Modeling of CVD Diamond Detectors 2h
Diamond's properties make it a prime candidate for future use in particle detectors such as at the Compact Muon Solenoid at the LHC. Diamond is radiation hard, has a high thermal conductivity, and has a large bandgap. When a fast-moving particle passes through the diamond, ionization occurs, leaving a trail of charge carriers in the diamond. By applying an external electric field, these secondary particles are induced to move towards the electrodes. The movement of these charge carriers induces a current, which can be measured. This is the detection mechanism for diamond detectors. A simulation of this detection mechanism was created using GEANT, a platform developed by CERN for simulating the passage of particles through materials. The program uses Monte-Carlo methods to simulate the ionization process through the material. It is capable of tracking each secondary produced. By using this information and the Shockley-Ramo theorem, we are able to simulate the detection mechanism.
Speaker: Travis Tune (University of Tennessee at Knoxville)
• 18:00 Modeling of the pressurized xenon gamma ray scintillation detector 2h
We are developing a high-pressure xenon detector for photon measurements. Xenon produces electroluminescence (EL) scintillation emission that we use as the primary signal in our strategy to acquire information. The detector consists of a high-pressure chamber, a thin radiation input window with the supporting grid of collimator ribs and electrode grids to create the electric field, and a photosensor -- the large-area silicon avalanche photodiode. The electrode grids are made of thin wire. The modeling of the electric field is a crucial step in developing a working prototype. It has been previously shown that the uniform electric field divided by the number density of xenon gas needs to be above approximately 3 Td to give enough energy to ionize the xenon atoms, but less than 16 Td to prevent electron avalanches from occurring. The electric field was modeled using Comsol Multiphysics.
This presentation discusses the results of electric field modeling for the detector (absorption, drift, and EL regions). Speaker: Romney Meek (Western Kentucky University) • 18:00 Network Theoretical Approach to Partitioning of Real Power Grids 2h Power grids are innately susceptible to electrical faults. Here we present various network-theoretical approaches to achieve intentional intelligent islanding of a power grid in order to limit cascading power failures in case such a fault occurs. The methods we use can partition networks into communities with local generating capacity. Here we discuss results of using spectral matrix methods along with Monte Carlo methods to analyze and partition the Floridian and Italian high-voltage power grids, as well as the power distribution system for a conceptual all-electric naval vessel. We contrast the effects of approximating the generating capacity of generators according to degree of the generators versus using actual generating capacities. Speaker: Brett Israels (Florida State University) • 18:00 Neutron Photoproduction from 139La Using 12-15 MeV Linearly Polarized gamma-Rays 2h Data have been collected at the High Intensity gamma-ray Source (HIgS) to investigate neutron emission from a 139La target with linearly polarized gamma rays at E_gamma = 12, 13, 14, and 15 MeV. Liquid scintillator detectors were placed at scattering angles of 55 degrees, 90 degrees and 125 degrees above, below and to the left and right of the target. Six additional detectors were placed at angles of 72 degrees, 107 degrees, and 142 degrees above and to the right of the target. The ratio of neutron yields parallel to neutron yields perpendicular to the plane of polarization observed as a function of E_n, E_gamma, and theta characterizes the response of the nucleus and may prove to be a useful observable in nuclear forensics. The results of the experiment will be discussed. Speaker: R. K. Thrasher (James Madison University) • 18:00 Neutron Photoproduction from {Nat}Hg Using 11-15 MeV Linearly Polarized gamma-Rays 2h Speaker: J. Hauver (James Madison University) • 18:00 On-line Java Tools for Analyzing AGN Outflows 2h We present six interactive programs created to aid in the analysis of outflows from Active Galactic Nuclei. 1. An interactive plot showing the ionic fraction versus the ionization parameter, for each ion of several elements and for different SEDs. 2. An interactive plot showing the excitation ratio versus electron number density for several elements. 3. A tool for finding the ionization parameter solution from the measured column densities. The user provides the measured ionic column densities and chooses an SED. Then the program displays the locus of possible models in a plot of Hydrogen column density versus ionization parameter. The program also calculates and overlays a chi-squared map for one- or two-ionization parameter solutions. 4. A spectral identification tool displays a spectrum, and allows the user to interactively identify the absorption features. This will give the redshift of each outflow and intervening system along the line of sight to the quasar. 5. Two calculators a) Calculate the velocity of an outflow given the systemic redshift and the absorber redshift. b) Convert GALEX flux to units of 10^{-15} ergs/s/cm^2/Angstrom. 
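As an illustration of the velocity calculator described in item 5a above, the following short sketch applies the standard relativistic Doppler relation between the systemic and absorber redshifts. It is a hypothetical example, not the authors' on-line Java tool, and the example redshifts are placeholders.

```python
# Hypothetical sketch (not the authors' tool): outflow velocity from the systemic and
# absorber redshifts via the standard special-relativistic Doppler relation.
C_KM_S = 299_792.458  # speed of light in km/s

def outflow_velocity(z_systemic: float, z_absorber: float) -> float:
    """Outflow velocity in km/s (positive = blueshifted relative to the quasar frame)."""
    r = (1.0 + z_absorber) / (1.0 + z_systemic)
    beta = (1.0 - r**2) / (1.0 + r**2)
    return beta * C_KM_S

# Example: an absorber at z = 2.050 toward a quasar with systemic z = 2.100
print(f"{outflow_velocity(2.100, 2.050):.0f} km/s")  # roughly 4900 km/s toward us
```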
Speaker: Carter Chamberlain (Virginia Tech) • 18:00 Open and Solved Elementary Questions in Astronomy 2h Some school scientific problems are posed: 1) Let's consider a tunnel getting from one side to the other of a planet and passing through the planet center. An object is dropped into the tunnel. Is the object oscillating about the center as a pendulum? What happens if the tunnel gets from a side to another side of the planet but doesn't pass through the planet center, would the midpoint of the tunnel play a similar role as the planet center? How will Coriolis force influence this? 2) Is it possible to accelerate a photon (or another particle traveling at, let's say, 0.999c) and thus to get a speed greater than c? Speaker: Florentin Smarandache (University of New Mexico) • 18:00 Pac-Man: Lock and Key Colloid Particles 2h The lock and key models using Pac-man particles is an alternative identification mechanism for directing the assembly of combined structures. The system was guided by Fischer's lock-and-key principle which consisted of colloidal spheres as keys and monodisperse colloidal particles with a spherical cavity as locks that bind. What makes this so specific is the fact that the assembly is controlled by how closely the size of a spherical colloidal key particle matches the radius of the spherical cavity of the lock particle. Viscosity measurements were also looked at because nano-particles are known to change the resistance of the fluid. Speaker: Ashley Taylor (Winston Salem State University) • 18:00 Periodicity of the Benjamin-Feir Instability and Linear Superposition 2h Freak waves are waves of great height that appear out of nowhere from otherwise ordinary, if rough, seas. The steepness of these waves can cause an enormous amount of damage to ships and oil platforms. Understanding the cause of freak waves will help us to predict dangerous conditions, and engineer structures better able to withstand such waves. A number of mechanisms have been studied as the source of freak waves, including linear focusing, refraction of waves through a current field, and nonlinear effects. The Benjamin-Feir instability solves the nonlinear Schrodinger equation when a carrier band of frequency omega_0 is perturbed by sidebands of omega_0 +/- Delta omega. These solutions are periodic, or "breather," solutions under the condition that Delta omega < omega ka\sqrt{2}, where ka is the wave steepness determined by k, the wavenumber, and a, the wave amplitude. In this poster, we will compare the period of these breather solutions with the period of the envelope of the linear superposition of the same carrier wave and sideband perturbations using MatLab movies. Speaker: Justin Cutrer (Xavier University of Louisiana) • 18:00 Photon diffraction 2h A particle model of light that exhibited wave--like behavior was proposed at SESAPS log. No. SES09-2009-000064. The model combined the Bohm interpretation with the Scalar Potential Model (SPM) of photons. The model simulation is expanded with a slight modification to allow for different color photons through a single slit experiment, Young's experiment, and coherent light from large distances. Speaker: John Hodge (Blue Ridge Community College) • 18:00 Physical properties of unacetylated chromatin as examined by magnetic tweezers 2h As the source of genetic material, DNA is involved in a variety of biological processes like transcription, cell replication, and more. 
In these processes, DNA is manipulated into different structures and is subjected to different levels of physical force on a molecular scale. When tension is applied to one hierarchical structure called chromatin, it appears to behave like a Hookean spring. The base component of chromatin is a nucleosome, which is constructed when DNA coils around octamers of histone proteins. The histones can become acetylated---a chemical process in which an acetyl functional group attaches to amino acids of the histones, often lysines. Acetylation may loosen chromatin's coils and therefore lower the amount of tension required to stretch the chromatin. Comparing the levels of tension required to stretch acetylated chromatin could reveal, directly, physical differences in the chromatin fiber that bear on the function of the DNA molecule. The work presented will be the investigation of unacetylated chromatin.
Speaker: Kerry McGill (North Georgia College and State University)
• 18:00 Radio Detection of Neutron Star Binary Mergers 2h
Neutron star binary systems lose energy through gravitational radiation, and eventually merge. The gravitational radiation from the merger can be detected by the Laser Interferometer Gravitational-Wave Observatory (LIGO). It is expected that a transient radio pulse will also be produced during the merger event. Detection of such radio transients would allow LIGO to search for signals within constrained time periods. We calculate the detection rate of transient events from neutron star binary mergers for the Long Wavelength Array (LWA-1) and the Eight-meter-wavelength Transient Array.
Speaker: Brandon Bear (Virginia Tech)
• 18:00 Searching for Low-Frequency Radio Transients from Supernovae 2h
Supernova events may be accompanied by prompt emission of a low-frequency electromagnetic transient. These transient events are created by the interaction of a shock wave of charged particles created by SN core-collapse with a star's ambient magnetic field. Such events can be detected with low-frequency radio arrays. Here we discuss an ongoing search for such events using two radio arrays: the Long Wavelength Array (LWA) and Eight-meter-wavelength Transient Array (ETA).
Speaker: Jr-Wei Tsai (Virginia Tech)
• 18:00 Studies of Diamond Pixel Detectors for CMS at LHC 2h
Single-crystalline diamond detectors are radiation hard and conduct heat very well, which makes them an ideal choice for particle tracking devices close to the LHC beam. As a first application they will be used in a luminosity telescope (PLT) that is scheduled to be inserted into the CMS detector in 2012. This summer, several diamond detectors have been bump-bonded to the readout chip of the CMS silicon detector and their detection characteristics have been tested in the 150 GeV pion beam of CERN's SPS, with and without a 3 Tesla magnetic field. The poster will introduce the luminosity telescope and present results from the beam test.
Speaker: Thomas Robacker (University of Tennessee)
• 18:00 Studies of the Performance of Radiation Hard GaAs Photodetectors 2h
Speaker: Joseph Goodell (University of Virginia)
• 18:00 Study of the Sensitivity of Plastic Scintillators to Fast Neutrons 2h
The Mu2e experiment at Fermilab plans to use a two-out-of-three coincidence requirement in a plastic-scintillator-based detector to veto cosmic ray events. This veto system must operate efficiently in a high-radiation environment.
In this investigation, three plastic scintillator bars containing wavelength-shifting fibers represent the veto system. These bars were placed together, in series, in front of a deuterium-deuterium neutron generator, which produced fast neutrons of approximately 2.8MeV, in order to study the sensitivity of the plastic scintillators to fast neutrons. Multi-anode photomultiplier tubes read out the light from the fibers. The collected data was analyzed to determine the rate of interaction, approximate amount of energy deposited, and numerous other aspects of the neutrons' interactions. The rate of coincidental and correlated hits in multiple scintillator bars was the primary reason for the investigation, in order to understand the sensitivity of the plastic scintillators to fast neutrons. Speaker: David Abbott (University of Virginia) • 18:00 Superconducting Properties of Nb/Mo Bilayers 2h We studied various electrical properties of Nb/Mo bilayer films at low temperatures as a function of layer proportions with series varying both Nb and Mo (eg. holding Nb constant at 30nm with Mo ranging from 10 to 40 nm). After growing multiple series of Nb/Mo bilayers on silicon substrates at different configurations through magnetron sputtering, the samples were cooled to ~6K, where we explored their critical fields (H_{c2}) at low field strengths. Critical fields were measured using both resistive and inductive measurements on the samples under the influence of a magnetic field ranging from 0 to 120 Gauss. We also look at how the transition temperature of the films (T_c) vary with Nb and Mo layer thicknesses. We will compare our findings to the proximity effect theory for the T_c of thin film bilayers. We will also contrast the linearity of our resistive H_{c2} vs T data fits with the non-linearity of our inductive H_{c2} vs T plots. Speaker: James Veldhorst (Covenant College) • 18:00 Testing General Relativity at Cosmological Scales using ISiTGR 2h With the plethora of incoming and future cosmological data, the testing of general relativity at cosmological scales has become a possible and timely endeavor. It is not only motivated by the pressing question of cosmic acceleration but also by the proposals of some extensions to general relativity that would manifest themselves at large scales of distance. To test the consistency of current and future data with general relativity, we introduce the package: ISiTGR, Integrated Software in Testing General Relativity, an integrated set of modified modules for the publicly available packages CosmoMC and CAMB, including a modified version of the ISW-galaxy cross correlation module of Ho et al and a new weak lensing likelihood module for the refined HST-COSMOS weak gravitational lensing tomography data. We provide the equations for the parameterized modified growth equations and their evolution. We implement a functional form approach, a binning approach, as well as a new hybrid approach to evolve the modified gravity parameters in redshift (time) and scale. Examples calculating current constraints on modified gravity parameters are given for illustration and showing again that current data is consistent with general relativity. Speaker: Jacob Moldenhauer (Francis Marion University) • 18:00 The Arcminute Morphology of the WIM Toward the Local Perseus Arm of the Galaxy 2h We used the Virginia Tech Spectral-Line Imaging Camera (SLIC) to image the warm ionized interstellar medium (WIM) toward the Local Perseus Arm. 
We obtained a series of images, each of which is 10 degrees wide and has arcminute resolution. The images show three basic types of structures --- compact clouds with diameters greater than several degrees, those that are 1 degree or less in diameter, and extended filaments which span several degrees in length but have thicknesses of only a few tens of arcminutes. The data show that [S II]/H-alpha ratios are, on average, nearly six times higher in the filaments than in the clouds, which indicates that emission from collisionally excited, singly-ionized S^+ is the dominant emission source within the filaments. In clouds, the lower [S II]/H-alpha values are evidence that the H-alpha recombination line of photoionized hydrogen dominates.
Speaker: Phillip Nelson (Virginia Tech)
• 18:00 The Coffee and Cream Dilemma 2h
Many coffee drinkers take cream with their coffee and often wonder whether to add the cream earlier or later. With the objective of keeping their coffee as hot as possible over a moderate time period (10-15 minutes), this is a question that most of them can never answer definitively. We investigated this problem empirically using hot and cold water, with special emphasis on the calorimetry of the mixture. Assuming a coffee:cream (hot:cold) ratio of 3:1, we began with two identical styrofoam coffee cups containing hot water and then added cold water at t = 200 s in one cup and t = 700 s in the other cup. Using two Vernier temperature probes to simultaneously track the temperature change during the cool-down period of the water in both cups over Delta t = 1000 s, we obtained a real-time graphical account of which process achieved the higher temperature over this time period. In addition, the effect of evaporation was explored by comparing trials with and without a lid on the coffee cup. The application of Newton's Law of Cooling, as compared to the graphical temperature data acquired, will leave no doubt as to the best strategy for adding cool cream to hot coffee.
Speaker: Brandon Minor (George Washington University)
• 18:00 Towards Modeling Self-Consistent Core Collapse Supernovae 2h
Core-collapse supernovae (CCSN) are multi-dimensional events and the codes we develop to model them must follow suit. Our group at the Oak Ridge National Lab has successfully generated self-consistent explosions in 2D of 12-25 solar mass stars using our code CHIMERA. This code is made up of three essentially independent parts designed to evolve the stellar gas hydrodynamics (VH1/MVH3), the "ray-by-ray-plus" multi-group neutrino transport (MGFLD-TRANS), and the nuclear kinetics (XNET). Incorporation of passive tracer particles, for post-processing nucleosynthesis, allows us to explore effects that stem from anisotropies, instabilities, and mixing. An extension of our alpha-nuclear network to 150 species has enabled us to identify nuclear processes such as the nu-p process and better follow the neutronization during the explosion. These advances also allow us to investigate lower mass limit O-Ne-Mg CCSN and possible sites for the production of weak r-process elements. In this poster, we will present results of these efforts.
Speaker: Merek Chertkow (Oak Ridge National Laboratory / University of Tennessee at Knoxville)
• 18:00 Two definitions for genders 2h
By my definition, man and woman are the same fact to say. So man and woman have the same thinkings and same existence. But when I say again for man and woman, they are different for sex as the two different persons.
They are different each two persons. As an example, by quantum, sex and color is different (the same existence and also different kind with quantum way-push and pull at the same time), also they are the same as they are our ID (hormones) and also dream matter. The same way, I hope we go to heaven and god will say you are the truth like it to be after the end of the world. I wish man and woman are different as it is more fun.
Speaker: Philip Shin
• 18:00 Upconversion Studies of Er3+ Doped into Low Phonon-Energy Hosts KPb2Cl5 and KPb2Br5 via 0.97 um and 1.5 um Laser Excitation 2h
A comparative study of the wavelength dependence of the Er3+ upconversion in low phonon-energy hosts KPb2Cl5 and KPb2Br5 will be presented. Initial measurements indicate that visible and infrared upconversion was generated under 0.97 um and 1.5 um laser excitation. Using time-resolved emission, spectral emission, and spectral absorption data, the dominant upconversion mechanisms involving excited-state absorption and/or energy transfer were investigated. In addition, special emphasis was placed on a comparative study of the detrimental effects of upconversion under resonant pumping conditions (1.5 um) for possible applications in the eye-safe wavelength (1.5 -- 1.6 um) region.
Speaker: A. Bluiett (Elizabeth City State University)
• 18:00 Use of Spray Adhesives for the Manufacture of 3-D Capillary Origami Microstructures 2h
The method of "capillary origami"---using the surface tension of an evaporating water droplet to fold a flexible membrane into a 3-D polyhedron, as investigated by Py et al.---has shown promise as a way to create fully 3-D microstructures. However, the origami re-opens past a critical evaporation point, and previous attempts to prevent this re-opening have proven to be expensive and time-consuming. We therefore investigated the use of various spray adhesives in keeping these origami microstructures closed. Three characteristics were measured: efficiency, tackiness, and strength of the adhesive. Measurements of these three characteristics point to 3M Super 77 Spray Adhesive as an optimal adhesive for spraying microstructures. Furthermore, we designed a new method to measure adhesive strength by using an analytical balance to measure force applied by a micrometer to a microstructure. We also developed novel procedures to create uniformly sized microstructures and to accelerate the folding process, all of which improve upon the original capillary origami method. These novel procedures, combined with measurements that indicate 3M Super 77 as an optimum adhesive, suggest a potential method for the mass production of truly 3-D microstructures. Py, Charlotte, et al., "Capillary origami: Spontaneous wrapping of a droplet with an elastic sheet," Physical Review Letters 98, 156103 (2007).
Speaker: Mithi de los Reyes (North Carolina School of Science and Mathematics)
• 18:00 {nat}Dy(gamma,n) Asymmetry Measurements with Linearly Polarized gamma-rays between 11 and 15 MeV 2h
The linearly polarized photon beam at the High Intensity gamma-ray Source (HIgS) was used to study neutron emission from a {nat}Hg target at energies of 11, 12, 13, 14, and 15 MeV. Twelve liquid scintillator detectors were placed at polar angles of 55 degrees, 90 degrees and 125 degrees and at azimuthal angles of phi=0 degrees, 90 degrees, 180 degrees, 270 degrees. Six more detectors were placed at polar angles of 72 degrees, 107 degrees, and 142 degrees at phi=0 degrees and 90 degrees.
The ratio of neutron yields parallel to neutron yields perpendicular to the plane of polarization were determined as a function of E_gamma, E_n, and theta. Results will be discussed. Speaker: W. R. Henderson (James Madison University) • 20:00 22:00 MA. Banquet 2h Shenandoah Room ### Shenandoah Room #### Hotel Roanoke, Roanoke VA Ronald Mickens (Clark Atlanta University) will speak about his book on the life of Edward Bouchet, the first African American to receive a Ph.D. in physics from a US institution (1876). Presentation of the Beams, Pegram, and Slack awards will be made at the banquet. • Saturday, 22 October • 08:00 08:30 Registration (from 8:00 to 10:00 am) 30m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 08:30 10:30 NA. Opportunities at National Labs and User Facilities in the SESAPS Area Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Laurie McNeil (University of North Carolina at Chapel Hill) • 08:30 Scientific user facilities at Oak Ridge National Laboratory: New research capabilities and opportunities 30m Over the past decade, Oak Ridge National Laboratory (ORNL) has transformed its research infrastructure, particularly in the areas of neutron scattering, nanoscale science and technology, and high-performance computing. New facilities, including the Spallation Neutron Source, Center for Nanophase Materials Sciences, and Leadership Computing Facility, have been constructed that provide world-leading capabilities in neutron science, condensed matter and materials physics, and computational physics. In addition, many existing physics-related facilities have been upgraded with new capabilities, including new instruments and a high- intensity cold neutron source at the High Flux Isotope Reactor. These facilities are operated for the scientific community and are available to qualified users based on competitive peer-reviewed proposals. User facilities at ORNL currently welcome more than 2,500 researchers each year, mostly from universities. These facilities, many of which are unique in the world, will be reviewed including current and planned research capabilities, availability and operational performance, access procedures, and recent research results. Particular attention will be given to new neutron scattering capabilities, nanoscale science, and petascale simulation and modeling. In addition, user facilities provide a portal into ORNL that can enhance the development of research collaborations. The spectrum of partnership opportunities with ORNL will be described including collaborations, joint faculty, and graduate research and education. Speaker: James Roberto (Oak Ridge National Laboratory) • 09:00 Grand Challenges in Science and the Opportunities Afforded by DOE's New X-ray Laser Project 30m The National Academy of Sciences, Department of Energy Office of Science and National Science Foundation have recently defined a set of scientific "Grand Challenges" for the 21st Century. DOE's interest is a secure and sustainable energy future in a clean environment. Addressing many of the challenges will require an X-ray laser - a coherent ultra-bright light source whose wavelength is of atomic dimensions. The machine will cost $1-2B, and will be based on technology developed at Jefferson Lab. In this talk we will address the science motivating the X-ray laser, will describe the physics and nature of the source itself, and talk about JLab's Free Electron Laser program and Virginia's potential role in this project. Speaker: Prof. 
Gwyn Williams (Jefferson Lab)
• 09:30 Kimballton Underground Research Facility 30m
A new deep underground research facility is open and operating only 30 minutes from the Virginia Tech campus. It is located in an operating limestone mine, and has drive-in access (e.g., roll-back truck, motor coach), over 50 miles of drifts (all 40' x 20' x 100'; the current lab is 35'x100'x22'), and lies under a 1700' overburden. The laboratory was built in 2007 and offers fiber optic internet, LN2, 480/220/110 V power, ample water, filtered air, 55 F constant temp, low Rn levels, low rock background activity, and a muon flux of only ~0.004 muons per square meter per second per steradian. There are currently six projects using the facility: mini-LENS - Low Energy Neutrino Spectroscopy (Virginia Tech, Louisiana State University, BNL); Neutron Spectrometer (University of Maryland, NIST); Double Beta Decay to Excited States (Duke University); HPGe Low-Background Screening (North Carolina State University, University of North Carolina, Virginia Tech); MALBEK - Majorana neutrinoless double beta decay (University of North Carolina); Ar-39 Depleted Argon (Princeton University). I will summarize the current program and exciting plans for the future.
Speaker: R. Bruce Vogelaar (Virginia Tech)
• 08:30 10:30 NB. Particle Physics II Crystal Ballroom B
### Crystal Ballroom B
#### Hotel Roanoke, Roanoke VA
Convener: Craig Group (University of Virginia)
• 08:30 Search for a Fourth Generation t' Quark via Wb Decays into a Lepton Plus Jets Final State in 7 TeV pp Collisions 12m
The CMS Experiment at the LHC is currently observing 7 TeV center-of-mass energy pp collisions. One of the many beyond-the-standard-model searches being conducted by CMS is for evidence of a fourth generation top-like quark (t'). If this object exists, it is expected to decay as: t' -> W b. In pp collisions the top-like quark would be produced with its anti-quark (pp -> t' tbar' -> W+ b W- bbar). This search looks for this decay where one of the W bosons decays leptonically (W -> lepton neutrino) and the other hadronically (W -> q qbar). This analysis studies two channels: muon+jets and electron+jets. Results from a sample of 684 pb-1 muon+jets and 573 pb-1 electron+jets will be presented.
Speaker: Charles Jenkins (University of South Alabama)
• 08:42 Techniques for Higgs Hunting at D0 12m
This talk will discuss several techniques employed to increase sensitivities in searches for the standard model Higgs boson at the D0 Experiment, including kinematic fits, matrix element methods, and kinematically motivated divisions of data. Examples from recent data analysis work will be presented.
Speaker: Huong Nguyen (University of Virginia)
• 08:54 Angular Distribution of Z0 Bosons in Z0+Jet Events 12m
The Z0 boson center-of-mass angular distribution is measured in proton-proton collisions at sqrt{s} = 7 TeV at the CERN LHC. The advantage of studying the angular distribution is that the partonic cross section is solely a function of s-hat and cos(theta-hat); it does not depend on the details of the parton distribution functions. The data sample, recorded with the CMS detector, corresponds to an integrated luminosity of approximately 36 pb^{-1}. Events in which there is a Z0 and at least one jet, with a transverse momentum threshold of 20 GeV and absolute rapidity less than 2.5, are selected for this analysis. Only the Z0's muon decay channel is studied.
Within experimental and theoretical uncertainties, the measured angular distribution is in agreement with next-to-leading order perturbative QCD predictions. This analysis extends the phase space available to previous Tevatron studies by probing larger values of s-hat and center-of-mass rapidities.
Speaker: Luis Lebolo (Florida International University)
• 09:06 Identifying Electromagnetic Events in the Forward Hadron Calorimeter 12m
The Forward Hadron Calorimeter (HF) of the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) lies in a region not covered by an inner tracking system, and we can rely only on the shapes of showers that hit the HF to determine whether or not they are due to electromagnetic particles. We review the current method of distinguishing shower types in the HF, and we bring attention to a drawback that will become present as the luminosity of the LHC increases and creates a need for tighter shower-shape cuts. We provide a method to correct this drawback, and we analyze the effectiveness of various tight cuts at isolating signal from background.
Speaker: Christopher Frye (University of Central Florida)
• 09:18 Precision Measurement of anti-B0bar -> D*+ Lepton Neutrino Branching Fraction 12m
We present a precision measurement of the exclusive anti-B0 meson decays to D*+, lepton, and anti-neutrino using 476 million B-meson anti-B-meson pairs. The data sample was collected with the BABAR detector at the PEP-II asymmetric-energy B-Factory at SLAC National Accelerator Laboratory. The anti-B0 mesons are reconstructed using a partial reconstruction in which the D* four-momentum is inferred from the slow pion. This allows for a much higher statistical precision on this branching fraction. We use a single- and double-tag method to measure this important branching fraction.
Speaker: Christopher Buchanan (University of South Alabama)
• 09:30 CP Violation in B Decays at BABAR 12m
We report on the study of the decay B+ to D0(D0bar) K+, where the D0 or D0bar meson decays to Kpipi or Kpipi0, using the Atwood, Dunietz, and Soni (ADS) and Gronau, London, and Wyler (GLW) methods. We measure the ratios Rads, R+, and R-; since the processes B+ to D0barK+ and B+ to D0K+ are proportional to Vcb and Vub, respectively, these ratios are sensitive to rB and to the weak phase angle gamma.
Speaker: Romulus Godang (University of South Alabama)
• 09:42 Measurement of the Branching Fraction of Y(4S) to Neutral B Pairs 12m
We present a model-independent measurement of the branching fraction of Upsilon(4S) to neutral B pairs. We use a sample of 476 million B-meson anti-B-meson pairs collected at the Upsilon(4S) resonance with the BABAR detector at the PEP-II asymmetric-energy B-Factory at SLAC National Accelerator Laboratory. The B mesons are reconstructed through the channel of anti-B0 decays to D*+ lepton anti-neutrino using a partial reconstruction method. Our result does not depend on any branching fraction, on the reconstruction efficiency, or on the ratio of charged to neutral B mesons. This measurement is an important input for normalizing many B meson decays.
Speaker: Rafi Qumsieh (University of South Alabama)
• 09:54 Angular Distribution of Photons in gamma+Jet Events 12m
The angular distribution of prompt photons in events with at least one jet in the center-of-mass frame for pp collisions at sqrt{s} = 7 TeV is presented. A template method is used to distinguish between signal and the dominant background from jets fragmenting into neutral mesons.
Measuring the angular distribution is a direct probe of the partonic cross section for prompt photon production and is free of the parton distribution functions that are normally associated with an inclusive cross section measurement typically used for next-to-leading order predictions. The |eta-hat| distribution in the center-of-mass frame ranging from 0 to 2.1 (|cos(theta-hat)| from 0 to 0.97) is examined and compared to next-to-leading order QCD predictions, the highest angular limit reached since the last measurement of angular distributions nearly a decade ago.
Speaker: Vanessa Gaultney (Florida International University)
• 08:30 10:30 NC. Nanoscale Optics Crystal Ballroom C
### Crystal Ballroom C
#### Hotel Roanoke, Roanoke VA
Convener: Richard Haglund (Vanderbilt University)
• 08:30 Nanoplasmonics and Metamaterials with Low Loss and Gain 30m
Nanoplasmonics and Metamaterials have become an important research topic because of their interesting physics and exciting potential applications, ranging from sensing and biomedicine to nanoscopic imaging and information technology. However, many applications are hindered by one common cause -- absorption loss in metal. We have shown that surface plasmon loss can be overcome by modifying the surface structure of the metal and can also be compensated with optical gain. We have also observed the stimulated emission of surface plasmons and demonstrated a spaser (nanolaser) supported by a localized surface plasmon. We have also studied non-metallic metamaterials, including semiconductors and laser dyes, which do not suffer from the damping loss of metals. We have shown that indium tin oxide (ITO) is more suitable than Au for nanoplasmonic applications in the infrared range. We have also shown that highly concentrated laser dyes exhibit negative real parts of the electric permittivity, and their dielectric functions can be controlled by laser illumination. Without a doubt, such materials can revolutionize the technological fields of nanoplasmonics and metamaterials.
Speaker: Guohua Zhu (Norfolk State University)
• 09:00 Pattern Transfer Nanomanufacturing 30m
We report programmed fluidic assembly of ~12 nm diameter Fe3O4 nanoparticles into hierarchically-patterned architectures using the confined magnetic fields that are emitted from transitions written onto magnetic disk drive media. When combined with a controlled external field, our approach yields both laterally-programmed assemblies of nanoparticles over cm length scales and vertically-programmed periodic topography. After assembly, the 3D arrays of nanoparticles are transferred to a flexible and transparent polymer film by spin-coating and peeling. We determine the total transferred magnetic moment as a function of nanoparticle concentration and exposure time, and explain the variation in moment for low concentrations using a simple hydrodynamic model. However, this model is insufficient to explain the transferred moment at higher concentrations, likely because of the combination of dynamically-changing fields during assembly and field-shielding near the medium surface, both of which will play an enhanced role at higher nanoparticle concentrations. We will discuss potential applications of this technology for creating optoelectronic and biomedical devices.
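The low-concentration behavior mentioned above can be illustrated with a back-of-the-envelope sketch. The snippet below is hypothetical and is not the speaker's hydrodynamic model; it simply assumes that particles drift to the medium surface at a velocity set by balancing an assumed magnetic force against Stokes drag, so the collected moment is linear in both concentration and exposure time. All parameter values are placeholders.

```python
# Hypothetical back-of-the-envelope sketch (not the speaker's model): estimate how the
# transferred magnetic moment could scale with nanoparticle concentration and exposure
# time if particles simply drift to the medium surface at a velocity set by balancing an
# assumed magnetic force against Stokes drag. All numbers are illustrative placeholders.
import math

def transferred_moment(concentration_per_m3, exposure_time_s,
                       particle_radius_m=6e-9,         # ~12 nm diameter Fe3O4
                       moment_per_particle_Am2=2e-19,  # assumed particle moment
                       magnetic_force_N=1e-15,         # assumed near-surface force
                       viscosity_Pa_s=1e-3,            # water
                       area_m2=1e-4):                  # 1 cm^2 of patterned medium
    """Total moment (A m^2) collected on `area_m2` of the medium."""
    # Stokes drag balance: F = 6 * pi * eta * r * v  ->  drift velocity v
    v_drift = magnetic_force_N / (6 * math.pi * viscosity_Pa_s * particle_radius_m)
    # particles swept onto the surface per unit area in time t
    areal_density = concentration_per_m3 * v_drift * exposure_time_s
    return areal_density * area_m2 * moment_per_particle_Am2

# Linear in both concentration and time in this simple picture:
print(transferred_moment(1e18, 60.0))
print(transferred_moment(2e18, 60.0))  # doubling the concentration doubles the moment
```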
Speaker: Thomas Crawford (University of South Carolina) • 09:30 High Surface Area Vertically Aligned Metal Oxide Nanostructures for Dye-Sensitized Photoanodes by Pulsed Laser Deposition 30m Dye Sensitized Solar Cells (DSSCs) differ from conventional semiconductor devices in that they separate the function of light absorption from charge carrier transport. At the heart of a DSSC is a metal oxide nanoparticle film, which provides a large effective surface area for adsorption of light harvesting molecules. The films need to be thick enough to absorb a significant fraction of the incident light but increased thickness results in diminished efficiencies due to augmented recombination. Here we introduce a new structural motif for the photoanode in which the traditional random nanoparticle oxide network is replaced by vertically aligned bundles of oxide nanocrystals. The direct pathways provided by the vertical structures appear to provide for enhanced collection efficiency for carriers generated throughout the device. The fabrication method is materials agnostic as similar structures will be shown in Nb2O5, TiO2 and SrTiO3. Speaker: Rene Lopez (University of North Carolina at Chapel Hill) • 10:00 Enhanced nonlinear optics and other applications of resonant plasmonics 30m Surface plasmon resonances tend to concentrate the electromagnetic field intensity by several orders of magnitude within nanometer scale hotspots located at sharp corners or inside narrow gaps in the structure. This phenomenon can be used to enhance a number of different effects, such as Raman scattering, fluorescence efficiency and photochemical reactions. This talk will give an overview of some of our recent work in this area, focusing on using plasmons to enhance the second harmonic generation (SHG) from nonlinear optical films. In particular, we have shown that the addition of plasmonic nanoparticles to such a film can increase the SHG emission as much as 2000 times. We have applied this idea to SHG generation in tapered optical fiber, where we obtain quasi-phase matching by patterning the deposition of metal nanoparticles onto the otherwise uniform nonlinear film that coats the fiber. I will also discuss our recent work on plasmonically enhanced nonlinear microscopy and plasmon enhanced photovoltaics. Speaker: Hans Robinson (Virginia Tech) • 08:30 10:30 ND. Astronomy Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Leo Piilonen (Virginia Tech) • 08:30 Gravitational Wave Astronomy and Astrophysics: A Status Report 30m The LIGO, GEO and Virgo gravitational wave detectors have collected a few years of data with good sensitivity and have carried out searches for several types of gravitational-wave signals. I will highlight a few search results obtained so far which shed light on plausible astrophysical sources. The detectors are currently undergoing major upgrades and will run again as Advanced LIGO and Advanced Virgo beginning around 2015. I will describe several areas of astrophysics which will be opened up by the future data. Speaker: Peter Shawhan (University of Maryland) • 09:00 A Multi-Messenger Search for Radio Transients and Gravitational Waves 30m The sensitivity of gravitational waves searches could be improved by coincident observation of electromagnetic signals from expected gravitational wave sources. One possibility is using low-frequency radio transients to trigger and constrain searches for gravitational wave signals. 
Both are all-sky observations with a number of common sources, and low frequency observations are able to provide spatial and temporal constraints to the search for gravitational wave signals. There is also the added benefit that coincident low-frequency radio and gravitational spectra will allow for more in-depth study of astrophysical events and processes than otherwise possible. In this talk I will layout the case for using low-frequency radio observations to trigger and constrain searches for coincident gravitational wave signals. Common sources and potential ways the joint observation of low-frequency radio and gravitational waves can enhance our understanding of the physics behind these sources will be addressed. Speaker: Michael Kavic (Long Island University) • 09:30 A Precision Test for an Extra Spatial Dimension Using Black-Hole---Pulsar Binary Systems 30m Given the difficulties in testing current frontier physics ideas in earth-based experiments, we might profitably look to the cosmos for observational tests. I will discuss observations that could set a limit on the size of a warped extra spatial dimension in the braneworld scenario. The observations would be similar to those that provided evidence of gravitational radiation by the binary pulsar B1913+16. In the presence of a warped extra spatial dimension a stellar mass black hole will evaporate at a sufficiently high rate to produce an observable orbital effect in a black-hole---pulsar binary system. For some masses and orbital parameters the binary components will outspiral, the opposite of the behavior due to energy loss by gravitational radiation alone. Observations of a black-hole---pulsar system could set considerably better limits on size of the extra dimension in these braneworld models than could be determined by torsion-balance gravity experiments in the foreseeable future. Speaker: John Simonetti (Virginia Tech) • 10:00 Kinetic Luminosity of Quasar Outflows and its Implications to AGN Feedback 30m Sub-relativistic outflows are seen as blueshifted absorption troughs in the spectra of roughly one third of all quasars. I will describe how we determine the mass flux and kinetic luminosity of these outflows and show that the derived values suggest that absorption outflows may be a main agent of AGN feedback scenarios. Speaker: Nahum Arav (Virginia Tech) • 10:30 10:45 Coffee / Refreshments 15m Roanoke Foyer ### Roanoke Foyer #### Hotel Roanoke, Roanoke VA • 10:45 12:45 PA. Physics Education Crystal Ballroom A ### Crystal Ballroom A #### Hotel Roanoke, Roanoke VA Convener: Gerald Feldman (George Washington University) • 10:57 What is the purpose of undergraduate physics labs? 12m In recent years, enrollment in undergraduate physics courses at NC State has grown significantly, especially in introductory physics. Since most of these courses involve a laboratory component, the increased enrollment is leading to a shortage of laboratory space. Starting this spring NC State will implement kit labs in calculus-based mechanics labs. These kits will make it possible for students to have laboratory experiences outside of the standard lab rooms, decreasing space demands. During the implementation the kit labs will be evaluated with an instrument developed for this purpose. This paper discusses the first step of designing this instrument, determining what the specific goals and purposes of the labs are. 
Literature reviews have led to focus on three primary areas where students should make gains during lab: content knowledge, scientific process, and affect. Physics faculty members were surveyed to identify specific areas considered important for our labs. Using results from our survey and published literature we have developed a specific set of goals for our labs, and we are using this to guide the development of our assessment instrument. Speaker: William Sams (North Carolina State University) • 11:09 Remote Sensing: Radio Frequency Detection for High School Physics Students 12m In an effort to give high school students experience in real world science applications, we have partnered with Loranger High School in Loranger, LA to mentor 9 senior physics students in radio frequency electromagnetic detection. The effort consists of two projects: Mapping of 60 Hz noise around the Laser Interferometer Gravitational Wave Observatory (LIGO), and the construction of a 20 MHz radio telescope for observations of the Sun and Jupiter (Radio Jove, NASA). The results of the LIGO mapping will aid in strategies to reduce the 60 Hz line noise in the LIGO noise spectrum. The Radio Jove project will introduce students to the field of radio astronomy and give them better insight into the dynamic nature of large solar system objects. Both groups will work together in the early stages as they learn the basics of electromagnetic transmission and detection. The groups will document and report their progress regularly. The students will work under the supervision of three undergraduate mentors. Our program is designed to give them theoretical and practical knowledge in radiation and electronics. The students will learn how to design and test receiver in the lab and field settings. Speaker: Daniel Huggett (Southeastern Louisiana University) • 10:45 12:57 PB. Statistical and Nonlinear Physics II Crystal Ballroom B ### Crystal Ballroom B #### Hotel Roanoke, Roanoke VA Convener: Uwe Tauber (Virginia Tech) • 10:45 Effect of Diffusion on Size Distribution Dynamics of Desorption in KMC Simulations of a Lattice-Gas Model of Pulsed Electrodeposition 12m We have studied the effect of diffusion during the desorption phase in pulsed electrodeposition in a square lattice-gas model using Kinetic Monte Carlo simulations. The effect of diffusion on correlation length and size distribution during the desorption were studied. During the process, the correlation length increased up to a maximum and then decreased. We found that diffusion increase correlation length by small percentage in the regime where correlation length is decreasing, and increase it more significantly when the correlation length is increasing. By studying size distributions we found that diffusion tends to shrink large clusters and grow or create medium clusters. When the clusters growth or creation by diffusion is small, the increase of correlation length by diffusion is small and large otherwise. Speaker: Tjipto Juwono (Florida State University) • 10:57 Effect of the size distributions of magnetic nanoparticles on metastability across dynamic phase boundary 12m Recent experiments showed that magnetic nanoparticles have distributions of sizes and shapes, and that the distributions greatly influence static and dynamic properties of the nanoparticles. Therefore, it is critical to understand their properties as functions of the distributions. 
Previously, we studied an effect of particle size distributions on metastability in magnetization relaxation, using a spin S=1 Blume-Capel model, in the single-droplet regime where a critical droplet comprises a single flipped spin. The particle size distributions were simulated using distributions of magnetic anisotropy parameter D with spins fixed. We found that the lifetime of the metastable state is governed by the smallest particle or the particle with the smallest value of D in a given system. In this work, we present the effect of size distributions on metastability in the region where the values of D are distributed across the phase boundary between different critical droplets for constant D. Interesting phenomena may occur in this region because particles with low values of D expect different critical droplets from particles with high values of D in a given distribution of D. We examine magnetization relaxation in this region using kinetic Monte Carlo simulations for the spin S=1 Blume-Capel model. Speaker: Yoh Yamamoto (Virginia Tech) • 11:09 Non-equilibrium phases of the two-dimensional Ising model in contact with two heat baths 12m The equilibrium phase diagram of the two-dimensional Ising model in contact with a single heat bath is well understood. We here study the properties of the two-dimensional Ising model with conserved dynamics where the two halves of the system are in contact with different heat baths. Using Monte Carlo simulations, we identify three different phases for this non-equilibrium system, as a function of the aspect ratio of the lattice and of the temperatures. The first phase is characterized by the complete disorder of the particles, while the second phase is characterized by the complete order of the particles. The third phase is the most interesting one as it displays stripes with widths that depend on the system parameters. The full phase diagram of our non-equilibrium system is determined through the study of the structure factor. Speaker: Ms Linjun Li (Virginia Tech) • 11:21 Langevin Molecular Dynamics of Driven Magnetic Flux Lines 12m The characterization of type-II superconducting materials and their technological applications in external magnetic fields require a thorough understanding of the stationary and dynamical properties of vortex matter. The competition of repulsive interactions and attractive material defects renders the physics of externally driven magnetic flux lines very rich. We study the non-equilibrium steady states as well as transient relaxation properties of driven vortex lines in the presence of randomly distributed point pinning centers. We model the vortices as interacting elastic lines and employ a Langevin Molecular Dynamics (LMD) algorithm to extract steady-state and non-stationary time-dependent behavior. We compare the efficiency and accuracy of LMD to previously obtained Metropolis Monte Carlo steady-state force-velocity and gyration radius data. In future work we intend investigate the transient two-time height-height correlation and response functions. Speaker: Ulrich Dobramysl (Virginia Tech) • 11:33 An Approximation to the Periodic Solution of a Differential Equation of Abel 12m The Abel equation, in canonical form, is y' = sin t - y^3 (*) and corresponds to the singular (epsilon -> 0) limit of the nonlinear, forced oscillator epsilon y" + y' + y^3 = sin t, epsilon -> 0. (**) Equation (*) has the property that it has a unique periodic solution defined on (-infty, infty). 
Further, as t increases, all solutions are attracted into the strip |y| < 1 and any two different solutions y_1(t) and y_2(t) satisfy the condition Lim [y_1(t) - y_2(t)] = 0, (***) t -> infty and for t negatively decreasing, each solution, except for the periodic solution, becomes unbounded.[1] Our purpose is to calculate an approximation to the unique periodic solution of Eq. (*) using the method of harmonic balance. We also determine an estimation for the blow-up time of the non-periodic solutions. [1] U. Elias, American Mathematical Monthly, vol.115, (Feb. 2008), pps. 147-149. Speaker: Ronald Mickens (Clark Atlanta University) • 11:45 An Averaged-Separation of Variable Solution to the Burger Equation 12m The Burger Partial Differential Equation (PDE) provides a nonlinear model that incorporates several of the important properties of fluid behavior. However, no general solution to it is known for given arbitrary initial and/or boundary conditions. We propose a "new" method for determining approximations for the solutions. Our method combines the separation of variables technique, combined with an averaging over the space variable. A test of this procedure is made for the following problem, where u = u(x,t): 0 <= x <= 1, t > 0, u(0,t) = 0, u(1,t) = 0, u(x,0) = x(1-x), u_t + u u_x = D u_{xx}, where D is a non-negative parameter. The validity of the calculated solution is made by comparing it to an exact analytic solution, as well as an accurate numerical solution for the special case where D = 0. Speaker: Dr 'Kale Oyedeji (Morehouse College) • 11:57 A two-Lane model with anomalous slow dynamics 12m It is known that in one-dimensional equilibrium systems with short range interactions a phase transition cannot exist at finite, non-zero temperatures. However, far from equilibrium, one-dimensional systems with local interactions can exhibit a phase transition. The ABC model, a three species model defined on a chain characterized by non-symmetric exchanges between particles, is known to possess a non-equilibrium phase transition. This model exhibits anomalous slow dynamics that we investigate in some detail using two-time quantities. In addition we discuss an extension of this model to a case where this single lane is coupled to a one-dimensional particle bath. This coupling yields an additional phase transition that we discuss in some detail. Speaker: Daniel Linford (Virginia Tech) • 12:09 Degree-based graph construction and sampling 12m Network representation and modeling has been one of the most comprehensive ways to study many complex systems. However, the network describing the system frequently has to be built from incomplete connectivity data, a typical case being degree-based graph construction, when only the sequence of node degrees is available. In this presentation I will introduce problems and results related to the construction of all the possible graphs and sampling from the class of graphs with fixed degree-sequence. Firstly, for graph construction, I will present necessary and sufficient conditions for a sequence of integers to be realized as a simple graph's degree sequence under the condition that a specific set of connections from an arbitrary node are avoided. Secondly, by using this result, I will show how to provide an efficient, polynomial time algorithm that generates graph samples with a given degree sequence. Unlike other existing algorithms, this method always produces statistically independent samples, without back-tracking or rejections. 
Also, the algorithm provides the weight associated with each sample, allowing graph observables to be measured uniformly over the graph ensemble. Speaker: Hyunju Kim (Virginia Tech) • 12:21 A simple model for studying interacting networks 12m The characteristics of single networks, whether physical, biological or social, are well known. However, many of these networks function not only in isolation, but also coupled to each other. So far, little is known about such "interacting networks." Here, we consider two coupled systems, modeling social networks with a preferred number of friends. We first report on the (statistical) properties of the stationary state of a single network, which consists of a fixed set of nodes and a stochastically varying set of links (generated according to a preferred degree, kappa). Next, we investigate the effects of coupling two such networks (with different kappas) by various means. Findings using both analytic and simulation techniques will be presented and potential consequences for real networks will be discussed. Speaker: Wenjia Liu (Virginia Tech) • 12:33 Image Charge Optimization for the Reaction Field by Matching to an Electrostatic Force Tensor 12m A new image charge solvation model has recently been developed, which consists of a spherical cavity of explicit solvent embedded in a continuum dielectric medium. Inside the cavity, the dielectric constant is 1 and outside the cavity is set to 80. Although the discontinuity from 1 to 80 at the cavity interface creates large artifacts near the boundary, MD simulation using this model yields accurate results by incorporating a buffer layer containing imaged water. We generalized the model to reflect a continuously changing dielectric profile at the boundary, and optimized image charges for the reaction field based on electrostatic forces to minimize the buffer layer volume and reproduce the electrostatic force field associated with the dielectric properties of the model solvent. However, MD simulation suggests that the new model is unstable. Previously, we also showed that the reaction field has an order of magnitude stronger influence on the electrostatic torque compared to force on solvent water molecules. Therefore, we optimize the image charges in a different way, using a force tensor defined by a grid of dipoles, which places more constraints on the system. Speaker: Wei Song (University of North Carolina at Charlotte) • 12:45 Unusual criticality in a generalized XY model 12m We study the generalized XY model in two dimension, which has a term proportional to cos(2 theta) in addition to the normal XY Hamiltonian. This corresponds to having half vortices connected by solitons, as well as integer vortices. From both renormalization group analysis and Monte Carlo simulation using the worm algorithm, we find that the phase diagram includes Kosterlitz-Thouless transitions of half and integer vortices, together with an Ising transition. Remarkably, part of the Ising line is a direct transition from the quasi-long-ranged ordered state to the disordered state. Speaker: Yifei Shi (University of Virginia) • 10:45 13:21 PC. Condensed Matter Physics / Nanophysics II Crystal Ballroom C ### Crystal Ballroom C #### Hotel Roanoke, Roanoke VA Convener: Prof. Wilfredo Otano (University of Puerto Rico – Cayey) • 10:45 Determination of the Current Voltage Signatures of NanoGUMBOS 12m Tantamount to the realization of next generation nanoscale devices is the synthesis and characterization of new electronic materials. 
GUMBOS, or a Group of Uniform Materials Based on Organic Salts, represent a first-time synthesis of nanoscale material composed of ionic liquid species in the frozen (solid) state whose electronic characteristics are indicative of potential future application to device electronics. Using a Keithley 4200 semiconductor characterization system, we have examined the nanoscale conductivity and current-voltage (I-V) characteristics of GUMBOS nanowires under both aqueous and "dry" conditions. Just as nanoGUMBOS are new materials in the realm of ionic liquid research, our I-V measurements are a first-time characterization of this species of nanostructures. Speaker: Kalyan Kanakamedala (Louisiana State University) • 10:57 Characterization of NanoGUMBOS Using Conductive Probe Atomic Force Microscopy 12m In our work on hybrid (organic-inorganic) electronic materials (HEMs), we have developed a reasonably facile method for characterizing GUMBOS or a Group of Uniform Materials Based on Organic Salts. In addition to the versatility of traditional ionic liquids (i.e.-solubility, melting point, viscosity), nanoGUMBOS are functionalizable to exhibit properties such as fluorescence, magnetic susceptibility, and even antimicrobial activity. However, given our interest in the electrical properties of HEMs, we have made first-time measurements of nanoGUMBOS, using CP-AFM, in order to deduce their room temperature current-voltage characteristics. In conjunction with the nanoscale imaging of AFM alone, we have observed both the morphology and conductivity of these unique materials. Our results bode well for combining GUMBOS with substrates of more traditional materials, such as metals or semiconductors, to serve as the basis for future HEMs-based devices. Speaker: Naveen Jagadish (Louisiana State University) • 11:09 Negative coefficient of thermal expansion in (epoxy resin)/(zirconium tungstate) nanocomposites 12m The alpha-phase of zirconium tungstate (Zr W_2 O_8) has the remarkable property that its coefficient of thermal expansion (CTE) takes on a nearly constant negative value throughout its entire range of thermal stability (0 -- 1050 K). Composites of Zr W_2 O_8 nanoparticles and polymer resins have a reduced CTE compared to the pure polymer, but previous work has been restricted to measurements near room temperature. We show that the CTE of such composites can take on increasingly negative values as the temperature is lowered to cryogenic values. We used this phenomenon to fabricate a metal-free all-optical cryogenic temperature sensor by coating a fiber optic Bragg grating with the nanocomposite. This sensor has a sensitivity at 2 K that is at least six time better than any previous fiber-optic temperature sensor at this temperature. Speaker: Erich See (Virginia Tech) • 11:21 Towed-grid system for production and calorimetric study of homogenous quantum turbulence 12m The decay of quantum turbulence is not fully understood in superfluid helium at milikelvin temperatures where the viscous normal component is absent. Vibrating grid experiments performed periously produced inhomogeneous turbulence, making the results hard to interpret. We have developed experimental methods to produce homogeneous isotropic turbulence by pulling a grid at a variable constant velocity through superfluid 4He. While using calorimetric technique to measure the energy dissipation, the Meissner effect was employed to eliminate all heat sources except from turbulent decay. 
A controlled divergent magnetic field provides the lift to a hollow cylindrical superconducting actuator to which the grid is attached. Position sensing is performed by measuring the inductance change of a coil when a superconductor, similar to that of the actuator, is moved inside it. This position sensing technique proved to be reliable under varying temperatures and magnetic fields, making it well suited for use in the towed-grid experiment, where a rise in temperature emerges from turbulent decay. Additionally, the reproducible dependence of the grid's position on the applied magnetic field enables complete control of the actuator's motion. Speaker: Roman Ciapurin (University of Florida) • 11:33 Radiative Polaritons in Thin Oxide Films with Experimental and Simulated Dispersion Relations 12m Our research focuses on polaritons, or infrared (IR) photon-phonon coupling in ionic materials, as a way to capture IR radiation from the solar spectrum. Radiative polaritons (RP) have the unique property that their phase velocity is faster than the speed of light. We wish to prove that the polaritons present in thin oxide films are RPs with the traits predicted by theory. Therefore, in this work we study simulated and experimental IR spectra of Al_2 O_3 films grown by atomic layer deposition (ALD) on Al. Since RPs are characterized by a complex frequency, omega, we have derived from IR spectra the real part, Re(omega), as the peak centroid, and the imaginary part, Im(omega), as the peak's width. Dispersion relations were obtained by plotting Re(omega) and Im(omega) versus the angle of incidence of the polarized IR radiation. The agreement between simulated and experimental data and between our data and theory allows us to conclude that RPs are present in thin oxide films. Speaker: Anita Vincent-Johnson (James Madison University) • 11:45 Inductive Critical Currents in Nb/Mo bilayers 12m We have carried out measurements of inductive critical currents in Nb/Mo bilayers. The films were grown by magnetron sputtering onto silicon substrates from separate sources. Sequences varying either the molybdenum or the niobium layer thickness were grown and studied. Inductive critical currents were measured using a third harmonic technique at 1 kHz. J_c varies as (1-t)^{3/2} as expected from Ginzburg-Landau theory (here t is the reduced temperature, T/T_c). Measurements in low magnetic field (below 120 Gauss) show a marked decrease in J_c with applied magnetic field. We look at various ways to interpret the V_{3f} vs. drive current behavior mentioned in the literature and compare to our results for pure niobium and the bilayers. Speaker: Phillip Broussard (Covenant College) • 11:57 Time-dependent hydrogen annealing of Mg-doped GaN 12m Unintentional doping by hydrogen is a concern for industrial growth of p-type GaN, which is important in creating blue LEDs and high frequency devices. Using electron paramagnetic resonance (EPR) we investigated hydrogen passivation in p-type nitrides. Samples included conventional GaN and Al_x Ga_{1-x} N (x=0.12, 0.28) grown by chemical vapor deposition (CVD) with 1-4x10^{19} cm^{-3} Mg, and GaN grown by Metal Modulation Epitaxy (MME) yielding 1.5x10^{20} cm^{-3} Mg. The Mg signal was observed during isothermal anneals in N_2:H_2 (92%:7%). The Mg EPR signal unexpectedly increased below 600C in GaN, but no changes were observed in AlGaN. The MME Mg EPR signal began decreasing after 10 min at 400C, while the Mg intensity of AlGaN did not start reducing until 500C.
As expected the Mg EPR signal in the CVD GaN quenched at 700C, as did the signal in AlGaN. However, the intensity of the Mg signal in MME samples was eliminated after only 20 min at 500C. The different temperature dependence suggests that hydrogen diffusion is affected by increased Mg concentration. These studies are integral for the advancement of p-type GaN. Speaker: Ustun Sunay (University of Alabama at Birmingham) • 12:09 First-principles study of surface states of topological insulators 12m Recently, three-dimensional topological insulators (TIs) with time reversal symmetry draw attention due to their unique quantum properties and device applications. Strong spin-orbit coupling in TIs induces metallic surface states within bulk band gaps. It has been known that Bi_2 Te_3, Bi_2 Se_3, and Sb_2 Te_3 are TIs possessing a single Dirac cone in the dispersion of the surface states at a given surface. The surface states of TIs play a critical role in proposed novel physical phenomena and applications. We investigate the surface states of thin films of Bi_2 Te_3(111) and Bi_2 Se_3(111) using density-functional theory including spin-orbit coupling. We identify the surface states of the TI films from calculated band structures using the decay length of the surface states and electron density plots. We also present the electronic properties of the surface states of the films. Speaker: Kyungwha Park (Virginia Tech) • 12:21 Electronic Structure Determination of the Thermoelectric Cu Rh_{1-x} Mg_x O_2 using Soft X-Ray Spectroscopies 12m Magnesium-doped rhodium oxides with formula unit Cu Rh_{1-x} Mg_x O_2 and delafossite-type structure exhibit a high thermoelectric figure of merit at elevated temperatures. The electronic structure of Cu Rh_{1-x} Mg_x O_2 has been studied with x-ray emission spectroscopy (XES), x-ray absorption spectroscopy (XAS), and photoemission spectroscopy (PES). The data reveal that the states at the Fermi level are Rh-derived. Measurements carried out by changing the orientation of the linear photon polarization further indicate that the Rh states have a more localized character along the c-axis, consistent with the layered crystal structure. Given the similarity of the electronic configurations of Co and Rh, these data provide solid experimental evidence that the orbital degrees of freedom of the d^6 ionic configuration of the states rooted in transport are key for explaining the thermoelectric properties of oxide materials. Speaker: Eric Martin (University of Tennessee) • 12:33 Energy Band Gap Behavior as a Function of Optical Electronegativity for Semiconducting and Insulating Binary Oxides 12m A relationship between energy band gap and electronegativity has long been understood to exist. However, defining the relationship between the two for binary oxide systems has proven difficult. Many scientists tried to model the band gap as a function of Pauling electronegativity values, but we show that by using a new concept called optical electronegativity'' one can obtain much better predictions regarding band gaps of new oxide. Interestingly we found that the behavior of oxides varies across depending on the chemical group the cation is from. With that knowledge, we developed two equations to describe the alkali earth metal and poor metal oxide. By using our models, we are able to predict the band gap of radium oxide at 5.36 eV. 
Due to the contributions of d and f orbitals we could not model the lanthanide rare-earth and transition metal oxides, but we found that the band gaps for the two groups lie between 3.56 - 5.72 eV and 1.82 - 3.82 eV, respectively. Speaker: Kristen Dagenais (University of Maryland, Baltimore County) • 12:45 Doping evolution of the electronic structure in Ba (Fe_{1-x} Co_x)_2 As_2 as revealed by polarization dependent ARPES and soft X-ray absorption 12m The newly discovered BaFe2As2 high Tc superconductors have given a huge stimulus to the field of superconductivity after more than two decades of cuprate supremacy. Their relatively simpler crystal structure, the possibility of ambivalent doping (holes and electrons) and their rich phase diagram provide an ideal workbench for a deeper understanding of high Tc superconductivity. Here we present a study based on ARPES and X-ray absorption of the electronic structure evolution upon Co doping in Ba (Fe_{1-x} Co_x)_2 As_2 high Tc superconductors, for the doping levels x=(0,6,8,12,22)%. This study focuses on two points: i) the effective role of Co at different doping levels; ii) the modification upon doping of the band structure and of the Fermi surface. ARPES data supported by state-of-the-art LDA calculations show a non-rigid band structure modification upon doping, and XAS at the Co L edge shows the metallic nature of the Co-As bond. Speaker: Paolo Vilmercati (University of Tennessee at Knoxville) • 12:57 Microstructural investigations of 0.2 per cent carbon content steel 12m The effect of thermal annealing on low carbon steel to obtain different phases was investigated. Steel sheets (0.2 wt. % C) of 900 um thickness were heat treated to produce different structures. All the samples have the same starting point, transformation to coarse austenite at 900 degrees Celsius. The nanoindentation results revealed that the samples have different hardness. Using conventional SEM micrographs, focused ion beam maps, and electron backscatter diffraction (EBSD), the microstructural development and grain boundary variation of the transformed phases martensite, bainite, tempered martensite, and different combinations of these phases were studied. Speaker: Sajjad Tollabimazraehno (Johannes Kepler University) • 13:09 Optical interferometric assessment of thin-film adhesion to substrate 12m A Michelson interferometer has been assembled to evaluate the adhesion strength of thin-film coatings on silicon wafers. Two gold-coated silicon wafer specimens are configured as the two end mirrors of the interferometer. The end mirrors are slightly tilted so that vertical interferometric fringes (dark stripes) are formed behind the beam splitter. An acoustic transducer is attached to the silicon substrate of each wafer so that the gold-coated surface oscillates in the direction of the optical axis. One wafer is driven at a time. As the coated surface oscillates, the vertical fringes oscillate horizontally, where the amplitude of the oscillation varies depending on the adhesion strength. Two specimens, one with oxygen-plasma pre-coating treatment and the other with no pre-coating treatment, have been tested. Empirically, the former is known to be stronger in adhesion than the latter. When the specimen with the weaker adhesion is driven in a range of 10 - 17 kHz, the fringes become blurry, indicating that the displacement is greater. Analysis of the fringe patterns in the spatial frequency domain has enabled us to differentiate the displacement quantitatively.
Speaker: Sushovit Adhikari (Southeastern Louisiana University) • 10:45 12:45 PD. Particle Physics at the LHC Crystal Ballroom DE ### Crystal Ballroom DE #### Hotel Roanoke, Roanoke VA Convener: Brad Cox (University of Virginia) • 10:45 The impact of Higgs boson searches at the Tevatron in the LHC era 30m The Tevatron's long program of colliding protons and anti-protons at a center-of-mass energy of 1.96 TeV will end in September of this year (2011). I will describe the ongoing efforts of the CDF and DO collaborations to conclude their search for the Higgs boson and make predictions on their sensitivity with the complete dataset. The sensitivity of the LHC experiments at CERN is quickly surpassing the Tevatron in most new physics searches; however, in some efforts--such as some low-mass Higgs boson searches--the Tevatron results will remain competitive for quite some time. I will focus the talk on the complementarity of the information that will be provided by the Tevatron and LHC experiments and will explain why both are important in understanding the nature of a low mass Higgs boson if it is discovered in the next few years. Speaker: Craig Group (University of Virginia) • 11:15 ATLAS in 2011: Status and prospects 30m The ATLAS Experiment at the Large Hadron Collider (LHC) began taking data at a center of mass energy of 7 TeV in spring 2010. What have we learned from ATLAS since SESAPS 2010? In my talk, I present the status of our measurements thus far, relate these results to predictions of the Standard Model and of theories beyond the Standard Model, and conclude with our prospects for making interesting discoveries in the future. Speaker: Dick Greenwood (Louisiana Tech University) • 11:45 Recent Results from 7 GeV proton-proton running at CMS 30m The Compact Muon Solenoid (CMS) experiment at CERN's Large Hadron Collider (LHC) has been collecting and analyzing proton-proton collisions at 7 TeV. CMS has collected more than 2 fb^-1 of collision data, including smaller samples at lower energies of 0.9 TeV and 2.36 TeV. These samples allow precision measurements of Standard Model processes and probing for new physics. The results presented will show good detector performance as well as some of the recent physics results from CMS. Speaker: Will Johns (Vanderbilt) • 12:15 Naturalness of electroweak symmetry breaking in the LHC era 30m I will provide a concise, coherent overview of electroweak symmetry breaking from a modern perspective and in light of the latest LHC data, focusing on the mechanisms of electroweak symmetry breaking that are natural, i.e., without significant fine-tuning. Speaker: Takemichi Okui (Florida State University)
# Proposal Information for 2005A-0511

PI: Jaehyon Rhee, California Institute of Technology, rhee@srl.caltech.edu
Address: Space Astrophysics Laboratory, MC 405-47, 1200 E. California Blvd., Pasadena, CA 91125, U.S.A.
CoI: Inese I. Ivans, California Institute of Technology
CoI: Andrew McWilliam, Carnegie Observatories (OCIW)
Title: Chemical Compositions of Newly Discovered Very Metal-Poor Red Giants

Abstract: We propose to continue a high-resolution spectroscopy program for very metal-poor (VMP) red giant stars with [Fe/H] ≤ -2.5 in the halo and thick disk of the Galaxy. Thanks to previous medium-resolution spectroscopy using NOAO observing facilities, the HK-II survey has been able to newly discover some 50 [Fe/H] ≤ -2.5 red giant stars with 11 ≤ B ≤ 15.5, over 7000 deg^2 (one-sixth of the entire sky). Comprehensive chemical abundance analyses for these old stars, placed at various directions and distances, will allow us to understand the early history of Galactic chemical evolution. During the course of this program, we have already discovered a new highly r-process enhanced VMP star, II 16033-02187, a red giant with [Fe/H] = -2.48 and [Eu/Fe] = +1.6, based on a preliminary analysis of high-resolution spectra taken with the KPNO 4-m/Echelle in May 2004. We expect to identify an additional 1-2 r-process enriched VMP red giants, and if detected, age dating of such stars by the use of long-lived radioactive species (e.g., Th and U) may place a strong constraint on lower limits on the age of the Universe. Also, detailed abundance studies of carbon enhanced metal-poor stars will help understand the relation between neutron-capture processes (particularly the s-process) and carbon enhancement at early Galactic times. The results from this effort will enable us to have a "refined" metallicity distribution function of [Fe/H] ≤ -2.5 populations and, combined with proper motions from astrometric surveys (e.g., UCAC2 and USNO B1), will provide full space motions of these extremely old stars to uncover the chemo-dynamical history of the early Galaxy.
# Market Making in Stellar 101: Fundamentals & Kelp

A brief introduction to market making in Stellar: the main concepts in security markets and market making, the importance of fostering this activity in Stellar, and an introduction to Kelp.

# Market Making in General

A market maker is a participant who provides both bid and ask quotes in a security market. Its objective is to provide a smoother flow and a better experience to the rest of the market participants by guaranteeing the possibility of a trade execution. If the previous paragraph seems too daunting or difficult to understand because of all the jargon, do not worry. I'll be dissecting its different pieces to make it simpler.

## Basic terminology in a Security Market

A security market or exchange is the place where one trades one asset for another. In general an exchange will list multiple security markets, one for each trading pair: XLM/USD, XLM/BTC, etc. In the following picture we can see what the XLM/USD (anchor) security market in the Stellar DEX looks like, as seen through StellarX.

The order book lists the current outstanding orders for both sides: offering the security and bidding for it. In market terminology, a bid offer is an offer to buy a certain amount of the asset at a certain exchange rate (in the view these are the ones in light blue), while an ask offer is one where the placing party is willing to sell a certain amount of the asset at a specific exchange rate. The ask prices will always be higher than the bid prices, otherwise those offers would be executed and the market would clear. The difference between the lowest ask price and the highest bid price is what's called the spread. The spread can be thought of as the cost of entering and exiting positions in the market: the tighter (smaller) the spread, the cheaper it is to get in and out of a position. Smaller spreads are signs of a liquid, active market, while large spreads are usually signs of either an illiquid market, a volatile market, or both, as volatility tends to reduce liquidity due to the uncertainty perceived by market participants. We'll see how the spread is relevant for market makers in the next sections.

From the order book, the market depth can be derived. The depth of the market is determined by how effectively it can absorb a market order in either direction without altering the price of the security. The deeper the market is, the harder it is to move the price through a single market order. Market depth is generally correlated with market liquidity and volatility: the more liquid the market, the deeper it tends to be, while the more volatile the market is, the shallower it will be. Having deep markets is something any security market aspires to, as it makes it easier for participants to get in and out of it.

Let's discuss a bit the different types of orders that can be placed in security markets. We can categorize orders based on whether they add or remove liquidity in the security market; in general they are called maker orders or taker orders, respectively. A maker order is an order that goes into the order book and doesn't get immediately executed, thus it increases the market's liquidity and depth. On the contrary, a taker order gets immediately filled by triggering the execution of outstanding orders in the order book until either the order is completely filled, there are no more orders available, or a price limit is reached.
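To make the terminology above concrete, here is a minimal, self-contained Python sketch. All prices and amounts are invented for illustration and it is not tied to the Stellar DEX or to any exchange API. It computes the spread and mid-price from a toy order book, and shows how a taker (market) buy order walks down the ask side, which is exactly what market depth measures.

```python
# Minimal order-book sketch (illustrative only; prices and amounts are made up,
# not taken from any real Stellar DEX market).

bids = [(0.0990, 500), (0.0985, 800), (0.0980, 1200)]   # (price, amount), best first
asks = [(0.1000, 400), (0.1005, 900), (0.1010, 1500)]   # (price, amount), best first

best_bid, best_ask = bids[0][0], asks[0][0]
spread = best_ask - best_bid
mid = (best_ask + best_bid) / 2
print(f"spread = {spread:.4f}, mid = {mid:.4f}")

def market_buy(asks, qty):
    """Walk the ask side like a taker (market) buy order; return the average fill price."""
    cost, remaining = 0.0, qty
    for price, amount in asks:
        take = min(amount, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    filled = qty - remaining
    return cost / filled if filled else None

# A small order fills near the best ask; a large one "eats" into deeper levels,
# so its average price is worse -- the shallower the book, the worse it gets.
print(market_buy(asks, 300))    # close to 0.1000
print(market_buy(asks, 2000))   # noticeably worse average price
```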
It is possible that a taker order becomes a maker order, this happens when an order is partially filled and the remainder of it is kept in the order book awaiting to be taken. That classification is a conceptual one, exchanges usually allow two types of orders: limit and market orders. Market orders don't state any price only quantity of the asset to sell or buy and the exchange clears operations in the orderbook until the order is fulfilled. Market orders are never maker orders, they only take liquidity from the market. On the other hand, limit orders state a explicit price limit for the asset being transacted; in the case of a sell order it indicates the minimum selling price while on bid orders the maximum buying price. Limit orders can be taker, maker or taker then maker, depending on the limit and the orderbook state when the order was submitted. A limit order whose price crosses the other side will get executed immediately thus considered a taker order. If the order cannot get fulfilled because there are not enough offers below the limit, the remainder stays in the orderbook waiting fulfilment becoming a maker order then. If from inception, the order never crossed the opposite's side best offer then the order is a maker order and sits in the orderbook until an order gets matched to it. To conclude this section, let's discuss how price actually gets affected by orders. The price that is listed for the asset is the price of the last executed trade, thus taker trades are the ones that move the price along with the depth of the market. In the recent order panel we can see last executed offers, if the price is stated that it dropped it means that a sell order decided to take outstanding bid orders which will generally be below the last trading price. The opposite is true if the stated executed order shows a price increase. It is also possible the price remains the same, this is common in very liquid and tight markets where the spread is almost non-existent. ## Why Market Makers are important? Now that we have discussed the basics of a security market and what a market maker does, we can go into the reason why fostering market makers is a good idea. When a market participant wants to exchange an asset for another, the only possible way for that to happen is if there's someone on the other side of the trade. Having to wait for that to happen would not make a smooth, pleasant nor reliable market experience which discourages participation. That's where the importance of market makers come into play. By having open orders on both sides of the orderbook they allow any participant to make a trade without having to wait for a counter-party. It is the market-maker's job to be there for anyone who wants to trade. Market makers inherently increase market liquidity and the depth of the market, thus making it more robust and easier to get in and out of positions for the rest of the participants. The more market makers there are, the more available liquidity is going to be. Also, as market makers compete with each other for orders the spread tightens, making it more attractive for the rest of participants to be involved in the market. ## How do Market Makers benefit? You might be wondering what's in it for the market makers, as they are not charitable organizations. Well, market makers are in that business for the profit it generates. They generate profit from the spread between their bid and ask orders; they literally buy low and sell high. 
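As a toy illustration of how the spread turns into profit (again with invented numbers, not real market data): a maker quoting around a 0.10 mid-price earns the spread times the traded size when both of its quotes get taken.

```python
# Illustrative numbers only: a market maker quoting around a 0.10 mid-price.
bid_quote, ask_quote = 0.0995, 0.1005   # our resting bid and ask offers
size = 1000                             # units filled on each side

# If both quotes are taken once, the round trip earns the spread times the size.
gross_profit = (ask_quote - bid_quote) * size
print(round(gross_profit, 2))  # 1.0 in the quote currency, before any fees

# If only one side keeps getting hit while the price drifts away, inventory
# piles up instead -- the risk discussed in the next paragraphs.
```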
The larger the spread, the more profitable the market maker is. The follow-up question would be: why don't they make their spread really large? And if they could, they would do it but the only way for them to generate profit is by having their offers taken on both sides, if the spread is too large there are going to be other participants (or market makers) willing to bid/ask closer to the middle, removing profitability from the market maker. This is strictly related as to why liquid markets with multiple market makers tend to have tighter spreads. But market making is not a risk-free business, inherently market makers are on the “wrong-side” of the trade with informed investors making the taker orders. Their profitability relies on the assumption that the market will behave like a random walk with a similar number of buys and sells at around the same price. If that doesn't hold, the market maker will build up inventory on one side of the market and will not be able to generate profit. Making things worse, an unbalanced trading situation will move the price against the positions of the market maker, putting it in the position of waiting for a mean reversal or taking losses to balance its inventory. Because of this, most market makers will either remove their offers or significantly increase the spread in times of high-volatility or market turmoil. Apart from withdrawing the offers market makers tend to have multiple offers on each side open with each offer widening the spread. This way, they are able to provide the same liquidity but mitigating the risks of a sudden price change compromising their entire inventory # Market Making in Stellar - The Stellar DEX The Stellar ecosystem doesn't require an external exchange handling the order matching and execution. Stellar has a built-in exchange with all the operations built-in as native protocol operations, this is the Stellar DEX. The different Stellar exchange apps, such as Interstellar and StellarX, provide a nice user interface to see the state of the different security markets in the DEX as well as an easy way to interact with it. But there is no requirement of using this exchange apps to interact with the DEX, just by having an Stellar Account and being able to create and publish Stellar Transactions is all you need; it can perfectly be done through the Stellar Laboratory. The operations that are going to be used in the context of market making are: Manage Buy Offer and Manage Sell Offer. With just these two operations, we are able to open, modify or withdraw outstanding offers on both sides of the orderbook. All offers in the Stellar DEX are limit orders, but market orders can be emulated through the issuance of a path payment strict receive/send to oneself with a really high max send or min receive value. This effectively acts as a market order that will clear the outstanding orders until fulfilled or fails completely. Every market maker needs to have a strategy. A strategy defines what offers to make and their lifecycle: when to modify them, when to cancel, etc. There are several different strategies, each one with its pros and cons. There is no "best" strategy as some have specific use-cases while others are more general but depending on the market conditions they can lead to very different results. It is outside of this article's scope to discuss the different market making strategies. 
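To give a feel for what a strategy's output looks like, here is a toy sketch of layered quoting in plain Python. It is not Kelp's buysell strategy and uses no Stellar SDK calls; all parameters are invented. It simply produces several offers per side, each slightly further from the mid-price, which a real bot would then submit as Manage Buy Offer / Manage Sell Offer operations and refresh as the market moves.

```python
# Toy layered-quoting sketch (illustrative only; not Kelp's actual strategy
# and not the Stellar SDK). All parameters below are made-up examples.

def layered_quotes(mid, base_spread=0.004, step=0.002, level_size=100, levels=3):
    """Return (price, amount) ladders for bids and asks around `mid`.

    Each successive level sits further from the mid-price, so a sudden move
    only exposes part of the inventory, as described above.
    """
    bids, asks = [], []
    for i in range(levels):
        half = base_spread / 2 + i * step
        bids.append((round(mid * (1 - half), 7), level_size))
        asks.append((round(mid * (1 + half), 7), level_size))
    return bids, asks

bids, asks = layered_quotes(mid=0.1000)
print("bids:", bids)  # closest to the mid first, then progressively wider
print("asks:", asks)
```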
Once a strategy for market making has been chosen, one could manage the offers manually through the issuance of these operations, but that would be very laborious and easily exploitable, as changes in the market happen faster than a human can react. The next step would be to build a script that manages the outstanding offers based on market conditions, and this is where Kelp comes into play.

# Kelp - Becoming a Market Maker in the Stellar DEX

Kelp is the implementation of a modular automatic trading bot primarily focused on the Stellar DEX. It is open source and written in Go. Its objective is to make it easy for parties to start participating in the Stellar DEX without having to implement a lot of the boilerplate required for automatic trading. It comes with several strategies already implemented, making it easier to start participating right away just by configuring it properly. These strategies are focused on the following objectives: making markets, providing liquidity and price discovery for ICOs, and mimicking orderbooks from other exchanges. Kelp also contains several external exchange integrations by default and, as it is capable of interacting with CCXT, it allows interactions with over 120 exchanges. Even though it comes with a lot of things by default, its modular design allows developers to customize key behaviors, such as adding new trading strategies or integrating with other exchanges, without having to get into the weeds of the bot itself.

In order to start working with it, you can either download a precompiled binary from its GitHub page or compile the source code yourself. If interacting with other exchanges through CCXT is desired, then CCXT needs to be set up separately and must be reachable by Kelp. Lastly, Kelp is capable of storing trades in a Postgres database; if you want to use this feature you will need an instance of Postgres with appropriate credentials that Kelp can access.

Once all the required components are set up, it is time to create the required configuration files. There is a main bot configuration file that contains general configuration; a sample can be found here. A second configuration file is required, which basically modifies the strategy's behavior. This file is specific to the trading strategy that was chosen; the example files are pretty well explained. Finally, the bot can be launched with the configuration files and the trading strategy:

$ kelp trade --botConf ./path/trader.cfg --strategy buysell --stratConf ./path/buysell.cfg

Each instance of the Kelp bot can only handle a single security market, that is, a single trading pair. In order to operate on multiple trading pairs, multiple instances need to be launched, each with its own configuration (see the sketch below). If you don't want to have different funding accounts for each security market, you can use a nice Kelp feature: the decoupling of the funding account from the trading account. Kelp can be configured to issue the transactions managing the offers from one account (the trading account) while using the funding account as the source account for the offer operations. This enables you to have a single funding account with multiple bots, each having its own trading account, using the funding account's assets in different security markets. It is necessary to do it this way because Stellar doesn't allow issuing multiple transactions from the same source account in parallel: each one requires the correct sequence number.
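Since each Kelp instance handles a single trading pair, running several pairs simply means running several processes. A small sketch of a launcher script follows; the flags mirror the kelp trade command shown above, while the config paths and pair names are hypothetical placeholders you would replace with your own.

```python
# Sketch of launching one Kelp instance per trading pair. The flags are the
# ones used by `kelp trade` above; the directory layout and pair names are
# hypothetical placeholders. Each pair needs its own trader/strategy configs
# (and its own trading account if you use the funding/trading account split).
import subprocess

pairs = ["xlm_usd", "xlm_btc"]  # one bot process per security market

procs = []
for pair in pairs:
    cmd = [
        "kelp", "trade",
        "--botConf", f"./configs/{pair}/trader.cfg",
        "--strategy", "buysell",
        "--stratConf", f"./configs/{pair}/buysell.cfg",
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```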
Kelp is a good option to start the journey of becoming a market maker in the Stellar ecosystem, the ramp up to have things setup is small and not hard at all. But, beware that this can be a double-edged sword, by making it easy to setup and get started it also makes it easier to launch something you don't fully understand and make you lose money. And, like all automated trading tools, if misconfigured it can actually make you lose money pretty fast. The best advice I can give is to make sure you get acquainted with market making in general, the strategy you are using and the configuration details of Kelp. Before launching this into the actual Stellar DEX in the mainnet, run it in a controlled environment and in the testnet to validate that things are working according to your expectations. And last but not least, start with a little amount of assets (and something you can afford to lose) and grow slowly as you get more understanding of the different moving parts. # Conclusion If we want the Stellar project to succeed, its DEX needs to have healthy security markets. Participation in those markets needs to be smooth, easy and cheap. The transaction costs work in its favor but currently its security markets are too illiquid and shallow. In order to get better liquidity, depth and tighter spreads in those markets, a good number of market makers are needed. As more market makers participate, healthy competition will kick in and we'll be able to enjoy healthier markets. And once we get healthier markets, a positive feedback loop can be established attracting more people to the Stellar ecosystem, further improving its markets. There is currently a good opportunity before the space gets crowded to enter in this business line and learn while competition is not so fierce. Kelp makes it easy to enter the arena and start experimenting without needing to develop a lot of software, allowing you to focus on the important parts: your market making strategy. If you are reading this, it means that you already are somewhat interested in the space. I would encourage you to give it some more thought, do some extra research and experiment with market making in the Stellar DEX. Kelp will make the experimentation much easier than if you start from scratch.
# GMAT Data Sufficiency (DS)

Topics   Author   Replies   Views   Last post

Announcements
105 150 Hardest and easiest questions for DS   Tags: Bunuel 3 16276 07 Dec 2015, 09:09
437 DS Question Directory by Topic and Difficulty   Tags: Coordinate Geometry bb 0 126863 07 Mar 2012, 08:58

Topics
2 By what percent is the time taken by 12 men to complete a pa tabsang 6 1839 22 Oct 2013, 07:14
5 Is the probability of an element in Set B also being an elem tabsang 4 1269 17 Jul 2014, 08:58
6 In the figure above, does a = b? systemm6665 12 1953 07 Aug 2016, 14:05
1 Susan flipped a fair coin N times. What fraction of the synecdoche 14 2793 05 Aug 2016, 06:03
9 If z1, z2, z3,..., zn is a series of consecutive positive swethar 9 4920 09 Jul 2016, 22:53
By what percent will the bacteria population increase in 1 swatirpr 2 2545 15 Feb 2015, 14:53
3 What is the value of xy? swati007 4 1479 01 Jun 2014, 13:33
4 What is the minimum number of RECTANGULAR shipping boxes swarman 8 1366 19 Nov 2015, 17:19
4 Nancy, a car dealer, put 420 cars on sale. All cars on sale belong to swanidhi 2 525 31 Aug 2015, 05:08
1 Adam, Cara, Carlos and Donna are friends swanidhi 2 633 02 Dec 2015, 13:42
32 70 75 80 85 90 105 105 130 130 130 The list shown consist of   Tags: Difficulty: 700-Level, Statistics and Sets Problems, Source: GMAT Prep sushma0805 26 9479 06 Oct 2015, 03:03
4 If x is an integer, is |x|>1? sushma0805 5 1997 04 Mar 2015, 10:04
2 Is xy < 6 ? a.x < 3 and y < 2. b.1/2 < x < sushma0805 4 2066 27 Mar 2010, 09:35
2 If and and m are positive integers, then what is the remainder of a^m/3 susheelh 2 201 26 Jun 2016, 03:47
1 If X is greater than two, is X the square of an odd prime integer? susheelh 2 165 26 Jun 2016, 04:03
Is the area of the rectangle more than the area of the square? susheelh 6 277 24 Jul 2016, 10:52
9 If the lengths of two sides of a certain triangle are 5 and 10, what surupab 5 1302 15 Aug 2016, 06:02
20 Does the line with equation ax+by = c, where a, b and c are real consta surupab 5 1719 22 May 2016, 23:00
8 If a, b, c are positive integers what is the range of the five numbers surupab 8 1275 19 Jul 2016, 22:01
if x and Y are positive, is X surupab 1 324 27 Jun 2016, 21:48
17 If t and x are integers, what is the value of x? surupab 11 1739 12 Jul 2016, 02:52
3 A number B is formed by reversing the two digit number A. Wh surendar26 4 1883 21 Feb 2014, 00:22
1 Does the prime number p divide n!? surendar26 4 2303 16 Apr 2011, 00:13
14 If a and b are distinct non zero numbers, is x+y an even surendar26 7 2229 05 May 2015, 05:06
15 If 8^0.5y 3^0.75x = 12^n then what is the value of x?? 1) n surendar26 9 3727 27 Sep 2015, 08:52
1 Is m^n >= 0? surendar26 1 1170 20 Dec 2010, 11:48
9 Is x + y > 0 ? surendar26 15 2402 09 Apr 2016, 12:21
1 Is xy < 5 ? (1) x is prime and y is the reciprocal of a surendar26 2 1491 31 Dec 2010, 16:17
36 If x is negative, is x < -3 ? surendar26 16 7501 02 Dec 2015, 23:31
If y is an integer and y = x + |x|, is y = 0? (1) x < 0 suraabhi 7 1389 09 Aug 2011, 20:09
32 Lines k and l intersect in the coordinate plane at point (3, supri1234 16 3513 22 Sep 2015, 15:59
11 If x is a positive integer, is x a prime integer? suprememodelrus 6 2801 08 Sep 2015, 02:11
The average (arithmetic mean) of a, b, c, d, e is 7. What is superpus07 2 6475 29 Jul 2012, 00:43
8 A marzipan factory has two machines producing marzipan. Ever superpus07 5 3019 26 Jul 2014, 02:10
10 All the terms in Set S are integers. Five terms in S are eve superpus07 13 2391 25 May 2016, 22:15
14 Eight consecutive integers are selected from the integers 1 superpus07 9 2767 16 Jun 2015, 03:32
6 Each of the letters in the table above represents one of the numbers suntaurian 6 5056 17 Jun 2015, 02:48
7 Is positive integer n 1 a multiple of 3? (1) n^3 n is a suntaurian 3 7757 13 Feb 2012, 05:06
2 If 2 + 5a b/2 = 3c, what is the value of b? suntaurian 3 3184 16 Jul 2014, 15:50
10 Committee X and Committee Y, which have no common members sunland 14 5150 04 Apr 2015, 07:14
17 What fraction of this year's graduating students at a sunland 14 4878 21 Jan 2016, 02:20
Moved: when a certain tree was first planted it was 4 ft tall. the height of
If m and n are positive integers is m/n an integer? sunita123 3 976 15 Dec 2014, 07:51
2 If a pencil is selected at random from a desk drawer, what sunita123 3 1840 03 May 2016, 15:11
2 Three sides of triangle are x, x+2 and x+4. What is the area of the tr sunita123 2 883 14 Jul 2015, 12:03
2 Is x + y > 0 ? sunita123 4 1464 22 Apr 2016, 00:17
1 If a certain company purchased computers at $2000 each and printers $ sunita123 6 896 18 Jan 2016, 22:10
If n is an integer, is n even? (1) n^2-1 is an odd integer sundarc 15 2175 11 Jun 2010, 22:46
2 What is the difference between the standard deviation of two five memb 2 659 08 Jan 2015, 13:36
12 If 25% of the company's employees contribute at least 4% of their 6 1053 12 May 2016, 13:35
auto_math_text
web
Casimir is an effects library for supporting modular effects in Haskell. It gives a universal interpretation of effects both from the point of view of MTL type classes (final encoding) and algebraic effects (initial encoding), allowing different styles of effect interpretation to be used. Casimir also generalizes the understanding of higher order effects by parameterizing effects over different lift types that correspond to MonadTrans and MonadTransControl.
auto_math_text
web
# Power, energy

App description: Heat unit conversion. Unit conversion for thermal conductivity, thermal resistance, thermal conductivity, specific …

App description: Enter the value and choose to convert from the button below and display the result. Calculation formula, usage, e…

Kilojoules to Calorie [Nutrition] Conversion Calculator: 1 calorie [nutritional] = 4.1868 kJ…

Kilojoule to calorie conversion [thermodynamics]: In the following two forms, the number of inputs can be converted to each other…

Power measurement unit conversion: can convert online between watt (W), kilowatt (kW), British horsepower (HP), metric horsepower (PS), …

Kilowatt (kW) to kilovolt-ampere (kVA) calculator. Calculation formula: kW to kVA calculator ► calculating kilovolt-amper…

Car 100 km fuel consumption calculator: fuel consumption per 100 km, mileage per liter, how many miles per gallon…
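A minimal Python sketch of the conversions these calculators describe. The nutritional-calorie factor (4.1868 kJ) is taken from the text above; the power-factor relation and the fuel-economy constants are standard values assumed here, not taken from the page, and the function names are purely illustrative.

```python
# Unit-conversion helpers corresponding to the calculators listed above.
US_GALLON_LITRES = 3.785411784   # assumed standard value
MILE_KM = 1.609344               # assumed standard value

def kj_to_kcal(kj):
    """Kilojoules to calories [nutritional], using 1 kcal = 4.1868 kJ."""
    return kj / 4.1868

def kcal_to_kj(kcal):
    """Calories [nutritional] to kilojoules."""
    return kcal * 4.1868

def kw_to_kva(kw, power_factor):
    """Real power in kW to apparent power in kVA for a given power factor (0-1]."""
    return kw / power_factor

def litres_per_100km_to_mpg(l_per_100km):
    """Fuel consumption in L/100 km to fuel economy in US miles per gallon."""
    km_per_litre = 100.0 / l_per_100km
    return km_per_litre / MILE_KM * US_GALLON_LITRES

print(kj_to_kcal(1000.0))          # ~238.8 kcal
print(kw_to_kva(10.0, 0.8))        # 12.5 kVA
print(litres_per_100km_to_mpg(8))  # ~29.4 mpg
```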
auto_math_text
web
1. ## Re: Westpac Maths Comp marathon

Originally Posted by mathpie: What did everyone get for the last 5 (free response) questions in the senior division? I couldn't get any.. lol

I'm trying to remember a question from there. Can you recall all of them?

2. ## Re: Australian Maths Competition

Originally Posted by Mongoose528: How would you solve this algebraically: a nude number is a natural number all of whose digits are factors of the number. Find all 3-digit nude numbers where no digits are repeated. Can't seem to make inroads :/ I can solve it case by case, but I want to know a quicker way of doing it.

Bump.

3. ## Re: Westpac Maths Comp marathon

Hi, can anyone solve question 27 of the 2013 Senior AMC paper? I have tried many different methods but just cannot get it to work. I've tried finding the ratio of the areas/solving the areas simultaneously and all that. Please someone help, as I've been trying to figure it out for a fortnight now. Show Working

4. ## Re: Westpac Maths Comp marathon

Originally Posted by Drdusk: Hi, can anyone solve question 27 of the 2013 Senior AMC paper? I have tried many different methods but just cannot get it to work. I've tried finding the ratio of the areas/solving the areas simultaneously and all that. Please someone help, as I've been trying to figure it out for a fortnight now. Show Working

Refer to the previous posts. Originally Posted by Mongoose528: Here's a solution to one of them: Attachment 34157

5. ## Re: Australian Maths Competition

A bit of help on this question please; this is a bit of practice for the AIMO. 11.JPG This is all the working I've managed to do: working.JPG

6. ## Re: Australian Maths Competition

Hint: If $\triangle ABC$ is a triangle, and $X$ is a point on the line segment $BC$ with $BX:CX=\lambda$, then $|\triangle ABX|:|\triangle ACX|=\lambda$.

7. ## Re: Australian Maths Competition

How did everyone go in the Australian Maths Competition?

8. ## Re: Australian Maths Competition

Got high 70s with Distinction, close to HD.
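For reference, here is a brute-force check of the nude-number question in post 2 above; this is a sketch added for illustration, not something posted in the thread. It tests every three-digit number, skipping any number containing the digit 0 (nothing is divisible by 0) or a repeated digit.

```python
# Brute-force search for three-digit "nude numbers" with no repeated digits:
# numbers that are divisible by each of their own digits.
def three_digit_nude_numbers():
    results = []
    for n in range(100, 1000):
        digits = str(n)
        if "0" in digits or len(set(digits)) != 3:
            continue  # skip zeros and repeated digits
        if all(n % int(d) == 0 for d in digits):
            results.append(n)
    return results

print(three_digit_nude_numbers())  # the list starts 124, 126, 128, 132, 135, ...
```

A case-by-case algebraic argument can then be checked against this list, for example by splitting on which digits appear (a number containing a 5 must end in 5, and so on).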
auto_math_text
web
Anh-Thi DINH

### PhD: Models & Test cases

Posted on 01/11/2018, in PhD.

Because I cannot find the place where I stored the notes for the models I have been testing, this note is for them. I am rewriting it here in short form. This note was created while testing with Chopp's model (chopp06combine). In this test, if we use the Ghost Penalty method, the results become bad in some steps. I had modified some points in the Ghost Penalty code. That's why I need to check again whether there is something wrong with the old models.

## NXFEM test cases

• File main.m with file main_eachStep.m.
• Models in nxfem\func\func_model: Sinha, Becker, Barrau. See the models here.

### Sinha's test case

• Article: unfitted fem ellip para sinha.pdf
auto_math_text
web
Issue No. 03 - July-September (1999, vol. 21), pp. 38-48

ABSTRACT
The history of computer developments in Czechoslovakia spans the period from the end of World War II until recent times, when the country split into two: the Czech Republic and Slovakia. This is an account of those developments. When the area was one country, the story includes information about the entire national picture, but we have, in this article, put particular emphasis on those events occurring in Slovakia.

CITATION
Jozef Dujnic, Norbert Fristacký, Ludovít Molnár, Ivan Plander, Branislav Rovan, "On the History of Computer Science, Computer Engineering, and Computer Technology Development in Slovakia," IEEE Annals of the History of Computing, vol. 21, no. 3, pp. 38-48, July-September 1999, doi:10.1109/85.778981
auto_math_text
web
## Recommended Posts

Hi all, I have a question regarding collision detection using an ellipsoid (the collide-and-slide algorithm). I have read the posted collision algorithm and its "Improve collision and response" pdf book. I'm having trouble deriving the "epsilon" value, or veryCloseDistance. How can I compute this value? What is the basis for this value?

I have a problem with my collision: it detects the collision and slides sideways, but it does not slide up when there are stairs. When my velocity vector is not totally (0, 0, 0) but has a value like (0.0000019, 0.0, 0.0000094), the resulting slide can have an upward component. My velocity vector decreases with friction every frame; the friction scales down the velocity every frame instead of directly subtracting from it. When my velocity vector is too small (because of the friction scale-down) but not zero, my collision routine is still called and the result gives an upward slide instead.

What is odd is that when the velocity is too small (not completely zero), the 3D mesh object appears to be standing still (not walking) in the viewer's view, then suddenly moves up because of the result from the collision routine. Can anybody help me debug this?

[Edited by - cebugdev on November 12, 2010 5:50:26 PM]

##### Share on other sites

How are you checking your velocity? Are you comparing it to zero, like velocity.x == 0? If so, you should use a different method, for example:

if (abs(velocity.x) < 0.002) { velocity.x = 0; }

This way you can set a minimum velocity below which it is snapped to zero. That should help out a bit, but it probably won't fix the problem completely.

##### Share on other sites

Quote: Original post by BlackShark33: How are you checking your velocity? Are you comparing it to zero like velocity.x == 0? If so you should use a different method like *** Source Snippet Removed *** this way you can set the minimum velocity before it's set to zero. That should help out a bit but this probably won't fix the problem completely.

I checked the velocity by checking whether its length is zero, like velocity.length() == 0, but like you've said, that won't solve my collision problem, since if I forced the velocity to zero, then it won't slide up anymore.
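A minimal sketch, added here for illustration rather than taken from the thread, of the kind of velocity clamping being suggested: snap a near-zero velocity vector to exactly zero before running the collide-and-slide step, so leftover friction values never reach the sliding-plane response. The threshold constants and function names are assumptions to be tuned, not values from the original algorithm.

```python
import math

VERY_CLOSE_DISTANCE = 0.005  # recursion cut-off in ellipsoid space (tune to your scale)
MIN_SPEED = 1e-3             # below this, treat the character as standing still

def length(v):
    return math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])

def move_with_collisions(position, velocity):
    # Zero out tiny residual velocities such as (0.0000019, 0.0, 0.0000094)
    # so they never trigger an upward slide from the collision response.
    if length(velocity) < MIN_SPEED:
        return position, (0.0, 0.0, 0.0)

    # ... otherwise run the usual recursive collide-and-slide here, stopping
    # the recursion once the remaining travel distance is < VERY_CLOSE_DISTANCE ...
    return position, velocity
```

With this kind of clamp, the epsilon question splits into two separate tunables: a minimum speed for "is the character actually moving" and a very-close distance for "is the ellipsoid effectively touching the surface".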
auto_math_text
web
Generic character of charge and spin density waves in superconducting cuprates

Charge density waves (CDWs) have been observed in nearly all families of copper-oxide superconductors. But the behavior of these phases across different families has been perplexing. In La-based cuprates, the CDW wavevector is an increasing function of doping, exhibiting the so-called Yamada behavior, while in Y- and Bi-based materials the behavior is the opposite. Here, we report a combined resonant soft X-ray scattering (RSXS) and neutron scattering study of charge and spin density waves in isotopically enriched La$_{1.8-x}$Eu$_{0.2}$Sr$_x$CuO$_4$ over a range of doping $0.07 \le x \le 0.20$. We find that the CDW amplitude is temperature independent and develops well above experimentally accessible temperatures. Further, the CDW wavevector shows a nonmonotonic temperature dependence, exhibiting Yamada behavior at low temperature with a sudden change occurring near the spin ordering temperature. We describe these observations using a Landau–Ginzburg theory for an incommensurate CDW in a metallic system with a finite charge compressibility and spin-CDW coupling. Extrapolating to high temperature, where the CDW amplitude is small and spin order is absent, our analysis predicts a decreasing wavevector with doping, similar to Y and Bi cuprates. Our study suggests that CDW order in all …

NSF-PAR ID: 10320768. Journal Name: Proceedings of the National Academy of Sciences, Volume 119, Issue 15, ISSN 0027-8424. National Science Foundation.

##### More Like this

1. The defining characteristic of hole-doped cuprates is d-wave high temperature superconductivity. However, intense theoretical interest is now focused on whether a pair density wave state (PDW) could coexist with cuprate superconductivity [D. F. Agterberg et al., Annu. Rev. Condens. Matter Phys. 11, 231 (2020)]. Here, we use a strong-coupling mean-field theory of cuprates to model the atomic-scale electronic structure of an eight-unit-cell periodic, d-symmetry form factor, pair density wave (PDW) state coexisting with d-wave superconductivity (DSC). From this PDW + DSC model, the atomically resolved density of Bogoliubov quasiparticle states $N(\mathbf{r}, E)$ is predicted at the terminal BiO surface of Bi$_2$Sr$_2$CaCu$_2$O$_8$ and compared with high-precision electronic visualization experiments using spectroscopic imaging scanning tunneling microscopy (STM). The PDW + DSC model predictions include the intraunit-cell structure and periodic modulations of $N(\mathbf{r}, E)$, the modulations of the coherence peak energy $\Delta_p(\mathbf{r})$, and the characteristics of Bogoliubov quasiparticle interference in scattering-wavevector ($\mathbf{q}$) space. Consistency between all these predictions and the corresponding experiments indicates that lightly hole-doped Bi$_2$Sr$_2$CaCu$_2$O$_8$ does contain a PDW + DSC state. Moreover, …

2. Charge-density waves (CDWs) are a ubiquitous form of electron density modulation in cuprate superconductors. Unveiling the nature of quasistatic CDWs and their dynamical excitations is crucial for understanding their origin, similar to the study of antiferromagnetism in cuprates. However, dynamical CDW excitations remain largely unexplored due to the limited availability of suitable experimental probes.
Here, using resonant inelastic X-ray scattering, we observe dynamical CDW excitations in Bi$_2$Sr$_2$LaCuO$_{6+\delta}$ (Bi2201) superconductors through their interference with the lattice. The distinct anomalies of the bond-buckling and the bond-stretching phonons allow us to draw a clear picture of funnel-shaped dynamical CDW excitations in Bi2201. Our results on the interplay between CDWs and the phonon anomalies shed light on the nature of CDWs in cuprates.

3. Abstract: The origin of the weak insulating behavior of the resistivity, i.e. $\rho_{xx} \propto \ln(1/T)$, revealed when magnetic fields ($H$) suppress superconductivity in underdoped cuprates has been a longtime mystery. Surprisingly, the high-field behavior of the resistivity observed recently in charge- and spin-stripe-ordered La-214 cuprates suggests a metallic, as opposed to insulating, high-field normal state. Here we report the vanishing of the Hall coefficient in this field-revealed normal state for all $T < (2-6)\,T_{\mathrm{c}}^{0}$, where $T_{\mathrm{c}}^{0}$ is the zero-field superconducting transition temperature. Our measurements demonstrate that this is a robust fundamental property of the normal state of cuprates with intertwined orders, exhibited in the previously unexplored regime of $T$ and $H$. The behavior of the high-field Hall coefficient is fundamentally different from that in other cuprates such as YBa$_2$Cu$_3$O$_{6+x}$ and YBa$_2$Cu$_4$O$_8$, and may imply an approximate particle-hole symmetry that is unique to stripe-ordered cuprates. Our results highlight the important role of the competing orders in determining the normal state of …

4. We report results of large-scale ground-state density matrix renormalization group (DMRG) calculations on $t$-$t'$-$J$ cylinders with circumferences 6 and 8. We determine a rough phase diagram that appears to approximate the two-dimensional (2D) system. While for many properties, positive and negative $t'$ values ($t'/t = \pm 0.2$) appear to correspond to electron- and hole-doped cuprate systems, respectively, the behavior of superconductivity itself shows an inconsistency between the model and the materials. The $t' < 0$ (hole-doped) region shows antiferromagnetism limited to very low doping, stripes more generally, and the familiar Fermi surface of the hole-doped cuprates. However, we find $t' < 0$ strongly suppresses superconductivity. The $t' > 0$ (electron-doped) region shows the expected circular Fermi pocket of holes around the $(\pi,\pi)$ point and a broad low-doped region of coexisting antiferromagnetism and d-wave pairing with a triplet p component at wavevector $(\pi,\pi)$ induced by the antiferromagnetism and d-wave pairing. The pairing for the electron low-doped system with $t' > 0$ is strong and unambiguous in the DMRG simulations. At larger doping another broad region with stripes in addition to weaker d-wave pairing and striped p-wave pairing appears. In a small doping region near $x = 0.08$ for $t' \sim -0.2$, we find an unconventional type of stripe involving unpaired holes located predominantly on chains spaced three lattice spacings apart. The undoped …

5. Abstract: We study the ground state properties of the Hubbard model on three-leg triangular cylinders using large-scale density-matrix renormalization group simulations.
At half-filling, we identify an intermediate gapless spin liquid phase, which has one gapless spin mode and algebraic spin–spin correlations but exponential decay scalar chiral–chiral correlations, between a metallic phase at weak coupling and Mott insulating dimer phase at strong interaction. Upon light doping the gapless spin liquid, the system exhibits power-law charge-density-wave (CDW) correlations but short-range single-particle, spin–spin, and chiral–chiral correlations. Similar to CDW correlations, the superconducting correlations also decay in power-law but oscillate in sign as a function of distance, which is consistent with the striped pair-density wave. When further doping the gapless spin liquid phase or doping the dimer order phase, another phase takes over, which has similar CDW correlations but all other correlations decay exponentially.
auto_math_text
web
# Trend: Searching for the Higgs , Department of Physics and Astronomy, University of California, Riverside, Riverside, CA 92521, USA Published December 14, 2009  |  Physics 2, 106 (2009)  |  DOI: 10.1103/Physics.2.106 Since the 1970s, physicists have known that two fundamental forces of nature, the electromagnetic force and the weak force, can be unified into a single force—the electroweak force—if the particles that carry these forces are massless. The photon, which carries the electromagnetic force, is massless, but the particles that carry the weak force have substantial mass, explaining why the weak force is weaker than the electromagnetic force. This unification can still work if a new spin-zero boson, the Higgs boson, is introduced, allowing the particles that carry the weak force to be massive. In addition, interactions with the Higgs boson are responsible for the masses of all particles. These ideas form the basis of the standard model of particle physics, which is consistent with almost all observations. Gravity can act once particles have mass due to the Higgs boson—the Higgs boson is not the source of the gravitational force. The one outstanding missing piece in this entire picture is the Higgs boson itself. What are the prospects for its discovery? ## The standard model of particle physics and the Higgs boson Matter is made up of spin-$1/2$ fermions, the particles known as leptons (the “light ones”) and quarks. There are three families of leptons, each consisting of two particles: the electron with its corresponding neutrino ($\nu$), the muon ($\mu$) and its neutrino, and the tau lepton ($\tau$) and its neutrino [1]. Electrons are familiar from electric current and as constituents of atoms; they are the lightest electrically charged particles. Muons and tau leptons are also charged and can be considered to be heavier electrons. Neutrinos are neutral and (almost) massless. All of the leptons can be directly observed, some more easily than others. Quarks also come in three families, and they also have electrical charge, but their charges are fractions of the charge of the electron ($+2/3$ and $-1/3$). They cannot be directly observed—the particles we do observe, such as the proton and the pion, are made up of either three quarks or a quark and its antiparticle, an antiquark. Every particle has a corresponding antiparticle with the same mass but opposite charge, for example, the antiparticle of the electron is the positively charged positron. Quarks that are produced in particle interactions or decays materialize as “jets” of ordinary particles collimated close to the original quark direction [2]. Four fundamental forces act on the fundamental fermions: gravity, the weak force (responsible for nuclear beta decay), the electromagnetic force, and the strong force. These forces occur through the exchange of fundamental bosons: the graviton, the charged and neutral $W$ and $Z$ bosons, the photon, and eight gluons. (Gravity will not be discussed further here.) All of the fundamental fermions have interactions via the weak force, and all of the charged fundamental fermions have electromagnetic interactions. Only the quarks can interact via the strong force, and particles such as protons that are made up of quarks and therefore have strong interactions are called hadrons (the “heavy ones”). The fundamental particles and forces are summarized in Fig. 1. 
Photons, which have no mass, carry the electromagnetic force, whereas the massive charged $W$ and neutral $Z$ are responsible for the weak interactions; all of these particles are spin-one bosons. The minimal standard model [4] requires in addition a massive scalar boson, the Higgs boson, to allow the $W$ and $Z$ to be massive, as described by the Higgs mechanism [5]. The lowest energy state of the Higgs field has a nonzero value, which has the dimensions of mass. Particles obtain their mass from their interactions with this Higgs field—this is the reason the Higgs boson plays such a major role in physics. The photon has no such interactions, so it retains its massless character, while the masses of the $W$ and $Z$ are approximately $100$ times the mass of the proton. The asymmetry between the masses of the photon and the $W$ and $Z$ bosons is called “electroweak symmetry breaking.” According to theory, the Higgs occurs as a doublet of complex scalar fields, giving four degrees of freedom. Three of the four degrees of freedom are unphysical but are needed as intermediate states in the theory, while the fourth degree of freedom corresponds to the single physical Higgs boson. Once the Higgs mechanism is included, the electromagnetic and weak interactions are unified into one interaction—the electroweak interaction [6]. The Higgs boson, or something else that plays its role, is necessary in the standard model, but it has not yet been observed. Therefore its discovery is of utmost importance in particle physics. Searches have most recently been carried out at the Large Electron Positron collider (LEP) at the European Organization for Nuclear Research (CERN) [7] and at the Tevatron proton-antiproton collider at Fermilab [8]. It is most likely, however, that it will be discovered at the Large Hadron Collider (LHC) [9] at CERN. At what mass should we be looking for the Higgs? The mass of the Higgs boson is not specified in the standard model, but theorists think that it should be less than about $1000\phantom{\rule{0.333em}{0ex}}\text{GeV}$ (about $1000$ times the mass of the proton). In certain extensions of the standard model such as supersymmetry there may be other constraints on the mass. The couplings of the Higgs boson to other particles determine its production rate and its decays to other particles, and knowing these coupling strengths within the theory allows the prediction of its decays as functions of its unknown mass alone. Couplings of the Higgs boson to other elementary particles are directly related to its role in generating their masses. The Higgs boson is produced in interactions involving heavy particles, and its decays are in general into the heaviest particles that are kinematically possible. If the Higgs boson is heavier than twice the mass of the $W$ boson, it decays primarily into ${W}^{+}\phantom{\rule{0}{0ex}}{W}^{-}$ and $Z\phantom{\rule{0}{0ex}}Z$. If it is lighter, its decays to pairs of heavy fermions (a $b$ quark and its antiparticle the $\overline{b}$ quark, or a tau lepton ${\tau }^{-}$ and its antiparticle the ${\tau }^{+}$) become dominant [10]. 
## Indirect limits on the Higgs boson mass The value of the Higgs boson mass affects the standard model predictions for electroweak quantities, such as the mass and width of the $W$ boson and the width and other parameters of the $Z$ boson, measured in electron-positron colliders, hadron colliders, and elsewhere through higher-order corrections to the basic calculations, which are dependent logarithmically on the Higgs mass. (Such corrections are dependent on the square of the top quark mass and accurately predicted it before the top quark was discovered.) These electroweak quantities have been measured extremely precisely, for example at LEP, and global fits to the data with the standard model Higgs mass as a free parameter provide limits on the Higgs boson mass, as shown in Fig. 2 [11]. The quantity ${\chi }^{2}$ is a statistical measure of the agreement of the fit with the data, with the minimum value, ${\chi }_{\text{min}}^{2}$ , at the most probable value of the Higgs mass. The global electroweak fit yields $\mathrm{\Delta }\phantom{\rule{0}{0ex}}{\chi }^{2}={\chi }^{2}-{\chi }_{\text{min}}^{2}=1$ limits, corresponding to a $68%$ confidence level or one standard deviation errors on the Higgs mass of ${m}_{H}={87}_{-26}^{+35}\phantom{\rule{0.333em}{0ex}}\text{GeV}$ (1) or a one-sided $95%$ confidence-level upper limit, including the band of theoretical uncertainty, on ${m}_{H}$ of $157\phantom{\rule{0.333em}{0ex}}\text{GeV}$. Precision electroweak fits thus prefer a relatively low-mass Higgs boson. ## The Higgs boson in supersymmetric models Supersymmetric extensions of the standard model [12, 13] are particularly interesting on theoretical grounds. In supersymmetric theories there is a link between fermions and bosons. Every particle has a supersymmetric partner with the same properties except that fermions have supersymmetric partners that are bosons, and bosons have supersymmetric partners that are fermions. For example, the supersymmetric partner of the electron, a spin-$1/2$ fermion, is the spin-$0$ scalar electron, or selectron; the supersymmetric partner of the spin-$1/2$ top quark is the spin-$0$ stop quark; and the supersymmetric partner of the spin-$1$ gluon is the spin-$1/2$ gluino. Since such supersymmetric partners of the known particles have not been discovered, supersymmetry is broken, that is, the partners have larger masses than the known particles. Supersymmetric theories provide a consistent framework for the unification of the interactions at a high-energy scale and for the stability of the electroweak scale. Supersymmetry appears to be essential for string theory. In many supersymmetric models, the Lightest Supersymmetric Particle (LSP) is stable (it does not decay) and is a candidate for dark matter [14]. The measurement of the muon anomalous magnetic moment is significantly inconsistent with the standard model [15] and may be accounted for by supersymmetry. A general property of any supersymmetric extension of the standard model is the presence of at least two Higgs doublets, but there can be more. The simplest supersymmetric model is the minimal supersymmetric extension of the standard model (MSSM) [13]. In the MSSM there are two Higgs doublets, resulting in five physical Higgs bosons: three neutral ($h$, $H$, and $A$) and two charged ( ${H}^{±}$ ). Masses and couplings in the MSSM depend on standard model parameters plus at least two other parameters, tan $\beta$ and a mass parameter (usually ${m}_{A}$). 
The mass of the lightest Higgs boson, ${m}_{h}$, is less than the mass of the $Z$, ${m}_{Z}$, at the basic level and thus it was thought that it could have been found at LEP. However, ${m}_{h}$ is increased significantly by corrections due primarily to the effects of the top quark and its supersymmetric partner, the spin-$0$ stop quark. Calculations within the MSSM and other supersymmetry models obtain an upper limit for ${m}_{h}$ of typically about $130\phantom{\rule{0.333em}{0ex}}\text{GeV}$ [13]. Thus the lightest Higgs boson must be relatively light, as favored by the precision electroweak data. In fact, fits to the precision electroweak data within the constrained minimal supersymmetric standard model (CMSSM) give [16] ${m}_{h}={110}_{-10}^{+8}\left(\text{exp}\right)±3\left(\text{theor}\right)\text{GeV}.$ (2) In the decoupling limit, ${m}_{A}^{2}\gg {m}_{Z}^{2}$, the lightest neutral Higgs boson $h$ couples in much the same way as the standard model Higgs. The $H$, $A$, and ${H}^{±}$ are much heavier and nearly degenerate. ## Searches at electron-positron colliders Direct searches for the standard model Higgs boson were carried out at the LEP electron-positron collider, running at center-of-mass energies of $91$ to $209\phantom{\rule{0.333em}{0ex}}\text{GeV}$, up until the end of 2000, the final year of the LEP program. The four LEP experiments were ALEPH [17], DELPHI [18], L3 [19], and OPAL [20]. In electron-positron colliders the Higgs boson would be produced in association with a $Z$: ${e}^{+}\phantom{\rule{0}{0ex}}{e}^{-}\to H\phantom{\rule{0}{0ex}}Z$ (that is, a high-energy collision between an electron and a positron would create a Higgs plus a $Z$ boson). Since electrons and positrons are fundamental particles, the collision makes use of their full energy. The Higgs and $Z$ bosons were searched for by reconstructing them from their decay products. At LEP energies, the kinematic limit for the mass of the Higgs boson is about $115\phantom{\rule{0.333em}{0ex}}\text{GeV}$, so the dominant decay of the Higgs would be into a pair of $b$ quarks, with smaller fractions of tau lepton pairs, $W$ pairs (one $W$ is virtual, that is, its mass is not equal to the rest mass of the $W$ boson), or gluon pairs. An important constraint was the reconstruction of the mass of the accompanying $Z$ through its decay products, and identification of $b$ quarks was also used. The event configurations searched were the four-jet final state ($H\to b\phantom{\rule{0}{0ex}}\overline{b}$, $Z\to q\phantom{\rule{0}{0ex}}\overline{q}$), the missing energy final state ($H\to b\phantom{\rule{0}{0ex}}\overline{b}$, $Z\to v\phantom{\rule{0}{0ex}}\overline{v}$), the leptonic final state ($H\to b\phantom{\rule{0}{0ex}}\overline{b}$, $Z\to {e}^{+}\phantom{\rule{0}{0ex}}{e}^{-}$ or $H\to b\phantom{\rule{0}{0ex}}\overline{b}$, $Z\to {\mu }^{+}\phantom{\rule{0}{0ex}}{\mu }^{-}$), and the tau lepton final state ($H\to b\phantom{\rule{0}{0ex}}\overline{b}$, $Z\to {\tau }^{+}\phantom{\rule{0}{0ex}}{\tau }^{-}$ or $H\to {\tau }^{+}\phantom{\rule{0}{0ex}}{\tau }^{-}$, $Z\to q\phantom{\rule{0}{0ex}}\overline{q}$). Reconstructing these decays requires an array of methods that have been designed into the experiments and used for other physics as well. Charged particles leave trails in tracking devices, such as drift chambers or silicon detectors, and their momenta can be measured from how much they bend in a magnetic field. Neutral particles such as photons leave energy deposits in the detectors. 
Electrons and muons are identified through their interactions with the material of the detector. Neutrinos do not interact with the amount of material in the detector and so are identified by missing energy in the reconstruction of the event, since the total energy is known from the center-of-mass energy of the electron-positron collision. Quarks are reconstructed from the jets of particles they produce, charged or neutral, since quarks cannot be directly observed. Jets from $b$ quarks can be distinguished by the rather long lifetimes of the hadrons containing the $b$ quarks. These hadrons decay at some distance from the overall event production point along the beams, and this displacement can be measured in precision tracking devices. When searching for the Higgs boson, physicists look for events that meet the criteria expected for the Higgs. However, there are background events, which are those from other physics processes that mimic the characteristics of the Higgs signal. There are significant numbers of background events due to $W$ pairs and $Z$ pairs, which appear as four-fermion events due to their decays, and quark-antiquark events. A signal due to Higgs boson production would appear as an excess number of events compared with these known standard model backgrounds. No statistically significant evidence was found for the Higgs boson, and a combination of the data of the four experiments gave a lower limit of ${m}_{H}>114.4\phantom{\rule{0.333em}{0ex}}\text{GeV}$ at the $95%$ confidence level [21]. However, in the last year of LEP running at center-of-mass energies above $206\phantom{\rule{0.333em}{0ex}}\text{GeV}$, some excess events were seen that were consistent with background plus a Higgs boson of mass about $115\phantom{\rule{0.333em}{0ex}}\text{GeV}$ [22]. The experiments requested an extension of the LEP program for six months, but the request was denied because it would delay the construction of the LHC, which was built in the same tunnel as LEP. The four LEP experiments also searched for neutral Higgs bosons as predicted by the MSSM. The numbers of events produced and Higgs decays in the MSSM are determined by the parameters of the particular MSSM model, so the interpretations of search results depend on these parameters. The lightest Higgs boson $h$ typically decays into a pair of $b$ quarks or a pair of tau leptons, and the main production mechanisms are ${e}^{+}\phantom{\rule{0}{0ex}}{e}^{-}\to h\phantom{\rule{0}{0ex}}Z$ and ${e}^{+}\phantom{\rule{0}{0ex}}{e}^{-}\to h\phantom{\rule{0}{0ex}}A$, so searches for the standard model Higgs boson can be interpreted within the MSSM. The searches of the four LEP experiments were combined to give limits on ${m}_{h}$ and ${m}_{A}$ of about $93\phantom{\rule{0.333em}{0ex}}\text{GeV}$ at $95%$ confidence level over most of the MSSM parameter space [23]. The limit on ${m}_{h}$ gradually approaches that of the standard model Higgs in the decoupling limit. In summary, no statistically significant evidence for a Higgs boson was obtained at LEP. Plans were for Higgs boson searches to take place at the Superconducting Super Collider (SSC), a $40$-$\text{TeV}$ (one $\text{TeV}$ equals $1000\phantom{\rule{0.333em}{0ex}}\text{GeV}$) proton-proton collider that had started construction in Texas but was canceled in 1993. However, after LEP the search for the Higgs boson then moved to Fermilab as an upgraded collider and experiments began data taking. 
At proton-proton or proton-antiproton colliders, unlike at electron-positron colliders, the colliding particles are not fundamental. Protons (antiprotons) are made up of quarks (antiquarks) and gluons, so the collisions involve quarks with quarks (antiquarks) or gluons, or gluons with gluons. The energies of the quarks or gluons within the proton or antiproton vary as steeply falling functions of the fraction of the total energy of the proton or antiproton. Therefore the effective center-of-mass energy of the collision is in general much less than that of the colliding protons and antiprotons and varies over a wide range. The energy transverse to the beam direction roughly balances since the quarks and gluons travel in the same direction as the proton or antiproton. To date, searches for the standard model Higgs boson have been performed at the Fermilab Tevatron proton-antiproton collider at a center-of-mass energy of $1.96\phantom{\rule{0.333em}{0ex}}\text{TeV}$ in the CDF [24] and D0 [25] experiments. The dominant production mechanism for Higgs bosons at the Tevatron would be through the interaction of a gluon in a proton with a gluon in an antiproton (gluon-gluon fusion). The Higgs can also be produced in association with a $W$ or $Z$ boson through the interaction of a quark in a proton with an antiquark in an antiproton (similar to the production of $H\phantom{\rule{0}{0ex}}Z$ in an electron-positron collider). With the data accumulated so far, the Tevatron experiments are sensitive only to high-mass Higgs bosons that decay into $W$ pairs. Searches for low-mass Higgs bosons are more difficult and require more data—there are very large backgrounds that mask evidence for a low-mass Higgs decaying into a pair of $b$ quarks or a pair of tau leptons. In order to control these backgrounds, researchers look for the low-mass Higgs in association with a $W$ or $Z$ boson, which reduces the number of possible Higgs events. In addition, the low-mass Higgs boson must be identified by reconstructing it from a pair of $b$-quark jets. There are still large numbers of background events, even with the requirement of identifying an accompanying $W$ or $Z$, and the mass peak from the pair of $b$ quarks must be well defined in order to observe it above the background. To search for a Higgs that decays into a pair of $W$’s, the subsequent decays of the $W$ into a lepton ($e$, $\mu$, or $\tau$) and a neutrino are used. The signature for the Higgs is therefore events with two energetic electrons with opposite charge, or two muons with opposite charge, or an electron and a muon with opposite charge, plus large missing transverse energy due to the two neutrinos, which are not detected. (The tau contributes through its decay to an electron or a muon.) The main background is due to electromagnetic production of a pair of oppositely charged leptons when a quark-antiquark annihilation occurs (the Drell-Yan process), which is suppressed by the requirement of large missing transverse energy. Other backgrounds are due to $W\phantom{\rule{0}{0ex}}W$, $Z\phantom{\rule{0}{0ex}}Z$, $W\phantom{\rule{0}{0ex}}Z$, and top quark pair production with subsequent decays into leptons. Both experiments compare the numbers of events observed with the numbers of background events expected, plus a possible signal due to a standard model Higgs boson of assumed mass produced at the predicted rate in the standard model. 
They use statistical methods to determine upper limits (at the $95%$ confidence level) on the possible production rate for the Higgs boson compared with the standard model prediction. Neither experiment by itself can set a $95%$ confidence level upper limit below the standard model prediction (exclusion), but the combined results of the two experiments exclude a standard model Higgs boson of mass between $160$ and $170\phantom{\rule{0.333em}{0ex}}\text{GeV}$ [26]. Discovery of a Higgs boson with mass in the region $115$$120\phantom{\rule{0.333em}{0ex}}\text{GeV}$ by the Tevatron is unlikely [27]. The Large Hadron Collider (LHC) at CERN, which will begin data taking with proton-proton collisions in early 2010 and will ultimately have a center-of-mass energy of $14\phantom{\rule{0.333em}{0ex}}\text{TeV}$, will be sensitive to the entire mass range of the standard model Higgs boson. Searches for the Higgs boson will then begin in the ATLAS [28] and Compact Muon Solenoid (CMS) [29] detectors. The most important production mechanisms for the Higgs at the LHC are similar to those at the Tevatron. The Higgs decay into $W$ or $Z$ pairs will be used for the high-mass region. For a low-mass Higgs boson, decays into pairs of $b$ quarks or $\tau$ leptons dominate; however, backgrounds from ordinary quarks and gluons are expected to be too large at the LHC to make these decay modes possible for a Higgs search. Therefore the search for the low-mass Higgs will rely on the decay into two photons, with a decay fraction of only about $0.002$. There is still considerable background in the two-photon channel due to real photon pairs produced in standard model processes and jets misidentified as photons, so the Higgs will be seen as a small peak on top of a large background [30, 31]. Accurate reconstruction of the photons in the detector is needed for the best definition of the peak. Finding a relatively light Higgs boson (which seems likely judging from the fits to precision electroweak data) at the LHC will be difficult and will require—in the language of high-energy physicists—several ${\text{fb}}^{-1}$ of integrated luminosity [32], as shown in Fig. 3. In this context, luminosity is a measure of the collision rate of the two beams, and integrated luminosity of the number of collisions. One ${\text{fb}}^{-1}$ of integrated luminosity corresponds to the production of one event for a process with a theoretical cross section of $1\phantom{\rule{0.333em}{0ex}}\text{fb}$ and is thus a measure of the amount of data that needs to be acquired. In practical terms, it means two to three years of data taking after the LHC begins operation at full energy will be required for observation of the Higgs boson. It may be that a Higgs boson of mass $115\phantom{\rule{0.333em}{0ex}}\text{GeV}$, just above the LEP limit, will be found. If this Higgs is the lightest MSSM Higgs boson, then it is possible that supersymmetry will be discovered first since the supersymmetric partners of quarks and gluons, the squarks and gluinos, could be produced copiously. This will be truly exciting! ## Acknowledgments The author acknowledges support by the Department of Energy through grants DE-FG02-07ER41465 and DE-FG02-07ER41487 and by the National Science Foundation through grants PHY-0630052 and PHY-0612805. The author would also like to acknowledge fruitful discussions with students and colleagues, especially Nina Byers, Ernest Ma, Harry Tom, and Gillian Wilson. ### References 1. C. Amsler et al. (Particle Data Group), Phys. Lett. 
B 667, 1 (2008); and 2009 partial update for the 2010 edition, http://pdg.bl.gov. 2. G. Hanson et al., Phys. Rev. Lett. 35, 1609 (1975). 3. Contemporary Physics Education Project, http://cpepweb.org. 4. S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967); A. Salam, Elementary Particle Theory, edited by N. Svartholm (Almquist and Wiksells, Stockholm, 1968), p.367. 5. P. W. Higgs, Phys. Lett. 12, 132 (1964); F. Englert and R. Brout, Phys. Rev. Lett. 13, 321 (1964); P. W. Higgs, 13, 508 (1964); Phys. Rev. 145, 1156 (1966). 6. For reviews, see J. F. Gunion, H. E. Haber, G. L. Kane, and S. Dawson, The Higgs Hunter Guide (Addison-Wesley, Reading, Massachusetts, 1990)[Amazon][WorldCat]; J. Ellis, G. Ridolfi, and F. Zwirner, C. R. Physique 8, 999 (2007); A. Djouadi, Phys. Rep. 457, 1 (2008). 7. CERN official web site http://public.web.cern.ch/public/; John Adams Memorial Lecture, CERN, November 26, 1990, http://sl-div.web.cern.ch/sl-div/history/lep_doc.html. 8. Fermilab official web site http://www.fnal.gov; R. R. Wilson, “The Tevatron,” FERMILAB-TM-0763 (1978). 9. LHC official web site http://public.web.cern.ch/public/en/LHC/LHC-en.html. 10. A. Djouadi, J. Kalinowski, and M. Spira, Comput. Phys. Commun. 108, 56 (1998). 11. The ALEPH, CDF, D0, DELPHI, L3, OPAL, SLD Collaborations, the LEP Electroweak Working Group, the Tevatron Electroweak Working Group, the SLD Electroweak, and Heavy Flavour Groups, arXiv:0811.4682v1 (hep-ex) (2008); updates for Summer 2009 from http://lepewwg.web.cern.ch/LEPEWWG/. 12. J. Wess and B. Zumino, Nucl. Phys. B70, 39 (1974); Phys. Lett. B 49, 52 (1974); P. Fayet, 69, 489 (1977); 84, 421 (1979); 86, 272 (1979). 13. For reviews with references to the original literature, see H. E. Haber, and G. L. Kane, Phys. Rep. 117, 75 (1985); H. E. Haber, Phys. Rev. D 66, 010001 (2002); S. P. Martin, arXiv:hep-ph/9709356v5 (2008); A. Djouadi, Phys. Rep. 459, 1 (2008); Eur. Phys. J. C 59, 389 (2009). 14. J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, and M. Srednicki, Phys. Lett. B 127, 233 (1983); J. R. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive, and M. Srednicki, Nucl. Phys. B 238, 453 (1984); H. Goldberg, Phys. Rev. Lett. 50, 1419 (1983). 15. G. W. Bennett et al., Phys. Rev. D 73, 072003 (2006). 16. O. Buchmueller et al., Phys. Lett. B 657, 87 (2007). 17. ALEPH Collaboration, Nucl. Instrum. Methods A 294, 121 (1990); 360, 481 (1995); D. Creanza et al., 409, 157 (1998). 18. P. Aarnio et al. (DELPHI Collaboration), Nucl. Instrum. Methods A 303, 233 (1991); P. Abreu et al. (DELPHI Collaboration), 378, 57 (1996); P. Chochula et al.(DELPHI Silicon Tracker Group), 412, 304 (1998). 19. B. Adeva et al.(L3 Collaboration), Nucl. Instrum. Methods A 289, 35 (1990); O. Adriani et al. (L3 Collaboration), Phys. Rep. 236, 1 (1993); J. A. Bakken et al., Nucl. Instrum. Methods A 275, 81 (1989); O. Adriani et al., 302, 53 (1991); B. Adeva et al., 323, 109 (1992); K. Deiters et al., 323, 162 (1992); M. Chemarin et al., 349, 345 (1994); M. Acciarri et al., 351, 300 (1994); G. Basti et al., 374, 293 (1996); A. Adam et al., 383, 342 (1996). 20. K. Ahmet et al. (OPAL Collaboration), Nucl. Instrum. Methods A 305, 275 (1991); S. Anderson et al., 403, 326 (1998); B. E. Anderson et al., IEEE Trans. Nucl. Science 41, 845 (1994); G. Aguillion et al., Nucl. Instrum. Methods A 417, 266 (1998). 21. R. Barate et al. (ALEPH, DELPHI, L3, OPAL Collaborations, and The LEP Working Group for Higgs Boson Searches), Phys. Lett. B 565, 61 (2003). 22. R. Barate et al. (ALEPH Collaboration), Phys. Lett. B 495, 1 (2000); M. 
Acciarri et al. (L3 Collaboration), 495, 18 (2000); G. Abbiendi et al. (OPAL Collaboration), 499, 38 (2001); P. Abreu et al. (DELPHI Collaboration), 499, 23 (2001). 23. The ALEPH, DELPHI, L3, OPAL Collaborations, and The LEP Working Group for Higgs Boson Searches, Eur. Phys. J. C 47, 547 (2006). 24. D. Acosta et al. (CDF Collaboration), Phys. Rev. D 71, 032001 (2005); R. Blair et al., “The CDF II Detector Technical Design Report,” Report No. FERMILAB-PUB-96 390-E. 25. D0 Collaboration, Nucl. Instrum. Methods A 565, 463 (2006). 26. The Tevatron New Phenomena and Higgs Working Group for the CDF and D0 Collaborations, FERMILAB-PUB-09-060-E, arXiv:0903.4001v1 (hep-ex) (2009). 27. J. Conway, “The Search for the Higgs Boson,” plenary presentation at the 2009 Europhysics Conference on High Energy Physics, Kraków, Poland, July 2009, http://indico.ifj.edu.pl/MaKaC/contributionDisplay.py?contribId=937&sessionId=31&confId=11. 28. G. Aad et al., JINST 3, S08003 (2008). 29. R. Adolphi et al., JINST 3, S08004 (2008). 30. ATLAS Physics Performance Technical Design Report, CERN/LHCC/99-15; S. Asai et al., Eur. Phys. J. C 32, Suppl. 2, 19 (2004); arXiv:hep-ph/0402254. 31. CMS Physics, Technical Design Report, vol. II: Physics Performance, CERN/LHCC 2006-021, CMS TDR 8.2. 32. J.-J. Blaising et al. “Potential LHC Contributions to Europe’s Future Strategy at the High-Energy Frontier,” (2006), http://council-strategygroup.web.cern.ch/council-strategygroup/BB2/contributions/Blaising2.pdf.; F. Gianotti, “ATLAS: preparing for the first LHC data,” plenary presentation at the 2009 Europhysics Conference on High Energy Physics, Kraków, Poland, July 2009, http://indico.ifj.edu.pl/MaKaC/contributionDisplay.py?contribId=940&sessionId=32&confId=11. ### About the Author: Gail G. Hanson Gail G. Hanson received her B.S. degree in physics in 1968 from the Massachusetts Institute of Technology. She began work in experimental particle physics as an undergraduate. She received her Ph.D. degree in 1973 also from the Massachusetts Institute of Technology for research on the electron-positron collider at the Cambridge Electron Accelerator at Harvard University. She did postdoctoral research at the Stanford Linear Accelerator Center (SLAC), working on the SPEAR electron-positron storage ring, where she contributed to the discoveries of the J/ψ particle and the τ lepton and independently discovered quark jets in 1975, for which she was awarded the American Physical Society’s W. K. H. Panofsky Prize. Hanson continued in staff physicist positions at SLAC until 1989, when she moved to Indiana University to become a Professor of Physics. In 1997, she became a Distinguished Professor at Indiana University. She continued research on electron-positron physics on the PEP storage ring and the SLAC Linear Collider at SLAC and on the OPAL experiment at the LEP electron-positron collider at CERN, where she served as Physics Coordinator and contributed to b-quark hadron discoveries and searches for new particles. In 2002, she moved to the University of California, Riverside, as a Distinguished Professor of Physics. Hanson now carries out research on the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN and on development of a future μ+μ- collider. Hanson was elected a Fellow of both the American Physical Society and the American Association for the Advancement of Science.
# How to use K-fold Cross Validation with Keras?

When you train supervised machine learning models, you'll likely try multiple models in order to find out how good they are. Part of this process is likely going to be the question: how can I compare models objectively? Training and testing datasets have been invented for this purpose. By splitting a small part off your full dataset, you create a dataset which (1) was not yet seen by the model, and which (2) you assume to approximate the distribution of the population, i.e. the real-world scenario you wish to generate a predictive model for. Now, when generating such a split, you should ensure that your splits are relatively unbiased. In this blog post, we'll cover one technique for doing so: K-fold Cross Validation. Firstly, we'll show you how such splits can be made naïvely – i.e., by a simple hold-out split strategy. Then, we introduce K-fold Cross Validation, show you how it works, and why it can produce better results. This is followed by an example, created with Keras and Scikit-learn's KFold functions. Are you ready? Let's go! 😎

Update 11/06/2020: improved K-fold cross validation code based on reader comments.

## Evaluating and selecting models with K-fold Cross Validation

Training a supervised machine learning model involves changing model weights using a training set. Later, once training has finished, the trained model is tested with new data – the testing set – in order to find out how well it performs in real life. When you are satisfied with the performance of the model, you train it again with the entire dataset, in order to finalize it and use it in production (Bogdanovist, n.d.).

However, when checking how well the model performs, the question of how to split the dataset is one that emerges pretty rapidly. K-fold Cross Validation, the topic of today's blog post, is one possible approach, which we'll discuss next. However, let's first take a look at the concept of generating train/test splits in the first place. Why do you need them? Why can't you simply train the model with all your data and then compare the results with other models? We'll answer these questions first. Then, we take a look at the efficient but naïve simple hold-out splits. This way, when we discuss K-fold Cross Validation, you'll understand more easily why it can be more useful when comparing performance between models. Let's go!

### Why use train/test splits? – On finding a model that works for you

Before we dive into the approaches for generating train/test splits, I think it's important to take a look at why we should make such splits in the first place when evaluating model performance. For this reason, we'll invent a model evaluation scenario first.

#### Generating many predictions

Say that we're training a few models to classify images of digits. We train a Support Vector Machine (SVM), a Convolutional Neural Network (CNN) and a Densely-connected Neural Network (DNN) and, of course, hope that each of them predicts "5" in this scenario. Our goal here is to use the model that performs best in production, a.k.a. "really using it" 🙂 The central question then becomes: how well does each model perform? Based on their performance, we can select a model that can be used in real life. However, if we wish to determine model performance, we should generate a whole bunch of predictions – preferably, thousands or even more – so that we can compute metrics like accuracy, or loss. Great!
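As a tiny, purely illustrative aside (not part of the original post; the arrays below are made up), this is all that "computing a metric from a batch of predictions" amounts to for accuracy:

```python
import numpy as np

# Hypothetical ground-truth digits and the predictions of one model for 10 samples
y_true = np.array([5, 0, 4, 1, 9, 2, 1, 3, 1, 4])
y_pred = np.array([5, 0, 4, 1, 8, 2, 1, 3, 7, 4])

# Accuracy = fraction of predictions that match the ground truth
accuracy = np.mean(y_pred == y_true)
print(f'Accuracy over {len(y_true)} predictions: {accuracy:.2f}')  # 0.80
```

With only ten samples such an estimate is very noisy, which is exactly why we want thousands of predictions – and why how we pick those samples matters.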
#### Don’t be the student who checks his own homework Now, we’ll get to the core of our point – i.e., why we need to generate splits between training and testing data when evaluating machine learning models. We’ll require an understanding of the high-level supervised machine learning process for this purpose: It can be read as follows: • In the first step, all the training samples (in blue on the left) are fed forward to the machine learning model, which generates predictions (blue on the right). • In the second step, the predictions are compared with the “ground truth” (the real targets) – which results in the computation of a loss value. • The model can subsequently be optimized by steering the model away from the error, by changing its weights, in the backwards pass of the gradient with respect to (finally) the loss value. • The process then starts again. Presumably, the model performs better this time. As you can imagine, the model will improve based on the loss generated by the data. This data is a sample, which means that there is always a difference between the sample distribution and the population distribution. In other words, there is always a difference between what your data tells that the patterns are and what the patterns are in the real world. This difference can be really small, but it’s there. Now, if you let the model train for long enough, it will adapt substantially to the dataset. This also means that the impact of the difference will get larger and larger, relative to the patterns of the real-world scenario. If you’ve trained it for too long – a problem called overfitting – the difference may be the cause that it won’t work anymore when real world data is fed to it. Generating a split between training data and testing data can help you solve this issue. By training your model using the training data, you can let it train for as long as you want. Why? Simple: you have the testing data to evaluate model performance afterwards, using data that is (1) presumably representative for the real world and (2) unseen yet. If the model is highly overfit, this will be clear, because it will perform very poorly during the evaluation step with the testing data. Now, let’s take a look at how we can do this. We’ll s tart with simple hold-out splits 🙂 ### A naïve approach: simple hold-out split Say that you’ve got a dataset of 10.000 samples. It hasn’t been split into a training and a testing set yet. Generally speaking, a 80/20 split is acceptable. That is, 80% of your data – 8.000 samples in our case – will be used for training purposes, while 20% – 2.000 – will be used for testing. We can thus simply draw a boundary at 8.000 samples, like this: We call this simple hold-out split, as we simply “hold out” the last 2.000 samples (Chollet, 2017). It can be a highly effective approach. What’s more, it’s also very inexpensive in terms of the computational power you need. However, it’s also a very naïve approach, as you’ll have to keep these edge cases in mind all the time (Chollet, 2017): 1. Data representativeness: all datasets, which are essentially samples, must represent the patterns in the population as much as possible. This becomes especially important when you generate samples from a sample (i.e., from your full dataset). For example, if the first part of your dataset has pictures of ice cream, while the latter one only represents espressos, trouble is guaranteed when you generate the split as displayed above. Random shuffling may help you solve these issues. 2. 
2. The arrow of time: if you have a time series dataset, your dataset is likely ordered chronologically. If you'd shuffle randomly, and then perform simple hold-out validation, you'd effectively "[predict] the future given the past" (Chollet, 2017). Such temporal leaks don't benefit model performance.
3. Data redundancy: if some samples appear more than once, a simple hold-out split with random shuffling may introduce redundancy between training and testing datasets. That is, identical samples belong to both datasets. This is problematic too, as data used for training thus leaks into the dataset for testing implicitly.

Now, as we can see, while a simple hold-out split based approach can be effective and will be efficient in terms of computational resources, it also requires you to monitor for these edge cases continuously.

### K-fold Cross Validation

A more expensive and less naïve approach would be to perform K-fold Cross Validation. Here, you set some value for $$K$$ and (hey, what's in a name 😋) the dataset is split into $$K$$ partitions of equal size. $$K - 1$$ partitions are used for training, while one is used for testing. This process is repeated $$K$$ times, with a different partition used for testing each time. For example, this would be the scenario for our dataset with $$K = 5$$ (i.e., once again the 80/20 split, but then 5 times!). For each split, the same model is trained, and performance is displayed per fold. For evaluation purposes, you can obviously also average it across all folds. While this produces better estimates, K-fold Cross Validation also increases training cost: in the $$K = 5$$ scenario above, the model must be trained 5 times.

Let's now extend our viewpoint with a few variations of K-fold Cross Validation 🙂

If you have no computational limitations whatsoever, you might wish to try a special case of K-fold Cross Validation, called Leave One Out Cross Validation (or LOOCV, Khandelwal 2019). LOOCV means $$K = N$$, where $$N$$ is the number of samples in your dataset. As the number of models trained is maximized, the precision of the model performance average is maximized too, but so is the cost of training, due to the sheer number of models that must be trained.

If you have a classification problem, you might also wish to take a look at Stratified Cross Validation (Khandelwal, 2019). It extends K-fold Cross Validation by ensuring an approximately equal distribution of the target classes over the splits, so that every fold is representative of the class balance – which is especially useful for imbalanced (e.g. binary) classification problems. Scikit-learn's StratifiedKFold supports both binary and multiclass targets.

Finally, if you have a time series dataset, you might wish to use Time-series Cross Validation (Khandelwal, 2019); see that article for how it works.

## Creating a Keras model with K-fold Cross Validation

Now that we understand how K-fold Cross Validation works, it's time to code an example with the Keras deep learning framework 🙂 Coding it will be a multi-stage process:

• Firstly, we'll take a look at what we need in order to run our model successfully.
• Then, we take a look at today's model.
• Subsequently, we add K-fold Cross Validation, train the model instances, and average performance.
• Finally, we output the performance metrics on screen.
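Before moving on to the Keras example, here is a small self-contained sketch (not part of the original tutorial) that contrasts the two strategies discussed above, using plain NumPy and Scikit-learn on an illustrative dummy dataset of 10,000 sample indices:

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy "dataset": 10,000 samples represented only by their indices
indices = np.arange(10_000)

# Simple hold-out split: hold out the last 20% for testing
split_point = int(0.8 * len(indices))
train_idx, test_idx = indices[:split_point], indices[split_point:]
print(f'Hold-out: {len(train_idx)} train / {len(test_idx)} test samples')

# K-fold Cross Validation with K = 5: every sample is used for testing exactly once
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold_no, (train_idx, test_idx) in enumerate(kfold.split(indices), start=1):
    print(f'Fold {fold_no}: {len(train_idx)} train / {len(test_idx)} test samples')
```

Note how, with $$K = 5$$, every fold reproduces the 80/20 ratio, but a different 20% of the samples acts as test data each time.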
### What we’ll need to run our model For running the model, we’ll need to install a set of software dependencies. For today’s blog post, they are as follows: • TensorFlow 2.0+, which includes the Keras deep learning framework; • The most recent version of scikit-learn; • Numpy. That’s it, already! 🙂 ### Our model: a CIFAR-10 CNN classifier Now, today’s model. We’ll be using a convolutional neural network that can be used to classify CIFAR-10 images into a set of 10 classes. The images are varied, as you can see here: Now, my goal is not to replicate the process of creating the model here, as we already did that in our blog post “How to build a ConvNet for CIFAR-10 and CIFAR-100 classification with Keras?”. Take a look at that post if you wish to understand the steps that lead to the model below. (Do note that this is a small adaptation, where we removed the third convolutional block for reasons of speed.) Here is the full model code of the original CIFAR-10 CNN classifier, which we can use when adding K-fold Cross Validation: from tensorflow.keras.datasets import cifar10 from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D from tensorflow.keras.losses import sparse_categorical_crossentropy from tensorflow.keras.optimizers import Adam import matplotlib.pyplot as plt # Model configuration batch_size = 50 img_width, img_height, img_num_channels = 32, 32, 3 loss_function = sparse_categorical_crossentropy no_classes = 100 no_epochs = 100 optimizer = Adam() verbosity = 1 # Load CIFAR-10 data (input_train, target_train), (input_test, target_test) = cifar10.load_data() # Determine shape of the data input_shape = (img_width, img_height, img_num_channels) # Parse numbers as floats input_train = input_train.astype('float32') input_test = input_test.astype('float32') # Normalize data input_train = input_train / 255 input_test = input_test / 255 # Create the model model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(128, activation='relu')) model.add(Dense(no_classes, activation='softmax')) # Compile the model model.compile(loss=loss_function, optimizer=optimizer, metrics=['accuracy']) # Fit data to model history = model.fit(input_train, target_train, batch_size=batch_size, epochs=no_epochs, verbose=verbosity) # Generate generalization metrics score = model.evaluate(input_test, target_test, verbose=0) print(f'Test loss: {score[0]} / Test accuracy: {score[1]}') # Visualize history # Plot history: Loss plt.plot(history.history['val_loss']) plt.title('Validation loss history') plt.ylabel('Loss value') plt.xlabel('No. epoch') plt.show() # Plot history: Accuracy plt.plot(history.history['val_accuracy']) plt.title('Validation accuracy history') plt.ylabel('Accuracy value (%)') plt.xlabel('No. epoch') plt.show() ### Removing obsolete code Now, let’s slightly adapt the model in order to add K-fold Cross Validation. Firstly, we’ll strip off some code that we no longer need: import matplotlib.pyplot as plt We will no longer generate the visualizations, and besides the import we thus also remove the part generating them: # Visualize history # Plot history: Loss plt.plot(history.history['val_loss']) plt.title('Validation loss history') plt.ylabel('Loss value') plt.xlabel('No. 
epoch') plt.show() # Plot history: Accuracy plt.plot(history.history['val_accuracy']) plt.title('Validation accuracy history') plt.ylabel('Accuracy value (%)') plt.xlabel('No. epoch') plt.show() ### Adding K-fold Cross Validation Secondly, let’s add the KFold code from scikit-learn to the imports – as well as numpy: from sklearn.model_selection import KFold import numpy as np Which… Provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds (without shuffling by default). Scikit-learn (n.d.) sklearn.model_selection.KFold Precisely what we want! We also add a new configuration value: num_folds = 10 This will ensure that our $$K = 10$$. What’s more, directly after the “normalize data” step, we add two empty lists for storing the results of cross validation: # Normalize data input_train = input_train / 255 input_test = input_test / 255 # Define per-fold score containers <-- these are new acc_per_fold = [] loss_per_fold = [] This is followed by a concat of our ‘training’ and ‘testing’ datasets – remember that K-fold Cross Validation makes the split! # Merge inputs and targets inputs = np.concatenate((input_train, input_test), axis=0) targets = np.concatenate((target_train, target_test), axis=0) Based on this prior work, we can add the code for K-fold Cross Validation: fold_no = 1 for train, test in kfold.split(input_train, target_train): Ensure that all the model related steps are now wrapped inside the for loop. Also make sure to add a couple of extra print statements and to replace the inputs and targets to model.fit: # K-fold Cross Validation model evaluation fold_no = 1 for train, test in kfold.split(inputs, targets): # Define the model architecture model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(128, activation='relu')) model.add(Dense(no_classes, activation='softmax')) # Compile the model model.compile(loss=loss_function, optimizer=optimizer, metrics=['accuracy']) # Generate a print print('------------------------------------------------------------------------') print(f'Training for fold {fold_no} ...') # Fit data to model history = model.fit(inputs[train], targets[train], batch_size=batch_size, epochs=no_epochs, verbose=verbosity) We next replace the “test loss” print with one related to what we’re doing. Also, we increase the fold_no: # Generate generalization metrics scores = model.evaluate(inputs[test], targets[test], verbose=0) print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]*100}%') acc_per_fold.append(scores[1] * 100) loss_per_fold.append(scores[0]) # Increase fold number fold_no = fold_no + 1 Here, we simply print a “score for fold X” – and add the accuracy and sparse categorical crossentropy loss values to the lists. Now, why do we do that? Simple: at the end, we provide an overview of all scores and the averages. This allows us to easily compare the model with others, as we can simply compare these outputs. 
Add this code at the end of the model, but make sure that it is not wrapped inside the for loop: # == Provide average scores == print('------------------------------------------------------------------------') print('Score per fold') for i in range(0, len(acc_per_fold)): print('------------------------------------------------------------------------') print(f'> Fold {i+1} - Loss: {loss_per_fold[i]} - Accuracy: {acc_per_fold[i]}%') print('------------------------------------------------------------------------') print('Average scores for all folds:') print(f'> Accuracy: {np.mean(acc_per_fold)} (+- {np.std(acc_per_fold)})') print(f'> Loss: {np.mean(loss_per_fold)}') print('------------------------------------------------------------------------') #### Full model code Altogether, this is the new code for your K-fold Cross Validation scenario with $$K = 10$$: 🚀 Something for you? Interesting Machine Learning books 📚 MachineCurve.com will earn a small affiliate commission from the Amazon Services LLC Associates Program when you purchase one of the books linked above. from tensorflow.keras.datasets import cifar10 from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D from tensorflow.keras.losses import sparse_categorical_crossentropy from tensorflow.keras.optimizers import Adam from sklearn.model_selection import KFold import numpy as np # Model configuration batch_size = 50 img_width, img_height, img_num_channels = 32, 32, 3 loss_function = sparse_categorical_crossentropy no_classes = 100 no_epochs = 25 optimizer = Adam() verbosity = 1 num_folds = 10 # Load CIFAR-10 data (input_train, target_train), (input_test, target_test) = cifar10.load_data() # Determine shape of the data input_shape = (img_width, img_height, img_num_channels) # Parse numbers as floats input_train = input_train.astype('float32') input_test = input_test.astype('float32') # Normalize data input_train = input_train / 255 input_test = input_test / 255 # Define per-fold score containers acc_per_fold = [] loss_per_fold = [] # Merge inputs and targets inputs = np.concatenate((input_train, input_test), axis=0) targets = np.concatenate((target_train, target_test), axis=0) # Define the K-fold Cross Validator kfold = KFold(n_splits=num_folds, shuffle=True) # K-fold Cross Validation model evaluation fold_no = 1 for train, test in kfold.split(inputs, targets): # Define the model architecture model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(128, activation='relu')) model.add(Dense(no_classes, activation='softmax')) # Compile the model model.compile(loss=loss_function, optimizer=optimizer, metrics=['accuracy']) # Generate a print print('------------------------------------------------------------------------') print(f'Training for fold {fold_no} ...') # Fit data to model history = model.fit(inputs[train], targets[train], batch_size=batch_size, epochs=no_epochs, verbose=verbosity) # Generate generalization metrics scores = model.evaluate(inputs[test], targets[test], verbose=0) print(f'Score for fold {fold_no}: {model.metrics_names[0]} of {scores[0]}; {model.metrics_names[1]} of {scores[1]*100}%') acc_per_fold.append(scores[1] * 100) loss_per_fold.append(scores[0]) # Increase fold number fold_no = 
fold_no + 1 # == Provide average scores == print('------------------------------------------------------------------------') print('Score per fold') for i in range(0, len(acc_per_fold)): print('------------------------------------------------------------------------') print(f'> Fold {i+1} - Loss: {loss_per_fold[i]} - Accuracy: {acc_per_fold[i]}%') print('------------------------------------------------------------------------') print('Average scores for all folds:') print(f'> Accuracy: {np.mean(acc_per_fold)} (+- {np.std(acc_per_fold)})') print(f'> Loss: {np.mean(loss_per_fold)}') print('------------------------------------------------------------------------') ## Results Now, it’s time to run the model, to see whether we can get some nice results 🙂 Say, for example, that you saved the model as k-fold-model.py in some folder. Open up your command prompt – for example, Anaconda Prompt – and cd to the folder where your file is stored. Make sure that your dependencies are installed and then run python k-fold-model.py. If everything goes well, the model should start training for 25 epochs per fold. ### Evaluating the performance of your model During training, it should produce batches like this one: ------------------------------------------------------------------------ Training for fold 3 ... Train on 43200 samples, validate on 10800 samples Epoch 1/25 43200/43200 [==============================] - 9s 200us/sample - loss: 1.5628 - accuracy: 0.4281 - val_loss: 1.2300 - val_accuracy: 0.5618 Epoch 2/25 43200/43200 [==============================] - 7s 165us/sample - loss: 1.1368 - accuracy: 0.5959 - val_loss: 1.0767 - val_accuracy: 0.6187 Epoch 3/25 43200/43200 [==============================] - 7s 161us/sample - loss: 0.9737 - accuracy: 0.6557 - val_loss: 0.9869 - val_accuracy: 0.6522 Epoch 4/25 43200/43200 [==============================] - 7s 169us/sample - loss: 0.8665 - accuracy: 0.6967 - val_loss: 0.9347 - val_accuracy: 0.6772 Epoch 5/25 43200/43200 [==============================] - 8s 175us/sample - loss: 0.7792 - accuracy: 0.7281 - val_loss: 0.8909 - val_accuracy: 0.6918 Epoch 6/25 43200/43200 [==============================] - 7s 168us/sample - loss: 0.7110 - accuracy: 0.7508 - val_loss: 0.9058 - val_accuracy: 0.6917 Epoch 7/25 43200/43200 [==============================] - 7s 161us/sample - loss: 0.6460 - accuracy: 0.7745 - val_loss: 0.9357 - val_accuracy: 0.6892 Epoch 8/25 43200/43200 [==============================] - 8s 184us/sample - loss: 0.5885 - accuracy: 0.7963 - val_loss: 0.9242 - val_accuracy: 0.6962 Epoch 9/25 43200/43200 [==============================] - 7s 156us/sample - loss: 0.5293 - accuracy: 0.8134 - val_loss: 0.9631 - val_accuracy: 0.6892 Epoch 10/25 43200/43200 [==============================] - 7s 164us/sample - loss: 0.4722 - accuracy: 0.8346 - val_loss: 0.9965 - val_accuracy: 0.6931 Epoch 11/25 43200/43200 [==============================] - 7s 161us/sample - loss: 0.4168 - accuracy: 0.8530 - val_loss: 1.0481 - val_accuracy: 0.6957 Epoch 12/25 43200/43200 [==============================] - 7s 159us/sample - loss: 0.3680 - accuracy: 0.8689 - val_loss: 1.1481 - val_accuracy: 0.6938 Epoch 13/25 43200/43200 [==============================] - 7s 165us/sample - loss: 0.3279 - accuracy: 0.8850 - val_loss: 1.1438 - val_accuracy: 0.6940 Epoch 14/25 43200/43200 [==============================] - 7s 171us/sample - loss: 0.2822 - accuracy: 0.8997 - val_loss: 1.2441 - val_accuracy: 0.6832 Epoch 15/25 43200/43200 [==============================] - 7s 167us/sample - loss: 
0.2415 - accuracy: 0.9149 - val_loss: 1.3760 - val_accuracy: 0.6786 Epoch 16/25 43200/43200 [==============================] - 7s 170us/sample - loss: 0.2029 - accuracy: 0.9294 - val_loss: 1.4653 - val_accuracy: 0.6820 Epoch 17/25 43200/43200 [==============================] - 7s 165us/sample - loss: 0.1858 - accuracy: 0.9339 - val_loss: 1.6131 - val_accuracy: 0.6793 Epoch 18/25 43200/43200 [==============================] - 7s 171us/sample - loss: 0.1593 - accuracy: 0.9439 - val_loss: 1.7192 - val_accuracy: 0.6703 Epoch 19/25 43200/43200 [==============================] - 7s 168us/sample - loss: 0.1271 - accuracy: 0.9565 - val_loss: 1.7989 - val_accuracy: 0.6807 Epoch 20/25 43200/43200 [==============================] - 8s 190us/sample - loss: 0.1264 - accuracy: 0.9547 - val_loss: 1.9215 - val_accuracy: 0.6743 Epoch 21/25 43200/43200 [==============================] - 9s 207us/sample - loss: 0.1148 - accuracy: 0.9587 - val_loss: 1.9823 - val_accuracy: 0.6720 Epoch 22/25 43200/43200 [==============================] - 7s 167us/sample - loss: 0.1110 - accuracy: 0.9615 - val_loss: 2.0952 - val_accuracy: 0.6681 Epoch 23/25 43200/43200 [==============================] - 7s 166us/sample - loss: 0.0984 - accuracy: 0.9653 - val_loss: 2.1623 - val_accuracy: 0.6746 Epoch 24/25 43200/43200 [==============================] - 7s 168us/sample - loss: 0.0886 - accuracy: 0.9691 - val_loss: 2.2377 - val_accuracy: 0.6772 Epoch 25/25 43200/43200 [==============================] - 7s 166us/sample - loss: 0.0855 - accuracy: 0.9697 - val_loss: 2.3857 - val_accuracy: 0.6670 Score for fold 3: loss of 2.4695983460744224; accuracy of 66.46666526794434% ------------------------------------------------------------------------ Do note the increasing validation loss, a clear sign of overfitting. 
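If you want to counter this overfitting without changing the model, one optional tweak is Keras' EarlyStopping callback. The snippet below is a sketch rather than part of the tutorial's code: it assumes you pass some validation data to model.fit inside the fold loop (here via validation_split) purely for monitoring, and it reuses the model, inputs, targets and configuration variables defined earlier.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop training a fold once validation loss stops improving, and
# roll back to the weights of the best epoch seen so far.
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

history = model.fit(inputs[train], targets[train],
                    batch_size=batch_size,
                    epochs=no_epochs,
                    verbose=verbosity,
                    validation_split=0.2,        # needed so that 'val_loss' exists
                    callbacks=[early_stopping])
```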
And finally, after the 10th fold, it should display the overview with results per fold and the average: ------------------------------------------------------------------------ Score per fold ------------------------------------------------------------------------ > Fold 1 - Loss: 2.4094747734069824 - Accuracy: 67.96666383743286% ------------------------------------------------------------------------ > Fold 2 - Loss: 1.768296229839325 - Accuracy: 67.03333258628845% ------------------------------------------------------------------------ > Fold 3 - Loss: 2.4695983460744224 - Accuracy: 66.46666526794434% ------------------------------------------------------------------------ > Fold 4 - Loss: 2.363724467277527 - Accuracy: 66.28333330154419% ------------------------------------------------------------------------ > Fold 5 - Loss: 2.083754387060801 - Accuracy: 65.51666855812073% ------------------------------------------------------------------------ > Fold 6 - Loss: 2.2160572570165 - Accuracy: 65.6499981880188% ------------------------------------------------------------------------ > Fold 7 - Loss: 1.7227793588638305 - Accuracy: 66.76666736602783% ------------------------------------------------------------------------ > Fold 8 - Loss: 2.357142448425293 - Accuracy: 67.25000143051147% ------------------------------------------------------------------------ > Fold 9 - Loss: 1.553109979470571 - Accuracy: 65.54999947547913% ------------------------------------------------------------------------ > Fold 10 - Loss: 2.426255855560303 - Accuracy: 66.03333353996277% ------------------------------------------------------------------------ Average scores for all folds: > Accuracy: 66.45166635513306 (+- 0.7683473645622098) > Loss: 2.1370193102995554 ------------------------------------------------------------------------ This allows you to compare the performance across folds, and compare the averages of the folds across model types you’re evaluating 🙂 In our case, the model produces accuracies of 60-70%. This is acceptable, but there is still room for improvement. But hey, that wasn’t the scope of this blog post 🙂 ### Model finalization If you’re satisfied with the performance of your model, you can finalize it. There are two options for doing so: • Save the best performing model instance (check “How to save and load a model with Keras?” – do note that this requires retraining because you haven’t saved models with the code above), and use it for generating predictions. • Retrain the model, but this time with all the data – i.e., without making the split. Save that model, and use it for generating predictions. Both sides have advantages and disadvantages. The advantages of the first are that you don’t have to retrain, as you can simply use the best-performing fold which was saved during the training procedure. As retraining may be expensive, this could be an option, especially when your model is large. However, the disadvantage is that you simply miss out a percentage of your data – which may bring your training sample closer to the actual patterns in the population rather than your sample. If that’s the case, then the second option is better. However, that’s entirely up to you! 🙂 ## Summary In this blog post, we looked at the concept of model evaluation: what is it? Why would we need it in the first place? And how to do so objectively? If we can’t evaluate models without introducing bias of some sort, there’s no point in evaluating at all, is there? 
We introduced simple hold-out splits for this purpose, and showed that while they are efficient in terms of the required computational resources, they are also naïve. K-fold Cross Validation is $$K$$ times more expensive, but can produce significantly better estimates because it trains the model $$K$$ times, each time with a different train/test split.

To illustrate this further, we provided an example implementation for the Keras deep learning framework using TensorFlow 2.0. Using a Convolutional Neural Network for CIFAR-10 classification, we generated evaluations that performed in the range of 60-70% accuracy.

I hope you've learnt something from today's blog post. If you did, feel free to leave a comment in the comments section! Please do the same if you have questions, if you spotted mistakes or when you have other remarks. I'll happily answer your comments and will improve my blog if that's the best thing to do. Thank you for reading MachineCurve today and happy engineering! 😎

## References

Scikit-learn. (n.d.). sklearn.model_selection.KFold — scikit-learn 0.22.1 documentation. Retrieved February 17, 2020, from https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html

Allibhai, E. (2018, October 3). Holdout vs. Cross-validation in Machine Learning. Retrieved from https://medium.com/@eijaz/holdout-vs-cross-validation-in-machine-learning-7637112d3f8f

Chollet, F. (2017). Deep Learning with Python. New York, NY: Manning Publications.

Khandelwal, R. (2019, January 25). K fold and other cross-validation techniques. Retrieved from https://medium.com/datadriveninvestor/k-fold-and-other-cross-validation-techniques-6c03a2563f1e

Bogdanovist. (n.d.). How to choose a predictive model after k-fold cross-validation? Retrieved from https://stats.stackexchange.com/a/52277

## 19 thoughts on "How to use K-fold Cross Validation with Keras?"

1. Devidas
Great post, but how can I save the best performance among all the folds in the program itself? Also, if I retrain on the whole data without using validation, will it become a robust model for unknown population samples? Please clarify and, if possible, provide a code snippet. Thanks

1. Chris
Hi Devidas,
Thanks for your questions.
Question 1: how to save the best performing Keras model across all the folds in K-fold cross validation: This cannot be done out of the box. However, as you can see in my code, using for ... in, I loop over the folds, and train the model again and again with the split made for that particular fold. In those cases, you could use Keras ModelCheckpoint to save the best model per fold.
You would need to add to the imports: from tensorflow.keras.callbacks import ModelCheckpoint Also make sure to import the os module: import os …and subsequently add the callback to your code so that it runs during training: fold_no = 0 for train, test in kfold.split(inputs, targets): # Define the model architecture model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dense(128, activation='relu')) model.add(Dense(no_classes, activation='softmax')) # Compile the model model.compile(loss=loss_function, optimizer=optimizer, metrics=['accuracy']) # Define callbacks checkpoint_path = f'./some_folder/{fold_no}' os.mkdir(checkpoint_path) keras_callbacks = [ ModelCheckpoint(checkpoint_path, monitor='val_loss', save_best_only=True, mode='min') ] # Increase fold no fold_no += 1 # Generate a print print('------------------------------------------------------------------------') print(f'Training for fold {fold_no} ...') # Fit data to model history = model.fit(inputs[train], targets[train], batch_size=batch_size, epochs=no_epochs, verbose=verbosity, validation_split=validation_split, callbacks=keras_callbacks) Now, all best instances of your model given the particular fold are saved. Based on how the folds perform (which you’ll see in your terminal after training), you can pick the saved model that works best. However, I wouldn’t recommend this, as each fold is trained with a subset of your data – and it might in fact be bias that drives the better performance. Be careful when doing this. Question 2: if you retrain without validation data, will it become a robust model for unknown samples? That’s difficult to say, because it depends on the distribution from which you draw the samples. For example, if you cross-validate a ConvNet trained on the MNIST dataset with K-fold cross validation, and it performs well across all folds, you can be confident that you can train it with full data for once. You might nevertheless wish to use validation data for detecting e.g. overfitting though (also see https://www.machinecurve.com/index.php/2019/05/30/avoid-wasting-resources-with-earlystopping-and-modelcheckpoint-in-keras/). Now, if you fed it CIFAR10 data in production usage, you could obviously expect very poor performance. Hope this helps! Best, Chris 2. Josseline Hi Chris! Thanks for this post! It helps me a lot 🙂 I have some questions and I hope you can help me. 1. Do you store checkpoints per fold when val_loss is the lowest in all epochs? I am doing my own implementation on Pytorch and I don’t have clear the criteria of ModelCheckpoint. 2. I am trying with different hyperparameters of one model and I would like to choose what is the best one. I test every config doing KFolds CV to train and validation. Should I store the parameters with the best metric per fold (no matter the epoch) and then choose the best one overall folds? Thanks in advance 🙂 1. Chris Hi Josseline, Thanks for your compliment 🙂 With regards to your questions: 1. The Keras ModelCheckpoint can be configured in many ways. See https://keras.io/callbacks/#modelcheckpoint for all options. 
In my case, I use it to save the best-performing model instance only, by setting ‘val_loss’, ‘min’ for minimum validation loss and save_best_only=True for saving the epoch with lowest validation loss only. In practice, this means that after every epoch, it checks whether validation loss is lower this time, and if so, it saves the model. Did you know that something similar (albeit differently) is available for PyTorch? https://pytorch.org/ignite/handlers.html#ignite.handlers.ModelCheckpoint It seems that you need to write your own checking logic, though. 2. If I understand you correctly, you are using different hyperparameters for every fold. I wouldn’t do it this way. Instead, I would train every fold with the same set of hyperparameters, and keep your training/validation/testing sets constant. This way, what goes in (the data) is sampled from the same distribution all the time, and (should you use a same random seed for e.g. random weight initialization) nothing much should interfere from a data point of view. Then, for every different set of hyperparameters, I would repeat K-fold Cross Validation. This way, across many K-fold cross validation instances, I can see how well one set of hyperparameters performs generally (within the folds) and how well a model performs across different sets of hyperparameters (across the folds). Hope this helps. Regards, Chris 3. Josseline Thanks for your reply Chris! 🙂 For every set of hyperparameters I repeat K-Folds CV to get training and validation splits, in order to get K instances for every hyperparameters config. My doubt is how can I choose a model instance for every experiment? I mean, I don’t know what is the best way to decided between hyperparameters sets if per every one I applied K-Folds CV. I hope I had explained it better 🙂 Regards 1. Chris Hi Josseline, So if I understand you correctly, if you have two experiments – say, the same architecture, same hyperparameters, but with one you use the ‘Adam’ optimizer whereas with the other you use the ‘SGD’ optimizer – you repeat K-fold cross validation twice? So, if K = 10, you effectively make 2×10 splits, train your 2 architectures 10 times each with the different splits, then average the outcome for each fold and check whether there are abnormalities within the folds? If that’s a correct understanding, now, would my understanding of your question be correct if I’d say your question is “what hyperparameters to choose for my model?”? If not, my apologies. If so – there are limited general answers to that question. Often, I start with Xavier or He initialization (based on whether I do not or do use ReLU activated layers), Adam optimization, some regularization (L1/L2/Dropout) and LR Range tested learning rates with decay. Then, I start experimenting, and change a few hyperparameters here and there – also based on intuition and what I see happening during the training process. Doing so, K-fold CV can help me validate the model performance across various train/test splits each time, before training the model with the full dataset. 4. Josseline For now, my experiments are limited to variations of a base architecture, for example, trying with different amount of filters to my convolutional layers and set my learning rate. I applied K-Fold CV because I have a small dataset (with less than 2000 samples) but after it, I don’t know yet what would be my final model, I mean I would like to know what is the strategy to how to decide what of them have the best performance during K-Fold CV. 
Do you store all the trained models during your experiments? I am a beginner in this area, so my apologies if I said something wrong in my questions. 1. Chris Hi Josseline, Don’t worry, nothing wrong in your questions, it’s the exact opposite in fact – it would be weird for me to answer your question wrongly because I read it wrongly 🙂 I do store all the models during my experiments – but only the best ones per fold (see example code in one of my comments above for implementing this with Keras). I would consider this strategy: 1. For every variation, train with K-fold CV with the exact same dataset. Set K = 5 for example given your number of samples. Also make sure to use the same loss metric across variations and to use validation data when training. 2. After every training ends (i.e. all the K = 5 splits have finished training), check for abnormalities in every individual fold (this could indicate a disbalanced dataset) and whether your average across the folds is acceptably high. 3. If you see no abnormalities, you can be confident that your model will generalize to data sampled from that distribution. This means that you can now train every variation again, but then with the entire dataset (i.e. no test data – you just used K-fold CV to validate that it generalizes). 4. As you trained all variations with the same dataset, an example way to choose the best final model would be to pick the trained-on-full-dataset model with lowest validation loss after training. That’s a general strategy I would follow. However, since you have very few samples (2000 is really small in deep learning terms, where ~60k is considered small if the data is complex), you might also wish to take a look at SVM based classification/regression with manual feature extraction. For example, one option for computer vision based problems would be using a clustering algorithm such as Mean Shift to derive more abstract characteristics followed by an SVM classifier. This setup would be better suited to smaller datasets, I’d say – because your neural networks will likely overfit pretty rapidly. Here’s more information about Mean Shift and SVM classifiers: https://www.machinecurve.com/index.php/2020/04/23/how-to-perform-mean-shift-clustering-with-python-in-scikit/ https://www.machinecurve.com/index.php/2020/05/03/creating-a-simple-binary-svm-classifier-with-python-and-scikit-learn/ Regards, Chris 1. Josseline Thanks for your help Chris! I am going to put into practice this strategy. I am aware my dataset is pretty small, I am thinking in use data augmentation in order to increase the samples used for training. I read SVM would be another approach, I am going to check your suggestions 🙂 1. Chris Data augmentation would absolutely be of help in your case. Best of luck! 🙂 Chris 5. Rebeen Ali Hi thank you very much, have you shared your code in GitHub to see all the code together Thank you 6. Rebeen Hi could you please provide this code in a github to see all the code together thank you 7. John Hi, the formatting of this tutorial is a bit confusing at the moment, all the embedded code appears in single lines with no line breaks. Is there some way to fix this, or perhaps a link to download and view the code ourselves? Thank you 1. Chris Hi John, Thanks for your comment. I am aware of the issue and am looking for a fix. Most likely, I can spend some time on the matter tomorrow. Regards, Chris 2. Chris Hi John, Things should be normal again! Regards, Chris 8. Ming Hi Chris, Thank you for sharing a very nice tutorial. 
But I am just curious about ‘validation_split’ inside the cross validation. # Fit data to model history = model.fit(inputs[train], targets[train], batch_size=batch_size, epochs=no_epochs, verbose=verbosity, validation_split=validation_split) This will actually reserve 0.2 of inputs[train], targets[train] in your code to be used as validation data. Why do you need validation data here since all results will be averaged after cross validation? In wikipedia, they don’t use validation data. https://en.wikipedia.org/wiki/Cross-validation_(statistics) 1. Chris Hi Ming, Thanks and I agree, I’ve adapted the article. Regards, Chris
Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript. # Impact of weather seasonality and sexual transmission on the spread of Zika fever ## Abstract We establish a compartmental model to study the transmission of Zika virus disease including spread through sexual contacts and the role of asymptomatic carriers. To incorporate the impact of the seasonality of weather on the spread of Zika, we apply a nonautonomous model with time-dependent mosquito birth rate and biting rate, which allows us to explain the differing outcome of the epidemic in different countries of South America: using Latin Hypercube Sampling for fitting, we were able to reproduce the different outcomes of the disease in various countries. Sensitivity analysis shows that, although the most important factors in Zika transmission are the birth rate of mosquitoes and the transmission rate from mosquitoes to humans, spread through sexual contacts also highly contributes to the transmission of Zika virus: our study suggests that the practice of safe sex among those who have possibly contracted the disease, can significantly reduce the number of Zika cases. ## Zika virus Zika virus (ZIKV) is a virus belonging to the family Flaviviridae, primarily transmitted to humans by the bites of infected female mosquitoes from the Aedes genus, such as Aedes aegypti and Aedes albopictus1, widespread in tropical and subtropical regions and spreading in temperate areas as well. ZIKV is related to other arboviruses like chikungunya and dengue. Beside the major source of transmission (mosquito bites) the virus can also be passed on through other means. Unlike the above-listed diseases, Zika can be transmitted through sexual contacts, mostly from men to women2. The disease can be passed from a person with Zika before the start of the symptoms, while having symptoms, and after their symptoms end. The virus may also be passed by asymptomatic carriers3. Studies suggest that ZIKV can remain in semen longer (possibly even six months) than in other body fluids4. Another important way of transmission is from expectant mother to her child during pregnancy or around the time of birth. This may lead to Congenital Zika Syndrome the symptoms of which include microcephaly and a specific pattern of brain damage. The virus can also be transmitted through blood transfusion and breastfeeding5,6. Figure 1 shows the possible methods of Zika transmission. The infection, known as Zika fever or Zika virus disease, shares clinical signs and symptoms with dengue and chikungunya fever, including mild fever, rash, conjunctivitis and joint pain. It has also been linked to severe neurological diseases, such as Guillain–Barré syndrome (a muscle weakness caused by the immune system damaging the peripheral nervous system)7 and microcephaly, a medical condition in which the brain does not develop properly resulting in a head smaller than its normal size. ZIKV poses a serious threat to public health internationally due to the continuous geographic expansion of both the virus and its mosquito vectors8. As for today, there are no known vaccines or specific therapies available for treatment and prevention9. 
First identified in 1947 in a rhesus monkey in the Zika forest in Uganda10, the virus was recovered in 1948 from the mosquito Aedes africanus, caught in the Zika forest11. The first human cases were reported in Uganda and Tanzania in 195212. The first large outbreak in humans took place in 2007 in the Yap Island and later in French Polynesia, Easter Island, the Cook Islands, New Caledonia. In 2008, a scientist contracted Zika fever in Senegal and after returning to the US, he infected his wife. This is the first documented case of sexual transmission of a disease transmitted by insects13. The first cases in South America were discovered in Brazil in the spring of 2015, and several other countries from the region confirmed Zika cases at the end of 2015 and early 2016 (for the incidence of Zika fever in Central and South American countries affected by the epidemic 2015–2017, see Fig. 2). The rapid spread of Zika in Brazil can be attributed to the completely susceptible population, high population density, tropical climate and inadequate control of Aedes mosquitoes14,15. In October 2015, Brazil reported an increase in the number of microcephaly cases among newborns. In November, Zika virus genome was detected in the blood and tissues of a baby born with microcephaly in Brazil. In January 2016, intrauterine transmission of Zika virus was detected for the first time in two pregnant women in Brazil whose fetuses were diagnosed with microcephaly. An increased number of cases of Guillain–Barré syndrome was also reported from other countries of South America. The course of the Zika epidemic was different in various countries of South America. The reason behind this is most probably that these countries are very heterogeneous in their climatic, geographic, demographic characteristics. Basically, we can distinguish two different situations. In one part of the countries, e.g. Colombia, Puerto Rico, Suriname, there was a single outbreak, while in other countries, including Bolivia, Costa Rica, Ecuador, one can observe two major peaks in two successive years16. Although the number of Zika cases has declined since the virus was first introduced in the Americas, in February 2018, Zika fever was included in WHO’s Blueprint list of priority diseases to be prioritized for research and development17. Temperature is known to be a strong driver of vector-borne disease transmission18, hence, considering climate change, a probable extension of the distribution of carrying mosquitoes implying a possible introduction of Zika into so far unaffected regions, Zika virus will most probably continue to be an important menace in the future. ## Mathematical models for Zika transmission Various mathematical models have been established to study the transmission dynamics of the spread of Zika virus. Gao et al.19 introduced a compartmental model of Zika spread considering vector-borne and sexual transmission proposing an SEIR-type model for the human population and an SEI-chain for vectors. They separated asymptomatically infected humans from those who had symptoms, but males and females were not differentiated. The authors used historical data to approximate the parameters of the system. Baca-Carrasco et al.20 proposed compartmental models considering vector-borne and sexual transmission (with the two sexes differentiated) and migration as well, showing that sexual transmission influences the magnitude of the outbreaks. 
Suparit et al.21 studied the spread of Zika fever in Bahia, Brazil considering two vector control strategies: reducing mosquito biting rates and mosquito population size. The model also includes the influences of seasonal change on the ZIKV transmission dynamics via time-varying mosquito biting rate, however, it does not take into account human-to-human transmission. There are also papers which consider the importance of weather and climate changes in the models, see e.g. Caminade et al.14 and Mordecai et al.22. Guzzetta et al.23, Rocklöv et al.24, Marini et al.25 studied models for the spread of Zika to new areas. Further models for Zika transmission are studied e.g. in26,27,28,29,30,31,32,33. The majority of models so far did not consider the seasonality of the spread of the disease, induced by the seasonality of mosquito population size. In the present work, we establish a compartmental model for Zika transmission, considering the important features of the disease included in earlier models such as mosquito-borne and sexual transmission, asymptomatically infected people contributing to the spread of the disease, a prolonged infectivity through sexual transmission, and we also incorporate the effect of the seasonality of weather by introducing time-dependent (periodic) parameters. The resulting model is able to reproduce the different outcomes of the epidemic in various countries affected by Zika, and enables us to obtain a better understanding of the role of different parameters. ## Methods ### Compartmental model with time-dependent parameters As described in the Introduction, the Zika epidemic had different outcome in the countries of South America. Earlier mathematical models, though able to provide a good fit for the one-peak case, were unable to reproduce the situation with two peaks19,20. The reason for this is that these models did not consider the annual change of weather conditions and the consequent annual fluctuation of the size of mosquito populations. This motivated us to establish a new compartmental model for Zika virus transmission which also considers the periodicity of weather. Furthermore, to assure that our model properly describes the real world situation, we also considered both symptomatic and asymptomatic carriers of the disease, and the two sexes were differentiated to make the model applicable for evaluation of the role of sexual transmission of Zika as well. As the number of sexual transmissions from women to men is reported to be very small compared to transmission in the other direction, in this model we only consider sexual transmission from men to women34,35. To incorporate all of the above features in our study, our model includes 14 compartments. Male human, female human and vector compartments are differentiated by the subscripts $$m,f,v$$, respectively. Susceptible humans ($${S}_{m}$$ and $${S}_{f}$$) are those who can be infected by Zika virus. After contracting the disease, one moves to the exposed class ($${E}_{m},{E}_{f}$$), these individuals do not have any symptoms yet. After the incubation period, one moves either to the symptomatically infected class ($${I}_{m}^{s},{I}_{f}^{s}$$) or to the asymptomatically infected class ($${I}_{m}^{a},{I}_{f}^{a}$$), depending on whether that individual develops the symptoms or not. 
Symptomatically and asymptomatically infected women move to the recovered compartment $${R}_{f}$$ after recovery; for men, however, there is an additional convalescent compartment ($${I}_{m}^{r}$$) for those who have recovered from the disease but can still transmit it through sexual contact. After the convalescent phase, men move to the recovered class $${R}_{m}$$. We emphasize that the infectious classes $$E,{I}^{s},{I}^{a},{I}^{r}$$ are also distinguished by their differing transmission and recovery rates. For the mosquitoes, we have three compartments: susceptibles ($${S}_{v}$$), exposed ($${E}_{v}$$) and infected ($${I}_{v}$$). The transmission diagram of the model is shown in Fig. 3. The governing differential equations are specified in the Supplementary Information S.1, while the parameters applied in our work are described in Table 1.

### Parameter estimation and sensitivity

To study the phenomena described above, we fitted our model to data from South American countries with different outcomes of the epidemic. As examples for the cases with one or two peaks, we chose Suriname and Costa Rica, respectively. To estimate the parameters which provide the best fit, we applied Latin Hypercube Sampling (LHS), a sampling method used in statistics to measure the simultaneous variation of several parameter values (see, e.g.36). The method consists of generating a representative sample set from the ranges of all fitted parameters shown in Table 1: to obtain a representative sample set of size $$m$$, each parameter range is divided into $$m$$ equal subintervals and one point is selected in each subinterval. After obtaining the $$m$$ lists of samples, we combine them randomly into $$m$$-tuples. Then, for all elements of this representative sample set, one numerically calculates the solutions of the model with the given parameter values. Finally, the least squares method is applied to obtain the parameters which give the best fit. However, it is important to note that, due to the large number of parameters and the broad intervals of their possible values, one cannot expect to find a single parameter set which perfectly fits the data of the epidemic, but rather to give a good approximation of the real situation and to determine a region for each of the parameters so that the real values fall in these intervals with high probability. Hence, following the procedure described e.g. in37, we applied several rounds of LHS as follows. In each round, we select the ten parameter sets which offer the best fits, and the interval for each parameter is narrowed down to the set between the minimal and maximal values of that parameter among the ten best fitting parameter sets. Then, the next LHS round takes values from these narrower intervals. After performing this several times (eight times in our case), we obtain a reasonably small neighbourhood around the best fitting parameters. Apart from giving a good approximation of the parameter values, the fitting also allows us to estimate the burden of disease associated with sexual transmission, a novel phenomenon for mosquito-borne diseases, as well as to estimate the effect of the asymmetry of the sexual transmission rate on the number of cases in the two sexes.
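To make the sampling step concrete, here is a minimal illustrative sketch of a single LHS round in Python. This is not the authors' code: the parameter names and ranges are placeholders, and the model solution and least-squares selection are only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical, purely illustrative parameter ranges (the real ones are in Table 1)
param_ranges = {
    'mosquito_birth_rate':      (0.02, 0.40),
    'mosquito_to_human_rate':   (0.10, 1.00),
    'sexual_transmission_rate': (0.001, 0.10),
}
m = 1000  # number of parameter sets in the representative sample

# Latin Hypercube Sampling: divide each range into m equal subintervals,
# pick one point in every subinterval, then combine the lists randomly into m-tuples.
samples = np.empty((m, len(param_ranges)))
for j, (low, high) in enumerate(param_ranges.values()):
    edges = np.linspace(low, high, m + 1)
    points = rng.uniform(edges[:-1], edges[1:])  # one point per subinterval
    rng.shuffle(points)                          # random pairing across parameters
    samples[:, j] = points

print(samples.shape)  # (1000, 3): each row is one candidate parameter set

# In the paper, the model is then solved numerically for every row and the
# least-squares distance to the reported case counts selects the best fits;
# the ranges are narrowed around the ten best sets and the sampling is repeated.
```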
As a response function for these simulations, we chose the cumulative number of new symptomatically infected cases. The sensitivity analysis based on the PRCC ranks the effect of the parameters on the response function (or outcome) while varying the parameters in their given ranges (parameters with higher positive (negative) PRCC values are positively (negatively) correlated with the response function).

### Basic reproduction number and instantaneous reproduction number

The estimation of the basic reproduction number (measuring the expected number of secondary infections generated by a single infected individual introduced into a completely susceptible population during his/her infection) is usually a question of utmost importance in the study of mathematical models of infectious diseases. The instantaneous reproduction number is considered when part of the population is immune. In mathematical models with periodic coefficients, the basic reproduction number can be obtained as the spectral radius of a linear integral operator on a space of periodic functions (for details, see39). In general, the actual value of the basic reproduction number cannot be calculated analytically; however, there exist methods to approximate it numerically (see Supplementary Information S.3 or e.g.40 for details). Besides determining the basic reproduction number of the periodic model, it is also instructive to compute the formula for the basic reproduction number of the time-constant model obtained from the periodic model by setting the time-dependent parameters (mosquito birth rate and transmission rates between humans and mosquitoes) to constants. The derivation of the formula for the basic reproduction number can be found in the Supplementary Information S.2. This formula gives us the basic reproduction number at any given time instant by substituting the parameter values into it, including the values of the time-dependent parameters at that moment. This also allows us to calculate the instantaneous reproduction number $${\mathscr{R}}_{\mathrm{inst}}$$, which estimates the average number of secondary cases per infectious case in a population made up of both susceptible and non-susceptible individuals, and which can be obtained by multiplying the basic reproduction number by the susceptible fraction of the host population.

## Results

### Parameter estimation for countries with different outcomes of the epidemic

Using the method described in Subsection 2.2, we fitted our model to data from countries where the epidemic had different outcomes: Suriname, where there was a single peak of the epidemic, and Costa Rica, where there were two peaks in two subsequent years. Figure 4 shows our model fitted to data from Suriname, where there was only one peak of the Zika epidemic, as well as to data from Costa Rica16,41. The best-fitting solutions obtained with the parameters given in Table 1 are depicted together with the 99% confidence range, which was obtained by allowing a 1% relative error for all parameters with respect to the best-fitting values. Our model gives a reasonably good fit, reproducing the two larger outbreaks of 2016 and 2017 and a modest number of cases in 2018. We note that the two-peak case could not have been reproduced using a time-constant model. The results show that our model is able to reproduce both typical types of scenarios of the Zika epidemic.
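To make the iterative LHS procedure described in Methods concrete, the following minimal sketch (Python with NumPy/SciPy) reproduces its structure; the objective function, parameter bounds, sample size and number of rounds are illustrative placeholders rather than the values or the model used in this study.

```python
# Sketch of the iterative Latin Hypercube fitting loop described in Methods.
# 'simulate_and_loss' stands in for solving the compartmental model and
# returning the least-squares distance to the reported case counts.
import numpy as np
from scipy.stats import qmc

def simulate_and_loss(params):
    # Placeholder objective: in the real fitting this would integrate the ODE
    # system with 'params' and compare the output to the case data.
    return float(np.sum((params - 0.5) ** 2))

lower = np.array([0.0, 0.0, 0.0])   # illustrative lower bounds for 3 parameters
upper = np.array([1.0, 1.0, 1.0])   # illustrative upper bounds

m, keep, rounds = 1000, 10, 8       # LHS sample size, best fits kept, rounds
for r in range(rounds):
    sampler = qmc.LatinHypercube(d=lower.size, seed=r)
    candidates = qmc.scale(sampler.random(n=m), lower, upper)
    losses = np.array([simulate_and_loss(p) for p in candidates])
    best = candidates[np.argsort(losses)[:keep]]
    # Narrow each parameter's interval to the range spanned by the ten best fits.
    lower, upper = best.min(axis=0), best.max(axis=0)

print("best-fit parameters (placeholder objective):", best[0])
```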
Depending on the parameter values characteristic of the given country, the simulations show that after one or more years, the number of susceptibles drops to a level where no further outbreak is possible, regardless of the annual periodicity of the number of mosquitoes caused by periodically changing weather conditions. The number of newborns is not sufficient to replenish the susceptible population to a level that could start a new outbreak. The reasonably good fits obtained demonstrate that our model is able to reproduce both typical outcomes of the Zika epidemic, showing that the periodic change of weather, resulting in a setback in the number of new infections in autumn/winter, may also cause a recurrence of the epidemic, depending on the parameters characteristic of the given region, while under different circumstances even the return of the warm, rainy season is insufficient to induce a new outbreak.

### Basic reproduction number and instantaneous reproduction number

We calculated the basic reproduction number numerically as the spectral radius of a linear integral operator (for details of the method, see e.g.40). This quantity serves as a threshold parameter for the persistence or extinction of the epidemic. In the case of Costa Rica, we obtained the value $${{\mathscr{R}}}_{0}\approx 0.924$$, while in the case of Suriname $${{\mathscr{R}}}_{0}\approx 0.737$$. Both values are less than $$1$$, in accordance with the fact that eventually, in both countries, the epidemic died out. It is also important to note that, as both values are rather close to $$1$$, a sufficient increase of the appropriate parameters might lead to $${{\mathscr{R}}}_{0}$$ becoming larger than $$1$$, which corresponds to the disease becoming endemic with an annual reappearance. Climate change or the emergence of the disease in previously unaffected regions might lead to such an alteration of the parameters; however, predicting such a phenomenon is beyond the scope of the present study. We also determined the formula for the basic reproduction number of the time-constant model obtained by setting the periodic parameters to constants (see details in Supplementary Information S.2). In Fig. 5, we plot the basic reproduction number of the constant model as a function of the mosquito birth rate, the human–mosquito transmission rates and the human-to-human transmission rate (with the rest of the parameters set as obtained in the fitting to data from Suriname). The figure suggests that mosquito control and sexual protection are both important factors in the spread of Zika fever and that vector control might be insufficient to control the disease if the sexual transmission rate is high. Figure 6 shows the change of the instantaneous reproduction number together with the number of symptomatically infected people in Costa Rica, 2016–18. One can see that in each outbreak, the number of infected people starts to decrease once the instantaneous reproduction number drops below 1. The maximal value of the instantaneous reproduction number in Costa Rica is estimated to be around $${\mathscr{R}}_{\mathrm{inst}}\approx 1.47$$, while in Suriname it is estimated to be around $${\mathscr{R}}_{\mathrm{inst}}\approx 1.45$$. These values can be compared with earlier calculations of the basic reproduction number.
Gao et al.19, using a compartmental model for data from Brazil, Colombia, and El Salvador, estimated $${{\mathscr{R}}}_{0}=2.055$$, while Saad-Roy et al.42 estimated $${{\mathscr{R}}}_{0}=1.4$$ for Brazil; both values are close to our results. We note, however, that some other studies estimated a higher value for the basic reproduction number: Towers et al.43, through an analysis of the exponential rise in clinically identified ZIKV cases, gave an estimate of $${{\mathscr{R}}}_{0}=3.8$$ for Barranquilla, Colombia, while Shutt et al.44 estimated the value of $${{\mathscr{R}}}_{0}$$ to be between 4 and 6 in El Salvador and Suriname using a simple compartmental model.

### Difference in prevalence among women and men due to asymmetric sexual transmission

As the main concern about the spread of Zika virus disease is mother-to-child transmission resulting in brain malformations, it is of primary importance to study the effect of the virus on women and estimate the number of cases among them. Moreover, several studies and news articles reported a higher number of Zika virus infections in women compared to cases in men45,46,47. It is probable that the higher male-to-female sexual transmission rate is a contributing factor to this skewing of the burden of disease toward women. Using our model, we compare the number of symptomatic cases in women and men (see Fig. 7). Based on our model, we estimate a 39% surplus in the cumulative number of symptomatically infected women in comparison with the cumulative number of symptomatically infected men in Suriname, while the same surplus is estimated at 65% in Costa Rica. Lozier et al.45 reported a 62.5% surplus in Puerto Rico and 75% in Brazil and El Salvador, Cruz et al.33 estimated a 60% surplus in Rio de Janeiro, while Coelho et al.46 reported a 90% surplus, also in Rio de Janeiro. In comparison with these studies, our simulation shows a similar or somewhat smaller surplus of Zika cases in women, though the higher numbers given in these studies might also be attributed to the fact that women (and especially pregnant women) suspected to have Zika are more likely to visit their doctors because of the potential risk of birth defects, and potentially because women visit doctors more often than men, as hypothesized in46. We also applied our model to estimate the effect of the presence of sexual transmission on the number of infected people. Figure 8 shows the actual number of symptomatically infected people as well as the estimated number in the complete absence of sexual transmission. The estimation suggests that sexual transmission, a phenomenon previously unknown in mosquito-borne diseases, significantly contributed to the number of Zika cases. More precisely, based on our model, we estimate that 32% of the total number of cases in Suriname and 54% in Costa Rica could be attributed to sexual transmission. Similar estimates were given in other studies as well, but with a high level of uncertainty48. Cruz et al.33 estimated that sexual transmission is responsible for 23% to 46% of the increment in the basic reproduction number. Towers et al.43 found that the fraction of cases due to sexual transmission was 0.23 $$[0.01,0.47]$$ with 95% confidence. Gao et al.19 gave a 95% confidence interval of $$[0.123,45.73]$$ for the percentage contribution of sexual spread to the basic reproduction number.
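As an illustration of how such a with/without-sexual-transmission comparison can be set up, the sketch below uses a strongly reduced, hypothetical human–vector model with a seasonally forced mosquito recruitment rate and a one-directional (male-to-female) sexual transmission term. It is not the 14-compartment model of this study, and every parameter value in it is an illustrative placeholder rather than a fitted value.

```python
# Toy human-vector model (illustrative only): SIR humans split by sex, SI vector,
# seasonal mosquito recruitment, and male-to-female sexual transmission.
import numpy as np
from scipy.integrate import solve_ivp

N_h = 5.6e5          # host population size (placeholder)
N_v0 = 2.0 * N_h     # average mosquito population (placeholder)
beta_hv = 0.10       # mosquito-to-human transmission rate (assumed)
beta_vh = 0.10       # human-to-mosquito transmission rate (assumed)
gamma = 1 / 7        # human recovery rate (assumed)
mu_v = 1 / 14        # mosquito death rate (assumed)

def mosquito_birth(t, amplitude=0.4, period=365.0):
    """Periodic recruitment mimicking seasonal weather (illustrative)."""
    return mu_v * N_v0 * (1 + amplitude * np.cos(2 * np.pi * t / period))

def rhs(t, y, beta_sex):
    Sf, If, Sm, Im, Sv, Iv = y
    lam_v = beta_hv * Iv / N_h                      # vectorial force of infection
    lam_sex = beta_sex * Im / N_h                   # male-to-female sexual term
    dSf = -(lam_v + lam_sex) * Sf
    dIf = (lam_v + lam_sex) * Sf - gamma * If
    dSm = -lam_v * Sm
    dIm = lam_v * Sm - gamma * Im
    dSv = mosquito_birth(t) - beta_vh * (If + Im) / N_h * Sv - mu_v * Sv
    dIv = beta_vh * (If + Im) / N_h * Sv - mu_v * Iv
    return [dSf, dIf, dSm, dIm, dSv, dIv]

def ever_infected(beta_sex, days=3 * 365):
    y0 = [N_h / 2 - 5, 5, N_h / 2 - 5, 5, N_v0 - 10, 10]
    sol = solve_ivp(rhs, (0, days), y0, args=(beta_sex,), max_step=1.0)
    Sf_end, Sm_end = sol.y[0, -1], sol.y[2, -1]
    return (N_h / 2 - Sf_end) + (N_h / 2 - Sm_end)  # humans no longer susceptible

with_sex = ever_infected(beta_sex=0.05)
without_sex = ever_infected(beta_sex=0.0)
print(f"share of cases attributable to sexual transmission (toy model): "
      f"{(with_sex - without_sex) / with_sex:.1%}")
```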
### Sensitivity analysis

As currently no vaccine against Zika is available, the most important control measures to decrease the transmission of Zika include reducing mosquito bites in areas affected by the disease (using insect repellents, clothes covering much of the body and mosquito nets), decreasing the mosquito birth rate by removing the standing water where mosquitoes reproduce, killing mosquitoes, as well as the use of condoms to prevent sexual transmission. Figure 9 shows the comparison of the resulting PRCC values obtained for the parameters. The results suggest that the most important factors in Zika transmission are the birth rate of mosquitoes and the transmission rate from mosquitoes to humans. Although spread through sexual contact has a smaller effect on the number of Zika cases, it is shown to be an important factor in the transmission of Zika virus. Considering this and the results presented in the previous subsections, our study suggests that the practice of safe sex among those who have possibly contracted the disease can significantly reduce the number of Zika cases. However, the most important ways to reduce transmission are mosquito control and protection against mosquito bites.

## Discussion

The Zika fever outbreak in South America, which started in 2015, has been one of the most alarming epidemics in recent years due to its connection with microcephaly. Apart from vectorial transmission, the disease has been shown to spread sexually, a phenomenon not previously observed among mosquito-borne diseases. Several mathematical models have been created to include the novel features of the disease, though the majority of these did not consider the changes of mosquito populations due to the periodicity of weather. In this paper, we have established a compartmental model to study the transmission of Zika virus disease including spread through sexual contacts, asymptomatic carriers and the periodicity of weather. To the best of our knowledge, our model is the first compartmental model for Zika fever transmission which, besides considering both mosquito-borne and sexual transmission, the role of asymptomatic carriers and the prolonged period of sexual transmissibility, also takes into account the seasonality of weather. We fitted the model to the number of Zika cases in two countries where the Zika epidemic had a different outcome: Suriname, where there was only one peak of the epidemic, and Costa Rica, where two major peaks occurred in two subsequent years. These fits show that the annual change of weather can be responsible for a recurrence of the epidemic after the autumn/winter season, though, depending on the circumstances characteristic of the given country, it is also possible that, even with the return of the rainy season, the depletion of the susceptible class prevents a second outbreak. Using the fittings obtained, we studied the effects of sexual transmission, a novel phenomenon not experienced before in mosquito-borne diseases. We gave an estimate for the difference in the number of cases in men and women, possibly due to the asymmetric sexual transmission rates, supporting earlier statements given e.g. in45,46,47. This is especially important because of the severe side-effects in newborns of women infected during their pregnancy. Our results suggest a 39–65% surplus in Zika cases in women in comparison with men, which can be attributed to the asymmetric transmission rates.
These values are smaller than the ones given in45,46,47; however, as also suggested in those studies, this might also be attributed to the fact that women suspected to have Zika are more likely to visit their doctors. Further, we estimated the increase in the number of cases due to sexual transmission. We found that sexual transmission could be responsible for about 32–54% of the Zika infections. Our findings are in accordance with similar earlier estimates given in19,43. These results suggest that the practice of safe sex among those who have possibly been infected can significantly reduce the number of Zika cases. Both the basic reproduction number of the time-periodic model (serving as a threshold parameter for the persistence of the epidemic) and the instantaneous reproduction number were calculated. The results are in accordance with the extinction of the epidemics in the countries of South America after one or possibly more peaks, and also with earlier results given in19,42. We carried out a sensitivity analysis to compare the effect of different model parameters on the number of cases. We found that mosquito birth and death rates are the most important factors in the spread of Zika, but the sexual transmission rate also has a significant effect on the prevalence of the disease, supporting the earlier statements about the possibility of reducing the number of Zika cases by decreasing the probability of sexual transmission.

## Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

## References

1. 1. Petersen, L. R., Jamieson, D. J., Powers, A. M. & Honein, M. A. Zika virus. N. Engl. J. Med. 375, 294–295 (2016). 2. 2. Magalhaes, T., Foy, B. D., Marques, E. T. A., Ebel, G. D. & Weger-Lucarelli, J. Mosquito-borne and sexual transmission of Zika virus: recent developments and future directions. Virus Research 254, 1–9 (2018). 3. 3. Centers for Disease Control and Prevention, Clinical guidance for healthcare providers for prevention of sexual transmission of Zika virus, https://www.cdc.gov/zika/hc-providers/clinical-guidance/sexualtransmission.html (accessed 27 June 2019). 4. 4. Mead, P. S. et al. Zika virus shedding in semen of symptomatic infected men. N. Engl. J. Med. 378, 1377–1385 (2018). 5. 5. Blohm, G. M. et al. Evidence for mother-to-child transmission of Zika virus through breast milk. Clin. Infect. Dis. 66, 1120–1121 (2018). 6. 6. Gregory, C. J. et al. Modes of transmission of Zika virus. J. Infect. Dis. 216, S875–S883 (2017). 7. 7. World Health Organization, Zika virus, microcephaly and Guillain–Barré syndrome. Situation report. 7 April 2016, http://apps.who.int/iris/bitstream/handle/10665/204961/zikasitrep_7Apr2016_eng.pdf (accessed 30 November 2018). 8. 8. Carlson, C. J., Dougherty, E. R. & Getz, W. An ecological assessment of the pandemic threat of Zika Virus. PLOS Negl. Trop. Dis. 10.8, e0004968 (2016). 9. 9. Song, B. H. et al. Zika virus: history, epidemiology, transmission, and clinical presentation. J. Neuroimmunol. 308, 50–64 (2017). 10. 10. Dick, G. W. A., Kitchen, S. F. & Haddow, A. J. Zika virus (I). Isolations and serological specificity. Trans. R. Soc. Trop. Med. Hyg. 46, 509–520 (1952). 11. 11. Dick, G. W. A. Zika virus (II). Pathogenicity and physical properties. Trans. R. Soc. Trop. Med. Hyg. 46, 521–534 (1952). 12. 12. Smithburn, K. C. Neutralizing antibodies against certain recently isolated viruses in the sera of human beings residing in East Africa. J.
Immunol. 69, 223–234 (1952). 13. 13. Foy, B. D. et al. Probable non-vector-borne transmission of Zika Virus, Colorado, USA. Emerg. Infect. Dis. 17, 880–882 (2011). 14. 14. Caminade, C. et al. Global risk model for vector-borne transmission of Zika virus reveals the role of El Niño 2015. Proc. Natl. Acad. Sci. USA 114, 119–124 (2017). 15. 15. Ai, J.-W., Zhang, Y. & Zhang, W. Zika virus outbreak: ’a perfect storm’. Emerg. Microbes Infect. 5, e21 (2016). 16. 16. Pan American Health Organization, Countries and territories with autochthonous transmission of Zika virus in the Americas reported in 2015–2017, https://www.paho.org/hq/index.php?option=com_content&view=article&id=11603:countries-and-territories-with-autochthonous-transmission-of-zika-virus-in-the-americas-reported-in-2015-2017&Itemid=41696&lang=en. 17. 17. World Health Organization, WHO list of blueprint priority diseases, https://www.who.int/blueprint/priority-diseases/en/ (February 2018). 18. 18. Tesla, B. et al. Temperature drives Zika virus transmission: evidence from empirical and mathematical models. Proc. R. Soc. B 285, 20180795 (2018). 19. 19. Gao, D. et al. Prevention and control of Zika as a mosquito-borne and sexually transmitted disease: a mathematical modeling analysis. Sci. Rep. 6, 28070 (2016). 20. 20. Baca-Carrasco, D. & Velasco-Hernández, J. X. Sex, mosquitoes and epidemics: an evaluation of Zika disease dynamics. Bull. Math. Biol. 78, 2228–2242 (2016). 21. 21. Suparit, P., Wiratsudakul, A. & Modchang, C. A mathematical model for Zika virus transmission dynamics with a time-dependent mosquito biting rate. Theor. Biol. Med. Model. 15, 11 (2018). 22. 22. Mordecai, A. et al. Detecting the impact of temperature on transmission of Zika, dengue, and chikungunya using mechanistic models. PLoS Negl. Trop. Dis. 11, e0005568 (2017). 23. 23. Guzzetta, G. et al. Assessing the potential risk of Zika virus epidemics in temperate areas with established Aedes albopictus populations. Euro Surveill. 21, 15 (2016). 24. 24. Rocklöv, J. et al. Assessing seasonal risks for the introduction and mosquito-borne spread of Zika virus in Europe. EBioMedicine 9, 250–256 (2016). 25. 25. Marini, G. et al. First outbreak of Zika virus in the continental United States: a modelling analysis. Euro Surveill. 22, 37 (2017). 26. 26. Nah, K. et al. Estimating risks of importation and local transmission of Zika virus infection. PeerJ 4, e1904 (2016). 27. 27. Agusto, F. B., Bewick, S. & Fagan, W. F. Mathematical model of Zika virus with vertical transmission. Infect. Dis. Model. 2, 244–267 (2017). 28. 28. Okuneye, K. O., Velasco-Hernández, J. X. & Gumel, A. B. The “unholy” chikungunya–dengue–Zika trinity: a theoretical analysis. J. Biol. Syst. 25, 545–585 (2017). 29. 29. Padmanabhan, P., Seshaiyer, P. & Castillo-Chavez, C. Mathematical modeling, analysis and simulation of the spread of Zika with influence of sexual transmission and preventive measures. Lett. Biomath. 4, 148–166 (2017). 30. 30. Chen, J. et al. Modeling the importation and local transmission of vector-borne diseases in Florida: the case of Zika outbreak in 2016. J. Theor. Biol. 455, 342–356 (2018). 31. 31. Saad-Roy, C. M., Ma, J. & van den Driessche, P. The effect of sexual transmission on Zika virus dynamics. J. Math. Biol. 77, 1917–1941 (2018). 32. 32. Sasmal, S. K., Ghosh, I., Huppert, A. & Chattopadhyay, J. Modeling the spread of Zika virus in a stage-structured population: effect of sexual transmission. Bull. Math. Biol. 80, 3038–3067 (2018). 33. 33. Cruz-Pacheco, G., Esteva, L. 
& Pio Ferreira, C. A mathematical analysis of Zika virus epidemic in Rio de Janeiro as a vector-borne and sexually transmitted disease. J. Biol. Systems 27, 83–105 (2019). 34. 34. Centers for Disease Control and Prevention, First female-to-male sexual transmission of Zika virus infection reported in New York City, http://www.cdc.gov/media/releases/2016/s0715-zika-female-to-male.html (Accessed on December 2016). 35. 35. Davidson, A., Slavinski, S., Komoto, K., Rakeman, J. & Weiss, D. Suspected female-to-male sexual transmission of Zika virus – New York City, 2016. MMWR Morb. Mortal. Wkly Rep. 65, 716–717 (2016). 36. 36. McKay, M. D., Beckman, R. J. & Conover, W. J. Comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21, 239–245 (1979). 37. 37. Guzzetta, G. et al. Effectiveness of contact investigations for tuberculosis control in Arkansas. J. Theor. Biol. 380, 238–246 (2015). 38. 38. Blower, S. M. & Dowlatabadi, H. Sensitivity and uncertainty analysis of complex models of disease transmission: an HIV model, as an example. Int. Stat. Rev. 62, 229–243 (1994). 39. 39. Wang, W. & Zhao, X.-Q. Threshold dynamics for compartmental epidemic models in periodic environments. J. Dyn. Diff. Equat. 20, 699–717 (2008). 40. 40. Mitchell, C. & Kribs, C. A comparison of methods for calculating the basic reproductive number for periodic epidemic systems. Bull. Math. Biol. 79, 1846–1869 (2017). 41. 41. Ministerio de Salud, Costa Rica, Boletín epidemiológico No. 23–2018, Enfermedades transmitidas por vectores (23. Nov. 2018). 42. 42. Saad-Roy, C. M., van den Driessche, P. & Ma, J. Estimation of Zika virus prevalence by appearance of microcephaly. BMC Infect Dis. 16.1, 754 (2016). 43. 43. Towers, S. et al. Estimate of the reproduction number of the 2015 Zika virus outbreak in Barranquilla, Colombia, and estimation of the relative role of sexual transmission. Epidemics 17, 50–55 (2016). 44. 44. Shutt, D. P., Manore, C. A., Pankavich, S., Porter, A. T. & Del Valle, S. Y. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America. Epidemics 21, 63–79 (2017). 45. 45. Lozier, M. et al. Incidence of Zika virus disease by age and sex – Puerto Rico, November 1, 2015–October 20, 2016. MMWR Morb. Mortal. Wkly. Rep. 65, 1219–1223 (2016). 46. 46. Coelho, F. C. et al. Higher incidence of Zika in adult women than adult men in Rio de Janeiro suggests a significant contribution of sexual transmission from men to women. Int. J. Infect. Dis. 51, 128–132 (2016). 47. 47. 48. 48. Althaus, C. L. & Low, N. How relevant is sexual transmission of Zika virus? PLOS Med. 13(10), e1002157 (2016). 49. 49. World Health Organization, WHO Global Health Observatory data repository. Crude birth and death rate. Data by country,  http://apps.who.int/gho/data/node.main.CBDR107?lang=en (accessed 8 April 2018). 50. 50. Chikaki, E. & Ishikawa, H. A dengue transmission model in Thailand considering sequential infections with all four serotypes. J. Infect. Dev. Countr. 3, 711–722 (2009). 51. 51. Andraud, M., Hens, N., Marais, C. & Beutels, P. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches. PloS One 7, e49085 (2012). 52. 52. Duffy, M. R. et al. Zika virus outbreak on Yap Island, Federated States of Micronesia. N. Engl. J. Med. 360, 2536–2543 (2009). 53. 53. Bearcroft, W. G. Zika virus infection experimentally induced in a human volunteer. Trans. R. Soc. Trop. 
Med. Hyg. 50, 442–448 (1956). 54. 54. Gourinat, A.-C., O’Connor, O., Calvez, E., Goarant, C. & Dupont-Rouzeyrol, M. Detection of Zika virus in urine. Emerg. Infect. Dis 21, 84–86 (2015). 55. 55. Musso, D. et al. Potential sexual transmission of Zika virus. Emerg. Infect. Dis. 21, 359–361 (2015). 56. 56. Boorman, J. P. & Porterfield, J. S. A simple technique for infection of mosquitoes with viruses; transmission of Zika virus. Trans. R. Soc. Trop. Med. Hyg. 50, 238–242 (1956).

## Acknowledgements

A. Dénes was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, by the project no. 128363, implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the PD_18 funding scheme. M. A. Ibrahim was supported by a fellowship from the Egyptian government in the long-term mission system. T. Tekeli was supported by the project no. 124016, implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the FK_17 funding scheme. The authors are grateful to the anonymous reviewers for their insightful and constructive comments and suggestions which helped to improve the paper.

## Author information

### Contributions

Wrote the paper: A.D., M.A.I., L.O., M.T. and T.T. Collected data: A.D., M.A.I., L.O., M.T. and T.T. Collected literature, state of the art: A.D., M.A.I., L.O., M.T. and T.T. Conceived the study: A.D. Developed the model: A.D., M.A.I., L.O., M.T. and T.T. Performed model analysis: M.A.I. Simulations and parameter fitting: A.D., M.A.I. and M.T.

### Corresponding author

Correspondence to Attila Dénes.

## Ethics declarations

### Competing interests

The authors declare no competing interests. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Dénes, A., Ibrahim, M.A., Oluoch, L. et al. Impact of weather seasonality and sexual transmission on the spread of Zika fever. Sci Rep 9, 17055 (2019). https://doi.org/10.1038/s41598-019-53062-z
# Talk:Significance of E. Coli Evolution Experiments

SJohnson, your assessment, while good in the utilization of the chi-squared test, is unfortunately incorrect. The Monte Carlo resampling gives a more accurate p-value than the chi-squared. You may research the literature (i.e. publications in statistical mathematics; many pubs actually compare Monte Carlo vs Chi Squared) to discover that this method is commonly used in advanced statistical work and how it is more accurate than the chi-squared test.--Able806 17:00, 4 March 2009 (EST) It doesn’t make sense to compare the chi-square test, which is a specific statistical hypothesis test, to Monte Carlo methods, which can be used for anything from fluid motion modeling to p-value computations. You can use Monte Carlo methods to compute the p-values of the chi-square test! Monte Carlo methods involve the generation of random realizations. Your broad claim that Monte Carlo methods are “more accurate” than the chi-square test is obviously incorrect because the accuracy of Monte Carlo methods always depends on the number of random realizations generated. When p-values are small, Monte Carlo methods are notoriously inaccurate unless the number of realizations generated is enormous. Which publications compare Monte Carlo to chi-square and show that the former is more accurate? Could you provide specific examples? Thanks. SJohnson 18:50, 4 March 2009 (EST) In furtherance of SJohnson's remarks with respect to rarely occurring events, the use of the basic Monte Carlo method is plainly incorrect for modeling a rarely occurring event, as the Lenski paper did. This has long been pointed out in Flaws in Richard Lenski Study. I know evolutionists will never admit a flaw in anything promoting their pet theory, but this (and other) flaws in that paper are undeniable. Watch how evolutionists defended obvious errors in the Lenski paper, and then realize why the Piltdown Man fraud was taught for 40 years without evolutionists admitting it was a hoax.--Andy Schlafly 09:55, 5 March 2009 (EST) Andy, how exactly is the Monte Carlo method incorrect to use in this case? I have seen it used in publications with much smaller datasets.--Able806 10:29, 5 March 2009 (EST) Able806, I'm interested in looking at the publications you mentioned that use Monte Carlo methods to analyze small data sets. Could you provide some examples? Thanks. SJohnson 16:41, 5 March 2009 (EST) SJohnson, here are two papers, 1 and 2. Most are in chemistry and genetics, where you find the observed counts to be much smaller and have to use the MCM. You can search on the subject as well and find that how Lenski performed the test is the standard for microbiological genetic analysis.--Able806 10:19, 11 March 2009 (EDT) Those papers have nothing to do with hypothesis testing. One is an archeology paper. To be blunt, it seems like you’re just doing internet searches on “Monte Carlo” to find these links. SJohnson 10:10, 12 March 2009 (EDT) SJohnson, actually they do; did you read the papers? If so, you would see how they used the MCM for their data analysis of small data sets, which indeed was hypothesis testing and answers your inquiry about publications that use MCM for small data set analysis. If you wish, I can try to track down some actual mathematical publications; however, I am not as familiar with mathematical journals as I am with science/medical journals (not knowing which mathematical journals are acceptable).
I am assuming that you have a background in math and possibly access to mathematical journals; therefore, if you know the reputable ones, I can do the leg work. I believe the thing that needs to be looked at is whether there is truly a problem with the choice of test and, if so, what the alternative is. A Bayesian approach might be an option but seems to be difficult to employ for this situation.--Able806 12:36, 12 March 2009 (EDT) Able806, you still seem to miss the point about how inappropriate the Monte Carlo method (as used in the Lenski paper) is for evaluating rarely occurring events. You need to open your mind to be productive. If you simply cling to a view that Lenski (who I don't think has any meaningful education in statistics) must somehow be right, then you're not going to make any progress in understanding the flaws.--Andy Schlafly 17:07, 5 March 2009 (EST) Andy, you still have not answered what you find inappropriate about his use of the Monte Carlo method. I am a reasonable person, and with evidence I do have an open mind. I provided examples last week, with a working model, showing that Monte Carlo is better than the chi-square in this case. I have also shown where the Chi-Square was inappropriate due to the occurrence size as well. So if you have any evidence that Monte Carlo should not be used in the way that Lenski used it, please let it be shown.--Able806 10:19, 11 March 2009 (EDT) SJohnson, I believe you just proved my point. In the literature of mean and covariance structure analysis, the non-central chi-square distribution is commonly used to describe the behavior of the likelihood ratio statistic under the alternative hypothesis; it is widely believed that the non-central chi-square distribution is justified by statistical theory. Actually, when the null hypothesis is not trivially violated, the non-central chi-square distribution cannot describe the LR statistic well even when data are normally distributed and the sample size is large. Monte Carlo results compare the strength of the normal distribution against that of the non-central chi-square distribution. In an association analysis comparing cases and controls with respect to allele frequencies at a highly polymorphic locus, a potential problem is that the conventional chi-squared test may not be valid for a large, sparse contingency table. Reliance on statistics with known asymptotic distribution is unnecessary, as Monte Carlo simulations can be performed to estimate the significance level of the test statistic. Here is a link to a great page that provides an interactive example as to why the Chi Squared test would provide poor results compared to the Monte Carlo in relation to the Lenski data workup. Something you may have overlooked was that the data set is actually too small to use the chi square method correctly. It is often accepted that if any of the analyzed data falls under 10 for a particular cell of the data set then the Yates correction needs to be applied; unfortunately the Yates correction can overcorrect, thus skewing the p-value. Lenski seemed to understand this by supporting his Monte Carlo p-value results with the Fisher z-transformation p-value. I hope this helps.--Able806 10:27, 5 March 2009 (EST) I’m still waiting to hear which literature says that “Monte Carlo resampling” is “more accurate than the chi-squared test”. The page mentioned above [1] is a discussion of why statisticians “fail to reject the null” rather than “accepting the null” when the p-value is above 0.05 or so.
The page says nothing about superiority of Monte Carlo methods. Why were alternate hypothesis distributions mentioned? Only the null hypothesis distribution is used to calculate a p-value. Yates’s correction is for 2x2 contingency tables [2]. It doesn’t apply in this case. Finally, what the heck do “covariance structure analysis” and “allele frequencies at a highly polymorphic locus” have to do with this problem? SJohnson 16:38, 5 March 2009 (EST) SJohnson, I am looking for this paper for you, I cited it for one of my past publications dealing with allele frequencies (I believe it came from the Duke Biostatistics group). To answer your question about allele frequencies, that is the issue at hand, more about the genetics than the math, but it is the item being studied. So you stated that Yates can not be used and statistics says the number of occurrences is too small to evaluate using the Chi-Squared test so what would you recommend instead of the Monte-Carlo Method? Regarding the "Fisher z-transformation p-value" from the paper, garbage in garbage out. If the p-values were bad to begin with, then why would a combination of them be meaningful? SJohnson 10:49, 9 March 2009 (EDT) You are assuming that p-values are wrong based on a test that is inappropriate in this case due to data limitations. Did you perform a z-transformation on the chi-squared for the three data groups?--Able806 10:19, 11 March 2009 (EDT) You asked about the “Fisher z-transformation p-value”. The z-transformation test and Fisher’s method are actually two different things (see Whitlock's 2005 paper - Ref. 49 in Blount et al.). But no, I haven’t tried either. SJohnson 10:10, 12 March 2009 (EDT) There's a large literature on various kinds of Monte Carlo test, a very short summary of which is that they're inevitably more accurate than parametric tests (e.g. F, t, chi-squared, etc) because they don't make assumptions about the distribution of the data under the null hypothesis. See for example Introduction to the Bootstrap by B. Efron and R. Tibshirani and The Jack-knife, the Bootstrap and Other Resampling Plans, also by Efron. They're certainly applicable to small datasets and their accuracy is really only limited by the number of samples you care to take. E.g. 1000 M-C samples would give you a pretty accurate idea about significance at the alpha<1% level (That book should answer SJohnson's questions of 18:50 on 4/3/09 and 16:38 on 5/3/09 about accuracy and Aschalfly's comment of 17:07 on 5/3/09 about appropriateness of Monte Carlo tests.) FredFerguson 16:53, 11 March 2009 (EDT) Your claim that Monte Carlo methods are “inevitably more accurate” than other tests is obviously wrong because the accuracy of MC methods always depends on the number of realizations used. You should have written $\alpha=1\%$, not $\alpha<1\%$. If 1,000 random realizations are generated, the number of realizations above the true $\alpha=1\%$ level is binomial with mean 10 and variance about 10. Thus, the standard deviation of the MC estimate is >0.003. In this example, a Monte Carlo p-value could be off by 30% and still be within a standard deviation. Is that really “pretty accurate”? Using one million MC realizations (as done in the paper) at the α = 0.001 level means the standard deviation is about 3%. The paper reported a p-value of less than 0.001 (experiment two). It wouldn’t surprise me to find out that the experiment two p-value for the flawed test is off because only one million realizations were used. 
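The binomial back-of-the-envelope figures above are easy to reproduce; the short sketch below (Python, standard library only) assumes only that the count of realizations exceeding the true threshold is binomial, as stated, and it returns the ≈0.003 and ≈3% values quoted.

```python
# Standard deviation of a Monte Carlo p-value estimate p_hat = K/n,
# where K ~ Binomial(n, p) counts realizations beyond the true p-level.
import math

def mc_pvalue_sd(n_realizations, true_p):
    return math.sqrt(true_p * (1 - true_p) / n_realizations)

for n, p in [(1_000, 0.01), (1_000_000, 0.001)]:
    sd = mc_pvalue_sd(n, p)
    print(f"n = {n:>9,}, true p = {p}: sd(p_hat) = {sd:.2e} ({sd / p:.0%} of p)")
```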
My original statement, “When p-values are small, Monte Carlo methods are notoriously inaccurate unless the number of realizations generated is enormous” is correct. SJohnson 10:10, 12 March 2009 (EDT) You're talking about miniscule differences in the accuracy of a test. 0.013 isn't very different from 0.007. In either case, it's very unlikely the experimenter would have obtained that result if the null hypothesis were true. If you're bothered about differences in P-values to the third decimals (which would make you unusual!), just run more MC realisations, that's all. Not really a problem. FredFerguson 11:53, 12 March 2009 (EDT) There’s still confusion about the difference between test statistics and Monte Carlo methods. Before you find a Monte Carlo estimate of a p-value, you need to select a test statistic to reduce the data set to a scalar. I am interested in hearing which test statistic you believe should be used in place of the chi-square test and why. SJohnson 10:10, 12 March 2009 (EDT) Quick question for SJohnson: How many degrees of freedom did you choose when calculating the p-value? I'd like to know upon what condition you base that number. Thanks.--Argon 11:05, 5 March 2009 (EST) The degree of freedom for a contingency table is rows minus one times columns minus one. That is, (r − 1)(c − 1). Here’s a pretty good tutorial I came across: [3]. For the experiments from [4], the DOFs are 11, 11, and 13. For experiment one, the chi-square test statistic is $X^2 =\sum\limits_i\sum\limits_j \frac{\left(n_{i,j}-E\left[n_{i,j}\right]\right)^2} {E\left[n_{i,j}\right]}$ $=\frac{\left(0-1/3\right)^2}{1/3} +\frac{\left(6-17/3\right)^2}{17/3} +\frac{\left(0-1/3\right)^2}{1/3} +\ldots+$ $+\frac{\left(2-1/3\right)^2}{1/3} +\frac{\left(4-17/3\right)^2}{17/3}$ $\approx 14.82$ where ni,j is the observed value and $E\left[n_{i,j}\right]$ is the expected null hypothesis value. So if you have MS Excel, another way to arrive at the p-value of 0.19 is to type “=CHIDIST(14.82,11)” into a cell. Cheers! SJohnson 16:38, 5 March 2009 (EST) OK, thanks for the info. From what I'd calculated and looked up in tables, the numbers seemed close to a df=11 for a chi-square of ~14. (Aside: With terms having 17/3 in the denominator in the figures above, were you using the test of independence? I was using Pearson's test for fit of a distribution which returns a chi-squared value of 14 and roughly matched the p-values you reported, assuming the df was 11). Also, the first sentence of the article reads: "Blount, Borland, and Lenski[1] claimed that a key evolutionary innovation was observed during a laboratory experiment. That claim is false." A small correction: There were several claims in the paper. The 'key evolutionary innovation' was acquiring the ability to utilize citrate as a food source. That claim was demonstrated multiple times. The claim, which pertains to this statistics discussion was that the Cit+ phenotype arose in a multi-step process, first requiring a rare, pre-adaptive mutation before additional mutation(s) lead to the subsequent development of citrate utilization.--Argon 20:46, 5 March 2009 (EST) My biology-degreed wife assures me that mutation does not necessarily mean that evolution occurred. What the paper claimed is that evolution (a “key innovation”) occurred in the lab. The key innovation supposedly increased the mutation rate. 
In the experiments, the observed mutation rate increased after generation 31,000, but not enough to make a statistically significant claim that the rate is not constant. The analysis in the paper was similar to flipping a coin ten times, counting six heads and claiming that the coin must be biased against tails. In reality, there’s nothing surprising about a fair coin producing slightly more of one outcome than the other. Just like there's nothing surprising about there being slightly more mutations in later generations than early generations given the null hypothesis (constant mutation rate). SJohnson 10:46, 9 March 2009 (EDT) >>Inserting a later comment first<< SJohnson, the paper's title is: "Historical contingency and the evolution of a key innovation in an experimental population of Escherichia coli" As I mentioned earlier, the key innovation is the evolution of the Cit+ phenotype and not the timing or rate of its acquisition. And yes, it *is* evolution (call it microevolution, if you wish). Blount et al went on further to speculate how this evolutionary innovation arose and they proposed the historical contingency hypothesis in which 'pre-adaptive' mutations were required before the Cit+ phenotype developed. It is only this latter hypothesis that you are attempting to address with your chi-square analysis, not the fact that Cit+ mutants arose (which is the evolutionary innovation).--Argon 21:57, 18 March 2009 (EDT) SJohnson, not to say anything about your wife, but has she had a 400 level molecular genetics course (most general biology degrees do not cover the detail unless they are specialized)? If so, she would have mentioned that if the mutation passes to the offspring and is selectively beneficial to the population then it is a step of evolution as along as the conditions continue through the sharing of the mutation with the population and the environment is such that reduces the growth rate of the non-transformed population. While not all mutations are signs that evolution occurred the mutations that pass to offspring and provide a benefit compared to other offspring are very strong indicators. In the case of this paper the population that evolved the cit+ was able to metabolize a chemical in their environment which allowed for an adaptation advantage compared to the non-transformed colonies.--Able806 10:19, 11 March 2009 (EDT) Let’s go back to the beginning. There appears to be confusion about the difference between test statistics and methods for computing p-values. As is noted at the beginning of the page [5], the fundamental problem with the paper is that it used a flawed test statistic, not that it used Monte Carlo methods to find the p-value for that flawed statistic. Every hypothesis test uses a test statistic to reduce the data to a single number. The p-value for the test statistic can be calculated analytically (as I’ve done for the chi-square test statistic) or by Monte Carlo methods. In the paper, Monte Carlo methods were used to compute the p-value of the “mutation generation” test statistic. The key problem with the analysis from the paper is that it doesn’t work to use a weighted average to test for variations in mutation rate. This is like trying to use the sample variance to test for an increase in the mean in Gaussian-distributed data. A statistic should be selected based on the null and alternate hypothesis distributions of the data. 
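A small simulation makes the variance-versus-mean analogy concrete (Python with NumPy; the sample size, shift and replication count are arbitrary choices for illustration). Because the sample variance of Gaussian data does not depend on the mean, a variance-based test rejects a mean shift at roughly its false-alarm rate, while a mean-based test has real power:

```python
# Power of a mean-based vs. a variance-based statistic against a mean shift.
import numpy as np

rng = np.random.default_rng(0)
n, reps, shift = 20, 10_000, 0.5
null = rng.normal(0.0, 1.0, size=(reps, n))    # H0: mean 0, sd 1
alt = rng.normal(shift, 1.0, size=(reps, n))   # H1: mean shifted, same sd

# 5% critical values taken from the simulated null distribution of each statistic.
crit_mean = np.quantile(null.mean(axis=1), 0.95)
crit_var = np.quantile(null.var(axis=1, ddof=1), 0.95)

print("power, mean-based test:    ", np.mean(alt.mean(axis=1) > crit_mean))
print("power, variance-based test:", np.mean(alt.var(axis=1, ddof=1) > crit_var))
```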
The chi-square test (unlike the weighted average from the paper) is a reasonable choice for data that mutates at a constant rate under the null hypothesis, but mutates at varying rates under the alternate hypothesis. Able806, you made a good point about the contingency table cell frequencies being relatively low, but were wrong when you said ”the data set is actually too small to use the chi square method correctly”. In the low cell frequency case the chi-square test is still effective, but the null hypothesis distribution of the chi-square statistic starts to look less like the chi-square distribution. Thus, p-values calculated using the chi-square distribution may be a bit off. However, Monte Carlo p-values are always imperfect as well because it's impossible to generate an infinite number of random realizations. There are imperfections in p-values generated by analytic and Monte Carlo methods. However, low cell frequencies does not explain the >20x and >2.5x differences between chi-square p-values and p-values from the paper for experiments one and three. The reason for those huge differences was the use of the flawed test statistic (“mutation generation”) in the paper. SJohnson 16:38, 5 March 2009 (EST) SJohnson, the chi-squared test is a valuable statistical tool, but the limitations of the test must be acknowledged. The chi-squared test can only produce valid results if the assumptions that underly the test are not violated. As an analogy, Newtonian models of motion fail to produce accurate results as velocities approach the speed of light; under those circumstances one must switch to a theory that accounts for relativistic effects. It seems that you have simply dismissed the widely-acknowledged fact that the chi-squared test is inappropriate for use in situations where n in any cell is less less than a threshold number. Different authors set different thresholds, but all are well above the numbers seen in your chi-squared analysis - even the most liberal guidelines advise against the chi-squared test when any expected cell frequency is less than one or more than 20% of the table cells are less than 5; others require that expected values in all cells must be more than 5. With smaller amounts of data, the test is insensitive and errs on the side of rejecting the hypothesis. If you attempt your chi-squared statistical analysis with a program that is more sophisticated than MS Excel (as I did), you get an error message indicating that the results are invalid due to low expected cell counts. That issue aside, there are other reasons that the chi-squared test is inappropriate here. As the links above point out, the categories tested must be truly independent; one example is that you can't use the chi-squared test to compare age and ability to kick a field goal by testing the same experimental group twice, one year apart; you have to test one group of age A and a different group of age B. In the case of the Blount paper, the categories are not independent. Even if there were adequate numbers to address the low-expected-frequency problem, this would make the chi-squared an invalid test in this case. There are other significant problems with the use of the chi-squared test in this circumstance, but they can wait until you address these first major problems.--ElyM 12:18, 11 March 2009 (EDT) Wackerly et al. says in general it’s assumed that the cell frequencies are above five so that the chi-square statistic (under the null) is approximately chi-square distributed (see p. 703). 
That book does not say chi-square test results are invalid if frequencies are five or less. Your example of a chi-square test warning message (it said "warning" not "error" as you stated) in Minitab [6] said “approximation probably invalid” referring to the chi-square distribution approximation to the chi-square test statistic’s distribution. Your example did not say “chi-square test invalid”. I agree that when cell frequencies are low, the chi-square test statistic’s distribution starts to deviate from the chi-square distribution. I maintain that this deviation is not enough to explain the >2.5x and >20x differences in the chi-square test p-values and the p-values from the paper. As the numerous links in your post proved, the chi-square test is widely-used by statisticians. Can you give examples of statisticians using mean mutation generation as a test statistic? Also, did your software agree with the chi-square test p-values I presented? Thanks. SJohnson 10:10, 12 March 2009 (EDT) Thank you for giving page references for Wackerly; however it seems we have different editions, since page 703 in my copy (5th ed, 1996) does not deal with chi-squared issues at all. My copy does state the following, on page 622: "Although the mathematical proof is beyond the scope of this text, it can be shown that, when n is large [chi-squared] will possess approximately a chi-square probability distribution in repeated sampling." Then, on page 624: "Experience has shown that cell counts [n sub i] should not be too small in order that the chi-square distribution provide an accurate approximation to the distribution of [chi squared]. As a rule of thumb we require that all expected cell counts equal or exceed 5, although Cochran (1952) has noted that this value can be as low as 1 for some situations." Wackerly then goes on, in the problems sections, to describe the use of the chi-squared test as a "violation of good statistical practice" when "some expected counts [are] <5." It seems that you are already aware that the [chi-square] statistic under the null is no longer chi-square distributed for small n; this is precisely why the test should not be used under those conditions. I can claim to be able to accelerate a 1-kg mass to 10 times the speed of light by applying 1 N of force for 95 years by using F=ma and t= (vf-vi)/a. Plugging the numbers into those equations will produce the same result every time, but the answer is illegitimate because those equations are only valid under certain assumptions, which are violated as velocities approach the speed of light. Similarly, having a statistical program calculate a chi-squared value given the Blount data will produce a number result, but since the assumptions of the test are violated the result is not legitimate. Yes, if I put the Blount data in SAS 9.2, I get the same numerical answer as you do, but I also get the following message: "WARNING: >89% of the cells have expected counts less than 5. Chi-square may not be a valid test." You may argue that that's a warning, not an error; that's a semantic distinction. The reason that the program says that it MAY not be valid is that the chi-squared test skews in the direction of being too conservative at low n values; the test has an acceptable rate of false positives but an unacceptably high rate of false negatives. 
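How far the chi-square distribution approximation actually drifts for a table with such small expected counts can be checked by simulating the statistic's null distribution directly. The sketch below (Python with NumPy/SciPy) assumes the layout implied by the expected counts quoted earlier — 12 generation bins, 6 replicate cultures per bin, 4 Cit+ cultures in total, each culture equally likely to be Cit+ under the null — and is an illustration of the method rather than a re-analysis of the paper's data.

```python
# Simulated null distribution of the 12x2 chi-square statistic vs. the chi2(11)
# approximation, conditioning on 4 Cit+ cultures among 12 bins x 6 replicates.
import numpy as np
from scipy.stats import chi2

bins, reps_per_bin, total_mut = 12, 6, 4
exp_mut = total_mut / bins            # 1/3 expected Cit+ per bin
exp_non = reps_per_bin - exp_mut      # 17/3 expected non-Cit+ per bin

def x2(mut_counts):
    non_counts = reps_per_bin - mut_counts
    return (np.sum((mut_counts - exp_mut) ** 2 / exp_mut)
            + np.sum((non_counts - exp_non) ** 2 / exp_non))

rng = np.random.default_rng(0)
n_sim = 100_000
sim = np.empty(n_sim)
for i in range(n_sim):
    wells = rng.choice(bins * reps_per_bin, size=total_mut, replace=False)
    sim[i] = x2(np.bincount(wells // reps_per_bin, minlength=bins))

obs = x2(np.array(9 * [0] + [1, 1, 2]))          # the experiment-one pattern above
print("observed X^2:          ", round(obs, 2))
print("chi2(11) upper-tail p: ", round(chi2.sf(obs, 11), 3))
print("simulated upper-tail p:", round(np.mean(sim >= obs - 1e-9), 3))
```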
Comparing the results of the Monte Carlo and chi-squared results in this case is like comparing the results of Newtonian and relativistic equations of motion: they can produce very different results from the same input data. For a finite amount of data, the chi-square statistic is never chi-square distributed under the null. The p-values are always approximate regardless of cell frequencies. The approximation becomes more accurate as the amount of data increases, but I don’t believe that this inaccuracy will change p-values that are about 0.2 (for experiments 1 and 3) into statistically significant p-values. How much do you expect the p-values to change if an exact computation is used in place of the chi-square distribution approximation? SJohnson 20:49, 18 March 2009 (EDT) Your last paragraph has a major non sequitur in it: yes, many statisticians use the chi-square test. As long as the assumptions of the test are not violated, it is a valuable tool. That has nothing to do with the validity of using mean mutation generation as a test statistic. 'Mean number of werewolf attacks in Mumbai in the week centered on the new moon, by month, from 1654 to 1798' is a valid test statistic. I am quite sure that it has never been used in a peer-reviewed paper before. That does not mean that I can't perform valid statistical tests on that statistic. If, however, the incorrect test is applied, the results of the analysis will be flawed. Papers apply a (relatively small) standard repertoire of valid tests to a (potentially infinite) number of test statistics. The particular test statistic used in a paper may never have been used before and may never be used again; that does not address the validity of the analysis. In Blount's case, the test is the Monte Carlo analysis, which is also "widely-used by statisticians". There are an infinite number of ways to reduce a data set to a single number. However, it’s foolish to think every method would be effective. I gave an example of a flawed test statistic in an earlier post [7]. Another example of a flawed test statistic is the one used in the paper because it does not always detect deviations from the null hypothesis (see: Significance of E. Coli Evolution Experiments#Test Statistics). Test statistics are typically derived. The likelihood ratio test is a common method used to derive them. The chi-square test for independence is an approximation to the LRT. Where is the derivation saying that mean mutation generation is an appropriate test statistic for this problem? SJohnson 20:49, 18 March 2009 (EDT) We still haven't touched on the issue of the categories not being independent, which by itself is sufficient to invalidate the chi-squared technique. I'm new to this site, so I'm unsure as to the etiquette of making changes to the articles of another person - but the article here should at the very least mention that the chi-square test is being used here in a manner that violates its underlying assumptions in at least two fundamental ways, and the results are therefore suspect.--ElyM 17:34, 12 March 2009 (EDT) When generating random realizations of experiment outcomes, the authors assumed that the total number of mutants was fixed. Thus the paper assumed the numbers of mutants per generation are statistically dependent. Does this seem like a realistic model, or do you think that if the experiments were recreated that the total number of mutants could vary? For example, if experiment one were recreated, would the total number of mutants always be exactly four? 
SJohnson 20:49, 18 March 2009 (EDT) It looks to me as though SJohnson has misinterpreted the application of the chi-squared test in quite a fundamental way. His/her analysis of Blount's data is therefore close to meaningless, regardless of whether the test used by Blount is appropriate or not. In my opinion, the entire page should therefore be deleted. FredFerguson 08:18, 13 March 2009 (EDT) "Fred", perhaps you mistakenly think this is Wikipedia, where censorship and deletion of pages for ideological reasons are common. Not here.--Andy Schlafly 10:23, 14 March 2009 (EDT) Umm... I'm suggesting deletion for mathematical reasons, not ideological reasons. Using an argument filled with mathematical errors to try to support your case only detracts from your credibility. FredFerguson 10:38, 14 March 2009 (EDT) Actually, I think correction is better than deletion. So that's what I've done. FredFerguson 11:01, 14 March 2009 (EDT) I find no credibility in your denial of having ideological reasons.--Andy Schlafly 11:04, 14 March 2009 (EDT)

## Misinterpretation of test

SJohnson, Your analysis misinterprets the test. You say the null hypothesis is that this mutation cannot happen. They saw a mutation (4 mutations, in fact, in the data set you show), so the null hypothesis (as you state it) is disproved. That's perfectly straightforward. I don't know what the "mean mutation generation" test is, but what you're doing when you apply a chi-squared test to this dataset is testing whether the mutations are evenly distributed throughout the generations. Your test says they are, so there's no strong evidence to suppose that mutations are likely to occur in one generation rather than another in the series of tests. Blount's test says they aren't, so it's more likely that the mutation will occur later in the series of tests. I can't tell which test is right without knowing more about the test that Blount used. But that point (the foregoing paragraph) has no bearing at all on the null hypothesis, as you describe it. The mutation appeared, so that means the hypothesis that the mutation can't happen is disproved. Very simple. FredFerguson 21:10, 8 March 2009 (EDT) I never said that “the null hypothesis is that this mutation cannot happen”. The chi-square test statistic I'm using wouldn’t be defined if the null hypothesis mutation rate was zero because the $E\left[n_{i,j}\right]$ term in the denominator of the statistic (see above equation) would be zero. The test statistic from the paper is the average of the generation numbers of observed mutations. For experiment one this number is $\frac{1}{4}\left(30500+31500+2\times32500\right)= 31750.$ The same number is shown in Table 2 of the paper. SJohnson 10:46, 9 March 2009 (EDT) SJohnson, the way you're calculating the chi-squared statistic implies that you're testing the null hypothesis of a constant mutation rate over time against an alternative hypothesis of a mutation rate which varies over time. FredFerguson 11:02, 9 March 2009 (EDT) As it currently stands, the article makes the following statement: "The expected outcomes under the null hypothesis (no evolutionary innovation occurs) are also shown."
This misstates the null hypothesis of the paper, which is elaborated in the Introduction section of the paper, and repeated in the section Statistical Analysis of the Replay Experiments: "For each experiment, we compared the observed mean generation of those clones that yielded Cit+ variants to the mean expected under the null hypothesis that clones from all generations have equal likelihood. The null thus corresponds to the rare-mutation hypothesis laid out in the Introduction." [1] The article also continues to describe 'mean mutation generation' as a test rather than a statistic to which the Monte Carlo test was applied.--ElyM 17:24, 14 March 2009 (EDT) I'm a bit confused about why the Chi-squared test, which we're told compares the results to a null hypothesis of a constant mutation rate, seems insensitive to the generations in which the Cit+ mutations are found. Instead, the chi-square test seems only to be evaluating whether the frequencies of Cit+ mutations in any particular generation are 'expected'. Thus the test is asking whether finding a distribution (e.g. in the first experiment) across nine periods that have no mutations, two periods that have one mutation and one period with two mutations is a statistically significant deviation from what you'd expect if the mutations were randomly distributed. The number returned from the function is the same regardless of the order of Cit+ results. The number of mutations per bin is not the only question being asked. Instead it's the order and temporal distribution of Cit+ mutants that the analyses probably need to confront. It's not whether one can get nine no-mutants, two single mutants and one double-mutant result, it's a matter of when they occur and whether that distribution affects the significance of the results. Blount's hypothesis is that mutations should appear later in the experiment. When formulating a suitable null hypothesis, wouldn't one want to take the timing of Cit+ mutants into consideration too?--Argon 22:25, 18 March 2009 (EDT) ## Unreferenced Claims I deleted the claim that mean mutation generation is an appropriate test statistic because no reference was produced that backs that claim. No reference was provided to back the claim that the chi-square test p-values are always conservative, either. SJohnson 12:57, 14 March 2009 (EDT) The reference is Everitt. I'll check that I put it in the right place. FredFerguson 13:30, 14 March 2009 (EDT) There was a typo in my edit summaries on the talk page and the main page. I meant to say "Removed unsupported claims" rather than "Removed supported claims". SJohnson 13:13, 14 March 2009 (EDT) I do not believe that anyone has claimed that 'chi-square test p-values are always conservative'. The claim that has been made is that under certain circumstances, namely low n and low individual cell values, the chi-square test is an invalid test; that under those circumstances the power of the test is low and it becomes impossible to reject the null hypothesis even when it is false. You may have missed the pertinent sections in my links above, so I will directly quote the relevant sections. All the quoted sections refer to chi-square testing in particular. Any bolding below is mine. This edit claimed that chi-square test p-values are conservative, but didn't back that claim with a reference: [8]. SJohnson 20:49, 18 March 2009 (EDT) "Assumptions: Even though a nonparametric statistic does not require a normally distributed population, there still are some restrictions regarding its use. 1.
Representative sample (Random) 2. The data must be in frequency form (nominal data) or greater. 3. The individual observations must be independent of each other. 4. Sample size must be adequate. In a 2 x 2 table, Chi Square should not be used if n is less than 20. In a larger table, no expected value should be less than 1, and not more than 20% of the variables can have expected values of less than 5. 5. Distribution basis must be decided on before the data is collected. 6. The sum of the observed frequencies must equal the sum of the expected frequencies." "Assumptions: • Random sample data are assumed. As with all significance tests, if you have population data, then any table differences are real and therefore significant. If you have non-random sample data, significance cannot be established, though significance tests are nonetheless sometimes utilized as crude "rules of thumb" anyway. • A sufficiently large sample size is assumed, as in all significance tests. Applying chi-square to small samples exposes the researcher to an unacceptable rate of Type II errors. There is no accepted cutoff. Some set the minimum sample size at 50, while others would allow as few as 20. Note chi-square must be calculated on actual count data, not substituting percentages, which would have the effect of pretending the sample size is 100. • Adequate cell sizes are also assumed. Some require 5 or more, some require more than 5, and others require 10 or more. A common rule is 5 or more in all cells of a 2-by-2 table, and 5 or more in 80% of cells in larger tables, but no cells with zero count. When this assumption is not met, Yates' correction is applied. • Independence. Observations must be independent. The same observation can only appear in one cell. This means chi-square cannot be used to test correlated data (ex., before-after, matched pairs, panel data). • Similar distribution. Observations must have the same underlying distribution. • Known distribution. The hypothesized distribution is specified in advance, so that the number of observations that are expected to appear each cell in the table can be calculated without reference to the observed values. Normally this expected value is the crossproduct of the row and column marginals divided by the sample size. • Non-directional hypotheses are assumed. Chi-square tests the hypothesis that two variables are related only by chance. If a significant relationship is found, this is not equivalent to establishing the researcher's hypothesis that A causes B, or that B causes A. * Finite values. Observations must be grouped in categories. * Normal distribution of deviations (observed minus expected values) is assumed. Note chi-square is a nonparametric test in the sense that is does not assume the parameter of normal distribution for the data -- only for the deviations. * Data level. No assumption is made about level of data. Nominal, ordinal, or interval data may be used with chi-square tests." "Assumptions: -None of the expected values may be less than 1 -No more than 20% of the expected values may be less than 5" [4] "When performing a chi-square test, your data must satisfy important assumptions. Although these assumptions may be stated differently in different textbooks, they generally assert that: 1)The sample must be randomly drawn from the population 2)The sample size, n, must be large enough so that the expected cell count in each cell is greater than or equal to 5. 
Both assumptions must be met in the process of collecting your data, and violations of the second assumption will appear in the Minitab output when you run the analysis. ... You may wonder why the second assumption is necessary for performing the chi-square test. The second assumption arises because the distribution of counts under the null hypothesis is multinomial, and the normal distribution can be used to approximate the multinomial distribution if the sample size is sufficiently large and the probability parameters aren't too small. It can be shown via the Central Limit Theorem that the multinomial distribution converges to the normal distribution as the sample size approaches infinity; however, there is no easy way to show mathematically how and when the convergence fails." "The chi-square test is simpler to calculate but yields only an approximate P value. ... You should definitely avoid the chi-square test when the numbers in the contingency table are very small (any number less than about six)." [6] "The most important things to remember to get a valid χ2 test are that the expected values are not too small in any bin (certainly 5 or more), and that the degrees of freedom are properly evaluated. Unless you have a very large amount of data, the test is not very sensitive and errs on the side of safety. If you get a significant result, however, it is not likely to be wrong." [7] "The critical assumptions of the chi-square test for k independent samples are similar to those for the chi-square test for two independent samples. ... 4. No more than 20% of the cells may have expected frequencies of less than 5, and no cell should have an expected frequency of less than 1. The rule given in Assumption 4 is particularly important for a contingency table that is larger than 2X2" [8] "Special problems with small expected cell frequencies for the chi-square test: The chi-square test involves using the chi-square distribution to approximate the underlying exact distribution. The approximation becomes better as the expected cell frequencies grow larger, and may be inappropriate for tables with very small expected cell frequencies. For tables with expected cell frequencies less than 5, the chi-square approximation may not be reliable. A standard (and conservative) rule of thumb (due to Cochran) is to avoid using the chi-square test for tables with expected cell frequencies less than 1, or when more than 20% of the table cells have expected cell frequencies less than 5. Another rule of thumb (due to Roscoe and Byars) is that the average expected cell frequency should be at least 1 when the expected cell frequencies are close to equal, and 2 when they are not. (If the chosen significance level is 0.01 instead of 0.05, then double these numbers.) Koehler and Larntz suggest that if the total number of observations is at least 10, the number categories is at least 3, and the square of the total number of observations is at least 10 times the number of categories, then the chi-square approximation should be reasonable. Care should be taken when cell categories are combined (collapsed together) to fix problems of small expected cell frequencies. Collapsing can destroy evidence of non-independence, so a failure to reject the null hypothesis for the collapsed table does not rule out the possibility of non-independence in the original table. As with most statistical tests, the power of the chi-square test increases with a larger number of observations. 
If there are too few observations, it may be impossible to reject the null hypothesis even if it is false." [9]--ElyM 17:24, 14 March 2009 (EDT) Thanks for this really excellent contribution, ElyM. The only thing I'd like to add is in relation to your initial statement, "I do not believe that anyone has claimed that 'chi-square test p-values are always conservative'". The question of whether a test is conservative in a particular situation is probabilistic. One can determine whether a test is likely to generate a p-value which is too high in a particular situation (e.g. for a chi-squared test, when there are lots of small expected values) but one needs an exact test (such as an appropriate Monte Carlo randomisation test) to determine whether the p-value in any particular test is in fact excessively high. This edit also claimed that chi-square test p-values are conservative, but didn't back that claim with a reference: [9]. SJohnson 20:49, 18 March 2009 (EDT) I hope careful reading of your very clear description will put SJohnson's mind at rest on this subject. FredFerguson 18:11, 14 March 2009 (EDT) ElyM, you've provided nothing to address the basic flaw that "The paper incorrectly applied a Monte Carlo resampling test to exclude the null hypothesis for rarely occurring events." See Flaws in Lenski Study. Also, do not impose your view on the content page until after SJohnson has had an opportunity to respond to your posting. As to "Fred", his put-downs are getting tiresome and I'm going to review his edit pattern now to see if he's been contributing anything of value to this site.--Andy Schlafly 14:06, 15 March 2009 (EDT) Mr. Schlafly, per your request I have not added anything to the content page as SJohnson has not yet responded to my posts. Since all of my comments have been in regards to SJohnson's use of the chi-square test in this particular article, I'm not sure why you expect me to address Blout's use of Monte Carlo - that issue seems to be addressed on the Flaws in Lenski Study page. SJohnson has added a reformulation of the chi-square test for two possible outcomes, and stated that the chi-square test is at a minimum when all success probabilities are equal. He then extrapolates from this to claim that the chi-square test is an effective test for the data from Blount. The reformulation of the equations for two possible outcomes does not address the underlying problem that the chi-square test has universally accepted parameters outside of which it is considered an invalid test; I have provided references for these parameters and shown that the data from Blount lies outside them. None of the expected cells in SJohnson's analysis have values above one, and the total n is four. SJohnson's own reference states that the application of the chi-square test in this circumstance is a "violation of good statistical practice". Analogously, combining F=ma and t=(vf-vi)/a into t=(vf-vi)m/F and showing that t is a minimum when m approaches zero does not address the fact that those Newtonian equations do not apply as velocities approach the speed of light. The legitimacy of Blount's arguments cannot be determined by the application of illegitimate counterarguments. If SJohnson or others can point to references from the statistical literature that show that Blount has made methodological errors - as I have been able to do with SJohnson's chi-square analysis - I would welcome their input, and no doubt Conservapedia's other readers would as well, and this page would be greatly improved. 
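To make the expected-cell-count dispute concrete, here is a simplified sketch in Python. It is a hypothetical reconstruction, not necessarily the exact table used in the article: it assumes the twelve equally weighted generation bins and the 0/1/2 mutant counts described earlier on this page, and it shows that every expected count is 4/12 while the chi-square p-value still comes out near 0.2, the figure quoted above.

```python
from scipy.stats import chisquare

# Counts of Cit+ mutants per generation bin, as described above for experiment one:
# nine bins with 0, two bins with 1, one bin with 2 (4 mutants across 12 bins).
observed = [2, 1, 1] + [0] * 9

# With only 4 mutants over 12 bins, every expected count is 4/12 = 0.33,
# far below the "expected count >= 5" rules of thumb quoted in this discussion.
stat, p = chisquare(observed)   # expected defaults to the uniform mean, 4/12 per bin
print(stat, p)                  # statistic 14.0 on 11 degrees of freedom, p roughly 0.2
```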
I have not seen a rebuttal from SJohnson in the four days since my last post, although he has added new material to the content page since then. In light of this, I would appreciate some guidelines as to when it is appropriate for me to add my information and references to the content page. I can add citations from the primary mathematical literature if necessary, but in general I find that these are less helpful as they are not easily accessible by readers without access to academic libraries.--ElyM 18:07, 18 March 2009 (EDT) You say, "I'm not sure why you expect me to address Blout's use of Monte Carlo." The reason is obvious: the title of the content page is the "Significance of E. Coli Evolution Experiments." You haven't addressed the inappropriateness of using Monte Carlo simulations for assessing the significance rarely occurring events, which was central to Lenski's statistical claims. I suggest you address this flaw if you want to be taken seriously.--Andy Schlafly 23:12, 18 March 2009 (EDT) ASchlafly, again per your request, and based on your statement regarding the "inappropriateness of using Monte Carlo simulations for assessing the significance of rarely occurring events", I have spent the last several days reviewing the literature available to me on Monte Carlo and other resampling techniques, looking for ways in which Blount may have made a methodological error of the sort that SJohnson has made. I have been unable to find any examples of authors suggesting that Monte Carlo be avoided for low n, or for events with low probability regardless of n, much less providing specific cutoff numbers as are seen in the references that I provided for the chi-square test. Similarly, the technique that Blount used does not require/assume that categories are unrelated, as the chi-square test does. Of course, the absence of evidence is not evidence of absence, and I may have misinterpreted the basis of your objection. At this point I'll need you to explain your objection in more detail if you wish me to find the appropriate literature addressing your concerns. Do you believe that the number of resamplings was too low in Blount's paper? That the analysis should have been performed with a software package other than Statistics101? Some other procedural issue? Some issue of interpretation? The statistical problem that Blount must address is straightforward: given a distribution of mutant cultures that appears to be skewed toward the higher generations, what is the probability that this same amount of skew (or a greater degree) could arise by chance, given the null hypothesis that every generation is equally likely to produce a mutant? Interestingly, in the case of the first replay experiment, the total number of ways to randomly select (equal probability, no replacement) four cultures from seventy-two is 72x71x70x69, or 24,690,960. This number is small enough that a program can brute-force-calculate the 'mean generation number' of all possible combinations of four cultures in a reasonable amount of time. An experimentally-derived 'mean generation number' can be checked against this exhaustive list, and the number of means equal to or larger than the experimental mean can be found exactly. Converting this number to a percentage of 24,690,960 provides an exact p-value for any given experimental 'mean generation number'. This exhaustive approach is different than the Monte Carlo technique, in that all possible outcomes are examined, rather than a random subset of all possible outcomes. 
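A minimal Python sketch of the exhaustive calculation just described. The list of 72 source generations is not reproduced on this page, so `source_generations` below is a hypothetical placeholder (six cultures from each of twelve 500-generation time points); only with the actual list from the paper would the printed value match the reported 0.008457.

```python
from itertools import combinations

# Hypothetical placeholder layout: six cultures from each of twelve 500-generation
# time points (27,000 through 32,500). Substitute the actual 72 source generations
# from the paper to reproduce the reported exact p-value of 0.008457.
source_generations = [g for g in range(27000, 33000, 500) for _ in range(6)]

observed_mean = 31750   # mean generation of the four Cit+ cultures (Table 2 of the paper)

as_extreme = 0
total = 0
for combo in combinations(source_generations, 4):   # C(72, 4) = 1,028,790 unordered picks
    total += 1
    if sum(combo) / 4 >= observed_mean:
        as_extreme += 1

# Counting unordered combinations gives the same proportion as the 24,690,960
# ordered selections mentioned above, because the mean does not depend on order.
print(as_extreme / total)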
For the first replay experiment, it provides a way to independently check Blount's Monte Carlo results. This approach is not possible for the second and third replay experiments, in which the total number of possible combinations becomes impractically large: 340!/335! = 4.41 x10^12 and 2800!/2792! = 3.74 x 10^27, respectively. I asked a colleague to run just such a brute-force program for me on the first replay data. I also ran several Monte Carlo simulations (not using Statsistics101) with Blount's data, using twenty-five million, one hundred million, and 493,819,200 resamplings - note that this last is twenty times the number of all possible combinations of 4 samples drawn without replacement from 72. The p-values from the 25M, 100M, and 493M Monte Carlo resamplings (0.00844, 0.00846, and 0.00846, respectively) compare favorably with Blount's 1M value of 0.0085 and the non-Monte-Carlo brute-force exact calculation, which provides a p-value of 0.008457. Thus it appears that Blount's statistical results are confirmed by a non-Monte Carlo technique, at least for the first replay experiment. What do you get for the experiment two p-value using the method from the paper and at least ten million realizations? SJohnson 08:48, 25 March 2009 (EDT) For the second replay experiment, Blount reports that one million resamplings gives a p-value of 0.0007. When I run the Monte Carlo simulations, ten million resamplings give a p-value of 0.00060; one hundred million resamplings give a p-value of 0.00062, and one billion resamplings give a p of 0.00061. I got 0.0006 using ten million realizations and the flawed test statistic. The paper had 0.0007. The authors obviously didn't use enough Monte Carlo realizations. I'm going to add this to the list of flaws in the paper. [10] SJohnson 08:51, 26 March 2009 (EDT) As to the brute-force method for the second replay: the 4.41x10^12 combinations of five cultures picked from 340 actually represents 'only' 36.8 billion unique combinations, since for the purposes of calculating a mean generation value, the ordering of the cultures does not matter: 0, 0, 0, 0, 10 gives the same mean as 10, 0, 0, 0, 0 and 0, 10, 0, 0, 0. With brute force, it turns out that out of the 36,760,655,568 unique combinations possible in the second replay, 22,536,306 have means that are greater than or equal to 32,100. 22,536,306 / 36,760,655,568 = 0.000613 = the exact p-value derived from exhaustive evaluation rather than Monte Carlo. The third replay has 9.27 x 10^22 unique combinations; at a billion comparisons a minute it would take over 170,000,000 years to check them all.--ElyM 12:36, 25 March 2009 (EDT) ---------- Here are pointers to the freely available Statistics 101 package [10] and the actual programs run through the package by Blount et al. [11]. The stats package is written in Java and should run under many operating systems. A 10 million trial run of the second experiment took a bit of time and yielded a p-value of 0.00061. Ten separate, one-million trial runs produced an average p-value of 0.00061 (std.dev=0.00002, n=10). 
Even with trial sizes of 5K, the numbers averaged about 0.0006 (std.dev=0.0004 n=10).--Argon 20:52, 25 March 2009 (EDT) My intention is not to get caught up in a digression about Monte Carlo, though - I'd rather keep the focus on the fact that the main article should acknowledge that SJohnson is using chi-square in a way that violates accepted guidelines; this remains true whether Blount's analysis is valid or not.--ElyM 12:13, 23 March 2009 (EDT) ## Caveat If SJohnson can provide citations to authors who support the use of chi-square where all expected cell counts are less than one, or where categories are not independent, I look forward to evaluating them.--ElyM 12:35, 27 March 2009 (EDT) Wackerly et al. does not say to avoid the test because of low cell frequencies. You're still making a false claim that p-values are always high if cell frequencies are low. The last paragraph you added is just your opinions about the test being inappropriate. Modeling each trial as a statistically independent Bernoulli trial is reasonable. Thus, the chi-square test is appropriate. The assumption from the paper that the numbers of mutants per experiment would never change is an example of a bad way to model an experiment. Note that all p-values in Blount et al. were calculated under that unreasonable assumption. Do you think that if these experiments were recreated, that the total number of mutants would always be exactly the same? SJohnson 08:58, 28 March 2009 (EDT)
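A minimal sketch of the kind of Monte Carlo run reported above for the second replay experiment: five mutant cultures drawn without replacement from 340, with the mean compared against 32,100. Here too, `source_generations` is a hypothetical placeholder; with the real generation list, runs of this kind are reported above to give p-values near 0.0006.

```python
import random

random.seed(0)   # make a single run reproducible

# Hypothetical placeholder: 340 source generations (twenty cultures from each of
# seventeen 500-generation time points). Substitute the actual generation list
# from the second replay experiment to compare against the reported ~0.0006.
source_generations = [g for g in range(24500, 33000, 500) for _ in range(20)]

observed_mean = 32100    # threshold used in the exhaustive check above
trials = 1_000_000

as_extreme = 0
for _ in range(trials):
    draw = random.sample(source_generations, 5)   # five mutant cultures, no replacement
    if sum(draw) / 5 >= observed_mean:
        as_extreme += 1

print(as_extreme / trials)
```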
linear-base-0.1.0: Standard library for linear types. Streaming.Linear Synopsis # Documentation The Stream data type is an effectful series of steps with some payload value at the bottom. The steps are represented with functors. The effects are represented with some control monad. (Control monads must be bound to exactly once; see the documentation in linear-base to learn more about control monads, control applicatives and control functors.) In words, a Stream f m r is either a payload of type r, or a step of type f (Stream f m r) or an effect of type m (Stream f m r) where f is a Control.Functor and m is a Control.Monad. This module exports combinators that pertain to this general case. Some of these are quite abstract and pervade any use of the library, e.g. maps :: (forall x . f x %1-> g x) -> Stream f m r %1-> Stream g m r mapped :: (forall x. f x %1-> m (g x)) -> Stream f m r %1-> Stream g m r concats :: Stream (Stream f m) m r %1-> Stream f m r (assuming here and thoughout that m or n satisfies a Control.Monad constraint, and f or g a Control.Functor constraint). Others are surprisingly determinate in content: chunksOf :: Int -> Stream f m r %1-> Stream (Stream f m) m r splitsAt :: Int -> Stream f m r %1-> Stream f m (Stream f m r) intercalates :: Stream f m () -> Stream (Stream f m) m r %1-> Stream f m r unzips :: Stream (Compose f g) m r %1-> Stream f (Stream g m) r separate :: Stream (Sum f g) m r -> Stream f (Stream g m) r -- cp. partitionEithers unseparate :: Stream f (Stream g) m r -> Stream (Sum f g) m r groups :: Stream (Sum f g) m r %1-> Stream (Sum (Stream f m) (Stream g m)) m r One way to see that any streaming library needs some such general type is that it is required to represent the segmentation of a stream, and to express the equivalents of Prelude/Data.List combinators that involve 'lists of lists' and the like. See for example this post on the correct expression of a streaming 'lines' function. The module Streaming.Prelude exports combinators relating to > Stream (Of a) m r where Of a r = !a :> r is a left-strict pair. This expresses the concept of a Producer or Source or Generator and easily inter-operates with types with such names in e.g. conduit, iostreams and pipes. # The Stream and Of types The Stream data type is equivalent to FreeT and can represent any effectful succession of steps, where the form of the steps or commands is specified by the first (functor) parameter. The effects are performed exactly once since the monad is a Control.Monad from linear-base. data Stream f m r = Step !(f (Stream f m r)) | Effect (m (Stream f m r)) | Return r The producer concept uses the simple functor (a,_) - or the stricter Of a _ . Then the news at each step or layer is just: an individual item of type a. Since Stream (Of a) m r is equivalent to Pipe.Producer a m r, much of the pipes Prelude can easily be mirrored in a streaming Prelude. Similarly, a simple Consumer a m r or Parser a m r concept arises when the base functor is (a -> _) . Stream ((->) input) m result consumes input until it returns a result. To avoid breaking reasoning principles, the constructors should not be used directly. 
A pattern-match should go by way of inspect - or, in the producer case, next data Stream f m r where Source # Constructors Step :: !(f (Stream f m r)) -> Stream f m r Effect :: m (Stream f m r) -> Stream f m r Return :: r -> Stream f m r #### Instances Instances details Functor f => MonadTrans (Stream f) Source # Instance detailsDefined in Streaming.Internal.Type Methodslift :: Monad m => m a %1 -> Stream f m a Source # (Functor m, Functor f) => Functor (Stream f m) Source # Instance detailsDefined in Streaming.Internal.Type Methodsfmap :: (a %1 -> b) -> Stream f m a %1 -> Stream f m b Source # (Functor m, Functor f) => Applicative (Stream f m) Source # Instance detailsDefined in Streaming.Internal.Type Methodspure :: a -> Stream f m a Source #(<*>) :: Stream f m (a %1 -> b) %1 -> Stream f m a %1 -> Stream f m b Source #liftA2 :: (a %1 -> b %1 -> c) -> Stream f m a %1 -> Stream f m b %1 -> Stream f m c Source # (Functor m, Functor f) => Monad (Stream f m) Source # Instance detailsDefined in Streaming.Internal.Type Methods(>>=) :: Stream f m a %1 -> (a %1 -> Stream f m b) %1 -> Stream f m b Source #(>>) :: Stream f m () %1 -> Stream f m a %1 -> Stream f m a Source # (Functor m, Functor f) => Applicative (Stream f m) Source # Instance detailsDefined in Streaming.Internal.Type Methodspure :: a %1 -> Stream f m a Source #(<*>) :: Stream f m (a %1 -> b) %1 -> Stream f m a %1 -> Stream f m b Source #liftA2 :: (a %1 -> b %1 -> c) %1 -> Stream f m a %1 -> Stream f m b %1 -> Stream f m c Source # (Functor m, Functor f) => Functor (Stream f m) Source # Instance detailsDefined in Streaming.Internal.Type Methodsfmap :: (a %1 -> b) %1 -> Stream f m a %1 -> Stream f m b Source # data Of a b where Source # A left-strict pair; the base functor for streams of individual elements. Constructors (:>) :: !a -> b -> Of a b infixr 5 #### Instances Instances details Functor (Of a) Source # Instance detailsDefined in Streaming.Internal.Type Methodsfmap :: (a0 %1 -> b) -> Of a a0 %1 -> Of a b Source # Functor (Of a) Source # Instance detailsDefined in Streaming.Internal.Type Methodsfmap :: (a0 %1 -> b) %1 -> Of a a0 %1 -> Of a b Source # # Constructing a Stream on a given functor yields :: (Monad m, Functor f) => f r %1 -> Stream f m r Source # yields is like lift for items in the streamed functor. It makes a singleton or one-layer succession. lift :: (Control.Monad m, Control.Functor f) => m r %1-> Stream f m r yields :: (Control.Monad m, Control.Functor f) => f r %1-> Stream f m r Viewed in another light, it is like a functor-general version of yield: S.yield a = yields (a :> ()) effect :: (Monad m, Functor f) => m (Stream f m r) %1 -> Stream f m r Source # Wrap an effect that returns a stream effect = join . lift wrap :: (Monad m, Functor f) => f (Stream f m r) %1 -> Stream f m r Source # Wrap a new layer of a stream. So, e.g. S.cons :: Control.Monad m => a -> Stream (Of a) m r %1-> Stream (Of a) m r S.cons a str = wrap (a :> str) and, recursively: S.each' :: Control.Monad m => [a] -> Stream (Of a) m () S.each' = foldr (\a b -> wrap (a :> b)) (return ()) The two operations wrap :: (Control.Monad m, Control.Functor f) => f (Stream f m r) %1-> Stream f m r effect :: (Control.Monad m, Control.Functor f) => m (Stream f m r) %1-> Stream f m r are fundamental. We can define the parallel operations yields and lift in terms of them yields :: (Control.Monad m, Control.Functor f) => f r %1-> Stream f m r yields = wrap . 
Control.fmap Control.return lift :: (Control.Monad m, Control.Functor f) => m r %1-> Stream f m r lift = effect . Control.fmap Control.return replicates :: (HasCallStack, Monad m, Functor f) => Int -> f () -> Stream f m () Source # Repeat a functorial layer, command or instruction a fixed number of times. replicatesM :: forall f m. (Monad m, Functor f) => Int -> m (f ()) -> Stream f m () Source # replicatesM n repeats an effect containing a functorial layer, command or instruction n times. unfold :: (Monad m, Functor f) => (s %1 -> m (Either r (f s))) -> s %1 -> Stream f m r Source # untilJust :: forall f m r. (Monad m, Applicative f) => m (Maybe r) -> Stream f m r Source # streamBuild :: (forall b. (r %1 -> b) -> (m b %1 -> b) -> (f b %1 -> b) -> b) -> Stream f m r Source # Reflect a church-encoded stream; cp. GHC.Exts.build streamFold return_ effect_ step_ (streamBuild psi) = psi return_ effect_ step_ delays :: forall f r. Applicative f => Double -> Stream f IO r Source # # Transforming streams maps :: forall f g m r. (Monad m, Functor f) => (forall x. f x %1 -> g x) -> Stream f m r %1 -> Stream g m r Source # Map layers of one functor to another with a transformation. maps id = id maps f . maps g = maps (f . g) mapsPost :: forall m f g r. (Monad m, Functor g) => (forall x. f x %1 -> g x) -> Stream f m r %1 -> Stream g m r Source # Map layers of one functor to another with a transformation. mapsPost id = id mapsPost f . mapsPost g = mapsPost (f . g) mapsPost f = maps f mapsPost is essentially the same as maps, but it imposes a Control.Functor constraint on its target functor rather than its source functor. It should be preferred if Control.fmap is cheaper for the target functor than for the source functor. mapsM :: forall f g m r. (Monad m, Functor f) => (forall x. f x %1 -> m (g x)) -> Stream f m r %1 -> Stream g m r Source # Map layers of one functor to another with a transformation involving the base monad. maps is more fundamental than mapsM, which is best understood as a convenience for effecting this frequent composition: mapsM phi = decompose . maps (Compose . phi) The streaming prelude exports the same function under the better name mapped, which overlaps with the lens libraries. mapsMPost :: forall m f g r. (Monad m, Functor g) => (forall x. f x %1 -> m (g x)) -> Stream f m r %1 -> Stream g m r Source # Map layers of one functor to another with a transformation involving the base monad. mapsMPost is essentially the same as mapsM, but it imposes a Control.Functor constraint on its target functor rather than its source functor. It should be preferred if Control.fmap is cheaper for the target functor than for the source functor. mapsPost is more fundamental than mapsMPost, which is best understood as a convenience for effecting this frequent composition: mapsMPost phi = decompose . mapsPost (Compose . phi) The streaming prelude exports the same function under the better name mappedPost, which overlaps with the lens libraries. mapped :: forall f g m r. (Monad m, Functor f) => (forall x. f x %1 -> m (g x)) -> Stream f m r %1 -> Stream g m r Source # Map layers of one functor to another with a transformation involving the base monad. This could be trivial, e.g. let noteBeginning text x = (fromSystemIO (System.putStrLn text)) Control.>> (Control.return x) this is completely functor-general maps and mapped obey these rules: maps id = id mapped return = id maps f . maps g = maps (f . g) mapped f . mapped g = mapped (f <=< g) maps f . mapped g = mapped (fmap f . g) mapped f . 
maps g = mapped (f <=< fmap g) maps is more fundamental than mapped, which is best understood as a convenience for effecting this frequent composition: mapped phi = decompose . maps (Compose . phi) mappedPost :: forall m f g r. (Monad m, Functor g) => (forall x. f x %1 -> m (g x)) -> Stream f m r %1 -> Stream g m r Source # A version of mapped that imposes a Control.Functor constraint on the target functor rather than the source functor. This version should be preferred if Control.fmap on the target functor is cheaper. hoistUnexposed :: forall f m n r. (Monad m, Functor f) => (forall a. m a %1 -> n a) -> Stream f m r %1 -> Stream f n r Source # A less-efficient version of hoist that works properly even when its argument is not a monad morphism. groups :: forall f g m r. (Monad m, Functor f, Functor g) => Stream (Sum f g) m r %1 -> Stream (Sum (Stream f m) (Stream g m)) m r Source # Group layers in an alternating stream into adjoining sub-streams of one type or another. # Inspecting a stream inspect :: forall f m r. Monad m => Stream f m r %1 -> m (Either r (f (Stream f m r))) Source # Inspect the first stage of a freely layered sequence. Compare Pipes.next and the replica Streaming.Prelude.next. This is the uncons for the general unfold.
unfold inspect = id
Streaming.Prelude.unfoldr Streaming.Prelude.next = id
# Splitting and joining Streams splitsAt :: forall f m r. (HasCallStack, Monad m, Functor f) => Int -> Stream f m r %1 -> Stream f m (Stream f m r) Source # Split a succession of layers after some number, returning a streaming or effectful pair.
>>> rest <- S.print $ S.splitAt 1 $ each' [1..3]
1
>>> S.print rest
2
3
splitAt 0 = return
(\stream -> splitAt n stream >>= splitAt m) = splitAt (m+n)
Thus, e.g.
>>> rest <- S.print $ (\s -> splitsAt 2 s >>= splitsAt 2) $ each' [1..5]
1
2
3
4
>>> S.print rest
5
chunksOf :: forall f m r. (HasCallStack, Monad m, Functor f) => Int -> Stream f m r %1 -> Stream (Stream f m) m r Source # Break a stream into substreams each with n functorial layers.
>>> S.print $ mapped S.sum $ chunksOf 2 $ each' [1,1,1,1,1]
2
2
1
concats :: forall f m r. (Monad m, Functor f) => Stream (Stream f m) m r %1 -> Stream f m r Source # Dissolves the segmentation into layers of Stream f m layers. intercalates :: forall t m r x. (Monad m, Monad (t m), MonadTrans t, Consumable x) => t m x -> Stream (t m) m r %1 -> t m r Source # Interpolate a layer at each segment. This specializes to e.g. intercalates :: Stream f m () -> Stream (Stream f m) m r %1-> Stream f m r # Zipping, unzipping, separating and unseparating streams unzips :: forall f g m r. (Monad m, Functor f, Functor g) => Stream (Compose f g) m r %1 -> Stream f (Stream g m) r Source # separate :: forall f g m r. (Monad m, Functor f, Functor g) => Stream (Sum f g) m r -> Stream f (Stream g m) r Source # Given a stream on a sum of functors, make it a stream on the left functor, with the streaming on the other functor as the governing monad. This is useful for acting on one or the other functor with a fold, leaving the other material for another treatment. It generalizes partitionEithers, but actually streams properly.
>>> let odd_even = S.maps (S.distinguish even) $ S.each' [1..10::Int]
>>> :t separate odd_even
separate odd_even :: Monad m => Stream (Of Int) (Stream (Of Int) m) ()
Now, for example, it is convenient to fold on the left and right values separately:
>>> S.toList $ S.toList $ separate odd_even
[2,4,6,8,10] :> ([1,3,5,7,9] :> ())
Or we can write them to separate files or whatever:
>>> S.writeFile "even.txt" . S.show $ S.writeFile "odd.txt" . S.show $ S.separate odd_even
>>> :! cat even.txt
2
4
6
8
10
>>> :! cat odd.txt
1
3
5
7
9
Of course, in the special case of Stream (Of a) m r, we can achieve the above effects more simply by using copy
>>> S.toList . S.filter even $ S.toList . S.filter odd $ S.copy $ each [1..10::Int]
[2,4,6,8,10] :> ([1,3,5,7,9] :> ())
But separate and unseparate are functor-general. unseparate :: (Monad m, Functor f, Functor g) => Stream f (Stream g m) r -> Stream (Sum f g) m r Source # decompose :: forall f m r. (Monad m, Functor f) => Stream (Compose m f) m r %1 -> Stream f m r Source # Rearrange a succession of layers of the form Compose m (f x). We could as well define decompose by mapsM: decompose = mapped getCompose but mapped is best understood as: mapped phi = decompose . maps (Compose . phi) since maps and hoist are the really fundamental operations that preserve the shape of the stream: maps :: (Control.Monad m, Control.Functor f) => (forall x. f x %1-> g x) -> Stream f m r %1-> Stream g m r hoist :: (Control.Monad m, Control.Functor f) => (forall a. m a %1-> n a) -> Stream f m r %1-> Stream f n r expand :: forall f m r g h. (Monad m, Functor f) => (forall a b. (g a %1 -> b) -> f a %1 -> h b) -> Stream f m r %1 -> Stream g (Stream h m) r Source # If Of had a Comonad instance, then we'd have copy = expand extend See expandPost for a version that requires a Control.Functor g instance instead. expandPost :: forall f m r g h. (Monad m, Functor g) => (forall a b. (g a %1 -> b) -> f a %1 -> h b) -> Stream f m r %1 -> Stream g (Stream h m) r Source # If Of had a Comonad instance, then we'd have copy = expandPost extend See expand for a version that requires a Control.Functor f instance instead. # Eliminating a Stream mapsM_ :: (Functor f, Monad m) => (forall x. f x %1 -> m x) -> Stream f m r %1 -> m r Source # Map each layer to an effect, and run them all. run :: Monad m => Stream m m r %1 -> m r Source # Run the effects in a stream that merely layers effects. streamFold :: (Functor f, Monad m) => (r %1 -> b) -> (m b %1 -> b) -> (f b %1 -> b) -> Stream f m r %1 -> b Source # streamFold reorders the arguments of destroy to be more akin to foldr. It is more convenient to query in ghci to figure out what kind of 'algebra' you need to write.
>>> :t streamFold Control.return Control.join
(Control.Monad m, Control.Functor f) => (f (m a) %1-> m a) -> Stream f m a %1-> m a -- iterT
>>> :t streamFold Control.return (Control.join . Control.lift)
(Control.Monad m, Control.Monad (t m), Control.Functor f, Control.MonadTrans t) => (f (t m a) %1-> t m a) -> Stream f m a %1-> t m a -- iterTM
>>> :t streamFold Control.return effect
(Control.Monad m, Control.Functor f, Control.Functor g) => (f (Stream g m r) %1-> Stream g m r) -> Stream f m r %1-> Stream g m r
>>> :t \f -> streamFold Control.return effect (wrap . f)
(Control.Monad m, Control.Functor f, Control.Functor g) => (f (Stream g m a) %1-> g (Stream g m a)) -> Stream f m a %1-> Stream g m a -- maps
>>> :t \f -> streamFold Control.return effect (effect . Control.fmap wrap . f)
(Control.Monad m, Control.Functor f, Control.Functor g) => (f (Stream g m a) %1-> m (g (Stream g m a))) -> Stream f m a %1-> Stream g m a -- mapped
streamFold done eff construct = eff . iterT (Control.return . construct . Control.fmap eff) .
Control.fmap done iterTM :: (Functor f, Monad m, MonadTrans t, Monad (t m)) => (f (t m a) %1 -> t m a) -> Stream f m a %1 -> t m a Source # Specialized fold following the usage of Control.Monad.Trans.Free iterTM alg = streamFold Control.return (Control.join . Control.lift) iterTM alg = iterT alg . hoist Control.lift iterT :: (Functor f, Monad m) => (f (m a) %1 -> m a) -> Stream f m a %1 -> m a Source # Specialized fold following the usage of Control.Monad.Trans.Free iterT alg = streamFold Control.return Control.join alg iterT alg = runIdentityT . iterTM (IdentityT . alg . Control.fmap runIdentityT) destroy :: forall f m r b. (Functor f, Monad m) => Stream f m r %1 -> (f b %1 -> b) -> (m b %1 -> b) -> (r %1 -> b) -> b Source # Map a stream to its church encoding; compare Data.List.foldr. destroyExposed may be more efficient in some cases when applicable, but it is less safe. destroy s construct eff done = eff . iterT (Control.return . construct . Control.fmap eff) . Control.fmap done $ s
FLOC 2018: FEDERATED LOGIC CONFERENCE 2018 PROGRAM FOR MONDAY, JULY 9TH 09:00-10:30 Session 46: FLoC Plenary Lecture: Peter O'Hearn (FLoC) Location: Maths LT1 09:00 Continuous Reasoning: Scaling the Impact of Formal Methods ABSTRACT. Formal reasoning about programs is one of the oldest and most fundamental research directions in computer science. It has also been one of the most elusive. There has been a tremendous amount of valuable research in formal methods, but rarely have formal reasoning techniques been deployed as part of the development process of large industrial codebases. This talk describes work in continuous reasoning, where formal reasoning about a (changing) codebase is done in a fashion which mirrors the iterative, continuous model of software development that is increasingly practiced in industry. We suggest that advances in continuous reasoning will allow formal reasoning to scale to more programs, and more programmers. We describe our experience using continuous reasoning with large, rapidly changing codebases at Facebook, and we describe open problems and directions for research for the scientific community. A paper with the same title accompanying this talk appears in the LICS'18 proceedings. 10:30-11:00 Coffee Break 11:00-12:30 Session 47A: Security protocols I (CSF) Location: Maths LT2 11:00 An extensive formal analysis of multi-factor authentication protocols ABSTRACT. Passwords are still the most widespread means for authenticating users, even though they have been shown to create huge security problems. This motivated the use of additional authentication mechanisms used in so-called multi-factor authentication protocols. In this paper we define a detailed threat model for this kind of protocols: while in classical protocol analysis attackers control the communication network, we take into account that many communications are performed over TLS channels, that computers may be infected by different kinds of malwares, that attackers could perform phishing, and that humans may omit some actions. We formalize this model in the applied pi calculus and perform an extensive analysis and comparison of several widely used protocols --- variants of Google 2 Step and FIDO's U2F. The analysis is completely automated, generating systematically all combinations of threat scenarios for each of the protocols and using the ProVerif tool for automated protocol analysis. Our analysis highlights weaknesses and strengths of the different protocols, and allows us to suggest several small modifications of the existing protocols which are easy to implement, yet improve their security in several threat scenarios. 11:30 Composition Theorems for CryptoVerif and Application to TLS 1.3 ABSTRACT. We present composition theorems for security protocols, to compose a key exchange protocol and a symmetric-key protocol that uses the exchanged key. Our results rely on the computational model of cryptography and are stated in the framework of the tool CryptoVerif. They support key exchange protocols that guarantee injective or non-injective authentication. They also allow random oracles shared between the composed protocols. To our knowledge, they are the first composition theorems for key exchange stated for a computational protocol verification tool, and also the first to allow such flexibility. As a case study, we apply our composition theorems to a proof of TLS 1.3 Draft-18.
This work fills a gap in a previous paper that informally claims a compositional proof of TLS 1.3, without formally justifying it. 12:00 A Cryptographic Look at Multi-Party Channels ABSTRACT. Cryptographic channels aim to enable authenticated and confidential communication over the Internet. The general understanding seems to be that providing security in the sense of authenticated encryption for every (unidirectional) point-to-point link suffices to achieve this goal. As recently shown (in FSE17/ToSC17), however, even in the bidirectional case just requiring the two unidirectional links to provide security independently of each other does not lead to a secure solution in general. Informally, the reason for this is that the increased interaction in bidirectional communication may be exploited by an adversary. The same argument applies, a fortiori, in a multi-party setting where several users operate concurrently and the communication develops in more directions. In the cryptographic literature, however, the targeted goals for group communication in terms of channel security are still unexplored. Applying the methodology of provable security, we fill this gap by (i) defining exact (game-based) authenticity and confidentiality goals for broadcast communication and (ii) showing how to achieve them. Importantly, our security notions also account for the causal dependencies between exchanged messages, thus naturally extending the bidirectional case where causal relationships are automatically captured by preserving the sending order. On the constructive side we propose a modular and yet efficient protocol that, assuming only reliable point-to-point links between users, leverages (non-cryptographic) broadcast and standard cryptographic primitives to a full-fledged broadcast channel that provably meets the security notions we put forth. 11:00-12:30 Session 47B: Linear Logic (FSCD) 11:00 Proof nets for bi-intuitionistic linear logic ABSTRACT. Bi-Intuitionistic Linear Logic (BILL) is an extension of Intuitionistic Linear Logic with a par, dual to the tensor, and subtraction, dual to linear implication. It is the logic of categories with a monoidal closed and a monoidal co-closed structure that are related by linear distributivity, a strength of the tensor over the par. It conservatively extends Full Intuitionistic Linear Logic (FILL), which includes only the par. We give proof nets for the multiplicative, unit-free fragment MBILL-. Correctness is by local rewriting in the style of Danos contractibility. This rewrite relation yields sequentialization into a relational sequent calculus that extends the existing one for FILL. We give a second, geometric correctness condition via Danos-Regnier switching, and demonstrate composition both inductively and as a one-off global operation. 11:30 Unique perfect matchings and proof nets ABSTRACT. This paper establishes a bridge between linear logic and mainstream graph theory, building previous work by Retoré (2003). We show that the problem of correctness for MLL+Mix proof nets is equivalent to the problem of uniqueness of a perfect matching. By applying matching theory, we obtain new results for MLL+Mix proof nets: a linear-time correctness criterion, a quasi-linear sequentialization algorithm, and a characterization of the sub-polynomial complexity of the correctness problem. We also use graph algorithms to compute the dependency relation of Bagnol et al. 
(2015) and the kingdom ordering of Bellin (1997), and relate them to the notion of blossom which is central to combinatorial maximum matching algorithms. 12:00 Lifting Coalgebra Modalities and IMELL Model Structure to Eilenberg-Moore Categories ABSTRACT. A categorical model of the multiplicative and exponential fragments of intuitionistic linear logic (IMELL), known as a linear category, is a symmetric monoidal closed category with a monoidal coalgebra modality (also known as a linear exponential comonad). Inspired by R. Blute and P. Scott's work on categories of modules of Hopf algebras as models of linear logic, we study Eilenberg-Moore categories of monads as models of IMELL. We define an IMELL lifting monad on a linear category as a Hopf monad -- in the Bruguieres, Lack, and Virelizier sense --  with a mixed distributive law over the monoidal coalgebra modality. As our main result, we show that the linear category structure lifts to Eilenberg-Moore categories of IMELL lifting monads. We explain how monoids in the Eilenberg-Moore of the monoidal coalgebra modality can induce IMELL lifting monads and provide sources for such monoids. Along the way, we also define mixed distributive laws of bimonads over coalgebra modalities and lifting differential category structure to Eilenberg-Moore categories of exponential lifting monads. 11:00-12:00 Session 47C: ITP Invited Talk: John Harrison (ITP) Location: Blavatnik LT1 11:00 Mike Gordon: Tribute to a Pioneer in Theorem Proving and Formal Verification ABSTRACT. Prof. Michael J. C. Gordon, FRS was a great pioneer in both computer-aided formal verification and interactive theorem proving. His own work and that of his students helped to explore and map out these new fields and in particular the fruitful connections between them. His seminal HOL theorem prover not only gave rise to many successors and relatives, but was also the framework in which many new ideas and techniques in theorem proving and verification were explored for the first time. Mike's untimely death in August 2017 was a tragedy first and foremost for his family, but was felt as a shocking loss too by many of us who felt part of his extended family of friends, former students and colleagues throughout the world. Mike's intellectual example as well as his unassuming nature and personal kindness will always be something we treasure. In my talk here I will present an overall perspective on Mike's life and the whole arc of his intellectual career. I will also spend  time looking ahead, for the research themes he helped to establish are still vital and exciting today in both academia and industry. 11:00-12:40 Session 47D (LICS) Location: Maths LT1 11:00 Definable decompositions for graphs of bounded linear cliquewidth ABSTRACT. We prove that for every positive integer k, there exists an MSO_1-transduction that given a graph of linear cliquewidth at most k outputs, nondeterministically, some clique decomposition of the graph of width bounded by a function of k. A direct corollary of this result is the equivalence of the notions of CMSO_1-definability and recognizability on graphs of bounded linear cliquewidth. 11:20 Parameterized circuit complexity of model-checking on sparse structures ABSTRACT. 
We prove that for every class $C$ of graphs with effectively bounded expansion, given a first-order sentence $\varphi$ and an $n$-element structure $A$ whose Gaifman graph belongs to $C$, the question whether $\varphi$ holds in $A$ can be decided by a family of AC-circuits of size $f(\varphi)\cdot n^c$ and depth $f(\varphi)+c\log n$, where $f$ is a computable function and $c$ is a universal constant. This places the model-checking problem for classes of bounded expansion in the parameterized circuit complexity class $paraAC^1$. On the route to our result we prove that the basic decomposition toolbox for classes of bounded expansion, including orderings with bounded weak coloring numbers and low treedepth decompositions, can be computed in $paraAC^1$. 11:40 Sequential Relational Decomposition SPEAKER: Dror Fried ABSTRACT. The concept of decomposition in computer science and engineering is considered a fundamental component of computational thinking and is prevalent in design of algorithms, software construction, hardware design, and more. We propose a simple and natural formalization of sequential decomposition,in which a task is decomposed into two sequential sub-tasks, with the first sub-task to be executed out before the second sub-task is executed. These tasks are specified by means of input/output relations. We define and study decomposition problems,which is to decide whether a given specification can be sequentially decomposed. Our main result is that decomposition itself is a difficult computational problem. More specifically, we study decomposition problems in three settings: where the input task is specified explicitly, by means of Boolean circuits, and by means of automatic relations. We show that in the first setting decomposition is NP-complete, in the second setting it is NEXPTIME-complete, and in the third setting there is evidence to suggest that it is undecidable. Our results indicate that the intuitive idea of decomposition as a system-design approach requires further investigation. In particular, we show that adding human to the loop by asking for a decomposition hint lowers the complexity of decomposition problems considerably. 12:00 A parameterized halting problem, the linear time hierarchy, and the MRDP theorem SPEAKER: Yijia Chen ABSTRACT. The complexity of the parameterized halting problem for nondeterministic Turing machines p-Halt is known to be related to the question of whether there are logics capturing various complexity classes [Chen and Flum, 2012]. Among others, if p-Halt is in para-AC^0, the parameterized version of the circuit complexity class AC^0, then AC^0, or equivalently, (+,\times)-invariant FO, has a logic. Although it is widely believed that p-Halt\notin para-AC^0, we show that the problem is hard to settle by establishing a connection to the question in classical complexity of whether NE\not\subseteq LINH. Here, LINH denotes the linear time hierarchy. On the other hand, we suggest an approach toward proving NE\not\subseteq LINH using bounded arithmetic. More specifically, we demonstrate that if the much celebrated MRDP (for Matiyasevich-Robinson-Davis-Putnam) theorem can be proved in a certain fragment of arithmetic, then NE\not\subseteq LINH. Interestingly, central to this result is a para-AC^0 lower bound for the parameterized model-checking problem for FO on arithmetical structures. 12:20 Regular and First Order List Functions ABSTRACT. 
We define two classes of functions, called regular (respectively, first-order) list functions, which manipulate objects such as lists, lists of lists, pairs of lists, lists of pairs of lists, etc. The definition is in the style of regular expressions: the functions are constructed by starting with some basic functions (e.g. projections from pairs, or head and tail operations on lists) and putting them together using four combinators (most importantly, composition of functions). Our main results are that first-order list functions are exactly the same as first-order transductions, under a suitable encoding of the inputs; and the regular list functions are exactly the same as MSO-transductions. 11:00-12:40 Session 47E (LICS) Location: Maths LT3 11:00 A theory of linear typings as flows on 3-valent graphs ABSTRACT. Building on recently established enumerative connections between lambda calculus and the theory of embedded graphs (or "maps"), this paper develops an analogy between typing (of lambda terms) and coloring (of maps). Our starting point is the classical notion of an abelian group-valued "flow" on an abstract graph (Tutte, 1954). Typing a linear lambda term may be naturally seen as constructing a flow (on an embedded 3-valent graph with boundary) valued in a more general algebraic structure consisting of a preordered set equipped with an "implication" operation and unit satisfying composition, identity, and unit laws. Interesting questions and results from the theory of flows (such as the existence of nowhere-zero flows) may then be re-examined from the standpoint of lambda calculus and logic. For example, we give a characterization of when the local flow relations (across vertices) may be categorically lifted to a global flow relation (across the boundary), proving that this holds just in case the underlying map has the orientation of a lambda term. We also develop a basic theory of rewriting of flows that suggests topological meanings for classical completeness results in combinatory logic, and introduce a polarized notion of flow, which draws connections to the theory of proof-nets in linear logic and to bidirectional typing. 11:20 Cellular Cohomology in Homotopy Type Theory ABSTRACT. We present a development of cellular cohomology in homotopy type theory. Cohomology associates to each space a sequence of abelian groups capturing part of its structure, and has the advantage over homotopy groups in that these abelian groups of many common spaces are easier to compute. Cellular cohomology is a special kind of cohomology designed for cell complexes: these are built in stages by attaching spheres of progressively higher dimension, and cellular cohomology defines the groups out of the combinatorial description of how spheres are attached. Our main result is that for finite cell complexes, a wide class of cohomology theories (including the ones defined through Eilenberg-MacLane spaces) can be calculated via cellular cohomology. This result was formalized in the Agda proof assistant. 11:40 Free Higher Groups in Homotopy Type Theory SPEAKER: Nicolai Kraus ABSTRACT. Given a type A in homotopy type theory (HoTT), we define the free infinity-group on A as the higher inductive type FA with constructors [unit : FA], [cons : A -> FA -> FA], and conditions saying that every cons(a) is an auto-equivalence on FA. Assuming that A is a set (i.e. 
satisfies the principle of unique identity proofs), we are interested in the question whether FA is a set as well, which is very much related to an open problem in the HoTT book [Ex. 8.2]. In this paper, we show an approximation to the question, namely that the fundamental groups of FA are trivial. 12:00 Higher Groups in Homotopy Type Theory ABSTRACT. We present a development of the theory of higher groups, including infinity groups and connective spectra, in homotopy type theory. An infinity group is simply the loops in a pointed, connected type, where the group structure comes from the structure inherent in the identity types of Martin-Löf type theory. We investigate ordinary groups from this viewpoint, as well as higher dimensional groups and groups that can be delooped more than once. A major result is the stabilization theorem, which states that if an n-type can be delooped n+2 times, then it has the structure of an infinite loop type. Most of the results have been formalized in the Lean proof assistant. 12:20 Strong Sums in Focused Logic ABSTRACT. A useful connective that has not previously been made to work in focused logic is the strong sum, a form of dependent sum that is eliminated by projection rather than pattern matching. This makes strong sums powerful, but it also creates a problem adapting them to focusing: The type of the right projection from a strong sum refers to the term being projected from, but due to the structure of focused logic, that term is not available. In this work we confirm that strong sums can be viewed as a negative connective in focused logic. The key is to resolve strong sums' dependencies eagerly, before projection can see them, using a notion of selfification adapted from module type systems. We validate the logic by proving cut admissibility and identity expansion. All the proofs are formalized in Coq. 11:00-12:15 Session 47F: SAT Invited Talk: Christoph Scholl (SAT) 11:00 Welcome to SAT 2018 11:05 Dependency Quantified Boolean Formulas: An Overview of Solution Methods and Applications ABSTRACT. Dependency quantified Boolean formulas (DQBFs) as a generalization of quantified Boolean formulas (QBFs) have received considerable attention in research during the last years. Here we give an overview of the solution methods developed for DQBF so far. The exposition is complemented with the discussion of various applications that can be handled with DQBF solving. 12:00-12:30 Session 48 (ITP) Location: Blavatnik LT1 12:00 Efficient Mendler-Style Lambda-Encodings in Cedille SPEAKER: Denis Firsov ABSTRACT. It is common to model inductive datatypes as least fixed points of functors. We show that within the Cedille type theory we can relax functoriality constraints and generically derive an induction principle for Mendler-style lambda-encoded inductive datatypes, which arise as least fixed points of covariant schemes where the morphism lifting is defined only on identities. Additionally, we implement a destructor for these lambda-encodings that runs in constant-time. As a result, we can define lambda-encoded natural numbers with an induction principle and a constant-time predecessor function so that the normal form of a numeral requires only linear space. The paper also includes several more advanced examples. 12:30-14:00Lunch Break 14:00-15:00 Session 49A: CSF Invited Talk: Srini Devadas (CSF) Location: Maths LT2 14:00 Sanctum: Towards an Open-Source, Formally Verified Secure Processor ABSTRACT. 
Architectural isolation can be used to secure computation on a remote secure processor with a private key where the privileged software is potentially malicious as recently deployed by Intel's Software Guard Extensions (SGX). This talk will first describe the Sanctum secure processor architecture, which offers the same promise as SGX, namely strong provable isolation of software modules running concurrently and sharing resources, but protects against an important class of additional software attacks that infer private information by exploiting resource sharing. The talk will then describe a verification methodology based on a trusted abstract platform (TAP) that formally models idealized enclaves and a parameterized adversary. Machine-checked proofs show that the TAP satisfies the three key security properties needed for secure remote execution: integrity, confidentiality and secure measurement. Machine-checked proofs also show that SGX and Sanctum are refinements of the TAP under certain parameterizations of the adversary, demonstrating these systems implement secure enclaves for the stated adversary models. Biography: Srini Devadas is the Webster Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT) where he has been on the faculty since 1988. Devadas's research interests span Computer-Aided Design (CAD), computer security and computer architecture. He is a Fellow of the IEEE and ACM. He has received a 2014 IEEE Computer Society Technical Achievement award, the 2015 ACM/IEEE Richard Newton technical impact award, and the 2017 IEEE Wallace McDowell award for his research. Devadas is a MacVicar Faculty Fellow and an Everett Moore Baker teaching award recipient, considered MIT's two highest undergraduate teaching honors. 14:00-15:00 Session 49B: FSCD Invited talk: Peter Selinger (FSCD) 14:00 Challenges in quantum programming languages ABSTRACT. In this talk, I will give an overview of some recent progress and current challenges in the design of quantum programming languages. Unlike classical programs, which can in principle be debugged by stopping the program at critical moments and examining the contents of variables, quantum programs are not amenable to traditional debugging because the state of a quantum system cannot usually be examined in a meaningful way. Therefore, we need other methods for ensuring the correctness of quantum programs, such as formal verification. For this reason, I advocate the use of strongly typed, functional programming languages for quantum computing. As far as functional quantum programming languages are concerned, there is currently a relatively wide gap between theory and practice. On the one hand, we have languages with strong theoretical foundations, such as the quantum lambda calculus, which operate at a relatively low level of abstraction and lack many features that would be useful to practical quantum programmers. On the other hand, we have practical functional quantum programming languages such as Quipper, which is implemented as an embedded language in Haskell, has many high-level features, and has been used in large-scale projects, but lacks a theoretical basis and a strong type system. We have recently attempted to narrow this gap through a family of languages called Proto-Quipper, which are designed to offer Quipper-like features while having sound theoretical foundations. 
I will give an overview of Quipper and its most useful features, report on the progress we made with formalizing fragments of Quipper, and outline several of the still remaining challenges. 14:00-15:30 Session 49C (ITP) Location: Blavatnik LT1 14:00 Formalizing Implicative Algebras in Coq ABSTRACT. We present a Coq formalization of Alexandre Miquel’s implicative algebras, which aim at providing a general algebraic framework for the study of classical realizability models. We first give a self-contained presentation of the underlying implicative structures, which roughly consists of a complete lattice equipped with a binary law representing the implication. We then explain how these structures can be turned into models by adding separators, giving rise to the so-called implicative algebras. Additionally, we show how they generalize Boolean and Heyting algebras as well as the usual algebraic structures used in the analysis of classical realizability. 14:30 Software Tool Support for Modular Reasoning in Modal Logics of Actions SPEAKER: Samuel Balco ABSTRACT. We present a software tool for reasoning in and about propositional sequent calculi for modal logics of actions. As an example, we implement the display calculus D.EAK of dynamic epistemic logic. The tool generates embeddings of the calculus in the theorem prover Isabelle for formalising proofs about D.EAK. Integrating propositional reasoning in D.EAK with inductive reasoning in Isabelle, we verify in Isabelle the solution of the muddy children puzzle for any number of muddy children. There also is a set of meta-tools that allows us to adapt the software for a wide variety of user defined calculi. 15:00 The Coinductive Formulation of Common Knowledge SPEAKER: Colm Baston ABSTRACT. We study the coinductive formulation of common knowledge in type theory. We formalise both the traditional relational semantics and an operator semantics, similar in form to the epistemic system S5, but at the level of events on possible worlds rather than as a logical derivation system. We have two major new results. Firstly, the operator semantics is equivalent to the relational semantics: we discovered that this requires a new hypothesis of semantic entailment on operators, not known in previous literature. Secondly, the coinductive version of common knowledge is equivalent to the traditional transitive closure on the relational interpretation. All results are formalised in the proof assistants Agda and Coq. 14:00-15:40 Session 49D (LICS) Location: Maths LT1 14:00 A modal mu perspective on solving parity games in quasipolynomial time. ABSTRACT. We present a new quasi-polynomial algorithm for solving parity games. It is based on a new bisimulation invariant measure of complexity for parity games, called the register-index, which captures the complexity of the priority assignment. For fixed parameter k, the class of games with register-index bounded by k is solvable in polynomial time. We show that the register-index of parity games of size n is bounded by O(log n) and derive a quasi-polynomial algorithm. Finally we give the first descriptive complexity account of the quasi-polynomial solvability of parity games: The winning regions of parity games with p priorities and register-index k are described by a modal μ formula of which the complexity, as measured by its alternation depth, depends on k rather than p. 14:20 A pseudo-quasi-polynomial algorithm for solving mean-payoff parity games SPEAKER: Laure Daviaud ABSTRACT. 
In a mean-payoff parity game, one of the two players aims both to achieve a qualitative parity objective and to minimize a quantitative long-term average of payoffs (a.k.a. mean payoff). The game is zero-sum and hence the aim of the other player is to either foil the parity objective or to maximize the mean payoff. Our main technical result is a pseudo-quasi-polynomial algorithm for solving mean-payoff parity games. All algorithms for the problem that have been developed for over a decade have both a pseudo-polynomial and an exponential factor in their running times; in the running time of our algorithm the latter is replaced with a quasi-polynomial one. Our main conceptual contributions are the definitions of strategy decompositions for both players, and a notion of progress measures for mean-payoff parity games that generalizes both parity and energy progress measures. The former provides normal forms for and succinct representations of winning strategies, and the latter enables the application to mean-payoff parity games of the order-theoretic machinery that underpins a recent quasi-polynomial algorithm for solving parity games. 14:40 Rational Synthesis Under Imperfect Information ABSTRACT. In this paper, we study the rational synthesis problem for multi-player non-zero-sum games played on finite graphs for omega-regular objectives. Rationality is formalized by the concept of Nash equilibrium (NE). Contrary to previous works, we consider here the more general and more practically relevant case where players are imperfectly informed. In sharp contrast with the perfect information case, NE are not guaranteed to exist in this more general setting. This motivates the study of the NE existence problem. We show that this problem is ExpTime-C for parity objectives in the two-player case (even if both players are imperfectly informed) and undecidable for more than 2 players. We then study the rational synthesis problem and show that the problem is also ExpTime-C for two imperfectly informed players and undecidable for more than 3 players. As the rational synthesis problem considers a system (Player 0) playing against a rational environment (composed of k players), we also consider the natural case where only Player 0 is imperfectly informed about the state of the environment (and the environment is considered as perfectly informed). In this case, we show that the ExpTime-C result holds when k is arbitrary but fixed. We also analyse the complexity when k is part of the input. 15:00 Playing with Repetitions in Data Words Using Energy Games SPEAKER: M. Praveen ABSTRACT. We introduce two-player games which build words over infinite alphabets, and we study the problem of checking the existence of winning strategies. These games are played by two players, who take turns in choosing valuations for variables ranging over an infinite data domain, thus generating multi-attributed data words. The winner of the game is specified by formulas in the Logic of Repeating Values, which can reason about repetitions of data values in infinite data words. We prove that it is undecidable to check if one of the players has a winning strategy, even in very restrictive settings. 
However, we prove that if one of the players is restricted to choose valuations ranging over the Boolean domain, the games are effectively equivalent to single-sided games on vector addition systems with states (in which one of the players can change control states but cannot change counter values), known to be decidable and effectively equivalent to energy games. Previous works have shown that the satisfiability problem for various variants of the logic of repeating values is equivalent to the reachability and coverability problems in vector addition systems. Our results raise this connection to the level of games, augmenting further the associations between logics on data words and counter systems. 15:20 Compositional game theory SPEAKER: Jules Hedges ABSTRACT. We introduce open games as a compositional foundation of economic game theory. A compositional approach potentially allows methods of game theory and theoretical computer science to be applied to large-scale economic models for which standard economic tools are not practical. An open game represents a game played relative to an arbitrary environment and to this end we introduce the concept of coutility, which is the utility generated by an open game and returned to its environment. Open games are the morphisms of a symmetric monoidal category and can therefore be composed by categorical composition into sequential move games and by monoidal products into simultaneous move games. Open games can be represented by string diagrams which provide an intuitive but formal visualisation of the information flows. We show that a variety of games can be faithfully represented as open games in the sense of having the same Nash equilibria and off-equilibrium best responses. 14:00-15:40 Session 49E (LICS) Location: Maths LT3 14:00 Concurrency and Probability: Removing Confusion, Compositionally SPEAKER: Roberto Bruni ABSTRACT. Assigning a satisfactory truly concurrent semantics to Petri nets with confusion and distributed decisions is a long standing problem, especially if one wants to resolve decisions by drawing from some probability distribution. Here we propose a general solution based on a recursive, static decomposition of (occurrence) nets in loci of decision, called structural branching cells (s-cells). Each s-cell exposes a set of alternatives, called transactions. Our solution transforms a given Petri net into another net whose transitions are the transactions of the s-cells and whose places are those of the original net, with some auxiliary structure for bookkeeping. The resulting net is confusion-free, and thus conflicting alternatives can be equipped with probabilistic choices, while nonintersecting alternatives are purely concurrent and their probability distributions are independent. The validity of the construction is witnessed by a tight correspondence with the recursively stopped configurations of Abbes and Benveniste. Some advantages of our approach are that: i) s-cells are defined statically and locally in a compositional way; ii) our resulting nets exhibit the complete concurrency property. 14:20 ReLoC: A Mechanised Relational Logic for Fine-Grained Concurrency SPEAKER: Dan Frumin ABSTRACT. We present ReLoC: a logic for proving refinements of programs in a language with higher-order state, fine-grained concurrency, polymorphism and recursive types. The core of our logic is a judgement e ≾ e' : τ, which expresses that a program e refines a program e' at type τ. 
In contrast to earlier work on refinements for languages with higher-order state and concurrency, ReLoC provides type- and structure-directed rules for manipulating this judgement, whereas previously, such proofs were carried out by unfolding the judgement into its definition in the model. These more abstract proof rules make it simpler to carry out refinement proofs. Moreover, we introduce logically atomic relational specifications: a novel approach for relational specifications for compound expressions that take effect at a single instant in time. We demonstrate how to formalise and prove such relational specifications in ReLoC, allowing for more modular proofs. ReLoC is built on top of the expressive concurrent separation logic Iris, allowing us to leverage features of Iris such as invariants and ghost state. We provide a mechanisation of our logic in Coq, which does not just contain a proof of soundness, but also tactics for interactively carrying out refinement proofs. We have used these tactics to mechanise several examples, which demonstrates the practicality and modularity of our logic. 14:40 Eager Functions as Processes ABSTRACT. We study Milner's encoding of the call-by-value lambda-calculus in the pi-calculus. We show that, by tuning the encoding to two subcalculi of the pi-calculus (Internal pi and Asynchronous Local pi), the equivalence on lambda-terms induced by the encoding coincides with Lassen's eager normal form bisimilarity, extended to handle eta-equality. As behavioural equivalence in the pi-calculus we consider contextual equivalence and barbed congruence. We also extend the results to preorders. A crucial technical ingredient in the proofs is the recently-introduced technique of unique solutions of equations, further developed in this paper. In this respect, the paper also intends to be an extended case study on the applicability and expressiveness of the technique. 15:00 Quasi-Open Bisimilarity with Mismatch is Intuitionistic SPEAKER: Ki Yung Ahn ABSTRACT. Quasi-open bisimilarity is the coarsest notion of bisimilarity for the pi-calculus that is also a congruence. This work extends quasi-open bisimilarity to handle mismatch (guards with inequalities). This minimal extension of quasi-open bisimilarity allows fresh names to be manufactured to provide constructive evidence that an inequality holds. The extension of quasi-open bisimilarity is canonical and robust, coinciding with open barbed bisimilarity (an objective notion of bisimilarity congruence) and characterised by an intuitionistic variant of an established modal logic. The more famous open bisimilarity is also considered, for which the coarsest extension for handling mismatch is identified. Applications to symbolic equivalence checking and symbolic model checking are highlighted, e.g., for verifying privacy properties. Theorems and examples are mechanised using the proof assistant Abella. 15:20 Causal Computational Complexity of Distributed Processes ABSTRACT. This paper studies the complexity of pi-calculus processes with respect to the quantity of transitions caused by an incoming message. First we propose a typing system for integrating Bellantoni and Cook's characterisation of polynomially-bound recursive functions into Deng and Sangiorgi's typing system for termination. We then define computational complexity of distributed messages based on Degano and Priami's causal semantics, which identifies the dependency between interleaved transitions. 
Next we apply a syntactic flow analysis to typable processes to ensure the computational bound of distributed messages. We prove that our analysis is decidable for a given process; sound in the sense that it guarantees that the total number of messages causally dependent on an input request received from the outside is bounded by a polynomial in the content of this request; and complete, which means that each polynomial recursive function can be computed by a typable process. 14:00-15:30 Session 49F: MaxSAT (SAT) 14:00 Approximately Propagation Complete and Conflict Propagating Constraint Encodings ABSTRACT. The effective use of satisfiability (SAT) solvers requires problem encodings that make good use of the reasoning techniques employed in such solvers, such as unit propagation and clause learning. Propagation completeness has been proposed as a useful property for constraint encodings as it maximizes the utility of unit propagation. Experimental results on using encodings with this property in the context of satisfiability modulo theory (SMT) solving have however remained inconclusive, as such encodings are typically very large, which increases the bookkeeping work of solvers. In this paper, we introduce approximate propagation completeness and approximate conflict propagation as novel SAT encoding property notions. While approximate propagation completeness is a generalization of classical propagation completeness, (approximate) conflict propagation is a new concept for reasoning about how early conflicts can be detected by a SAT solver. Both notions together span a hierarchy of encoding quality choices, with classical propagation completeness as a special case. We show how to compute approximately propagation complete and conflict propagating constraint encodings with a minimal number of clauses using a reduction to MaxSAT. To evaluate the effect of such encodings, we give results on applying them in a case study. 14:30 Dynamic Polynomial Watchdog Encoding for Solving Weighted MaxSAT SPEAKER: Tobias Paxian ABSTRACT. In this paper we present a novel cardinality constraint encoding for solving the weighted MaxSAT problem with iterative SAT-based methods based on the Polynomial Watchdog (PW) CNF encoding for Pseudo-Boolean (PB) constraints. The watchdog of the PW encoding indicates whether the bound of the PB constraint holds. In our approach, we lift this static watchdog concept to a dynamic one allowing an incremental convergence to the optimal result. Consequently, we formulate and implement a SAT-based algorithm for our new Dynamic Polynomial Watchdog (DPW) encoding which can be applied for solving the MaxSAT problem. Furthermore, we introduce three fundamental optimizations of the PW encoding, also suited for the original version, leading to a significantly smaller encoding size. Our experimental results show that our encoding and algorithm are competitive with state-of-the-art encodings as utilized in QMaxSAT (3rd place in the last MaxSAT Evaluation 2017). Our encoding dominates two of the QMaxSAT encodings, and at the same time is able to solve unique instances. We integrated our new encoding into QMaxSAT and adapted the heuristic to choose between the only remaining encoding of QMaxSAT and our approach. This combined version solves 19 (4%) more instances in overall 30% less run time on the benchmark set of the MaxSAT Evaluation 2017. However, for the instances solved by both solvers our encoding is 2X faster than all employed encodings of QMaxSAT used in the evaluation. 
15:00 Solving MaxSAT with Bit-Vector Optimization ABSTRACT. We explore the relationships between two closely related optimization problems: MaxSAT and Optimization Modulo Bit-Vectors (OBV). Given a bit-vector or a propositional formula F and a target bit-vector T, Unweighted Partial MaxSAT maximizes the number of satisfied bits in T, while OBV maximizes the value of T. We propose a new OBV-based Unweighted Partial MaxSAT algorithm. Our resulting solver–Mrs. Beaver–outscores the state-of-the-art solvers when run with the settings of the Incomplete-60-Second-Timeout Track of MaxSAT Evaluation 2017. Mrs. Beaver is the first MaxSAT algorithm designed to be incremental in the following sense: it can be re-used across multiple invocations with different hard assumptions and target bit-vectors. We provide experimental evidence showing that enabling incrementality in MaxSAT significantly improves the performance of a MaxSAT-based Boolean Multilevel Optimization (BMO) algorithm when solving a new, critical industrial BMO application: cleaning-up weak design rule violations during the Physical Design stage of Computer-Aided-Design. 15:00-15:30 Session 50A: Attack trees (CSF) Location: Maths LT2 15:00 Guided design of attack trees: a system-based approach ABSTRACT. Attack trees are a well-recognized formalism for security modeling and analysis, but in this work we tackle a problem that has not yet been addressed by the security or formal methods community – namely guided design of attack trees. The objective of the framework presented in this paper is to support a security expert in the process of designing a pertinent attack tree for a given system. In contrast to most of existing approaches for attack trees, our framework contains an explicit model of the real system to be analyzed, formalized as a transition system that may contain quantitative information. The leaves of our attack trees are labeled with reachability goals in the transition system and the attack tree semantics is expressed in terms of traces of the system. The main novelty of the proposed framework is that we start with an attack tree which is not fully refined and by exhibiting paths in the system that are optimal with respect to the quantitative information, we are able to suggest to the security expert which parts of the tree contribute to optimal attacks and should therefore be developed further. Such useful parts of the tree are determined by solving a satisfiability problem in propositional logic. 15:00-15:30 Session 50B: Quantum Computing (FSCD) 15:00 A diagrammatic axiomatisation of fermionic quantum circuits ABSTRACT. We introduce the fermionic ZW calculus, a string-diagrammatic language for fermionic quantum computing (FQC). After defining a fermionic circuit model, we present the basic components of the calculus, together with their interpretation, and show how the main physical gates of interest in FQC can be represented in the language. We then list our axioms, and derive some additional equations. We prove that the axioms provide a complete equational axiomatisation of the monoidal category whose objects are quantum systems of finitely many local fermionic modes, with operations that preserve or reverse the parity (number of particles mod 2) of states, and the tensor product, corresponding to the composition of two systems, as monoidal product. We achieve this through a procedure that rewrites any diagram in a normal form. 
We conclude by showing, as an example, how the statistics of a fermionic Mach-Zehnder interferometer can be calculated in the diagrammatic language. 15:30-16:00 Coffee Break 16:00-17:30 Session 51A: CSF 5 minutes talks (CSF) Short talks by attendees. The 5-minute talk schedule is available here. This is a fun session in which you can describe work in progress, crazy-sounding ideas, interesting questions and challenges, research proposals, or anything else within reason! You can use 2-3 slides, or you can just speak without slides. Location: Maths LT2 16:00-18:00 Session 51B: Corrado Böhm Memorial (FSCD) 16:00 ALGORAND A Truly Distributed Ledger ABSTRACT. A distributed ledger is a tamperproof sequence of data that can be read and augmented by everyone. Distributed ledgers stand to revolutionize the way a democratic society operates. They secure all kinds of traditional transactions, such as payments, asset transfers, and titling, in the exact order in which they occur; and enable totally new transactions, such as cryptocurrencies and smart contracts. They can remove intermediaries and usher in a new paradigm for trust. As currently implemented, however, distributed ledgers cannot achieve their enormous potential. Algorand is an alternative, democratic, and efficient distributed ledger. Unlike prior ledgers based on 'proof of work', it dispenses with 'miners'. Indeed, Algorand requires only a negligible amount of computation. Moreover, its transaction history does not 'fork' with overwhelming probability: i.e., Algorand guarantees the finality of all transactions. Finally, Algorand enjoys flexible self-governance. By using its hallmark propose-and-agree process, Algorand can correct its course as necessary or desirable, without any 'hard forks'. 17:00 Corrado Böhm: the white magician in programming and its semantics ABSTRACT. Several results of Corrado Böhm will be presented that have made programming more transparent and efficient. 1. Self-compiling. In his PhD thesis Corrado carefully presented a program that could translate itself to machine code. This resulted in the bootstrap of computers warming up efficiently. 2. Eliminating the go-to. With Giuseppe Jacopini Corrado showed that jumps in programming can be avoided. This resulted in structured programming. 3. The foundation of functional programming. Corrado was one of the first to realize that the computational model of lambda calculus can be used for programming, by introducing the CUCH-machine. 4. Fine-structure of lambda terms. In a paper in Italian Corrado studied which lambda terms cannot be equated. This resulted in a deep analysis of lambda models. 5. A simple self-evaluator in lambda calculus: E=<<K,S,C>>, where K, S, C are the well-known combinators and $<M_1,\ldots,M_n> = \lambda z.\,z M_1 \ldots M_n$. Here unexpectedly the initials of Steve Cole Kleene appear, who constructed the first self-evaluator in the 1930s. This has the flavor of a magic trick! 16:00-18:00 Session 51C (ITP) Location: Blavatnik LT1 16:00 Understanding Parameters of Deductive Verification: an Empirical Investigation of KeY ABSTRACT. As formal verification of software systems is a complex task comprising many algorithms and heuristics, modern theorem provers offer numerous parameters that are to be selected by a user to control how a piece of software is verified. Evidently, the number of parameters even increases with each new release. 
One challenge is that default parameters are often insufficient to close proofs automatically and are not optimal in terms of verification effort. The verification phase becomes hardly accessible to non-experts, who typically must follow a time-consuming trial-and-error strategy to choose the right parameters for even trivial pieces of software. To aid users of deductive verification, we apply machine learning techniques to empirically investigate which parameters and combinations thereof impair or improve provability and verification effort. We exemplify our procedure on the deductive verification system KeY 2.6.1 and formulate 38 hypotheses, of which only two have been invalidated. We identified parameters that portray a trade-off between high provability and low verification effort, making it possible to prioritize parameter selection for either direction. Our insights give tool builders a better understanding of their control parameters and constitute a stepping stone towards automated deductive verification and better applicability of verification tools for non-experts. 16:30 Boosting the Reuse of Formal Specifications ABSTRACT. Advances in theorem proving have enabled the emergence of a variety of formal developments that, over the years, have resulted in large corpora of formalizations. For example, the NASA PVS Library is a collection of 55 formal developments written in the Prototype Verification System (PVS) over a period of almost 30 years and containing more than 28,000 proofs. Unfortunately, the simple accumulation of formal developments does not guarantee their reusability. In fact, in formal systems with very expressive specification languages, it is often the case that a particular conceptual object is defined in different ways. This paper presents a technique to establish sound connections between formal definitions. Such connections support the possibility of (partial) borrowing of proved results from one formal description into another, improving the reusability of formal developments. The technique is described using concepts from the field of universal algebra and algebraic specification. The technique is illustrated with concrete examples taken from formalizations available in the NASA PVS Library. 17:00 Formalization of a Polymorphic Subtyping Algorithm SPEAKER: Jinxu Zhao ABSTRACT. Modern functional programming languages such as Haskell support sophisticated forms of type-inference, even in the presence of higher-order polymorphism. Central to such advanced forms of type-inference is an algorithm for polymorphic subtyping. This paper formalizes an algorithmic specification for polymorphic subtyping in the Abella theorem prover. The algorithmic specification is shown to be decidable, and sound and complete with respect to Odersky and Laufer's well-known declarative formulation of polymorphic subtyping. While the meta-theoretical results are not new, as far as we know our work is the first to mechanically formalize them. Moreover, our algorithm differs from those currently in the literature by using a novel approach based on worklist judgements. Worklist judgements simplify the propagation of information required by the unification process during subtyping. Furthermore, they enable a simple formulation of the meta-theoretical properties, which can be easily encoded in theorem provers. 17:30 A Formal Equational Theory for Call-by-Push-Value ABSTRACT. 
Establishing that two programs are contextually equivalent is hard, yet essential for reasoning about semantics preserving program transformations such as compiler optimizations. We adapt Lassen's normal form bisimulations technique to establish the soundness of equational theories for both an untyped call-by-value lambda calculus and a variant of Levy's call-by-push-value language. We demonstrate that our equational theory significantly simplifies the verification of optimizations. 16:00-18:00 Session 51D (LICS) Location: Maths LT1 16:00 One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata ABSTRACT. We present a unified translation of LTL formulas into deterministic Rabin automata, limit-deterministic Büchi automata, and nondeterministic Büchi automata. The translations yield automata of asymptotically optimal size (double or single exponential, respectively). All three translations are derived from one single Master Theorem of purely logical nature. The Master Theorem decomposes the language of a formula into a positive boolean combination of languages that can be translated into ω-automata by elementary means. In particular, the breakpoint, Safra, and ranking constructions used in other translations are not needed. 16:20 A Simple and Optimal Complementation Algorithm for Büchi Automata SPEAKER: Joel Allred ABSTRACT. Complementation of Büchi automata is well known for being complex, as Büchi automata in general are nondeterministic. In the worst case, a state-space growth of $O((0.76n)^n)$ cannot be avoided. Experimental results suggest that complementation algorithms perform better on average when they are structurally simple. In this paper, we present a simple algorithm for complementing Büchi automata, operating directly on subsets of states, structured into state-set tuples (similar to slices), and producing a deterministic automaton. The second step in the construction is then a complementation procedure that resembles the straightforward complementation algorithm for deterministic Büchi automata, the latter algorithm actually being a special case of our construction. Finally, we prove our construction to be optimal, i.e.\ having an upper bound in $O((0.76n)^n)$, and furthermore calculate the $0.76$ factor in a novel exact way. 16:40 The State Complexity of Alternating Automata ABSTRACT. This paper studies the complexity of languages of finite words using automata theory. To go beyond the class of regular languages, we consider infinite automata and the notion of state complexity defined by Karp. We look at alternating automata as introduced by Chandra, Kozen and Stockmeyer: such machines run independent computations on the word and gather their answers through boolean combinations. We devise a lower bound technique relying on boundedly generated lattices of languages, and give two applications of this technique. The first is a hierarchy theorem, stating that there are languages of arbitrarily high polynomial alternating state complexity, and the second is a linear lower bound on the alternating state complexity of the prime numbers written in binary. This second result strengthens a result of Hartmanis and Shank from 1968, which implies an exponentially worse lower bound for the same model. 17:00 Automaton-Based Criteria for Membership in CTL ABSTRACT. Computation Tree Logic (CTL) is widely used in formal verification, however, unlike linear temporal logic (LTL), its connection to automata over words and trees is not yet fully understood. 
Moreover, the long-sought connection between LTL and CTL is still missing; it is not known whether their common fragment is decidable, and there are very limited necessary conditions and sufficient conditions for checking whether an LTL formula is definable in CTL. We provide sufficient conditions and necessary conditions for LTL formulas and omega-regular languages to be expressible in CTL. The conditions are automaton-based; we first tighten the automaton characterization of CTL to the class of Hesitant Alternating Linear Tree Automata (HLT), and then derive the conditions by relating the cycles of a word automaton for a given omega-regular language to the cycles of a potentially equivalent HLT. The new conditions allow us to simplify proofs of known results on languages that are definable, or not, in CTL, as well as to prove new results. Among other things, they allow us to refute a conjecture by Clarke and Draghicescu from 1988, regarding a condition for a CTL* formula to be expressible in CTL. 17:20 Separability by piecewise testable languages and downward closures beyond subwords ABSTRACT. We introduce a flexible class of well-quasi-orderings (WQOs) on words that generalizes the ordering of (not necessarily contiguous) subwords. Each such WQO induces a class of piecewise testable languages (PTLs) as Boolean combinations of upward closed sets. In this way, a range of regular language classes arises as PTLs. Moreover, each of the WQOs guarantees regularity of all downward closed sets. We consider two problems. First, we study which (perhaps non-regular) language classes permit a decision procedure to decide whether two given languages are separable by a PTL with respect to a given WQO. Second, we want to effectively compute downward closures with respect to these WQOs. Our first main result is that for each of the WQOs, under mild assumptions, both problems reduce to the simultaneous unboundedness problem (SUP) and are thus solvable for many powerful system classes. In the second main result, we apply the framework to show decidability of separability of regular languages by $\mathcal{B}\Sigma_1[<, \mathsf{mod}]$, a fragment of first-order logic with modular predicates. 17:40 Regular Transducer Expressions for Regular Transformations over infinite words SPEAKER: Vrunda Dave ABSTRACT. Functional MSO transductions, deterministic two-way transducers, as well as streaming string transducers are all equivalent models for regular functions. In this paper, we show that every regular function, either on finite words or on infinite words, captured by a deterministic two-way transducer, can be described with a regular transducer expression (RTE). For infinite words, the transducer uses Muller acceptance and omega-regular look-ahead. RTEs are constructed from constant functions using the combinators if-then-else (deterministic choice), Hadamard product, and unambiguous versions of the Cauchy product, the 2-chained Kleene-iteration and the 2-chained omega-iteration. Our proof works for transformations of both finite and infinite words, extending the result on finite words of Alur et al. in LICS'14. In order to construct an RTE associated with a deterministic two-way Muller transducer with look-ahead, we introduce the notion of transition monoid for such two-way transducers where the look-ahead is captured by some backward deterministic Büchi automaton. 
Then, we use an unambiguous version of Imre Simon's famous forest factorization theorem in order to derive a "good" (omega-)regular expression for the domain of the two-way transducer. "Good" expressions are unambiguous, and Kleene-plus as well as omega-iterations are only used on subexpressions corresponding to idempotent elements of the transition monoid. The combinator expressions are finally constructed by structural induction on the "good" (omega-)regular expression describing the domain of the transducer. 16:00-18:00 Session 51E (LICS) Location: Maths LT3 16:00 Enriching a Linear/Non-linear Lambda Calculus: A Programming Language for String Diagrams ABSTRACT. Linear/non-linear (LNL) models, as described by Benton, soundly model an LNL term calculus and LNL logic closely related to intuitionistic linear logic. Every such model induces a canonical enrichment that we show soundly models an LNL lambda calculus for string diagrams, introduced by Rios and Selinger (with primary application in quantum computing). Our abstract treatment of this language leads to simpler concrete models compared to those presented so far. We also extend the language with general recursion and prove soundness. Finally, we present an adequacy result for the diagram-free fragment of the language which corresponds to a modified version of Benton and Wadler's adjoint calculus with recursion. 16:20 An algebraic theory of Markov processes SPEAKER: Giorgio Bacci ABSTRACT. Markov processes are fundamental models of probabilistic transition systems and are the underlying semantics of probabilistic programs. We give an algebraic axiomatization of Markov processes using the framework of quantitative equational reasoning introduced at LICS 2016. We present the theory in a structured way using work of Hyland et al. on combining monads. We take the interpolative barycentric algebras of LICS 2016, which capture the Kantorovich metric, and combine them with a theory of contractive operators to give the required axiomatization of Markov processes for both discrete and continuous state spaces. This work, apart from its intrinsic interest, shows how one can extend the general notion of combining effects to the quantitative setting. 16:40 Boolean-Valued Semantics for Stochastic Lambda-Calculus ABSTRACT. The ordinary untyped lambda-calculus has a set-theoretic model proposed in two related forms by Scott and Plotkin in the 1970s. Recently Scott saw how to extend such $\lambda$-calculus models using random variables in a standard way. However, to do reasoning and to add further features, it is better to interpret the construction in a higher-order Boolean-valued model theory using the standard measure algebra. In this paper we develop the semantics of an extended stochastic lambda-calculus suitable for a simple probabilistic programming language, and we exhibit a number of key equations satisfied by the terms of our example language. The terms are interpreted using a continuation-style semantics along with an additional argument, an infinite sequence of coin tosses which serve as a source of randomness. The construction of the model requires a subtle measure-theoretic analysis of the space of coin-tossing sequences. We also introduce a fixed-point operator as a new syntactic construct, as beta-reduction turns out not to be sound for all terms in our semantics. Finally, we develop a new notion of equality between terms valued by elements of the measure algebra, allowing one to reason about terms that may not be equal almost everywhere. 
This, we hope, provides a new framework for developing reasoning about probabilistic programs and their properties of higher type. 17:00 Sound up-to techniques and Complete abstract domains ABSTRACT. Abstract interpretation is a method to automatically find invariants of programs or pieces of code whose semantics is given via least fixed-points. Up-to techniques have been introduced as enhancements of coinduction, an abstract principle to prove properties expressed as greatest fixed-points. While abstract interpretation is always sound by definition, the soundness of up-to techniques needs some ingenuity to be proven. For completeness, the setting is switched: up-to techniques are always complete, while abstract domains are not. In this work we show that, under reasonable assumptions, there is an evident connection between sound up-to techniques and complete abstract domains. 17:20 Every λ-Term is Meaningful for the Infinitary Relational Model ABSTRACT. Infinite types and formulas are known to have really curious and unsound behaviors. For instance, they allow one to type Ω, the auto-autoapplication, and they thus do not ensure any form of normalization/productivity. Moreover, in most infinitary frameworks, it is not difficult to define a type R that can be assigned to every λ-term. However, these observations do not say much about what coinductive (i.e. infinitary) type grammars are able to provide: it is for instance very difficult to know what types (besides R) can be assigned to a given term in this setting. We begin with a discussion on the expressivity of different forms of infinite types. Then, using the resource-awareness of sequential intersection types (system S) and tracking, we prove that infinite types are able to characterize the order (arity) of every λ-term and that, in the infinitary extension of the relational model, every term has a “meaning”, i.e., a non-empty denotation. From the technical point of view, we must deal with the total lack of productivity guarantee for typable terms: we do so by importing methods inspired by first-order model theory. 17:40 Probabilistic Böhm Trees and Probabilistic Separation ABSTRACT. We study the notion of observational equivalence in the call-by-name probabilistic lambda-calculus, where two terms are said to be observationally equivalent if, under any context, their head reductions converge with the same probability. Our goal is to generalise the separation theorem to this probabilistic setting. To do so we define probabilistic Böhm trees and probabilistic Nakajima trees, and we mix the well-known Böhm-out technique with some new techniques to manipulate and separate probability distributions. 16:00-18:00 Session 51F: CDCL (SAT) 16:00 Using Combinatorial Benchmarks to Probe the Reasoning Power of Pseudo-Boolean Solvers SPEAKER: Jan Elffers ABSTRACT. We study cdcl-cuttingplanes, Open-WBO, and Sat4j, three successful solvers from the Pseudo-Boolean Competition 2016, and evaluate them by performing experiments on crafted benchmarks designed to be trivial for the cutting planes (CP) proof system underlying pseudo-Boolean (PB) proof search, but yet potentially tricky for PB solvers. Our results demonstrate severe shortcomings in state-of-the-art PB solving techniques. Despite the fact that our benchmarks have linear-size tree-like CP proofs, the solvers often perform quite badly even for very small instances. Our analysis is that this shows that solvers need to explore stronger methods of pseudo-Boolean reasoning within cutting planes. 
We make an empirical observation from the competition data that many of the easy crafted instances are also infeasible over the rational numbers, or have small strong backdoors to PB instances without rational solutions. This raises the intriguing question of whether the existence of such backdoors can be correlated with easiness/hardness. However, for some of our constructed benchmark families even rationally infeasible instances are completely beyond reach. This indicates that PB solvers need to get better not only at Boolean reasoning but even at linear programming. Finally, we compare CP-based solvers with CDCL and MIP solvers. For those of our benchmarks where the natural CNF encodings admit efficient resolution proofs, we see that the CDCL-based solver Open-WBO is orders of magnitude faster than the CP-based solvers cdcl-cuttingplanes and Sat4j (though it seems very sensitive to the ordering of the input). And the MIP solver Gurobi beats all of these solvers across the board. These experimental results point to several crucial challenges in the quest for more efficient pseudo-Boolean solvers, and we also believe that a further study of our benchmarks could shed important light on the potential and limitations of current state-of-the-art PB solving. 16:30 Machine Learning-based Restart Policy for CDCL SAT Solvers SPEAKER: Jia Liang ABSTRACT. Restarts are a critically important heuristic in most modern conflict-driven clause-learning (CDCL) SAT solvers. The precise reason as to why and how restarts enable CDCL solvers to scale efficiently remains obscure. In this paper we address this question, and provide some answers that enabled us to design a new effective machine learning-based restart policy. Specifically, we provide evidence that restarts improve the quality of learnt clauses as measured by one of the best-known clause quality metrics, namely, literal block distance (LBD). More precisely, we show that more frequent restarts decrease the LBD of learnt clauses, which in turn improves solver performance. We also note that too many restarts can be harmful because of the computational overhead of rebuilding the search tree from scratch too frequently. With this trade-off in mind, between learning better clauses and the computational overhead of rebuilding the search tree, we introduce a new machine learning-based restart policy that predicts the quality of the next learnt clause based on the history of previously learnt clauses. The restart policy erases the solver’s search tree during its run if it predicts that the quality of the next learnt clause is below some dynamic threshold that is determined by the solver’s history on the given input. Our machine learning-based restart policy is based on two observations gleaned from our study of LBDs of learned clauses. First, we discover that high LBD percentiles can be approximated with z-scores of the normal distribution. Second, we find that LBDs, viewed as a sequence, are correlated and hence the LBDs of past learned clauses can be used to predict the LBD of future ones. With these observations in place, and techniques to exploit them, our new restart policy is shown to be effective over a large benchmark from the SAT Competition 2014 to 2017. 17:00 Chronological Backtracking
# plotnine.geoms.geom_sina

class plotnine.geoms.geom_sina(mapping=None, data=None, **kwargs)

Draw a sina plot. A sina plot is a data visualization chart suitable for plotting any single variable in a multiclass dataset. It is an enhanced jitter strip chart, where the width of the jitter is controlled by the density distribution of the data within each class.

Usage

geom_sina(mapping=None, data=None, stat='sina', position='dodge', na_rm=False, inherit_aes=True, show_legend=None, **kwargs)

Only the mapping and data can be positional, the rest must be keyword arguments. **kwargs can be aesthetics (or parameters) used by the stat.

Parameters

mapping : aes, optional
    Aesthetic mappings created with aes(). If specified and inherit.aes=True, it is combined with the default mapping for the plot. You must supply mapping if there is no plot mapping.

| Aesthetic | Default value |
|-----------|---------------|
| **x**     |               |
| **y**     |               |
| alpha     | 1             |
| color     | 'black'       |
| fill      | None          |
| group     |               |
| shape     | 'o'           |
| size      | 1.5           |
| stroke    | 0.5           |

The bold aesthetics are required.

data : dataframe, optional
    The data to be displayed in this layer. If None, the data from the ggplot() call is used. If specified, it overrides the data from the ggplot() call.

stat : str or stat, optional (default: stat_sina)
    The statistical transformation to use on the data for this layer. If it is a string, it must be registered and known to Plotnine.

position : str or position, optional (default: position_dodge)
    Position adjustment. If it is a string, it must be registered and known to Plotnine.

na_rm : bool, optional (default: False)
    If False, removes missing values with a warning. If True, silently removes missing values.

inherit_aes : bool, optional (default: True)
    If False, overrides the default aesthetics.

show_legend : bool or dict, optional (default: None)
    Whether this layer should be included in the legends. None, the default, includes any aesthetics that are mapped. If a bool, False never includes and True always includes. A dict can be used to exclude specific aesthetics of the layer from showing in the legend, e.g. show_legend={'color': False}; any other aesthetics are included by default.

References

Sidiropoulos, N., S. H. Sohi, T. L. Pedersen, B. T. Porse, O. Winther, N. Rapin, and F. O. Bagger. 2018. "SinaPlot: An Enhanced Chart for Simple and Truthful Representation of Single Observations over Multiple Classes." J. Comp. Graph. Stat 27: 673–76.
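A minimal usage sketch is shown below. It assumes the mpg dataset bundled with plotnine (any data frame with one categorical and one continuous column would do) and a plotnine version recent enough to export geom_sina; the colour and size values are illustrative, not defaults.

```python
from plotnine import ggplot, aes, geom_sina
from plotnine.data import mpg  # example data shipped with plotnine

# Sina plot of highway mileage per vehicle class:
# x is the categorical grouping variable, y the continuous variable.
plot = (
    ggplot(mpg, aes(x="class", y="hwy"))
    + geom_sina(color="#4477AA", size=1.5)
)
print(plot)  # or plot.save("sina.png") to write the figure to disk
```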
# How does the rand() function in C work? [closed]

I want to know how rand() works (even when I don't provide any seed, how does it produce PRNs?). Thanks!

- Given the typical implementation of rand(), you're using a quite generous definition of "works". IMO it does not work, even for non-security-related stuff. – CodesInChaos Mar 20 at 17:47
- The rand() function of C is not a cryptographic pseudo-random generator, and as such off-topic here. Therefore I'm closing this question. – Paŭlo Ebermann Mar 21 at 21:11

## closed as off topic by fgrieu, Paŭlo Ebermann♦ Mar 21 at 21:09

Questions on Cryptography Stack Exchange are expected to relate to cryptography within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here. If this question can be reworded to fit the rules in the help center, please edit the question.

Well, successive calls to rand() just produce numbers that "look random". Now, rand() doesn't take a seed; that means that every time the program runs, calls to rand() will generate the exact same sequence of numbers. This is a deliberate design decision; that means that the program behavior is reproducible (which can be important if you're debugging). If you don't want this behavior, well, that's why srand() is provided. As for what "looks random" means, well, it essentially means "if you eyeball the output, no obvious pattern jumps out at you". When working with cryptography (you did ask in the crypto stack exchange), we don't have much use for rand(); even if you feed in a seed via srand(), that seed is generally too small to be useful, and even if the rand() output isn't obviously patterned, crypto has much higher criteria for randomness. On the other hand, other uses need not have such high standards. I believe rand() may be useful within some randomized algorithms; just not cryptographical ones.

- Even when eyeballing the plot of rand() it often exhibits total failure – CodesInChaos Mar 20 at 17:48

You can see in stdlib.c (I found it here) that rand() is defined as:

```c
static long holdrand = 1L;
...
int rand()
{
    return (((holdrand = holdrand * 214013L + 2531011L) >> 16) & 0x7fff);
}
```

Of course this may vary from distro to distro and version to version. You should find your local stdlib.c file (or the one that corresponds to your distro) to see how exactly it's implemented. srand() merely changes holdrand:

```c
void srand(unsigned int seed)
{
    holdrand = (long) seed;
}
```

so, as @poncho said, it merely garbles the output. No true randomness whatsoever.

- That implementation is just one possibility; in fact, the C standard provides a different sample implementation (similar, but with a different multiplier and constant). Different C compilers/libraries are perfectly free to use something else (and likely will). – poncho Mar 20 at 19:04
- Yes that's very likely. I posted the source to show there are no elaborate algorithms behind rand(), just vile trickery and deception. – rath Mar 20 at 19:08
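To make the seeding behaviour concrete, here is a small self-contained sketch in standard C. It only illustrates the reproducibility point made above: an unseeded rand() behaves as if srand(1) had been called, so the first block prints the same numbers on every run, while seeding with the clock makes runs differ. It is still a deterministic function of the seed and, as noted, unsuitable for cryptographic use.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void print_three(const char *label)
{
    /* Call rand() into variables first so the print order is well defined. */
    int a = rand();
    int b = rand();
    int c = rand();
    printf("%s: %d %d %d\n", label, a, b, c);
}

int main(void)
{
    /* Unseeded: the C standard says this is equivalent to srand(1),
       so the same numbers appear on every run of the program. */
    print_three("unseeded");

    /* Seeded with the clock: different runs now produce different
       sequences, but the output is still fully determined by the seed. */
    srand((unsigned int) time(NULL));
    print_three("seeded");

    return 0;
}
```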
auto_math_text
web
# 38 Phylogenetic trees

## 38.1 Overview

Phylogenetic trees are used to visualize and describe the relatedness and evolution of organisms based on the sequence of their genetic code. They can be constructed from genetic sequences using distance-based methods (such as the neighbor-joining method) or character-based methods (such as maximum likelihood and Bayesian Markov chain Monte Carlo methods).

Next-generation sequencing (NGS) has become more affordable and is becoming more widely used in public health to describe pathogens causing infectious diseases. Portable sequencing devices decrease the turnaround time and hold promise for making data available to support outbreak investigation in real time. NGS data can be used to identify the origin or source of an outbreak strain and its propagation, as well as to determine the presence of antimicrobial resistance genes. To visualize the genetic relatedness between samples, a phylogenetic tree is constructed.

In this page we will learn how to use the ggtree package, which allows for combined visualization of phylogenetic trees with additional sample data in the form of a dataframe. This will enable us to observe patterns and improve understanding of the outbreak dynamics.

## 38.2 Preparation

This code chunk shows the loading of required packages. In this handbook we emphasize p_load() from pacman, which installs the package if necessary and loads it for use. You can also load installed packages with library() from base R. See the page on R basics for more information on R packages.

pacman::p_load(
  rio,          # import/export
  here,         # relative file paths
  tidyverse,    # general data management and visualization
  ape,          # to import and export phylogenetic files
  ggtree,       # to visualize phylogenetic files
  treeio,       # to visualize phylogenetic files
  ggnewscale)   # to add additional layers of color schemes

### Import data

There are several different formats in which a phylogenetic tree can be stored (e.g. Newick, NEXUS, Phylip). A common one is the Newick file format (.nwk), which is the standard for representing trees in computer-readable form. This means an entire tree can be expressed in a string format such as "((t2:0.04,t1:0.34):0.89,(t5:0.37,(t4:0.03,t3:0.67):0.9):0.59);", listing all nodes and tips and their relationship (branch length) to each other.

Note: It is important to understand that the phylogenetic tree file in itself does not contain sequencing data, but is merely the result of the genetic distances between the sequences. We therefore cannot extract sequencing data from a tree file.

First, we use the read.tree() function from the ape package to import a Newick phylogenetic tree file in .txt format, and store it in a list object of class "phylo". If necessary, use the here() function from the here package to specify the relative file path.

Note: In this case the Newick tree is saved as a .txt file for easier handling and downloading from GitHub.

tree <- ape::read.tree("Shigella_tree.txt")

We inspect our tree object and see it contains 299 tips (or samples) and 236 internal nodes.

tree

##
## Phylogenetic tree with 299 tips and 236 internal nodes.
##
## Tip labels:
##   SRR5006072, SRR4192106, S18BD07865, S18BD00489, S17BD08906, S17BD05939, ...
## Node labels:
##   17, 29, 100, 67, 100, 100, ...
##
## Rooted; includes branch lengths.
Second, we import a table stored as a .csv file with additional information for each sequenced sample, such as gender, country of origin and attributes for antimicrobial resistance, using the import() function from the rio package:

sample_data <- import("sample_data_Shigella_tree.csv")

Below are the first 50 rows of the data:

### Clean and inspect

We clean and inspect our data. In order to assign the correct sample data to the phylogenetic tree, the values in the column Sample_ID in the sample_data data frame need to match the tip.label values in the tree file.

We check the formatting of the tip.label entries in the tree file by looking at the first 6 entries using head() from base R.

head(tree$tip.label)

## [1] "SRR5006072" "SRR4192106" "S18BD07865" "S18BD00489" "S17BD08906" "S17BD05939"

We also make sure the first column in our sample_data data frame is Sample_ID. We look at the column names of our dataframe using colnames() from base R.

colnames(sample_data)

##  [1] "Sample_ID"      "serotype"       "Country"        "Continent"      "Travel_history"
##  [6] "Year"           "Belgium"        "Source"         "Gender"         "gyrA_mutations"
## [11] "macrolide_resistance_genes"      "MIC_AZM"        "MIC_CIP"

We look at the Sample_ID values in the data frame to make sure the formatting is the same as in the tip.label entries (e.g. letters are all capitals, no extra underscores _ between letters and numbers, etc.).

head(sample_data$Sample_ID) # we again inspect only the first 6 using head()

## [1] "S17BD05944" "S15BD07413" "S18BD07247" "S19BD07384" "S18BD07338" "S18BD02657"

We can also check whether all samples are present in the tree file and vice versa by generating a logical vector of TRUE or FALSE where they do or do not match. These are not printed here, for simplicity.

sample_data$Sample_ID %in% tree$tip.label

tree$tip.label %in% sample_data$Sample_ID

We can use these vectors to show any sample IDs that are not on the tree (there are none).

sample_data$Sample_ID[!tree$tip.label %in% sample_data$Sample_ID]

## character(0)

Upon inspection we can see that the format of Sample_ID in the dataframe corresponds to the format of sample names at the tip.label entries. These do not have to be sorted in the same order to be matched. We are ready to go!

## 38.3 Simple tree visualization

### Different tree layouts

ggtree offers many different layout formats, and some may be more suitable for your specific purpose than others. Below are a few demonstrations. For other options see this online book.

Here are some example tree layouts:

ggtree(tree)                                               # simple linear tree
ggtree(tree, branch.length = "none")                       # simple linear tree with all tips aligned
ggtree(tree, layout = "circular")                          # simple circular tree
ggtree(tree, layout = "circular", branch.length = "none")  # simple circular tree with all tips aligned

### Simple tree plus sample data

The %<+% operator is used to connect the sample_data data frame to the tree file.
The easiest annotation of your tree is the addition of the sample names at the tips, as well as coloring of tip points and, if desired, the branches.

Here is an example of a circular tree:

ggtree(tree, layout = "circular", branch.length = 'none') %<+% sample_data + # %<+% adds dataframe with sample data to tree
  aes(color = I(Belgium)) +               # color the branches according to a variable in your dataframe
  scale_color_manual(
    name = "Sample Origin",               # name of your color scheme (will show up in the legend like this)
    breaks = c("Yes", "No"),              # the different options in your variable
    labels = c("NRCSS Belgium", "Other"), # how you want the different options named in your legend, allows for formatting
    values = c("blue", "black"),          # the color you want to assign to the variable
    na.value = "black") +                 # color NA values in black as well
  new_scale_color() +                     # allows to add an additional color scheme for another variable
  geom_tippoint(
    mapping = aes(color = Continent),     # tip color by continent. You may change shape adding "shape = "
    size = 1.5) +                         # define the size of the point at the tip
  scale_color_brewer(
    name = "Continent",                   # name of your color scheme (will show up in the legend like this)
    palette = "Set1",                     # we choose a set of colors coming with the brewer package
    na.value = "grey") +                  # for the NA values we choose the color grey
  geom_tiplab(                            # adds name of sample to tip of its branch
    color = 'black',                      # (add as many text lines as you wish with +, but you may need to adjust offset value to place them next to each other)
    offset = 1,
    size = 1,
    geom = "text",
    align = TRUE) +
  ggtitle("Phylogenetic tree of Shigella sonnei") + # title of your graph
  theme(
    axis.title.x = element_blank(),       # removes x-axis title
    axis.title.y = element_blank(),       # removes y-axis title
    legend.title = element_text(          # defines font size and format of the legend title
      face = "bold",
      size = 12),
    legend.text = element_text(           # defines font size and format of the legend text
      face = "bold",
      size = 10),
    plot.title = element_text(            # defines font size and format of the plot title
      size = 12,
      face = "bold"),
    legend.position = "bottom",           # defines placement of the legend
    legend.box = "vertical",              # defines placement of the legend
    legend.margin = margin())

You can export your tree plot with ggsave() as you would any other ggplot object. Written this way, ggsave() saves the last image produced to the file path you specify. Remember that you can use here() and relative file paths to easily save in subfolders, etc.

ggsave("example_tree_circular_1.png", width = 12, height = 14)

## 38.4 Tree manipulation

Sometimes you may have a very large phylogenetic tree and you are only interested in one part of the tree. For example, you may have produced a tree including historical or international samples to get a large overview of where your dataset might fit in the bigger picture. But then, to look closer at your data, you want to inspect only that portion of the bigger tree.

Since the phylogenetic tree file is just the output of sequencing data analysis, we cannot manipulate the order of the nodes and branches in the file itself. These have already been determined in previous analysis from the raw NGS data. We are, however, able to zoom into parts, hide parts and even subset part of the tree.

### Zoom in

If you don't want to "cut" your tree, but only inspect part of it more closely, you can zoom in to view a specific part.

First, we plot the entire tree in linear format and add numeric labels to each node in the tree.
p <- ggtree(tree) %<+% sample_data +
  geom_tiplab(size = 1.5) +     # labels the tips of all branches with the sample name in the tree file
  geom_text2(
    mapping = aes(subset = !isTip, label = node),
    size = 5,
    color = "darkred",
    hjust = 1,
    vjust = 1)                  # labels all the nodes in the tree

p  # print

To zoom in to one particular branch (sticking out to the right), use viewClade() on the ggtree object p and provide the node number to get a closer look:

viewClade(p, node = 452)

### Collapsing branches

However, we may want to ignore this branch and can collapse it at that same node (node nr. 452) using collapse(). This tree is defined as p_collapsed.

p_collapsed <- collapse(p, node = 452)
p_collapsed

For clarity, when we print p_collapsed, we add a geom_point2() (a blue diamond) at the node of the collapsed branch.

p_collapsed +
  geom_point2(aes(subset = (node == 452)),  # we assign a symbol to the collapsed node
              size = 5,                     # define the size of the symbol
              shape = 23,                   # define the shape of the symbol
              fill = "steelblue")           # define the color of the symbol

### Subsetting a tree

If we want to make a more permanent change and create a new, reduced tree to work with, we can subset part of it with tree_subset(). Then you can save it as a new Newick tree file or .txt file.

First, we inspect the tree nodes and tip labels in order to decide what to subset.

ggtree(
  tree,
  branch.length = 'none',
  layout = 'circular') %<+% sample_data +   # we add the sample data using the %<+% operator
  geom_tiplab(size = 1) +                   # label tips of all branches with sample name in tree file
  geom_text2(
    mapping = aes(subset = !isTip, label = node),
    size = 3,
    color = "darkred") +                    # labels all the nodes in the tree
  theme(
    legend.position = "none",               # removes the legend altogether
    axis.title.x = element_blank(),
    axis.title.y = element_blank(),
    plot.title = element_text(size = 12, face = "bold"))

Now, say we have decided to subset the tree at node 528 (keep only tips within this branch after node 528) and we save it as a new sub_tree1 object:

sub_tree1 <- tree_subset(
  tree,
  node = 528)  # we subset the tree at node 528

Let's have a look at subset tree 1:

ggtree(sub_tree1) +
  geom_tiplab(size = 3) +
  ggtitle("Subset tree 1")

You can also subset based on one particular sample, specifying how many nodes "backwards" you want to include. Let's subset the same part of the tree based on a sample, in this case S17BD07692, going back 9 nodes, and we save it as a new sub_tree2 object:

sub_tree2 <- tree_subset(
  tree,
  "S17BD07692",
  levels_back = 9)  # levels_back defines how many nodes backwards from the sample tip you want to go

Let's have a look at subset tree 2:

ggtree(sub_tree2) +
  geom_tiplab(size = 3) +
  ggtitle("Subset tree 2")

You can also save your new tree, either as a Newick file or even a text file, using the write.tree() function from the ape package:

# to save in .nwk format
ape::write.tree(sub_tree2, file = 'data/phylo/Shigella_subtree_2.nwk')

# to save in .txt format
ape::write.tree(sub_tree2, file = 'data/phylo/Shigella_subtree_2.txt')

### Rotating nodes in a tree

As mentioned before, we cannot change the order of tips or nodes in the tree, as this is based on their genetic relatedness and is not subject to visual manipulation. But we can rotate branches around nodes if that eases our visualization.

First, we plot our new subset tree 2 with node labels to choose the node we want to manipulate, and store it in a ggtree plot object p.
p <- ggtree(sub_tree2) +
  geom_tiplab(size = 4) +
  geom_text2(aes(subset = !isTip, label = node),  # labels all the nodes in the tree
             size = 5,
             color = "darkred",
             hjust = 1,
             vjust = 1)

p

We can then manipulate nodes by applying ggtree::rotate() or ggtree::flip().

Note: to illustrate which nodes we are manipulating, we first apply the geom_hilight() function from ggtree to highlight the samples in the nodes we are interested in, and store that ggtree plot object in a new object p1.

p1 <- p +
  geom_hilight(            # highlights node 39 in blue, "extend =" allows us to define the length of the color block
    node = 39,
    fill = "steelblue",
    extend = 0.0017) +
  geom_hilight(            # highlights node 37 in yellow
    node = 37,
    fill = "yellow",
    extend = 0.0017) +
  ggtitle("Original tree")

p1  # print

Now we can rotate node 37 in object p1 so that the samples on node 38 move to the top. We store the rotated tree in a new object p2.

p2 <- rotate(p1, 37) +
  ggtitle("Rotated Node 37")

p2  # print

Or we can use the flip command to rotate node 36 in object p1 and switch node 37 to the top and node 39 to the bottom. We store the flipped tree in a new object p3.

p3 <- flip(p1, 39, 37) +
  ggtitle("Rotated Node 36")

p3  # print

### Example subtree with sample data annotation

Let's say we are investigating the cluster of cases with clonal expansion which occurred in 2017 and 2018 at node 39 in our sub-tree. We add the year of strain isolation as well as travel history, and color by country to see the origin of other closely related strains:

ggtree(sub_tree2) %<+% sample_data +     # we use the %<+% operator to link to the sample_data
  geom_tiplab(                           # labels the tips of all branches with the sample name in the tree file
    size = 2.5,
    offset = 0.001,
    align = TRUE) +
  theme_tree2() +
  xlim(0, 0.015) +                       # set the x-axis limits of our tree
  geom_tippoint(aes(color = Country),    # color the tip points by country
                size = 1.5) +
  scale_color_brewer(
    name = "Country",
    palette = "Set1",
    na.value = "grey") +
  geom_tiplab(                           # add isolation year as a text label at the tips
    aes(label = Year),
    color = 'blue',
    offset = 0.0045,
    size = 3,
    linetype = "blank",
    geom = "text",
    align = TRUE) +
  geom_tiplab(                           # add travel history as a text label at the tips, in red color
    aes(label = Travel_history),
    color = 'red',
    offset = 0.006,
    size = 3,
    linetype = "blank",
    geom = "text",
    align = TRUE) +
  ggtitle("Phylogenetic tree of Belgian S. sonnei strains with travel history") + # add plot title
  xlab("genetic distance (0.001 = 4 nucleotides difference)") +                   # add a label to the x-axis
  theme(
    axis.title.x = element_text(size = 10),
    axis.title.y = element_blank(),
    legend.title = element_text(face = "bold", size = 12),
    legend.text = element_text(face = "bold", size = 10),
    plot.title = element_text(size = 12, face = "bold"))

Our observation points towards an import event of strains from Asia, which then circulated in Belgium over the years and seems to have caused our latest outbreak.

## 38.5 More complex trees: adding heatmaps of sample data

We can add more complex information, such as categorical presence of antimicrobial resistance genes and numeric values for actually measured resistance to antimicrobials, in the form of a heatmap using the ggtree::gheatmap() function.

First we need to plot our tree (this can be either linear or circular) and store it in a new ggtree plot object p. We will use sub_tree2 from the subsetting section above.
p <- ggtree(sub_tree2, branch.length = 'none', layout = 'circular') %<+% sample_data +
  geom_tiplab(size = 3) +
  theme(
    legend.position = "none",
    axis.title.x = element_blank(),
    axis.title.y = element_blank(),
    plot.title = element_text(
      size = 12,
      face = "bold",
      hjust = 0.5,
      vjust = -15))

p

Second, we prepare our data. To visualize different variables with new color schemes, we subset our dataframe to the desired variable. It is important to add the Sample_ID as rownames, otherwise it cannot match the data to the tree tip.label entries.

In our example we want to look at gender and at mutations that could confer resistance to Ciprofloxacin, an important first-line antibiotic used to treat Shigella infections.

We create a dataframe for gender:

gender <- data.frame("gender" = sample_data[,c("Gender")])
rownames(gender) <- sample_data$Sample_ID

We create a dataframe for mutations in the gyrA gene, which confer Ciprofloxacin resistance:

cipR <- data.frame("cipR" = sample_data[,c("gyrA_mutations")])
rownames(cipR) <- sample_data$Sample_ID

We create a dataframe for the measured minimum inhibitory concentration (MIC) for Ciprofloxacin from the laboratory:

MIC_Cip <- data.frame("mic_cip" = sample_data[,c("MIC_CIP")])
rownames(MIC_Cip) <- sample_data$Sample_ID

We create a first plot adding a binary heatmap for gender to the phylogenetic tree and storing it in a new ggtree plot object h1:

h1 <- gheatmap(p, gender,               # we add a heatmap layer of the gender dataframe to our tree plot
               offset = 10,             # offset shifts the heatmap to the right
               width = 0.10,            # width defines the width of the heatmap column
               color = NULL,            # color defines the border of the heatmap columns
               colnames = FALSE) +      # hides column names for the heatmap
  scale_fill_manual(name = "Gender",    # define the coloring scheme and legend for gender
                    values = c("#00d1b1", "purple"),
                    breaks = c("Male", "Female"),
                    labels = c("Male", "Female")) +
  theme(legend.position = "bottom",
        legend.title = element_text(size = 12),
        legend.text = element_text(size = 10),
        legend.box = "vertical",
        legend.margin = margin())

## Scale for 'y' is already present. Adding another scale for 'y', which will replace the existing scale.
## Scale for 'fill' is already present. Adding another scale for 'fill', which will replace the existing scale.

h1

Then we add information on mutations in the gyrA gene, which confer resistance to Ciprofloxacin.

Note: The presence of chromosomal point mutations in WGS data was previously determined using the PointFinder tool developed by Zankari et al. (see reference in the resources section).

First, we assign a new color scheme to our existing plot object h1 and store it in a new object h2. This enables us to define and change the colors for our second variable in the heatmap.

h2 <- h1 + new_scale_fill()

Then we add the second heatmap layer to h2 and store the combined plots in a new object h3:

h3 <- gheatmap(h2, cipR,                # adds the second row of heatmap describing Ciprofloxacin resistance mutations
               offset = 12,
               width = 0.10,
               colnames = FALSE) +
  scale_fill_manual(name = "Ciprofloxacin resistance \n conferring mutation",
                    values = c("#fe9698", "#ea0c92"),
                    breaks = c("gyrA D87Y", "gyrA S83L"),
                    labels = c("gyrA d87y", "gyrA s83l")) +
  theme(legend.position = "bottom",
        legend.title = element_text(size = 12),
        legend.text = element_text(size = 10),
        legend.box = "vertical",
        legend.margin = margin()) +
  guides(fill = guide_legend(nrow = 2, byrow = TRUE))

## Scale for 'y' is already present. Adding another scale for 'y', which will replace the existing scale.
## Scale for 'fill' is already present. Adding another scale for 'fill', which will replace the existing scale.

h3

We repeat the above process, by first adding a new color scale layer to our existing object h3, and then adding the continuous data on the minimum inhibitory concentration (MIC) of Ciprofloxacin for each strain to the resulting object h4 to produce the final object h5:

# First we add the new coloring scheme:
h4 <- h3 + new_scale_fill()

# then we combine the two into a new plot:
h5 <- gheatmap(h4, MIC_Cip,
               offset = 14,
               width = 0.10,
               colnames = FALSE) +
  scale_fill_continuous(name = "MIC for Ciprofloxacin",  # here we define a gradient color scheme for the continuous variable of MIC
                        low = "yellow",
                        high = "red",
                        breaks = c(0, 0.50, 1.00),
                        na.value = "white") +
  guides(fill = guide_colourbar(barwidth = 5, barheight = 1)) +
  theme(legend.position = "bottom",
        legend.title = element_text(size = 12),
        legend.text = element_text(size = 10),
        legend.box = "vertical",
        legend.margin = margin())

## Scale for 'y' is already present. Adding another scale for 'y', which will replace the existing scale.
## Scale for 'fill' is already present. Adding another scale for 'fill', which will replace the existing scale.

h5

We can do the same exercise for a linear tree:

p <- ggtree(sub_tree2) %<+% sample_data +
  geom_tiplab(size = 3) +  # labels the tips
  theme_tree2() +
  xlab("genetic distance (0.001 = 4 nucleotides difference)") +
  xlim(0, 0.015) +
  theme(legend.position = "none",
        axis.title.y = element_blank(),
        plot.title = element_text(size = 12, face = "bold", hjust = 0.5, vjust = -15))

p

h1 <- gheatmap(p, gender,
               offset = 0.003,
               width = 0.1,
               color = "black",
               colnames = FALSE) +
  scale_fill_manual(name = "Gender",
                    values = c("#00d1b1", "purple"),
                    breaks = c("Male", "Female"),
                    labels = c("Male", "Female")) +
  theme(legend.position = "bottom",
        legend.title = element_text(size = 12),
        legend.text = element_text(size = 10),
        legend.box = "vertical",
        legend.margin = margin())

## Scale for 'y' is already present. Adding another scale for 'y', which will replace the existing scale.
## Scale for 'fill' is already present. Adding another scale for 'fill', which will replace the existing scale.

h1

Then we add Ciprofloxacin resistance mutations after adding another color scheme layer:

h2 <- h1 + new_scale_fill()

h3 <- gheatmap(h2, cipR,
               offset = 0.004,
               width = 0.1,
               color = "black",
               colnames = FALSE) +
  scale_fill_manual(name = "Ciprofloxacin resistance \n conferring mutation",
                    values = c("#fe9698", "#ea0c92"),
                    breaks = c("gyrA D87Y", "gyrA S83L"),
                    labels = c("gyrA d87y", "gyrA s83l")) +
  theme(legend.position = "bottom",
        legend.title = element_text(size = 12),
        legend.text = element_text(size = 10),
        legend.box = "vertical",
        legend.margin = margin()) +
  guides(fill = guide_legend(nrow = 2, byrow = TRUE))

## Scale for 'y' is already present. Adding another scale for 'y', which will replace the existing scale.
## Scale for 'fill' is already present. Adding another scale for 'fill', which will replace the existing scale.
h3

Then we add the minimum inhibitory concentration (MIC) determined by the laboratory:

h4 <- h3 + new_scale_fill()

h5 <- gheatmap(h4, MIC_Cip,
               offset = 0.005,
               width = 0.1,
               color = "black",
               colnames = FALSE) +
  scale_fill_continuous(name = "MIC for Ciprofloxacin",
                        low = "yellow",
                        high = "red",
                        breaks = c(0, 0.50, 1.00),
                        na.value = "white") +
  guides(fill = guide_colourbar(barwidth = 5, barheight = 1)) +
  theme(legend.position = "bottom",
        legend.title = element_text(size = 10),
        legend.text = element_text(size = 8),
        legend.box = "horizontal",
        legend.margin = margin()) +
  guides(shape = guide_legend(override.aes = list(size = 2)))

## Scale for 'y' is already present. Adding another scale for 'y', which will replace the existing scale.
## Scale for 'fill' is already present. Adding another scale for 'fill', which will replace the existing scale.

## 38.6 Resources

Ea Zankari, Rosa Allesøe, Katrine G Joensen, Lina M Cavaco, Ole Lund, Frank M Aarestrup, PointFinder: a novel web tool for WGS-based detection of antimicrobial resistance associated with chromosomal point mutations in bacterial pathogens, Journal of Antimicrobial Chemotherapy, Volume 72, Issue 10, October 2017, Pages 2764–2768, https://doi.org/10.1093/jac/dkx217
auto_math_text
web
## Tuesday, June 12, 2012 ### Fermi $$130\GeV$$ line claimed to reach 5 sigma Evidence for WIMP dark matter could have gotten strong Christoph Weniger's observation of a gamma-ray line in the Fermi photographs of the galaxy has ignited some activity in the astroparticle physics boundary. In eight weeks, the paper has collected 20 citations. Most of the followups tend to be positive; some papers prefer to claim that "nothing can be seen here". The list of mostly negative papers includes a report by the Fermi collaboration itself. The newest paper (an astro-ph paper that is one week old) Strong Evidence for Gamma-ray Lines from the Inner Galaxy was written by Meng Su and Douglas P. Finkbeiner who are Harvard- and Harvard-Smithsonian-affiliated – and seem kind of trustworthy to me for various reasons. In particular, they're two of the four people who found the Fermi bubbles, see the picture above. In the new paper, Finkbeiner and Su claim that the line near $$130\GeV$$ has a significance of 5.0 sigma, exceeding the previous estimates. With some dilution of the probability, they stand at 3.7 sigma but even this figure could reach 5 sigma by the end of 2013. The authors also say that a pair of lines, near $$110$$ and $$130\GeV$$, could provide us with a slightly better fit than a single line although the one-line description isn't bad at all. Even more interestingly, they say that the pair of lines could be compatible with the Higgs in space paradigm. In that 2009-2010 paper, it was conjectured that the dark matter particles, the WIMPs, may annihilate to $$\gamma Z$$, something that e.g. Gordon Kane and others consider the most likely decay channel, but also to $$\gamma h$$ where $$h$$ is the Higgs boson. The kinematics is compatible with the idea that the two lines come exactly from these two decay channels, using the usual estimates for the masses $$90+$$, $$130-$$, and $$140+\GeV$$ for the Z-boson, Higgs boson, and the WIMP, respectively. If these lines – and both of them – got indeed stronger and if their energies continued to match the predictions from the decays, it would be quite a galactic test for particle physics, relativistic kinematics, and the masses of the particles originally extracted from the terrestrial colliders. I think it's clear that not only because of the LHC, we have entered a new era dominated by the experimental data. I don't remember we could have gotten excited by such experimental hints just a few years ago. Of course, most of such hints will go away but some of them may stay with us which would be exciting, indeed.
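As a kinematic aside (standard two-body kinematics, not a claim taken from the post above): for slow-moving WIMPs annihilating essentially at rest via $$\chi\chi\to\gamma X$$, energy–momentum conservation fixes the photon line at

$$E_\gamma = m_\chi\left(1-\frac{m_X^2}{4m_\chi^2}\right),$$

so a $$\gamma\gamma$$ final state puts the line at $$E_\gamma=m_\chi$$, while a heavier partner such as $$X=Z$$ or $$X=h$$ pulls the line below $$m_\chi$$ by $$m_X^2/(4m_\chi)$$. This is why a measured line energy, together with an assumed final state, pins down the WIMP mass, and why a pair of lines at different energies can be cross-checked against the known $$Z$$ and Higgs masses.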
auto_math_text
web
Compare imputation accuracies of different imputation methods (Beagle 5.1, LinkImputeR, kNNi, FILLIN) by calculating the percentage of correctly imputed genotypes in the TASSEL GUI software. Please find more information on the TASSEL software and its documentation at this link.

In this tutorial, I am only showing how one can evaluate the imputation accuracy in the TASSEL GUI using the LinkImpute (LD kNNi) imputation algorithm. In my research experience, I have worked with genotype data of maize, teosinte, soybean, and grapes, and LinkImpute has been an efficient imputation algorithm with a low imputation error rate.

Note: I strongly suggest trying other imputation methods and comparing their error rates, which can be easily done in the TASSEL GUI software by following the steps below:

• Import your genotype data (VCF, Hapmap, and other formats)
• Mask about 1-10% of the genotype data using the Mask Genotype plugin under Data
• Impute the masked genotype data (I use LD kNNi, i.e. LinkImpute), or load imputed data from a different platform such as Beagle, but make sure that imputation was performed on the SAME masked genotype data
• Select the three files (Raw data, Masked data, and Imputed), click Evaluate Imputation Accuracy under Impute, and press OK
• A summary of the evaluation should be generated in the new node

## Steps in TASSEL GUI

The steps above are also illustrated below:

## Plot imputation accuracies

One can plot the imputation accuracies by exporting the evaluation summary stats to R or Excel, as shown in the example below (a minimal plotting sketch is also included after the bibliography):

Thank you for reading this tutorial. I really hope these steps will assist in your analysis. If you have any questions or comments, please comment below or send an email.

### Bibliography

Glaubitz, J. C., Casstevens, T. M., Lu, F., Harriman, J., Elshire, R. J., Sun, Q., & Buckler, E. S. (2014). TASSEL-GBS: a high capacity genotyping by sequencing analysis pipeline. PloS one, 9(2), e90346.

Money, Daniel, et al. "LinkImpute: fast and accurate genotype imputation for nonmodel organisms." G3: Genes, Genomes, Genetics 5.11 (2015): 2383-2390.
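Supplementing the plotting step above: the tutorial suggests R or Excel, but the same bar chart can be sketched in Python as an alternative. The snippet below is only an illustration; the file name imputation_accuracy_summary.csv and the columns Method and Accuracy are hypothetical placeholders for whatever you export from the TASSEL evaluation node.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the TASSEL "Evaluate Imputation Accuracy" summary,
# reshaped to one row per imputation method. Column names are illustrative.
df = pd.read_csv("imputation_accuracy_summary.csv")   # e.g. columns: Method, Accuracy
df = df.sort_values("Accuracy", ascending=False)

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(df["Method"], df["Accuracy"], color="steelblue")
ax.set_xlabel("Imputation method")
ax.set_ylabel("Correctly imputed genotypes (%)")
ax.set_title("Imputation accuracy by method")
ax.set_ylim(0, 100)
plt.tight_layout()
plt.savefig("imputation_accuracy.png", dpi=300)
```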
auto_math_text
web
## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE #### View: Message: [ First | Previous | Next | Last ] By Topic: [ First | Previous | Next | Last ] By Author: [ First | Previous | Next | Last ] Font: Proportional Font Subject: Re: LaTeX's internal char prepresentation (UTF8 or Unicode?) From: Date: Sun, 11 Feb 2001 19:20:49 -0500 Content-Type: text/plain Parts/Attachments: text/plain (22 lines) a small update on unicode and math symbols; roozbeh says     There are also ones that will most probably never get into Unicode, like     the dotless j.     [...]     Unicode recommends modifiers for the unified math symbols, which will     become something like \mathmakelonger{\rightarrow} internally. this is still an active area. there has been a strong counterproposal to explicitly encode the long arrows, based on the fact that they are given distinct entity names in the sgml public entity set isoamsa, and the concept of "disunification" has some strong support. the defining proposal will be distributed in a week or two to the utc for a letter ballot; i can't predict whether it will or won't be accepted, but i can say with assurance that the matter isn't yet settled, and if the decision ends up to use a modifier, it won't be a *unicode* modifier, but one defined in mathml.                                                         -- bb
auto_math_text
web
# Near the End of A PhD and Have No Job by Astro_Dude Tags: None Share this thread: P: 40 Quote by evankiefl Shame we still don't build those. It was a big white elephant they built for some superstitious reason. While working as a programmer at IR, the design engineers used Intergraph Solid Edge modeling software. They drew the sheet metal part in 3D and Solid Edge did everything for them. It would calculate the bend allowance and unfold the part into a flat drawing ready for print out. Engineering is getting like that. Software is analyzing the forces acting on parts making up a bridge. Perhaps the engineer has forgotten how to do it himself. That was in a Statics and Dynamics course he took years ago. Is it possible they have even forgotten their trig? P: 40 Quote by ParticleGrl I feel your pain. I've been looking since last December, when I finished my high energy phd, and am currently tending bar while I look for more challenging work. I've had some luck getting interviews with business consulting firms, so you might want to look at that route. Be careful what you wish for! You will notice that red fire alarm with piezoelectric element on the wall. It WILL damage your hearing. Every month they will do a test: "We will be testing the fire alarm. Please remain in your work areas." If you do anything strange like wear hearing protection in the office or even put your fingers in your ears when it goes off, you'll probably be fired. That's how people are. The job might pay $80,000/year, but if it costs you your hearing, is it worth it? Do something outdoors. Go hunt for meteorites. P: 6,863 Quote by Astro_Dude I really should have learned more fluid dynamics... In fact fluid dynamics isn't directly used that much in oil/gas exploration. The closest thing to CFD is reservoir simulation and those are mostly difussive equations. Once on a lark I tried to calculate the effective reynolds number of an oil reservoir and the numbers were really, really tiny. What I did when I worked in oil/gas was data processing software for well logs. What I ended up doing was mostly algebra. There is a lot of sophisticated physics "under the covers" which was then packaged for use by people with middle educations. What someone (who I never met and might have done the work in the 1970's) did was to calculate things like neutron diffusion and then created "graphical charts" which someone with a middle school education could use to do calculations. It's really cool because someone in the 1950's figured out a way for someone with no knowledge of algebra to do PDE computations. A lot of what I did was then to take those charts digitize them and have it so that the computer could do chart lookups. Of course, the logical thing would be to have the computers run the actual equations, but that would have been too logical. P: 12 This is a good time for a young man to join the industry.Oil and gas pays BIG ,,,,the standard has been raised because of BP and the gulf thing. ------------- All I ever did was drill,,,,I was a driller for 9 years,,good enough money that I could afford to save most of it,married a woman from Wisconsin,,with same mindset. At 65 Life is Good. a joe in Texas HW Helper P: 2,277 Quote by twofish-quant What someone (who I never met and might have done the work in the 1970's) did was to calculate things like neutron diffusion and then created "graphical charts" which someone with a middle school education could use to do calculations. 
It's really cool because someone in the 1950's figured out a way for someone with no knowledge of algebra to do PDE computations. A lot of what I did was then to take those charts digitize them and have it so that the computer could do chart lookups. Of course, the logical thing would be to have the computers run the actual equations, but that would have been too logical. Yes.... Standardized engineering is pretty much following those steps at least in some fields like Civil. Someone solved the problem before and created a guide that you must follow. This stifles creativity, but standardizing is a way to make sure that everything works "fine". That's one of the reasons I left engineering. P: 40 Quote by twofish-quant In fact fluid dynamics isn't directly used that much in oil/gas exploration. The closest thing to CFD is reservoir simulation and those are mostly difussive equations. Once on a lark I tried to calculate the effective reynolds number of an oil reservoir and the numbers were really, really tiny. What I did when I worked in oil/gas was data processing software for well logs. What I ended up doing was mostly algebra. There is a lot of sophisticated physics "under the covers" which was then packaged for use by people with middle educations. What someone (who I never met and might have done the work in the 1970's) did was to calculate things like neutron diffusion and then created "graphical charts" which someone with a middle school education could use to do calculations. It's really cool because someone in the 1950's figured out a way for someone with no knowledge of algebra to do PDE computations. A lot of what I did was then to take those charts digitize them and have it so that the computer could do chart lookups. Of course, the logical thing would be to have the computers run the actual equations, but that would have been too logical. Then what skills do we need? What should we study? Have your read the tale of John Henry? He tried to compete with a steam powered hammer. Don't try to beat a machine. Don't try to be like the computer. P: 50 Just as an update, I sent out another 100 apps in July alone. Got a few first round phone interviews, a call or two from managers regarding unposted jobs, but nothing beyond that. It seems like every time I feel like I may be getting something it doesn't go through. I don't feel anything when I get the standard no-reply or rejection email anymore, but I just wish I'd get called back just to hear "oh yeah, sorry, we're going with someone else." I know that a lot of these companies move VERY slow, and a lot more just don't feel a need to call anyone back. I just wish I could do something of use. :( The economy getting worse and the cuts to the defense industry are not helping my optimism. P: 551 I've only just seen this thread. Good luck to you, Astro_Dude. I know how you're feeling; I'm experiencing the "end of my rope" feeling as well. What kinds of things are you applying for? Looks like I'll need to do 100 app months as soon as I'm done with my PhD. Wow . P: 1 I am in the same boat as you astro_dude except I have an EE degree. However, like you, my dissertation topic is not readily transferable to industry so I am also having a hard time finding jobs. I was "lucky" to land a part-time gig as lecturer at a university for$2,000 a month which is a bit higher than my monthly stipend as a grad student researcher. It's not your fault that you are having a hard time finding a job. 
The economy is just terrible and is probably the worst for new grads since the 1930s.If Ivy League grads are working minimum wage jobs for \$10 an hour after earning a bachelors degree and UC grads are thankful to be working as cashiers at Home Depot after finishing an undergraduate degree, you can only conclude that the job market is going to be hard for everyone - new grads and older workers who have just been unemployed. The best tip is this. The best way to get a job in this horrible economy is through networking and connections. Do your academic or thesis advisers or someone at your alma mater colleges have connections to industry, research, or government? Maybe through them, you can find a job. If you reconnect with them, they can put in a good word for you with their contacts and then you can pretty much skip the resume & phone interview crap and go straight to an in-person interview. That might be the best way to go. Otherwise, if you have any grad school classmates who were able to find employment, it might be helpful to check with them also to see if they can get you in through the back door. In this economy, the easiest way to get a job is through the "back door" via networking and connections. Otherwise, going through the formal procedures of resume, phone interview, in-person interview, and pray that the company hires you is really difficult. It's not going to help that the economy might go into a double dip recession. Anyway, best of luck to you! I hope something works out! P: 184 This thread is making me second guess my desire to get a PhD. P: 6,863 Quote by nickadams This thread is making me second guess my desire to get a PhD. There are some good things about getting a physics Ph.D. This discussion is really useful because it helps people make informed decisions and know what they are in for, but part of the reason that I encourage discussions like these is because I think that society would be better off with more physics Ph.D.'s. 1) Remember that the job market is bad for everyone. 2) Getting a Ph.D. gets you out of the market for about seven years, and lets you reroll the dice. Hopefully the US economy will recover in a few years, but if it doesn't then you are probably in trouble no matter what you do. 3) You leave the Ph.D. without much debt. Yes it is a bad thing, to get your Ph.D. and then work as a bartender, but you have to realize that this puts you in a *much* better position than people that went to med school or law school. In the worst case scenario, you get some job that keeps you from starving and wait for things to improve. People that went to law and med school now have massive debt that *cannot be discharged by bankruptcy*. Interest payments are building up, so even if they economy improves in two years, they are totally hosed. 4) Is it better to have loved and lost than to have never loved at all? The biggest regret that physics Ph.D.'s have is that they aren't going to be able to do science for their entire lives. However, most people can't do that, and for me at least, I think it is better that I spent ten years doing astrophysics research (with the possibility that I'll be able to do it again in a few years) than to have never done it at all. 5) Finally, if everything does blow up, a Ph.D. will get you in front of the queue if you have to immigrate to another country. Remember that a lot of scientists ran to the US in order to escape extreme hardship, and if things get really, really bad, a Ph.D. 
will help you get out of the US to somewhere that the grass is greener. P: 148 I have found that sending resumes at random is highly ineffective. I have been in working in industry for 15 years and have only had that work twice and one of those times was during the dot com boom when they were taking anyone with a pulse. I graduated with a BS Physics in 95 and have been working as a programmer since. I've worked for all kinds of companies from tiny mom and pop shop that needed someone to do networking and some data analysis, to oil and gas, large law firm (~2k employees), dot com, major financial company and government contractor. The most effective methods for job hunting that I have found are (in decreasing order of effectiveness): 1. Networking. This is by far the most effective way. If you know someone in the company you want to work for who can place your resume on the desk of a technical manager who is looking to hire, you have cleared the number one hurdle that trips up every job hunter. If you don't know anyone at that company, your job is to get to know somebody there. You can ask friends and family, neighbors, professors, your pastor and any person you come in contact with for more than 5 minutes if they know someone there. LinkedIn can be invaluable in this way (I know someone who got hired through contacts they made on LinkedIn. I also got someone an interview because he found out that I was linked to someone who worked at a company he was interested in. I made the introductions, his resume was placed on the tech manager's desk and he got the interview. He did not get the job but I can't do everything :) Of course everyone tells you to network but if you've spent the last 8 years or so buried in books, you probably haven't built up a particularly robust network. You can start by going to industry functions, chamber of commerce events, local speakers from the industry you are interested in and even enrolling in some classes where people of that ilk are bound to be found. For example, you can audit a financial derivatives class at your local MBA mill. The thing is, you are not going to find those people sitting at home and sending resumes into the wild (more on this later). One thing you can do is call up people in the industry and ask if you can do informational interview. If someone calls me and I'm not under pressure to give them a job, I'm more than willing to give them advice on the industry. Just make sure you don't call them on Monday morning when they're trying to get caught up on all the crap they were supposed to do over the weekend. People will usually give you pointers. At they very worst, they will hang up on you - you have nothing to lose. 2. Head hunters. They have a bad reputation (some deservedly so) but the good ones have contacts in their respective industries that keep them informed. The really good ones have top level contacts (There was one particular head hunter who was rumored to have been romantically involved with a married director of the financial firm I was working at. That is contacts!). Your job is to find such headhunters - which is much easier than finding those elusive jobs. You have to make sure that whoever you get specializes in the industry you are interested in. It does you no good to go to a recruiter who works with oil and gas if you're interested in finance (unless it was energy trading) and vice versa. 3. Your school's career center. 
Depending on where you are located and what kind of jobs you are looking for, this may or may not be effective. For example, if you are in Arkansas and you want to work in Quantitative Finance, they probably won't be able to hook you up. But if you are in Texas and want to work in the Oil and Gas industry, they may have something for you. Companies routinely go to schools for recruiting events. They're probably looking for people with BS and MBA degrees but all you need is a chance to talk to the person the company sends. He or she may pass on your resume if you look promising. 4. Job fairs. You probably won't get a job out of these unless it's something like Walmart is in town and wants 200 people. If it is in your industry though, it is worthwhile to go and spend as much time as possible networking, ie. doing all those things that techies normally hate like accosting random people, introducing yourself, asking them what they do (even better if you know what they do - do your research beforehand) and then pumping them for information. Take their business cards. You will need it later when you call them up two weeks later, introduce yourself and ask if you can do an "informational interview". 5. Sending out resumes blindly. This is the least effective way to get a job. Yes, if you do it long enough and you send out enough resumes, you may get something. But are you willing to do this for a year and send out thousands of resumes? Especially when there are other more effective, if less comfortable, ways to get jobs? The thing is, while you are sending those resumes you feel like you are doing something. You can tell yourself at the end of a grueling day of sending resumes and cold calling (you are doing that, right?) that you are searching for a job. Truth is, you are doing the most comfortable thing for you to get a job. If you really want a job, you have to get out of your comfort zone and try the other things I listed. It took me a while to get this but once I did, I never really had a problem finding a job. When you send a resume blindly, it will land on some HR flunky's email inbox (if you're lucky). Usually, it will go to an automated inbox which scans for keywords and throws out those that don't have them. If by chance you get through to a human, most likely it will be someone in HR who has no clue what half the things on your resume are (I know this because I used to do programming for PeopleSoft and worked with HR people. They were really nice ladies but their priorities are learning new rules and policy changes, dealing with things like sexual harrassment and discrimination, etc. and not learning the latest hot programming language). Trust me, HR is your enemy. Repeat this until it sinks in. Their job is to filter out resumes, not to find that rare gem. One more thing. Read a book called "What Color is your Parachute?". I read it when I first left school and the advice the author gave in that book has been spot on. It is updated every couple of years so you should get the latest one. If there is one book you should read on job hunting, this is it. P: 13 Unfortunately its ture that people who has astro or cosmos diploma cant find a job in the industry area easily. I know someone who have similar situation to you. But it isnt so hard to do postdoc for them. I live in France. Here, PHD has salary as a normal employee . The salary is +/-2000euro/months . You can try to come to Europ if you dont mind living in another country. Maybe its easier to get a job. Best luck for you!! 
P: 50 Ugh. I've been completely slowed down due to this job I had to take to keep my head above water as I'm looking. Long days, and I'm absolutely wiped most every night. Quote by jk I have found that sending resumes at random is highly ineffective. I have been in working in industry for 15 years and have only had that work twice and one of those times was during the dot com boom when they were taking anyone with a pulse. I graduated with a BS Physics in 95 and have been working as a programmer since. I've worked for all kinds of companies from tiny mom and pop shop that needed someone to do networking and some data analysis, to oil and gas, large law firm (~2k employees), dot com, major financial company and government contractor. The most effective methods for job hunting that I have found are (in decreasing order of effectiveness): 1. Networking. This is by far the most effective way. If you know someone in the company you want to work for who can place your resume on the desk of a technical manager who is looking to hire, you have cleared the number one hurdle that trips up every job hunter. If you don't know anyone at that company, your job is to get to know somebody there. You can ask friends and family, neighbors, professors, your pastor and any person you come in contact with for more than 5 minutes if they know someone there. LinkedIn can be invaluable in this way (I know someone who got hired through contacts they made on LinkedIn. I also got someone an interview because he found out that I was linked to someone who worked at a company he was interested in. I made the introductions, his resume was placed on the tech manager's desk and he got the interview. He did not get the job but I can't do everything :) Of course everyone tells you to network but if you've spent the last 8 years or so buried in books, you probably haven't built up a particularly robust network. You can start by going to industry functions, chamber of commerce events, local speakers from the industry you are interested in and even enrolling in some classes where people of that ilk are bound to be found. For example, you can audit a financial derivatives class at your local MBA mill. The thing is, you are not going to find those people sitting at home and sending resumes into the wild (more on this later). One thing you can do is call up people in the industry and ask if you can do informational interview. If someone calls me and I'm not under pressure to give them a job, I'm more than willing to give them advice on the industry. Just make sure you don't call them on Monday morning when they're trying to get caught up on all the crap they were supposed to do over the weekend. People will usually give you pointers. At they very worst, they will hang up on you - you have nothing to lose. You are correct, I don't have the best of networks. I do, however, have good friends at very many defense contractors. I've had them suggest me for jobs, I've had some that are the heads of entire divisions send my resume out to their people, I've had others directly talk to their boss about how I would be great for some position in their own group. None of this has worked. I keep hearing people talk about the magic of networking, but when you have friends who directly know people making the decisions and you can't get hired... Anyway, yes. Everyone knows this is the way to network, but most people don't WANT to network with a physics person. 99% of the people you meet don't know what to do with you. 
I also despise companies who are claiming to hire people but aren't. Stop bleeping lying, and wasting everyone's time. Quote by jk 2. Head hunters. They have a bad reputation (some deservedly so) but the good ones have contacts in their respective industries that keep them informed. The really good ones have top level contacts (There was one particular head hunter who was rumored to have been romantically involved with a married director of the financial firm I was working at. That is contacts!). Your job is to find such headhunters - which is much easier than finding those elusive jobs. You have to make sure that whoever you get specializes in the industry you are interested in. It does you no good to go to a recruiter who works with oil and gas if you're interested in finance (unless it was energy trading) and vice versa. This is MUCH easier said than done. Quote by jk 3. Your school's career center. Depending on where you are located and what kind of jobs you are looking for, this may or may not be effective. For example, if you are in Arkansas and you want to work in Quantitative Finance, they probably won't be able to hook you up. But if you are in Texas and want to work in the Oil and Gas industry, they may have something for you. Companies routinely go to schools for recruiting events. They're probably looking for people with BS and MBA degrees but all you need is a chance to talk to the person the company sends. He or she may pass on your resume if you look promising. 4. Job fairs. You probably won't get a job out of these unless it's something like Walmart is in town and wants 200 people. If it is in your industry though, it is worthwhile to go and spend as much time as possible networking, ie. doing all those things that techies normally hate like accosting random people, introducing yourself, asking them what they do (even better if you know what they do - do your research beforehand) and then pumping them for information. Take their business cards. You will need it later when you call them up two weeks later, introduce yourself and ask if you can do an "informational interview". These are one in the same and the problem is companies don't actually care. They're purposely not sending anyone worth networking with to these things. They send college-age kids who are usually one or two years out of their BE. 99% of the time all they have to say is how much fun they're having and to "use the website". It's almost never worth going to job fairs. I've never once met anyone who is worth "networking with" or is even interested in networking. Maybe this was different when you were looking for work. Most companies just see job fairs as a way of reminding those kids who did co-ops that they have a job waiting for them. Quote by jk 5. Sending out resumes blindly. This is the least effective way to get a job. Yes, if you do it long enough and you send out enough resumes, you may get something. But are you willing to do this for a year and send out thousands of resumes? Especially when there are other more effective, if less comfortable, ways to get jobs? The thing is, while you are sending those resumes you feel like you are doing something. You can tell yourself at the end of a grueling day of sending resumes and cold calling (you are doing that, right?) that you are searching for a job. Truth is, you are doing the most comfortable thing for you to get a job. If you really want a job, you have to get out of your comfort zone and try the other things I listed. 
It took me a while to get this but once I did, I never really had a problem finding a job. I've never, not once gotten a response back from a cold call. I always get a voice mail, and never, ever, get a call back. It's like when you pull a hot chick's number and she has no intention of actually picking up! :p Yes, this is the worst possible way, but when the system is DESIGNED to screw anyone qualified, it's usually the ONLY way. Quote by jk When you send a resume blindly, it will land on some HR flunky's email inbox (if you're lucky). Usually, it will go to an automated inbox which scans for keywords and throws out those that don't have them. If by chance you get through to a human, most likely it will be someone in HR who has no clue what half the things on your resume are (I know this because I used to do programming for PeopleSoft and worked with HR people. They were really nice ladies but their priorities are learning new rules and policy changes, dealing with things like sexual harrassment and discrimination, etc. and not learning the latest hot programming language). Trust me, HR is your enemy. Repeat this until it sinks in. Their job is to filter out resumes, not to find that rare gem. One more thing. Read a book called "What Color is your Parachute?". I read it when I first left school and the advice the author gave in that book has been spot on. It is updated every couple of years so you should get the latest one. If there is one book you should read on job hunting, this is it. HR is the enemy, I know. However, there is little hope for me elsewhere since literally all my professors and colleagues have been career academics. I bleeping hate academia, and my contacts in industry, helpful as they have been, have not yielded results. P: 6,863 Quote by jk 2. Head hunters. They have a bad reputation (some deservedly so) but the good ones have contacts in their respective industries that keep them informed. The really good ones have top level contacts (There was one particular head hunter who was rumored to have been romantically involved with a married director of the financial firm I was working at. That is contacts!). Your job is to find such headhunters - which is much easier than finding those elusive jobs. For physics Ph.D's, you can find headhunters at www.dice.com, www.efinancialcareers.com, www.phds.org, www.wilmott.com. Also *.jobs USENET is also useful. 3. Your school's career center. Depending on where you are located and what kind of jobs you are looking for, this may or may not be effective. For example, if you are in Arkansas and you want to work in Quantitative Finance, they probably won't be able to hook you up. The problem with large schools like UT Austin is that physics Ph.D.'s can use the good career services. UT Austin has very good contacts in the financial industry, but those are in the McCombs Business School for MBA's, and I was told specifically that because I was natural sciences, that I would not be allowed to use MBA career services (I even offered to pay them). Take their business cards. You will need it later when you call them up two weeks later, introduce yourself and ask if you can do an "informational interview". For Ph.D.'s it is extremely useful to go to conferences. Even if you don't get a job, you can get information. They were really nice ladies but their priorities are learning new rules and policy changes, dealing with things like sexual harrassment and discrimination, etc. and not learning the latest hot programming language). 
Trust me, HR is your enemy. Repeat this until it sinks in. Their job is to filter out resumes, not to find that rare gem. One thing that I learned is don't consider people enemies. HR people have a job to do. Their job is to get rid of you. Also, one thing that helps a lot for Ph.D.'s is to write a resume that confuses HR. If an HR person sees that you have a Ph.D. and has no clue what you did, they might forward your resume to someone that has some clue, at which point you've gotten over the first hurdle. Also, be *VERY* careful when you are interviewed by someone from HR. Their job in the interview is to make you feel warm and comfortable so that you say something about yourself that disqualifies you from the job. Also, be *VERY* careful at assuming roles. Some people that look like stereotypical HR people are actually computer geeks, and some people that look like stereotypical computer geeks are actually HR people. One more thing. Read a book called "What Color is your Parachute?". I read it when I first left school and the advice the author gave in that book has been spot on. It is updated every couple of years so you should get the latest one. If there is one book you should read on job hunting, this is it. I haven't read that book, so I don't know about it, but I've found that other books about resume writing and job searching often get it wrong. For example, a lot of books say that you should write your resume so that the reader will understand what you did, but if you are a Ph.D. looking for a Ph.D. position, you should write your resume so that the average person *doesn't* have much of a clue what you did. P: 148 Quote by Astro_Dude You are correct, I don't have the best of networks. I do, however, have good friends at very many defense contractors. I've had them suggest me for jobs, I've had some that are the heads of entire divisions send my resume out to their people, I've had others directly talk to their boss about how I would be great for some position in their own group. None of this has worked. I keep hearing people talk about the magic of networking, but when you have friends who directly know people making the decisions and you can't get hired... There is no magic in job searches. Networking is work and it is not guaranteed to produce results all the time. But it is the best method that I know of. What was the feedback you got from the jobs you were rejected for? Did you get any? Also, can you post your resume (after removing the personal info) here so we can give you feedback? Anyway, yes. Everyone knows this is the way to network, but most people don't WANT to network with a physics person. 99% of the people you meet don't know what to do with you. This is not true. Most people don't give a flip what you studied if they think you can do stuff for them. That is all that matters in the corporate world. I also despise companies who are claiming to hire people but aren't. Stop bleeping lying, and wasting everyone's time. Strange as this advice may sound, don't take it so personal when you get rejected. You will drive yourself crazy. You need to develop a thicker skin or you won't last long These are one in the same and the problem is companies don't actually care. They're purposely not sending anyone worth networking with to these things. They send college-age kids who are usually one or two years out of their BE. 99% of the time all they have to say is how much fun they're having and to "use the website". It's almost never worth going to job fairs. 
I've never once met anyone who is worth "networking with" or is even interested in networking. It is true that companies don't care but that is not relevant for your purposes. This is a commercial transaction. Your job is to convince the recruiter that by passing on your resume to his/her boss, they are doing something to help themselves. Of course, they don't care about you - they don't know you. Try this next time you run into those "college-age kids"...instead of deciding that they are too low level to do anything for you, try to chat them up about the company in general. Don't tell them that you would like to work for the company. Tell them that you are looking around and trying to find one that you like. You don't want to give the impression of desperation, even if you are desperate. It's a funny thing about people that if they think you want to join their group badly (whatever their group is), they will be standoffish. But if you act as if you have options and are just being choosy, they will consider you more seriously. Maybe this was different when you were looking for work. Most companies just see job fairs as a way of reminding those kids who did co-ops that they have a job waiting for them. I don't think things have changed. For one thing, just because I got in the market 15 years ago doesn't mean I had never to look for work after that. The last time I got a job offer was in the middle of the financial crash when everyone was thinking the world was coming to an end. Of course, I have experience so that makes it a bit easier for me. But it is a question of degree and not a qualitative difference. I've never, not once gotten a response back from a cold call. I always get a voice mail, and never, ever, get a call back. It's like when you pull a hot chick's number and she has no intention of actually picking up! :p I agree cold calls are not very effective. That is why you should network and be introduced to the person you are calling. I am more likely to return a call if the person who is calling me was referred to me by someone I know and trust. Are you on LinkedIn? Yes, this is the worst possible way, but when the system is DESIGNED to screw anyone qualified, it's usually the ONLY way. First of all, no one knows if you are qualified. A PhD is not a guarantee of qualification - it just means you were able to go through a few years of focused work in one very narrow area. That may or may not translate into productivity once you are at job. That is the only metric that counts for a manager. When I used to interview applicants, I noticed that there was very little correlation between advanced degrees and someone's performance. In fact, I had one PhD working for me that was ok but was not as good as this kid who was 6 months out of college with a BS. The system is not designed to screw anyone. I think you need to step back for a minute and view this whole job search in a more dispassionate light. No one is out to get you. But no one is going to bend over backwards for you either. What you have to do is view this as a puzzle without getting emotional about it. HR is the enemy, I know. However, there is little hope for me elsewhere since literally all my professors and colleagues have been career academics. I bleeping hate academia, and my contacts in industry, helpful as they have been, have not yielded results. If you realize that HR is not going to help you, then the corollary is that you have to look elsewhere for help. 
If your professors are of no help, then you need to plug into a new network. Have you done any of the things I suggested earlier (like talk to people at industry conferences, go to chamber of commerce events, etc)? P: 148 The problem with large schools like UT Austin is that physics Ph.D.'s can use the good career services. UT Austin has very good contacts in the financial industry, but those are in the McCombs Business School for MBA's, and I was told specifically that because I was natural sciences, that I would not be allowed to use MBA career services (I even offered to pay them). I think you meant "physics PhD's can't" use the good career services. Yeah, MBA schools can be a bit territorial but there are ways around that. Audit an MBA class and network with some of the students. Then ask them to get you information from the career services (like which companies are hiring, when they are coming to campus etc and also access to the job listings). One thing that I learned is don't consider people enemies. HR people have a job to do. Their job is to get rid of you. Also, one thing that helps a lot for Ph.D.'s is to write a resume that confuses HR. If an HR person sees that you have a Ph.D. and has no clue what you did, they might forward your resume to someone that has some clue, at which point you've gotten over the first hurdle. Of course, they are not literal enemies. But people let the HR job description fool them into thinking that HR is there to facilitate job applicants' access to information. I haven't read that book, so I don't know about it, but I've found that other books about resume writing and job searching often get it wrong. For example, a lot of books say that you should write your resume so that the reader will understand what you did, but if you are a Ph.D. looking for a Ph.D. position, you should write your resume so that the average person *doesn't* have much of a clue what you did. I have read a lot of job search books as well and this one is the one I found the most useful. It does a good job of breaking the illusion that mass mailing of resumes is effective. P: 685 What was the feedback you got from the jobs you were rejected for? Did you get any? The feedback I get is consistently that other candidates had more experience doing X (where X is some technical technique/skill that is needed for the job) than I did. Generally, this is no doubt true, because odds are I self taught whatever I thought I needed as I was applying for the job. (My theory phd didn't give me much in the way of what industry might want). This is starting to make me worried that engineering/science industry jobs are NOT what I should be applying for (despite being what I would like to do, and despite having a physics phd), because they seem to care more about experience with some technique than a broad background/trainable.
# Sodium content as a predictor of the advanced evolution of globular cluster stars ## Abstract The asymptotic giant branch (AGB) phase is the final stage of nuclear burning for low-mass stars. Although Milky Way globular clusters are now known to harbour (at least) two generations of stars1,2, they still provide relatively homogeneous samples of stars that are used to constrain stellar evolution theory3,4,5. It is predicted by stellar models that the majority of cluster stars with masses around the current turn-off mass (that is, the mass of the stars that are currently leaving the main sequence phase) will evolve through the AGB phase6,7. Here we report that all of the second-generation stars in the globular cluster NGC 6752—70 per cent of the cluster population—fail to reach the AGB phase. Through spectroscopic abundance measurements, we found that every AGB star in our sample has a low sodium abundance, indicating that they are exclusively first-generation stars. This implies that many clusters cannot reliably be used for star counts to test stellar evolution timescales if the AGB population is included. We have no clear explanation for this observation. ## Access options from$8.99 All prices are NET prices. ## References 1. 1 Carretta, E., Bragaglia, A., Gratton, R. & Lucatello, S. Na-O anticorrelation and HB. VIII. Proton-capture elements and metallicities in 17 globular clusters from UVES spectra. Astron. Astrophys. 505, 139–155 (2009) 2. 2 Gratton, R. G., Carretta, E. & Bragaglia, A. Multiple populations in globular clusters. Lessons learned from the Milky Way globular clusters. Astron. Astrophys. Rev. 20, 50 (2012) 3. 3 Iben, I. & Rood, R. T. Ratio of horizontal branch stars to red giant stars in globular clusters. Nature 224, 1006–1008 (1969) 4. 4 Buonanno, R., Corsi, C. E. & Fusi Pecci, F. The giant, asymptotic, and horizontal branches of globular clusters. II — Photographic photometry of the metal-poor clusters M15, M92, and NGC 5466. Astron. Astrophys. 145, 97–117 (1985) 5. 5 Renzini, A. & Fusi Pecci, F. Tests of evolutionary sequences using color-magnitude diagrams of globular clusters. Annu. Rev. Astron. Astrophys. 26, 199–244 (1988) 6. 6 Kippenhahn, R. & Weigert, A. Stellar Structure and Evolution (Springer, 1990) 7. 7 Landsman, W. B. et al. Ultraviolet imagery of NGC 6752: a test of extreme horizontal branch models. Astrophys. J. 472, L93–L96 (1996) 8. 8 Carretta, E., Bragaglia, A., Gratton, R. G., Lucatello, S. & Momany, Y. Na-O anticorrelation and horizontal branches. II. The Na-O anticorrelation in the globular cluster NGC 6752. Astron. Astrophys. 464, 927–937 (2007) 9. 9 Norris, J., Cottrell, P. L., Freeman, K. C. & Da Costa, G. S. The abundance spread in the giants of NGC 6752. Astrophys. J. 244, 205–220 (1981) 10. 10 Campbell, S. W. et al. The case of the disappearing CN-strong AGB stars in Galactic globular clusters — preliminary results. Mem. Soc. Astron. Ital. 81, 1004 (2010) 11. 11 Yong, D., Grundahl, F., Johnson, J. A. & Asplund, M. Nitrogen abundances in giant stars of the globular cluster NGC 6752. Astrophys. J. 684, 1159–1169 (2008) 12. 12 Smith, G. H. & Tout, C. A. The production of surface carbon depletions among globular cluster giants by interior mixing. Mon. Not. R. Astron. Soc. 256, 449–456 (1992) 13. 13 Boothroyd, A. I., Sackmann, I.-J. & Ahern, S. C. Prevention of high-luminosity carbon stars by hot bottom burning. Astrophys. J. 416, 762–768 (1993) 14. 14 Sandquist, E. L. & Bolte, M. 
Exploring the upper red giant and asymptotic giant branches: the globular cluster M5. Astrophys. J. 611, 323–337 (2004) 15. 15 Cassisi, S., Salaris, M. & Irwin, A. W. The initial helium content of galactic globular cluster stars from the R-parameter: comparison with the cosmic microwave background constraint. Astrophys. J. 588, 862–870 (2003) 16. 16 Cassisi, S. & Castellani, V. Degl’Innocenti, S. Piotto, G. & Salaris, M. asymptotic giant branch predictions: theoretical uncertainties. Astron. Astrophys. 366, 578–584 (2001) 17. 17 Villanova, S., Piotto, G. & Gratton, R. G. The helium content of globular clusters: light element abundance correlations and HB morphology. I. NGC 6752. Astron. Astrophys. 499, 755–763 (2009) 18. 18 Grundahl, F., Catelan, M., Landsman, W. B., Stetson, P. B. & Andersen, M. I. Hot horizontal-branch stars: the ubiquitous nature of the “jump” in Strömgren u, low gravities, and the role of radiative levitation of metals. Astrophys. J. 524, 242–261 (1999) 19. 19 Momany, Y. et al. A new feature along the extended blue horizontal branch of NGC 6752. Astrophys. J. 576, L65–L68 (2002) 20. 20 Sneden, C. A. Carbon and Nitrogen Abundances in Metal-Poor Stars. Ph.D. thesis, Univ. Texas at Austin. (1973) 21. 21 Kurucz, R. ATLAS9 Stellar Atmosphere Programs and 2 km/s grid. (CD-ROM no. 13, Smithsonian Astrophysical Observatory, 1993) 22. 22 Alonso, A., Arribas, S. & Martínez-Roger, C. The effective temperature scale of giant stars (F0–K5). II. Empirical calibration of Teff versus colours and [Fe/H]. Astron. Astrophys. 140, 261–277 (1999) 23. 23 Gratton, R. G., Carretta, E. & Castelli, F. Abundances of light elements in metal-poor stars. I. Atmospheric parameters and a new Teff scale. Astron. Astrophys. 314, 191–203 (1996) 24. 24 Gratton, R. G., Carretta, E., Eriksson, K. & Gustafsson, B. Abundances of light elements in metal-poor stars. II. Non-LTE abundance corrections. Astron. Astrophys. 350, 955–969 (1999) 25. 25 Campbell, S. W. & Lattanzio, J. C. Evolution and nucleosynthesis of extremely metal-poor and metal-free low- and intermediate-mass stars. I. Stellar yield tables and the CEMPs. Astron. Astrophys. 490, 769–776 (2008) 26. 26 Marigo, P. & Aringer, B. Low-temperature gas opacity. ÆSOPUS: a versatile and quick computational tool. Astron. Astrophys. 508, 1539–1569 (2009) 27. 27 Reimers, D. Circumstellar absorption lines and mass loss from red giants. Mem. Soc. R. Sci. Liege 8, 369–382 (1975) 28. 28 Clem, J. L., VandenBerg, D. A., Grundahl, F. & Bell, R. A. Empirically constrained color-temperature relations. II. uvby. Astron. J. 127, 1227–1256 (2004) Download references ## Acknowledgements We thank Y. Momany of the European Southern Observatory (ESO, Chile) for providing his UBV photometric data set, which is mentioned in Supplementary Information section 2. S.W.C. acknowledges support from the Australian Research Council’s Discovery Projects funding scheme (project DP1095368). R.J.S. is the recipient of a Sofja Kovalevskaja Award from the Alexander von Humboldt Foundation. F.G. acknowledges funding for the Stellar Astrophysics Centre provided by The Danish National Research Foundation. The research was supported by the ASTERISK project funded by the European Research Council (grant agreement no. 267864). This work was based on observations made with ESO telescopes at the La Silla Paranal Observatory under programme ID 089.D-0038 (principal investigator S.W.C.) and made extensive use of the SIMBAD, Vizier, 2MASS and NASA ADS databases. 
## Author information

### Contributions

S.W.C. designed and prepared the ESO Very Large Telescope (VLT) observing proposal, collected the spectroscopic data, and prepared the paper. V.D. reduced and analysed the spectroscopic data, and prepared the paper. D.Y. designed and prepared the ESO/VLT observing proposal and assisted in the paper preparation. T.N.C. calculated the stellar models and prepared figures for the paper. J.C.L. assisted in the preparation of the observing proposal and with the paper preparation. R.J.S., G.C.A. and E.C.W. assisted in the paper preparation and made preliminary observations with the Anglo-Australian Telescope. F.G. provided the uvby photometric data for the AGB and red giant branch sample and assisted in the paper preparation.

### Corresponding author

Correspondence to Simon W. Campbell.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Supplementary information

### Supplementary Information

This file contains Supplementary Table 1, Supplementary Discussion and Supplementary References. (PDF 184 kb)

## About this article

### Cite this article

Campbell, S., D’Orazi, V., Yong, D. et al. Sodium content as a predictor of the advanced evolution of globular cluster stars. Nature 498, 198–200 (2013). https://doi.org/10.1038/nature12191
# Is the char-CNN-RNN text encoder the encoder part of an auto-encoder?

The char-CNN-RNN encoder is a relatively popular text encoder. It was proposed in the paper titled Learning Deep Representations of Fine-Grained Visual Descriptions by Scott Reed et al. Is it the encoder part of an autoencoder, or a stand-alone neural network?
# Research The overall themes of my research are supercooled liquids and the glass transition. A brief introduction to these topics is given below. After the introduction, I will explain additional research topics and provide a few relevant papers dealing with the specific topic. The articles can be accessed by clicking on the associated links. Comments are always welcome. • ## Supercooled liquids and glasses A liquid cooled below the melting line without crystallizing is called a supercooled liquid. Most liquids can be supercooled and eventually form a glass when sufficiently supercooled. Upon approaching this glass transition, supercooled liquids display dramatic changes in, e.g., their viscosity as is illustrated in Fig. 1. This observation remains an unresolved mystery of the glass transition, also called the non-Arrhenius problem, and is a field of intensive research. • ## Excess-entropy scaling The excess entropy $$S_{\rm ex} = S - S_{\rm id}$$ is the entropy minus the ideal gas contribution at the same density and temperature. Physically, it quantifies the number of available states of the liquid relative to that of an ideal gas. The excess entropy was proposed by Rosenfeld in 1977 to correlate to dimensionless transport coefficients, i.e., $$\widetilde{X} = f(S_{\rm ex})$$. Rosenfeld found for single-component atomic liquids a quasiuniversal relation which enables prediction of unknown transport coefficients if the excess entropy is known. Since then excess-entropy scaling has been the subject of intensive research. An example of a quasiuniversal relation is illustrated in Fig. 2 for binary mixtures (Ref. 1) where the excess entropy collapses the data to an almost universal curve. A similar universality even works in nanoconfinement which has very different behavior from bulk liquids (Ref. 2). ### Relevant papers: 1. Excess-entropy scaling in supercooled binary mixtures I. H. Bell, J. C. Dyre, and T. S. Ingebrigtsen, Nat. Commun. 11, 4300 (2020) 2. Predicting how nanoconfinement changes the relaxation time of a supercooled liquid T. S. Ingebrigtsen, J. R. Errington, T. M. Truskett, and J. C. Dyre, Phys. Rev. Lett. 111, 235901 (2013) • ## Shear thinning Shear thinning is a phenomenon which occurs when you force a liquid to flow. Initially, the viscosity remains constant but when the driving force becomes large enough the viscosity starts to decrease. This property is used when you paint a house, or when you apply toothpaste in the morning. Nevertheless, the microscopic origin behind shear thinning remains to date unclear. Ref. 1 details that the structural changes in the extentional direction under Couette shear flow (π/4 radians), captured in the form of the orientation-dependent two-body entropy $$s_{2}^{\theta}$$, correlates very well to shear thinning. This can be seen in Fig. 3D where the sheared dynamics maps onto the equilibrium dynamics even in the highly nonlinear region using this new definition of $$s_{2}$$ when $$\theta = \pi/4$$. ### Relevant papers: 1. Structural predictor for nonlinear sheared dynamics in simple glass-forming liquids T. S. Ingebrigtsen, and H. Tanaka, Proc. Natl. Acad. Sci. U.S.A. 115, 87 (2018) • ## Crystallization Crystallization is the process where a small solid nucleus is formed inside a liquid. This nucleus then grows and turns the liquid into a crystal. Everyday crystallization occurs when water turns into ice below zero degrees. 
Sometimes one would like to avoid crystallization to obtain, e.g., the glass state (see introduction) which can have desirable properties distinct from the crystal. Ref. 1 studied crystallization in supercooled binary mixtures. A basic mechanism to crystallization was always found to be present in mixtures where spontaneous concentration fluctuations initiate crystallization. These fluctuations make the full utilization of, e.g., metallic glasses rather difficult. ### Relevant papers: 1. Crystallization instability in glass-forming mixtures T. S. Ingebrigtsen, J. C. Dyre, T. B. Schrøder, and C. P. Royall, Phys. Rev. X 9, 031016 (2019) • ## Roskilde-simple liquids Roskilde-simple (RS) liquids are liquids with strong correlations between the virial-potential energy fluctuations in the constant volume ensemble. This class of liquids was discovered in the ''Glass and Time'' group at Roskilde University in 2007. Van the Waals and metallic liquids are RS liquids but, e.g., hydrogen-bonding liquids are not RS. The strong UW correlation is illustrated in Fig. 5 (Ref. 2). Roskilde-simple liquids are characterized by having isomorphs in their thermodynamic phase diagram which are invariance curves of structure and dynamics in appropriate dimensionless units. This fact makes these liquids simpler than other types of liquids (Ref. 3). RS liquids are relevant in many different situations, e.g., molecular liquids (Ref. 2), polydisperse systems (Ref. 1), and more. ### Relevant papers: 1. Effect of size polydispersity on the nature of Lennard-Jones liquids T. S. Ingebrigtsen, and H. Tanaka, J. Phys. Chem. B 119, 11052 (2015) 2. Isomorphs in model molecular liquids T. S. Ingebrigtsen, T. B. Schrøder, and J. C. Dyre, J. Phys. Chem. B 116, 1018 (2012) 3. What is a simple liquid? T. S. Ingebrigtsen, T. B. Schrøder, and J. C. Dyre, Phys. Rev. X 2, 011011 (2012) • ## NVU dynamics In molecular dynamics (MD) computer simulations Newton's second law is solved numerically. This keeps the energy E, the number of particles N, and the volume V constant. The dynamics is therefore called NVE dynamics. Over the years many different numerical algorithms for MD have been developed such as constant temperature NVT dynamics. In three papers (Refs. 1-3) we developed a new MD algorithm keeping not the energy E but the potential energy U constant, i.e., NVU dynamics. NVU dynamics is equivalent to Newton's first law on a many dimensional hypersurface. It turns out that NVU dynamics gives almost identical results to Newtonian NVE dynamics (Fig. 6). ### Relevant papers: 1. NVU dynamics. III. Simulating molecules at constant potential energy T. S. Ingebrigtsen, and J. C. Dyre, J. Chem. Phys. 137, 244101 (2012) 2. NVU dynamics. II. Comparing to four other dynamics T. S. Ingebrigtsen, S. Toxvaerd, T. B. Schrøder, and J. C. Dyre, J. Chem. Phys. 135, 104102 (2011) 3. NVU dynamics. I. Geodesic motion on the constant-potential-energy hypersurface T. S. Ingebrigtsen, S. Toxvaerd, O. J. Heilmann, T. B. Schrøder, and J. C. Dyre, J. Chem. Phys. 135, 104101 (2011)
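As a small numerical companion to the Roskilde-simple liquids section above, the sketch below computes the virial-potential-energy correlation coefficient R from constant-volume fluctuation time series; a value close to 1 (the literature commonly uses R ≥ 0.9 as the criterion) signals a strongly correlating, Roskilde-simple state point. The function name and the synthetic data are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def uw_correlation(U, W):
    """Virial-potential-energy correlation coefficient R for fixed-volume
    fluctuations; R near 1 indicates a Roskilde-simple state point."""
    dU = U - U.mean()
    dW = W - W.mean()
    return np.mean(dU * dW) / np.sqrt(np.mean(dU**2) * np.mean(dW**2))

# Illustrative use with synthetic, strongly correlated fluctuations
# (in practice U and W would come from an NVT or NVE simulation):
rng = np.random.default_rng(0)
U = rng.normal(size=10_000)
W = 5.5 * U + 0.1 * rng.normal(size=10_000)
print(f"R = {uw_correlation(U, W):.3f}")  # close to 1, well above 0.9
```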
# Your variable annuity charges administrative fees at an annual rate of 0.17 percent of account value....

Your variable annuity charges administrative fees at an annual rate of 0.17 percent of account value. Your average account value during the year is $218,000. What is the administrative fee for the year? (Round your answer to the nearest whole dollar.)

Annual administrative fee: $______

## Answers
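The answer itself is not preserved in this excerpt, but it follows directly from the figures given in the problem:

Administrative fee = 0.17% × $218,000 = 0.0017 × 218,000 = $370.60, which rounds to $371.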
# MultiMeshSubSpace

class dolfin.cpp.function.MultiMeshSubSpace(*args)

This class represents a subspace (component) of a multimesh function space. The subspace is specified by an array of indices. For example, the array [3, 0, 2] specifies subspace 2 of subspace 0 of subspace 3. A typical example is the function space W = V x P for Stokes. Here, V = W[0] is the subspace for the velocity component and P = W[1] is the subspace for the pressure component. Furthermore, W[0][0] = V[0] is the first component of the velocity space etc.

• MultiMeshSubSpace(V, component)
  Create subspace for given component (one level)

• MultiMeshSubSpace(V, component, sub_component)
  Create subspace for given component (two levels)

• MultiMeshSubSpace(V, component)
  Create subspace for given component (n levels)

thisown
  The membership flag
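A minimal usage sketch of the constructors listed above, assuming W is an already-built mixed multimesh function space for a Stokes problem (W = V x P), as in the class description; everything other than the documented constructor calls is an assumption for illustration, not a complete working program.

```python
from dolfin.cpp.function import MultiMeshSubSpace

# Assumption: W is an existing mixed MultiMeshFunctionSpace for Stokes
# (W = V x P), constructed elsewhere in the program.
V  = MultiMeshSubSpace(W, 0)      # velocity subspace (one level)
P  = MultiMeshSubSpace(W, 1)      # pressure subspace (one level)
V0 = MultiMeshSubSpace(W, 0, 0)   # first velocity component (two levels)
```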
# Calculate the initial rate for the formation of C at 25 °C

Calculate the initial rate for the formation of C at 25 °C, given [A] = 2, [B] = 1, and the rate law rate = k[A]^2.

Concepts and reason

Rate: The rate of a chemical reaction is defined as the change in concentration of a substance (reactant or product) per unit time. It is expressed in M/s.

Order: The order of the reaction with respect to a substance is the power to which that substance's concentration is raised in the rate law.

Significant figures: The term significant figures refers to the meaningful digits in a value.

Fundamentals
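Since this excerpt gives the rate law and the initial concentration of A but not a numerical rate constant, only the substitution step can be shown here (the value of k would come from the full problem statement):

rate = k[A]^2 = k × (2)^2 = 4k

so the initial rate is four times whatever rate constant the original problem specifies, in the units carried by k.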
# MTH02 Digital Temperature Sensor

This product can calculate dew point and can be used in applications like automatic control, weather stations and humidity regulators.

SKU: CQY12813165543
Price: $3.74 (old price: $4.20)
Product in stock

This product has a small size and ultra-low power consumption. It is a fully calibrated sensor with digital output and good long-term stability. It can calculate dew point and can be used in applications like automatic control, weather stations and humidity regulators (see the dew-point sketch below).

# Specifications

• Humidity range: 18% to 98% RH
• Temperature range: -40 °C to 70 °C
• Precision: ±3% RH (maximum ±5% RH), ±0.5 °C
• Power supply: 3 V to 5.5 V (minimum 2 V)
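The page states that the sensor can be used to calculate dew point but does not say how. Below is a minimal sketch of the widely used Magnus approximation that one might apply to the sensor's temperature and humidity readings; the constants (a = 17.27, b = 237.7 °C), the function name, and the example readings are illustrative assumptions, not values from the vendor's documentation.

```python
import math

def dew_point_celsius(temp_c, rel_humidity_pct):
    """Approximate dew point (degC) from temperature (degC) and relative
    humidity (%) using one common Magnus parameter set; the constants are
    an assumption here, not taken from the MTH02 datasheet."""
    a, b = 17.27, 237.7
    gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

# Example with readings inside the sensor's specified operating range:
print(round(dew_point_celsius(25.0, 60.0), 1))  # about 16.7 degC
```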
# mech.timestep Syntax f = mech.timestep Get the mechanical timestep. This is the minimum timestep for all bodies or mechanical operations in the model. Returns: f - mechanical timestep
arXiv:1202.1192 IPPP-12-01 DCPT-12-02 CERN-PH-TH-2011-322 MCNET-12-02 Eur.Phys.J. C72 (2012) 2028 by: Zapp, Korinna Christine (Durham U., IPPP) et al. Abstract: It is widely accepted that a phenomenologically viable theory of jet quenching for heavy ion collisions requires the understanding of medium-induced parton energy loss beyond the limit of eikonal kinematics formulated by Baier-Dokshitzer-Mueller-Peigne-Schiff and Zakharov (BDMPS-Z). Here, we supplement a recently developed exact Monte Carlo implementation of the BDMPS-Z formalism with elementary physical requirements including exact energy-momentum conservation, a refined formulation of jet-medium interactions and a treatment of all parton branchings on the same footing. We document the changes induced by these physical requirements and we describe their kinematic origin.
# Exploring Genius – Terence Tao Each Exploring Genius article profiles an accomplished and recognised genius, details parts of their life and career, how they’ve influenced society, and what they’re like as people. The previous entry was on Stanley Kubrick. Genius appears in all fields of human accomplishment so these articles are naturally varied in style, length and approach. Terence Tao works in pioneering-level pure mathematics and I’m about as proficient with mathematics as a salamander, so this entry is coming from a particularly laymen (nay, idiot’s) point of view. It provides a generalised overview of Tao’s life, briefly covers the origins and significance of mathematics for context (which is actually pretty damn interesting), gives rough insight into the significance of his work, explores his giftedness growing up and how it was developed, and ends with an overview of his personality—which is exceptionally kind and humble—and how it all fits together. Introduction The term ‘genius’ is more related to accomplishment than ability, and can be equally applied to painting as it can be to theoretical physics. It has very little to do with IQ (though some take having an IQ above 140 to also qualify a person as a genius). There may be a correlation with IQ scores in many cases, but an IQ score is only indicative of isolated aptitudes (such as memory and logical reasoning). Genius-level accomplishment comes from the interplay between cognitive control and creativity; it’s raw intelligence multiplied by open-minded imagination and wonder. Certain fields display a stronger correlation than others, and from what I can tell it appears strongest in mathematics and physics. The Nobel Prize-winning physicist Richard Feynman is notoriously used as an example of the irrelevancy of IQ testing, with a tested score of only 125 and a clearly genius-level intellect, but closer inspection reveals that to be a likely product of the specific test he took, which was heavily language-focused. IQ tests are largely irrelevant, by Feynman isn’t the best example. The kind of thinking required for mathematics and physics is pure logical reasoning and abstraction, with processing speed, braveness (yep, braveness) and imagination being key bonuses. Terence Tao has a tested IQ score of over 220, and by many accounts demonstrates those attributes better than any mathematician alive today. He’s known as the “Mozart of math” and in the classical sense of the term, he may well be the smartest guy on the planet. What is Mathematics? For a better appreciation of Tao it helps to understand the broader significance of his field, so without deviating too much, here’s a basic rundown: We don’t exactly know when it ended, but there was a time in human history when we had no concept of counting. We intuitively understood the concepts of ‘more’ and ‘less’—generalised quantity—but couldn’t differentiate anything in abstract terms. Seeing two antelope and recognising them as more than one antelope was one thing, recognising their quantity as an abstract concept equally applicable to fingers and days on a calendar—the concept of the number 2—was a quantum leap in human thought. The first person to achieve this may well be the most important genius in our ancestry. But we have no idea who it was, or how it came about. Anthropologists theorise that counting started as the tallying of single units, seen as vertical lines drawn on a wall, and that symbols were eventually incorporated to represent larger groups of tallies. 
In ancient Sumerian culture for example, a small clay cone was used to denote ‘1’, a clay sphere ’10’ and a large clay cone was ’60’. Many different systems of symbols were used across the world before the establishment of 0 – 9, which came out of India after 300BC. The formation of symbols to represent groups of single units created a new dynamic between each symbol, and with each new dynamic came further symbol sub-systems (like algebra) with their own unique interplay, so that complexity grew exponentially from a mathematical big bang—an outward explosion of theory from the use of the first single unit. The philosopher Bertrand Russell makes the case in The Principles of Mathematics (1903; not to be confused with his Principia Mathematica released in 1928) that mathematics and logic are the same thing (or at least, come from the same place), which becomes easier to comprehend when we consider that numbers are only representative—different systems (such as roman numerals and binary) yield different kinds of patterns, puzzles and insights, but all are bound by logic to the parameters of the system they belong to. Whether or not logic and mathematics are considered the same is a matter of definition, but thinking of logic as being fundamental to math at least helps us understand its nature from a deeper perspective and ponder the question: what exactly is mathematics? Is it something we’ve discovered, or is it something we’ve created? I think it makes sense to view logic as a core property of the universe, intrinsic to the way everything exists and functions, and that mathematical theory is a form of logical structuring—an interaction of human concepts with the order of the universe. I have nil expertise and may be way off, but it seems like the 0-9 number system could potentially be replaced by something much more complex; it’s just that it works broadly for our population and is complex enough to describe reality to the level we’re capable of being curious. So is mathematics just a way to describe reality? The physicist Max Tegmark makes the case in his book Our Mathematical Universe that mathematics not only describes reality, but that reality itself is mathematical in nature: “The idea that everything is, in some sense, mathematical goes back at least to the Pythagoreans of ancient Greece and has spawned centuries of discussion among physicists and philosophers. In the 17th century, Galileo famously stated that our universe is a “grand book” written in the language of mathematics. More recently, the Nobel laureate Eugene Wigner argued in the 1960s that “the unreasonable effectiveness of mathematics in the natural sciences” demanded an explanation. We humans have gradually discovered many additional recurring shapes and patterns in nature, involving not only motion and gravity, but also electricity, magnetism, light, heat, chemistry, radioactivity and subatomic particles. These patterns are summarized by what we call our laws of physics. Just like the shape of an ellipse, all these laws can be described using mathematical equations. Equations aren’t the only hints of mathematics that are built into nature: There are also numbers. As opposed to human creations like the page numbers in this book, I’m now talking about numbers that are basic properties of our physical reality. For example, how many pencils can you arrange so that they’re all perpendicular (at 90 degrees) to each other? The answer is 3, by placing them along the three edges emanating from a corner of your room. 
Where did that number 3 come sailing in from? We call this number the dimensionality of our space, but why are there three dimensions rather than four or two or 42?” The example Tegmark gives is a good illustration of the symbolic nature of numbers, showing there to be a fundamental truth of the universe beneath their representation, but whether or not reality is mathematical in nature is mostly redundant to the field; it’s just helpful when trying to understand why it’s all so important, and therefore, the importance of the work being done by someone like Terence Tao. There may be conjecture around the philosophical nature of mathematics but there’s little debate over the benefit. Without it, our cultural and technological evolution wouldn’t have progressed beyond the spear—every scientific and technological advancement involves mathematics to some degree. The paradigm shift available through understanding mathematics at a deeper level is also about mathematicians. Where once they appeared as number technicians, it now seems talented mathematicians are actually more tuned-in to the universe than anyone else (especially those making that kind of claim). Like a child who develops language early and is therefore at an advantage with interpersonal relationships, the gifted mathematician has an aptitude with the language of the universe, becoming the core force behind the progression of our species within it. If Tao really is the world’s most gifted mathematician, he’s more than just a guy who solves hard problems: he’s more fluent with universal language than anyone else alive. The Child Prodigy Terence Tao was born in Adelaide, South Australia in 1975 to Billy and Grace, both Chinese natives who had emigrated to Australia in 1972. They’d met a few years previously at Hong Kong university; Billy there to complete a doctorate in paediatrics while Grace became an honours-roll mathematics and physics graduate. They had three sons within a few years of arriving: Terence (known to his friends as Terry), Nigel and Trevor—their westernised names chosen to reflect the culture of the couple’s new home country. All three brothers would eventually become standout intellectuals, with Nigel scoring a 180 IQ and winning bronze at two international mathematics olympiads, and Trevor becoming a national chess champion at age 14 while winning numerous prizes for his classical music compositions; broad achievements made all the more impressive by the fact he has autism. Tao’s precocity became evident before the age of two, when his parents noticed him arranging an older child’s letter blocks alphabetically; a skill he’d learnt through watching Sesame Street. Things didn’t slow down: when he was 4 he was able to multiply two-digit numbers by two-digit numbers in his head. It was soon decided that regular schooling wouldn’t be suitable, and so he was placed into accelerated learning, which was eventually monitored by the Davidson Institute (Australia’s centre for the development of gifted children). The institute’s Miraca Gross writes: “A few months after Terry’s second birthday, the Taos found him using a portable typewriter which stood in Dr. Tao’s office; he had copied a whole page of a children’s book laboriously with one finger! At this stage his parents decided that, although they did not want to ‘push’ their brilliant son, it would be foolish to hold him back. They began to borrow and buy books for him and, indeed, found it hard to keep pace with the boy. 
They encouraged Terry to read and explore but were careful not to introduce him to highly abstract subjects, believing, rather, that their task was to help him develop basic literacy and numerical skills so that he could learn from books by himself and thus develop at his own rate. “Looking back,” says Dr. Tao, “we are sure that it was this capacity for individual learning which helped Terry to progress so fast without ever becoming bogged down by the inability to find a suitable tutor at a crucial time.” By the age of 3, Terry was displaying the reading, writing and mathematical ability of a 6-year-old.” Research has shown the likelihood of a child prodigy transitioning into an adult genius to be extremely rare. Genius-level intellect isn’t just about talent; it’s about creativity, inventiveness and open-minded intrigue. Tiger mothers forcing a discipline on a child may eventually produce a fantastically able technician in line with the best of a field, but geniuses are generally made through self-interested goals; at the core of true genius is one defining characteristic: self-propelled passion. Billy and Grace Tao are exceptional parents. Instead of marshalling their son’s progression forcibly, it was Tao’s own interest and maturity that informed each incremental step in his education. His father explains: “Firstly we realised that no matter how advanced a child’s intellectual development, he is not ready for formal schooling until he has reached a certain level of maturity, and it is folly to try to expose him to this type of education before he has reached that stage. This experience has made us monitor Terry’s educational progress very carefully. Certainly, he has been radically accelerated, but we have been careful to ensure, at each stage, that he is both ready and eager to move on, and that we are not exposing him to social experiences which could be harmful. Secondly, we have become aware that it is not enough for a school to have a fine reputation and even a principal who is perceptive and supportive of gifted education. The teacher who actually works with the gifted student must be a very flexible type of person who can facilitate and guide the gifted child’s development and who will herself model creative thinking and the love of intellectual activity. Also, and possibly most importantly, we learned that education cannot be the responsibility of the school alone. Probably for most children, but certain for the highly gifted, the educational program should be designed by the teachers and parents working together, sharing their knowledge of the child’s intellectual growth, his social and emotional development, his relationships with family and friends, his particular needs and interests… that is, all the aspects of his cognitive and affective development. This did not happen during Terry’s first school experience but I am convinced that the subsequent success of his academic program from the age of 5 onwards has been largely due to the quality of the relationships my wife and I have had with his teachers and mentors.” Contrasting this approach to other accelerated prodigies, the Taos seem to have viewed their son as his own person rather than as an extension of themselves. They cultivated an environment of deep caring and unconditional support around the interests of their children, allowing the spark of internal genius to ignite without the repressive force of projected self-expectation. 
The Davidson Institute's Miraca Gross continues:

"In November of 1983, at the age of 8 years 3 months, Terry informally took the South Australian Matriculation (university entrance) examination in Mathematics 1 and 2 and passed with scores of 90% and 85%, respectively. In February the following year, on the advice of both his primary and secondary teachers, who felt he was emotionally, as well as academically ready, the Taos agreed that he should begin to attend high school full time. He was based in Grade 8 so that he could be with friends with whom he had undertaken some Grade 7 work the year before, and at this level he took English, French, general studies, art, and physical education. Continuing his integration pattern, however, he also studied Grade 12 physics, Grade 11 chemistry, and Grade 10 geography. He also began studying first-year university mathematics, initially by himself and then, after a few months, with help from a professor of mathematics at the nearby Flinders University of South Australia. In September that year he began to attend tutorials in first-year physics at the university, and 2 months later he passed university entrance physics with a score in the upper 90s. In the same month, finding that he had some time on his hands after the matriculation and internal exams, he started Latin at high school."

Though Tao's education was governed by his parents and teachers, the trajectory was entirely driven by himself and was aided dramatically by an attention to his emotional and social maturity. In many respects he was actually held back. He was moved into high school at age 10, but as noted above, he'd nearly aced university entrance exams two years previously (in Australia high school goes up to grade 12). He spent two thirds of his time with grade 11 and 12 students and the remainder attending 1st and 2nd year university maths and physics classes. This was all down to his parents, who felt strongly about not doing anything simply for appearances' sake, and only taking steps when it was in their son's best interests:

"There is no need for him to rush ahead now. If he were to enter full-time now, just for the sake of being the youngest child to graduate, or indeed for the sake of doing anything 'first,' that would simply be a stunt. Much more important is the opportunity to consolidate his education, to build a broader base. If Terry entered university now he would certainly be able to handle the work but he would have little time to indulge in original exploration. Attending part-time, as he is now, he can progress at a more leisurely rate and more emphasis can be placed on creativity, original thinking, and broader knowledge. Later, when he does enter full time, he will have much more time for research or anything else he finds interesting. He may be a few years older when he graduates but he will be much better prepared for the more rigorous graduate and post-doctoral work."

Sitting among students nearly twice his age, the young Terry Tao became known for his humble and friendly nature, and by all accounts was universally liked by teachers, mentors and peers alike. This may be his nature, but being as precocious as he was, his personality undoubtedly benefited from the unwillingness of his parents to treat him any differently to his brothers (and other children of a similar age in 'regular' families). Modesty was a virtue in the Tao household; show-boating and arrogance made as much sense as a clown at a librarian convention.
He didn’t care about winning prizes or being the best at anything; he just really loved doing maths, and received the perfect balance of encouragement and structure to reach his full potential without ever feeling superior. He knew he was different, but had no value placed on that difference: everyone else was viewed as a human equal. When the 10 year-old Tao was offered a prize for scoring the highest mark ever on the American SAT for a child of his age, he chose a chocolate bar, and when it was handed to him, broke it in half and shared it with his father! Professional Career Tao’s work has achieved everything from progressing prime number and infinity theory to advancing MRI scanning technology—rapidly improving the detection rate of tumours and spinal injuries across the globe. Professor of mathematics at Princeton University Charles Fefferman said in an interview: “Such is Tao’s reputation that mathematicians now compete to interest him in their problems, and he is becoming a kind of Mr Fix-it for frustrated researchers. If you’re stuck on a problem, then one way out is to interest Terence Tao” The influence of mathematical advancement on society is almost entirely indirect: it usually functions as a basis to the advancement of other sciences, especially physics, so drawing a clear line between Tao and the broader value of his work quickly becomes convoluted by additional theory and speculation. Not to mention, explaining pure mathematics in laymen’s terms is extremely difficult. The concepts being used are comprised of other concepts that themselves require their own multi-conceptual explanations, all of which are already well beyond the learning level of the average person (myself included). What I do understand though, is that mathematics at an advanced level can be a truly beautiful and creative phenomenon, and for many, an emotional one as well. It’s been said that most people don’t enjoy math because the schooling curriculum gives a vastly incomplete picture of the subject, analogous to an art class only teaching how to paint a single-coloured wall and never showing a Picasso or Rembrandt. For most of us it’s easy to recognise artistic and social talents as we have our own abilities as a point of reference, allowing us to perceive a distance between our own output and that of the great masters. In the case of mathematics it’s usually a case of viewing some kind of alien language. For example, here’s what Tao has been working on most recently: “I’ve been meaning to return to fluids for some time now, in order to build upon my construction two years ago of a solution to an averaged Navier-Stokes equation that exhibited finite time blowup. One of the biggest deficiencies with my previous result is the fact that the averaged Navier-Stokes equation does not enjoy any good equation for the vorticity ${\omega = \nabla \times u}$, in contrast to the true Navier-Stokes equations which, when written in vorticity-stream formulation, become $\displaystyle \partial_t \omega + (u \cdot \nabla) \omega = (\omega \cdot \nabla) u + \nu \Delta \omega$ $\displaystyle u = (-\Delta)^{-1} (\nabla \times \omega).$ (Throughout this post we will be working in three spatial dimensions ${{\bf R}^3}$.) So one of my main near-term goals in this area is to exhibit an equation resembling Navier-Stokes as much as possible which enjoys a vorticity equation, and for which there is finite time blowup. Heuristically, this task should be easier for the Euler equations (i.e. 
the zero viscosity case ${\nu=0}$ of Navier-Stokes) than the viscous Navier-Stokes equation, as one expects the viscosity to only make it easier for the solution to stay regular. Indeed, morally speaking, the assertion that finite time blowup solutions of Navier-Stokes exist should be roughly equivalent to the assertion that finite time blowup solutions of Euler exist which are “Type I” in the sense that all Navier-Stokes-critical and Navier-Stokes-subcritical norms of this solution go to infinity…” I don’t know about you, but I almost need a lay-down after reading that. It’s my goal over the next 12 months to both increase my own base understanding of mathematics and to source mathematicians capable of providing effective metaphors to better illustrate the work they’re doing for the rest of us. I’ll post more specifically on the subject then, and will potentially revisit this section to give it some greater context. Closing It’s no accident that Tao became passionate about mathematics, and it’s not just a matter of encouragement. His parents instilled him with a positive and compassionate outlook and supported him, but it was ultimately the conscious absence of his parents that helped him the most. The common sense fact is, if someone is good at anything, they’re much more inclined towards it over other activities, especially without there being any pressure around their achievements. The brain naturally releases higher dopamine levels when the mind perceives self-accomplishment easily relative to a common standard, which in Tao’s case, came very early when he was teaching children twice his age how to count before turning 3. His aptitude then went on to connect his developing interest to higher-concept (more elegant and interesting) mathematics much sooner than most professionals in the field, thereby giving him an enormous hook. The message for parents here is a clear one: for a child’s potential to be reached, their talent needs guidance without any pressure and expectation. The choices and direction of Tao’s parents were paramount to his development. They worked tirelessly in the background to create new and nurturing environments for him to grow in, and in terms of his personal experience, they were largely invisible. They recognised the importance of balance in the growth of modest self-confidence, a concept equally important to all avenues of his life—whether it be at school, at home or among friends. Most importantly, Tao’s parents understood his genius. His father sums it up: “I have seen too many situations where the parents did the wrong thing. A brilliant mind is not just a cluster of neurons crunching numbers but a deep pool of creativity, originality, experience and imagination. This is the difference between genius and people who are just bright. The genius will look at things, try things, do things, totally unexpectedly. It’s higher-order thinking. Genius is beyond talent. It’s something very original, very hard to fathom.” Terence Tao is more than a mathematical genius; he’s a role model for human conduct, a rare example of supreme talent and supreme humility existing in side-by-side unison. We may not be able to learn much directly from his work, or even understand the first thing about it, but I think most of us can learn from his outlook on the world: no matter who you are or how good at something you are, be humble, let your work speak for itself, and be a good and genuine person without motive. 
If you haven't seen him before, here's a brief interview he had on the Colbert Report a couple of years ago. Note his demeanour and the speed of his brain compared to his speech. He's one of a kind. If you're interested in learning more about the 'Navier-Stokes' equation or checking out more of his work, Tao runs his own WordPress blog here.

## 4 thoughts on "Exploring Genius – Terence Tao"

1. I feel there is a difference between wisdom and intellect: a person may have a high IQ but not be wise. At the same time there are other factors, like social behaviour and ethics, lots of parameters; life is too complex to sum up as achievement. It's some other type of program which is not decoded yet.
    1. Garry Maurice says: Agreed! Cheers for commenting, Ajay.
# Are laser-stars the better missile carriers in space warfare?

Laser-stars are war-spacecraft optimized to accommodate a huge laser weapon capable of eliminating an enemy thousands of kilometers away. They will usually have enormous radiators to deal with the waste-heat of their lasers and to cool the lasers further to reduce thermal lensing to keep the beam quality high. During an attack-run, they will use pulsed-beam lasers, but many laser designs can be switched between pulsed-beam and continuous-wave mode. Missile Carriers are war-spacecraft optimized to shoot missiles at the enemy to kill them. The Rocinante from The Expanse could be considered a moderate missile carrier. An extreme missile carrier would be a propulsion bus whose only payload is dozens or hundreds of missiles. With the relevant terms cleared up, on to the issue.

Lasers have, unlike missiles (and kinetics, but the lines between those two are blurry at best in my setting), a limited range due to diffraction, while missiles can just shut off their engines and go into cruise mode, possibly for centuries. This means that laser-stars will spend a long time on the approach doing nothing with their amazing lasers (they could threaten the enemy in Morse code with them, but that doesn't seem to be an effective use of a billion-dollar weapon system). Missile carriers likewise have their problems, or rather their missiles have. Unless you can install small, ridiculously efficient and powerful drives on the missiles, you either have to use humongous swarms to get through the point-defense grid (laser-star lasers, PDS-lasers, PDS-guns and defensive wide-angle casaba-howitzers) or your attack won't be effective. Additionally, the enemy will know where the missiles will come from due to their very detectable drives. And all of this assumes that the missile carriers can get effective launches off before the laser-stars vaporize them. So essentially laser-stars will have a lot of ineffective downtime and missile carriers will be countered by point defense and the rocket equation.

My solution to these issues is based on the assumption that stealth in space is in fact possible (the links provide equations) (I know that a lot depends on the sensor nets available, but I assume that those will become worse as the war carries on, as sensor hunters, x-ray fluorescent illumination searches, laser scans, and sandstorm kinetics take their toll on them). I don't mean tactical Star Trek stealth but strategic detection-lowering stealth. True, one can't hide a departure or launch burn, but one can cool oneself down close to the temperature of the CMB with evaporative cooling and use metamaterials to make visible light and radio detection harder. If the cold missiles use serpentine rocket nozzles to strategically dump coolant to maneuver, the enemy will know only that there is a missile coming in and that it is somewhere in a 10 million square-kilometer zone. Useful information to be sure, but virtually useless to the point-defense effort. When the missiles are inevitably detected, they might already have gotten into the range where they can effectively deploy their payload, say a nuclear-pumped x-ray laser.

My solution to the issues of laser-stars and missile carriers is to use the laser-star's laser to accelerate missiles until it can effectively fire at the enemy. This is the concept of laser propulsion. I could either use laser sails or laser thermal rockets.
The delta-v budget of the laser-boosted stage of the missile is given by:

$$v = a\sqrt{2d/a} = \sqrt{2ad}$$

• $$v$$ = change of missile velocity relative to the laser-star
• $$d$$ = distance over which the missile is accelerated (effective range of the laser)
• $$a$$ = acceleration of the missile

According to my research $$a$$ will be several G's or even several tens of G's and $$d$$ will be in the hundreds or thousands of kilometers, resulting in delta-v budgets of several tens of kilometers per second. After the laser burn, the hot propulsion stage is ditched and broken up in a wide-field shrapnel storm meant to harass the enemy. The cold and stealthy missile continues to cool itself and uses the coolant to increase the volume of space where it could be. The warheads the missiles will carry, if they are used in an anti-spacecraft capacity, are very diverse:

• kinetic impactors (nuclear gun, kinetic missile, cold bullet)
• sandstorm kinetics (small particles meant to damage surface structures)
• directed energy (casaba howitzers and bomb-pumped lasers)
• virus bots (micro-machines meant to land and hack into enemy computers)
• combat drones (the closest thing to a space-fighter in my universe, various applications)

In the end, assuming that my assumptions about warheads and the nature of stealth in space are correct, is it thus reasonable to conclude that a laser-dominated battlefield will automatically lead to a missile-dominated one?

Whilst your assertion that stealth may be possible in space is reasonable (and the links you've provided give reasonable arguments in its favour), it does not play well with this second assertion: "According to my research a will be several G's or even several tens of G's and d will be in the hundreds or thousands of kilometers, resulting in delta-v budgets of several tens of kilometers per second."

Energetic reaction engines, regardless of their mode of operation, cannot be stealthy. The launch of your salvo of missile busses cannot go unremarked, even if the missiles themselves immediately become invisible. Once you've revealed your location, you can be observed via occultation. Your metamaterials can't help but be imperfect, distorting light sources behind them, especially when faced with a broad-baseline sensor array. You'll also have problems with a coating that works equally well from mm-wave to sub-nanometre x-rays. Once spotted, a retaliatory missile strike can be launched against you. If your missiles are as dangerous as you imply, what you end up with is mutually-assured destruction all over again... you can't launch, because their retaliatory strike will annihilate you, even if you achieve total destruction of your targets (which is one of the reasons I'm finding "realistic" spaceship combat a bit uninteresting, these days).

"Is it thus reasonable to conclude that a laser-dominated battlefield will automatically lead to a missile-dominated one?" Given the assumptions you're operating under, it seems reasonable enough. Your missile-dominated battlefield will then immediately turn into a stealth-dominated one, because as soon as anyone gets spotted there's a good chance that everyone gets killed.

They will see the missiles coming no matter what. Your giant laser ships have a giant laser that they are not doing very much with. They have a lot of empty space. They can power the laser down, broaden the beam and swing it around and around, lighting up the emptiness with the laser. Even as the beam scatters with distance, it will remain effective at illumination.
If something interrupts the beam, that something will reflect the laser back, and the Doppler shift of the reflected light will tell you the speed of the object. If the obstacle is nonreflective it will still prevent sparkle from dust further down the path of the laser and so give away its position. The laser then swings back for another, more sustained look at what that thing was that got in the way. If it could be a missile, the laser delivers additional energy until something happens - either destruction of the thing or it is going the other way. Given that a push by a laser was what got your missiles going in the first place, this offers the awesome possibility of laser missile ping pong. But otherwise I think missiles in space are a hard sell. Space is big and light is fast.

• Firstly, the detecting laser's light will go right around the missiles. Then, while the detection strategy you suggest could theoretically work, it is useless in a military context. You need truly huge infrared telescopes to detect dust illuminated along the path. These telescopes can be blinded by an enemy laser from half across the solar system. – TheDyingOfLight Oct 19 '19 at 22:12
• Furthermore, I think you overestimate the effective ranges of the lasers and underestimate the effective ranges of warheads. Bomb-pumped x-ray lasers especially will get a good laugh out of your attempt to shoot them down. As soon as you open fire, they do as well and vaporize your delicate laser optics with a pulse of coherent ionizing radiation. – TheDyingOfLight Oct 19 '19 at 22:16
• It is not your fault that missiles are bad and lasers go far, @TheDyingOfLight. It is physics' fault. But you are right that the detection range of the laser ship might be considerably farther than the destructive range. The laser ship will track them until they get close. Re returning fire / good laugh - if the range of the laughing laser missile is the same as the laser dreadnought then yes; no one would have laser ships. But really the laser missiles will be 5000 km out, bathed in a gentle red light which then focuses into a killing laser beam. Do not be mad. It is not your fault. – Willk Oct 19 '19 at 22:39
• A giant infrared telescope is a good idea! The laser dreadnought needs 2 of those. But no telescopes will be blinded. It is not 1930. Light point sources from distant laser dreadnoughts, nearer scattered dust and nearly invisible missiles will be re-rendered on screens. – Willk Oct 19 '19 at 22:42
• I'm not mad, I'M FURIOUS!!! No, not really. Sorry, my reaction was a bit brisk. 1. I don't mean that the poor recruit who has to stare into the telescope will be blinded. A telescope observing objects in the far IR must be extremely cold, otherwise it'll only observe itself. Shoot at it with a laser from afar and it'll only see itself cooling down. – TheDyingOfLight Oct 19 '19 at 23:09

There's no comparison... if you had your laserstar mount a whole bunch of lasers, a few really, really big, and the rest relatively small, the small lasers could act as point defence. The point defence lasers could kill any missile before it could reach effective detonation range, yet a powerful laser can be lethal at a range of several light seconds even with dispersion. The only way a missile ship might be able to compete with a laser ship is if the missiles carry atomic-bomb-pumped rod lasers that are aimed at the target ship before detonation - effectively becoming shrapnel for an atomic-scale weapon.
Consider the mathematical relationship between energy density and distance: energy per square meter at the target varies inversely with the square of the distance. However, the energy released from a bomb is released in all directions, so for a bomb to be maximally effective - to deliver the greatest amount of energy to its target - it needs to be in contact with - or better, inside - the target. Compare that with a laser: its energy is released unidirectionally with only a small dispersal, so the distance at which it delivers only a quarter of the energy per square meter is very much longer than for a bomb, and it may still deliver all of its energy to the target.

You are right to think that the battlefield is not static. Even when the battlefield appears to stick to the same concept and strategies for an extended period of time, it continues to evolve. If the current state of the battlefield is dominated by lasers alone, then that leaves all those who believe it will stay as such vulnerable to even the mildest amount of creativity. The weakness of lasers is dissipation, so imagine if someone decided to equip a missile with a substance that expands rapidly and dissipates light effectively when introduced to heat? Long range, and perhaps even point defense, would be rendered useless. Of course, this veil could become a disadvantage too if the only issue for defense is heat build-up. My point is, you need to be flexible in battle to be able to survive. If the only defense you have is lasers and they've become useless, it doesn't matter if you can see the missiles coming or not. Making your laser-specialized ship adapt and carry missiles that utilize the ship's strengths to enhance their effectiveness is perfectly natural, especially if your ship sails alone and not in a pack. Specialized ships only make sense when they are in a group, so that when they are countered they are not completely vulnerable to obliteration.

The conclusion I had gotten from sources like Atomic Rockets and Rocketpunk Manifesto is that the Laserstar is one component of a combat system, much like an aircraft carrier is one component of a naval task force or battle group.

[Image: Luke Campbell's RBoD concept; the linac is around 500 m long.]

Rocketpunk Manifesto in particular had lots of discussions about laser warfare, especially using Ravening Beam of Death lasers (RBoDs), which would have an effective kill range of a light second (mostly in order to minimize the target cycle - there is only a two-second delay from shot to seeing the effect). RBoDs were considered to be effective at vapourizing metals, ceramics and carbon fiber in milliseconds at that range, but Rick Robinson pointed out that massive weapons like that could engage far beyond, delivering "scorching" damage at a light minute and even appreciable energy on target at a light hour. This suggests that the laserstar would work as a searchlight, illuminating potential targets with high-energy beams while the sensor cloud surrounding the constellation of warships would be looking for reflections, heat energy or other tell-tale signs of enemy activity. A spaceship cooled by evaporation of liquid hydrogen or helium would be suddenly discovered by the appearance of ionized gas molecules from the evaporative cooling cloud, for example. When that happens, the RBoD simply refocuses the beam, and starts to sweep up and down the trail of ionized gas to find the source.
Given that your fleet mirrors many of the assumptions from Rocketpunk, Atomic Rockets and Tough SF, you are already aware that the constellations suggested by these sources could be deployed in a disc or sphere about a light second in diameter, with multiple laserstars, kinetic platforms and other weapons and sensors. The inputs from so many different sensors (tuned to multiple different wavelengths) will provide a fairly detailed three-dimensional picture of the space surrounding the constellation, and multiple RBoDs illuminating the space around them for light hours will give them more opportunities to discover stealth spacecraft and prepare countermeasures to deal with them. The other issue is that stealth spacecraft as described seem to be fairly low-performance vehicles, so the defending commander will likely be able to look at potential launch sites and work out transfer orbits to search, cutting down on the need to observe the entire volume of space surrounding the constellation.

I might also observe that in this environment, there will be a push to create more, and less expensive, lasers. Replacing the RBoD's 500 m-long linac with a plasma wakefield accelerator brings the entire apparatus down to the size of a kitchen table, allowing the constellation to have hundreds of mini RBoDs rather than a few stately kilometer-long vessels. If the giant ones are used as illumination searchlights, the oncoming stealth spacecraft might not even be able to determine where the multitude of "fighter" RBoDs actually are, meaning it can be skewered by many unexpected laser beams once its position and orbit are tracked.

[Image: visualization of a plasma wakefield accelerator in action.]
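As a rough sanity check on the delta-v figures quoted in the question above: a missile held at constant acceleration $$a$$ over a boost distance $$d$$ leaves the laser's effective envelope at $$v = \sqrt{2ad}$$. The sketch below just evaluates that expression for a few illustrative combinations; the specific numbers are assumptions picked from the ranges the question mentions (several g to several tens of g, hundreds to thousands of kilometers), not values from any particular design.

```python
import math

G0 = 9.81  # standard gravity in m/s^2

def boost_delta_v(accel_g: float, boost_distance_km: float) -> float:
    """Delta-v in km/s from constant acceleration accel_g (in g) over
    boost_distance_km, i.e. v = sqrt(2 * a * d)."""
    a = accel_g * G0                # m/s^2
    d = boost_distance_km * 1000.0  # m
    return math.sqrt(2.0 * a * d) / 1000.0

# Illustrative (assumed) combinations of acceleration and laser boost range.
for accel_g, dist_km in [(5, 500), (10, 1000), (30, 3000)]:
    dv = boost_delta_v(accel_g, dist_km)
    print(f"{accel_g:>2} g over {dist_km:>4} km -> about {dv:.0f} km/s")
```

This gives roughly 7, 14 and 42 km/s respectively, so the "several tens of kilometers per second" budget really does require both the high-acceleration and the long-range end of those assumptions.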
The first mathematically unbiased estimates of neocortical cell numbers are presented from the developing pig brain, including a full description of tissue processing and optimal sampling for application of the stereological optical fractionator method in this species. The postnatal development of neocortical neurons and glial cells from the experimental Göttingen minipig was compared with the postnatal development of neocortical neurons in the domestic pig. A significant postnatal development was observed in the Göttingen minipig brain for both neuronal (28%; P=0.01) and glial cells (87%; P<0.01). A corresponding postnatal development of neurons was not detected in the domestic pig brain. The reason for this strain difference is not known. The mean total number of neocortical neurons is 324 million in the adult Göttingen minipig compared with 432 million in the domestic pig. The glial-to-neuron cell ratio is around 2.2 in the adult Göttingen minipig. Based on these results, the domestic pig seems to be a more suitable model for evaluating the effects of developmental insults on human brain growth and neuronal development than the Göttingen minipig. The Göttingen minipig and the domestic pig are increasingly used as non-primate models in basic experimental studies of neurological diseases. The gyrated pig brain is more similar to the primate brain than lissencephalic brains from small laboratory animals. The pig is affordable, it is easily handled and its use may potentially avoid some of the ethical considerations concerning the use of primates as laboratory animals. The pig has previously been considered a potential animal model for evaluating the effects of developmental insults on human brain growth and development (Dickerson and Dobbing,1966; Book and Bustad,1974; Dobbing and Sands,1979; Pond et al.,2000). In general, the mammalian brain appears to go through a momentary period of rapid growth, exemplified by a characteristic growth rate curve when the percent of adult brain mass is plotted against age(Dobbing and Sands, 1979). One of the most important interspecies differences seems to be the complexity of the final product as well as the timing of the brain growth spurt in relation to birth (Dobbing, 1974). The timing of the brain growth spurt can be used to roughly categorize mammalian species as prenatal, perinatal or postnatal developers(Dobbing and Sands, 1973). In a comparison of seven mammalian species, it was demonstrated that the pig brain, like that of humans, develops perinatally, with a brain growth spurt extending from midgestation to early postnatal life(Dickerson and Dobbing, 1966; Dobbing and Sands, 1979; Pond et al., 2000). This is in contrast to other mammalian species, e.g. the brain of guinea pig, sheep and monkey, which has a prenatal growth spurt, or the brain of rat and rabbit,which develops postnatally (Dobbing and Sands, 1979). The development of the pig brain is also considered more similar to the human brain with respect to myelination, compositions and electrical activity (Dickerson and Dobbing,1966; Fang et al.,2005; Flynn, 1984;Pampliglione, 1971; Thibault and Margulies, 1998). The traditional view of the primate neocortex is that it is structurally stable and that neurogenesis and synapse formation occur during early development (Bourgeois et al.,1994; Rakic,1985b). 
Quantitative studies based on DNA quantification in the human brain have indicated that the major phase of neuronal multiplication occurs during the first half of gestation, prior to the numerically larger but slower phase of glial multiplication, which continues into the first postnatal years (Dobbing and Sands,1973; Dobbing,1974). A two-phased growth pattern has similarly been observed in a stereological study on total cell numbers in the developing human fetal forebrain (Samuelsen et al.,2003). A clear cell discrimination has, however, not been performed in early fetal life. In an attempt to further evaluate the pig as a potential model for human brain development and to provide a quantitative structural basis for comparative and experimental studies, a number of quantitative examinations on the neonate and adult pig brain have been initiated. Quantitative data are obtained using assumption-free stereological methods. The methods are designed to describe quantitative parameters without assumptions about shape, size,orientation and distribution of cells in the reference space and are based on established procedures for systematic, uniformly random sampling that are superior in precision compared with results obtained by independent sampling(Gundersen and Jensen, 1987). In the present paper, neocortical cell numbers were obtained using the optical fractionator method (Gundersen,1986; West et al.,1991). A detailed description of tissue processing and optimal sampling procedures for application in the pig brain is presented. ### Experimental animals #### The Göttingen minipig The Göttingen minipig was developed in 1961–1962 at the Institute of Animal Breeding and Genetics of the University of Göttingen(Germany). The present characteristics of the Göttingen minipig, as a small, white miniature pig with good fertility and stable genetics, were obtained as a result of cross-breeding the Minnesota minipig with the Vietnamese potbelly pig and the German Landrace(Bollen and Ellegaard, 1997; Damm Jorgensen, 1998). The Göttingen minipig in Denmark originates from caesarean sections performed on German sows in the early 1990s. The offspring was transferred to a barrier facility and kept in a non-contaminated environment. Only three populations of limited population size exist in the world today. The Göttingen minipig is an outbred animal, with less than 5%in-breeding (Glodek, 1986). The newborn Göttingen minipig has a body mass of ∼350–450 g. Boars become sexually mature at an age of 3–4 months, weighing 6–8 kg, while sows are sexually mature at an age of 4–5 months, weighing 7–9 kg. The gestational period is 112–114 days and the average litter size is 5–6 animals. The adult mass of the Göttingen minipig, at an age of two years, is 35–40 kg(Bollen and Ellegaard, 1997; Damm Jorgensen, 1998). The Göttingen minipig of today is not gnotobiotic but is kept in barriers to avoid bacterial contamination and thereby minimize the risk of any influence on research caused by microbiological organisms. Health monitoring is carried out twice a year, based on FELASA (Federation of European Laboratory Animal Science Associations) guidelines. #### The domestic pig The domestic pigs used in the present study are all crossbreeds between Danish Landrace and Yorkshire. This combination is normally used for production of mother animals. Danish Landrace is a long and lean breed known for high fertility and good motherhood. The litter size is around 13.5 piglets per litter. 
The Yorkshire pig has a high lean meat percentage, high daily gain and good meat quality. The mothering characteristics are good, with a litter size of around 12 piglets per litter. The National Committee for pig production mainly organizes The Danish Pig breeding program. The newborn Landrace/Yorkshire domestic pig has a body mass of∼1.3–1.9 kg. Boars become sexually mature at an age of 6 months,while sows are sexually mature at an age of 7 months. The gestational period is 114 days and the average litter size is 10–14 animals. The adult mass of the domestic pig, at an age of two years, is 200–300 kg(Bollen et al., 2000). In Denmark, most of the domestic pigs are SPF-tested (Specific Pathogen Free) but are not submitted to the same strict control procedures as the Göttingen minipigs. ### Experimental set-up A total of 10 Göttingen minipigs and 12 domestic pigs were used in the study. All Göttingen minipigs [five neonate polts (1–2 days old)and five adult sows (1.5–3 years old)] were perfusion fixated by a procedure approved by the Danish Animal Research Inspectorate. The pigs were anesthetized with an intramuscular injection (1 ml per 10 kg body mass) of a mixture of 6.5 ml Narcoxyl®Vet (20 mg ml–1; Intervet,Denmark), 1.5 ml Ketaminol®Vet (100 mg ml–1; Intervet,Denmark) and 2.5 ml Methadone DAK (19 mg ml–1; Nycomed,Denmark) added to one bottle of Zoletil®50 Vet (Virbac Laboratories,France) without additional solvent. The pigs were then placed in a supine position on the operation table and supplied with a lethal dose of Pentobarbital (1 ml kg–1 body mass, 200 mg ml–1) before intervention. The deeply anesthetized' pigs received a midsternal incision followed by a sternal split. The left cardiac ventricle was punctured by an infusion cannula, the right auricle was cut open, and a transcardial perfusion with 0.5–2.5 l saline followed by 2.5–7.5 l of 4% paraformaldehyde in 0.15 mol l–1Sørensen phosphate buffer (pH 7.4) was executed. The procedure took on average 10–15 min. The brain was removed and postfixed for an additional 24 h at 5°C in 1% paraformaldehyde. The domestic pigs [six neonate sows (1 day old) and six adult sows(3–4 years old)] were euthanized with a lethal injection of Pentobarbital i.v. or with carbon dioxide in a Danish pig slaughterhouse. Following death, the brains were carefully removed from the skull and fixed by immediate immersion in 4% formaldehyde (0.15 mol l–1phosphate buffer, pH 7.4) for at least two weeks. ### Tissue processing The cerebral hemispheres were divided medially through the corpus callosum,and the left or right hemisphere was selected randomly before dehydration and infiltration in paraffin (Fig. 1A,B). The hemispheres were placed in a container with the midsagittal surface facing down, embedded in blocks of paraffin and sectioned into coronal series of 40 μm-thick sections on a Leica microtome(Fig. 1B,C). Satisfactory adhesion of the thick sections to glass slides was obtained using Superfrost Plus glass slides primed with a chromalun–gelatin solution(Feinstein et al., 1996). A predetermined fraction of the 40 μm-thick sections was sampled using moist filter paper and carefully rolled onto 40°C preheated primed slides covered by a thin layer of distilled water. The sections were dried overnight at 37°C and a final selection of slides was subsequently stained with Giemsa. ### Stereological techniques Estimates of neocortical cell numbers were obtained using the optical fractionator method (Fig. 1). 
The optical fractionator combines two stereological principles: the optical disector for counting particles and the systematic uniform sampling scheme of the fractionator (Gundersen, 1986; West et al., 1991). The optical fractionator involves counting particles with optical disectors in a uniform random systematic sample that constitutes a known volume fraction of the region being analyzed. This is achieved by counting cells on a known fraction of sections (ssf; Fig. 1D), under a known fraction of the sectional area of the region (asf; Fig. 1E), in a known fraction of the thickness of a section (hsf; Fig. 1G). The total numbers of neocortical cells are estimated by multiplying the numbers of cells counted (∑Q⁻) with the inverse sampling fractions. Finally, the bilateral cell number can be estimated by multiplying the unilateral number by two:

$$N_{\mathrm{total}} = \frac{1}{ssf}\cdot\frac{1}{asf}\cdot\frac{1}{hsf}\cdot\Sigma Q^{-}\cdot 2 \qquad (1)$$

Fig. 1. The optical fractionator sampling scheme. The cerebral hemispheres were divided medially (A), and the left or right hemisphere was selected systematically randomly before dehydration and infiltration in paraffin (B). The hemispheres were sectioned exhaustively into 40 μm-thick coronal sections (C), from which a predetermined fraction, the section sampling fraction (ssf), was sampled systematically randomly (D) and subsequently stained with Giemsa. Optical disectors were positioned systematically randomly on each of the sampled sections at regular predetermined x,y-positions (E,F). The area of the counting frame of the disector relative to the area associated with each x,y-step represents the area sampling fraction (asf). Counts were performed in all optical disectors hitting the structure of interest (E,F). Cellular nuclei were counted by moving the counting frame through a continuous stack of thin optical planes inside the section (G). The height of the disector (hdis) relative to the height of the section (t) represents the height sampling fraction (hsf). The disector is protected by upper and lower guard zones.

The optical disector may be considered as a three-dimensional probe, generated with the aid of a microscope with a high numerical aperture (NA=1.4) and an oil immersion objective, in which it is possible to observe thin focal planes in relatively thick sections (Fig. 1G).
A counting frame with 'exclusion' and 'inclusion' lines is superimposed on the magnified image of the tissue on a computer screen, and the orientation in the z-axis is measured with a digital microcator with a precision of 0.5 μm. The purpose of the 'exclusion' and 'inclusion' lines of the counting frame is to exclude edge effects arising from sub-sampling (Gundersen, 1978). Similarly, upper and lower guard zones protect the counting frame to prevent bias as a consequence of loss of cells close to the section surfaces (Fig. 1G). All cells that come into focus within the frame and are not in focus at the uppermost position are counted as the focal plane is moved through the section. All cell counts were obtained using a BH-2 Olympus microscope and CAST-GRID software (Olympus, Ballerup, Denmark).

#### Counting criteria

The cells were identified as neurons if they had a combination of dendritic processes, a Giemsa-positive cytoplasm, a clearly defined nucleus with a pale and homogeneous nucleoplasm and a dark and condensed, centrally located nucleolus. The nucleus was used as the counting item, and around 250 neuronal nuclei were counted per brain (∑Q⁻neu; Table 1). The glial cells were usually smaller and identified by the absence of a Giemsa-positive cytoplasm, the presence of heterochromatin clumps within the ovoid or irregularly shaped nucleus and the lack of a clearly identifiable nucleolus. Also here, the nucleus was used as the counting item, and an average of 360 glial nuclei were counted per brain (∑Q⁻glial; Table 1). No differentiation was made between astrocytes, oligodendrocytes or microglia. Endothelial cells were easily recognized by their dark and elongated nucleus and were excluded from all counts.

Table 1. The fractionator sampling parameters, mean values

| Group | BA (μm) | 1/ssf | ∑sect | a(frame) (μm²) | Step size (x,y) (μm) | 1/asf | ∑dis | hdis (μm) | t̄Q⁻ (μm) | 1/hsf | ∑Q⁻neu | ∑Q⁻glial |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Göttingen minipig, neonate | 40 | 50 | 11-13 | 749 | 1600 | 3417 | 172 | 15 | 39.1 | 2.6 | 253 | 383 |
| Göttingen minipig, adult | 40 | 100 | 11-12 | 2497 | 2600 | 2707 | 168 | 20 | 37.7 | 1.9 | 289 | 337 |
| Domestic pig, neonate | 40 | 100 | 8-9 | 999 | 1800 | 3242 | 141 | 15 | 38.4 | 2.6 | 260 | — |
| Domestic pig, adult | 40 | 200 | 7-8 | 2706 | 2800 | 2896 | 137 | 25 | 37.9 | 1.5 | 235 | — |

Abbreviations: BA, block advance; ssf, section sampling fraction; ∑sect, number of sections; a(frame), area of counting frame; ∑dis, number of disectors hitting neocortex; hdis, disector height; t̄Q⁻, Q⁻-weighted section thickness; hsf, height sampling fraction; ∑Q⁻neu, number of neurons counted; ∑Q⁻glial, number of glial cells counted. The a(frame) for glial cell estimations in adult Göttingen minipigs was 50% of the original frame.

#### The section sampling fraction

A sampling scheme was designed based on systematic uniform random sampling (SURS) to ensure that all parts of the neocortex had an equal probability of being sampled.
Based on a pilot study, every 50th or 100th section was sampled from the neonate and adult Göttingen minipig brain, respectively, whereas every 100th or 200th section was sampled from the domestic pig (Table 1). The first section was selected randomly using a random number within the sampled section period. In the event of poor technical quality, e.g. due to the presence of folds or breaks, the subsequent section was sampled instead. Extra sections were also sampled to circumvent potential damage of sections during staining procedures. These deviations are of no consequence to the results, provided the overall sampling scheme is maintained. The section fraction represents the section sampling fraction (ssf) and provided 11-13 sections for final quantitative analyses in the Göttingen minipig and 7-9 sections in the domestic pig.

#### The area sampling fraction

In each of the sampled sections, counts of neurons or glial cells were made with optical disectors at regular predetermined x,y positions in the neocortex. The neocortex was defined as the isocortex and mesocortex. Again, the first disector was positioned randomly within the first x,y interval by the CAST-GRID software. The area of the counting frame of the disector, a(frame), is known relative to the area associated with each x,y-step. The area sampling fraction (asf) is accordingly:

$$asf = a(\mathrm{frame})\,/\,a(x,y\text{-step}) \qquad (2)$$

In a pilot study, the step length between disectors was adjusted to provide approximately 150 disectors in each brain. Subsequently, the frame area and height of the disector were adjusted to obtain 1-2 particles per disector (see below).

#### The height sampling fraction

The height of the disector (hdis) should be known relative to the thickness (t) of the section. To prevent bias as a consequence of loss of cells, the disector is guarded by upper and lower guard zones. The size of the guard zones depends upon the size of the particles counted. For neocortical neurons it should usually be at least 5 μm in the top and bottom of the section. To compensate for potential deformation of the sections in the z-axis, the height sampling fraction (hsf) depends on the Q⁻-weighted mean section thickness ($\bar{t}_{Q^{-}}$):

$$hsf = h_{\mathrm{dis}}\,/\,\bar{t}_{Q^{-}} \qquad (3)$$

$$\bar{t}_{Q^{-}} = \Sigma(t_{\mathrm{i}}q_{\mathrm{i}}^{-})\,/\,\Sigma(q_{\mathrm{i}}^{-}) \qquad (4)$$

where $t_{\mathrm{i}}$ is the local section thickness centrally in the ith counting frame with a disector count of $q_{\mathrm{i}}^{-}$ (Dorph-Petersen et al., 2001).

#### Volume estimates

The systematic uniform random placement of disectors in the fractionator design was used to estimate volumes in accordance with the unbiased principles of the Cavalieri estimator:

$$V_{\mathrm{neo}} = 2\,\Sigma P\,a(p)\,t\,k \qquad (5)$$

where ∑P is the number of frame upper-corner points hitting the neocortex, a(p) is the x,y-step area, t is the block or microtome advance and k is the section sampling fraction. Note that estimated volumes are not used for the estimation of total cell number; the fractionator estimates of total cell number are independent of the containing volume and its shrinkage and deformation.

### Statistical analyses and estimation of precision

The differences between means were analyzed using an unpaired two-tailed Student's t-test. Group variability is shown in parentheses as the coefficient of variation (CV=s.d./mean) (Table 2).
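Before turning to the precision of the estimates, here is a rough numerical illustration of how the sampling fractions in equations (1)-(4) combine into a total cell number, using the group-mean parameters listed for the neonate Göttingen minipig in Table 1. Because these are group means rather than per-animal values, the result is only an order-of-magnitude check against the ~250 million neurons reported in Table 2, not a reproduction of the published per-animal estimates.

```python
def fractionator_total(inv_ssf, a_frame_um2, step_um, h_dis_um, t_q_um, sum_q):
    """Optical fractionator estimate, equation (1):
    N_total = (1/ssf) * (1/asf) * (1/hsf) * sum(Q-) * 2 (both hemispheres)."""
    inv_asf = (step_um * step_um) / a_frame_um2  # inverse of eq. (2)
    inv_hsf = t_q_um / h_dis_um                  # inverse of eq. (3), with t from eq. (4)
    return inv_ssf * inv_asf * inv_hsf * sum_q * 2

# Group means for the neonate Göttingen minipig, taken from Table 1 (illustrative only).
n_neurons = fractionator_total(
    inv_ssf=50,        # every 50th section sampled
    a_frame_um2=749,   # counting frame area
    step_um=1600,      # x,y step length
    h_dis_um=15,       # disector height
    t_q_um=39.1,       # Q--weighted mean section thickness
    sum_q=253,         # neurons counted
)
print(f"~{n_neurons / 1e6:.0f} million neocortical neurons")  # ~225 million
```

Note that, as stressed in the text, no volume or shrinkage term enters this calculation; equation (5) is only used for the separate Cavalieri volume estimate.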
The precision of the estimate of the total cell number in each subject was estimated as the coefficient of error (CE=s.e.m./mean), caused by sampling error related to the counting noise, the systematic uniform random sampling and variances in section thickness (Table 3). The precision of an individual estimate is related to the uniformity of the distribution of particles being counted and the amount of sampling that has been performed (Table 3). The sampling was considered optimal when the observed variance of the individual estimate, CE², was less than half the observed interindividual variance, CV².

Table 2. Litter parameters and major estimated quantities

| Group | N | Body mass (kg) | Brain mass (g) | Neocortex volume (cm³) | Mean CE volume | Neocortical neurons (×10⁶) | Mean CE neurons | Neocortical glial cells (×10⁶) | Mean CE glial cells |
|---|---|---|---|---|---|---|---|---|---|
| Göttingen minipig, neonate |  | 0.56 (0.081) | 27.8 (0.037) | 1.75 (0.057) | 0.049 | 252.5 (0.11) | 0.065 | 381.9 (0.10) | 0.053 |
| Göttingen minipig, adult |  | 37.9 (0.11) | 79.0 (0.077) | 9.01 (0.11) | 0.052 | 323.8 (0.093) | 0.064 | 714.2 (0.12) | 0.057 |
| P-value |  | — | — | — | — | 0.01 | — | <0.01 | — |
| Domestic pig, neonate |  | 214.5 (0.30) | 29.4 (0.16) | 3.64 (0.098) | 0.060 | 424.8 (0.13) | 0.067 | — | — |
| Domestic pig, adult |  | 212-217 | 134.0 (0.11) | 17.2 (0.083) | 0.051 | 432.1 (0.046) | 0.068 | — | — |
| P-value |  | — | — | — | — | 0.76 | — | — | — |

Mean CE = √(mean(CE²)). For calculation of CE for cell number and volume, see Table 3. The inter-individual variability in each group is shown in parentheses as the coefficient of variability (CV=s.d./mean). Volume estimates are from shrunken paraffin-embedded tissue and should not be compared with fresh brain volumes.

Table 3.
How to estimate the coefficient of error (CE) of the estimate in an individual animal

| Section no. | Q⁻ | A: Qᵢ⁻ × Qᵢ⁻ | B: Qᵢ⁻ × Qᵢ₊₁⁻ | C: Qᵢ⁻ × Qᵢ₊₂⁻ |
|---|---|---|---|---|
| 1 | 14 | 196 | 224 | 308 |
| 2 | 16 | 256 | 352 | 288 |
| 3 | 22 | 484 | 396 | 506 |
| 4 | 18 | 324 | 414 | 216 |
| 5 | 23 | 529 | 276 | 552 |
| 6 | 12 | 144 | 288 | 312 |
| 7 | 24 | 576 | 624 | 528 |
| 8 | 26 | 676 | 572 | 858 |
| 9 | 22 | 484 | 726 | 748 |
| 10 | 33 | 1089 | 1122 | 462 |
| 11 | 34 | 1156 | 476 |  |
| 12 | 14 | 196 |  |  |
| Σ | 258 | 6110 | 5470 | 4778 |

VARNOISE (a): 258. VARSURS (b): 1.89. CE(t) (c): 0.007. CE(ΣQ⁻) (d): 0.062. CE(N) (e): 0.063.

The precision, CE, of a fractionator estimate is a function of three independent factors: the noise variance (VARNOISE), the systematic uniform random sampling variance (VARSURS) and the variance attributable to variations in section thickness [CE(t)] (see also West and Gundersen, 1990; West et al., 1991; West et al., 1996). From the calculations above (a-e), it is obvious that the CE(N) for all practical purposes may be considered a function of the noise variance. If one wants to reduce the CEs of the individual estimates further, this would, in this example, be achieved most effectively by sampling more on the sections already in the series.

(a) VARNOISE is the 'uncertainty' in the estimate that comes from disector counts within a section and is equal to ΣQ⁻.

(b) VARSURS is the 'uncertainty' in the estimate that arises due to sampling between sections, i.e. because repeated estimates based on different sets of sections may vary. VARSURS is calculated using a prediction model that takes into account the systematic nature of the sampling (Gundersen et al., 1999): VARSURS = [3(A − VARNOISE) − 4B + C]/240. For calculation of A, B and C, see the columns above. When one uses more than 5-10 sections for a biological estimate, the SURS variance is usually negligible relative to the noise variance. The denominator is a constant (Gundersen et al., 1999).

(c) CE(t) = [s.d.(tᵢ)/√n](1/t̄), where tᵢ is the mean section thickness in section i, measured with the digital microcator, n is the number of sections and t̄ is the mean section thickness between all sections. The CE(t) usually contributes less than 1% to the overall estimator variance and can be ignored in most studies where section homogeneity is high.

(d) The total sampling variance, CE(ΣQ⁻), is calculated as √(VARNOISE + VARSURS)/ΣQ⁻.

(e) The total CE for the final estimate, CE(N), is eventually calculated as √(CE²(ΣQ⁻) + CE²(t)).

Fig. 2. The postnatal development of neocortical cell numbers in the Göttingen minipig and the domestic pig brain. Total neuron number (A) and total glial cell number (B) in the Göttingen minipig. Total neuron number in the domestic pig (C).
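Returning to Table 3: the worked example can be reproduced directly from the per-section disector counts using the footnote formulas, where the noise variance is ΣQ⁻ (a), the SURS variance follows the prediction model of Gundersen et al. (1999) with the A, B and C column sums (b), and the two are combined with the section-thickness term into CE(ΣQ⁻) and CE(N) (d, e). In this sketch CE(t) is simply set to the 0.007 quoted in the table rather than recomputed from microcator readings.

```python
import math

# Per-section disector counts (Q-) from Table 3.
q = [14, 16, 22, 18, 23, 12, 24, 26, 22, 33, 34, 14]

sum_q = sum(q)                                       # footnote a: noise variance
A = sum(qi * qi for qi in q)                         # sum of Qi * Qi
B = sum(q[i] * q[i + 1] for i in range(len(q) - 1))  # sum of Qi * Qi+1
C = sum(q[i] * q[i + 2] for i in range(len(q) - 2))  # sum of Qi * Qi+2

var_noise = sum_q
var_surs = (3 * (A - var_noise) - 4 * B + C) / 240.0  # footnote b
ce_t = 0.007                                          # section-thickness term, as given
ce_sum_q = math.sqrt(var_noise + var_surs) / sum_q    # footnote d
ce_n = math.sqrt(ce_sum_q ** 2 + ce_t ** 2)           # footnote e

print(sum_q, A, B, C)                      # 258 6110 5470 4778
print(round(var_surs, 2))                  # 1.89
print(round(ce_sum_q, 3), round(ce_n, 3))  # 0.062 0.063
```

With roughly 250 cells counted per brain, the resulting CE of about 0.06 sits comfortably below the group CVs in Table 2, which is exactly the CE² < ½CV² criterion discussed in the statistics section.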
The total number of neocortical neurons in the Göttingen minipig brain increases from ∼253 million at birth to ∼324 million in adulthood (Table 2, Fig. 2A). This significant (P=0.01) 28% difference demonstrates a pronounced postnatal development of neurons in the Göttingen minipig brain. A significant postnatal development is also observed for neocortical glial cells (P<0.01), increasing from ∼382 million in the neonate to ∼714 million glial cells in the adult Göttingen minipig, an 87% difference (Table 2; Fig. 2B). The glial-to-neuron ratio changes accordingly from 1.5 to 2.2. The total brain mass increases almost threefold from a mean of 27.8 g at birth to 79.0 g as adult. Meanwhile, the proportional increase of neocortex volume is more than 500% (Table 2).

A corresponding postnatal development of neocortical neurons is not observed in the domestic pig brain. The domestic pig has ∼425 million neocortical neurons at birth and ∼432 million in adulthood (Table 2; Fig. 2C), which is ∼33% more than in the adult Göttingen minipig neocortex (P<0.01). The number of neocortical glial cells was not estimated. The total brain mass increases from a mean of 29.4 g at birth to 134.0 g as adult. The postnatal proportional increase in neocortex volume is, by comparison, close to 500% (Table 2).

With the principal aim to evaluate the pig as a potential animal model for human brain development, a comparative study has been performed to evaluate the postnatal development of neocortical cell numbers in the experimental Göttingen minipig and the domestic pig brain. It is demonstrated that the cellular development in the Göttingen minipig brain is incomplete at term and that both neocortical neuron and glial cell numbers increase significantly from time of birth to adulthood. By contrast, the neocortical development of neurons in the domestic pig seems to be fully established at birth. An explanation for this strain difference is not at hand.

A rapid development of the brain prior to the general body growth seems to be characteristic for all mammals. Differences between species are related to the gestational time of major cellular multiplication, the developmental stage at birth and the complexity of the final product (Dobbing, 1974). At the time of birth, the human brain constitutes around 23% of its adult mass, and in the first two postnatal years it increases rapidly to around 75% of its adult mass (Dobbing, 1974). In comparison, the brain of the closely related macaque monkey constitutes almost 65% of its adult mass at birth (Dobbing, 1974). The pig brain constitutes around 25% of its adult mass at birth and seems in this aspect to be more similar to that of humans (Dickerson and Dobbing, 1966). The pig is also considered a perinatal brain developer, like the human, and several studies have shown a good correspondence to the developing human brain with respect to myelination, composition and electrical activity (Dickerson and Dobbing, 1966; Pampliglione, 1971; Fang et al., 2005; Flynn, 1984; Thibault and Margulies, 1998). Accordingly, the pig has been considered an appropriate model for human brain development (Dickerson and Dobbing, 1966; Book and Bustad, 1974; Pond et al., 2000). The present study demonstrates different developmental patterns in the neocortex of two strains of pigs. A significant postnatal development of neuron and glial cells is observed in the Göttingen minipig, whereas the adult number of neurons is established at birth in the domestic pig.
This difference in cell number is not represented by a corresponding difference in relative brain mass of the neonates. The brain constitutes 35% of its adult mass in the Göttingen minipig and 22% in the domestic pig, whereas the relative growth of neocortical volume is close to 500% in both strains. The differences can by explained neither by differences in the stereological sampling or counting procedures (see below) nor the different tissue processing. Both fixation methods provided adequate histological sections with no clear difference in cellular morphology. No clinical data corroborate the observed difference; the gestational period is the same and birth mass and behavior is more or less identical in both strains. However, the results do substantiate that strain differences should be considered in future experimental studies using the pig brain and exemplifies the need for a full designation of the specific strain used. Based on the results, the domestic pig seems to be a more proper model for evaluating the effects of developmental insults on the human brain than the Göttingen minipig. Even though recent developments have made it possible to unambiguously demonstrate that new neurons are added to the adult primate neocortex as well as to other mammal brains(Gould et al., 1999; Gould et al., 2001; Gould and Gross, 2002), it is still generally accepted that most neurogenesis in monkeys(Rakic and Sidman, 1968; Sidman and Rakic, 1973; Rakic, 1974; Rakic, 1978; Rakic, 1985a; Rakic, 1985b; Rakic, 1988) and humans(Dobbing and Sands, 1973; Dobbing, 1974; Samuelsen et al., 2003) is completed at midgestation or at least before term. The rate of neurons added to the neocortex in adulthood has no relative influence on the total neocortical neuron number when compared with the rate of multiplication during early development (Gould et al.,2001). Furthermore, recent stereological results from our laboratory reveal that the total number of neurons in the cortical plate of human newborns equals the total number in adults(Larsen et al., in press). These results definitively refute previous results demonstrating a major postnatal neurogenesis in humans (Shankle et al., 1998; Shankle et al.,1999). Regrettably, glial cells were not estimated in the domestic pig. Results from one neonate and one adult domestic pig (not presented) do indicate a prenatal development in glial cell number similar to the postnatal increase observed in the Göttingen minipig. Further studies on several postnatal ages are considered valuable in order to describe the growth slope of neuronal increase in the Göttingen minipig. The Göttingen minipig and the domestic pig have also been considered as useful non-primate models for a number of human neurological diseases(McClellan, 1968; Douglas, 1972). However,although several neuroanatomical studies have been performed, the documentation of pig brain anatomy, connectivity and function is still incomplete. Quantitative information of cell numbers based on systematic sampling procedures has been limited to subcortical areas, e.g. the hippocampus (Holm and West,1994) and the subthalamic nucleus(Larsen et al., 2004). Here,we present the total number of neocortical neurons in two strains of pigs. In the adult Göttingen minipig, the neocortex contains 324 million neurons,whereas the domestic pig brain contains 432 million neocortical neurons. 
This 33% strain difference in neocortical neuron number was not an unexpected finding considering the general relationship between body size and neuron number (for a review, see Williams and Herrup, 1988). The total number of neocortical neurons has also been estimated in a number of other species. There are ∼3 million neocortical neurons in the mouse brain(Bonthius et al., 2004), 21 million in the rat (Korbo et al.,1990), 12.8 billion in the minke whale (Nina Eriksen and Bente Pakkenberg, unpublished data) and 19–23 billion in the human neocortex(Pakkenberg and Gundersen,1997). When compared to these species, one of the most valuable findings in the pig brain is the rather low coefficient of variation (CV) from which the true biological variance can be estimated to be less than 10% (see the additivity of variances below). The low biological variance seems to be a reproducible finding for the pig brain(Holm and West, 1994; Larsen et al., 2004; Jelsing et al., 2005a; Jelsing et al., 2005b) and supports the continued use of pigs in neurotoxicologic studies, since perturbations may be detected with great sensitivity. ### Stereological design Estimates of neocortical cell numbers were obtained using the optical fractionator method, which is efficient and independent of any tissue shrinkage or expansion that may take place during any stage of tissue preparation (Gundersen, 1986; West et al., 1991). It was considered optimal on paraffin sections in which shrinkage during processing is significant. Provided that mounted sections maintain a sufficient depth to accommodate optical disectors, the optical fractionator can also be applied on other preparations, e.g. cryostat or vibratome sections. Sampling parameters were optimized to obtain a high efficiency in terms of precision and effort. A total count of ∼100–150 cells in 75–100 disectors distributed systematically randomly on 5–10 sections is usually enough to obtain an estimate with a precision appropriate for most biological structures (Pakkenberg and Gundersen, 1988; Korbo et al., 1990; West,1993). However, because the variance in the experimental groups appeared to be rather low, it was decided to increase the efficiency of the fractionator by intensifying the sampling of cells to an average of 250 particles per brain in 140–170 disectors. An additional number of sections were sampled from the Göttingen minipig. This was done to ensure enough sections for forthcoming studies of specific glial subpopulations, of which some have a more heterogeneous distribution in the brain. The efficiency of the fractionator sampling was evaluated from variance analysis of relative variances and estimator CE. These two measures are related through the basic equation (the additivity of variances): $\ \mathrm{CV}_{\mathrm{obs}}^{2}=\mathrm{CV}_{\mathrm{biol}}^{2}+\mathrm{CE}^{2}.$ (6) The sampling was considered optimal when the variance of the individual estimate, CE2, was less than half the observed interindividual variance, CV2, because the real inherent biological variability in the cohorts of pigs (CV2biol) then contributes most to the observed relative variance. Subsequently, the estimator CE was optimized in relation to variances from sampling with disectors within a section and sampling between sections (Table 3). Even though the efficiency and mathematical unbiasedness of the optical fractionator method for estimating cells in the pig brain neocortex are evident, some fundamental requirements have to be fulfilled for a proper application (West, 1993). 
A first requirement is that the whole structure is accessible; secondly, one must be confident that all cells of interest can be identified unambiguously and that penetration of staining is complete throughout the thickness of the section. In the present study, a complete penetration of staining was optimized beforehand by registering the z-distribution of all cells. However, difficulty in distinguishing glial cells from neurons appeared when stained with the modified Giemsa method. This was especially true in the neonate brains, where the neuronal density was high and morphometric differences between cells were less pronounced. The lack of clear criteria for distinguishing neurons and glial cells in cortical regions has previously been a major problem for stereologists(Braendgaard et al., 1990; Davanlou and Smith, 2004) and may partly explain the somewhat lower hemispheric cell number published in a screening procedure of young Göttingen minipig(Jelsing et al., 2005a). Recently, a number of immunohistochemical markers have been evaluated in the pig brain (Lyck et al., 2006),and the combination of immunohistochemistry and stereology may provide a better approach for quantifying neurons and glial cell populations in future quantitative studies of the pig brain. The authors would like to thank Skaerbaek Pig Slaughterhouse (Skaerbaek,Denmark) for donating the brains from adult domestic pigs. Thanks to the technical staff at the Research Laboratory for Stereology and Neuroscience,especially Susanne Sørensen and Hans Jørgen Jensen. The study received financial support from the Lundbeck Foundation and the Gangsted Foundation. Bollen, P. and Ellegaard, L. ( 1997 ). The Göttingen minipig in pharmacology and toxicology. Pharmacol. Toxicol. 80 , 3 -4. Bollen, P. J. A., Hansen, A. K. and Rasmussen, H. J.( 2000 ). The Laboratory Swine. Boca Raton:CRC Press. Bonthius, D. J., McKim, R., Koele, L., Harb, H., Karacay, B.,Mahoney, J. and Pantazis, N. J. ( 2004 ). Use of frozen sections to determine neuronal number in the murine hippocampus and neocortex using the optical disector and optical fractionator. Brain Res. Brain Res. Protoc. 14 , 45 -57. Book, S. A. and Bustad, L. K. ( 1974 ). The fetal and neonatal pig in biomedical research. J. Anim. Sci. 38 , 997 -1002. Bourgeois, J. P., Goldman-Rakic, P. S. and Rakic, P.( 1994 ). Synaptogenesis in the prefrontal cortex of rhesus monkeys. Cereb. Cortex 4 , 78 -96. Braendgaard, H., Evans, S. M., Howard, C. V. and Gundersen, H. J. ( 1990 ). The total number of neurons in the human neocortex unbiasedly estimated using optical disectors. J. Microsc. 157 , 285 -304. Damm Jorgensen, K. ( 1998 ). Teratogenic activity of tretionin in the Göttingen minipig. Scand. J. Lab. Anim. Sci. 25 , 235 -243. Davanlou, M. and Smith, D. F. ( 2004 ). Unbiased stereological estimation of different cell types in rat cerebral cortex. Image Anal. Stereol. 23 , 1 -12. Dickerson, J. W. T. and Dobbing, J. ( 1966 ). Prenatal and postnatal growth and development of the central nervous system of the pig. Proc. R. Soc. Lond. B Biol. Sci. 166 , 384 -395. Dobbing, J. ( 1974 ). The later development of the brain and its vulnerability. In Scientific Foundation of Paediatrics (ed. J. A. Davis and J. Dobbing), pp. 565 Dobbing, J. and Sands, J. ( 1973 ). Quantitative growth and development of human brain. Arch. Dis. Child . 48 , 757 -767. Dobbing, J. and Sands, J. ( 1979 ). Comparative aspects of the brain growth spurt. Early Hum. Dev. 3 , 79 -83. Dorph-Petersen, K. A., Nyengaard, J. R. 
and Gundersen, H. J.( 2001 ). Tissue shrinkage and unbiased stereological estimation of particle number and size. J. Microsc. 204 , 232 -246. Douglas, W. R. ( 1972 ). Of pigs and men and research: a review of applications and analogies of the pig, Sus Scrofa, in human medical research. Space Life Sci. 3 , 226 -234. Fang, M., Li, J., Gong, X., Antonio, G., Lee, F., Kwong, W. H.,Wai, S. M. and Yew, D. T. ( 2005 ). Myelination of the pig's brain: a correlated MRI and histological study. Neurosignals 14 , 102 -108. Feinstein, R. E., Westergren, E., Bucht, E., Sjoberg, H. E. and Grimelius, L. ( 1996 ). Estimation of the C-cell numbers in rat thyroid glands using the optical fractionator. J. Histochem. Cytochem. 44 , 997 -1003. Flynn, T. J. ( 1984 ). Developmental changes of myelin-related lipids in brain of miniature swine. Neurochem. Res. 9 , 935 -945. Glodek, P. ( 1986 ). Breeding program and population standards of the Göttingen miniature swine. In Swine in Biomedical Research (ed. M. E. Tumbleson),pp. 23 -28. New York: Plenum Press. Gould, E. and Gross, C. G. ( 2002 ). Neurogenesis in adult mammals: some progress and problems. J. Neurosci. 22 , 619 -623. Gould, E., Reeves, A. J., Graziano, M. S. and Gross, C. G.( 1999 ). Neurogenesis in the neocortex of adult primates. Science 286 , 548 -552. Gould, E., Vail, N., Wagers, M. and Gross, C. G.( 2001 ). Adult-generated hippocampal and neocortical neurons in macaques have a transient existence. 98 , 10910 -10917. Gundersen, H. J. ( 1978 ). Estimators of the number of objects per area unbiased by edge effects. Microsc. Acta 81 , 107 -117. Gundersen, H. J. ( 1986 ). Stereology of arbitrary particles. A review of unbiased number and size estimators and the presentation of some new ones, in memory of William R. Thompson. J. Microsc. 143 , 3 -45. Gundersen, H. J. and Jensen, E. B. ( 1987 ). The efficiency of systematic sampling in stereology and its prediction. J. Microsc. 147 , 229 -263. Gundersen, H. J., Jensen, E. B., Kieu, K. and Nielsen, J.( 1999 ). The efficiency of systematic sampling in stereology– reconsidered. J. Microsc. 193 , 199 -211. Holm, I. E. and West, M. J. ( 1994 ). Hippocampus of the domestic pig: a stereological study of subdivisional volumes and neuron numbers. Hippocampus 4 , 115 -125. Jelsing, J., Olsen, A. K., Cumming, P., Gjedde, A., Hansen, A. K., Arnfred, S., Hemmingsen, R. and Pakkenberg, B.( 2005a ). A volumetric screening procedure for the Göttingen minipig brain. Exp. Brain Res. 162 , 428 -435. Jelsing, J., Rostrup, E., Markenroth, K., Paulson, O. B.,Gundersen, H. J., Hemmingsen, R. and Pakkenberg, B.( 2005b ). Assessment of in vivo MR imaging compared to physical sections in vitro – a quantitative study of brain volumes using stereology. Neuroimage 26 , 57 -65. Jones, N. A., Field, T., Davalos, M. and Hart, S.( 2004 ). Greater right frontal EEG asymmetry and nonemphathic behavior are observed in children prenatally exposed to cocaine. Int. J. Neurosci. 114 , 459 -480. Kjellmer, I., Thordstein, M. and Wennergren, M.( 1992 ). Cerebral function in the growth-retarded fetus and neonate. Biol. Neonate 62 , 265 -270. Korbo, L., Pakkenberg, B., Ladefoged, O., Gundersen, H. J.,Arlien-Soborg, P. and Pakkenberg, H. ( 1990 ). An efficient method for estimating the total number of neurons in rat brain cortex. J. Neurosci. Methods 31 , 93 -100. Larsen, C. C., Larsen, K. B., Bogdanovic, N., Laursen, H.,Græm, N., Samuelsen, G. B. and Pakkenberg, B. (in press). Total number of cells in the human newborn telencephalic wall. 
Neuroscience Larsen, M., Bjarkam, C. R., Ostergaard, K., West, M. J. and Sorensen, J. C. ( 2004 ). The anatomy of the porcine subthalamic nucleus evaluated with immunohistochemistry and design-based stereology. Anat. Embryol. (Berl) . 208 , 239 -247. Lyck, L., Jelsing, J., Jensen, P. S., Lambertsen, K. L.,Pakkenberg, B. and Finsen, B. ( 2006 ). Immunohistochemical visualization of neurons and specific glial cells for stereological application in the porcine neocortex. J. Neurosci. Methods 152 , 229 -242. McClellan, R. O. ( 1968 ). Applications of swine in biomedical research. Lab. Anim. Care 18 , 120 -126. Pakkenberg, B. and Gundersen, H. J. ( 1988 ). Total number of neurons and glial cells in human brain nuclei estimated by the disector and the fractionator. J. Microsc. 150 , 1 -20. Pakkenberg, B. and Gundersen, H. J. ( 1997 ). Neocortical neuron number in humans: effect of sex and age. J. Comp. Neurol. 384 , 312 -320. Pampiglione, G. ( 1971 ). Some aspects of development of cerebral function in mammals. Proc. R. Soc. Med. 64 , 429 -435. Pond, W. G., Boleman, S. L., Fiorotto, M. L., Ho, H., Knabe, D. A., Mersmann, H. J., Savell, J. W. and Su, D. R.( 2000 ). Perinatal ontogeny of brain growth in the domestic pig. Proc. Soc. Exp. Biol. Med. 223 , 102 -108. Rakic, P. ( 1974 ). Neurons in rhesus monkey visual cortex: systematic relation between time of origin and eventual disposition. Science 183 , 425 -427. Rakic, P. ( 1978 ). Neuronal migration and contact guidance in the primate telencephalon. . 54 , 25 -40. Rakic, P. ( 1985a ). DNA synthesis and cell division in the adult primate brain. 457 , 193 -211. Rakic, P. ( 1985b ). Limits of neurogenesis in primates. Science 227 , 1054 -1056. Rakic, P. ( 1988 ). Specification of cerebral cortical areas. Science 241 , 170 -176. Rakic, P. and Sidman, R. L. ( 1968 ). Supravital DNA synthesis in the developing human and mouse brain. J. Neuropathol. Exp. Neurol. 27 , 246 -276. Rodning, C., Beckwith, L. and Howard, J.( 1989 ). Prenatal exposure to drugs: behavioral distortions reflecting CNS impairment? Neurotoxology 10 , 629 -634. Samuelsen, G. B., Larsen, K. B., Bogdanovic, N., Laursen, H.,Graem, N., Larsen, J. F. and Pakkenberg, B. ( 2003 ). The changing number of cells in the human fetal forebrain and its subdivisions: a stereological analysis. Cereb. Cortex 13 , 115 -122. Shankle, W. R., Landing, B. H., Rafii, M. S., Schiano, A., Chen,J. M. and Hara, J. ( 1998 ). Evidence for a postnatal doubling of neuron number in the developing human cerebral cortex between 15 months and 6 years. J. Theor. Biol. 191 , 115 -140. Shankle, W. R., Rafii, M. S., Landing, B. H. and Fallon, J. H. ( 1999 ). Approximate doubling of numbers of neurons in postnatal human cerebral cortex and in 35 specific cytoarchitectural areas from birth to 72 months. Pediatr. Dev. Pathol. 2 , 244 -259. Sidman, R. L. and Rakic, P. ( 1973 ). Neuronal migration, with special reference to developing human brain: a review. Brain Res. 62 , 1 -35. Thibault, K. L. and Margulies, S. S. ( 1998 ). Age-dependent material properties of the porcine cerebrum: effect on pediatric inertial head injury criteria. J. Biomech. 31 , 1119 -1126. West, M. J. ( 1993 ). New stereological methods for counting neurons. Neurobiol. Aging 14 , 275 -285. West, M. J. and Gundersen, H. J. ( 1990 ). Unbiased stereological estimation of the number of neurons in the human hippocampus. J. Comp. Neurol. 296 , 1 -22. West, M. J., Slomianka, L. and Gundersen, H. J.( 1991 ). 
Unbiased stereological estimation of the total number of neurons in the subdivisions of the rat hippocampus using the optical fractionator. Anat. Rec. 231 , 482 -497. West, M. J., Ostergaard, K., Andreassen, O. A. and Finsen,B. ( 1996 ). Estimation of the number of somatostatin neurons in the striatum: an in situ hybridization study using the optical fractionator method. J. Comp. Neurol. 370 , 11 -22. Williams, R. W. and Herrup, K. ( 1988 ). The control of neuron number. Annu. Rev. Neurosci. 11 , 423 -453.
Open Physics, Volume 15, Issue 1

# Optimization of wearable microwave antenna with simplified electromagnetic model of the human body

Łukasz Januszkiewicz (corresponding author), Institute of Electronics, Lodz University of Technology, Wolczanska 211/215 Street, Lodz, Poland
Paolo Di Barba, Department of Electrical, Computer and Biomedical Engineering, University of Pavia, via Ferrata 5, 27100 Pavia, Italy
Sławomir Hausman, Institute of Electronics, Lodz University of Technology, Wolczanska 211/215 Street, Lodz, Poland

Published Online: 2017-12-29 | DOI: https://doi.org/10.1515/phys-2017-0133

## Abstract

In this paper the problem of the optimization design of a microwave wearable antenna is investigated. Reference is made to a specific antenna design, a wideband vee antenna whose geometry is characterized by 6 parameters. These parameters were automatically adjusted with the evolution-strategy-based algorithm EStra to obtain impedance matching of the antenna located in the proximity of the human body. The antenna was designed to operate in the ISM (industrial, scientific, medical) band, which covers the frequency range from 2.4 GHz to 2.5 GHz. The optimization procedure used a full-wave simulator based on the finite-difference time-domain method together with a simplified human body model. The procedure also accounted for the small movements of the antenna towards or away from the human body that are likely to happen during real use. The stability of the antenna parameters irrespective of the movements of the user's body is an important factor in wearable antenna design. The optimization procedure yielded good impedance matching over the given range of antenna distances from the human body.

PACS: 84.40.Ba; 07.05.Tp

## 1 Introduction

Nowadays, optimization design is a commonly used procedure applied to many fields of physics and engineering. Depending on the application, the optimization process may improve the performance of the device or the system in many physical aspects of its operation, such as mechanics, thermodynamics and electromagnetics [1]. A multiphysics approach requires the use of complex and computationally expensive simulation software. Another case considered in automated optimal design is single-domain optimization covering, e.g., the electromagnetic properties of the device [2]. Minimizing an objective function, which depends simultaneously on a number of design variables subject to constraints, is a vital problem in many aspects of electromagnetic design in the search for innovative or improved solutions. It is the task of, e.g., engineers designing the sensors used for detecting high-energy neutrinos from outer space [3], as well as of designers of Micro-Electro-Mechanical Systems [4]. It is also an important problem in the design of wearable antennas, which operate in a complex electromagnetic environment such as the vicinity of the human body.
The evaluation of the objective function in electromagnetic problems requires the use of field simulations, which may be prohibitively time consuming. Due to the limited computational budget, the algorithms that do not require the objective-function derivatives are the first choice of the designer. In this group of algorithms (so called zero-order) there are local-search oriented (e.g. Nelder-Mead or Powell) and global search oriented (e.g. evolution strategy based). Depending on the application and on the choice of the starting point they differ with respect to computational effectiveness and also with the quality of the solution. Therefore, the appropriate selection of the improvement algorithm is important for the designers. The overall computational effectiveness of the multi-iterative optimization algorithm depends strongly on the time needed for electromagnetic simulation of the component or device under optimization, which is multiplied by the number of iterations needed for finding the final solution at convergence. For objects, which are significantly larger than the length of an electromagnetic wave the simulation time with numerical methods is significantly larger than the time needed for simulating small elements. This is the case of a wearable antenna that should be simulated jointly with the model of human body that is located in the proximity of the antenna. For the antennas operating in the microwave band the corresponding wave length does not exceed several decimeters. In this case the body model used for simulations has to be discretized with millimeter resolution, which increases the size of the numerical problem to be solved resulting in high memory requirement and the computation time. In this paper we show how the computational effectiveness of the wearable antenna design optimization can be improved by the exploitation of simplified numerical model of human body. ## 2 Wearable antenna design The wideband vee antenna, which is considered as the case study, is presented in Figure 1. It consists of a thin-layer type metallic radiator, which is placed on a flexible dielectric material (substrate). This is symmetrical antenna and its feeding points are located on the internal edges of the radiator (see Figure 1). The geometry of the radiator is described by means of 6 geometrical parameters that are indicated in Figure 2. Figure 1 The wideband vee antenna Figure 2 Antenna design parameters definition The following parameters control the geometry of the antenna, namely: L – length of the arm, D – smallest distance between the arms, W – width of the arms, R1 – arm curvature radius, R2 – circle radius that ends the antenna arm, A – the displacement of the last linear section of antenna arm. The dimensions of the base material were fixed to B = 120 mm, C = 100 mm and H = 0.1 mm. The effective dielectric constant of base material was assumed to be εr = 1.7 and the dielectric loss tg(δ) = 0.001. This corresponds to DuPont Pyralux® material, which is thin film polymer covered with copper on one side, that can be successfully used for wearable applications thanks to its flexibility and light weight. The antenna is designed to operate in the proximity of the human body. In the case of wearable antenna, we have to model the antenna impedance detuning effect caused by small antenna movements, which makes the human body to antenna distance vary. It is usually caused by the relative movements of clothes that carry the antenna on a human body. 
To achieve the expected improvement of the design, a large number of electromagnetic simulations may be required. It follows from the necessity of calculating the objective function for several antenna-body distance values for each set of geometric parameters generated by the optimization algorithm. Accordingly, to help the computational cost-effectiveness of the optimization procedure, instead of a full size heterogeneous human body model, the simplified numerical model of human body, which is presented in Figure 3, was used [5, 6]. It consists of two coaxially located cylinders of the height equal to H = 300 mm. The inner cylinder has the radius Ri = 102 mm and is made of material with relative dielectric constant εI = 42.94 and conductivity σi = 2.03 S/m. The outer cylinder has following parameters: Ro = 271 mm, o = 3.35, σo = 0.36 S/m. Thanks to its reduced size and lower complexity, the simulation time was reduced from c.a. 10 minutes for a full-size human body model to 1 minute in simplified model (on the computer that utilizes Nvidia Tesla C2070 GPU card). The antenna was placed at different distances towards body model, which was controlled with x parameter, defined in Figure 4. Figure 3 Simplified human body model Figure 4 Wearable antenna placement towards human body model – a cross section ## 3 Experimental procedures: Antenna optimization - analysis and sythesis The antenna, which was the subject of optimization, can be designed to operate either in free space or in proximity of the human body. In Figure 5 the impedance matching of antenna, that was designed to operate in free space, is presented. It is expected to cover the frequency range from 2.4 GHz to 2.5 GHz with the maximum value for VSWR smaller than 1.3. This can be considered a wide bandwidth of impedance matching, which makes it less sensitive to impedance detuning. In the case of the antenna designed for the free space condition, the antenna impedance matching changes significantly, while placed very close to the human body. In Figure 5 the VSWR, obtained for free space and on-body case, is presented. Figure 5 Impedance matching of antenna before optimization for free space and on body position (x = 8mm) The optimization process aimed at finding a set of 6 antenna’s geometrical parameters for which the antenna would obtain the best impedance matching to 50 Ω (lowest value of VSWR) in the frequency range of 2.4 GHz – 2.5 GHz. To calculate the input impedance of the antenna the Remcom XFdtd code that utilizes finite difference time domain method was used [7]. The simulations were controlled by the script that was launched in XFdtd program. For the set of geometrical parameters given by the optimization procedure, the script generated the model of the antenna and launched the simulations of its input impedance vs. frequency. Based on the antenna impedance the objective function was calculated as the maximum value of voltage standing wave ratio - VSWR - within the frequency range of interest. Additionally, 3 different values of antenna distance to the body model (parameter x in Figure 4) equal to 2, 6 and 8 mm were considered and the greatest value of VSWR was taken as the objective function value. 
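To keep the numbers in one place, the sketch below collects the fixed quantities introduced so far — the substrate, the two-cylinder body phantom and the set of antenna-body distances used in the objective function — into plain Python data structures. The grouping and field names are editorial choices for illustration; the numerical values are the ones quoted in the text.

```python
from dataclasses import dataclass

@dataclass
class Cylinder:
    radius_mm: float
    height_mm: float
    eps_r: float      # relative permittivity
    sigma: float      # conductivity (S/m)

# Simplified human-body phantom: two coaxial cylinders
BODY_INNER = Cylinder(radius_mm=102.0, height_mm=300.0, eps_r=42.94, sigma=2.03)
BODY_OUTER = Cylinder(radius_mm=271.0, height_mm=300.0, eps_r=3.35, sigma=0.36)

# Flexible substrate of the vee antenna (Pyralux-type film)
SUBSTRATE = {"B_mm": 120.0, "C_mm": 100.0, "H_mm": 0.1, "eps_r": 1.7, "tan_delta": 0.001}

# Antenna-body distances x (mm) over which the worst-case VSWR is evaluated
X_DISTANCES_MM = (2.0, 6.0, 8.0)

# Free design variables of the radiator geometry: g = (L, D, W, R1, R2, A), all in mm
DESIGN_VARIABLE_NAMES = ("L", "D", "W", "R1", "R2", "A")
```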
The design problem formulation can be cast as follows: given an initial solution (antenna prototype), find the minimizer with respect to design vector g = (L,D,W,R1,R2,A) of the following objective function Ψ (1) should be found: $Ψg=supx∈Ωxsupf∈BVSWR(g,x,f)$(1) where: Ωx – the set of admissible x movements of antenna B – the ISM f frequency band: 2.4 – 2.5 GHz Therefore, (1) originates a double min-max problem: in principle, the solution to such class of problems might be non-smooth; therefore, derivative-free optimization algorithms, like evolutionary ones, is recommended. Coherently, the EStra optimization algorithm – an evolution strategy – was used. A version of the lowest order (i.e. a single parent generates a single offspring), which makes the search cost-effective, was chosen. A detailed analysis of the algorithm can be found e.g. in Ref. [2], while applications in electromagnetic simulations are presented e.g. in [8, 9]. The minimum value of objective function equal to fmin = 1.95 (that corresponds to the maximum value of VSWR) was achieved in 53 iterations. The values of the parameters found by the algorithm had following: L = 74.5 mm, D = 2.6 mm, W = 2.5 mm, R1 = 30.9 mm, R2 = 23.7 mm, A = 6.4 mm. The antenna impedance matching simulations for the antenna geometry achieved after the optimization process are presented in Figure 6: for antenna distance to body equal to 2 mm – position 1, and 8 mm – position 3. Figure 6 Impedance matching of antenna after optimization with EStra algorythm ## 4.1 Comparative assesment of optimization results The optimization process of the antenna was subsequently executed with Nelder-Mead simplex direct search algorithm, which was implemented in Matlab function “fminsearch”. This algorithm is very popular for local optimum search and was successfully applied for antenna design improvement [10, 11]. The initial set of design variables was the same as for the case of optimization procedure that used EStra algorithm. The minimum value of objective function found by Nelder-Mead algorithm was equal to fmin = 1.85 (that corresponds to the maximum value of VSWR) was achieved in 176 iterations. The values of the parameters found by the algorithm were: L = 69.3 mm, D = 1.54 mm, W = 7 mm, R1 =48.9 mm, R2 = 24 mm, A = 6 mm. For the sake of another comparison, the optimization process of the antenna was subsequently executed with Powell conjugate-direction algorithm. Also, this algorithm is very popular for local optimum search and it can be compared with Nelder-Mead because both of them do not use derivative to find the minimum of objective function and therefore they are methodologically equivalent. The initial set of design variables was the same as for the case of optimization procedure that used EStra and Nelder-Mead algorithm. Also the search tolerance was the same for all three algorithms used, and equal to 0.001. The minimum value of objective function found by Powell algorithm was equal to fmin = 1.98 (that corresponds to the maximum value of VSWR) was achieved in 150 calls to the objective function (electromagnetic simulations with XFdtd). The values of the parameters found by the algorithm were: L = 70.1 mm, D = 2.6 mm, W = 5.8 mm, R1 = 38.4 mm, R2 = 22.5 mm, A = 5.4 mm. At the first glance, the computational budget required by Powell algorithm is similar to N-M algorithm, but the final value of objective function is greater than both obtained with EStra and N-M algorithms. 
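Equation (1) is a worst-case (min–max) objective: for a candidate geometry g, take the largest VSWR over the admissible body distances and over the ISM band, and minimize that number over g. The sketch below shows one way to wire such an objective to a derivative-free optimizer. The simulator call is a dummy surrogate (the study drives Remcom XFdtd through a script, which is not reproduced here), and SciPy's Nelder–Mead is used only as a readily available stand-in for the EStra and fminsearch runs reported above.

```python
import numpy as np
from scipy.optimize import minimize

X_DISTANCES_MM = (2.0, 6.0, 8.0)           # antenna-body distances considered in the objective
FREQS_GHZ = np.linspace(2.4, 2.5, 11)      # sampling of the ISM band

def vswr_surrogate(g, x_mm, f_ghz):
    """Dummy stand-in for the full-wave simulation: returns a VSWR-like value for a
    design vector g = (L, D, W, R1, R2, A) at body distance x_mm and frequency f_ghz."""
    L, D, W, R1, R2, A = g
    detuning = 10.0 * abs(f_ghz - 2.45) + 1.0 / (x_mm + 1.0)      # arbitrary, illustrative shape
    mismatch = 0.02 * abs(L - 72.0) + 0.1 * abs(D - 2.5) + 0.02 * abs(W - 4.0)
    return 1.0 + detuning + mismatch

def objective(g):
    # Equation (1): worst-case VSWR over admissible movements x and the 2.4-2.5 GHz band
    return max(vswr_surrogate(g, x, f) for x in X_DISTANCES_MM for f in FREQS_GHZ)

g0 = np.array([70.0, 3.0, 5.0, 40.0, 20.0, 5.0])     # illustrative starting geometry (mm)
res = minimize(objective, g0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-3, "maxfev": 300})
print(res.x.round(1), round(res.fun, 3))
```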
The comparative analysis of three sets of the results assessed the validity of the solution obtained with EStra with lowest computational afford. The impedance matching of antennas optimized with EStra and Nelder-Mead algorithm are presented in Figure 7 for the on body position no.1 (parameter x = 2 mm). Figure 7 Impedance matching of antenna after optimization with EStra algorythm and Nelder-Mead algorithm, on body position nr 1 (parameter x = 2 mm) ## 4.2 Assessment of the optimal design The performance of the antenna that was optimized with EStra algorithm and simplified cylindrical model of the human body was assessed with computer simulations that were performed with the Hershey model as the reference. It is the full scale heterogeneous model available in Remcom XFdtd program [7]. It maps the geometry of adult male and represents the internal structure of the body with 39 different tissues with dielectric properties that correspond to the human body [12]. In Figure 8 the numerical model consisting of the antenna placed towards heterogeneous human body model is presented. The antenna orientation is preserved from the simplified model, where the central axis of cylinders is z axis of the human body model. The distance of antenna to the body surface (x parameter) varied from 2 mm to 20 mm. The computer simulations were carried out to simulate the antenna radiation pattern and impedance matching for various x parameter values. Figure 8 Antenna placed towards heterogeneus human body model The antenna impedance matching to 50 Ω is presented in Figure 9. The antenna exhibits good impedance matching to 50 Ω in frequency range of 2.4 GHz up to 2.5 GHz for all considered distances of the antenna to the body, from 2 mm to 20 mm. The VSWR value for the considered ranges of antenna distances is always below 1.75, which is less than the minimum value of objective function. In Figure 10 the gain of the antenna in the horizontal plane (gain versus ϕ angle defined in Figure 8) is presented. The plot is normalized to the maximum value of gain equal to Gmax = 7 dBi. The maximum gain of the antenna changes depending on the antenna to the body distance, from Gmax = 4.4 in x = 2 mm position to Gmax = 7 dBi in x = 20 mm position. Figure 9 Impedance matching of the antenna optimized with EStra algorythm simulated with heterogeneus human body model, for different antenna to body distances (x parameter) Figure 10 Antenna radiation patterrn G(ϕ) in horizontal plane simulated with heterogeneus model for different antenna to body distances (x parameter), normalized to maximum gain Gmax = 7dBi The simulation time for one case of antenna placement towards the human body was approximately 11 minutes on the computer that utilizes Nvidia Tesla C2070 GPU card. The GPU memory used was 1.5 GB. ## 5 Conclusions Design of wearable antennas, which may change their position with respect to the human body during normal usage, is in general computationally costly, because the antenna performance has to be simulated for different distances from the body. Moreover, the number of design variables for the antenna type under consideration and the interdependence between them makes a trial-and-error design method ineffective. This process can be significantly improved with an automated optimization algorithm. This approach was investigated in the presented study. 
The optimization procedure presented above uses full wave simulator based on finite difference time domain method to obtain the objective function value based on antenna impedance matching calculated for several values of the antenna-to-body distance. We use a simplified model of the human body to simulate its influence on the antenna impedance. Thanks to the relatively simple structure of the body model and consequently a reduced computational cost, it was possible to use it in multiple simulations in our multi-iterative optimization procedure. The time needed for one simulation was reduced from 11 minutes, that were required in the case of full-scale heterogeneous model, to 1 minute in the case of the simplified cylindrical model. EStra, which is an evolution strategy based algorithm, proved to be effective for our design problem and allowed to improve antenna design with respect to its impedance matching. The objective function definition that covered both: antenna bandwidth and different antenna distances towards the body, is relatively simple to be derived in optimization process and at the same time is effective for design improvement. To show the numerical effectiveness of EStra algorithm it was compared to the popular Nelder-Mead algorithm. The evolution-strategy based algorithm needed only 53 calls to the full wave simulator, while the simplex algorithm needed 176 calls to the simulator with comparable convergence accuracy. The solutions of the design problem found by both algorithms had very similar performance (see Figure 7). The minimum value of objective function (that corresponds to the maximum value of VSWR over the assumed frequency band and all antenna-to-body distance values) was equal to fmin = 1.95 in the solution given by EStra, and fmin = 1.85 in the solution given by Nelder-Mead algorithm. With approximately 3 times less calls to the simulator, EStra algorithm has shown its computational effectiveness in our antenna design study. ## References • [1] Di Barba P., Dolezel I., Karban P., Kus P., Mach F., Mognaschi M.E., Savini A., Multiphysics field analysis and multiobjective design optimization: A benchmark problem, Inverse Problems in Science and Engineering, 2014, 22 (7), 1214-1225. • [2] • [3] Gorham P. et al., The Antarctic Impulsive Transient Antenna Ultra-high Energy Neutrino Detector Design, Performance, and Sensitivity for 2006-2007 Balloon Flight, Astropart.Phys., 2009,32, 10-41. • [4] Di Barba P., Mognaschi M.E., Savini A., Wiak S., Island biogeography as a paradigm for MEMS optimal design, International Journal of Applied Electromagnetics and Mechanics, 2016, 51, 97-105. • [5] Januszkiewicz Ł., Di Barba P., Hausman S., Automated identification of human-body model parameters, International Journal of Applied Electromagnetics and Mechanics, 2016, 51, 41-47. • [6] Januszkiewicz Ł., Hausman S., Simplified human phantoms for narrowband and ultra-wideband body area network modelling, COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, 2015, 34, 439-447. • [7] XFDTD 7.3 Reference Manual, Remcom Inc., State College, PA USA. Google Scholar • [8] Di Barba P., Mognaschi M.E., Palka R., Savini A., Optimization of the MIT Field Exciter by a Multiobjective Design, Magnetics, IEEE Transactions on, 2009, 45, 1530–1533. Google Scholar • [9] Januszkiewicz Ł., Di Barba P., Hausman S., Field-Based Optimal Placement of Antennas for Body-Worn Wireless Sensors, Sensors, 2016, 16, 713. 
• [10] Baharlouei A., Abolhassani B., Oraizi H., A New Smart Antenna for CDMA using Nelder-Mead Simplex Algorithm and ESPAR Antenna, 2006 2nd International Conference on Information & Communication Technologies, Damascus, 2748-2753.
• [11] Sokol V., Optimization Techniques in CST STUDIO SUITE, 2014, CST, www.cst.com
• [12] Hasgall P., Neufled E., Gosselin M., Linkgenbock A., Kuster N., IT'IS Database for thermal and electromagnetic parameters of biological tissues, 2013, http://www.itis.ethz.ch/database

Accepted: 2017-11-12 Published Online: 2017-12-29
Citation Information: Open Physics, Volume 15, Issue 1, Pages 1055–1060, ISSN (Online) 2391-5471
Probing a quantum gas with single Rydberg atoms

Huan Nguyen, Tara Cubel Liebisch, Michael Schlagmuller, Graham Lochead, Karl M. Westphal, Robert J. Low, Sebastian Hofferberth and Tilman Pfau (2015)

We present a novel spectroscopic method for probing the in situ density of quantum gases. We exploit the density-dependent energy shift of highly excited Rydberg states, which is of the order of $10\,\mathrm{MHz}/10^{14}\,\mathrm{cm}^{-3}$ for rubidium for triplet s-wave scattering. The energy shift combined with a density gradient can be used to localize Rydberg atoms in density shells with a spatial resolution less than optical wavelengths, as demonstrated by scanning the excitation laser spatially…
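Taken at face value, the quoted shift coefficient lets one translate a measured Rydberg line shift into a local density estimate. The sketch below does this inversion for an assumed measured shift; the 10 MHz per 10^14 cm^-3 coefficient is the figure quoted in the abstract, while the example shift value is invented for illustration.

```python
# Convert a measured density-dependent Rydberg line shift into a local density,
# assuming the linear coefficient quoted in the abstract.
SHIFT_COEFF_MHZ_PER_1E14_CM3 = 10.0      # ~10 MHz per 1e14 atoms/cm^3 (rubidium, triplet s-wave)

def density_from_shift(shift_mhz: float) -> float:
    """Return the inferred local density in atoms/cm^3 for a measured shift in MHz."""
    return (shift_mhz / SHIFT_COEFF_MHZ_PER_1E14_CM3) * 1e14

print(f"{density_from_shift(3.5):.2e} atoms/cm^3")   # illustrative 3.5 MHz shift -> 3.5e13 cm^-3
```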
## Monday, August 17, 2015 How is it that the muonic hydrogen radius works perfectly in this equation in this post: ??? Six (6) fundamental constants related in ONE equation. Bazinga! Can you beat that? It is also a new way to calculate $\pi:$ $$\pi={{\alpha^2 m_e}\over{r_pR_Hm_p}}={{\alpha^2 m_e}\over{4\ell m_{\ell}R_H}}$$ $\alpha=fine\;structure\;constant$ $m_e=mass\;of\;electron$ $m_p=mass\;of\;proton$ $r_p=2010\;and\;2013\;muonic\;hydrogen\;proton\;radius\;(Haramein's\;Equation)$ $R_H=Rydberg\;constant$ $m_pr_p=4\ell m_{\ell}\;(Haramein's\;Equation)$ $\ell=Planck\;Length$ $m_{\ell}=Planck\;Mass$ Google Calculator Link for new $\pi\;equation:$ Google Calculator link for $\pi$ More later, The Surfer
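Since the post leans on a calculator link that is not reproduced here, a quick numerical check of the claimed identity is easy to script. The sketch below plugs in CODATA-style values for the constants together with the 2010/2013 muonic-hydrogen proton radius (0.84087 fm); it only verifies the arithmetic of the posted relation, and the constant values are my assumed inputs.

```python
import math

alpha = 7.2973525693e-3        # fine-structure constant
m_e   = 9.1093837015e-31       # electron mass (kg)
m_p   = 1.67262192369e-27      # proton mass (kg)
R_H   = 1.0973731568160e7      # Rydberg constant (1/m)
r_p   = 0.84087e-15            # muonic-hydrogen proton radius (m)

value = alpha**2 * m_e / (r_p * R_H * m_p)
print(value, math.pi, value / math.pi)   # ~3.143 vs 3.1416 -> agreement to about 0.04%
```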
# Continuum Hypothesis is true? 1. Aug 2, 2011 2. Aug 2, 2011 ### micromass Woodin is a very good set theorist, so he probably did something quite interesting. But the article doesn't really tell me what it is that Woodin did. I get the impression that he built another constructible universe $\mathbb{L}$ which seems to encompass a lot of current mathematics. This wouldn't solve the continuum hypothesis of course, the continuum hypothesis has been proved unsolvable. I'm really interested in reading a more advanced article on the matter, to see what it's all about. 3. Aug 2, 2011 ### spamiam Thanks for the great article, mathman! Personally, I've always found the undecidability of certain statements to be an unsatisfying answer, so maybe this idea can change that. 4. Aug 2, 2011 ### praeclarum In ZFC set theory. My personal reaction - it seems like you could create a set theory universe in which the continuum hypothesis is true, false, or undecideable. But does this say anything really over the truth of the continuum hypothesis itself? Perhaps a larger question - besides the issue of consistency, how do you know which set theory is "right"? 5. Aug 2, 2011 ### SteveL27 Now you're asking if math is Platonic. And the last time somebody asked that, the thread got carted off to the Philosophy section. By the way, if anyone's unfamiliar with Freiling's axiom of symmetry, it's an easily understandable and intuitively plausible axiom that makes CH false. http://en.wikipedia.org/wiki/Freiling's_axiom_of_symmetry The statement of the axiom is easy to understand; as is the proof that the axiom implies the negation of CH. It's an interesting example, and far more understandable than Woodin's work is ever going to be to most of us (speaking for myself here.) CH will always be provable in some axiom systems and its negation provable in others. The goal is to find an intuitively appealing set of axioms that settles the issue. I would be quite surprised if Woodin's framework is intuitively appealing to anyone outside of specialists in set theory. Here is an article about Woodin's Ultimate L. It's very technical and presumes a background in advanced set theory. http://caicedoteaching.wordpress.com/2010/10/19/luminy-hugh-woodin-ultimate-l-i/ Wikipedia has nothing on Ultimate L yet ... now that's an article I'd like to read! Last edited: Aug 2, 2011
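For reference, the two statements being discussed can be written out compactly (this is my paraphrase of the standard formulations, not text from the thread or the linked article):

```latex
% Continuum hypothesis: no cardinality lies strictly between |N| and |R|
\mathrm{CH}:\quad \neg\exists\, S\subseteq\mathbb{R}\;\bigl(\aleph_0 < |S| < 2^{\aleph_0}\bigr)

% Freiling's axiom of symmetry: for every assignment of a countable set f(x) to each x in [0,1],
% some pair x, y is "symmetric": y is not in f(x) and x is not in f(y).
% Over ZFC, AX is equivalent to the negation of CH.
\mathrm{AX}:\quad \forall f\colon[0,1]\to[\,[0,1]\,]^{\leq\aleph_0}\;\;\exists\, x,y\in[0,1]\;\bigl(y\notin f(x)\,\wedge\,x\notin f(y)\bigr)
```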
Standard: M04.D-M.2.1.2
Description: Solve problems involving addition and subtraction of fractions by using information presented in line plots (line plots must be labeled with common denominators, such as 1/4, 2/4, 3/4).
4th Grade Math - Data Displays & Analysis Lesson
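A worked example of the kind of problem this standard describes (numbers invented for illustration): if a line plot of ribbon lengths, marked in fourths of a yard, shows the shortest ribbon at 1/4 yard and the longest at 3/4 yard, then together they measure

$$\frac{1}{4}+\frac{3}{4}=\frac{4}{4}=1\ \text{yard}.$$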
## Main Surveillance and detection aim to rapidly identify and isolate cases to prevent onward transmission of SARS-CoV-2 in the community and to avoid a substantial resurgence of cases of COVID-19. After an initial period—during which, because of a limited capacity, testing for SARS-CoV-2 infections mainly focused on severely ill patients—a new testing policy was implemented in France to systematically screen for potential infections with SARS-CoV-2 and enable lifting of the lockdown restrictions on 11 May 20208. The specific characteristics of COVID-19, however, hinder the identification of cases9,10,11. Large proportions of asymptomatic infectious individuals12, and the presence of mild or paucisymptomatic infections that easily go unobserved9,11, present serious challenges to the detection and control of SARS-CoV-29,10,13. Missing a substantial portion of infectious individuals compromises the control effort, enabling the virus to silently spread10,11,12. Synthesizing evidence from virological3 and participatory syndromic surveillance4 with mathematical models2,14 that account for behavioural data15,16,17,18, we assessed the performance of the new testing policy in France and identified its main limitations for actionable improvements. ## COVID-19 surveillance Management of the COVID-19 pandemic in France after lockdown in spring (May–June) 2020 involved the generation of a centralized database that collected all data on virological testing (SI-DEP3, the information system for testing). All individuals with symptoms that were compatible with COVID-1919 were invited to consult their general practitioner and obtain a prescription for a virological test8. Contacts of confirmed cases were traced and tested. A total of 20,777 virologically confirmed cases were notified from 13 May (week 20) to 28 June (week 26) in mainland France. These cases included individuals with or without symptoms at the time of testing who tested positive for SARS-CoV-2 or individuals who tested positive for SARS-CoV-2 for whom information on clinical status at the time of testing was missing (Extended Data Fig. 1). Accounting for presymptomatic individuals among those presenting with no symptoms at the time of testing and after imputation of missing data (Methods), an estimated 16,165 (95% confidence interval, 16,101–16,261) symptomatic cases were tested in the study period (Fig. 1a). The average delay from symptom onset to testing decreased from 12.5 days in week 20 to 2.8 days in week 26 (Fig. 1b and Extended Data Fig. 1). Accounting for this delay (Methods and Extended Data Fig. 2), we estimated that 14,061 (13,972–14,156) virologically confirmed symptomatic cases had an onset of symptoms in the study period, showing a decreasing trend over time (2,493 in week 20, 1,647 in week 26). The test positivity rate decreased in the first weeks and stabilized at around 1.2% (mean over weeks 24–26). A digital participatory system was additionally considered for COVID-19 syndromic surveillance in the general population20, including those who did not consult a doctor. Called COVIDnet.fr, it was adapted from the platform GrippeNet.fr (which is dedicated to the surveillance of influenza-like illnesses4) to respond to the COVID-19 health crisis in early 2020. It is based on a set of volunteers who weekly self-declare their symptoms, along with sociodemographic information. 
On the basis of symptoms declared by an average of 7,500 participants each week, the estimated incidence of suspected cases of COVID-1919 decreased from about 1% to 0.8% over time (Fig. 1c). Of 524 suspected cases, 162 (31%) consulted a doctor in the study period. Among them, 89 (55%) received a prescription for a test, resulting in the screening of 50 individuals (56% of those given the prescription) (Fig. 1d). ## COVID-19 pandemic trajectories and detection rates We used stochastic discrete age-stratified epidemic models2,14 based on demography, age profile21 and social contact data15 of the 12 regions of mainland France to account for age-specific contact activity and role in COVID-19 transmission. Disease progression is specific to COVID-192,14 and parameterized using the current knowledge to include presymptomatic transmission22, and asymptomatic12 and symptomatic infections with different degrees of severity9,11,23,24. The model was shown to capture the transmission dynamics of the pandemic in Île-de-France in the first wave and was used to assess the effect of lockdown and exit strategies2,14. Full details are reported in the Methods. Intervention measures were modelled as mechanistic modifications of the contact matrices, accounting for a reduction in the number of contacts engaged in specific settings, and were informed from empirical data. Lockdown data were obtained from previously published studies2,14. The exit phase was modelled considering region-specific data of school attendance based on the data from the Ministry of Education16, partial presence at workplaces based on estimates from location history data of mobile phones17 (Fig. 1e), a reduction in the adoption of physical distancing over time and the increased risk aversion of older individuals based on survey data18 (Fig. 1f), and the partial reopening of activities. A sensitivity analysis was performed on the reopening of activities, as data were missing for an accurate parameterization of associated contacts. Testing and isolation of detected cases were implemented by considering a 90% reduction in contacts for the virologically confirmed cases of COVID-192,14. Region-specific models were fitted to regional hospital admission data (Fig. 2) using a maximum likelihood approach. Further details are reported in the Methods and Supplementary Information. The projected number of cases decreased over time in all regions, in agreement with the decreasing tendency reported in hospital admissions during the study period (Fig. 2 and Extended Data Fig. 3). Overall, 103,907 (95% confidence interval, 90,216–116,377) new symptomatic infections were predicted in mainland France in weeks 20–26 (from 35,704 (30,290–40,748) in week 20 to 4,319 (3,773–4,760) in week 26). Île-de-France was the region with the largest predicted number of symptomatic cases (from 12,427 (8,104–14,136) to 1,704 (1,258–2,004) from week 20 to week 26), followed by Grand Est and Hauts-de-France (Table 1 and Extended Data Table 1). Projections were substantially higher than the number of virologically confirmed cases (Figs. 2, 3). The estimated detection rate for symptomatic infections in mainland France in the period of weeks 20–26 was 14% (12–16%), suggesting that about 9 out of 10 new cases with symptoms were not identified by the surveillance system. A lower detection rate was found for asymptomatic infections (Extended Data Fig. 5). The estimated detection rate increased over time (7% (6–8%) in week 20, 38% (35–44%) in week 26) (Table 1). 
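The headline detection rates can be recovered directly from the case counts quoted above, as the ratio of virologically confirmed symptomatic cases (by onset date) to the model-projected symptomatic infections. A quick check, using the central values only and ignoring the confidence intervals:

```python
# Confirmed symptomatic cases by onset week (SI-DEP) vs model-projected symptomatic infections
confirmed = {"week 20": 2_493, "week 26": 1_647, "weeks 20-26": 14_061}
projected = {"week 20": 35_704, "week 26": 4_319, "weeks 20-26": 103_907}

for period in confirmed:
    rate = confirmed[period] / projected[period]
    print(f"{period}: detection rate ~{rate:.0%}")
# week 20: ~7%, week 26: ~38%, weeks 20-26: ~14%, matching the central estimates in the text
```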
By the end of June, five regions had a median detection rate above 50%, and six regions had a detection rate within the confidence interval of model projections (Fig. 3b–d). All regions except Brittany displayed average increasing trends in the estimated detection rate in June compared with May. We did not find any significant associations between the detection rate and the number of detected cases, or the test positivity rate (Extended Data Fig. 4). However, the detection rate was negatively associated with model-predicted incidence (Spearman correlation, r = −0.75, P < 10−15) (Fig. 3f). In addition, the data followed a power-law function, π = 66 × i−0.51, where π is the weekly detection rate of symptomatic cases (expressed as a percentage) and i the projected weekly incidence (number of cases per 100,000). This function quantifies the relationship between the detection capacity of the test–trace–isolate system and the circulation of the virus in the population. It clearly shows that the detection capacity rapidly decreases as the incidence of COVID-19 increases. Validation of the model was performed in two ways. First, we compared our model projections of the percentage of the population infected with the results of three independent seroprevalence studies performed after the first wave in France7,25,26 (Methods). Modelling results are in agreement with serological estimates at the national and regional level (Fig. 3e and Extended Data Fig. 6). Second, we compared the projected incidence of symptomatic cases of COVID-19 in week 26 (6.69 (5.84–7.37) cases per 100,000) with the value obtained from the number of virologically confirmed cases (2.55 (2.48–2.61) cases per 100,000) and two estimates based on COVIDnet.fr data (Fig. 3g). The first estimate applies the measured test positivity rate to the incidence of self-reported suspected cases of COVID-19 (estimate 1, which yielded 8.6 (95% confidence interval, 6.2–11.5) cases per 100,000); the second additionally assumes that only 55% would be confirmed as a suspected case by a physician and prescribed a test (according to COVIDnet.fr data; estimate 2, which yielded 4.7 (3.4–6.3) cases per 100,000). Our projections are in line with plausible estimates from COVIDnet.fr, and suggest that, on average, at least 80% of suspected cases should be tested to reach the predicted incidence. Sensitivity analysis showed that the findings were robust to elements of the contact matrices that could not be informed by empirical data (Supplementary Figs. 8, 9). Furthermore, a model selection analysis showed that changes in contact patterns over time due to restrictions and the activities of individuals of different age classes after lockdown (for example, partial attendance at school and remote working) are needed to accurately capture the transmission dynamics (Supplementary Table 2 and Supplementary Fig. 5). ## Discussion Despite a test positivity rate in mainland France well below the recommendations (5%) of the WHO5, a substantial proportion of symptomatic cases (9 out of 10) remained undetected in the first 7 weeks after lockdown. Low detection rates in mid-May were in line with estimates for the same period from a seroprevalence study in Switzerland27. Surveillance improved substantially over time, leading to half of the French regions reporting numbers of cases that were compatible with model projections. 
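The power law fitted above, π = 66 × i^−0.51, is simple enough to evaluate directly, and doing so also makes the later statement about the 66% detection threshold easy to check. The sketch below evaluates the fitted relation and recomputes the two COVIDnet-based incidence extrapolations from the rounded figures quoted in the text; the published central values (8.6 and 4.7 per 100,000) used the exact weekly inputs, so the recomputed numbers land close to but not exactly on them.

```python
def detection_rate_pct(weekly_incidence_per_100k: float) -> float:
    """Fitted relation from the study: pi = 66 * i**(-0.51), pi in %, i per 100,000."""
    return 66.0 * weekly_incidence_per_100k ** (-0.51)

for i in (1, 5, 25, 50):
    print(f"incidence {i:>3}/100k -> detection ~{detection_rate_pct(i):.0f}%")
# at 1 case per 100,000 the relation gives ~66%, i.e. two out of three cases detected

# COVIDnet-based incidence extrapolations for week 26 (per 100,000), rounded inputs
suspected_per_100k = 0.008 * 100_000      # ~0.8% of participants reporting a suspected case
positivity = 0.012                        # ~1.2% test positivity (weeks 24-26 average)
estimate_1 = suspected_per_100k * positivity     # all suspected cases assumed tested
estimate_2 = estimate_1 * 0.55                   # only 55% prescribed a test after consulting
print(round(estimate_1, 1), round(estimate_2, 1))  # ~9.6 and ~5.3, close to the published 8.6 and 4.7
```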
The framework progressively strengthened with increasing resources over time, as shown by a more-rapid detection of cases (78% reduction in the average delay from symptom onset to testing from May to June). At the same time, the system benefited from a substantial and concurrent decrease in epidemic activity in all regions. Despite this positive trend, our findings highlight structural limitations and a critical need for improvement. Some areas remained with limited diagnostic exhaustiveness. This is particularly concerning in those regions that were predicted to have large numbers of weekly infections (Île-de-France, in which only one out of three symptomatic cases was detected by the end of June, and Grand Est, in which one out of five was detected). Almost all patients (92%) who were clinically diagnosed by sentinel general practitioners as suspected cases of COVID-19 were prescribed a test20. However, only 31% of individuals with COVID-19-like symptoms consulted a doctor according to participatory surveillance data. Overall, these figures suggest that a large number of symptomatic cases of COVID-19 were not screened because they did not seek medical advice despite the recommendations. This was confirmed by serological studies. In France, only 48% of symptomatic participants with antibodies against SARS-CoV-2 reported consulting a general practitioner7; in Spain, between 16% and 20% of individuals with antibodies against SARS-CoV-2 reported a previous virological screening6. By combining estimates from virological and participatory surveillance data, we extrapolated an incidence rate from crowdsourced data that is compatible with model projections, under the hypothesis that the large majority of suspected cases would get tested (>80%). This finding further supports testing of all suspected cases of COVID-19. Large-scale communication campaigns should reinforce recommendations to raise awareness in the population and strongly encourage healthcare-seeking behaviour especially in patients with mild symptoms. At the same time, investigations to identify reasons for not consulting a doctor could be quickly performed through the participatory surveillance system. Red tape might have contributed to low testing rates. Prescription of a test was deemed compulsory in the new testing policy to prevent misuse of diagnostic resources8; however, this involved consultation, prescription and a laboratory appointment, which may have discouraged mildly affected individuals who do not require medical assistance. To facilitate access, testing should not require a prescription, as later established by authorities28. Some local initiatives emerged over summer that increased the number of drive-through testing facilities, promoted massive screening in certain areas and offered mobile testing facilities to increase proximity to the population29. The use of antigen tests will further facilitate access. These initiatives are particularly relevant to counteract socioeconomic inequalities in access to care in populations that are vulnerable to COVID-1930,31. However, such strategies should not hinder a testing protocol that targets suspected index cases. Our results show that high testing efforts, measured by low test positivity rates, are not associated with high rates of detection. This was also observed in the UK during the first wave, when detection remained low despite large numbers of tests and a low positivity rate32. 
Without strong case-based surveillance, the risk is to disperse resources towards random individuals without symptoms who are unlikely to be positive. This could saturate the test–trace–isolate system, as observed during summer33, without slowing down the circulation of SARS-CoV-2 that is required to safeguard the hospital system. Given presymptomatic transmission, notification of contacts should be almost immediate to enable the effective interruption of transmission chains22. For testing to be an actionable tool to control the transmission of SARS-CoV-2, delays should be suppressed and screening rates greatly increased but better targeted. Over May–June, the average weekly number of tests was 250,000—remaining well below the objective that was originally set by authorities (700,000 tests). The number of tests increased over summer, but proportionally to the increased circulation of the virus. The capacity of detection of the test–trace–isolate system scaled as the inverse of the square root of the incidence, already deteriorating rapidly at low incidence levels. More aggressive testing that targets suspected index cases should be performed at low viral circulation to avoid case resurgence. The system was predicted to be able to detect more than two out of three cases (rate >66%) only if the incidence was lower than one symptomatic case per 100,000, a figure that is 50 times smaller than estimated at the exit from lockdown. As detection of at least 50% of cases is needed to control the pandemic while avoiding strict social distancing2, these results indicate that the system was insufficient to perform comprehensive case-based surveillance, as has been recommended when aiming to phase out restrictions5. Current restrictions applied in Europe to curb the second wave offer a second opportunity to improve testing policies and support the lifting of these measures in the upcoming weeks. Failing to do so may lead to a rapid and uncontrolled increase in the number of cases of COVID-192,34. Such risk is even stronger in the winter season and with the existing fatigue with regard to adhesion to the restrictions18. Models were region-based and did not consider a possible coupling between regional epidemics caused by mobility. This choice was supported by stringent movement restrictions during lockdown30, and by the limited mobility increase in May–June, before important inter-regional displacements took place at the start of the summer holidays in July. Foreign importations of the virus35 were neglected as France reopened its borders with EU member states on 15 June, and the Schengen area remained closed until July. The COVIDnet.fr cohort is not representative of the general population; however, a previous study on influenza-like illnesses has shown that the adjusted incidence was in good agreement with sentinel estimates4. Underdetection may also continue because of the imperfect characteristics of the reverse-transcription PCR tests used to identify infections of SARS-CoV-236. Some cases tested for SARS-CoV-2 could have had false-negative results, for example, because they were tested too early after the infection, thus further increasing the rate of underdetection. Previous work assessed the rate of underdetection in 210 countries32, but this study mainly focused on the early global dynamics. 
Our model gives up geographical extent for higher data quality in a specific country, providing a synthesis of data sources that characterizes human behaviour over time and space together with virological and participatory surveillance data to identify the weak links in the pandemic response. Our findings identify critical needs for the improvement of the test–trace–isolate response system to control the COVID-19 pandemic. Substantially more aggressive and efficient testing that targets suspected cases of COVID-19 needs to be achieved to act as a way to control the COVID-19 pandemic. Associated communication and logistical needs should not be underestimated. These elements should be considered to enable the lifting of restrictive measures that are currently used to curb the second wave of COVID-19 in Europe. ## Methods No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. ### Virological surveillance data The centralized database SI-DEP for virological surveillance3 collects all tests performed in France for any reason. In the period under study, guidelines recommended individuals to consult a general practitioner at the first sign of COVID-19-like symptoms and to obtain a prescription for a virological test (a prescription was compulsory to access the test)8. In addition, routine testing was performed for patients admitted to the hospital with any diagnosis, healthcare personnel and individuals at other facilities (for example, in some care homes for older people or long-term healthcare facilities). Data include detailed information for the individuals tested in France, including (1) the date of the test; (2) the result of the test (positive or negative); (3) location (region); (4) the absence or presence of symptoms at the time of testing; (5) self-declared delay between onset and test in presence of symptoms. The delay is provided with the following breakdown: onset date occurring 0–1 day before date of test, 2–4 days before, 5–7 days before, 8–15 days before, or more than 15 days before. For some tests, information on points (4) and (5) is missing. The SI-DEP database provided complete information for 23,210 (66%) out of 35,264 laboratory-confirmed cases of COVID-19 tested between week 20 (11–17 May) and week 30 (19–26 July), with an increasing trend of complete information over time (from 49% in week 20 to 76% in week 30) (Extended Data Fig. 1). Among confirmed cases with complete information, 12,716 (55%) showed no symptoms at the time of testing (Extended Data Fig. 1). The study referred to the period from week 20 to week 26. Data up to week 30 were used to consolidate the data in the study period accounting for the delays. ### Imputation of asymptomatic versus presymptomatic cases, onset date and missing information Individuals who tested positive on a given date were recorded in the SI-DEP database as: cases with symptoms at the time of testing, with a self-declared delay from onset of symptoms; cases without symptoms at the time of testing; or cases with no information on presence or absence of symptoms at the time of testing. These three subsets of cases were analysed to account for the presence of presymptomatic individuals among those with no symptoms at the time of testing, imputation of missing data and the estimation of dates of infection or symptom onset. 
For laboratory-confirmed cases of COVID-19 who had symptoms at the time of testing, we estimated their date of onset using the information on the date of test and the time interval of onset-to-test delay, which was self-declared by the patients (Fig. 1b). In the time period between weeks 20 and 30, 20% of cases had an onset-to-test delay of ≤1 day, 63% had a delay of ≤4 days, 83% had a delay of ≤7 days and 88% had a delay of ≤15 days (Extended Data Fig. 1). We fitted a Gamma distribution to the onset-to-test delay data with a maximum likelihood approach, using three different periods of time (May, June and July), to account for changes in the distribution of self-declared delays over time (that is, longer delays at the beginning of the study period, shorter delays at its end) (Extended Data Fig. 2). The estimated average delay in May, June and July was 12.9 (95% confidence interval, 7.0–16.1), 5.1 (3.7–6.3) and 2.7 (2.0–3.1) days, respectively. July data were used to consolidate data corresponding to infections with onset in June and tested with delay. Given a confirmed case with symptoms testing on a specific date, we assigned the onset date by sampling the onset-to-testing delay from the fitted distribution for that period, conditional to the fact that the delay lies in the corresponding time interval declared by the patient. We assumed that onset did not occur before the implementation of the national lockdown, on 17 March 2020 (week 12); we therefore truncated the Gamma distribution accordingly, when assigning the date of onset for cases with onset-to-test delay >15 days. The imputation procedure was carried out 100 times. Results were aggregated by week of onset. For laboratory-confirmed cases of COVID-19 with no symptoms at the time of testing, we assumed that on average 40% of them were asymptomatic12 (see the ‘Transmission model summary’ section), whereas the remaining 60% were presymptomatic who tested early thanks to contact tracing. Imputation was done by sampling from a binomial distribution and repeated 100 times. Data on contact tracing could not be used to inform data on infection or symptom onset, because of national regulatory framework on privacy preventing the matching of the two databases (virological tests and contact tracing). Given the low sensitivity of PCR tests in the early phase of the incubation period, we considered that imputed presymptomatic cases belonged to the prodromic phase. Onset date for presymptomatic cases was estimated by sampling from an exponential distribution with a mean of 1.5 days, corresponding to the duration of the prodromic phase in our model (Supplementary Table 1). For imputed asymptomatic, we assumed the same delay from infection to testing as in cases with symptoms. Given the structure of our compartmental model and to match the definition of the time used for symptomatic individuals (week of onset), we considered a delay in the detection of asymptomatic individuals starting from the end of the prodromic phase (corresponding to the symptom onset time for symptomatic infections) to the date of testing. We assigned this date by sampling the delay from the monthly gamma distribution. Imputation of the dates was repeated 100 times. 
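A rough sketch of the delay handling described above, fitting a Gamma distribution to the self-declared onset-to-test delays and then drawing an onset date conditional on the declared delay interval, with truncation at the start of lockdown, could look as follows. This is a schematic reconstruction in Python, not the authors' code; the interval boundaries follow the SI-DEP breakdown (0–1, 2–4, 5–7, 8–15, >15 days) and the example input data are synthetic.

```python
from datetime import date, timedelta
import numpy as np
from scipy import stats

# SI-DEP delay categories (days); the upper bound is exclusive
DELAY_BINS = {"0-1": (0, 2), "2-4": (2, 5), "5-7": (5, 8),
              "8-15": (8, 16), ">15": (16, np.inf)}

def fit_gamma_to_delays(delays_days):
    """Maximum-likelihood Gamma fit to onset-to-test delays (location fixed at 0)."""
    shape, _, scale = stats.gamma.fit(delays_days, floc=0)
    return stats.gamma(a=shape, scale=scale)

def sample_onset_date(test_date, declared_bin, delay_dist, rng,
                      earliest_onset=date(2020, 3, 17)):
    """Draw an onset date conditional on the self-declared delay interval,
    truncating the Gamma so that onset does not precede the national lockdown."""
    lo, hi = DELAY_BINS[declared_bin]
    hi = min(hi, (test_date - earliest_onset).days)
    u = rng.uniform(delay_dist.cdf(lo), delay_dist.cdf(hi))  # inverse-CDF sampling
    return test_date - timedelta(days=int(delay_dist.ppf(u)))

rng = np.random.default_rng(0)
dist = fit_gamma_to_delays(rng.gamma(shape=1.5, scale=3.5, size=1000))  # fake data
print(sample_onset_date(date(2020, 6, 15), "5-7", dist, rng))
```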
For laboratory-confirmed cases of COVID-19 with no information on symptoms at the time of testing, missing data were imputed by sampling from a multinomial distribution with probabilities equal to the rate of occurrence of the outcomes (asymptomatic, presymptomatic or symptomatic with five possible time intervals for the onset-to-test delay) reported for cases with complete information and assuming the imputation of cases without symptoms into asymptomatic and presymptomatic, as described above. Imputation was performed by region and by week and repeated 100 times. Presymptomatic and symptomatic individuals were aggregated together by onset date (Fig. 1a) to estimate the rate of detection of symptomatic cases. ### Participatory surveillance data and analysis COVIDnet.fr is a participatory online system for the surveillance of COVID-19, available at https://www.covidnet.fr/. It was adapted from GrippeNet.fr4 to respond to the COVID-19 health crisis in March 2020. GrippeNet.fr is a participatory system for the surveillance of influenza-like illnesses available in France since 2011 through a collaboration between Inserm, Sorbonne Université and Santé publique France, supplementing sentinel surveillance4,37. The system is based on a dedicated website to conduct syndromic surveillance through self-reported symptoms volunteered by participants resident in France. Data are collected on a weekly basis; participants also provide detailed profile information at enrolment38. In addition to tracking the incidence of influenza-like illnesses4,37, GrippeNet.fr was used to estimate vaccine coverage in specific subgroups39, individual perceptions towards vaccination40 and healthcare-seeking behaviour41. It was also used to assess behaviours and perceptions related to diseases other than influenza42, including COVID-1943. Participants are on average older and include a larger proportion of women compared to the general population38,44. The participating population is, however, representative in terms of health indicators such as diabetes and asthma conditions. Despite these discrepancies, trends of the estimated incidence of influenza-like illnesses from GrippeNet.fr reports compared well with those of the national sentinel system4,37. All analyses were adjusted by age and sex of participants. To monitor suspected cases of COVID-19 in the general population, we used the expanded case definition recommended by the High Council of Public Health for systematic testing and described in their 20 April 2020 notice19, which included either of the two following definitions: (1) (sudden onset of symptoms OR sudden onset of fever) AND (fever OR chills) AND (cough OR shortness of breath OR (chest pain AND age > 5 years old)) or (2) (sudden onset of symptoms) OR (sudden onset of fever AND fever); and one of the three following conditions: (i) (age > 5 years old) AND ((feeling tired or exhausted) OR (muscle/joint pain) OR (headache) OR (loss of smell WITHOUT runny or blocked nose) OR (loss of taste)); or (ii) ((age ≥ 80 years old) OR (age < 18 years old)) AND (diarrhoea); or (iii) (age < 3 months old) AND (fever WITHOUT other symptoms). Two independent estimates obtained from COVIDnet.fr cohort data for the incidence of symptomatic cases in week 26 are shown in Fig. 3. These estimates were computed as follows. 
Estimate 1 = (COVIDnet.fr estimated incidence of suspected cases in week 26) × (test positivity rate from SI-DEP in week 26); estimate 2 = (COVIDnet.fr estimated incidence of suspected cases in week 26) × (estimated proportion screened and confirmed as a suspected case of COVID-19 by a physician, and prescribed a test; estimates from COVIDnet.fr) × (test positivity rate from SI-DEP in week 26). The two estimates were used to validate model projections and identify the specific surveillance mechanisms that needed improvement.

### Ethics statement

GrippeNet.fr/COVIDnet.fr was reviewed and approved by the French Advisory Committee for research on information treatment in the health sector (that is, CCTIRS, authorization 11.565), and by the French National Commission on Informatics and Liberty (that is, CNIL, authorization DR-2012–024)—the authorities ruling on all matters related to ethics, data and privacy in the country. Informed consent was provided by each participant at enrolment, according to regulations.

### Transmission model summary

We used a stochastic discrete age-stratified transmission model for each region based on demographic, contact15 and age profile data of French regions21. Models were region-specific to account for the geographically heterogeneous epidemic situation in the country and given the mobility restrictions limiting inter-regional movement fluxes. The study focused on mainland France, where the epidemic situation was comparable across regions, and excluded Corsica, which reported very limited epidemic activity, and overseas territories characterized by increasing transmission20. Four age classes were considered: [0–11), [11–19), [19–65) and 65+ years old, referred to as children, adolescents, adults and older individuals. Transmission dynamics follows a compartmental scheme specific to COVID-19, in which individuals were divided into susceptible, exposed, infectious and hospitalized (Supplementary Information and Supplementary Figs. 1, 2). We did not consider further progression from hospitalization (for example, admission to intensive care units, recovery or death2) as it was not needed for the objective of the study. The infectious phase is divided into two steps: a prodromic phase (Ip) and a phase during which individuals may remain either asymptomatic (Ia, with probability12 pa = 40%) or develop symptoms. In the latter case, we distinguished between different degrees of severity of symptoms9,11,23,24, ranging from paucisymptomatic (Ips), to infectious individuals with mild (Ims) or severe (Iss) symptoms. Prodromic, asymptomatic and paucisymptomatic individuals have a reduced transmissibility rβ = 0.55, as estimated previously11, and in agreement with evidence from the field45,46,47. A reduced susceptibility was considered for children and adolescents, along with a reduced relative transmissibility of children, following available evidence from household studies, contact-tracing analyses, serological investigations and modelling works48,49,50,51,52,53. A sensitivity analysis was performed on the relative susceptibility and transmissibility of children, and on the proportion of asymptomatic infections (Supplementary Figs. 10–13). Full details are reported in the Supplementary Information.
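Returning to the participatory-surveillance side: both the expanded case definition quoted in the previous subsection and the two COVIDnet.fr-based estimates defined above translate directly into code. The sketch below is our own paraphrase of that logic; the `Report` field names are invented for illustration and are not the COVIDnet.fr schema, and the only numeric constant taken from the text is the 55% proportion used in estimate 2.

```python
from dataclasses import dataclass

@dataclass
class Report:
    age_years: float
    sudden_onset_symptoms: bool = False
    sudden_onset_fever: bool = False
    fever: bool = False
    chills: bool = False
    cough: bool = False
    shortness_of_breath: bool = False
    chest_pain: bool = False
    tired_or_exhausted: bool = False
    muscle_joint_pain: bool = False
    headache: bool = False
    loss_of_smell: bool = False
    runny_or_blocked_nose: bool = False
    loss_of_taste: bool = False
    diarrhoea: bool = False
    other_symptoms: bool = False

def is_suspected_case(r: Report) -> bool:
    """Paraphrase of the expanded case definition (HCSP notice, 20 April 2020)."""
    definition_1 = ((r.sudden_onset_symptoms or r.sudden_onset_fever)
                    and (r.fever or r.chills)
                    and (r.cough or r.shortness_of_breath
                         or (r.chest_pain and r.age_years > 5)))
    onset_2 = r.sudden_onset_symptoms or (r.sudden_onset_fever and r.fever)
    cond_i = (r.age_years > 5
              and (r.tired_or_exhausted or r.muscle_joint_pain or r.headache
                   or (r.loss_of_smell and not r.runny_or_blocked_nose)
                   or r.loss_of_taste))
    cond_ii = (r.age_years >= 80 or r.age_years < 18) and r.diarrhoea
    cond_iii = r.age_years < 0.25 and r.fever and not r.other_symptoms
    return definition_1 or (onset_2 and (cond_i or cond_ii or cond_iii))

def covidnet_estimates(suspected_incidence, test_positivity,
                       prop_confirmed_and_prescribed=0.55):
    """Estimates 1 and 2 as defined above (weekly values, per 100,000)."""
    estimate_1 = suspected_incidence * test_positivity
    estimate_2 = suspected_incidence * prop_confirmed_and_prescribed * test_positivity
    return estimate_1, estimate_2
```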
The study was not extended to the summer months, because of (1) the challenge of mechanistically parameterizing the contact matrices during summer; (2) the increase of movement fluxes across regions weakening our assumption of region-specific models; and (3) the interruption of COVIDnet.fr surveillance during the summer break, which prevented the identification of the key factors behind case underascertainment. ### Contact matrices Age-stratified transmission uses a social contact matrix that reports the average contact rates between different age classes in France15. This refers to the baseline condition, that is, before lockdown. The contact matrix includes the following layers: contacts at home, school, workplace, transport, leisure activities and other activities, and discriminates between physical and non-physical contacts. To account for the change of contact patterns over time, contact matrices are mechanistically parameterized, by region and over time, with different data sources informing on the percentage of students going to school16, the percentage of workers going to the workplace17, the compliance to preventive measures18, with a higher compliance registered in older individuals18. Information on the progressive reopening of activities indicates that leisure and other activities were only partially open in the study period. Data, however, are not fine-grained enough to parameterize our model, so we assume a 50% opening of these activities and explore variations in the sensitivity analysis. #### School attendance School reopening was parameterized by considering the percentage of reported attendance at school (pre-school and primary school; middle and high school) provided by the Ministry of Education16 (Supplementary Fig. 3). The number of contacts in the school matrix was modified to account for the attendance of students in each school level provided by data. That is, attendance of 14.5%, referring, for example, to the attendance registered in Île-de-France in pre-schools and primary schools, corresponds to a reduction of 85.5% in the number of contacts established at school by students belonging to that school level. Contacts for different modes of transport were modified accordingly. #### Presence at work To account for the percentage of individuals at work, given recommendations on remote working and activities that were not yet reopened, we used the estimated variation of presence at workplaces based on mobile phone location data provided by Google Mobility Trends17. Contacts at work and for different modes of transport were therefore modified according to this percentage, as described for contacts at school. Household contacts were increased proportionally to each adult staying at home based on statistics comparing weekend versus weekday contacts15 and the proportion of adults working during the weekend54, as done previously2. Our previous work showed that physical contacts during lockdown were fully avoided2, in agreement with data collected afterwards18. To account for individual adoption of preventive behaviour after lockdown, we used the percentage of population avoiding physical contacts estimated from a large-scale survey conducted by Santé publique France (CoviPrev18). Data were fitted with a linear regression (Fig. 1) to provide the weekly percentage of individuals avoiding physical contacts. We therefore modified our contact matrices over time, removing the percentage of physical contacts corresponding to the survey estimates for that week. 
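A minimal sketch of how the layered contact matrix might be rescaled from the data streams described above (school attendance, workplace presence, the assumed 50% reopening of leisure and other activities, and the share of respondents avoiding physical contacts). The layer names and call signature are ours; the actual parameterization in the study is more detailed (per region, per school level and per week, with the household adjustment for adults staying at home, which is omitted here).

```python
import numpy as np

def rescale_contact_matrix(layers, school_attendance, workplace_presence,
                           frac_avoiding_physical, other_opening=0.5):
    """Combine per-setting contact matrices into one effective matrix.

    `layers` maps a setting name to a pair (physical, non_physical) of
    age-by-age contact matrices. School contacts are scaled by reported
    attendance, work contacts by workplace presence, leisure/other by the
    assumed reopening fraction, and the physical part of every layer is
    reduced by the share of people avoiding physical contacts."""
    scale = {"home": 1.0,
             "school": school_attendance,
             "work": workplace_presence,
             # crude proxy: transport follows the people actually moving
             "transport": 0.5 * (school_attendance + workplace_presence),
             "leisure": other_opening,
             "other": other_opening}
    total = np.zeros_like(next(iter(layers.values()))[0])
    for name, (physical, non_physical) in layers.items():
        mixed = (1.0 - frac_avoiding_physical) * physical + non_physical
        total += scale.get(name, 1.0) * mixed
    return total

# Example with 4 age classes and made-up matrices:
rng = np.random.default_rng(1)
layers = {name: (rng.random((4, 4)), rng.random((4, 4)))
          for name in ["home", "school", "work", "transport", "leisure", "other"]}
C = rescale_contact_matrix(layers, school_attendance=0.145,
                           workplace_presence=0.6, frac_avoiding_physical=0.7)
```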
#### Increased risk aversion of older individuals

Data from the Santé publique France survey CoviPrev18 also show that older individuals protected themselves further relative to other age classes. On average, they respected physical distancing 28% more than the other age classes (Supplementary Fig. 4). For this reason, we considered a further reduction of 30% in contacts for older individuals in the exit phase, informed by survey data.

### Inference framework

The parameters of the transmission models to be estimated are specific to each pandemic phase. Before lockdown, {β, t0}, where β is the transmission rate per contact and t0 is the date of the start of the simulation, seeded with 10 infectious individuals. During lockdown, {αLD, tLD}, where αLD is the scaling factor of the transmission rate per contact and tLD is the date when lockdown effects on hospitalization data became visible. After lockdown, {αexit, πa(w), πs(w)}, where αexit is the scaling factor of the transmission rate per contact, and πa(w) and πs(w) are the proportion of asymptomatic and symptomatic cases tested in week w of the exit phase, respectively. Detected cases in the simulations had their contacts reduced by 90% to mimic isolation, as done in previous studies2,14. We used simulations of the stochastic model to predict values for all quantities of interest (500 simulations each time). We fitted the model to the daily count of hospitalizations Hobs(d) on day d throughout the period and the number of people testing positive by week of onset, split according to disease status (symptomatic or asymptomatic), denoted Tests,obs(w) and Testa,obs(w) in week w of the exit phase. We used hospital admission data up to week 27 (29 June–5 July) to account for the average delay from infection to hospitalization. Data in week 27 were consolidated by waiting for one additional week to account for updates and missing data (week 28, 6–12 July 2020).
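Schematically, the phase-specific parameterization above amounts to a piecewise scaling of the transmission rate per contact plus a strong contact reduction for detected individuals. The toy functions below use our own naming, and assume the exit-phase start date is fixed (the lifting of lockdown) rather than fitted.

```python
def transmission_rate(day, beta, t_lockdown, t_exit, alpha_lockdown, alpha_exit):
    """Piecewise transmission rate per contact across the three phases
    (pre-lockdown, lockdown, exit) described in the inference framework."""
    if day < t_lockdown:
        return beta
    if day < t_exit:
        return alpha_lockdown * beta
    return alpha_exit * beta

def contact_multiplier(is_detected, isolation_reduction=0.90):
    """Detected cases have their contacts reduced by 90% to mimic isolation."""
    return 1.0 - isolation_reduction if is_detected else 1.0
```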
We assumed a Poisson distribution for hospitalizations and a binomial distribution for the number of people getting the test, therefore the likelihood function is

$$L(\mathrm{Data}\,|\,\varTheta)=\prod_{d=t_0}^{t_n}P_{\mathrm{Poisson}}\big(H_{\mathrm{obs}}(d);H_{\mathrm{pred}}(d),\beta,t_0,\alpha_{\mathrm{LD}},t_{\mathrm{LD}},\alpha_{\mathrm{exit}},\pi_{\mathrm{a}}(w_d),\pi_{\mathrm{s}}(w_d)\big)\times\prod_{w\in \mathrm{exit}}P_{\mathrm{Binomial}}\big(\mathrm{Test}_{\mathrm{s,obs}}(w);i_{\mathrm{s,pred}}(w),\pi_{\mathrm{s}}(w)\big)\times\prod_{w\in \mathrm{exit}}P_{\mathrm{Binomial}}\big(\mathrm{Test}_{\mathrm{a,obs}}(w);i_{\mathrm{a,pred}}(w),\pi_{\mathrm{a}}(w)\big)$$

where Θ = {β, t0, αLD, tLD, αexit, {πa(w)}, {πs(w)}} indicates the set of parameters to be estimated, Hpred(d) is the model-predicted number of hospital admissions on day d, is,pred(w) and ia,pred(w) are the model-predicted weekly incidences of symptomatic and asymptomatic cases, respectively, in week w of the exit phase, PPoisson is the probability mass function of a Poisson distribution, PBinomial for a binomial distribution, [t0, tn] is the time window considered for the fit, and w is the week in the exit phase (weeks 20–26). We reduced the required computations with an optimization procedure in two steps, first maximizing the likelihood function in the pre-lockdown and lockdown phase to estimate the first four parameters, and then maximizing the likelihood in the exit phase by fixing the first four parameters that describe the epidemic trajectory before the exit phase to their maximum likelihood estimators (MLEs). This second step was further simplified through an iterative procedure, and we show through simulations that the simplified optimization procedure is consistent and well-defined. The parameter space was explored using NOMAD software55. Fisher's information matrix was estimated at the MLE value to obtain the corresponding confidence intervals. Simulations were then parameterized with 500 parameter sets obtained from the joint distribution of transmission parameters at MLE (one stochastic simulation for each parameter set). A Bayesian estimate of the posterior parameter distribution using Markov chain Monte Carlo (MCMC) would also have been an alternative to maximum likelihood and confidence interval estimation. In this case, however, MCMC would have considerably slowed down parameter exploration, with negligible added value to the fitting procedure. We repeated model fitting starting from several starting points and using different random number streams. Values of fitted parameters and full details on the different steps and the tests performed are reported in the Supplementary Information (Supplementary Figs. 6, 7 and Supplementary Table 3).

### Simulation details

Simulations are initialized with 10 infected adults in the Ip compartment at time t0. We obtained 500 parameter sets from the joint distribution of transmission parameters at MLE and ran one stochastic simulation for each parameter set.
Therefore, errors in the detection rates computed in the output account for the variability of the estimate of the parameters, in addition to the stochastic fluctuations of the model. We find that the errors in the estimation of the detection rates obtained including the variability of the parameters are slightly larger than the ones obtained with only stochastic fluctuation, suggesting that the stochasticity of the model is the main source of error in the estimation of the detection rate. ### Model selection analysis To assess the role of the mechanistic modification of the contact matrix informed by the different data sources in the exit phase, we compared our model to a simplified version assuming that contact patterns in the exit phase do not change from pre-epidemic conditions, and that all changes in the epidemic trajectory are explained exclusively by the transmissibility per contact. This is equivalent to normalizing the contact matrix to its largest eigenvalue and estimating the reproductive ratio over time. We compared the two models with the Akaike information criterion and found that accounting for changes in contacts better describes the epidemic trajectory (Supplementary Table 2 and Supplementary Fig. 5). ### Comparison with serological estimates We compared model projections with serological estimates from three independent studies7,25,26 (Fig. 3e and Extended Data Fig. 6). Estimates by Carrat et al.7 used ELISA-S tests and ELISA-NP tests. The sample was not representative of the population, and estimates were weighted to account for this bias. In the comparisons, we used the results from a multiple imputation method performed by the authors and estimating a participant’s positivity with a likelihood of positivity based on observed test results and covariates (see ref. 7 for more details). Estimates by Santé publique France25 are based on at least one positive result in one of the following three tests: ELISA-S, ELISA-NP and a pseudo-neutralization test that detects the presence of pseudo-neutralizing antibodies, representative of the presence of neutralizing antibodies as conferring protection against infection. Analyses were performed on residual sera obtained from clinical laboratories, and estimates were weighted to account for the lack of representativeness. Estimates by EpiCoV26 (Enquête Épidémiologie et Conditions de vie liées à la Covid-19) used ELISA-S tests and further validated these with a seroneutralizing antibody test at higher specificity (see ref. 26 for more details). This was the only seroprevalence survey that was conducted in a representative sample of the population. For this reason, we used it as the reference study. For all studies, we report in Fig. 3e and Extended Data Fig. 6 the estimates 14 days before the last blood collection to account for the time needed to mount a detectable presence of antibodies. For the EpiCoV survey, we used the last date at which samples were sent back to the laboratory. Modelling results are in good agreement with the serological estimates at the national level (Fig. 3e) and in the large majority of the regions (Extended Data Fig. 6). Projections tend to be systematically smaller than serological estimates in two regions that were weakly affected by the epidemic (Pays de la Loire and Brittany), although they remained compatible with observations. Overall differences may be due to the limitations of the methods involved. 
First, the type of tests, the specificity levels, the samples of the population tested, and the weighting and imputation approaches considered in each serological study could lead to differences across the three investigations. We note, for example, that larger discrepancies are observed between EpiCov and Santé publique France results in those regions that experienced smaller epidemics. We used EpiCov as the reference study as it was the only study conducted on a representative sample of the population. Second, there are limitations to the dataset of hospital admissions used to calibrate the models: the database infrastructure for data collection became operational in mid-March and was filled in retrospectively. Notification biases would inevitably alter the inference of parameters in the pre-lockdown phase. This may have differed region by region; however, we have no way to control for this potential bias, and possible errors would have affected regions with small-size epidemics more than others. In support of this hypothesis, we note that a similar but independent mathematical model fitted to regional hospitalization data24 in the first wave predicted small epidemics in Pays de la Loire and Brittany, similarly to our model.

### Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
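For readers who want to connect the fitting and model-selection descriptions above to something executable, the sketch below writes the log of the likelihood defined in the Inference framework section (a Poisson term for daily hospital admissions, binomial terms for the weekly numbers of detected cases) together with the Akaike information criterion used in the model selection analysis. This is our own illustrative code, not the implementation used in the study.

```python
import numpy as np
from scipy import stats

def log_likelihood(h_obs, h_pred, tests_s_obs, inc_s_pred, pi_s,
                   tests_a_obs, inc_a_pred, pi_a):
    """Log-likelihood: Poisson term for daily hospital admissions plus
    binomial terms for weekly detected symptomatic/asymptomatic cases."""
    ll = stats.poisson.logpmf(np.asarray(h_obs), mu=np.asarray(h_pred)).sum()
    # Binomial 'n' is the model-predicted weekly incidence, rounded to counts
    ll += stats.binom.logpmf(tests_s_obs,
                             n=np.rint(inc_s_pred).astype(int), p=pi_s).sum()
    ll += stats.binom.logpmf(tests_a_obs,
                             n=np.rint(inc_a_pred).astype(int), p=pi_a).sum()
    return ll

def aic(max_log_likelihood, n_parameters):
    """Akaike information criterion; the model with the lower AIC is preferred."""
    return 2 * n_parameters - 2 * max_log_likelihood
```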
# When can a Computer Simulation act as Substitute for an Experiment? A Case-Study from Chemistry

Johannes Kästner and Eckhart Arnold

### 3.3 The Motivation for Simulating the H-2-Formation in Outer Space

The simulation of $H_2$-formation in outer space described in the following is documented in Goumans/Kaestner (2010). The purpose of this simulation is to contribute to the explanation of $H_2$-enrichment in the interstellar medium. The simulation can best be described as a piece in the puzzle to explain this phenomenon. The point where the simulation study picks up the problem is defined by a number of previously established facts and existing astrochemical hypotheses:

1. It has been measured in astronomy that $H_2$ is abundant in the interstellar medium “despite inefficient gas-phase formation routes and $H_2$-destruction by cosmic rays and photons.” (Goumans/Kaestner 2010, p. 7350)

2. To explain this fact, other $H_2$-formation routes must exist. One possible route is the chemisorption of hydrogen atoms (H) on dust grains made mostly of carbon (Casaux et al. 2008). “Astrochemical models require facile chemisorption of H on carbonaceous dust grains at intermediate temperatures” (Goumans/Kaestner 2010, p. 7350). Intermediate temperatures are temperatures approximately between 100 K and 250 K. Such dust grains mainly consist of graphite and its smaller fragments, polycyclic aromatic hydrocarbons.

3. It has been suggested that the $H_2$-formation rates must exceed $3\times 10^{-17}$ or $2\times 10^{-16}$ cm$^3$molecule$^{-1}$s$^{-1}$ (Habart et al. 2004). (The rate is specified relative to the concentration of dust molecules which catalyse the process.)

4. The chemisorption of the first hydrogen atom to an aromatic hydrocarbon determines the rate. The addition of the second hydrogen atom is known to be much faster (Hornekaer et al. 2006).

5. Hydrogen exists in the form of two stable isotopes, the lighter protium ($^1$H) and the heavier deuterium ($^2$H or D). Observations show that D is much more abundant in atomic hydrogen than in molecular hydrogen ($H_2$ vs. HD) (Casaux et al. 2008). This suggests that atom tunneling is involved in the formation of $H_2$ because deuterium tunnels less efficiently than protium due to its higher mass. “D$_2$ has not been observed to date [in photon dominated regions].” (Casaux et al. 2008, p. 496)

The question that Goumans/Kaestner (2010) seek to answer is whether chemisorption of H and D atoms to polycyclic aromatic hydrocarbons (as a model for dust grains consisting of carbon), and in particular the tunneling effect, can account for the $H_2$-enrichment in the interstellar medium. In order to answer this question, the reaction rates of the chemisorption of H and D on benzene, the simplest aromatic hydrocarbon, need to be determined. These reaction rates can be determined experimentally only for temperatures that are much higher than those in the interstellar medium in outer space. Therefore, the experimental determination of the reaction rate must be replaced by numerical calculation. In the given low-temperature setting the reaction rates depend crucially on the tunnel effect. If the tunneling rates can be brought into agreement with the observations and suggestions listed above, then this supports both the assumption that $H_2$-formation in outer space is catalyzed by polycyclic aromatic hydrocarbons and that the tunneling effect plays a crucial role in this reaction.
In principle, the tunneling effect can also be observed experimentally, but in practice this is all but impossible in the given scenario, because the low temperatures make the reaction rates far too low for experimental purposes (Goumans/Kaestner 2010, p. 7351). The time scales relevant to the interstellar medium (10$^5$ years) cannot even remotely be reached in experiments. The possibility of simulating this reaction on the computer is therefore all the more welcome. At the same time, because no direct experimental validation of the simulation is available, more strain is put on the justification of the theoretical and technical ingredients of this simulation, which will be described in the following.
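To see in the crudest possible terms why tunneling dominates such rates at low temperature, one can compare a classical transition-state rate with a lowest-order Wigner tunneling correction. This toy sketch is ours and is far simpler than the rate theory actually used in the study; the barrier height and imaginary frequency below are placeholders, not values from the paper, and the Wigner factor badly underestimates deep tunneling.

```python
import numpy as np

K_B = 1.380649e-23        # J/K
H = 6.62607015e-34        # J*s
N_A = 6.02214076e23       # 1/mol
C_CM = 2.99792458e10      # speed of light in cm/s

def eyring_rate(barrier_kj_mol, temperature_k):
    """Classical transition-state-theory rate (unimolecular form) with a
    kT/h prefactor and no explicit partition functions."""
    ea = barrier_kj_mol * 1e3 / N_A                    # J per molecule
    return (K_B * temperature_k / H) * np.exp(-ea / (K_B * temperature_k))

def wigner_correction(imag_wavenumber_cm, temperature_k):
    """Wigner tunneling factor kappa = 1 + (h*nu_imag / kT)^2 / 24."""
    nu = imag_wavenumber_cm * C_CM                     # cm^-1 -> Hz
    u = H * nu / (K_B * temperature_k)
    return 1.0 + u ** 2 / 24.0

# Placeholder barrier (~25 kJ/mol) and imaginary frequency (~1000i cm^-1):
for T in (50, 100, 200, 300):
    k_cl = eyring_rate(25.0, T)
    print(T, k_cl, wigner_correction(1000.0, T) * k_cl)
```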
# How to add missing characters in LibreOffice Math formula

Hi, I would like to add the normal distribution notation (the N character) into my formula; however, I can't seem to find the character anywhere. There is a LaTeX formula as well. Can I integrate that without extensions?

N character

Eh, is the N character absent on your keyboard, or what is the problem?

Eh, can't you notice the difference between the look of the N in “N character” and the N in the formula, or what?

But it is still N, isn't it? If you want some fancy typeface, follow the answer given.

@gabix “But it is still N, isn't it?” No, it isn't: when you do math, everything you use has meaning; you cannot drop in a “normal” N instead of a “calligraphic” N, they mean completely different things.

The Wikipedia article referenced by the asker says that both a calligraphic N and a plain N are possible, as I understand it: “The normal distribution is often referred to as N(μ, σ²) or 𝒩(μ, σ²).”[6]

@gabix Often people use “normal” characters out of convenience. Sometimes it is widely accepted. In my field the length unit μm is often written as um. It disturbs me; μm looks better. Similarly, I want my formulas to look better. As a user I was wondering if there is any way to do this. Questioning this is meaningless. I don't even need a reason to ask for this. Maybe I will invent a reason. Maybe I am working on a notation system. It doesn't matter.

To the best of my knowledge, there is no such glyph in Unicode. Consequently, the solution is to “simulate” the look of the notation, and this is tricky. Fonts in Math are selected according to the context of use. They are defined in Format>Fonts. The “common” contexts are configured in the Formula Fonts part of the dialog. Leave them alone so that you don't mess up Math operation (unless of course you want to customise them). The bottom part, Custom Fonts, offers three user-selectable contexts. Choose one of them, for example Serif, to define the font to be used for your normal distribution symbol (the context by itself does not matter). I suggest the use of some script font (I kind of remember Zapf Chancery might look like the picture in the question). When back in edit mode, the formula above is font serif {N} (%mu, %sigma^2). The keywords for the other user contexts are sans and fixed.

With font serif it doesn't look like it. I assume this is not doable in Math.

Read attentively what I wrote: the keyword serif in the formula does not mean that the font is named Serif. You request to use the font labelled Serif in the configuration dialog Format>Fonts. You can assign any font in this “slot”. I suggested some script or cursive font. I think the best approximation is Zapf Chancery, but I don't have such a font on my Linux box. If I'm right, it is installed on MacOS, but you should find equivalent ones. Something similar on Windows is Monotype Corsiva.

I see. I read it carefully, but I was confused by the user interface, as the drop-down box didn't offer any choice other than Liberation Serif (I thought this was due to my Linux system). But through Modify I was able to change the font.
And it worked therefore I choose this as accepted answer (+ your argumentation at the other answer, which also worked). Thanks! I have found MathJax_Caligraphic on my Ubuntu system for people who might wonder or come across this page in the future. Which is again an issue. I don’t know what happens on a system without the font. Does Impress include the font, which is referred by a Math object in above-mentioned settings? I didn’t try yet, I also don’t need to transfer the document. I checked the contents of the odp file and there seems to be a reference only in: ObjectReplacements/Object 3 file. But no font file. Even with “Embed fonts in the document” and/or “Only embed fonts that are used in documents” (tried both). Other fonts can be found but not MathJax. Inside a math object, type U+1d4a9 and press Alt X. Two recommendations: 1. Put the resulting symbol between two double quotes: "𝒩" 2. (optional) Change the text font used by Math to something such as Libertinus Serif, the default Liberation fonts look terrible. For the record: U+1D4A9 MATHEMATICAL SCRIPT CAPITAL N lives in the Supplementary Multilingual Plane, Mathematical Alphanumeric Symbols block (since Unicode 3.1 or 3.2). As it is not in the common Basic Multilingual Plane, it is likely that few fonts offer it. It works on my Linux box but I have no idea where the glyph is taken from. Applying various faces does not show many variations (the set of substitutes is limited). Consequently, waiting for a proper font to emerge, I recommend my workaround which allows to choose a predictable face for the glyph. I can only confirm both for Linux and for Windows: the glyph does appear but looks foreign to the current font (Times New Roman, Liberation Serif etc.). Some options: 1. Support imbedded LaTeX. (even Apple’s iWorks programs do this using blah{TeX} but incompletely) 2. Support externally generated .svg imbedded Math and typeset text. 3. Create an omnibus special font family containing all the style and symbols of college-level mathematics, physics and engineering courses and an editor to arrange placement, subscripts, etc. similar to the present Math Formulas. Supply the font with LibreOffice. 4. Push for an open source standard that is extensible and easy to use using outside help if necessary. 5. Do nothing and have students pay a big chunk of money for ancient typesetting applications they will have to buy and learn to hate. I am an electrical engineer and I can’t even form Maxwell’s equations, let alone symbolic math, a simple intro to electromagnetics, etc. with the current setup. I know this is a big task but sponsors, doctoral committees, journals and more won’t accept force-fitted regular fonts for a research paper or a thesis. The lack of proper integration and differentiation types (surface, circular, multiple dimensional, partial, Feynman diagrams and vector field notations) are essential. There is a program on the Mac called Mathpix Snipping Tool that can take a screenshot of a printed formula or a nicely handwritten one and convert the photo into LaTeX. Perhaps the author could be convinced to pitch in. I do sincerely understand your claims and proposed suggestions. Unfortunately, this is a Question & Answers site, not a forum. Your complaint is taken by the site engine as a solution to the initial question, which is not.
Physics, 1998, DOI: 10.1016/S0920-5632(98)00506-4. Abstract: We have studied numerically the evolution and decay of axion strings. These global defects decay mainly by axion emission and thus contribute to the cosmological axion density. The relative importance of this source relative to misalignment production of axions depends on the spectrum. Radiation spectra for various string loop configurations are presented. They support the contention that the string decay contribution is of the same order of magnitude as the contribution from misalignment.

Physics, 2012, DOI: 10.1103/PhysRevD.86.089902. Abstract: We analyze the spectrum of axions radiated from collapse of domain walls, which have received less attention in the literature. The evolution of topological defects related to the axion models is investigated by performing field-theoretic lattice simulations. We simulate the whole process of evolution of the defects, including the formation of global strings, the formation of domain walls and the annihilation of the defects due to the tension of walls. The spectrum of radiated axions has a peak at the low frequency, which implies that axions produced by the collapse of domain walls are not highly relativistic. We revisit the relic abundance of cold dark matter axions and find that the contribution from the decay of defects can be comparable with the contribution from strings. This result leads to a more severe upper bound on the axion decay constant.

Raghavan Rangarajan, Physics, 1994, DOI: 10.1016/0550-3213(95)00411-K. Abstract: We calculate the dilution of the baryon-to-photon ratio by the decay of superstring axions. We find that the dilution is of the order of $10^7$. We review several models of baryogenesis and show that most of them can not tolerate such a large dilution. In particular, only one current model of electroweak baryogenesis possibly survives. The Affleck-Dine mechanism in SUSY GUTs is very robust and the dilution by axions could contribute to the dilution required in these models. Baryogenesis scenarios involving topological defects and black hole evaporation are also capable of producing a sufficiently large baryon asymmetry.

Physics, 2013, DOI: 10.1088/1475-7516/2014/02/046. Abstract: The presence of a hot dark matter component has been hinted at 3 sigma by a combination of the results from different cosmological observations. We examine a possibility that pseudo Nambu-Goldstone bosons account for both hot and cold dark matter components. We show that the QCD axions can do the job for the axion decay constant f_a < O(10^10) GeV, if they are produced by the saxion decay and the domain wall annihilation. We also investigate the cases of thermal QCD axions, pseudo Nambu-Goldstone bosons coupled to the standard model sector through the Higgs portal, and axions produced by modulus decay.

Physics, 1998, DOI: 10.1103/PhysRevD.59.023505. Abstract: We discuss the appearance at the QCD phase transition, and the subsequent decay, of axion walls bounded by strings in N=1 axion models. We argue on intuitive grounds that the main decay mechanism is into barely relativistic axions. We present numerical simulations of the decay process.
In these simulations, the decay happens immediately, in a time scale of order the light travel time, and the average energy of the radiated axions is $<\omega_a > \simeq 7 m_a$ for $v_a/m_a \simeq 500$. $<\omega_a>$ is found to increase approximately linearly with $\ln(v_a/m_a)$. Extrapolation of this behaviour yields $<\omega_a> \sim 60 m_a$ in axion models of interest. We find that the contribution to the cosmological energy density of axions from wall decay is of the same order of magnitude as that from vacuum realignment, with however large uncertainties. The velocity dispersion of axions from wall decay is found to be larger, by a factor $10^3$ or so, than that of axions from vacuum realignment and string decay. We discuss the implications of this for the formation and evolution of axion miniclusters and for the direct detection of axion dark matter on Earth. Finally we discuss the cosmology of axion models with $N>1$ in which the domain wall problem is solved by introducing a small U$_{PQ}$(1) breaking interaction. We find that in this case the walls decay into gravitational waves. Physics , 2001, DOI: 10.1016/S0920-5632(01)01705-4 Abstract: We present a calculation of the $K\to\pi\pi$ decay amplitudes from the $K\to\pi$ matrix elements using leading order relations derived in chiral perturbation theory. Numerical simulations are carried out in quenched QCD with the domain-wall fermion action and the renormalization group improved gluon action. Our results show that the I=2 amplitude is reasonably consistent with experiment whereas the I=0 amplitude is sizably smaller. Consequently the $\Delta I=1/2$ enhancement is only half of the experimental value, and $\epsilon'/\epsilon$ is negative. High Energy Physics - Phenomenology , 2008, Abstract: We illustrate, taking a top-down point of view, how axions and other very weakly interacting sub-eV particles (WISPs) arise in the course of compactification of the extra spatial dimensions in string/M-theory. Physics , 2004, Abstract: We report on the first large-scale study of two flavor QCD with domain wall fermions (DWF). Simulation has been carried out at three dynamical quark mass values about 1/2, 3/4, and 1 $m_{strange}$ on $16^3\times 32$ volume with $L_s=12$ and $a^{-1}\approx 1.7$ GeV. After discussing the details of the simulation, we report on the light hadron spectrum and decay constants. Physics , 2014, DOI: 10.1093/mnrasl/slv040 Abstract: Preliminary evidence of solar axions in XMM-Newton observations has quite recently been claimed by Fraser et al. as an interpretation of their detection of a seasonally-modulated excess of the X-ray background. Within such an interpretation, these authors also estimate the axion mass to be $m_a \simeq 2.3 \cdot 10^{- 6}$ eV. Since an axion with this mass behaves as a cold dark matter particle, according to the proposed interpretation the considered detection directly concerns cold dark matter as well. So, the suggested interpretation would lead to a revolutionary discovery if confirmed. Unfortunately, we have identified three distinct problems in this interpretation of the observed result of Fraser et al. which ultimately imply that the detected signal - while extremely interesting in itself - cannot have any relation with hypothetical axions produced by the Sun. Thus, a physically consistent interpretation of the observed seasonally-modulated X-ray excess still remains an exciting challenge. 
Physics, 2001, DOI: 10.1103/PhysRevD.68.014501. Abstract: We explore application of the domain wall fermion formalism of lattice QCD to calculate the $K\to\pi\pi$ decay amplitudes in terms of the $K\to\pi$ and $K\to 0$ hadronic matrix elements through relations derived in chiral perturbation theory. Numerical simulations are carried out in quenched QCD using domain-wall fermion action for quarks and an RG-improved gauge action for gluons on a $16^3\times 32\times 16$ and $24^3\times 32\times 16$ lattice at $\beta=2.6$ corresponding to the lattice spacing $1/a\approx 2$ GeV. Quark loop contractions which appear in Penguin diagrams are calculated by the random noise method, and the $\Delta I=1/2$ matrix elements which require subtractions with the quark loop contractions are obtained with a statistical accuracy of about 10%. We confirm the chiral properties required of the $K\to\pi$ matrix elements. Matching the lattice matrix elements to those in the continuum at $\mu=1/a$ using the perturbative renormalization factor to one loop order, and running to the scale $\mu=m_c=1.3$ GeV with the renormalization group for $N_f=3$ flavors, we calculate all the matrix elements needed for the decay amplitudes. With these matrix elements, the $\Delta I=3/2$ decay amplitude shows a good agreement with experiment in the chiral limit. The $\Delta I=1/2$ amplitude, on the other hand, is about 50–60% of the experimental one even after chiral extrapolation. In view of the insufficient enhancement of the $\Delta I=1/2$ contribution, we employ the experimental values for the real parts of the decay amplitudes in our calculation of $\epsilon'/\epsilon$. We find that the $\Delta I=3/2$ contribution is larger than the $\Delta I=1/2$ contribution so that $\epsilon'/\epsilon$ is negative and has a magnitude of order $10^{-4}$. Possible reasons for these unsatisfactory results are discussed.
Schmitt trigger CMOS mengoptimalkan desain karakteristik yang meliputi: interfacing dengan Op-Amp dan jalur transmisi, konversi tingkat logika, linear operation , dan desain khusus bergantung pada karakteristik CMOS. 3. Schmitt triggers are not only employed in A.C. applications, and are commonly used in D.C. circuitry. The CMOS Schmitt trigger, which comes six to a package, This circuit will exhibit racing phenomena after the transition starts. 0000090420 00000 n $1.45. The approach is based on studying the transient from one stable state to another when the trigger is in linear operation. Fast shipping on all Integrated circuits series 4000 orders within Europe. The CD40106B device consists of six Schmitt-Trigger inputs. A CMOS Schmitt-trigger circuit for shaping the wave form of an input signal to be applied to logic circuits, such as flip-flops, counters, etc. Such circuits are commonly called Schmitt trigger circuits. Any circuit is convertible to Schmitt trigger by applying a positive feedback system. The CMOS Schmitt trigger circuit with low voltage by the dynamic body - bias technology was presented. Digital input signals determine whether the high or low threshold voltage is measurable. 0000093058 00000 n << /Filter /FlateDecode /S 100 /Length 163 >> 0000011524 00000 n 0000002094 00000 n In electronics, a Schmitt trigger is a comparator circuit with hysteresis implemented by applying positive feedback to the noninverting input of a comparator or differential amplifier. Designed in 0.18 m CMOS process technology, the simulation results show that the proposed Schmitt trigger circuit’s triggering voltage can be adjusted approximately 0.5 V to 1.2 V. The proposed design is suitable to be In Schmitt Trigger the input value can be analog or digital but the output will be in two forms 1 or 0. This paper describes the single-limit voltage comparator, Schmitt trigger and window voltage comparator each of which incorporates only a second-generation current conveyor (CCII). Abstract: This paper presents a sub-threshold di erential CMOS Schmitt trigger with tunable hysteresis, which can be used to enhance the noise immunity of low-power electronic systems. CMOS Schmitt Trigger—A Uniquely Versatile Design Component. %PDF-1.3 Schmitt Trigger using IC 555. 0000088975 00000 n Integrated Circuit CMOS − Dual Schmitt Trigger Description: The NTE4583B is a dual Schmitt trigger constructed with complementary P−channel and N−channel MOS devices on a monolithic silicon substrate. 5. In this paper, we have proposed two new Schmitt trigger circuits based on current sink and pseudo logic structures … These devices find primary use where low power dissipation and/or high noise immunity is desired. 0000099047 00000 n A Schmitt trigger circuit with independently biased threshold sections includes a drive disabling switch for blocking one of the threshold sections from driving a logic node toward a predetermined logic state. A Schmitt trigger acts like an amplifier/comparator with hysteresis where hysteresis is controlled by adjusting the collector resistance in two transistors. 0000094491 00000 n It is good as a noise rejecter. Conventional Schmitt Trigger is shown in Figure 2. where the switching thresholds are dependent on the ratio of NMOS and PMOS. 0000094733 00000 n A very popular Schmitt Trigger gate IC in the TTL LS family is the 74LS14, which is a set of six inverters, with threshold voltages below 2.5V (which is half the supply voltage). 
The 74LS132 IC package contains four independent positive logic 2-input NAND Gates with Schmitt trigger inputs. 74HC14 Hex Schmitt trigger Inverter CMOS IC. As shown in the circuit diagram, a voltage divider with resistors Rdiv1 and Rdiv2 is set in the positive feedback of the 741 IC op-amp. Input Type : Schmitt Trigger High Level Output Current : -4.2mA Low Level Output Current : 4.2mA Propagation Delay Time : 400ns Operating Supply Voltage (Typ) : 3.3/5/9/12V Package Type : PDIP Pin Count : 14 Quiescent Current : 4uA Output Type : Schmitt Trigger Technology : CMOS Packaging : Rail Mounting : Through Hole To view the application note, click on the URL below. From these results, the proposed full swing CMOS Schmitt Trigger was able to operate at low voltage (0.8V-1.5V) Keywords: DRC, LVS, mentor graphic, schmitt trigger, width-length ratio. The significant feature of one circuit is that one threshold current is controlled by a bias current and the other by an MOS transistor dimension, depending on the circuit implementation. 4. 3. A new proposed CMOS Schmitt Trigger is presented which is capable to function under low voltage as much as 0.8V. hysteresis of the Schmitt trigger is varied. The 2). CD4093 - Quad 2-Input NAND Schmitt Trigger ... as it has a Schmitt-trigger and it is a nand, cmos, low power. Therefore in this paper we proposed CMOS Schmitt Trigger circuit which is capable to operate in low voltages (0.8V- 1.5V), less propagation delay. Since both inputs are the same, the output will simply be the opposite of the input - an inverter (a Schmitt trigger "NOT" gate). The following circuit can be built with basic electronic components, but IC555 is an essential component in this circuit. The CD4093B consists of four Schmitt-trigger circuits. Two variants of CMOS Schmitt triggers, consisting of only four enhancement-type MOS transistors, are proposed in the paper. In Schmitt Trigger the input value can be analog or digital but the output will be in two forms 1 or 0. The complete DC voltage transfer characteristic is determined, including analytical expressions for the internal node voltage. The circuit diagram of the Schmitt trigger using IC555 is shown below. The 74LS132 IC has a wide range of working voltage, a wide range of working conditions, and directly interfaces with CMOS, NMOS, and TTL. Such circuits are commonly called Schmitt trigger circuits. A Schmitt-trigger circuit, comprising: an input terminal; a first … CMOS 4000 Series of Logic ICs, CD4000 Logic Series, CMOS 4000 Family The electrical characteristics are the same as those for the particular digital IC family. The 555 timer is probably one of the more versatile “black box” chips. CAT.NO: ZC4821. CMOS Schmitt Trigger Circuit with Controllable Hysteresis Using Logical Threshold Voltage Control Circuit Abstract: A simple logical threshold voltage control circuit is proposed. The same schematic redrawn to reflect this convention looks something like this: ILLUSTRATION . It achieves high speed operation similar to equivalent Bipolar Schottky TTL while maintaining CMOS low power dissipation. The MC14093B may be used in place of the MC14011B quad 2−input NAND gate for When IN rises to V TN, N1 is on. Each circuit functions as a two-input NAND gate with Schmitt-trigger action on both inputs. The HEF4093B is a quad two-input NAND gate. Now N1 and … Transistors MN2 and MP2 do a similar job as transistors MN2 and MP2 in Figure 7(a) whereas the … Describes what a Schmitt Trigger is and how a CMOS Schmitt Trigger circuit is built. 
A Schmitt trigger is a comparator circuit that incorporates positive feedback. The output keeps its value while the input lies between the two threshold voltages; this property is called hysteresis, and the difference between the positive-going threshold voltage (V_P) and the negative-going threshold voltage (V_N) is defined as the hysteresis voltage (V_H). The circuit is named a "trigger" because the output retains its value until the input changes sufficiently to trigger a change; in other words, a Schmitt trigger is a comparator-based circuit whose output depends on the previous output state as well as on the present input. In a Schmitt-trigger logic IC each input has its own Schmitt trigger circuit, and such devices find primary use where low power dissipation and/or high noise immunity is desired. The 74LS14, for example, is a hex Schmitt-trigger inverter IC, and the 4583 is a dual Schmitt trigger in a DIP-16 package; the device inputs are compatible with TTL-type levels.

A Schmitt trigger could be built from discrete devices to satisfy a particular parameter, but this is a careful and sometimes time-consuming design; it can instead be implemented using normal, conventional CMOS inverters. One patented CMOS Schmitt-trigger circuit has an input terminal connected to a signal source and comprises a first CMOS inverter, a second CMOS inverter connected in cascade to the first, and a third CMOS inverter connected in cascade to the second; the second and third stages are linked as cross-coupled inverters, and the regenerative operation of that cross-coupled pair is controlled by the first stage. Such a CMOS Schmitt trigger circuit displays a lower trigger point that is one N-channel transistor threshold above the negative power-supply potential and an upper trigger point that is one P-channel transistor threshold below the positive power-supply potential. In one claim the MOSFETs T1 to T6 have channel widths of approximately the relative values T1=120, T2=175, T3=125, T4=80, T5=5 and T6=5, and a drive-disabling switch is selectively operated so that unidirectional sensitivity to the crossing of a threshold level belonging to its corresponding threshold section is obtained.

A Schmitt trigger with three pairs of CMOS transistors is also described; the analysis approach is based on studying the transient from one stable state to another while the trigger is in linear operation. The standard cascade architecture used in CMOS Schmitt trigger circuit design [12], shown in Figure 1, limits how far the operating voltage can be lowered; therefore the cited paper proposes a CMOS Schmitt trigger circuit capable of operating at low supply voltages (0.8 V to 1.5 V) with less propagation delay and a stable hysteresis width, and the complete DC voltage transfer characteristic is determined, including analytical expressions for the internal node voltage. If you are designing a module or IC as a high-frequency Schmitt trigger, you need the right simulation tools to evaluate the design for generating a pulse stream.

Returning to the op-amp implementation introduced above, the same values of Rdiv1 and Rdiv2 are also used to form the parallel combination Rpar = Rdiv1||Rdiv2.
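To make the 741-based circuit just described concrete, here is a minimal sketch of how its switching thresholds follow from the Rdiv1/Rdiv2 divider. The resistor values and the output saturation voltages are illustrative assumptions (they are not given in the text), and the topology is assumed to be the usual inverting arrangement with Rdiv1 from the op-amp output to the non-inverting input and Rdiv2 from that node to ground; the formulas are just the voltage-divider relation applied to the positive-feedback network.

```python
# Sketch: thresholds of an inverting op-amp Schmitt trigger whose positive-feedback
# network is the divider Rdiv1 (output -> +input) and Rdiv2 (+input -> ground).
# Component values and saturation voltages are illustrative assumptions.
Rdiv1 = 10_000                      # ohms (assumed)
Rdiv2 = 4_700                       # ohms (assumed)
Vsat_pos, Vsat_neg = 13.0, -13.0    # typical 741 output swing on +/-15 V rails (assumed)

beta = Rdiv2 / (Rdiv1 + Rdiv2)      # feedback fraction seen at the + input

V_upper = beta * Vsat_pos           # input must rise above this to flip the output low
V_lower = beta * Vsat_neg           # input must fall below this to flip the output high
V_hysteresis = V_upper - V_lower

Rpar = Rdiv1 * Rdiv2 / (Rdiv1 + Rdiv2)   # Rdiv1 || Rdiv2, as used in the text

print(f"upper threshold     = {V_upper:.2f} V")
print(f"lower threshold     = {V_lower:.2f} V")
print(f"hysteresis V_H      = {V_hysteresis:.2f} V")
print(f"Rpar (Rdiv1||Rdiv2) = {Rpar:.0f} ohm")
```

With these assumed values the thresholds land near plus and minus 4.2 V; in practice Rdiv1 and Rdiv2 are chosen so that the hysteresis band is a little wider than the expected noise on the input.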
Because of its hysteresis, a Schmitt trigger is widely used for converting analog signals into digital ones and for reshaping sloppy or distorted rectangular pulses. Use an IC with a Schmitt-trigger input to overcome slow rise times: the Schmitt trigger ensures that the output switches cleanly even when the input edge is slow or noisy. The graphic logic symbol for a NOT gate with a Schmitt-trigger input marks the gate as having hysteresis. Dedicated switch-debouncer ICs now exist, and some are capable of debouncing up to four switch inputs, as can the CMOS 4093. Most designers are familiar with the common CMOS logic inverters and Schmitt triggers, such as the 74HC04 and 74HC14 respectively, which come six to a DIP-14 package; the 74LS14 is the corresponding Schmitt-trigger gate IC in the LS-TTL family, and each of its circuits functions as an inverter with a Schmitt-trigger input. When you tie both inputs of a CD4093 gate together, you force the Schmitt-trigger NAND gate to act like a simple Schmitt-trigger inverter. A simple circuit can even use one CMOS Schmitt-trigger gate to generate a "watchdog reset" function (see the figure, a).

The Schmitt trigger, or regenerative comparator, was first described by Otto Schmitt [6] and is today widely used in communication, measurement, and signal-processing systems. The versatility of a TTL Schmitt trigger is hampered by its narrow supply range, limited interface capability, low input impedance and unbalanced output characteristics, which is one motivation for CMOS implementations. The NTE4584B is a hex Schmitt trigger in a 14-lead DIP package constructed with MOS P-channel and N-channel enhancement-mode devices in a single monolithic structure; the MC14093B Schmitt trigger is constructed the same way, with each circuit functioning as a 2-input NAND gate with Schmitt-trigger action on both inputs, and each Schmitt trigger in the package is functionally independent. An inverting op-amp Schmitt trigger feeds a part of the output back to the non-inverting (positive) input of the op-amp, hence the name positive-feedback comparator. In the Schmitt trigger built around a 555 IC, the common point of the two comparator input pins is supplied with an external bias voltage (Vcc/2) using a voltage divider formed by two resistors, R1 and R2. A novel CMOS Schmitt trigger with regulated hysteresis [6] uses three transistor pairs and a control voltage in the regenerative feedback networks. Figure 1 shows a proposed 1 V Schmitt trigger circuit whose internal circuit is composed of three stages, including a buffer output, which provides high noise immunity and a stable output; the proposed circuit was designed from the conventional Schmitt trigger by manipulating the arrangement of the transistors and their width-to-length ratios.

The gate switches at different points for positive-going and negative-going signals, the difference between them being the hysteresis voltage V_H = V_P - V_N.
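The noise-rejection behaviour described above is easy to demonstrate with a purely behavioural model. The sketch below uses illustrative thresholds and a synthetic noisy ramp (both assumptions, not values from the text) to compare a plain single-threshold comparator with a Schmitt trigger: the comparator output chatters around its crossing point, while the Schmitt output changes state exactly once.

```python
# Behavioural sketch of hysteresis: a noisy, slowly rising input is squared up by
# a Schmitt trigger (thresholds V_P / V_N) but makes a single-threshold comparator
# chatter. Thresholds and noise level are illustrative assumptions.
import random

random.seed(1)
V_P, V_N = 2.9, 2.0          # positive- and negative-going thresholds (assumed)
V_ref = (V_P + V_N) / 2      # single threshold used by the plain comparator

def schmitt(samples, v_p, v_n, state=0):
    out = []
    for v in samples:
        if state == 0 and v >= v_p:      # only the rising threshold can set the output
            state = 1
        elif state == 1 and v <= v_n:    # only the falling threshold can clear it
            state = 0
        out.append(state)
    return out

def comparator(samples, v_ref):
    return [1 if v >= v_ref else 0 for v in samples]

# Slow ramp from 0 V to 5 V with +/-0.3 V of noise riding on it.
ramp = [5.0 * i / 499 + random.uniform(-0.3, 0.3) for i in range(500)]

def transitions(bits):
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

print("comparator output transitions:", transitions(comparator(ramp, V_ref)))  # many (chatter)
print("schmitt    output transitions:", transitions(schmitt(ramp, V_P, V_N)))  # exactly 1
```

Because the hysteresis band (0.9 V here) is wider than the peak-to-peak noise (0.6 V), the Schmitt output cannot be pulled back across the opposite threshold once it has switched.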
A Schmitt trigger is a comparator circuit (though not exclusively a comparator) that makes use of positive feedback, in which small changes in the input lead to large changes in the output in the same phase, to implement hysteresis, and it is used to remove noise from an analog signal while converting it to a digital one. Hysteresis here means that the circuit provides two different threshold voltage levels, one for the rising edge and one for the falling edge; the hysteresis voltage depends on the supply voltage and on the transistor geometry. The Schmitt trigger was invented by Otto Schmitt in the early 1930s, and it ensures that a clean output pulse is produced.

For the CD40106BE CMOS hex Schmitt trigger (20 V rating), the typical hysteresis voltage is 0.9 V at Vdd = 5 V, 2.3 V at Vdd = 10 V, and 3.5 V at Vdd = 15 V. Quiescent currents are small: for low-power CMOS devices about 0.5 nA at 5 V is typical, and for high-voltage CMOS devices about 0.02 mA is typical up to 15 V. Schmitt-trigger photo ICs also exist; these devices integrate a photodiode, amplifier, Schmitt trigger circuit, and output phototransistor into a single chip, and their peak sensitivity of 850 nm makes them suitable for applications such as optical switches and rotary encoders. A 555 timer can even be made to function like an amplifier in some configurations.

On the design side, an application note shows a particular way of creating a Schmitt trigger to optimize the design, and CMOS Schmitt trigger design with given circuit thresholds is described by I. M. Filanovsky and H. Baltes; in that analysis the trigger is subdivided into two subcircuits, each of which is considered as a passive load for the other. Of the two four-transistor variants mentioned earlier, one consists of three NMOS transistors and one PMOS transistor, while the other consists of one NMOS and three PMOS transistors. The operation of one CMOS Schmitt trigger circuit is as follows: initially, with IN = 0 V, the two stacked p-MOSFETs (P1 and P2) are on, hence OUT = V_DD. A thesis analyzes the classical CMOS Schmitt trigger (ST) operating in weak inversion. In another design, a dynamic body bias is applied to a simple CMOS inverter circuit, whereby the threshold voltages of the two MOSFETs can be changed, thus changing the switching voltage; Figure 7(b) shows the recommended Schmitt trigger circuit of three stages of CMOS inverters, and the accompanying figures illustrate CMOS Schmitt trigger circuit design and prototyping. A patented CMOS Schmitt trigger circuit consists essentially of only six MOS field-effect transistors (FETs) interconnected to accept a varying input electrical signal and to produce a bi-level output on a common output line.

Such circuits are useful in noise suppression and are widely used in relaxation oscillators.
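Since relaxation oscillators keep coming up, here is a worked sketch of the common single-gate RC oscillator built around a Schmitt-trigger inverter (a 40106/74HC14-style gate with a resistor from its output back to its input and a capacitor from the input to ground). The threshold, resistor and capacitor values are assumptions chosen to be roughly consistent with the 0.9 V of hysteresis at 5 V quoted above; the period follows from the standard RC charge and discharge equations between V_N and V_P.

```python
# Sketch: period of a Schmitt-inverter RC relaxation oscillator.
# Topology: R from the gate output back to its input, C from the input to ground.
# While the output is high the capacitor charges toward Vdd until it hits V_P;
# while the output is low it discharges toward 0 V until it hits V_N.
import math

Vdd = 5.0
V_P, V_N = 2.9, 2.0          # assumed thresholds giving ~0.9 V hysteresis at 5 V
R = 100e3                    # ohms (assumed)
C = 10e-9                    # farads (assumed)

t_charge = R * C * math.log((Vdd - V_N) / (Vdd - V_P))   # output-high interval
t_discharge = R * C * math.log(V_P / V_N)                # output-low interval
period = t_charge + t_discharge

print(f"t_high = {t_charge*1e6:.1f} us")
print(f"t_low  = {t_discharge*1e6:.1f} us")
print(f"f      = {1/period/1e3:.2f} kHz")
```

Because the actual thresholds spread from part to part, the frequency of such an oscillator is only approximate, which is why it is normally used where exact timing is not critical.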
Schmitt triggers are electronic comparators that are widely used to enhance the immunity of circuits to noise and disturbances, and they are inherent components of various emerging designs. A Schmitt trigger is an active circuit that converts an analog input signal to a digital output signal, and it has found many applications in numerous circuits, both analog and digital; it is an electronic circuit that adds hysteresis to the input-output transition threshold with the help of positive feedback, so the gate switches at different points for positive- and negative-going signals. The description of the Schmitt trigger and its design procedure in integrated-circuit technology is given; the results have been compared in terms of power consumption and surface area, the proposed circuit passed the DRC (Design Rule Check) and LVS (Layout Versus Schematic) checks, and it gives less propagation delay than the conventional circuit. DTMOS transistors are employed in the first-stage CMOS inverters so that the effective transconductance, and with it the speed of the input stage, is increased. In the 555-based variant, the problem lies in the discharge phase (low pulse width), which takes much longer than the charging phase (high pulse width).

The CMOS 4093 is now hard to find, but an equivalent part, the MC14093, is available in surface mount. A CMOS Schmitt trigger circuit is disclosed and claimed in U.S. Pat. No. 3,984,703 to John M. Jorgensen, assigned to the assignee of the present invention. Another innovation is a CMOS Schmitt trigger circuit containing additional test circuitry that allows the trigger voltage levels to be detected at the input without needing to ramp the input voltage. The MM74C14 hex Schmitt trigger is a monolithic complementary MOS (CMOS) integrated circuit constructed with N- and P-channel enhancement transistors, and the MC14584B hex Schmitt trigger is constructed with MOS P-channel and N-channel enhancement-mode devices in a single monolithic structure; data labelled "Typ" in its datasheet is not to be used for design purposes but is intended as an indication of the IC's potential performance, and the formulas given are for the typical characteristics at 25 °C only. The difference between the positive-going threshold voltage (VT+) and the negative-going threshold voltage (VT-) is defined as the hysteresis voltage (VH); both thresholds show low variation with respect to temperature (typically 0.0005 V/°C at VCC = 10 V), and the hysteresis is guaranteed. In a multi-gate package, some sections are often left over and can be used for future enhancements.
Schmitt triggers are bistable networks that are widely used to enhance the immunity of a circuit to noise and disturbances; indeed, any comparator circuit can be converted into a Schmitt trigger by applying positive feedback. They are used extensively in digital as well as analog systems to filter out noise on a signal line and produce a clean digital signal, compensating for noisy inputs and slow rise times by providing switching limits, i.e. hysteresis, at the input. There is also a drawing convention to show that a gate is a Schmitt trigger: its logic symbol carries a small hysteresis mark. Typical application examples built around Schmitt-trigger ICs include: 1) a simple CMOS Schmitt trigger circuit, 2) a light detector circuit, 3) a darkness detector circuit, 4) a latching version of the circuit, and 5) an automatic parking light. A Schmitt trigger circuit can also be built using the op-amp uA741 IC; the inverting Schmitt trigger is shown below.

Several further CMOS designs are reported. The weak-inversion thesis mentioned above finds that the resulting voltage transfer characteristic of the ST presents a continuous output behavior even when hysteresis is present; the circuit hysteresis loop is related to the supply potential and the device threshold values, with analytical expressions given for the internal node of the circuit. The MC74VHC1GT14 is a single-gate CMOS Schmitt-trigger inverter fabricated with silicon-gate CMOS technology. The CD4093 consists of four Schmitt-trigger circuits, and such circuits are useful in noise suppression and are widely used in relaxation oscillators. By exploiting the body-bias technique on the positive-feedback transistors, the hysteresis of the Schmitt trigger can be adjusted; the Schmitt trigger is a signal-restoring circuit [10] that eliminates noise content from the input signal, but this particular circuit demands an extra transistor pair compared with conventional circuits and ends up with more power consumption. Two CMOS current-mode Schmitt triggers based on current mirrors are simulated, with the values of the bias voltages set first. A proposed control circuit is able to control the logical threshold voltage of a gate linearly and continuously over a range of the power-supply voltage. In one patent claim the feedback resistance is comprised of a p-channel MOS transistor and an n-channel MOS transistor connected in parallel, each respective gate of the MOS transistors being connected to a respective one of the power-supply terminal and the reference terminal; in the test-circuitry innovation mentioned above, the important components are combinational logic and a CMOS switch.

Key parameters of a Schmitt-trigger IC include its threshold voltages, the hysteresis voltage, and the quiescent current, i.e. the current drawn by the entire IC when no operations are performed: for high-voltage CMOS devices about 0.02 mA is typical up to 15 V, while for low-power CMOS devices about 0.5 nA at 5 V is typical.
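As a tiny worked example of the quiescent-current figures just quoted, the static power drawn by an idle IC is simply the supply voltage multiplied by the quiescent current. The pairing of each current with a particular supply voltage below is an illustrative assumption.

```python
# Static (quiescent) power = supply voltage * quiescent current.
# The currents are the figures quoted in the text; pairing them with these
# supply voltages is an illustrative assumption.
cases = {
    "low-power CMOS at 5 V":     (5.0, 0.5e-9),   # 0.5 nA
    "high-voltage CMOS at 15 V": (15.0, 0.02e-3), # 0.02 mA
}
for name, (vdd, iq) in cases.items():
    print(f"{name}: {vdd * iq * 1e6:.4g} uW")
# -> about 0.0025 uW (2.5 nW) versus 300 uW (0.3 mW)
```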
For the weak-inversion (subthreshold) analysis, the output voltage characteristic between the hysteresis limits is formed by a metastable segment. In the 555-based Schmitt trigger, both pin 4 and pin 8 of the IC are connected to the Vcc supply, and the 74LS14 mentioned earlier is part of the 74XXYY series of logic ICs.
auto_math_text
web
The open source CFD toolbox

Sources

## Classes

class acousticDampingSource: Acoustic damping source.
class actuationDiskSource: Actuation disk source.
class buoyancyEnergy: Calculates and applies the buoyancy energy source rho*(U&g) to the energy equation.
class buoyancyForce: Calculates and applies the buoyancy force rho*g to the momentum equation corresponding to the specified velocity field.
Creates an explicit pressure gradient source in such a way as to deflect the flow towards a specific direction (flowDir). Alternatively, adds an extra pressure drop in the flowDir direction using a model.
class effectivenessHeatExchangerSource: Heat exchanger source model, in which the heat exchanger is defined as an energy source using a selection of cells.
class explicitPorositySource: Explicit porosity source.
class jouleHeatingSource: Evolves an electrical potential equation.
class meanVelocityForce: Calculates and applies the force necessary to maintain the specified mean velocity.
class multiphaseStabilizedTurbulence: Applies corrections to the turbulence kinetic energy equation and turbulence viscosity field for incompressible multiphase flow cases.
Actuation disk source including radial thrust.
class rotorDiskSource: Rotor disk source.
class solidificationMeltingSource: Models the effect of solidification and melting processes, e.g. windshield defrosting. The phase change occurs at the melting temperature, Tmelt.
class tabulatedAccelerationSource: Solid-body 6-DoF acceleration source.
class viscousDissipation: Calculates and applies the viscous dissipation energy source to the energy equation.
class codedSource: Constructs an on-the-fly fvOption source.
class SemiImplicitSource<Type>: Semi-implicit source, described using an input dictionary. The injection rate coefficients are specified as pairs of Su-Sp coefficients.
class interRegionExplicitPorositySource: Inter-region explicit porosity source.
class constantHeatTransfer: Constant heat transfer model. htcConst [W/m2/K] and area/volume [1/m] must be provided.
class interRegionHeatTransferModel: Base class for inter-region heat exchange. The derived classes must provide the heat transfer coefficient (htc) used in the energy equation.
class tabulatedHeatTransfer: Tabulated heat transfer model. The heat exchange area per unit volume must be provided. The 2D table returns the heat transfer coefficient by querying the local and neighbour region velocities.
class tabulatedNTUHeatTransfer: Tabulated heat transfer model.
class variableHeatTransfer: Variable heat transfer model depending on local values. The area of contact between regions (area) must be provided; the Nu number is calculated by the model.
class multiphaseMangrovesSource
class multiphaseMangrovesTurbulenceModel

## Detailed Description

This group contains finite volume sources.
auto_math_text
web
# Flywheel

An industrial flywheel.

A flywheel is a rotating mechanical device that is used to store rotational energy. Flywheels have a significant moment of inertia and thus resist changes in rotational speed. The amount of energy stored in a flywheel is proportional to the square of its rotational speed. Energy is transferred to a flywheel by applying torque to it, thereby increasing its rotational speed, and hence its stored energy. Conversely, a flywheel releases stored energy by applying torque to a mechanical load, thereby decreasing its rotational speed.

Common uses of a flywheel include:

• Providing continuous energy when the energy source is discontinuous. For example, flywheels are used in reciprocating engines because the energy source, torque from the engine, is intermittent.
• Delivering energy at rates beyond the ability of a continuous energy source. This is achieved by collecting energy in the flywheel over time and then releasing the energy quickly, at rates that exceed the abilities of the energy source.
• Controlling the orientation of a mechanical system. In such applications, the angular momentum of a flywheel is purposely transferred to a load when energy is transferred to or from the flywheel.

Flywheels are typically made of steel and rotate on conventional bearings; these are generally limited to a revolution rate of a few thousand RPM.[1] Some modern flywheels are made of carbon fiber materials and employ magnetic bearings, enabling them to revolve at speeds up to 60,000 RPM.[2] Carbon-composite flywheel batteries have recently been manufactured and are proving to be viable in real-world tests on mainstream cars. Additionally, their disposal is more eco-friendly.[3]

## Applications

A Landini tractor with exposed flywheel.

Flywheels are often used to provide continuous energy in systems where the energy source is not continuous. In such cases, the flywheel stores energy when torque is applied by the energy source, and it releases stored energy when the energy source is not applying torque to it. For example, a flywheel is used to maintain constant angular velocity of the crankshaft in a reciprocating engine. In this case, the flywheel, which is mounted on the crankshaft, stores energy when torque is exerted on it by a firing piston, and it releases energy to its mechanical loads when no piston is exerting torque on it. Other examples of this are friction motors, which use flywheel energy to power devices such as toy cars.

Modern automobile engine flywheel.

A flywheel may also be used to supply intermittent pulses of energy at transfer rates that exceed the abilities of its energy source, or when such pulses would disrupt the energy supply (e.g., the public electric network). This is achieved by accumulating stored energy in the flywheel over a period of time, at a rate that is compatible with the energy source, and then releasing that energy at a much higher rate over a relatively short time. For example, flywheels are used in riveting machines to store energy from the motor and release it during the riveting operation.

The phenomenon of precession has to be considered when using flywheels in vehicles. A rotating flywheel responds to any torque that tends to change the direction of its axis of rotation with a resulting precession rotation. A vehicle with a vertical-axis flywheel would experience a lateral (rolling) torque when passing the top of a hill or the bottom of a valley (a roll torque in response to a pitch change).
Two counter-rotating flywheels may be needed to eliminate this effect. This effect is leveraged in reaction wheels, a type of flywheel employed in satellites in which the flywheel is used to orient the satellite's instruments without thruster rockets.

## History

The principle of the flywheel is found in the Neolithic spindle and the potter's wheel.[4]

The flywheel as a general mechanical device for equalizing the speed of rotation is, according to the American medievalist Lynn White, recorded in the De diversibus artibus (On various arts) of the German artisan Theophilus Presbyter (ca. 1070–1125), who records applying the device in several of his machines.[4][5]

In the Industrial Revolution, James Watt contributed to the development of the flywheel in the steam engine, and his contemporary James Pickard used a flywheel combined with a crank to transform reciprocating into rotary motion.

## Physics

A flywheel with variable moment of inertia, conceived by Leonardo da Vinci.

A flywheel is a spinning wheel or disc with a fixed axle so that rotation is only about one axis. Energy is stored in the rotor as kinetic energy, or more specifically, rotational energy:

• $E_k=\frac{1}{2} I \omega^2$

Where:

• $\omega$ is the angular velocity, and
• $I$ is the moment of inertia of the mass about the center of rotation. The moment of inertia is a measure of the resistance to torque applied on a spinning object (i.e. the higher the moment of inertia, the more slowly it spins up when a given torque is applied).

• The moment of inertia for a solid cylinder is $I = \frac{1}{2} mr^2$,
• for a thin-walled hollow cylinder it is $I = m r^2$,
• and for a thick-walled hollow cylinder it is $I = \frac{1}{2} m({r_\mathrm{external}}^2 + {r_\mathrm{internal}}^2)$,[6]

where m denotes mass and r denotes a radius. When calculating with SI units (mass in kilograms, radius in meters, angular velocity in radians per second), the resulting energy is in joules.

The amount of energy that can safely be stored in the rotor depends on the point at which the rotor will warp or shatter. The hoop stress on the rotor is a major consideration in the design of a flywheel energy storage system.

• $\sigma_t = \rho r^2 \omega^2$

Where:

• $\sigma_t$ is the tensile stress on the rim of the cylinder
• $\rho$ is the density of the cylinder
• $r$ is the radius of the cylinder, and
• $\omega$ is the angular velocity of the cylinder.

This formula can also be simplified using specific tensile strength and tangent velocity:

• $\frac{\sigma_t}{\rho} = v^2$

Where:

• $\frac{\sigma_t}{\rho}$ is the specific tensile strength of the material
• $v$ is the tangent velocity of the rim.

## Table of energy storage traits

| Flywheel purpose, type | Geometric shape factor (k) (unitless, varies with shape) | Mass (kg) | Diameter (cm) | Angular velocity (rpm) | Energy stored (MJ) | Energy stored (kWh) |
|---|---|---|---|---|---|---|
| Small battery | 0.5 | 100 | 60 | 20,000 | 9.8 | 2.7 |
| Regenerative braking in trains | 0.5 | 3000 | 50 | 8,000 | 33.0 | 9.1 |
| Electric power backup[7] | 0.5 | 600 | 50 | 30,000 | 92.0 | 26.0 |

### High-energy materials

For a given flywheel design, the kinetic energy is proportional to the ratio of the hoop stress to the material density and to the mass:

• $E_k \varpropto \frac{\sigma_t}{\rho}m$

$\frac{\sigma_t}{\rho}$ could be called the specific tensile strength. The flywheel material with the highest specific tensile strength will yield the highest energy storage per unit mass. This is one reason why carbon fiber is a material of interest.
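As a quick sanity check, the energy column of the table above can be reproduced from the formulas in this section. This is a minimal sketch assuming the geometric shape factor k enters the moment of inertia as I = k m r^2 (so k = 0.5 reproduces the solid-cylinder formula); the small differences from the tabulated values are rounding.

```python
# Sketch: check the "Table of energy storage traits" using E_k = 1/2 * I * omega^2
# with I = k * m * r^2 (k = 0.5 corresponds to the solid-cylinder formula above).
# The rows below are taken directly from the table; everything else is standard
# unit conversion (rpm -> rad/s, J -> kWh).
import math

rows = [
    # (name, k, mass_kg, diameter_cm, speed_rpm)
    ("Small battery",                  0.5,  100, 60, 20_000),
    ("Regenerative braking in trains", 0.5, 3000, 50,  8_000),
    ("Electric power backup",          0.5,  600, 50, 30_000),
]

for name, k, m, d_cm, rpm in rows:
    r = (d_cm / 100) / 2                 # radius in metres
    omega = rpm * 2 * math.pi / 60       # angular velocity in rad/s
    inertia = k * m * r**2               # moment of inertia, kg*m^2
    energy = 0.5 * inertia * omega**2    # stored energy, joules
    print(f"{name}: {energy/1e6:.1f} MJ = {energy/3.6e6:.1f} kWh")

# Prints roughly 9.9 MJ / 2.7 kWh, 32.9 MJ / 9.1 kWh and 92.5 MJ / 25.7 kWh,
# i.e. the table values to within rounding.
```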
For a given design the stored energy is proportional to the hoop stress and the volume:

• $E_k \varpropto \sigma_t V$

### Rimmed

A rimmed flywheel has a rim, a hub, and spokes.[12] The structure of a rimmed flywheel is complex and, consequently, it may be difficult to compute its exact moment of inertia. A rimmed flywheel can be more easily analysed by applying various simplifications. For example:

• Assume the spokes, shaft and hub have zero moments of inertia, and the flywheel's moment of inertia is from the rim alone.
• The lumped moments of inertia of spokes, hub and shaft may be estimated as a percentage of the flywheel's moment of inertia, with the remainder from the rim, so that $I_\mathrm{rim}=KI_\mathrm{flywheel}$

For example, if the moments of inertia of hub, spokes and shaft are deemed negligible, and the rim's thickness is very small compared to its mean radius ($R$), the radius of rotation of the rim is equal to its mean radius and thus:

• $I_\mathrm{rim}=M_\mathrm{rim}R^2$

## References

1. "Flywheels move from steam age technology to Formula 1"; Jon Stewart, 1 July 2012; retrieved 2012-07-03.
2. "Breakthrough in Ricardo Kinergy 'second generation' high-speed flywheel technology"; press release, 22 August 2011; retrieved 2012-07-03.
3. http://www.popularmechanics.com/technology/engineering/news/10-tech-concepts-you-need-to-know-for-2012-2
4. Lynn White, Jr., "Theophilus Redivivus", Technology and Culture, Vol. 5, No. 2 (Spring, 1964), Review, pp. 224–233 (233).
5. Lynn White, Jr., "Medieval Engineering and the Sociology of Knowledge", The Pacific Historical Review, Vol. 44, No. 1 (Feb., 1975), pp. 1–21 (6).
6. Moment of inertia tutorial, page 10; accessed 1 Dec 2011.
7. http://www.vyconenergy.com/pq/VDCtech.htm
8. "Flywheel Energy Calculator". Botlanta.org. 2004-01-07. Retrieved 2010-11-30.
9. "energy buffers". Home.hccnet.nl. Retrieved 2010-11-30.
10. "Message from the Chair | Department of Physics | University of Prince Edward Island". Upei.ca. Retrieved 2010-11-30.
11. "Density of Steel". Hypertextbook.com. 1998-01-20. Retrieved 2010-11-30.
12. Flywheel Rotor And Containment Technology Development, FY83. Livermore, Calif.: Lawrence Livermore National Laboratory, 1983. pp. 1–2.
auto_math_text